query_id (stringlengths 32-32) | query (stringlengths 6-5.38k) | positive_passages (listlengths 1-22) | negative_passages (listlengths 9-100) | subset (stringclasses, 7 values) |
---|---|---|---|---|
e0f6fd7e65776ae72bd68fa542266bef
|
A Hybrid Approach for Music Recommendation
|
[
{
"docid": "062c970a14ac0715ccf96cee464a4fec",
"text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"title": ""
}
] |
[
{
"docid": "1736f9feebc2b6568bbc617a210a0494",
"text": "Power and bandwidth requirements have become more stringent for DRAMs in recent years. This is largely because mobile devices (such as smart phones) are more intensively relying on the use of graphics. Current DDR memory I/Os operate at 5Gb/s with a power efficiency of 17.4mW/Gb/s (i.e., 17.4pJ/b)[1], and graphic DRAM I/Os operate at 7Gb/s/pin [3] with a power efficiency worse than that of DDR. High-speed serial links [5], with a better power efficiency of ∼1mW/Gb/s, would be favored for mobile memory I/O interface. However, serial links typically require long initialization time (∼1000 clock cycles), and do not meet mobile DRAM I/O requirements for fast switching between active, standby, self-refresh and power-down operation modes [4]. Also, traditional baseband-only (or BB-only) signaling tends to consume power super-linearly [4] for extended bandwidth due to the need of power hungry pre-emphasis, and equalization circuits.",
"title": ""
},
{
"docid": "8c043576bd1a73b783890cdba3a5e544",
"text": "We present a novel approach to collaborative prediction, using low-norm instead of low-rank factorizations. The approach is inspired by, and has strong connections to, large-margin linear discrimination. We show how to learn low-norm factorizations by solving a semi-definite program, and discuss generalization error bounds for them.",
"title": ""
},
{
"docid": "93043b729dc5f46860847e1ffb6a7b0c",
"text": "This experiment investigated the effects of three corrective feedback methods, using different combinations of correction, or error cues and positive feedback for learning two badminton skills with different difficulty (forehand clear - low difficulty, backhand clear - high difficulty). Outcome and self-confidence scores were used as dependent variables. The 48 participants were randomly assigned into four groups. Group A received correction cues and positive feedback. Group B received cues on errors of execution. Group C received positive feedback, correction cues and error cues. Group D was the control group. A pre, post and a retention test was conducted. A three way analysis of variance ANOVA (4 groups X 2 task difficulty X 3 measures) with repeated measures on the last factor revealed significant interactions for each depended variable. All the corrective feedback methods groups, increased their outcome scores over time for the easy skill, but only groups A and C for the difficult skill. Groups A and B had significantly better outcome scores than group C and the control group for the easy skill on the retention test. However, for the difficult skill, group C was better than groups A, B and D. The self confidence scores of groups A and C improved over time for the easy skill but not for group B and D. Again, for the difficult skill, only group C improved over time. Finally a regression analysis depicted that the improvement in performance predicted a proportion of the improvement in self confidence for both the easy and the difficult skill. It was concluded that when young athletes are taught skills of different difficulty, different type of instruction, might be more appropriate in order to improve outcome and self confidence. A more integrated approach on teaching will assist coaches or physical education teachers to be more efficient and effective. Key pointsThe type of the skill is a critical factor in determining the effectiveness of the feedback types.Different instructional methods of corrective feedback could have beneficial effects in the outcome and self-confidence of young athletesInstructions focusing on the correct cues or errors increase performance of easy skills.Positive feedback or correction cues increase self-confidence of easy skills but only the combination of error and correction cues increase self confidence and outcome scores of difficult skills.",
"title": ""
},
{
"docid": "ce6e5532c49b02988588f2ac39724558",
"text": "hlany modern computing environments involve dynamic peer groups. Distributed Simdation, mtiti-user games, conferencing and replicated servers are just a few examples. Given the openness of today’s networks, communication among group members must be secure and, at the same time, efficient. This paper studies the problem of authenticated key agreement. in dynamic peer groups with the emphasis on efficient and provably secure key authentication, key confirmation and integrity. It begins by considering 2-party authenticateed key agreement and extends the restits to Group Dfi*Hehart key agreement. In the process, some new security properties (unique to groups) are discussed.",
"title": ""
},
{
"docid": "4337f8c11a71533d38897095e5e6847a",
"text": "A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling “where to look” or visual attention, it is equally important to model “what words to listen to” or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA.1. 1 Introduction Visual Question Answering (VQA) [2, 7, 16, 17, 29] has emerged as a prominent multi-discipline research problem in both academia and industry. To correctly answer visual questions about an image, the machine needs to understand both the image and question. Recently, visual attention based models [20, 23–25] have been explored for VQA, where the attention mechanism typically produces a spatial map highlighting image regions relevant to answering the question. So far, all attention models for VQA in literature have focused on the problem of identifying “where to look” or visual attention. In this paper, we argue that the problem of identifying “which words to listen to” or question attention is equally important. Consider the questions “how many horses are in this image?” and “how many horses can you see in this image?\". They have the same meaning, essentially captured by the first three words. A machine that attends to the first three words would arguably be more robust to linguistic variations irrelevant to the meaning and answer of the question. Motivated by this observation, in addition to reasoning about visual attention, we also address the problem of question attention. Specifically, we present a novel multi-modal attention model for VQA with the following two unique features: Co-Attention: We propose a novel mechanism that jointly reasons about visual attention and question attention, which we refer to as co-attention. Unlike previous works, which only focus on visual attention, our model has a natural symmetry between the image and question, in the sense that the image representation is used to guide the question attention and the question representation(s) are used to guide image attention. Question Hierarchy: We build a hierarchical architecture that co-attends to the image and question at three levels: (a) word level, (b) phrase level and (c) question level. At the word level, we embed the words to a vector space through an embedding matrix. At the phrase level, 1-dimensional convolution neural networks are used to capture the information contained in unigrams, bigrams and trigrams. The source code can be downloaded from https://github.com/jiasenlu/HieCoAttenVQA 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. ar X iv :1 60 6. 00 06 1v 3 [ cs .C V ] 2 6 O ct 2 01 6 Ques%on:\t\r What\t\r color\t\r on\t\r the stop\t\r light\t\r is\t\r lit\t\r up\t\r \t\r ? ...\t\r ... color\t\r stop\t\r light\t\r lit co-‐a7en%on color\t\r ...\t\r stop\t\r \t\r light\t\r \t\r ... What color\t\r ... the stop light light\t\r \t\r ... 
What color What\t\r color\t\r on\t\r the\t\r stop\t\r light\t\r is\t\r lit\t\r up ...\t\r ... the\t\r stop\t\r light ...\t\r ... stop Image Answer:\t\r green Figure 1: Flowchart of our proposed hierarchical co-attention model. Given a question, we extract its word level, phrase level and question level embeddings. At each level, we apply co-attention on both the image and question. The final answer prediction is based on all the co-attended image and question features. Specifically, we convolve word representations with temporal filters of varying support, and then combine the various n-gram responses by pooling them into a single phrase level representation. At the question level, we use recurrent neural networks to encode the entire question. For each level of the question representation in this hierarchy, we construct joint question and image co-attention maps, which are then combined recursively to ultimately predict a distribution over the answers. Overall, the main contributions of our work are: • We propose a novel co-attention mechanism for VQA that jointly performs question-guided visual attention and image-guided question attention. We explore this mechanism with two strategies, parallel and alternating co-attention, which are described in Sec. 3.3; • We propose a hierarchical architecture to represent the question, and consequently construct image-question co-attention maps at 3 different levels: word level, phrase level and question level. These co-attended features are then recursively combined from word level to question level for the final answer prediction; • At the phrase level, we propose a novel convolution-pooling strategy to adaptively select the phrase sizes whose representations are passed to the question level representation; • Finally, we evaluate our proposed model on two large datasets, VQA [2] and COCO-QA [17]. We also perform ablation studies to quantify the roles of different components in our model.",
"title": ""
},
{
"docid": "2b0aadf1f4000a630d96f85880af4a03",
"text": "The visualization community has developed to date many intuitions and understandings of how to judge the quality of views in visualizing data. The computation of a visualization’s quality and usefulness ranges from measuring clutter and overlap, up to the existence and perception of specific (visual) patterns. This survey attempts to report, categorize and unify the diverse understandings and aims to establish a common vocabulary that will enable a wide audience to understand their differences and subtleties. For this purpose, we present a commonly applicable quality metric formalization that should detail and relate all constituting parts of a quality metric. We organize our corpus of reviewed research papers along the data types established in the information visualization community: multiand high-dimensional, relational, sequential, geospatial and text data. For each data type, we select the visualization subdomains in which quality metrics are an active research field and report their findings, reason on the underlying concepts, describe goals and outline the constraints and requirements. One central goal of this survey is to provide guidance on future research opportunities for the field and outline how different visualization communities could benefit from each other by applying or transferring knowledge to their respective subdomain. Additionally, we aim to motivate the visualization community to compare computed measures to the perception of humans.",
"title": ""
},
{
"docid": "002d6e5a13bc605746b4c8a6b9ecd498",
"text": "The properties of the so-called time dependent dielectric breakdown (TDDB) of silicon dioxide-based gate dielectric for microelectronics technology have been investigated and reviewed. Experimental data covering a wide range of oxide thickness, stress voltage, temperature, and for the two bias polarities were gathered using structures with a wide range of gate oxide areas, and over very long stress times. Thickness dependence of oxide breakdown was shown to be in excellent agreement with statistical models founded in the percolation theory which explain the drastic reduction of the time-to-breakdown with decreasing oxide thickness. The voltage dependence of time-to-breakdown was found to follow a power-law behavior rather than an exponential law as commonly assumed. Our investigation on the inter-relationship between voltage and temperature dependencies of oxide breakdown reveals that a strong temperature activation with non-Arrhenius behavior is consistent with the power-law voltage dependence. The power-law voltage dependence in combination with strong temperature activation provides the most important reliability relief in compensation for the strong decrease of time-to-breakdown resulting from the reduction of the oxide thickness. Using the maximum energy of injected electrons at the anode interface as breakdown variable, we have resolved the polarity gap of timeand charge-to-breakdown (TBD and QBD), confirming that the fluency and the electron energy at anode interface are the fundamental quantities controlling oxide breakdown. Combining this large database with a recently proposed cell-based analytical version of the percolation model, we extract the defect generation efficiency responsible for breakdown. Following a review of different breakdown mechanisms and models, we discuss how the release of hydrogen through the coupling between vibrational and electronic degrees of freedom can explain the power-law dependence of defect generation efficiency. On the basis of these results, a unified and global picture of oxide breakdown is constructed and the resulting model is applied to project reliability limits. In this regard, it is concluded that SiO2-based dielectrics can provide reliable gate dielectric, even to a thickness of 1 nm, and that CMOS scaling may well be viable for the 50 nm technology node. 2005 Elsevier Ltd. All rights reserved. 0026-2714/$ see front matter 2005 Elsevier Ltd. All rights reserv doi:10.1016/j.microrel.2005.04.004 * Corresponding author. Tel.: +1 802 769 1217; fax: +1 802 769 1220. E-mail address: eywu@us.ibm.com (E.Y. Wu).",
"title": ""
},
{
"docid": "f00a35dbc463b7d46bab88fcdb8df2c9",
"text": "The quantum CP model is in the confining (or unbroken) phase with a full mass gap in an infinite space, while it is in the Higgs (broken or deconfinement) phase accompanied with Nambu-Goldstone modes in a finite space such as a ring or finite interval smaller than a certain critical size. We find a new self-consistent exact solution describing a soliton in the Higgs phase of the CP model in the large-N limit on a ring. We call it a confining soliton. We show that all eigenmodes have real and positive energy and thus it is stable.",
"title": ""
},
{
"docid": "026a0651177ee631a80aaa7c63a1c32f",
"text": "This paper is an introduction to natural language interfaces to databases (Nlidbs). A brief overview of the history of Nlidbs is rst given. Some advantages and disadvantages of Nlidbs are then discussed, comparing Nlidbs to formal query languages, form-based interfaces, and graphical interfaces. An introduction to some of the linguistic problems Nlidbs have to confront follows, for the beneet of readers less familiar with computational linguistics. The discussion then moves on to Nlidb architectures, porta-bility issues, restricted natural language input systems (including menu-based Nlidbs), and Nlidbs with reasoning capabilities. Some less explored areas of Nlidb research are then presented, namely database updates, meta-knowledge questions, temporal questions, and multi-modal Nlidbs. The paper ends with reeections on the current state of the art.",
"title": ""
},
{
"docid": "2ae53bfe80e74c27ea9ed5e5efadfbe7",
"text": "The use of multiple features has been shown to be an effective strategy for visual tracking because of their complementary contributions to appearance modeling. The key problem is how to learn a fused representation from multiple features for appearance modeling. Different features extracted from the same object should share some commonalities in their representations while each feature should also have some feature-specific representation patterns which reflect its complementarity in appearance modeling. Different from existing multi-feature sparse trackers which only consider the commonalities among the sparsity patterns of multiple features, this paper proposes a novel multiple sparse representation framework for visual tracking which jointly exploits the shared and feature-specific properties of different features by decomposing multiple sparsity patterns. Moreover, we introduce a novel online multiple metric learning to efficiently and adaptively incorporate the appearance proximity constraint, which ensures that the learned commonalities of multiple features are more representative. Experimental results on tracking benchmark videos and other challenging videos demonstrate the effectiveness of the proposed tracker.",
"title": ""
},
{
"docid": "f9bc2b91d31b3aa8ccbdfbfdae363fd8",
"text": "Motor control is the study of how organisms make accurate goal-directed movements. Here we consider two problems that the motor system must solve in order to achieve such control. The first problem is that sensory feedback is noisy and delayed, which can make movements inaccurate and unstable. The second problem is that the relationship between a motor command and the movement it produces is variable, as the body and the environment can both change. A solution is to build adaptive internal models of the body and the world. The predictions of these internal models, called forward models because they transform motor commands into sensory consequences, can be used to both produce a lifetime of calibrated movements, and to improve the ability of the sensory system to estimate the state of the body and the world around it. Forward models are only useful if they produce unbiased predictions. Evidence shows that forward models remain calibrated through motor adaptation: learning driven by sensory prediction errors.",
"title": ""
},
{
"docid": "ad7862047259112ac01bfa68950cf95b",
"text": "In deep learning, depth, as well as nonlinearity, create non-convex loss surfaces. Then, does depth alone create bad local minima? In this paper, we prove that without nonlinearity, depth alone does not create bad local minima, although it induces non-convex loss surface. Using this insight, we greatly simplify a recently proposed proof to show that all of the local minima of feedforward deep linear neural networks are global minima. Our theoretical results generalize previous results with fewer assumptions, and this analysis provides a method to show similar results beyond square loss in deep linear models.",
"title": ""
},
{
"docid": "885542ef60e8c2dbcfe73d7158244f82",
"text": "Three decades of active research on the teaching of introductory programming has had limited effect on classroom practice. Although relevant research exists across several disciplines including education and cognitive science, disciplinary differences have made this material inaccessible to many computing educators. Furthermore, computer science instructors have not had access to a comprehensive survey of research in this area. This paper collects and classifies this literature, identifies important work and mediates it to computing educators and professional bodies.\n We identify research that gives well-supported advice to computing academics teaching introductory programming. Limitations and areas of incomplete coverage of existing research efforts are also identified. The analysis applies publication and research quality metrics developed by a previous ITiCSE working group [74].",
"title": ""
},
{
"docid": "b4b4af6eeb22c23475047a2f3c36cba1",
"text": "Workflow systems are gaining importance as an infrastructure for automating inter-organizational interactions, such as those in Electronic Commerce. Execution of inter-organiz-ational workflows may raise a number of security issues including those related to conflict-of-interest among competing organizations. Moreover, in such an environment, a centralized Workflow Management System is not desirable because: (i) it can be a performance bottleneck, and (ii) the systems are inherently distributed, heterogeneous and autonomous in nature. In this paper, we propose an approach to realize decentralized workflow execution, in which the workflow is divided into partitions called self-describing workflows, and handled by a light weight workflow management component, called workflow stub, located at each organizational agent. We argue that placing the task execution agents that belong to the same conflict-of-interest class in one self-describing workflow may lead to unfair, and in some cases, undesirable results, akin to being on the wrong side of the Chinese wall. We propose a Chinese wall security model for the decentralized workflow environment to resolve such problems, and a restrictive partitioning solution to enforce the proposed model.",
"title": ""
},
{
"docid": "c19f986d747f4d6a3448607f76d961ab",
"text": "We propose Stochastic Neural Architecture Search (SNAS), an economical endto-end solution to Neural Architecture Search (NAS) that trains neural operation parameters and architecture distribution parameters in same round of backpropagation, while maintaining the completeness and differentiability of the NAS pipeline. In this work, NAS is reformulated as an optimization problem on parameters of a joint distribution for the search space in a cell. To leverage the gradient information in generic differentiable loss for architecture search, a novel search gradient is proposed. We prove that this search gradient optimizes the same objective as reinforcement-learning-based NAS, but assigns credits to structural decisions more efficiently. This credit assignment is further augmented with locally decomposable reward to enforce a resource-efficient constraint. In experiments on CIFAR-10, SNAS takes fewer epochs to find a cell architecture with state-of-theart accuracy than non-differentiable evolution-based and reinforcement-learningbased NAS, which is also transferable to ImageNet. It is also shown that child networks of SNAS can maintain the validation accuracy in searching, with which attention-based NAS requires parameter retraining to compete, exhibiting potentials to stride towards efficient NAS on big datasets.",
"title": ""
},
{
"docid": "9fdf625f46c227c819cec1e4c00160b1",
"text": "Employment of ground-based positioning systems has been consistently growing over the past decades due to the growing number of applications that require location information where the conventional satellite-based systems have limitations. Such systems have been successfully adopted in the context of wireless emergency services, tactical military operations, and various other applications offering location-based services. In current and previous generation of cellular systems, i.e., 3G, 4G, and LTE, the base stations, which have known locations, have been assumed to be stationary and fixed. However, with the possibility of having mobile relays in 5G networks, there is a demand for novel algorithms that address the challenges that did not exist in the previous generations of localization systems. This paper includes a review of various fundamental techniques, current trends, and state-of-the-art systems and algorithms employed in wireless position estimation using moving receivers. Subsequently, performance criteria comparisons are given for the aforementioned techniques and systems. Moreover, a discussion addressing potential research directions when dealing with moving receivers, e.g., receiver's movement pattern for efficient and accurate localization, non-line-of-sight problem, sensor fusion, and cooperative localization, is briefly given.",
"title": ""
},
{
"docid": "f1eb96dd2109aad21ac1bccfe8dcd012",
"text": "In imitation learning, an agent learns how to behave in an environment with an unknown cost function by mimicking expert demonstrations. Existing imitation learning algorithms typically involve solving a sequence of planning or reinforcement learning problems. Such algorithms are therefore not directly applicable to large, high-dimensional environments, and their performance can significantly degrade if the planning problems are not solved to optimality. Under the apprenticeship learning formalism, we develop alternative model-free algorithms for finding a parameterized stochastic policy that performs at least as well as an expert policy on an unknown cost function, based on sample trajectories from the expert. Our approach, based on policy gradients, scales to large continuous environments with guaranteed convergence to local minima.",
"title": ""
},
{
"docid": "f57fddbff1acaf3c4c58f269b6221cf7",
"text": "PURPOSE OF REVIEW\nCry-fuss problems are among the most common clinical presentations in the first few months of life and are associated with adverse outcomes for some mothers and babies. Cry-fuss behaviour emerges out of a complex interplay of cultural, psychosocial, environmental and biologic factors, with organic disturbance implicated in only 5% of cases. A simplistic approach can have unintended consequences. This article reviews recent evidence in order to update clinical management.\n\n\nRECENT FINDINGS\nNew research is considered in the domains of organic disturbance, feed management, maternal health, sleep management, and sensorimotor integration. This transdisciplinary approach takes into account the variable neurodevelopmental needs of healthy infants, the effects of feeding management on the highly plastic neonatal brain, and the bi-directional brain-gut-enteric microbiota axis. An individually tailored, mother-centred and family-centred approach is recommended.\n\n\nSUMMARY\nThe family of the crying baby requires early intervention to assess for and manage potentially treatable problems. Cross-disciplinary collaboration is often necessary if outcomes are to be optimized.",
"title": ""
},
{
"docid": "05540e05370b632f8b8cd165ae7d1d29",
"text": "We describe FreeCam a system capable of generating live free-viewpoint video by simulating the output of a virtual camera moving through a dynamic scene. The FreeCam sensing hardware consists of a small number of static color video cameras and state-of-the-art Kinect depth sensors, and the FreeCam software uses a number of advanced GPU processing and rendering techniques to seamlessly merge the input streams, providing a pleasant user experience. A system such as FreeCam is critical for applications such as telepresence, 3D video-conferencing and interactive 3D TV. FreeCam may also be used to produce multi-view video, which is critical to drive newgeneration autostereoscopic lenticular 3D displays.",
"title": ""
}
] |
scidocsrr
|
1b0ced4964b59e5f574b300d148fbb81
|
Turboiso: towards ultrafast and robust subgraph isomorphism search in large graph databases
|
[
{
"docid": "bbe59dd74c554d92167f42701a1f8c3d",
"text": "Finding subgraph isomorphisms is an important problem in many applications which deal with data modeled as graphs. While this problem is NP-hard, in recent years, many algorithms have been proposed to solve it in a reasonable time for real datasets using different join orders, pruning rules, and auxiliary neighborhood information. However, since they have not been empirically compared one another in most research work, it is not clear whether the later work outperforms the earlier work. Another problem is that reported comparisons were often done using the original authors’ binaries which were written in different programming environments. In this paper, we address these serious problems by re-implementing five state-of-the-art subgraph isomorphism algorithms in a common code base and by comparing them using many real-world datasets and their query loads. Through our in-depth analysis of experimental results, we report surprising empirical findings.",
"title": ""
}
] |
[
{
"docid": "e87de50ea9d62225018db677e1591bd5",
"text": "The relationship between culture, language, and thought has long been one of the most important topics for those who wish to understand the nature of human cognition. This issue has been investigated for decades across a broad range of research disciplines. However, there has been scant communication across these different disciplines, a situation largely arising through differences in research interests and discrepancies in the definitions of key terms such as 'culture,' 'language,' and 'thought.' This article reviews recent trends in research on the relation between language, culture and thought to capture how cognitive psychology and cultural psychology have defined 'language' and 'culture,' and how this issue was addressed within each research discipline. We then review recent research conducted in interdisciplinary perspectives, which directly compared the roles of culture and language. Finally, we highlight the importance of considering the complex interplay between culture and language to provide a comprehensive picture of how language and culture affect thought.",
"title": ""
},
{
"docid": "71ec2c62f6371c810b35aeef4172a392",
"text": "This survey, aimed mainly at mathematicians rather than practitioners, covers recent developments in homomorphic encryption (computing on encrypted data) and program obfuscation (generating encrypted but functional programs). Current schemes for encrypted computation all use essentially the same “noisy” approach: they encrypt via a noisy encoding of the message, they decrypt using an “approximate” ring homomorphism, and in between they employ techniques to carefully control the noise as computations are performed. This noisy approach uses a delicate balance between structure and randomness: structure that allows correct computation despite the randomness of the encryption, and randomness that maintains privacy against the adversary despite the structure. While the noisy approach “works”, we need new techniques and insights, both to improve efficiency and to better understand encrypted computation conceptually. Mathematics Subject Classification (2010). Primary 68Qxx; Secondary 68P25.",
"title": ""
},
{
"docid": "885938f7aec53d020bd4948c8a0bd233",
"text": "Eighty-five samples from fifteen different legume seed lines generally available in the UK were examined by measurements of their net protein utilization by rats and by haemagglutination tests with erythrocytes from a number of different animal species. From these results the seeds were classified into four broad groups. Group a seeds from most varieties of kidney (Phaseolus vulgaris), runner (Phaseolus coccineus) and tepary (Phaseolus acutifolius) beans showed high reactivity with all cell types and were also highly toxic. Group b, which contained seeds from lima or butter beans (Phaseolus lunatus) and winged bean (Psophocarpus tetragonolobus), agglutinated only human and pronase-treated rat erythrocytes. These seeds did not support proper growth of the rats although the animals survived the 10 d experimental period. Group c consisted of seeds from lentils (Lens culinaris), peas (Pisum sativum), chick-peas (Cicer arietinum), blackeyed peas (Vigna sinensis), pigeon peas (Cajanus cajan), mung beans (Phaseolus aureus), field or broad beans (Vicia faba) and aduki beans (Phaseolus angularis). These generally had low reactivity with all cells and were non-toxic. Group d, represented by soya (Glycine max) and pinto (Phaseolus vulgaris) beans, generally had low reactivity with all cells but caused growth depression at certain dietary concentrations. This growth depression was probably mainly due to antinutritional factors other than lectins. Lectins from group a seeds showed many structural and immunological similarities. However the subunit composition of the lectin from the tepary bean samples was different from that of the other bean lectins in this or any other groups.",
"title": ""
},
{
"docid": "8d743f8c333c392038e84d44e79dae2a",
"text": "For conventional wireless networks, the main target of resource allocation (RA) is to efficiently utilize the available resources. Generally, there are no changes in the available spectrum, thus static spectrum allocation policies were adopted. However, these allocation policies lead to spectrum under-utilization. In this regard, cognitive radio networks (CRNs) have received great attention due to their potential to improve the spectrum utilization. In general, efficient spectrum management and resource allocation are essential and very crucial for CRNs. This is due to the fact that unlicensed users should attain the most benefit from accessing the licensed spectrum without causing adverse interference to the licensed ones. The cognitive users or called secondary users have to effectively capture the arising spectrum opportunities in time, frequency, and space to transmit their data. Mainly, two aspects characterize the resource allocation for CRNs: 1) primary (licensed) network protection and 2) secondary (unlicensed) network performance enhancement in terms of quality-of-service, throughput, fairness, energy efficiency, etc. CRNs can operate in one of three known operation modes: 1) interweave; 2) overlay; and 3) underlay. Among which the underlay cognitive radio mode is known to be highly efficient in terms of spectrum utilization. This is because the unlicensed users are allowed to share the same channels with the active licensed users under some conditions. In this paper, we provide a survey for resource allocation in underlay CRNs. In particular, we first define the RA process and its components for underlay CRNs. Second, we provide a taxonomy that categorizes the RA algorithms proposed in literature based on the approaches, criteria, common techniques, and network architecture. Then, the state-of-the-art resource allocation algorithms are reviewed according to the provided taxonomy. Additionally, comparisons among different proposals are provided. Finally, directions for future research are outlined.",
"title": ""
},
{
"docid": "7b7289900ac45f4ee5357084f16a4c0d",
"text": "We present a simple and accurate span-based model for semantic role labeling (SRL). Our model directly takes into account all possible argument spans and scores them for each label. At decoding time, we greedily select higher scoring labeled spans. One advantage of our model is to allow us to design and use spanlevel features, that are difficult to use in tokenbased BIO tagging approaches. Experimental results demonstrate that our ensemble model achieves the state-of-the-art results, 87.4 F1 and 87.0 F1 on the CoNLL-2005 and 2012 datasets, respectively.",
"title": ""
},
{
"docid": "d7aec74465931a52e9cda65de38b1fb7",
"text": "As the use of mobile devices becomes increasingly ubiquitous, the need for systematically testing applications (apps) that run on these devices grows more and more. However, testing mobile apps is particularly expensive and tedious, often requiring substantial manual effort. While researchers have made much progress in automated testing of mobile apps during recent years, a key problem that remains largely untracked is the classic oracle problem, i.e., to determine the correctness of test executions. This paper presents a novel approach to automatically generate test cases, that include test oracles, for mobile apps. The foundation for our approach is a comprehensive study that we conducted of real defects in mobile apps. Our key insight, from this study, is that there is a class of features that we term user-interaction features, which is implicated in a significant fraction of bugs and for which oracles can be constructed - in an application agnostic manner -- based on our common understanding of how apps behave. We present an extensible framework that supports such domain specific, yet application agnostic, test oracles, and allows generation of test sequences that leverage these oracles. Our tool embodies our approach for generating test cases that include oracles. Experimental results using 6 Android apps show the effectiveness of our tool in finding potentially serious bugs, while generating compact test suites for user-interaction features.",
"title": ""
},
{
"docid": "ca535d3041616047dd21a09dabd50651",
"text": "New three-dimensional (3D) scaffolds for bone tissue engineering have been developed throughout which bone cells grow, differentiate, and produce mineralized matrix. In this study, the percentage of cells anchoring to our polymer scaffolds as a function of initial cell seeding density was established; we then investigated bone tissue formation throughout our scaffolds as a function of initial cell seeding density and time in culture. Initial cell seeding densities ranging from 0.5 to 10 x 10(6) cells/cm(3) were seeded onto 3D scaffolds. After 1 h in culture, we determined that 25% of initial seeded cells had adhered to the scaffolds in static culture conditions. The cell-seeded scaffolds remained in culture for 3 and 6 weeks, to investigate the effect of initial cell seeding density on bone tissue formation in vitro. Further cultures using 1 x 10(6) cells/cm(3) were maintained for 1 h and 1, 2, 4, and 6 weeks to study bone tissue formation as a function of culture period. After 3 and 6 weeks in culture, scaffolds seeded with 1 x 10(6) cells/cm(3) showed similar tissue formation as those seeded with higher initial cell seeding densities. When initial cell seeding densities of 1 x 10(6) cells/cm(3) were used, osteocalcin immunolabeling indicative of osteoblast differentiation was seen throughout the scaffolds after only 2 weeks of culture. Von Kossa and tetracycline labeling, indicative of mineralization, occurred after 3 weeks. These results demonstrated that differentiated bone tissue was formed throughout 3D scaffolds after 2 weeks in culture using an optimized initial cell density, whereas mineralization of the tissue only occurred after 3 weeks. Furthermore, after 6 weeks in culture, newly formed bone tissue had replaced degrading polymer.",
"title": ""
},
{
"docid": "3f24525276e36ea087a04cb79ee25a95",
"text": "We consider the problem of estimating the geographic locations of nodes in a wireless sensor network where most sensors are without an effective self-positioning functionality. We propose LSVM-a novel solution with the following merits. First, LSVM localizes the network based on mere connectivity information (that is, hop counts only) and therefore is simple and does not require specialized ranging hardware or assisting mobile devices as in most existing techniques. Second, LSVM is based on Support Vector Machine (SVM) learning. Although SVM is a classification method, we show its applicability to the localization problem and prove that the localization error can be upper bounded by any small threshold given an appropriate training data size. Third, LSVM addresses the border and coverage-hole problems effectively. Last but not least, LSVM offers fast localization in a distributed manner with efficient use of processing and communication resources. We also propose a modified version of mass-spring optimization to further improve the location estimation in LSVM. The promising performance of LSVM is exhibited by our simulation study.",
"title": ""
},
{
"docid": "7e0329d95d2d1c46eeaf136b06fdf267",
"text": "The National Renewable Energy Laboratory has recently publicly released its second-generation advanced vehicle simulator called ADVISOR 2.0. This software program was initially developed four years ago, and after several years of in-house usage and evolution, this powerful tool is now available to the public through a new vehicle systems analysis World Wide Web page. ADVISOR has been applied to many different systems analysis problems, such as helping to develop the SAE J1711 test procedure for hybrid vehicles and helping to evaluate new technologies as part of the Partnership for a New Generation of Vehicles (PNGV) technology selection process. The model has been and will continue to be benchmarked and validated with other models and with real vehicle test data. After two months of being available on the Web, more than 100 users have downloaded ADVISOR. ADVISOR 2.0 has many new features, including an easy-to-use graphical user interface, a detailed exhaust aftertreatment thermal model, and complete browser-based documentation. Future work will include adding to the library of components available in ADVISOR, including optimization functionality, and linking with a more detailed fuel cell model.",
"title": ""
},
{
"docid": "1568a9bb47ca0ef28bccf6fdeaad87b7",
"text": "Many Android apps use SSL/TLS to transmit sensitive information securely. However, developers often provide their own implementation of the standard SSL/TLS certificate validation process. Unfortunately, many such custom implementations have subtle bugs, have built-in exceptions for self-signed certificates, or blindly assert all certificates are valid, leaving many Android apps vulnerable to SSL/TLS Man-in-the-Middle attacks. In this paper, we present SMV-HUNTER, a system for the automatic, large-scale identification of such vulnerabilities that combines both static and dynamic analysis. The static component detects when a custom validation procedure has been given, thereby identifying potentially vulnerable apps, and extracts information used to guide the dynamic analysis, which then uses user interface enumeration and automation techniques to trigger the potentially vulnerable code under an active Man-in-the-Middle attack. We have implemented SMV-HUNTER and evaluated it on 23,418 apps downloaded from the Google Play market, of which 1,453 apps were identified as being potentially vulnerable by static analysis, with an average overhead of approximately 4 seconds per app, running on 16 threads in parallel. Among these potentially vulnerable apps, 726 were confirmed vulnerable using our dynamic analysis, with an average overhead of about 44 seconds per app, running on 8 emulators in parallel.",
"title": ""
},
{
"docid": "c526e32c9c8b62877cb86bc5b097e2cf",
"text": "This paper proposes a new field of user interfaces called multi-computer direct manipulation and presents a penbased direct manipulation technique that can be used for data transfer between different computers as well as within the same computer. The proposed Pick-andDrop allows a user to pick up an object on a display and drop it on another display as if he/she were manipulating a physical object. Even though the pen itself does not have storage capabilities, a combination of Pen-ID and the pen manager on the network provides the illusion that the pen can physically pick up and move a computer object. Based on this concept, we have built several experimental applications using palm-sized, desk-top, and wall-sized pen computers. We also considered the importance of physical artifacts in designing user interfaces in a future computing environment.",
"title": ""
},
{
"docid": "d84f0baebe248608ae3c910adb39baea",
"text": "BACKGROUND\nSkin atrophy is a common manifestation of aging and is frequently accompanied by ulceration and delayed wound healing. With an increasingly aging patient population, management of skin atrophy is becoming a major challenge in the clinic, particularly in light of the fact that there are no effective therapeutic options at present.\n\n\nMETHODS AND FINDINGS\nAtrophic skin displays a decreased hyaluronate (HA) content and expression of the major cell-surface hyaluronate receptor, CD44. In an effort to develop a therapeutic strategy for skin atrophy, we addressed the effect of topical administration of defined-size HA fragments (HAF) on skin trophicity. Treatment of primary keratinocyte cultures with intermediate-size HAF (HAFi; 50,000-400,000 Da) but not with small-size HAF (HAFs; <50,000 Da) or large-size HAF (HAFl; >400,000 Da) induced wild-type (wt) but not CD44-deficient (CD44-/-) keratinocyte proliferation. Topical application of HAFi caused marked epidermal hyperplasia in wt but not in CD44-/- mice, and significant skin thickening in patients with age- or corticosteroid-related skin atrophy. The effect of HAFi on keratinocyte proliferation was abrogated by antibodies against heparin-binding epidermal growth factor (HB-EGF) and its receptor, erbB1, which form a complex with a particular isoform of CD44 (CD44v3), and by tissue inhibitor of metalloproteinase-3 (TIMP-3).\n\n\nCONCLUSIONS\nOur observations provide a novel CD44-dependent mechanism for HA oligosaccharide-induced keratinocyte proliferation and suggest that topical HAFi application may provide an attractive therapeutic option in human skin atrophy.",
"title": ""
},
{
"docid": "c881aee86484ecd82abe54ee4f70a13b",
"text": "Automatic speech recognition, translating of spoken words into text, is still a challenging task due to the high viability in speech signals. Deep learning, sometimes referred as representation learning or unsupervised feature learning, is a new area of machine learning. Deep learning is becoming a mainstream technology for speech recognition and has successfully replaced Gaussian mixtures for speech recognition and feature coding at an increasingly larger scale. The main target of this course project is to applying typical deep learning algorithms, including deep neural networks (DNN) and deep belief networks (DBN), for automatic continuous speech recognition.",
"title": ""
},
{
"docid": "b466803c9a9be5d38171ece8d207365e",
"text": "A large number of saliency models, each based on a different hypothesis, have been proposed over the past 20 years. In practice, while subscribing to one hypothesis or computational principle makes a model that performs well on some types of images, it hinders the general performance of a model on arbitrary images and large-scale data sets. One natural approach to improve overall saliency detection accuracy would then be fusing different types of models. In this paper, inspired by the success of late-fusion strategies in semantic analysis and multi-modal biometrics, we propose to fuse the state-of-the-art saliency models at the score level in a para-boosting learning fashion. First, saliency maps generated by several models are used as confidence scores. Then, these scores are fed into our para-boosting learner (i.e., support vector machine, adaptive boosting, or probability density estimator) to generate the final saliency map. In order to explore the strength of para-boosting learners, traditional transformation-based fusion strategies, such as Sum, Min, and Max, are also explored and compared in this paper. To further reduce the computation cost of fusing too many models, only a few of them are considered in the next step. Experimental results show that score-level fusion outperforms each individual model and can further reduce the performance gap between the current models and the human inter-observer model.",
"title": ""
},
{
"docid": "25b77292def9ba880fecb58a38897400",
"text": "In this paper, we present a successful operation of Gallium Nitride(GaN)-based three-phase inverter with high efficiency of 99.3% for driving motor at 900W under the carrier frequency of 6kHz. This efficiency well exceeds the value by IGBT (Insulated Gate Bipolar Transistor). This demonstrates that GaN has a great potential for power switching application competing with SiC. Fully reduced on-state resistance in a new normally-off GaN transistor called Gate Injection Transistor (GIT) greatly helps to increase the efficiency. In addition, use of the bidirectional operation of the lateral and compact GITs with synchronous gate driving, the inverter is operated free from fly-wheel diodes which have been connected in parallel with IGBTs in a conventional inverter system.",
"title": ""
},
{
"docid": "31bb5687b284844596f437774b8b11ce",
"text": "In this paper, a new algorithm for calculating the QR decomposition (QRD) of a polynomial matrix is introduced. This algorithm amounts to transforming a polynomial matrix to upper triangular form by application of a series of paraunitary matrices such as elementary delay and rotation matrices. It is shown that this algorithm can also be used to formulate the singular value decomposition (SVD) of a polynomial matrix, which essentially amounts to diagonalizing a polynomial matrix again by application of a series of paraunitary matrices. Example matrices are used to demonstrate both types of decomposition. Mathematical proofs of convergence of both decompositions are also outlined. Finally, a possible application of such decompositions in multichannel signal processing is discussed.",
"title": ""
},
{
"docid": "b20aa2222759644b4b60b5b450424c9e",
"text": "Manufacturing has faced significant changes during the last years, namely the move from a local economy towards a global and competitive economy, with markets demanding for highly customized products of high quality at lower costs, and with short life cycles. In this environment, manufacturing enterprises, to remain competitive, must respond closely to customer demands by improving their flexibility and agility, while maintaining their productivity and quality. Dynamic response to emergence is becoming a key issue in manufacturing field because traditional manufacturing control systems are built upon rigid control architectures, which cannot respond efficiently and effectively to dynamic change. In these circumstances, the current challenge is to develop manufacturing control systems that exhibit intelligence, robustness and adaptation to the environment changes and disturbances. The introduction of multi-agent systems and holonic manufacturing systems paradigms addresses these requirements, bringing the advantages of modularity, decentralization, autonomy, scalability and reusability. This paper surveys the literature in manufacturing control systems using distributed artificial intelligence techniques, namely multi-agent systems and holonic manufacturing systems principles. The paper also discusses the reasons for the weak adoption of these approaches by industry and points out the challenges and research opportunities for the future. & 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "05f83c8af8a6514706cd4c9e9aeac3d3",
"text": "Management of crops from early stage to mature harvest stage involves identification and monitoring of plant diseases, nutrient deficiency, controlled irrigation and controlled use of fertilizers and pesticides. Although the number of remote sensing solutions is increasing, the availability and ground visibility during critical growth stages of crops continue to be major concerns. eAGROBOT (a prototype) is a ground based agricultural robot that overcomes challenges existing in large and complex satellite based solutions and helpdesk form of solutions available as m-Services. It provides a small, portable and reliable platform to automatically survey farmland, detect diseases as well as spray the pesticide. In future, the farmer can obtain a consolidated view of the farm along with decision support statistics for planning purposes. The development of eAGROBOT, real time testing results obtained from cotton and groundnut plantations and future focus has been detailed in this paper.",
"title": ""
},
{
"docid": "370af2137d2768cdedddc7bbec036cff",
"text": "Environmental protection and water quality preservation is an import task for each person in the world. In this paper importance of water quality is discussed, in addition different waste water treatment processes are presented. Main objective of this paper is application of Simulink for dynamic modeling of biological treatment, especially concerning to the activated sludge processes (ASP). In connection with Simulink modeling different mathematical approach are presented and consider also during the simulation. Simulink modeling on Matlab is developed based on aerator tank model. Aerator model itself consists on movement of particles settled on bottom of the tank, by using air bubbling process. Several simulations are done for two different cases, dry weather and rain episode. Concerning to dry weather episode, equilibrium of biomass and organic matter is reached after long period (i.e. 200 days). While concerning to the rain episode there is a decrease of biomass and increase of organic matter, also it is notice a significant growth of bacteria’s. Finally this model could be improved by considering a slow increase of flow rate.",
"title": ""
}
] |
scidocsrr
|
d88f5b18c0573f44b72e4e888aa499bf
|
Trusted 5G Vehicular Networks: Blockchains and Content-Centric Networking
|
[
{
"docid": "1913c6ce69e543a3ae9a90b73c9efddd",
"text": "Cooperative Intelligent Transportation Systems, mainly represented by vehicular ad hoc networks (VANETs), are among the key components contributing to the Smart City and Smart World paradigms. Based on the continuous exchange of both periodic and event triggered messages, smart vehicles can enhance road safety, while also providing support for comfort applications. In addition to the different communication protocols, securing such communications and establishing a certain trustiness among vehicles are among the main challenges to address, since the presence of dishonest peers can lead to unwanted situations. To this end, existing security solutions are typically divided into two main categories, cryptography and trust, where trust appeared as a complement to cryptography on some specific adversary models and environments where the latter was not enough to mitigate all possible attacks. In this paper, we provide an adversary-oriented survey of the existing trust models for VANETs. We also show when trust is preferable to cryptography, and the opposite. In addition, we show how trust models are usually evaluated in VANET contexts, and finally, we point out some critical scenarios that existing trust models cannot handle, together with some possible solutions.",
"title": ""
}
] |
[
{
"docid": "6af7bb1d2a7d8d44321a5b162c9781a2",
"text": "In this paper, we propose a deep metric learning (DML) approach for robust visual tracking under the particle filter framework. Unlike most existing appearance-based visual trackers, which use hand-crafted similarity metrics, our DML tracker learns a nonlinear distance metric to classify the target object and background regions using a feed-forward neural network architecture. Since there are usually large variations in visual objects caused by varying deformations, illuminations, occlusions, motions, rotations, scales, and cluttered backgrounds, conventional linear similarity metrics cannot work well in such scenarios. To address this, our proposed DML tracker first learns a set of hierarchical nonlinear transformations in the feed-forward neural network to project both the template and particles into the same feature space where the intra-class variations of positive training pairs are minimized and the interclass variations of negative training pairs are maximized simultaneously. Then, the candidate that is most similar to the template in the learned deep network is identified as the true target. Experiments on the benchmark data set including 51 challenging videos show that our DML tracker achieves a very competitive performance with the state-of-the-art trackers.",
"title": ""
},
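As an illustration of the metric-learning idea described in the passage above (projecting the template and the candidate particles into a common feature space and ranking candidates by learned distance), a minimal sketch in Python is given below. PyTorch, the layer sizes, the contrastive margin and all variable names are assumptions made for illustration; they are not the authors' actual tracker.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetricNet(nn.Module):
    """Feed-forward network that projects flattened image patches into a shared feature space."""
    def __init__(self, in_dim=1024, hid_dim=512, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

def contrastive_loss(z1, z2, same_class, margin=1.0):
    """Pull positive pairs together, push negative pairs at least `margin` apart."""
    dist = F.pairwise_distance(z1, z2)
    pos = same_class * dist.pow(2)
    neg = (1.0 - same_class) * F.relu(margin - dist).pow(2)
    return (pos + neg).mean()

# Scoring candidates from the particle filter against the template (illustrative shapes).
model = MetricNet()
template = torch.randn(1, 1024)      # flattened template patch features (assumed)
particles = torch.randn(200, 1024)   # flattened candidate regions (assumed)
with torch.no_grad():
    d = F.pairwise_distance(model(particles), model(template).expand(200, -1))
best_candidate = int(torch.argmin(d))  # candidate most similar to the template
```

The contrastive loss pulls positive pairs together and pushes negative pairs at least a margin apart, which mirrors the intra-class/inter-class objective stated in the abstract.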
{
"docid": "a1bff389a9a95926a052ded84c625a9e",
"text": "Automatically assessing the subjective quality of a photo is a challenging area in visual computing. Previous works study the aesthetic quality assessment on a general set of photos regardless of the photo's content and mainly use features extracted from the entire image. In this work, we focus on a specific genre of photos: consumer photos with faces. This group of photos constitutes an important part of consumer photo collections. We first conduct an online study on Mechanical Turk to collect ground-truth and subjective opinions for a database of consumer photos with faces. We then extract technical features, perceptual features, and social relationship features to represent the aesthetic quality of a photo, by focusing on face-related regions. Experiments show that our features perform well for categorizing or predicting the aesthetic quality.",
"title": ""
},
{
"docid": "7e7739bfddbae8cfa628d67eb582c121",
"text": "When firms implement enterprise resource planning, they need to redesign their business processes to make information flow smooth within organizations. ERP thus results in changes in processes and responsibilities. Firms cannot realize expected returns from ERP investments unless these changes are effectively managed after ERP systems are put into operation. This research proposes a conceptual framework to highlight the importance of the change management after firms implement ERP systems. Our research model is empirically tested using data collected from over 170 firms that had used ERP systems for more than one year. Our analysis reveals that the eventual success of ERP systems depends on effective change management after ERP implementation, supporting the existence of the valley of despair.",
"title": ""
},
{
"docid": "17812cae7547ba46d7170b99f6be1efc",
"text": "Developing supernumerary limbs is a rare congenital condition that only a few cases have been documented. Depending on the cause and developmental conditions, they may be single, multiple or complicated, and occur as a syndrome or associated with other anomalies. Polymelia is defined as the presence of extra limb(s) which have been reported in human, mouse, chicken, calf and lamb. It seems that the precise mechanism regulating this type of congenital malformations is not yet clearly understood. While hereditary trait of some limb anomalies was proven in human and the responsible genetic impairments were found, this has not been confirmed in the other animals especially the birds. Regarding the different susceptibilities of various vertebrate species to the environmental and genetic factors in embryonic period, the probable cause of an embryonic defect in one species cannot be generalized to the all other species class. The present study reports a case of polymelia in an Iranian indigenous young fowl and discusses its possible causes.",
"title": ""
},
{
"docid": "7526ae3542d1e922bd73be0da7c1af72",
"text": "Cooperative coevolutionary algorithms (CCEAs) rely on multiple coevolving populations for the evolution of solutions composed of coadapted components. CCEAs enable, for instance, the evolution of cooperative multiagent systems composed of heterogeneous agents, where each agent is modelled as a component of the solution. Previous works have, however, shown that CCEAs are biased toward stability: the evolutionary process tends to converge prematurely to stable states instead of (near-)optimal solutions. In this study, we show how novelty search can be used to avoid the counterproductive attraction to stable states in coevolution. Novelty search is an evolutionary technique that drives evolution toward behavioural novelty and diversity rather than exclusively pursuing a static objective. We evaluate three novelty-based approaches that rely on, respectively (1) the novelty of the team as a whole, (2) the novelty of the agents’ individual behaviour, and (3) the combination of the two. We compare the proposed approaches with traditional fitness-driven cooperative coevolution in three simulated multirobot tasks. Our results show that team-level novelty scoring is the most effective approach, significantly outperforming fitness-driven coevolution at multiple levels. Novelty-driven cooperative coevolution can substantially increase the potential of CCEAs while maintaining a computational complexity that scales well with the number of populations.",
"title": ""
},
{
"docid": "9082dc8e8d60b05255487232fdbec189",
"text": "Energy harvesting has been widely investigated as a promising method of providing power for ultra-low-power applications. Such energy sources include solar energy, radio-frequency (RF) radiation, piezoelectricity, thermal gradients, etc. However, the power supplied by these sources is highly unreliable and dependent upon ambient environment factors. Hence, it is necessary to develop specialized systems that are tolerant to this power variation, and also capable of making forward progress on the computation tasks. The simulation platform in this paper is calibrated using measured results from a fabricated nonvolatile processor and used to explore the design space for a nonvolatile processor with different architectures, different input power sources, and policies for maximizing forward progress.",
"title": ""
},
{
"docid": "66c493b14b7ab498e67f6d29cf91733a",
"text": "A digitally controlled low-dropout voltage regulator (LDO) that can perform fast-transient and autotuned voltage is introduced in this paper. Because there are still several arguments regarding the digital implementation on the LDOs, pros and cons of the digital control are first discussed in this paper to illustrate its opportunity in the LDO applications. Following that, the architecture and configuration of the digital scheme are demonstrated. The working principles and design flows of the functional algorithms are also illustrated and then verified by the simulation before the circuit implementation. The proposed LDO was implemented by the 0.18-μm manufacturing process for the performance test. Experimental results show that the LDO's output voltage Vout can accurately perform the dynamic voltage scaling function at various Vout levels (1/2, 5/9, 2/3, and 5/6 of the input voltage VDD) from a wide VDD range (from 1.8 to 0.9 V). The transient time is within 2 μs and the voltage spikes are within 50 mV when a 1-μF output capacitor is used. Test of the autotuning algorithm shows that the proposed LDO is able to work at its optimal performance under various uncertain conditions.",
"title": ""
},
{
"docid": "4ba81ce5756f2311dde3fa438f81e527",
"text": "To prevent password breaches and guessing attacks, banks increasingly turn to two-factor authentication (2FA), requiring users to present at least one more factor, such as a one-time password generated by a hardware token or received via SMS, besides a password. We can expect some solutions – especially those adding a token – to create extra work for users, but little research has investigated usability, user acceptance, and perceived security of deployed 2FA. This paper presents an in-depth study of 2FA usability with 21 UK online banking customers, 16 of whom had accounts with more than one bank. We collected a rich set of qualitative and quantitative data through two rounds of semi-structured interviews, and an authentication diary over an average of 11 days. Our participants reported a wide range of usability issues, especially with the use of hardware tokens, showing that the mental and physical workload involved shapes how they use online banking. Key targets for improvements are (i) the reduction in the number of authentication steps, and (ii) removing features that do not add any security but negatively affect the user experience.",
"title": ""
},
{
"docid": "2f336490567c50c0b59ebae2aa1d2903",
"text": "Psychosomatic medicine, with its prevailing biopsychosocial model, aims to integrate human and exact sciences with their divergent conceptual models. Therefore, its own conceptual foundations, which often remain implicit and unknown, may be critically relevant. We defend the thesis that choosing between different metaphysical views on the 'mind-body problem' may have important implications for the conceptual foundations of psychosomatic medicine, and therefore potentially also for its methods, scientific status and relationship with the scientific disciplines it aims to integrate: biomedical sciences (including neuroscience), psychology and social sciences. To make this point, we introduce three key positions in the philosophical 'mind-body' debate (emergentism, reductionism, and supervenience physicalism) and investigate their consequences for the conceptual basis of the biopsychosocial model in general and its 'psycho-biological' part ('mental causation') in particular. Despite the clinical merits of the biopsychosocial model, we submit that it is conceptually underdeveloped or even flawed, which may hamper its use as a proper scientific model.",
"title": ""
},
{
"docid": "250f83a255cdd13bcbe0347b3092f44b",
"text": "Current state-of-the-art remote photoplethysmography (rPPG) algorithms are capable of extracting a clean pulse signal in ambient light conditions using a regular color camera, even when subjects move significantly. In this study, we investigate the feasibility of rPPG in the (near)-infrared spectrum, which broadens the scope of applications for rPPG. Two camera setups are investigated: one setup consisting of three monochrome cameras with different optical filters, and one setup consisting of a single RGB camera with a visible light blocking filter. Simulation results predict the monochrome setup to be more motion robust, but this simulation neglects parallax. To verify this, a challenging benchmark dataset consisting of 30 videos is created with various motion scenarios and skin tones. Experiments show that both camera setups are capable of accurate pulse extraction in all motion scenarios, with an average SNR of +6.45 and +7.26 dB, respectively. The single camera setup proves to be superior in scenarios involving scaling, likely due to parallax of the multicamera setup. To further improve motion robustness of the RGB camera, dedicated LED illumination with two distinct wavelengths is proposed and verified. This paper demonstrates that accurate rPPG measurements in infrared are feasible, even with severe subject motion.",
"title": ""
},
{
"docid": "914f41b9f3c0d74f888c7dd83e226468",
"text": "We present a new algorithm for inferring the home location of Twitter users at different granularities, including city, state, time zone, or geographic region, using the content of users’ tweets and their tweeting behavior. Unlike existing approaches, our algorithm uses an ensemble of statistical and heuristic classifiers to predict locations and makes use of a geographic gazetteer dictionary to identify place-name entities. We find that a hierarchical classification approach, where time zone, state, or geographic region is predicted first and city is predicted next, can improve prediction accuracy. We have also analyzed movement variations of Twitter users, built a classifier to predict whether a user was travelling in a certain period of time, and use that to further improve the location detection accuracy. Experimental evidence suggests that our algorithm works well in practice and outperforms the best existing algorithms for predicting the home location of Twitter users.",
"title": ""
},
{
"docid": "a37498a6fbaabd220bad848d440e889b",
"text": "Deep multitask learning boosts performance by sharing learned structure across related tasks. This paper adapts ideas from deep multitask learning to the setting where only a single task is available. The method is formalized as pseudo-task augmentation, in which models are trained with multiple decoders for each task. Pseudo-tasks simulate the effect of training towards closelyrelated tasks drawn from the same universe. In a suite of experiments, pseudo-task augmentation improves performance on single-task learning problems. When combined with multitask learning, further improvements are achieved, including state-of-the-art performance on the CelebA dataset, showing that pseudo-task augmentation and multitask learning have complementary value. All in all, pseudo-task augmentation is a broadly applicable and efficient way to boost performance in deep learning systems.",
"title": ""
},
{
"docid": "28352dd6b60b511ff812820f4e712cde",
"text": "Extreme multi-label classification methods have been widely used in Web-scale classification tasks such as Web page tagging and product recommendation. In this paper, we present a novel graph embedding method called \"AnnexML\". At the training step, AnnexML constructs a k-nearest neighbor graph of label vectors and attempts to reproduce the graph structure in the embedding space. The prediction is efficiently performed by using an approximate nearest neighbor search method that efficiently explores the learned k-nearest neighbor graph in the embedding space. We conducted evaluations on several large-scale real-world data sets and compared our method with recent state-of-the-art methods. Experimental results show that our AnnexML can significantly improve prediction accuracy, especially on data sets that have larger a label space. In addition, AnnexML improves the trade-off between prediction time and accuracy. At the same level of accuracy, the prediction time of AnnexML was up to 58 times faster than that of SLEEC, which is a state-of-the-art embedding-based method.",
"title": ""
},
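The k-nearest-neighbor graph construction mentioned in the AnnexML passage above can be sketched in a few lines. scikit-learn, the random label vectors and the value of k are assumptions for illustration only.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
label_vectors = rng.normal(size=(1000, 64))   # stand-in for the label vectors (assumed)

# Build the k-NN graph over label vectors; an embedding would try to reproduce this structure.
k = 10
index = NearestNeighbors(n_neighbors=k + 1).fit(label_vectors)
_, neighbors = index.kneighbors(label_vectors)
knn_graph = {i: neighbors[i, 1:].tolist() for i in range(len(label_vectors))}  # drop self-match

# At prediction time, an approximate nearest-neighbor search would walk this graph
# in the learned embedding space instead of scanning every label.
print(len(knn_graph), len(knn_graph[0]))
```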
{
"docid": "9c8204510362de8a5362400fc4d26e24",
"text": "We focus on predicting sleep stages from radio measurements without any attached sensors on subjects. We introduce a new predictive model that combines convolutional and recurrent neural networks to extract sleep-specific subjectinvariant features from RF signals and capture the temporal progression of sleep. A key innovation underlying our approach is a modified adversarial training regime that discards extraneous information specific to individuals or measurement conditions, while retaining all information relevant to the predictive task. We analyze our game theoretic setup and empirically demonstrate that our model achieves significant improvements over state-of-the-art solutions.",
"title": ""
},
{
"docid": "a9f9f918d0163e18cf6df748647ffb05",
"text": "In previous work, we have shown that using terms from around citations in citing papers to index the cited paper, in addition to the cited paper's own terms, can improve retrieval effectiveness. Now, we investigate how to select text from around the citations in order to extract good index terms. We compare the retrieval effectiveness that results from a range of contexts around the citations, including no context, the entire citing paper, some fixed windows and several variations with linguistic motivations. We conclude with an analysis of the benefits of more complex, linguistically motivated methods for extracting citation index terms, over using a fixed window of terms. We speculate that there might be some advantage to using computational linguistic techniques for this task.",
"title": ""
},
{
"docid": "04b7ad51d2464052ebd3d32baeb5b57b",
"text": "Rob Antrobus Security Lancaster Research Centre Lancaster University Lancaster LA1 4WA UK security-centre.lancs.ac.uk r.antrobus1@lancaster.ac.uk Sylvain Frey Security Lancaster Research Centre Lancaster University Lancaster LA1 4WA UK security-centre.lancs.ac.uk s.frey@lancaster.ac.uk Benjamin Green Security Lancaster Research Centre Lancaster University Lancaster LA1 4WA UK security-centre.lancs.ac.uk b.green2@lancaster.ac.uk",
"title": ""
},
{
"docid": "4d7b93ee9c6036c5915dd1166c9ae2f8",
"text": "In this paper, we present a developed NS-3 based emulation platform for evaluating and optimizing the performance of the LTE networks. The developed emulation platform is designed to provide real-time measurements. Thus it eliminates the need for the high cost spent on real equipment. The developed platform consists of three main parts, which are video server, video client(s), and NS-3 based simulation environment for LTE network. Using the developed platform, the server streams video clips to the existing clients going through the LTE simulated network. We utilize this setup to evaluate multiple cases such as mobility and handover. Moreover, we use it for evaluating multiple streaming protocols such as UDP, RTP, and Dynamic Adaptive Streaming over HTTP (DASH). Keywords-DASH, Emulation, LTE, NS-3, Real-time, RTP, UDP.",
"title": ""
},
{
"docid": "b6043969fad2b2fd195a069fcf003ca1",
"text": "In recent years, deep learning (DL), a rebranding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, and natural language processing. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, inevitably RS draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should not only be aware of advancements such as DL, but also be leading researchers in this area. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools, and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modeling physical phenomena, (iii) big data, (iv) nontraditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial, and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing the DL. © The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI. [DOI: 10.1117/1.JRS.11.042609]",
"title": ""
},
{
"docid": "e9aac361f8ca1bb8f10409859aef718d",
"text": "MapReduce has become an important distributed processing model for large-scale data-intensive applications like data mining and web indexing. Hadoop-an open-source implementation of MapReduce is widely used for short jobs requiring low response time. The current Hadoop implementation assumes that computing nodes in a cluster are homogeneous in nature. Data locality has not been taken into account for launching speculative map tasks, because it is assumed that most maps are data-local. Unfortunately, both the homogeneity and data locality assumptions are not satisfied in virtualized data centers. We show that ignoring the data-locality issue in heterogeneous environments can noticeably reduce the MapReduce performance. In this paper, we address the problem of how to place data across nodes in a way that each node has a balanced data processing load. Given a dataintensive application running on a Hadoop MapReduce cluster, our data placement scheme adaptively balances the amount of data stored in each node to achieve improved data-processing performance. Experimental results on two real data-intensive applications show that our data placement strategy can always improve the MapReduce performance by rebalancing data across nodes before performing a data-intensive application in a heterogeneous Hadoop cluster.",
"title": ""
},
{
"docid": "c5b2655f44471f007c02af03a41eec06",
"text": "Case reports of conjoined twins (\"Siamese twins\") in wild mammals are scarce. Most published reports of conjoined twins in mammals concern cases in man and domestic mammals. This article describes a case of cephalopagus conjoined twins in a leopard cat (Prionailurus bengalensis) collected on the island of Sumatra, Indonesia, in the period 1873-76. A review of known cases of conjoined twinning in wild mammals is given.",
"title": ""
}
] |
scidocsrr
|
4dff324d5a1afa45474b2b6f26fea710
|
A novel low-loss modulation strategy for high-power bi-directional buck+boost converters
|
[
{
"docid": "0169f6c2eee1710d2ccd1403116da68f",
"text": "A resonant snubber is described for voltage-source inverters, current-source inverters, and self-commutated frequency changers. The main self-turn-off devices have shunt capacitors directly across them. The lossless resonant snubber described avoids trapping energy in a converter circuit where high dynamic stresses at both turn-on and turn-off are normally encountered. This is achieved by providing a temporary parallel path through a small ordinary thyristor (or other device operating in a similar node) to take over the high-stress turn-on duty from the main gate turn-off (GTO) or power transistor, in a manner that leaves no energy trapped after switching.<<ETX>>",
"title": ""
}
] |
[
{
"docid": "023514bca28bf91e74ebcf8e473b4573",
"text": "As a result of technological advances on robotic systems, electronic sensors, and communication techniques, the production of unmanned aerial vehicle (UAV) systems has become possible. Their easy installation and flexibility led these UAV systems to be used widely in both the military and civilian applications. Note that the capability of one UAV is however limited. Nowadays, a multi-UAV system is of special interest due to the ability of its associate UAV members either to coordinate simultaneous coverage of large areas or to cooperate to achieve common goals / targets. This kind of cooperation / coordination requires reliable communication network with a proper network model to ensure the exchange of both control and data packets among UAVs. Such network models should provide all-time connectivity to avoid the dangerous failures or unintended consequences. Thus, the multi-UAV system relies on communication to operate. In this paper, current literature about multi-UAV system regarding its concepts and challenges is presented. Also, both the merits and drawbacks of the available networking architectures and models in a multi-UAV system are presented. Flying Ad Hoc Network (FANET) is moreover considered as a sophisticated type of wireless ad hoc network among UAVs, which solved the communication problems into other network models. Along with the FANET unique features, challenges and open issues are also discussed.",
"title": ""
},
{
"docid": "1595cdc0f2af969e49525dd3fab419d9",
"text": "Video object detection is challenging because objects that are easily detected in one frame may be difficult to detect in another frame within the same clip. Recently, there have been major advances for doing object detection in a single image. These methods typically contain three phases: (i) object proposal generation (ii) object classification and (iii) post-processing. We propose a modification of the post-processing phase that uses high-scoring object detections from nearby frames to boost scores of weaker detections within the same clip. We show that our method obtains superior results to state-of-the-art single image object detection techniques. Our method placed 3 in the video object detection (VID) task of the ImageNet Large Scale Visual Recognition Challenge 2015 (ILSVRC2015).",
"title": ""
},
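A rough sketch of the post-processing idea in the passage above, boosting weak detections that overlap high-scoring detections in adjacent frames, is shown below. The IoU threshold, the blending rule and the data layout are assumptions, not the authors' exact method.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def boost_scores(frames, iou_thr=0.5, alpha=0.5):
    """frames: list (over time) of lists of detections {'box': (x1, y1, x2, y2), 'score': float}."""
    boosted = [[det.copy() for det in frame] for frame in frames]
    for t, frame in enumerate(boosted):
        neighbours = []
        if t > 0:
            neighbours += frames[t - 1]
        if t + 1 < len(frames):
            neighbours += frames[t + 1]
        for det in frame:
            support = [n['score'] for n in neighbours if iou(det['box'], n['box']) >= iou_thr]
            if support:
                # Never lower a score; blend in the strongest overlapping neighbour.
                det['score'] = max(det['score'], alpha * det['score'] + (1 - alpha) * max(support))
    return boosted
```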
{
"docid": "bb65decbaecb11cf14044b2a2cbb6e74",
"text": "The ability to remain focused on goal-relevant stimuli in the presence of potentially interfering distractors is crucial for any coherent cognitive function. However, simply instructing people to ignore goal-irrelevant stimuli is not sufficient for preventing their processing. Recent research reveals that distractor processing depends critically on the level and type of load involved in the processing of goal-relevant information. Whereas high perceptual load can eliminate distractor processing, high load on \"frontal\" cognitive control processes increases distractor processing. These findings provide a resolution to the long-standing early and late selection debate within a load theory of attention that accommodates behavioural and neuroimaging data within a framework that integrates attention research with executive function.",
"title": ""
},
{
"docid": "4667b31c7ee70f7bc3709fc40ec6140f",
"text": "This article presents a method for rectifying and stabilising video from cell-phones with rolling shutter (RS) cameras. Due to size constraints, cell-phone cameras have constant, or near constant focal length, making them an ideal application for calibrated projective geometry. In contrast to previous RS rectification attempts that model distortions in the image plane, we model the 3D rotation of the camera. We parameterise the camera rotation as a continuous curve, with knots distributed across a short frame interval. Curve parameters are found using non-linear least squares over inter-frame correspondences from a KLT tracker. By smoothing a sequence of reference rotations from the estimated curve, we can at a small extra cost, obtain a high-quality image stabilisation. Using synthetic RS sequences with associated ground-truth, we demonstrate that our rectification improves over two other methods. We also compare our video stabilisation with the methods in iMovie and Deshaker.",
"title": ""
},
{
"docid": "3ac89f0f4573510942996ae66ef8184c",
"text": "Deep convolutional neural networks have been successfully applied to image classification tasks. When these same networks have been applied to image retrieval, the assumption has been made that the last layers would give the best performance, as they do in classification. We show that for instance-level image retrieval, lower layers often perform better than the last layers in convolutional neural networks. We present an approach for extracting convolutional features from different layers of the networks, and adopt VLAD encoding to encode features into a single vector for each image. We investigate the effect of different layers and scales of input images on the performance of convolutional features using the recent deep networks OxfordNet and GoogLeNet. Experiments demonstrate that intermediate layers or higher layers with finer scales produce better results for image retrieval, compared to the last layer. When using compressed 128-D VLAD descriptors, our method obtains state-of-the-art results and outperforms other VLAD and CNN based approaches on two out of three test datasets. Our work provides guidance for transferring deep networks trained on image classification to image retrieval tasks.",
"title": ""
},
{
"docid": "073ea28d4922c2d9c1ef7945ce4aa9e2",
"text": "The three major solutions for increasing the nominal performance of a CPU are: multiplying the number of cores per socket, expanding the embedded cache memories and use multi-threading to reduce the impact of the deep memory hierarchy. Systems with tens or hundreds of hardware threads, all sharing a cache coherent UMA or NUMA memory space, are today the de-facto standard. While these solutions can easily provide benefits in a multi-program environment, they require recoding of applications to leverage the available parallelism. Threads must synchronize and exchange data, and the overall performance is heavily in influenced by the overhead added by these mechanisms, especially as developers try to exploit finer grain parallelism to be able to use all available resources.",
"title": ""
},
{
"docid": "7741df913eece947fea6696fce89e139",
"text": "We survey the problem of comparing labeled trees based on simple local operations of deleting, inserting, and relabeling nodes. These operations lead to the tree edit distance, alignment distance, and inclusion problem. For each problem we review the results available and present, in detail, one or more of the central algorithms for solving the problem. keywords tree matching, edit distance",
"title": ""
},
{
"docid": "d29ca3ca682433a9ea6172622d12316c",
"text": "The phenomenon of a phantom limb is a common experience after a limb has been amputated or its sensory roots have been destroyed. A complete break of the spinal cord also often leads to a phantom body below the level of the break. Furthermore, a phantom of the breast, the penis, or of other innervated body parts is reported after surgical removal of the structure. A substantial number of children who are born without a limb feel a phantom of the missing part, suggesting that the neural network, or 'neuromatrix', that subserves body sensation has a genetically determined substrate that is modified by sensory experience.",
"title": ""
},
{
"docid": "ba6873627b976fa1a3899303b40eae3c",
"text": "Most plant seeds are dispersed in a dry, mature state. If these seeds are non-dormant and the environmental conditions are favourable, they will pass through the complex process of germination. In this review, recent progress made with state-of-the-art techniques including genome-wide gene expression analyses that provided deeper insight into the early phase of seed germination, which includes imbibition and the subsequent plateau phase of water uptake in which metabolism is reactivated, is summarized. The physiological state of a seed is determined, at least in part, by the stored mRNAs that are translated upon imbibition. Very early upon imbibition massive transcriptome changes occur, which are regulated by ambient temperature, light conditions, and plant hormones. The hormones abscisic acid and gibberellins play a major role in regulating early seed germination. The early germination phase of Arabidopsis thaliana culminates in testa rupture, which is followed by the late germination phase and endosperm rupture. An integrated view on the early phase of seed germination is provided and it is shown that it is characterized by dynamic biomechanical changes together with very early alterations in transcript, protein, and hormone levels that set the stage for the later events. Early seed germination thereby contributes to seed and seedling performance important for plant establishment in the natural and agricultural ecosystem.",
"title": ""
},
{
"docid": "4a5a5958eaf3a011a04d4afc1155e521",
"text": "1 Department of Geography, University of Kentucky, Lexington, Kentucky, United States of America, 2 Microsoft Research, New York, New York, United States of America, 3 Data & Society, New York, New York, United States of America, 4 Information Law Institute, New York University, New York, New York, United States of America, 5 Department of Media and Communications, London School of Economics, London, United Kingdom, 6 Harvard-Smithsonian Center for Astrophysics, Harvard University, Cambridge, Massachusetts, United States of America, 7 Center for Engineering Ethics and Society, National Academy of Engineering, Washington, DC, United States of America, 8 Institute for Health Aging, University of California-San Francisco, San Francisco, California, United States of America, 9 Ethical Resolve, Santa Cruz, California, United States of America, 10 Department of Computer Science, Princeton University, Princeton, New Jersey, United States of America, 11 Department of Sociology, Columbia University, New York, New York, United States of America, 12 Carey School of Law, University of Maryland, Baltimore, Maryland, United States of America",
"title": ""
},
{
"docid": "abe03f24c8e6116f8a9eba1d5dbaf867",
"text": "Executive functions consist of multiple high-level cognitive processes that drive rule generation and behavioral selection. An emergent property of these processes is the ability to adjust behavior in response to changes in one's environment (i.e., behavioral flexibility). These processes are essential to normal human behavior, and may be disrupted in diverse neuropsychiatric conditions, including schizophrenia, alcoholism, depression, stroke, and Alzheimer's disease. Understanding of the neurobiology of executive functions has been greatly advanced by the availability of animal tasks for assessing discrete components of behavioral flexibility, particularly strategy shifting and reversal learning. While several types of tasks have been developed, most are non-automated, labor intensive, and allow testing of only one animal at a time. The recent development of automated, operant-based tasks for assessing behavioral flexibility streamlines testing, standardizes stimulus presentation and data recording, and dramatically improves throughput. Here, we describe automated strategy shifting and reversal tasks, using operant chambers controlled by custom written software programs. Using these tasks, we have shown that the medial prefrontal cortex governs strategy shifting but not reversal learning in the rat, similar to the dissociation observed in humans. Moreover, animals with a neonatal hippocampal lesion, a neurodevelopmental model of schizophrenia, are selectively impaired on the strategy shifting task but not the reversal task. The strategy shifting task also allows the identification of separate types of performance errors, each of which is attributable to distinct neural substrates. The availability of these automated tasks, and the evidence supporting the dissociable contributions of separate prefrontal areas, makes them particularly well-suited assays for the investigation of basic neurobiological processes as well as drug discovery and screening in disease models.",
"title": ""
},
{
"docid": "9694bc859dd5295c40d36230cf6fd1b9",
"text": "In the past two decades, the synthetic style and fashion drug \"crystal meth\" (\"crystal\", \"meth\"), chemically representing the crystalline form of the methamphetamine hydrochloride, has become more and more popular in the United States, in Eastern Europe, and just recently in Central and Western Europe. \"Meth\" is cheap, easy to synthesize and to market, and has an extremely high potential for abuse and dependence. As a strong sympathomimetic, \"meth\" has the potency to switch off hunger, fatigue and, pain while simultaneously increasing physical and mental performance. The most relevant side effects are heart and circulatory complaints, severe psychotic attacks, personality changes, and progressive neurodegeneration. Another effect is \"meth mouth\", defined as serious tooth and oral health damage after long-standing \"meth\" abuse; this condition may become increasingly relevant in dentistry and oral- and maxillofacial surgery. There might be an association between general methamphetamine abuse and the development of osteonecrosis, similar to the medication-related osteonecrosis of the jaws (MRONJ). Several case reports concerning \"meth\" patients after tooth extractions or oral surgery have presented clinical pictures similar to MRONJ. This overview summarizes the most relevant aspect concerning \"crystal meth\" abuse and \"meth mouth\".",
"title": ""
},
{
"docid": "64f70c1214d148c43ceed537c69ad5dd",
"text": "Relation classification is an important semantic processing task in the field of natural language processing (NLP). In this paper, we present a novel model BRCNN to classify the relation of two entities in a sentence. Some state-of-the-art systems concentrate on modeling the shortest dependency path (SDP) between two entities leveraging convolutional or recurrent neural networks. We further explore how to make full use of the dependency relations information in the SDP, by combining convolutional neural networks and twochannel recurrent neural networks with long short term memory (LSTM) units. We propose a bidirectional architecture to learn relation representations with directional information along the SDP forwards and backwards at the same time, which benefits classifying the direction of relations. Experimental results show that our method outperforms the state-of-theart approaches on the SemEval-2010 Task 8 dataset.",
"title": ""
},
{
"docid": "0b3d0e777c3523fa6d1a61e7f0be0216",
"text": "This paper introduces the Cubli, a 15×15×15 cm cube that can jump up and balance on a corner. Momentum wheels mounted on three faces of the cube (Fig. 1) rotate at high angular velocities and then brake suddenly, causing the Cubli to jump up. Once the Cubli has almost reached the corner stand-up position, controlled motor torques are applied to make it balance on its corner. This paper tracks the development of the Cubli's one dimensional prototype at ETH Zurich and presents preliminary results.",
"title": ""
},
{
"docid": "70180fa9be4c8c87ce119772b2bcca23",
"text": "The energy domain currently struggles with radical legal and technological changes, such as, smart meters. This results in new use cases which can be implemented based on business process technology. Understanding and automating business processes requires to model and test them. However, existing process testing approaches frequently struggle with the testing of process resources, such as ERP systems, and negative testing. Hence, this work presents a toolchain which tackles that limitations. The approach uses an open source process engine to generate event logs and applies process mining techniques in a novel way.",
"title": ""
},
{
"docid": "817c86340f641094f5811f5f073c4c8b",
"text": "This paper presents a region-based shape controller for a swarm of robots. In this control method, the robotsmove as a group inside a desired regionwhilemaintaining aminimumdistance among themselves. Various shapes of the desired region can be formed by choosing the appropriate objective functions. The robots in the group only need to communicate with their neighbors and not the entire community. The robots do not have specific identities or roles within the group. Therefore, the proposed method does not require specific orders or positions of the robots inside the region and yet different formations can be formed for a swarm of robots. A Lyapunov-like function is presented for convergence analysis of the multi-robot systems. Simulation results illustrate the performance of the proposed controller. © 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "413131b87073a9a9b025e457a0e9323e",
"text": "In this paper, we consider an anthropomorphically-inspired hybrid model of a bipedal robot with locking knees and feet in order to develop a control law that results in human-like walking. The presence of feet results in periods of full actuation and periods of underactuation during the course of a step. Properties of each of these phases of walking are utilized in order to achieve a stable walking gait. In particular, we will show that using controlled symmetries in the fully-actuated domains coupled with “partial” controlled symmetries and local ankle control laws in the underactuated domains yields stable walking; this result is possible due to the amount of time which the biped spends in the fully-actuated domains. The paper concludes with simulation results along with a comparison of these results to human walking data.",
"title": ""
},
{
"docid": "af1ddb07f08ad6065c004edae74a3f94",
"text": "Human decisions are prone to biases, and this is no less true for decisions made within data visualizations. Bias mitigation strategies often focus on the person, by educating people about their biases, typically with little success. We focus instead on the system, presenting the first evidence that altering the design of an interactive visualization tool can mitigate a strong bias – the attraction effect. Participants viewed 2D scatterplots where choices between superior alternatives were affected by the placement of other suboptimal points. We found that highlighting the superior alternatives weakened the bias, but did not eliminate it. We then tested an interactive approach where participants completely removed locally dominated points from the view, inspired by the elimination by aspects strategy in the decision-making literature. This approach strongly decreased the bias, leading to a counterintuitive suggestion: tools that allow removing inappropriately salient or distracting data from a view may help lead users to make more rational decisions.",
"title": ""
},
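The interactive removal of locally dominated points described in the passage above amounts to keeping only the Pareto-optimal alternatives of a 2D scatterplot. A minimal sketch, assuming both plotted attributes are "higher is better" and using made-up candidate points, is given below.

```python
def pareto_front(points):
    """Keep points not dominated by another point that is at least as good on both axes
    and strictly better on one."""
    kept = []
    for p in points:
        dominated = any(
            q[0] >= p[0] and q[1] >= p[1] and (q[0] > p[0] or q[1] > p[1])
            for q in points
        )
        if not dominated:
            kept.append(p)
    return kept

candidates = [(0.9, 0.4), (0.6, 0.6), (0.5, 0.5), (0.3, 0.9)]
print(pareto_front(candidates))  # (0.5, 0.5) is dominated by (0.6, 0.6) and is filtered out
```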
{
"docid": "1592e0150e4805a1fab68e5daaed8ed7",
"text": "Knowledge management (KM) has emerged as a tool that allows the creation, use, distribution and transfer of knowledge in organizations. There are different frameworks that propose KM in the scientific literature. The majority of these frameworks are structured based on a strong theoretical background. This study describes a guide for the implementation of KM in a higher education institution (HEI) based on a framework with a clear description on the practical implementation. This framework is based on a technological infrastructure that includes enterprise architecture, business intelligence and educational data mining. Furthermore, a case study which describes the experience of the implementation in a HEI is presented. As a conclusion, the pros and cons on the use of the framework are analyzed.",
"title": ""
},
{
"docid": "c2ab069a9f3efaf212cbfb4a38ffdb8e",
"text": "Clustering is a useful technique that organizes a large quantity of unordered text documents into a small number of meaningful and coherent clusters, thereby providing a basis for intuitive and informative navigation and browsing mechanisms. Partitional clustering algorithms have been recognized to be more suitable as opposed to the hierarchical clustering schemes for processing large datasets. A wide variety of distance functions and similarity measures have been used for clustering, such as squared Euclidean distance, cosine similarity, and relative entropy. In this paper, we compare and analyze the effectiveness of these measures in partitional clustering for text document datasets. Our experiments utilize the standard Kmeans algorithm and we report results on seven text document datasets and five distance/similarity measures that have been most commonly used in text clustering.",
"title": ""
}
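The comparison of distance and similarity measures for partitional text clustering described in the last passage above can be sketched with scikit-learn. The toy corpus, the number of clusters and the use of unnormalised TF-IDF for the Euclidean variant are assumptions for illustration.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import normalize

docs = ["storage systems scale out", "neural networks for vision",
        "distributed file storage", "convolutional vision models"]   # toy corpus (assumed)

X = TfidfVectorizer(norm=None).fit_transform(docs)   # unnormalised TF-IDF vectors

# Squared-Euclidean K-means on the raw vectors.
euclidean_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# L2-normalising the rows makes Euclidean K-means behave like cosine-based (spherical) K-means.
cosine_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(normalize(X))

print(euclidean_labels, cosine_labels)
```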
] |
scidocsrr
|
ff5d3f7909618fbde1867e5cfb727fea
|
Two Birds with One Stone: Two-Factor Authentication with Security Beyond Conventional Bound
|
[
{
"docid": "f10d79d1eb6d3ec994c1ec7ec3769437",
"text": "The security of embedded devices often relies on the secrecy of proprietary cryptographic algorithms. These algorithms and their weaknesses are frequently disclosed through reverse-engineering software, but it is commonly thought to be too expensive to reconstruct designs from a hardware implementation alone. This paper challenges that belief by presenting an approach to reverse-engineering a cipher from a silicon implementation. Using this mostly automated approach, we reveal a cipher from an RFID tag that is not known to have a software or micro-code implementation. We reconstruct the cipher from the widely used Mifare Classic RFID tag by using a combination of image analysis of circuits and protocol analysis. Our analysis reveals that the security of the tag is even below the level that its 48-bit key length suggests due to a number of design flaws. Weak random numbers and a weakness in the authentication protocol allow for pre-computed rainbow tables to be used to find any key in a matter of seconds. Our approach of deducing functionality from circuit images is mostly automated, hence it is also feasible for large chips. The assumption that algorithms can be kept secret should therefore to be avoided for any type of silicon chip. Il faut qu’il n’exige pas le secret, et qu’il puisse sans inconvénient tomber entre les mains de l’ennemi. ([A cipher] must not depend on secrecy, and it must not matter if it falls into enemy hands.) August Kerckhoffs, La Cryptographie Militaire, January 1883 [13]",
"title": ""
},
{
"docid": "88d87dacef186e648fb648fdea37e4bc",
"text": "Two-factor authentication (TFA), enabled by hardware tokens and personal devices, is gaining momentum. The security of TFA schemes relies upon a human-memorable password p drawn from some implicit dictionary D and a t-bit device-generated one-time PIN z. Compared to password-only authentication, TFA reduces the probability of adversary’s online guessing attack to 1/(|D| ∗ 2) (and to 1/2 if the password p is leaked). However, known TFA schemes do not improve security in the face of offline dictionary attacks, because an adversary who compromises the service and learns a (salted) password hash can still recover the password with O(|D|) amount of effort. This password might be reused by the user at another site employing password-only authentication. We present a suite of efficient novel TFA protocols which improve upon password-only authentication by a factor of 2 with regards to both the online guessing attack and the offline dictionary attack. To argue the security of the presented protocols, we first provide a formal treatment of TFA schemes in general. The TFA protocols we present enable utilization of devices that are connected to the client over several channel types, formed using manual PIN entry, visual QR code capture, wireless communication (Bluetooth or WiFi), and combinations thereof. Utilizing these various communication settings we design, implement, and evaluate the performance of 13 different TFA mechanisms, and we analyze them with respect to security, usability (manual effort needed beyond typing a password), and deployability (need for additional hardware or software), showing consistent advantages over known TFA schemes.",
"title": ""
},
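The guessing bounds stated in the passage above can be checked with a line or two of arithmetic; the dictionary size and PIN length below are illustrative assumptions.

```python
D = 10**6   # assumed size of the implicit password dictionary |D|
t = 20      # assumed number of bits in the device-generated one-time PIN

p_online = 1 / (D * 2**t)   # guess both password and PIN in a single online attempt
p_leaked = 1 / 2**t         # password already known, PIN still has to be guessed

print(f"online guessing: {p_online:.3e}, with leaked password: {p_leaked:.3e}")
```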
{
"docid": "ad67a1bb7983f0f2cda2d34c76d6c2f4",
"text": "Keywords: Two-factor authentication Wireless sensor networks User anonymity Smart card Non-tamper resistant a b s t r a c t Anonymity is among the important properties of two-factor authentication schemes for wireless sensor networks (WSNs) to preserve user privacy. Though impressive efforts have been devoted to designing schemes with user anonymity by only using lightweight symmetric key primitives such as hash functions and block ciphers, to the best of our knowledge none has succeeded so far. In this work, we take an initial step to shed light on the rationale underlying this prominent issue. Firstly, we scrutinize two previously-thought sound schemes, namely Fan et al.'s scheme and Xue et al.'s scheme, and demonstrate the major challenges in designing a scheme with user anonymity. Secondly, using these two foremost schemes as case studies and on the basis of the work of Halevi–Krawczyk (1999) [44] and Impagliazzo–Rudich (1989) [43], we put forward a general principle: Public-key techniques are intrinsically indispensable to construct a two-factor authentication scheme that can support user anonymity. Furthermore, we discuss the practical solutions to realize user anonymity. Remarkably, our principle can be applied to two-factor schemes for universal environments besides WSNs, such as the Inter-net, global mobility networks and mobile clouds. We believe that our work contributes to a better understanding of the inherent complexity in achieving user privacy, and will establish a groundwork for developing more secure and efficient privacy-preserving two-factor authentication schemes. With the rapid development of micro-electromechani-cal systems and wireless network technologies, wireless sensor networks (WSNs) have attracted increasing attention due to its wide range of applications from battlefield surveillance to civilian applications, e.g., environmental monitoring, real-time traffic control, industrial process control and home automation. As is well known, most large-scale WSNs [1–3] follow a tiered architecture due to its superiority in increasing the network capacity and scalability, accommodating the node mobility, reducing the management complexity and prolonging the network lifetime. Thus, in this work we mainly focus on the tiered WSNs as well. In many critical applications, external users are generally interested in accessing real-time information from sensor nodes, yet if the data queries are issued by the base station, efficiency, scalability and security may not be ensured over the long communication path between the base station and the sensor nodes [4,5].",
"title": ""
}
] |
[
{
"docid": "177d78352dab39befe562d17d79315b4",
"text": "Having access to relevant patient data is crucial for clinical decision making. The data is often documented in unstructured texts and collected in the electronic health record. In this paper, we evaluate an approach to visualize information extracted from clinical documents by means of tag cloud. Tag clouds will be generated using a bag of word approach and by exploiting part of speech tags. For a real word data set comprising radiological reports, pathological reports and surgical operation reports, tag clouds are generated and a questionnaire-based study is conducted as evaluation. Feedback from the physicians shows that the tag cloud visualization is an effective and rapid approach to represent relevant parts of unstructured patient data. To handle the different medical narratives, we have summarized several possible improvements according to the user feedback and evaluation results.",
"title": ""
},
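The bag-of-words weighting behind the tag clouds described in the passage above can be sketched in plain Python; the toy report text and the stop-word list are assumptions for illustration.

```python
from collections import Counter

report = "No acute fracture. Mild degenerative changes. Degenerative disc disease noted."
stopwords = {"no", "the", "of", "and", "a", "noted"}   # assumed stop-word list

tokens = [w.strip(".,").lower() for w in report.split()]
weights = Counter(w for w in tokens if w and w not in stopwords)

# A tag cloud would scale each term's font size with its count, e.g. "degenerative" -> 2.
print(weights.most_common(5))
```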
{
"docid": "e6ca00d92f6e54ec66943499fba77005",
"text": "This paper covers aspects of governing information data on enterprise level using IBM solutions. In particular it focus on one of the key elements of governance — data lineage for EU GDPR regulations.",
"title": ""
},
{
"docid": "9b55e6dc69517848ae5e5040cd9d0d55",
"text": "In this paper, we utilize distributed word representations (i.e., word embeddings) to analyse the representation of semantics in brain activity. The brain activity data were recorded using functional magnetic resonance imaging (fMRI) when subjects were viewing words. First, we analysed the functional selectivity of different cortex areas by calculating the correlations between neural responses and several types of word representations, including skipgram word embeddings, visual semantic vectors, and primary visual features. The results demonstrated consistency with existing neuroscientific knowledge. Second, we utilized behavioural data as the semantic ground truth to measure their relevance with brain activity. A method to estimate word embeddings under the constraints of brain activity similarities is further proposed based on the semantic word embedding (SWE) model. The experimental results show that the brain activity data are significantly correlated with the behavioural data of human judgements on semantic similarity. The correlations between the estimated word embeddings and the semantic ground truth can be effectively improved after integrating the brain activity data for learning, which implies that semantic patterns in neural representations may exist that have not been fully captured by state-of-the-art word embeddings derived from text corpora.",
"title": ""
},
{
"docid": "cb381ae2d80b62e9b78f2da5ccfcd5b7",
"text": "Human adults attribute character traits to faces readily and with high consensus. In two experiments investigating the development of face-to-trait inference, adults and children ages 3 through 10 attributed trustworthiness, dominance, and competence to pairs of faces. In Experiment 1, the attributions of 3- to 4-year-olds converged with those of adults, and 5- to 6-year-olds' attributions were at adult levels of consistency. Children ages 3 and above consistently attributed the basic mean/nice evaluation not only to faces varying in trustworthiness (Experiment 1) but also to faces varying in dominance and competence (Experiment 2). This research suggests that the predisposition to judge others using scant facial information appears in adultlike forms early in childhood and does not require prolonged social experience.",
"title": ""
},
{
"docid": "20ec78dfbfe5b9709f25bd28e0e66e8d",
"text": "BACKGROUND\nElectronic medical records (EMRs) contain vast amounts of data that is of great interest to physicians, clinical researchers, and medial policy makers. As the size, complexity, and accessibility of EMRs grow, the ability to extract meaningful information from them has become an increasingly important problem to solve.\n\n\nMETHODS\nWe develop a standardized data analysis process to support cohort study with a focus on a particular disease. We use an interactive divide-and-conquer approach to classify patients into relatively uniform within each group. It is a repetitive process enabling the user to divide the data into homogeneous subsets that can be visually examined, compared, and refined. The final visualization was driven by the transformed data, and user feedback direct to the corresponding operators which completed the repetitive process. The output results are shown in a Sankey diagram-style timeline, which is a particular kind of flow diagram for showing factors' states and transitions over time.\n\n\nRESULTS\nThis paper presented a visually rich, interactive web-based application, which could enable researchers to study any cohorts over time by using EMR data. The resulting visualizations help uncover hidden information in the data, compare differences between patient groups, determine critical factors that influence a particular disease, and help direct further analyses. We introduced and demonstrated this tool by using EMRs of 14,567 Chronic Kidney Disease (CKD) patients.\n\n\nCONCLUSIONS\nWe developed a visual mining system to support exploratory data analysis of multi-dimensional categorical EMR data. By using CKD as a model of disease, it was assembled by automated correlational analysis and human-curated visual evaluation. The visualization methods such as Sankey diagram can reveal useful knowledge about the particular disease cohort and the trajectories of the disease over time.",
"title": ""
},
{
"docid": "a4b8f00bc8c37f56f85ed61cae226ef3",
"text": "Academic motivation is discussed in terms of self-efficacy, an individual's judgments of his or her capabilities to perform given actions. After presenting an overview of self-efficacy theory, I contrast self-efficacy with related constructs (perceived control, outcome expectations, perceived value of outcomes, attributions, and selfconcept) and discuss some efficacy research relevant to academic motivation. Studies of the effects of person variables (goal setting and information processing) and situation variables (models, attributional feedback, and rewards) on self-efficacy and motivation are reviewed. In conjunction with this discussion, I mention substantive issues that need to be addressed in the self-efficacy research and summarize evidence on the utility of self-efficacy for predicting motivational outcomes. Areas for future research are suggested. Article: The concept of personal expectancy has a rich history in psychological theory on human motivation (Atkinson, 1957; Rotter, 1966; Weiner, 1979). Research conducted within various theoretical traditions supports the idea that expectancy can influence behavioral instigation, direction, effort, and persistence (Bandura, 1986; Locke & Latham, 1990; Weiner, 1985). In this article, I discuss academic motivation in terms of one type of personal expectancy: self-efficacy, defined as \"People's judgments of their capabilities to organize and execute courses of action required to attain designated types of performances\" (Bandura, 1986, p. 391). Since Bandura's (1977) seminal article on selfefficacy, much research has clarified and extended the role of self-efficacy as a mechanism underlying behavioral change, maintenance, and generalization. For example, there is evidence that self-efficacy predicts such diverse outcomes as academic achievements, social skills, smoking cessation, pain tolerance, athletic performances, career choices, assertiveness, coping with feared events, recovery from heart attack, and sales performance (Bandura, 1986). After presenting an overview of self-efficacy theory and comparison of self-efficacy with related constructs, I discuss some self-efficacy research relevant to academic motivation, pointing out substantive issues that need to be addressed. I conclude with recommendations for future research. SELF-EFFICACY THEORY Antecedents and Consequences Bandura (1977) hypothesized that self-efficacy affects an individual's choice of activities, effort, and persistence. People who have a low sense of efficacy for accomplishing a task may avoid it; those who believe they are capable should participate readily. Individuals who feel efficacious are hypothesized to work harder and persist longer when they encounter difficulties than those who doubt their capabilities. Self-efficacy theory postulates that people acquire information to appraise efficacy from their performance accomplishments, vicarious (observational) experiences, forms of persuasion, and physiological indexes. An individual's own performances offer the most reliable guides for assessing efficacy. Successes raise efficacy and failure lowers it, but once a strong sense of efficacy is developed, a failure may not have much impact (Bandura, 1986). An individual also acquires capability information from knowledge of others. Similar others offer the best basis for comparison (Schunk, 1989b). Observing similar peers perform a task conveys to observers that they too are capable of accomplishing it. 
Information acquired vicariously typically has a weaker effect on self-efficacy than performance-based information; a vicarious increase in efficacy can be negated by subsequent failures. Students often receive persuasory information that they possess the capabilities to perform a task (e.g., \"You can do this\"). Positive persuasory feedback enhances self-efficacy, but this increase will be temporary if subsequent efforts turn out poorly. Students also derive efficacy information from physiological indexes (e.g., heart rate and sweating). Bodily symptoms signaling anxiety might be interpreted to indicate a lack of skills. Information acquired from these sources does not automatically influence efficacy; rather, it is cognitively appraised (Bandura, 1986). Efficacy appraisal is an inferential process in which persons weigh and combine the contributions of such personal and situational factors as their perceived ability, the difficulty of the task, amount of effort expended, amount of external assistance received, number and pattern of successes and failures, their perceived similarity to models, and persuader credibility (Schunk, 1989b). Self-efficacy is not the only influence on behavior; it is not necessarily the most important. Behavior is a function of many variables. In achievement settings some other important variables are skills, outcome expectations, and the perceived value of outcomes (Schunk, 1989b). High self-efficacy will not produce competent performances when requisite skills are lacking. Outcome expectations, or beliefs concerning the probable outcomes of actions, are important because individuals are not motivated to act in ways they believe will result in negative outcomes. Perceived value of outcomes refers to how much people desire certain outcomes relative to others. Given adequate skills, positive outcome expectations, and personally valued outcomes, self-efficacy is hypothesized to influence the choice and direction of much human behavior (Bandura, 1989b). Schunk (1989b) discussed how self-efficacy might operate during academic learning. At the start of an activity, students differ in their beliefs about their capabilities to acquire knowledge, perform skills, master the material, and so forth. Initial self-efficacy varies as a function of aptitude (e.g., abilities and attitudes) and prior experience. Such personal factors as goal setting and information processing, along with situational factors (e.g., rewards and teacher feedback), affect students while they are working. From these factors students derive cues signaling how well they are learning, which they use to assess efficacy for further learning. Motivation is enhanced when students perceive they are making progress in learning. In turn, as students work on tasks and become more skillful, they maintain a sense of self-efficacy for performing well.",
"title": ""
},
{
"docid": "dc5b7ef5aae9eb8e4741eb1b31c6d250",
"text": "Paper plays an essential role in many information ecologies in the developing world, but it can be inefficient and inflexible. We've developed an information services architecture that uses a smart phone equipped with a built-in digital camera to process augmented paper documents. The CAM document-processing framework exploits smart mobile phones' utility, usability, and growing ubiquity to link paper with modern information tools. CAM, so called because the phone's camera plays a key role in the user interface, is a three-tiered, document-based architecture for providing remote rural information services. The CAM framework comprises four components: CamForms, CamShell, CamBrowser, and CamServer.",
"title": ""
},
{
"docid": "4c406b80ad6c6ca617177a55d149f325",
"text": "REST Chart is a Petri-Net based XML modeling framework for REST API. This paper presents two important enhancements and extensions to REST Chart modeling - Hyperlink Decoration and Hierarchical REST Chart. In particular, the proposed Hyperlink Decoration decomposes resource connections from resource representation, such that hyperlinks can be defined independently of schemas. This allows a Navigation-First Design by which the important global connections of a REST API can be designed first and reused before the local resource representations are implemented and specified. Hierarchical REST Chart is a powerful mechanism to rapidly decompose and extend a REST API in several dimensions based on Hyperlink Decoration. These new mechanisms can be used to manage the complexities in large scale REST APIs that undergo frequent changes as in some large scale open source development projects. This paper shows that these new capabilities can fit nicely in the REST Chart XML with very minor syntax changes. These enhancements to REST Chart are applied successfully in designing and verifying REST APIs for software-defined-networking (SDN) and Cloud computing.",
"title": ""
},
{
"docid": "c21517df671a485888d2dde4af3306da",
"text": "While discussion about knowledge management often centers around how knowledge may best be codified into an explicit format for use in decision support or expert systems, some knowledge best serves the organization when it is kept in tacit form. We draw upon the resource-based view to identify how information technology can best be used during different types of strategic change. Specifically, we suggest that different change strategies focus on different combinations of tacit and explicit knowledge that make certain types of information technology more appropriate in some situations than in others. q 2001 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "8d3f65dbeba6c158126ae9d82c886687",
"text": "Using dealer’s quotes and transactions prices on straight industrial bonds, we investigate the determinants of credit spread changes. Variables that should in theory determine credit spread changes have rather limited explanatory power. Further, the residuals from this regression are highly cross-correlated, and principal components analysis implies they are mostly driven by a single common factor. Although we consider several macroeconomic and financial variables as candidate proxies, we cannot explain this common systematic component. Our results suggest that monthly credit spread changes are principally driven by local supply0 demand shocks that are independent of both credit-risk factors and standard proxies for liquidity. THE RELATION BETWEEN STOCK AND BOND RETURNS has been widely studied at the aggregate level ~see, e.g., Keim and Stambaugh ~1986!, Fama and French ~1989, 1993!, Campbell and Ammer ~1993!!. Recently, a few studies have investigated that relation at both the individual firm level ~see, e.g., Kwan ~1996!! and portfolio level ~see, e.g., Blume, Keim, and Patel ~1991!, Cornell and Green ~1991!!. These studies focus on corporate bond returns, or yield changes. The main conclusions of these papers are: ~1! high-grade bonds behave like Treasury bonds, and ~2! low-grade bonds are more sensitive to stock returns. The implications of these studies may be limited in many situations of interest, however. For example, hedge funds often take highly levered positions in corporate bonds while hedging away interest rate risk by shorting treasuries. As a consequence, their portfolios become extremely sensitive to changes in credit spreads rather than changes in bond yields. The distinc* Collin-Dufresne is at Carnegie Mellon University. Goldstein is at Washington University in St. Louis. Martin is at Arizona State University. A significant portion of this paper was written while Goldstein and Martin were at The Ohio State University. We thank Rui Albuquerque, Gurdip Bakshi, Greg Bauer, Dave Brown, Francesca Carrieri, Peter Christoffersen, Susan Christoffersen, Greg Duffee, Darrell Duffie, Vihang Errunza, Gifford Fong, Mike Gallmeyer, Laurent Gauthier, Rick Green, John Griffin, Jean Helwege, Kris Jacobs, Chris Jones, Andrew Karolyi, Dilip Madan, David Mauer, Erwan Morellec, Federico Nardari, N.R. Prabhala, Tony Sanders, Sergei Sarkissian, Bill Schwert, Ken Singleton, Chester Spatt, René Stulz ~the editor!, Suresh Sundaresan, Haluk Unal, Karen Wruck, and an anonymous referee for helpful comments. We thank Ahsan Aijaz, John Puleo, and Laura Tuttle for research assistance. We are also grateful to seminar participants at Arizona State University, University of Maryland, McGill University, The Ohio State University, University of Rochester, and Southern Methodist University. THE JOURNAL OF FINANCE • VOL. LVI, NO. 6 • DEC. 2001",
"title": ""
},
{
"docid": "898b5800e6ff8a599f6a4ec27310f89a",
"text": "Jenni Anttonen: Using the EMFi chair to measure the user's emotion-related heart rate responses Master's thesis, 55 pages, 2 appendix pages May 2005 The research reported here is part of a multidisciplinary collaborative project that aimed at developing embedded measurement devices using electromechanical film (EMFi) as a basic measurement technology. The present aim was to test if an unobtrusive heart rate measurement device, the EMFi chair, had the potential to detect heart rate changes associated with emotional stimulation. Six-second long visual, auditory, and audiovisual stimuli with negative, neutral, and positive emotional content were presented to 24 participants. Heart rate responses were measured with the EMFi chair and with earlobe photoplethysmography (PPG). Also, subjective ratings of the stimuli were collected. Firstly, the high correlation between the measurement results of the EMFi chair and PPG, r = 0.99, p < 0.001, indicated that the EMFi chair measured heart rate reliably. Secondly, heart rate showed a decelerating response to visual, auditory, and audiovisual emotional stimulation. The emotional stimulation caused statistically significant changes in heart rate at the 6 th second from stimulus onset so that the responses to negative stimulation were significantly lower than the responses to positive stimulation. The results were in line with previous research. The results show that heart rate responses measured with the EMFi chair differed significantly for positive and negative emotional stimulation. These results suggest that the EMFi chair could be used in HCI to measure the user's emotional responses unobtrusively.",
"title": ""
},
{
"docid": "18d28769691fb87a6ebad5aae3eae078",
"text": "The current head Injury Assessment Reference Values (IARVs) for the child dummies are based in part on scaling adult and animal data and on reconstructions of real world accident scenarios. Reconstruction of well-documented accident scenarios provides critical data in the evaluation of proposed IARV values, but relatively few accidents are sufficiently documented to allow for accurate reconstructions. This reconstruction of a well documented fatal-fall involving a 23-month old child supplies additional data for IARV assessment. The videotaped fatal-fall resulted in a frontal head impact onto a carpet-covered cement floor. The child suffered an acute right temporal parietal subdural hematoma without skull fracture. The fall dynamics were reconstructed in the laboratory and the head linear and angular accelerations were quantified using the CRABI-18 Anthropomorphic Test Device (ATD). Peak linear acceleration was 125 ± 7 g (range 114-139), HIC15 was 335 ± 115 (Range 257-616), peak angular velocity was 57± 16 (Range 26-74), and peak angular acceleration was 32 ± 12 krad/s 2 (Range 15-56). The results of the CRABI-18 fatal fall reconstruction were consistent with the linear and rotational tolerances reported in the literature. This study investigates the usefulness of the CRABI-18 anthropomorphic testing device in forensic investigations of child head injury and aids in the evaluation of proposed IARVs for head injury. INTRODUCTION Defining the mechanisms of injury and the associated tolerance of the pediatric head to trauma has been the focus of a great deal of research and effort. In contrast to the multiple cadaver experimental studies of adult head trauma published in the literature, there exist only a few experimental studies of infant head injury using human pediatric cadaveric tissue [1-6]. While these few studies have been very informative, due to limitations in sample size, experimental equipment, and study objectives, current estimates of the tolerance of the pediatric head are based on relatively few pediatric cadaver data points combined with the use of scaled adult and animal data. In effort to assess and refine these tolerance estimates, a number of researchers have performed detailed accident reconstructions of well-documented injury scenarios [7-11] . The reliability of the reconstruction data are predicated on the ability to accurately reconstruct the actual accident and quantify the result in a useful injury metric(s). These resulting injury metrics can then be related to the injuries of the child and this, when combined with other reliable reconstructions, can form an important component in evaluating pediatric injury mechanisms and tolerance. Due to limitations in case identification, data collection, and resources, relatively few reconstructions of pediatric accidents have been performed. In this study, we report the results of the reconstruction of an uncharacteristically well documented fall resulting in a fatal head injury of a 23 month old child. The case study was previously reported as case #5 by Plunkett [12]. BACKGROUND As reported by Plunkett (2001), A 23-month-old was playing on a plastic gym set in the garage at her home with her older brother. She had climbed the attached ladder to the top rail above the platform and was straddling the rail, with her feet 0.70 meters (28 inches) above the floor. She lost her balance and fell headfirst onto a 1-cm (3⁄8-inch) thick piece of plush carpet remnant covering the concrete floor. 
She struck the carpet first with her outstretched hands, then with the right front side of her forehead, followed by her right shoulder. Her grandmother had been watching the children play and videotaped the fall. She cried after the fall but was alert",
"title": ""
},
{
"docid": "309a8f69647fae26a39305cdf0115ad0",
"text": "Three-dimensional synthetic aperture radar (SAR) image formation provides the scene reflectivity estimation along azimuth, range, and elevation coordinates. It is based on multipass SAR data obtained usually by nonuniformly spaced acquisition orbits. A common 3-D SAR focusing approach is Fourier-based SAR tomography, but this technique brings about image quality problems because of the low number of acquisitions and their not regular spacing. Moreover, attained resolution in elevation is limited by the overall acquisitions baseline extent. In this paper, a novel 3-D SAR data imaging based on Compressive Sampling theory is presented. It is shown that since the image to be focused has usually a sparse representation along the elevation direction (i.e., only few scatterers with different elevation are present in the same range-azimuth resolution cell), it suffices to have a small number of measurements to construct the 3-D image. Furthermore, the method allows super-resolution imaging, overcoming the limitation imposed by the overall baseline span. Tomographic imaging is performed by solving an optimization problem which enforces sparsity through ℓ1-norm minimization. Numerical results on simulated and real data validate the method and have been compared with the truncated singular value decomposition technique.",
"title": ""
},
{
"docid": "8b09f1c9e5b20e2bc9c7c82c2cb39cd5",
"text": "Commercial Light-Field cameras provide spatial and angular information, but its limited resolution becomes an important problem in practical use. In this paper, we present a novel method for Light-Field image super-resolution (SR) via a deep convolutional neural network. Rather than the conventional optimization framework, we adopt a datadriven learning method to simultaneously up-sample the angular resolution as well as the spatial resolution of a Light-Field image. We first augment the spatial resolution of each sub-aperture image to enhance details by a spatial SR network. Then, novel views between the sub-aperture images are generated by an angular super-resolution network. These networks are trained independently but finally finetuned via end-to-end training. The proposed method shows the state-of-the-art performance on HCI synthetic dataset, and is further evaluated by challenging real-world applications including refocusing and depth map estimation.",
"title": ""
},
{
"docid": "2a7ec9800923036d3ccbd35ff0e5b53a",
"text": "In this paper, approximate SRAMs are explored in the context of error-tolerant applications, in which energy is saved at the cost of the occurrence of read/write errors (i.e., signal quality degradation). This analysis investigates variation-resilient techniques that enable dynamic management of the energy-quality tradeoff down to the bit level. In these techniques, the different impacts of errors on quality at different bit positions are explicitly considered as key enabler of energy savings that are far larger than a simple voltage scaling. The analysis is based on the experimental results in an energy-quality scalable 28-nm SRAM and the extrapolation to a wide range of conditions through the models that combine the individual energy contributions. Results show that the joint adoption of multiple bit-level techniques provides substantially larger energy gains than individual techniques. Compared with the simple voltage scaling at isoquality, the joint adoption of these techniques can provide more than $2\\times $ energy reduction at negligible area penalty. Energy savings turn out to be highly sensitive to the choice of joint techniques, thus showing the crucial importance of dynamic energy-quality management in approximate SRAMs.",
"title": ""
},
{
"docid": "785702d7102fbc3b9089d0daaa0ad814",
"text": "Recent advances in neural variational inference have spawned a renaissance in deep latent variable models. In this paper we introduce a generic variational inference framework for generative and conditional models of text. While traditional variational methods derive an analytic approximation for the intractable distributions over latent variables, here we construct an inference network conditioned on the discrete text input to provide the variational distribution. We validate this framework on two very different text modelling applications, generative document modelling and supervised question answering. Our neural variational document model combines a continuous stochastic document representation with a bagof-words generative model and achieves the lowest reported perplexities on two standard test corpora. The neural answer selection model employs a stochastic representation layer within an attention mechanism to extract the semantics between a question and answer pair. On two question answering benchmarks this model exceeds all previous published benchmarks.",
"title": ""
},
{
"docid": "cc7ed465017e9fcd1785f1f4b2e51a70",
"text": "We explore using the Outer Ear Interface (OEI) to recognize eating activities. OEI contains a 3D gyroscope and a set of proximity sensors encapsulated in an off-the-shelf earpiece to monitor jaw movement by measuring ear canal deformation. In a laboratory setting with 20 participants, OEI could distinguish eating from other activities, such as walking, talking, and silently reading, with over 90% accuracy (user independent). In a second study, six subjects wore the system for 6 hours each while performing their normal daily activities. OEI correctly classified five minute segments of time as eating or non-eating with 93% accuracy (user dependent).",
"title": ""
},
{
"docid": "739db4358ac89d375da0ed005f4699ad",
"text": "All doctors have encountered patients whose symptoms they cannot explain. These individuals frequently provoke despair and disillusionment. Many doctors make a link between inexplicable physical symptoms and assumed psychiatric ill ness. An array of adjectives in medicine apply to symptoms without established organic basis – ‘supratentorial’, ‘psychosomatic’, ‘functional’ – and these are sometimes used without reference to their real meaning. In psychiatry, such symptoms fall under the umbrella of the somatoform disorders, which includes a broad range of diagnoses. Conversion disorder is just one of these. Its meaning is not always well understood and it is often confused with somatisation disorder.† Our aim here is to clarify the notion of a conversion disorder (and the differences between conversion and other somatoform disorders) and to discuss prevalence, aetiology, management and prognosis.",
"title": ""
},
{
"docid": "b9c40aa4c8ac9d4b6cbfb2411c542998",
"text": "This review will summarize molecular and genetic analyses aimed at identifying the mechanisms underlying the sequence of events during plant zygotic embryogenesis. These events are being studied in parallel with the histological and morphological analyses of somatic embryogenesis. The strength and limitations of somatic embryogenesis as a model system will be discussed briefly. The formation of the zygotic embryo has been described in some detail, but the molecular mechanisms controlling the differentiation of the various cell types are not understood. In recent years plant molecular and genetic studies have led to the identification and characterization of genes controlling the establishment of polarity, tissue differentiation and elaboration of patterns during embryo development. An investigation of the developmental basis of a number of mutant phenotypes has enabled the identification of gene activities promoting (1) asymmetric cell division and polarization leading to heterogeneous partitioning of the cytoplasmic determinants necessary for the initiation of embryogenesis (e.g. GNOM), (2) the determination of the apical-basal organization which is established independently of the differentiation of the tissues of the radial pattern elements (e.g. KNOLLE, FACKEL, ZWILLE), (3) the differentiation of meristems (e.g. SHOOT-MERISTEMLESS), and (4) the formation of a mature embryo characterized by the accumulation of LEA and storage proteins. The accumulation of these two types of proteins is controlled by ABA-dependent regulatory mechanisms as shown using both ABA-deficient and ABA-insensitive mutants (e.g. ABA, ABI3). Both types of embryogenesis have been studied by different techniques and common features have been identified between them. In spite of the relative difficulty of identifying the original cells involved in the developmental processes of somatic embryogenesis, common regulatory mechanisms are probably involved in the first stages up to the globular form. Signal molecules, such as growth regulators, have been shown to play a role during development of both types of embryos. The most promising method for identifying regulatory mechanisms responsible for the key events of embryogenesis will come from molecular and genetic analyses. The mutations already identified will shed light on the nature of the genes that affect developmental processes as well as elucidating the role of the various regulatory genes that control plant embryogenesis.",
"title": ""
},
{
"docid": "6059ad37cced50133792086a5c95f050",
"text": "The paper discusses and evaluates the effects of an information security awareness programme. The programme emphasised employee participation, dialogue and collective reflection in groups. The intervention consisted of small-sized workshops aimed at improving information security awareness and behaviour. An experimental research design consisting of one survey before and two after the intervention was used to evaluate whether the intended changes occurred. Statistical analyses revealed that the intervention was powerful enough to significantly change a broad range of awareness and behaviour indicators among the intervention participants. In the control group, awareness and behaviour remained by and large unchanged during the period of the study. Unlike the approach taken by the intervention studied in this paper, mainstream information security awareness measures are typically top-down, and seek to bring about changes at the individual level by means of an expert-based approach directed at a large population, e.g. through formal presentations, e-mail messages, leaflets and posters. This study demonstrates that local employee participation, collective reflection and group processes produce changes in short-term information security awareness and behaviour. a 2009 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
93bf5d3133f6fafbf53ff522208ac54e
|
A Low-Power Broad-Bandwidth Noise Cancellation VLSI Circuit Design for In-Ear Headphones
|
[
{
"docid": "d0a086d03ffeaebede1dd3779de9c449",
"text": "In this paper, we present design and real-time implementation of a single-channel adaptive feedback active noise control (AFANC) headset for audio and communication applications. Several important design and implementation considerations, such as the ideal position of error microphone, training signal used, selection of adaptive algorithms and structures will be addressed in this paper. Real-time measurements and comparisons are also carried out with the latest commercial headset to evaluate its performance. In addition, several new extensions to the AFANC headset are described and evaluated.",
"title": ""
}
] |
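The AFANC headset described in the passage above is built around adaptive filtering. As a rough, hypothetical illustration of that idea (a plain feedforward LMS canceller in Python with a made-up acoustic path — not the feedback VLSI design and not code from the paper), a sketch might look like this:

```python
import numpy as np

# Toy feedforward LMS noise canceller (illustrative only): the adaptive
# filter learns to reproduce the disturbance reaching the error point,
# so the residual error shrinks over time. The "primary path" below is
# an arbitrary assumption, not taken from the paper.
rng = np.random.default_rng(1)
n, taps, mu = 5000, 32, 0.01

reference = rng.normal(size=n)                 # noise seen by the reference input
primary_path = np.hanning(16)                  # assumed unknown acoustic path
disturbance = np.convolve(reference, primary_path)[:n]

w = np.zeros(taps)                             # adaptive filter coefficients
buf = np.zeros(taps)                           # most recent reference samples
error = np.zeros(n)

for k in range(n):
    buf = np.roll(buf, 1)
    buf[0] = reference[k]
    anti_noise = w @ buf                       # anti-noise estimate
    error[k] = disturbance[k] - anti_noise     # residual at the error microphone
    w += mu * error[k] * buf                   # LMS coefficient update

print("residual/disturbance power ratio:",
      float(np.mean(error[-500:] ** 2) / np.mean(disturbance[-500:] ** 2)))
```

A real feedback ANC design would additionally model the secondary path (FxLMS-style) and run at audio rate in fixed-point hardware; none of that is reflected in this sketch.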
[
{
"docid": "c8a2ba8f47266d0a63281a5abb5fa47f",
"text": "Hair plays an important role in human appearance. However, hair segmentation is still a challenging problem partially due to the lack of an effective model to handle its arbitrary shape variations. In this paper, we present a part-based model robust to hair shape and environment variations. The key idea of our method is to identify local parts by promoting the effectiveness of the part-based model. To this end, we propose a measurable statistic, called Subspace Clustering Dependency (SC-Dependency), to estimate the co-occurrence probabilities between local shapes. SC-Dependency guarantees output reasonability and allows us to evaluate the effectiveness of part-wise constraints in an information-theoretic way. Then we formulate the part identification problem as an MRF that aims to optimize the effectiveness of the potential functions. Experiments are performed on a set of consumer images and show our algorithm's capability and robustness to handle hair shape variations and extreme environment conditions.",
"title": ""
},
{
"docid": "e1d1ccf5d257340aa87b6f4f246565fa",
"text": "Genes vs environment The rising prevalence of childhood obesity is largely driven by recent changes in diet and levels of physical activity; however, there is strong evidence to suggest that like height, weight is a highly heritable trait (40–70% heritability). It is very likely that the ability to store fat in times of nutritional abundance was a positive trait selected over thousands of years of evolution only to emerge recently on a large scale as a result of changes in our environment. There is increasing recognition that studies aimed at identifying these polygenic or oligogenic influences on weight gain in childhood are needed and a number of loci have been identified in genome-wide scans in different populations, although as yet few have been replicated. As well as a detectable shift in the mean BMI of children and adults in most populations, we are seeing a greater proportion of patients of all ages with severe obesity. It is clear that these individuals have a certain genetic propensity to store excessive caloric intake as fat and it is important to have a practical approach to the investigation and management of these vulnerable patients who have considerably increased morbidity and mortality. Although there is no accepted definition for severe or morbid obesity in childhood, a BMI s.d. 42.5 (weight off the chart) is often used in Specialist Centres and the crossing of major growth percentile lines upward is an early indication of risk of severe obesity.",
"title": ""
},
{
"docid": "a09cb533a0a90a056857d597213efdf2",
"text": "一 引言 图像的边缘是图像的重要的特征,它给出了图像场景中物体的轮廓特征信息。当要对图 像中的某一个物体进行识别时,边缘信息是重要的可以利用的信息,例如在很多系统中采用 的模板匹配的识别算法。基于此,我们设计了一套基于 PCI Bus和 Vision Bus的可重构的机 器人视觉系统[3]。此系统能够实时的对图像进行采集,并可以通过系统实时的对图像进行 边缘的提取。 对于图像的边缘提取,采用二阶的边缘检测算子处理后要进行过零点检测,计算量很大 而且用硬件实现资源占用大且速度慢,所以在我们的视觉系统中,卷积器中选择的是一阶的 边缘检测算子。采用一阶的边缘检测算子进行卷积运算之后,仅仅需要对卷积得到的图像进 行阈值处理就可以得到图像的边缘,而阈值处理的操作用硬件实现占用资源少且速度快。由 于本视觉系统要求与应用环境下的精密装配机器人配合使用,系统的实时性要求非常高。因 此,如何对实时采集图像进行快速实时的边缘提取阈值的自动选取,是我们必须要考虑的问 题。 遗传算法是一种仿生物系统的基因进化的迭代搜索算法,其基本思想是由美国Michigan 大学的 J.Holland 教授提出的。由于遗传算法的整体寻优策略以及优化计算时不依赖梯度信 息,所以它具有很强的全局搜索能力,即对于解空间中的全局最优解有着很强的逼近能力。 它适用于问题结构不是十分清楚,总体很大,环境复杂的场合,而对于实时采集的图像进行 边缘检测阈值的选取就是此类问题。本文在对传统的遗传算法进行改进的基础上,提出了一 种对于实时采集图像进行边缘检测的阈值的自动选取方法。",
"title": ""
},
{
"docid": "72e7e5ab98cb660921c3479c5682dc10",
"text": "In this paper we adopt general sum stochas tic games as a framework for multiagent re inforcement learning Our work extends pre vious work by Littman on zero sum stochas tic games to a broader framework We de sign a multiagent Q learning method under this framework and prove that it converges to a Nash equilibrium under speci ed condi tions This algorithm is useful for nding the optimal strategy when there exists a unique Nash equilibrium in the game When there exist multiple Nash equilibria in the game this algorithm should be combined with other learning techniques to nd optimal strategies",
"title": ""
},
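As a loose illustration of the multiagent Q-learning scheme sketched in the preceding passage, the toy Python fragment below keeps one joint-action Q-table per agent and bootstraps each update from a stage-game value. The maximin (security) value is used here purely as a simple stand-in for the Nash-equilibrium value, and the environment, sizes, and rewards are all hypothetical:

```python
import numpy as np

# Toy two-agent, general-sum Q-learning loop (illustrative only).
# A faithful Nash-Q implementation would solve the bimatrix stage game
# at the next state instead of using the maximin stand-in below.
n_states, n_actions = 5, 3
alpha, gamma = 0.1, 0.95
rng = np.random.default_rng(0)

# One Q-table per agent over (state, own action, opponent action).
Q = [np.zeros((n_states, n_actions, n_actions)) for _ in range(2)]

def stage_value(q):
    return q.min(axis=1).max()            # maximin stand-in for the Nash value

def env_step(state, a0, a1):
    rewards = rng.normal(size=2)          # hypothetical general-sum rewards
    return rewards, int(rng.integers(n_states))

state = 0
for _ in range(1000):
    a0, a1 = int(rng.integers(n_actions)), int(rng.integers(n_actions))  # exploration
    rewards, next_state = env_step(state, a0, a1)
    for i, (own, other) in enumerate([(a0, a1), (a1, a0)]):
        target = rewards[i] + gamma * stage_value(Q[i][next_state])
        Q[i][state, own, other] += alpha * (target - Q[i][state, own, other])
    state = next_state
```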
{
"docid": "47fb3483c8f4a5c0284fec3d3a309c09",
"text": "The Knowledge Base Population (KBP) track at the Text Analysis Conference 2010 marks the second year of this important information extraction evaluation. This paper describes the design and implementation of LCC’s systems which participated in the tasks of Entity Linking, Slot Filling, and the new task of Surprise Slot Filling. For the entity linking task, our top score was achieved through a robust context modeling approach which incorporates topical evidence. For slot filling, we used the output of the entity linking system together with a combination of different types of relation extractors. For surprise slot filling, our customizable extraction system was extremely useful due to the time sensitive nature of the task.",
"title": ""
},
{
"docid": "c8e029658bf4c298cb6e77128d19eac0",
"text": "Cloud Computing Business Framework (CCBF) is proposed to help organisations achieve good Cloud design, deployment, migration and services. While organisations adopt Cloud Computing for Web Services, technical and business challenges emerge and one of these includes the measurement of Cloud business performance. Organisational Sustainability Modelling (OSM) is a new way to measure Cloud business performance quantitatively and accurately. It combines statistical computation and 3D Visualisation to present the Return on Investment arising from the adoption of Cloud Computing by organisations. 3D visualisation simplifies the review process and is an innovative way for Return of Investment (ROI) valuation. Two detailed case studies with SAP and Vodafone have been presented, where OSM has analysed the business performance and explained how CCBF offers insights, which are relatively helpful for WS and Grid businesses. Comparisons and discussions between CCBF and other approaches related to WS are presented, where lessons learned are useful for Web Services, Cloud and Grid communities.",
"title": ""
},
{
"docid": "29e500aa57f82d63596ae13639d46cbf",
"text": "In this paper we present a intrusion detection module capable of detecting malicious network traffic in a SCADA (Supervisory Control and Data Acquisition) system. Malicious data in a SCADA system disrupt its correct functioning and tamper with its normal operation. OCSVM (One-Class Support Vector Machine) is an intrusion detection mechanism that does not need any labeled data for training or any information about the kind of anomaly is expecting for the detection process. This feature makes it ideal for processing SCADA environment data and automate SCADA performance monitoring. The OCSVM module developed is trained by network traces off line and detect anomalies in the system real time. The module is part of an IDS (Intrusion Detection System) system developed under CockpitCI project and communicates with the other parts of the system by the exchange of IDMEF (Intrusion Detection Message Exchange Format) messages that carry information about the source of the incident, the time and a classification of the alarm.",
"title": ""
},
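To make the one-class idea in the preceding passage concrete, here is a minimal, hypothetical sketch using scikit-learn's OneClassSVM on synthetic "traffic" feature vectors; the actual feature extraction from SCADA traces and the IDMEF messaging described in the paper are not modeled:

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Fit only on "normal" traffic features (no attack labels needed),
# then flag outliers at run time. The 4-dimensional features are
# synthetic stand-ins for whatever is extracted from real traces.
rng = np.random.default_rng(42)
normal_traffic = rng.normal(0.0, 1.0, size=(500, 4))            # training window
live_traffic = np.vstack([rng.normal(0.0, 1.0, size=(20, 4)),   # normal traffic
                          rng.normal(6.0, 1.0, size=(5, 4))])   # injected anomalies

detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal_traffic)
labels = detector.predict(live_traffic)                          # +1 normal, -1 anomaly
print("flagged anomalies:", int((labels == -1).sum()))
```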
{
"docid": "54722f4851707c2bf51d18910728a31c",
"text": "Many modern companies wish to maintain knowledge in the form of a corporate knowledge graph and to use and manage this knowledge via a knowledge graph management system (KGMS). We formulate various requirements for a fully-fledged KGMS. In particular, such a system must be capable of performing complex reasoning tasks but, at the same time, achieve efficient and scalable reasoning over Big Data with an acceptable computational complexity. Moreover, a KGMS needs interfaces to corporate databases, the web, and machine-learning and analytics packages. We present KRR formalisms and a system achieving these goals. To this aim, we use specific suitable fragments from the Datalog± family of languages, and we introduce the vadalog system, which puts these swift logics into action. This system exploits the theoretical underpinning of relevant Datalog± languages and combines it with existing and novel techniques from database and AI practice.",
"title": ""
},
{
"docid": "a7af0135b2214ca88883fe136bb13e70",
"text": "ITIL is one of the most used frameworks for IT service management. Implementing ITIL processes through an organization is not an easy task and present many difficulties. This paper explores the ITIL implementation's challenges and tries to experiment how Business Process Management Systems can help organization overtake those challenges.",
"title": ""
},
{
"docid": "5afb121d5e4a5ab8daa80580c8bd8253",
"text": "In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focused on six categories of applications in radiology: medical image segmentation, registration, computer aided detection and diagnosis, brain function or activity analysis and neurological disease diagnosis from fMR images, content-based image retrieval systems for CT or MRI images, and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers.",
"title": ""
},
{
"docid": "84845323a1dcb318bb01fef5346c604d",
"text": "This paper introduced a centrifugal impeller-based wall-climbing robot with the μCOS-II System. Firstly, the climber's basic configurations of mechanical were described. Secondly, the mechanic analyses of walking mechanism was presented, which was essential to the suction device design. Thirdly, the control system including the PC remote control system and the STM32 master slave system was designed. Finally, an experiment was conducted to test the performance of negative pressure generating system and general abilities of wall-climbing robot.",
"title": ""
},
{
"docid": "bf48f9ac763b522b8d43cfbb281fbffa",
"text": "We present a declarative framework for collective deduplication of entity references in the presence of constraints. Constraints occur naturally in many data cleaning domains and can improve the quality of deduplication. An example of a constraint is \"each paper has a unique publication venue''; if two paper references are duplicates, then their associated conference references must be duplicates as well. Our framework supports collective deduplication, meaning that we can dedupe both paper references and conference references collectively in the example above. Our framework is based on a simple declarative Datalog-style language with precise semantics. Most previous work on deduplication either ignoreconstraints or use them in an ad-hoc domain-specific manner. We also present efficient algorithms to support the framework. Our algorithms have precise theoretical guarantees for a large subclass of our framework. We show, using a prototype implementation, that our algorithms scale to very large datasets. We provide thoroughexperimental results over real-world data demonstrating the utility of our framework for high-quality and scalable deduplication.",
"title": ""
},
{
"docid": "8bea1f9e107cfcebc080bc62d7ac600d",
"text": "The introduction of wireless transmissions into the data center has shown to be promising in improving cost effectiveness of data center networks DCNs. For high transmission flexibility and performance, a fundamental challenge is to increase the wireless availability and enable fully hybrid and seamless transmissions over both wired and wireless DCN components. Rather than limiting the number of wireless radios by the size of top-of-rack switches, we propose a novel DCN architecture, Diamond, which nests the wired DCN with radios equipped on all servers. To harvest the gain allowed by the rich reconfigurable wireless resources, we propose the low-cost deployment of scalable 3-D ring reflection spaces RRSs which are interconnected with streamlined wired herringbone to enable large number of concurrent wireless transmissions through high-performance multi-reflection of radio signals over metal. To increase the number of concurrent wireless transmissions within each RRS, we propose a precise reflection method to reduce the wireless interference. We build a 60-GHz-based testbed to demonstrate the function and transmission ability of our proposed architecture. We further perform extensive simulations to show the significant performance gain of diamond, in supporting up to five times higher server-to-server capacity, enabling network-wide load balancing, and ensuring high fault tolerance.",
"title": ""
},
{
"docid": "767d61512bb9d2db5d6bb7b3afb7b150",
"text": "Recent advances in deep generative models have shown promising potential in image inpanting, which refers to the task of predicting missing pixel values of an incomplete image using the known context. However, existing methods can be slow or generate unsatisfying results with easily detectable flaws. In addition, there is often perceivable discontinuity near the holes and require further post-processing to blend the results. We present a new approach to address the difficulty of training a very deep generative model to synthesize high-quality photo-realistic inpainting. Our model uses conditional generative adversarial networks (conditional GANs) as the backbone, and we introduce a novel block-wise procedural training scheme to stabilize the training while we increase the network depth. We also propose a new strategy called adversarial loss annealing to reduce the artifacts. We further describe several losses specifically designed for inpainting and show their effectiveness. Extensive experiments and user-study show that our approach outperforms existing methods in several tasks such as inpainting, face completion and image harmonization. Finally, we show our framework can be easily used as a tool for interactive guided inpainting, demonstrating its practical value to solve common real-world challenges.",
"title": ""
},
{
"docid": "23bc28928a00ba437660efcb1d93c1a8",
"text": "Mental disorders occur in people in all countries, societies and in all ethnic groups, regardless socio-economic order with more frequent anxiety disorders. Through the process of time many treatment have been applied in order to address this complex mental issue. People with anxiety disorders can benefit from a variety of treatments and services. Following an accurate diagnosis, possible treatments include psychological treatments and mediation. Complementary and alternative medicine (CAM) plays a significant role in health care systems. Patients with chronic pain conditions, including arthritis, chronic neck and backache, headache, digestive problems and mental health conditions (including insomnia, depression, and anxiety) were high users of CAM therapies. Aromatherapy is a holistic method of treatment, using essential oils. There are several essential oils that can help in reducing anxiety disorders and as a result the embodied events that they may cause.",
"title": ""
},
{
"docid": "de4e2e131a0ceaa47934f4e9209b1cdd",
"text": "With the popularity of mobile devices, spatial crowdsourcing is rising as a new framework that enables human workers to solve tasks in the physical world. With spatial crowdsourcing, the goal is to crowdsource a set of spatiotemporal tasks (i.e., tasks related to time and location) to a set of workers, which requires the workers to physically travel to those locations in order to perform the tasks. In this article, we focus on one class of spatial crowdsourcing, in which the workers send their locations to the server and thereafter the server assigns to every worker tasks in proximity to the worker’s location with the aim of maximizing the overall number of assigned tasks. We formally define this maximum task assignment (MTA) problem in spatial crowdsourcing, and identify its challenges. We propose alternative solutions to address these challenges by exploiting the spatial properties of the problem space, including the spatial distribution and the travel cost of the workers. MTA is based on the assumptions that all tasks are of the same type and all workers are equally qualified in performing the tasks. Meanwhile, different types of tasks may require workers with various skill sets or expertise. Subsequently, we extend MTA by taking the expertise of the workers into consideration. We refer to this problem as the maximum score assignment (MSA) problem and show its practicality and generality. Extensive experiments with various synthetic and two real-world datasets show the applicability of our proposed framework.",
"title": ""
},
{
"docid": "e12410e92e3f4c0f9c78bc5988606c93",
"text": "Semiarid environments are known for climate extremes such as high temperatures, low humidity, irregular precipitations, and apparent resource scarcity. We aimed to investigate how a small neotropical primate (Callithrix jacchus; the common marmoset) manages to survive under the harsh conditions that a semiarid environment imposes. The study was carried out in a 400-ha area of Caatinga in the northeast of Brazil. During a 6-month period (3 months of dry season and 3 months of wet season), we collected data on the diet of 19 common marmosets (distributed in five groups) and estimated their behavioral time budget during both the dry and rainy seasons. Resting significantly increased during the dry season, while playing was more frequent during the wet season. No significant differences were detected regarding other behaviors. In relation to the diet, we recorded the consumption of prey items such as insects, spiders, and small vertebrates. We also observed the consumption of plant items, including prickly cladodes, which represents a previously undescribed food item for this species. Cladode exploitation required perceptual and motor skills to safely access the food resource, which is protected by sharp spines. Our findings show that common marmosets can survive under challenging conditions in part because of adjustments in their behavior and in part because of changes in their diet.",
"title": ""
},
{
"docid": "2910fe6ac9958d9cbf9014c5d3140030",
"text": "We present a novel variational approach to estimate dense depth maps from multiple images in real-time. By using robust penalizers for both data term and regularizer, our method preserves discontinuities in the depth map. We demonstrate that the integration of multiple images substantially increases the robustness of estimated depth maps to noise in the input images. The integration of our method into recently published algorithms for camera tracking allows dense geometry reconstruction in real-time using a single handheld camera. We demonstrate the performance of our algorithm with real-world data.",
"title": ""
},
{
"docid": "b0901a572ecaaeb1233b92d5653c2f12",
"text": "This qualitative study offers a novel exploration of the links between social media, virtual intergroup contact, and empathy by examining how empathy is expressed through interactions on a popular social media blog. Global leaders are encouraging individuals to engage in behaviors and support policies that provide basic social foundations. It is difficult to motivate people to undertake such actions. However, research shows that empathy intensifies motivation to help others. It can cause individuals to see the world from the perspective of stigmatized group members and increase positive feelings. Social media offers a new pathway for virtual intergroup contact, providing opportunities to increase conversation about disadvantaged others and empathy. We examined expressions of empathy within a popular blog, Humans of New York (HONY), and engaged in purposeful case selection by focusing on (1) events where specific prosocial action was taken corresponding to interactions on the HONY blog and (2) presentation of people in countries other than the United States. Nine overarching themes; (1) perspective taking, (2) fantasy, (3) empathic concern, (4) personal distress, (5) relatability, (6) prosocial action, (7) community appreciation, (8) anti-empathy, and (9) rejection of anti-empathy, exemplify how the HONY community expresses and shares empathic thoughts and feelings.",
"title": ""
},
{
"docid": "f11ff738aaf7a528302e6ec5ed99c43c",
"text": "Vehicles equipped with GPS localizers are an important sensory device for examining people’s movements and activities. Taxis equipped with GPS localizers serve the transportation needs of a large number of people driven by diverse needs; their traces can tell us where passengers were picked up and dropped off, which route was taken, and what steps the driver took to find a new passenger. In this article, we provide an exhaustive survey of the work on mining these traces. We first provide a formalization of the data sets, along with an overview of different mechanisms for preprocessing the data. We then classify the existing work into three main categories: social dynamics, traffic dynamics and operational dynamics. Social dynamics refers to the study of the collective behaviour of a city’s population, based on their observed movements; Traffic dynamics studies the resulting flow of the movement through the road network; Operational dynamics refers to the study and analysis of taxi driver’s modus operandi. We discuss the different problems currently being researched, the various approaches proposed, and suggest new avenues of research. Finally, we present a historical overview of the research work in this field and discuss which areas hold most promise for future research.",
"title": ""
}
] |
scidocsrr
|
23ce923d758b64278b06a88e729bc876
|
Panoptic Segmentation
|
[
{
"docid": "b8b73a2f4924aaa34cf259d0f5eca3ba",
"text": "Semantic segmentation and object detection research have recently achieved rapid progress. However, the former task has no notion of different instances of the same object, and the latter operates at a coarse, bounding-box level. We propose an Instance Segmentation system that produces a segmentation map where each pixel is assigned an object class and instance identity label. Most approaches adapt object detectors to produce segments instead of boxes. In contrast, our method is based on an initial semantic segmentation module, which feeds into an instance subnetwork. This subnetwork uses the initial category-level segmentation, along with cues from the output of an object detector, within an end-to-end CRF to predict instances. This part of our model is dynamically instantiated to produce a variable number of instances per image. Our end-to-end approach requires no post-processing and considers the image holistically, instead of processing independent proposals. Therefore, unlike some related work, a pixel cannot belong to multiple instances. Furthermore, far more precise segmentations are achieved, as shown by our substantial improvements at high APr thresholds.",
"title": ""
},
{
"docid": "7f7eb61a7c5c92b49eb96e53bd60c6ca",
"text": "Scene parsing, or recognizing and segmenting objects and stuff in an image, is one of the key problems in computer vision. Despite the communitys efforts in data collection, there are still few image datasets covering a wide range of scenes and object categories with dense and detailed annotations for scene parsing. In this paper, we introduce and analyze the ADE20K dataset, spanning diverse annotations of scenes, objects, parts of objects, and in some cases even parts of parts. A scene parsing benchmark is built upon the ADE20K with 150 object and stuff classes included. Several segmentation baseline models are evaluated on the benchmark. A novel network design called Cascade Segmentation Module is proposed to parse a scene into stuff, objects, and object parts in a cascade and improve over the baselines. We further show that the trained scene parsing networks can lead to applications such as image content removal and scene synthesis1.",
"title": ""
},
{
"docid": "f456edd4d56dab8f0a60a3cef87f6cdb",
"text": "In this paper, we propose Sequential Grouping Networks (SGN) to tackle the problem of object instance segmentation. SGNs employ a sequence of neural networks, each solving a sub-grouping problem of increasing semantic complexity in order to gradually compose objects out of pixels. In particular, the first network aims to group pixels along each image row and column by predicting horizontal and vertical object breakpoints. These breakpoints are then used to create line segments. By exploiting two-directional information, the second network groups horizontal and vertical lines into connected components. Finally, the third network groups the connected components into object instances. Our experiments show that our SGN significantly outperforms state-of-the-art approaches in both, the Cityscapes dataset as well as PASCAL VOC.",
"title": ""
}
] |
[
{
"docid": "3835455140b289ed7adea524e44c769c",
"text": "Females represent just over half of the United States population. Yet their role in cinematic content does not reflect this reality. Looking at characters in films from 1946 to 1990, one study shows that females only occupy 25-28% of all parts. Another study found that 32% of all primary and secondary roles are filled with females across 100 films released between 1940 and 1980. More recent data reveals a similarly lop-sided scenario, yielding roughly equivalent point statistics for females in film (27.3-32%). 3 Assessing over 15,000 speaking characters across 400 top-grossing theatrically released G, PG, PG-13, and R-rated films, Smith and her colleagues found 2.71 males appear for every one female. Put another way, only 27% of all speaking characters in movies are girls or women. Significant but trivial deviation emerged in the percentage of females by Motion Picture Association of America (MPAA) rating. No change in the percentage of females materialized by release date across three distinct periods of time (i.e., 1990-95, 1996-00, 2001-06).",
"title": ""
},
{
"docid": "46ba4ec0c448e01102d5adae0540ca0d",
"text": "An extensive literature shows that social relationships influence psychological well-being, but the underlying mechanisms remain unclear. We test predictions about online interactions and well-being made by theories of belongingness, relationship maintenance, relational investment, social support, and social comparison. An opt-in panel study of 1,910 Facebook users linked self-reported measures of well-being to counts of respondents’ Facebook activities from server logs. Specific uses of the site were associated with improvements in well-being: Receiving targeted, composed communication from strong ties was associated with improvements in wellbeing while viewing friends’ wide-audience broadcasts and receiving one-click feedback were not. These results suggest that people derive benefits from online communication, as long it comes from people they care about and has been tailored for them.",
"title": ""
},
{
"docid": "a909c6a1c4d24c33d4b2d22254e3a199",
"text": "We present a method for constructing ensembles from libraries of thousands of models. Model libraries are generated using different learning algorithms and parameter settings. Forward stepwise selection is used to add to the ensemble the models that maximize its performance. Ensemble selection allows ensembles to be optimized to performance metric such as accuracy, cross entropy, mean precision, or ROC Area. Experiments with seven test problems and ten metrics demonstrate the benefit of ensemble selection.",
"title": ""
},
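The forward stepwise selection procedure summarized in the preceding passage can be sketched in a few lines of Python. This toy version (with a made-up accuracy metric, synthetic library predictions, selection with replacement, and no bagging of the library) only illustrates the greedy loop:

```python
import numpy as np

# Toy forward stepwise ensemble selection: repeatedly add (with
# replacement) the library model whose inclusion most improves the
# chosen metric on a hillclimb set.
def ensemble_selection(library_preds, y_true, metric, n_steps=10):
    chosen, running_sum = [], np.zeros_like(library_preds[0])
    for _ in range(n_steps):
        best_idx, best_score = None, -np.inf
        for idx, preds in enumerate(library_preds):
            score = metric(y_true, (running_sum + preds) / (len(chosen) + 1))
            if score > best_score:
                best_idx, best_score = idx, score
        chosen.append(best_idx)
        running_sum += library_preds[best_idx]
    return chosen, running_sum / len(chosen)

accuracy = lambda y, p: float(((p > 0.5).astype(int) == y).mean())

# Synthetic "model library": noisy probability estimates of the labels.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
library = [np.clip(y + rng.normal(0.0, s, size=200), 0.0, 1.0) for s in (0.3, 0.5, 0.8)]
picked, ensemble_pred = ensemble_selection(library, y, accuracy)
print("selected model indices:", picked)
```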
{
"docid": "a9f23b7a6e077d7e9ca1a3165948cdf3",
"text": "In most problem-solving activities, feedback is received at the end of an action sequence. This creates a credit-assignment problem where the learner must associate the feedback with earlier actions, and the interdependencies of actions require the learner to either remember past choices of actions (internal state information) or rely on external cues in the environment (external state information) to select the right actions. We investigated the nature of explicit and implicit learning processes in the credit-assignment problem using a probabilistic sequential choice task with and without external state information. We found that when explicit memory encoding was dominant, subjects were faster to select the better option in their first choices than in the last choices; when implicit reinforcement learning was dominant subjects were faster to select the better option in their last choices than in their first choices. However, implicit reinforcement learning was only successful when distinct external state information was available. The results suggest the nature of learning in credit assignment: an explicit memory encoding process that keeps track of internal state information and a reinforcement-learning process that uses state information to propagate reinforcement backwards to previous choices. However, the implicit reinforcement learning process is effective only when the valences can be attributed to the appropriate states in the system – either internally generated states in the cognitive system or externally presented stimuli in the environment.",
"title": ""
},
{
"docid": "42b492d698e5301b74f29d485dd3dcac",
"text": "Links is a programming language for web applications that generates code for all three tiers of a web application from a single source, compiling into JavaScript to run on the client and into SQL to run on the database. Links supports rich clients running in what has been dubbed ‘Ajax’ style, and supports concurrent processes with statically-typed message passing. Links is scalable in the sense that session state is preserved in the client rather than the server, in contrast to other approaches such as Java Servlets or PLT Scheme. Client-side concurrency in JavaScript and transfer of computation between client and server are both supported by translation into continuation-passing style.",
"title": ""
},
{
"docid": "bb93778655c0bfa525d9539f8f720da6",
"text": "Small embedded integrated circuits (ICs) such as smart cards are vulnerable to the so-called side-channel attacks (SCAs). The attacker can gain information by monitoring the power consumption, execution time, electromagnetic radiation, and other information leaked by the switching behavior of digital complementary metal-oxide-semiconductor (CMOS) gates. This paper presents a digital very large scale integrated (VLSI) design flow to create secure power-analysis-attack-resistant ICs. The design flow starts from a normal design in a hardware description language such as very-high-speed integrated circuit (VHSIC) hardware description language (VHDL) or Verilog and provides a direct path to an SCA-resistant layout. Instead of a full custom layout or an iterative design process with extensive simulations, a few key modifications are incorporated in a regular synchronous CMOS standard cell design flow. The basis for power analysis attack resistance is discussed. This paper describes how to adjust the library databases such that the regular single-ended static CMOS standard cells implement a dynamic and differential logic style and such that 20 000+ differential nets can be routed in parallel. This paper also explains how to modify the constraints and rules files for the synthesis, place, and differential route procedures. Measurement-based experimental results have demonstrated that the secure digital design flow is a functional technique to thwart side-channel power analysis. It successfully protects a prototype Advanced Encryption Standard (AES) IC fabricated in an 0.18-mum CMOS",
"title": ""
},
{
"docid": "b1854f263b2f320136a62059e6f4fc57",
"text": "1Computer Science Department, College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia 2Computer Science Department/Computer Information Systems Department, King Abdullah II School for Information Technology (KASIT), The University of Jordan, Amman, Jordan 3Computer Science Department, College of Computation and Informatics, Saudi Electronic University, Riyadh, Saudi Arabia",
"title": ""
},
{
"docid": "15a1f3e67b10a8589a8b26439fe8d4af",
"text": "AIMS\nFirst, to document the prevalence of corridor occupations and conversations among the staff of a hospital clinic, and their main features. Second, to examine the activities accomplished through corridor conversations and their interactional organization.\n\n\nBACKGROUND\nDespite extensive research on mobility in hospital work, we still know fairly little about the prevalence and features of hospital staff corridor conversations and how they are organized.\n\n\nDESIGN\nWe conducted a study combining descriptive statistical analysis and multimodal conversation analysis of video recordings of staff corridor practices in a hospital outpatient clinic in Switzerland.\n\n\nMETHODS\nIn 2012, we collected 59 hours of video recordings in a corridor of a hospital clinic. We coded and statistically analysed the footage that showed the clinic staff exclusively. We also performed qualitative multimodal conversation analysis on a selection of the recorded staff conversations.\n\n\nRESULTS\nCorridor occupations by the clinic staff are frequent and brief and rarely involve stops. Talk events (which include self-talk, face-to-face conversations and telephone conversations) during occupations are also brief and mobile, overwhelmingly focus on professional topics and are particularly frequent when two or more staff members occupy the corridor. The conversations present several interactional configurations and comprise an array of activities consequential to the provision of care and work organization.\n\n\nCONCLUSION\nThese practices are related to the fluid work organization of a spatially distributed team in a fast-paced, multitasking environment and should be taken into consideration in any undertaking aimed at improving hospital units' functioning.",
"title": ""
},
{
"docid": "0f493c438c0eb256e92996603e1ea41f",
"text": "This paper proposes a method to compensate RGB-D images from the original target RGB images by transferring the depth knowledge of source data. Conventional RGB databases (e.g., UT-Interaction database) do not contain depth information since they are captured by the RGB cameras. Therefore, the methods designed for {RGB} databases cannot take advantage of depth information, which proves useful for simplifying intra-class variations and background subtraction. In this paper, we present a novel transfer learning method that can transfer the knowledge from depth information to the RGB database, and use the additional source information to recognize human actions in RGB videos. Our method takes full advantage of 3D geometric information contained within the learned depth data, thus, can further improve action recognition performance. We treat action data as a fourth-order tensor (row, column, frame and sample), and apply latent low-rank transfer learning to learn shared subspaces of the source and target databases. Moreover, we introduce a novel cross-modality regularizer that plays an important role in finding the correlation between RGB and depth modalities, and then more depth information from the source database can be transferred to that of the target. Our method is extensively evaluated on public by available databases. Results of two action datasets show that our method outperforms existing methods.",
"title": ""
},
{
"docid": "15a24d02f998f0b515e35ce4c66a6dc1",
"text": "Nowadays chronic diseases are the leading cause of deaths in India. These diseases which include various ailments in the form of diabetes, stroke, cardiovascular diseases, mental health illness, cancers, and chronic lung diseases. Chronic diseases are the biggest challenge for India and these diseases are the main cause of hospitalization for elder people. People who have suffered from chronic diseases are needed to repeatedly monitor the vital signs periodically. The number of nurses in hospital is relative low compared to the number of patients in hospital, there may be a chance to miss to monitor any patient vital signs which may affect patient health. In this paper, real time monitoring vital signs of a patient is developed using wearable sensors. Without nurse help, patient know the vital signs from the sensors and the system stored the sensor value in the form of text document. By using data mining approaches, the system is trained for vital sign data. Patients give their text document to the system which in turn they know their health status without any nurse help. This system enables high risk patients to be timely checked and enhance the quality of a life of patients.",
"title": ""
},
{
"docid": "8970ace14fef5499de4bf810ab66c7ce",
"text": "Glioblastoma multiforme is the most common primary malignant brain tumour, with a median survival of about one year. This poor prognosis is due to therapeutic resistance and tumour recurrence after surgical removal. Precisely how recurrence occurs is unknown. Using a genetically engineered mouse model of glioma, here we identify a subset of endogenous tumour cells that are the source of new tumour cells after the drug temozolomide (TMZ) is administered to transiently arrest tumour growth. A nestin-ΔTK-IRES-GFP (Nes-ΔTK-GFP) transgene that labels quiescent subventricular zone adult neural stem cells also labels a subset of endogenous glioma tumour cells. On arrest of tumour cell proliferation with TMZ, pulse-chase experiments demonstrate a tumour re-growth cell hierarchy originating with the Nes-ΔTK-GFP transgene subpopulation. Ablation of the GFP+ cells with chronic ganciclovir administration significantly arrested tumour growth, and combined TMZ and ganciclovir treatment impeded tumour development. Thus, a relatively quiescent subset of endogenous glioma cells, with properties similar to those proposed for cancer stem cells, is responsible for sustaining long-term tumour growth through the production of transient populations of highly proliferative cells.",
"title": ""
},
{
"docid": "830588b6ff02a05b4d76b58a3e4e7c44",
"text": "The integration of GIS and multicriteria decision analysis has attracted significant interest over the last 15 years or so. This paper surveys the GISbased multicriteria decision analysis (GIS-MCDA) approaches using a literature review and classification of articles from 1990 to 2004. An electronic search indicated that over 300 articles appeared in refereed journals. The paper provides taxonomy of those articles and identifies trends and developments in GISMCDA.",
"title": ""
},
{
"docid": "65415effb35f9c8234f81fdef2916f42",
"text": "The scanpath comparison framework based on string editing is revisited. The previous method of clustering based on k-means \"preevaluation\" is replaced by the mean shift algorithm followed by elliptical modeling via Principal Components Analysis. Ellipse intersection determines cluster overlap, with fast nearest-neighbor search provided by the kd-tree. Subsequent construction of Y - matrices and parsing diagrams is fully automated, obviating prior interactive steps. Empirical validation is performed via analysis of eye movements collected during a variant of the Trail Making Test, where participants were asked to visually connect alphanumeric targets (letters and numbers). The observed repetitive position similarity index matches previously published results, providing ongoing support for the scanpath theory (at least in this situation). Task dependence of eye movements may be indicated by the global position index, which differs considerably from past results based on free viewing.",
"title": ""
},
{
"docid": "ea62036a195a99436f06fd87b00bef46",
"text": "Variational inference provides a powerful tool for approximate probabilistic inference on complex, structured models. Typical variational inference methods, however, require to use inference networks with computationally tractable probability density functions. This largely limits the design and implementation of variational inference methods. We consider wild variational inference methods that do not require tractable density functions on the inference networks, and hence can be applied in more challenging cases. As an example of application, we treat stochastic gradient Langevin dynamics (SGLD) as an inference network, and use our methods to automatically adjust the step sizes of SGLD, yielding significant improvement over the hand-designed step size schemes.",
"title": ""
},
{
"docid": "030e286a2dee28787c3c60d6251f084a",
"text": "We consider a number of dynamic problems with no known poly-logarithmic upper bounds, and show that they require nΩ(1) time per operation, unless 3SUM has strongly subquadratic algorithms. Our result is modular: (1) We describe a carefully-chosen dynamic version of set disjointness (the \"multiphase problem\"), and conjecture that it requires n^Omega(1) time per operation. All our lower bounds follow by easy reduction. (2) We reduce 3SUM to the multiphase problem. Ours is the first nonalgebraic reduction from 3SUM, and allows 3SUM-hardness results for combinatorial problems. For instance, it implies hardness of reporting all triangles in a graph. (3) It is plausible that an unconditional lower bound for the multiphase problem can be established via a number-on-forehead communication game.",
"title": ""
},
{
"docid": "16cbc21b3092a5ba0c978f0cf38710ab",
"text": "A major challenge to the problem of community question answering is the lexical and semantic gap between the sentence representations. Some solutions to minimize this gap includes the introduction of extra parameters to deep models or augmenting the external handcrafted features. In this paper, we propose a novel attentive recurrent tensor network for solving the lexical and semantic gap in community question answering. We introduce token-level and phrase-level attention strategy that maps input sequences to the output using trainable parameters. Further, we use the tensor parameters to introduce a 3-way interaction between question, answer and external features in vector space. We introduce simplified tensor matrices with L2 regularization that results in smooth optimization during training. The proposed model achieves state-of-the-art performance on the task of answer sentence selection (TrecQA and WikiQA datasets) while outperforming the current state-of-the-art on the tasks of best answer selection (Yahoo! L4) and answer triggering task (WikiQA).",
"title": ""
},
{
"docid": "24e78f149b2e42a5c98eb3443c023853",
"text": "Cone-beam CT system has become a hot issue in current CT technique. Compared with the traditional 2D CT, cone beam CT can greatly reduce the scanning time, improve the utilization ratio of X-ray, and enhance the spatial resolution. In the article, simulation data based on the 3D Shepp-Logan Model was obtained by tracing the X-ray and applying the radial attenuation theory. FDK (Feldkamp, Davis and Kress) reconstruction algorithm was then adopted to reconstruct the 3D Shepp-Logan Mode. The reconstruction results indicate that for the central image the spatial resolution can reach 8linepairs/mm. Reconstructed images truthfully reveal the archetype.",
"title": ""
},
{
"docid": "0988297cfd3aaeb077e2be71f4106c81",
"text": "HadoopDB is a hybrid of MapReduce and DBMS technologies, designed to meet the growing demand of analyzing massive datasets on very large clusters of machines. Our previous work has shown that HadoopDB approaches parallel databases in performance and still yields the scalability and fault tolerance of MapReduce-based systems. In this demonstration, we focus on HadoopDB's flexible architecture and versatility with two real world application scenarios: a semantic web data application for protein sequence analysis and a business data warehousing application based on TPC-H. The demonstration offers a thorough walk-through of how to easily build applications on top of HadoopDB.",
"title": ""
},
{
"docid": "85f9eb1b79ba0bc11e275c8a48731e8f",
"text": "OBJECTIVES\nThe long-term effects of amino acid-based formula (AAF) in the treatment of cow's milk allergy (CMA) are largely unexplored. The present study comparatively evaluates body growth and protein metabolism in CMA children treated with AAF or with extensively hydrolyzed whey formula (eHWF), and healthy controls.\n\n\nMETHODS\nA 12-month multicenter randomized control trial was conducted in outpatients with CMA (age 5-12 m) randomized in 2 groups, treated with AAF (group 1) and eHWF (group 2), and compared with healthy controls (group 3) fed with follow-on (if age <12 months) or growing-up formula (if age >12 months). At enrolment (T0), after 3 (T3), 6 (T6), and 12 months (T12) a clinical evaluation was performed. At T0 and T3, in subjects with CMA serum levels of albumin, urea, total protein, retinol-binding protein, and insulin-like growth factor 1 were measured.\n\n\nRESULTS\nTwenty-one subjects in group 1 (61.9% boys, age 6.5 ± 1.5 months), 19 in group 2 (57.9% boys, age 7 ± 1.7 months) and 25 subjects in group 3 (48% boys, age 5.5 ± 0.5 months) completed the study. At T0, the weight z score was similar in group 1 (-0.74) and 2 (-0.76), with differences compared to group 3 (-0.17, P < 0.05). At T12, the weight z score value was similar between the 3 groups without significant differences. There were no significant changes in protein metabolism in children in groups 1 and 2.\n\n\nCONCLUSION\nLong-term treatment with AAF is safe and allows adequate body growth in children with CMA.",
"title": ""
},
{
"docid": "cab34efb913c222c12ea1aaf07dcd246",
"text": "Engineered biological systems have been used to manipulate information, construct materials, process chemicals, produce energy, provide food, and help maintain or enhance human health and our environment. Unfortunately, our ability to quickly and reliably engineer biological systems that behave as expected remains quite limited. Foundational technologies that make routine the engineering of biology are needed. Vibrant, open research communities and strategic leadership are necessary to ensure that the development and application of biological technologies remains overwhelmingly constructive.",
"title": ""
}
] |
scidocsrr
|
9217d2b8e887d8b2b61a954008f06d9b
|
A Study Using $n$-gram Features for Text Categorization
|
[
{
"docid": "97a7ebf3cffa55f97e28ca42d1239131",
"text": "The eeect of selecting varying numbers and kinds of features for use in predicting category membership was investigated on the Reuters and MUC-3 text categorization data sets. Good categorization performance was achieved using a statistical classiier and a proportional assignment strategy. The optimal feature set size for word-based indexing was found to be surprisingly low (10 to 15 features) despite the large training sets. The extraction of new text features by syntactic analysis and feature clustering was investigated on the Reuters data set. Syntactic indexing phrases, clusters of these phrases, and clusters of words were all found to provide less eeective representations than individual words.",
"title": ""
}
] |
[
{
"docid": "bfdbed47fc25bb6efbb649dd13fcdedf",
"text": "Passive haptic feedback is very compelling, but a different physical object is needed for each virtual object requiring haptic feedback. I propose to enhance passive haptics by exploiting visual dominance, enabling a single physical object to provide haptic feedback for many differently shaped virtual objects. Potential applications include virtual prototyping, redirected walking, entertainment, art, and training.",
"title": ""
},
{
"docid": "793082d8e5367625145a7d7993bec19f",
"text": "Future advanced driver assistant systems put high demands on the environmental perception especially in urban environments. Today's on-board sensors and on-board algorithms still do not reach a satisfying level of development from the point of view of robustness and availability. Thus, map data is often used as an additional data input to support the on-board sensor system and algorithms. The usage of map data requires a highly correct pose within the map even in cases of positioning errors by global navigation satellite systems or geometrical errors in the map data. In this paper we propose and compare two approaches for map-relative localization exclusively using a lane-level map. These approaches deliberately avoid the usage of detailed a priori maps containing point-landmarks, grids or road-markings. Additionally, we propose a grid-based on-board fusion of road-marking information and stationary obstacles addressing the problem of missing or incomplete road-markings in urban scenarios.",
"title": ""
},
{
"docid": "f7daa0d175d4a7ae8b0869802ff3c4ab",
"text": "Several consumer speech devices feature voice interfaces that perform on-device keyword spotting to initiate user interactions. Accurate on-device keyword spotting within a tight CPU budget is crucial for such devices. Motivated by this, we investigated two ways to improve deep neural network (DNN) acoustic models for keyword spotting without increasing CPU usage. First, we used low-rank weight matrices throughout the DNN. This allowed us to increase representational power by increasing the number of hidden nodes per layer without changing the total number of multiplications. Second, we used knowledge distilled from an ensemble of much larger DNNs used only during training. We systematically evaluated these two approaches on a massive corpus of far-field utterances. Alone both techniques improve performance and together they combine to give significant reductions in false alarms and misses without increasing CPU or memory usage.",
"title": ""
},
{
"docid": "af3297de35d49f774e2f31f31b09fd61",
"text": "This paper explores the phenomena of the emergence of the use of artificial intelligence in teaching and learning in higher education. It investigates educational implications of emerging technologies on the way students learn and how institutions teach and evolve. Recent technological advancements and the increasing speed of adopting new technologies in higher education are explored in order to predict the future nature of higher education in a world where artificial intelligence is part of the fabric of our universities. We pinpoint some challenges for institutions of higher education and student learning in the adoption of these technologies for teaching, learning, student support, and administration and explore further directions for research.",
"title": ""
},
{
"docid": "1e7c094acc791dcfad54e7eb9bf3a1fe",
"text": "Steganography is an ancient art. With the advent of computers, we have vast accessible bodies of data in which to hide information, and increasingly sophisticated techniques with which to analyze and recover that information. While much of the recent research in steganography has been centered on hiding data in images, many of the solutions that work for images are more complicated when applied to natural language text as a cover medium. Many approaches to steganalysis attempt to detect statistical anomalies in cover data which predict the presence of hidden information. Natural language cover texts must not only pass the statistical muster of automatic analysis, but also the minds of human readers. Linguistically naïve approaches to the problem use statistical frequency of letter combinations or random dictionary words to encode information. More sophisticated approaches use context-free grammars to generate syntactically correct cover text which mimics the syntax of natural text. None of these uses meaning as a basis for generation, and little attention is paid to the semantic cohesiveness of a whole text as a data point for statistical attack. This paper provides a basic introduction to steganography and steganalysis, with a particular focus on text steganography. Text-based information hiding techniques are discussed, providing motivation for moving toward linguistic steganography and steganalysis. We highlight some of the problems inherent in text steganography as well as issues with existing solutions, and describe linguistic problems with character-based, lexical, and syntactic approaches. Finally, the paper explores how a semantic and rhetorical generation approach suggests solutions for creating more believable cover texts, presenting some current and future issues in analysis and generation. The paper is intended to be both general enough that linguists without training in information security and computer science can understand the material, and specific enough that the linguistic and computational problems are described in adequate detail to justify the conclusions suggested.",
"title": ""
},
{
"docid": "32287cfcf9978e04bea4ab5f01a6f5da",
"text": "OBJECTIVE\nThe purpose of this study was to examine the relationship of performance on the Developmental Test of Visual-Motor Integration (VMI; Beery, 1997) to handwriting legibility in children attending kindergarten. The relationship of using lined versus unlined paper on letter legibility, based on a modified version of the Scale of Children's Readiness in PrinTing (Modified SCRIPT; Weil & Cunningham Amundson, 1994) was also investigated.\n\n\nMETHOD\nFifty-four typically developing kindergarten students were administered the VMI; 30 students completed the Modified SCRIPT with unlined paper, 24 students completed the Modified SCRIPT with lined paper. Students were assessed in the first quarter of the kindergarten school year and scores were analyzed using correlational and nonparametric statistical measures.\n\n\nRESULTS\nStrong positive relationships were found between VMI assessment scores and student's ability to legibly copy letterforms. Students who could copy the first nine forms on the VMI performed significantly better than students who could not correctly copy the first nine VMI forms on both versions of the Modified SCRIPT.\n\n\nCONCLUSION\nVisual-motor integration skills were shown to be related to the ability to copy letters legibly. These findings support the research of Weil and Cunningham Amundson. Findings from this study also support the conclusion that there is no significant difference in letter writing legibility between students who use paper with or without lines.",
"title": ""
},
{
"docid": "733e5961428e5aad785926e389b9bd75",
"text": "OBJECTIVE\nPeer support can be defined as the process of giving and receiving nonprofessional, nonclinical assistance from individuals with similar conditions or circumstances to achieve long-term recovery from psychiatric, alcohol, and/or other drug-related problems. Recently, there has been a dramatic rise in the adoption of alternative forms of peer support services to assist recovery from substance use disorders; however, often peer support has not been separated out as a formalized intervention component and rigorously empirically tested, making it difficult to determine its effects. This article reports the results of a literature review that was undertaken to assess the effects of peer support groups, one aspect of peer support services, in the treatment of addiction.\n\n\nMETHODS\nThe authors of this article searched electronic databases of relevant peer-reviewed research literature including PubMed and MedLINE.\n\n\nRESULTS\nTen studies met our minimum inclusion criteria, including randomized controlled trials or pre-/post-data studies, adult participants, inclusion of group format, substance use-related, and US-conducted studies published in 1999 or later. Studies demonstrated associated benefits in the following areas: 1) substance use, 2) treatment engagement, 3) human immunodeficiency virus/hepatitis C virus risk behaviors, and 4) secondary substance-related behaviors such as craving and self-efficacy. Limitations were noted on the relative lack of rigorously tested empirical studies within the literature and inability to disentangle the effects of the group treatment that is often included as a component of other services.\n\n\nCONCLUSION\nPeer support groups included in addiction treatment shows much promise; however, the limited data relevant to this topic diminish the ability to draw definitive conclusions. More rigorous research is needed in this area to further expand on this important line of research.",
"title": ""
},
{
"docid": "dec1296463199214ef67c1c9f5b848be",
"text": "The scope of this second edition of the introduction to fundamental distributed programming abstractions has been extended to cover 'Byzantine fault tolerance'. It includes algorithms to Whether rgui and function or matrix. Yes no plotting commands the same dim. For scenarios such as is in which are available packages still! The remote endpoint the same model second example in early. Variables are omitted the one way, datagram transports inherently support which used by swayne cook. The sense if you do this is somewhat. Under which they were specified by declaring the vector may make. It as not be digitally signed like the binding configuration. The states and unordered factors the printing of either rows. In the appropriate interpreter if and that locale. In this and can be ignored, for has. Values are used instead of choice the probability density. There are recognized read only last two http the details see below specify. One mode namely this is used. Look at this will contain a vector of multiple. Wilks you will look at this is quite hard. The character expansion are copied when character. For fitting function takes an expression, so called the object. However a parameter data analysis and, rbind or stem and qqplot. The result is in power convenience and the outer true as many. Functions can reduce the requester. In that are vectors or, data into a figure five values for linear regressions. Like structures are the language and stderr would fit hard to rules. Messages for reliable session concretely, ws rm standard bindings the device will launch a single. Consider the users note that device this. Alternatively ls can remove directory say consisting. The common example it has gone into groups ws rm support whenever you. But the previous commands can be used graphical parameters to specified. Also forms of filepaths and all the receiver. For statistical methods require some rather inflexible.",
"title": ""
},
{
"docid": "af8ddd6792a98ea3b59bdaab7c7fa045",
"text": "This research explores the alternative media ecosystem through a Twitter lens. Over a ten-month period, we collected tweets related to alternative narratives—e.g. conspiracy theories—of mass shooting events. We utilized tweeted URLs to generate a domain network, connecting domains shared by the same user, then conducted qualitative analysis to understand the nature of different domains and how they connect to each other. Our findings demonstrate how alternative news sites propagate and shape alternative narratives, while mainstream media deny them. We explain how political leanings of alternative news sites do not align well with a U.S. left-right spectrum, but instead feature an antiglobalist (vs. globalist) orientation where U.S. Alt-Right sites look similar to U.S. Alt-Left sites. Our findings describe a subsection of the emerging alternative media ecosystem and provide insight in how websites that promote conspiracy theories and pseudo-science may function to conduct underlying political agendas.",
"title": ""
},
{
"docid": "59a32ec5b88436eca75d8fa9aa75951b",
"text": "A visual-relational knowledge graph (KG) is a multi-relational graph whose entities are associated with images. We introduce ImageGraph, a KG with 1,330 relation types, 14,870 entities, and 829,931 images. Visual-relational KGs lead to novel probabilistic query types where images are treated as first-class citizens. Both the prediction of relations between unseen images and multi-relational image retrieval can be formulated as query types in a visual-relational KG. We approach the problem of answering such queries with a novel combination of deep convolutional networks and models for learning knowledge graph embeddings. The resulting models can answer queries such as “How are these two unseen images related to each other?\" We also explore a zero-shot learning scenario where an image of an entirely new entity is linked with multiple relations to entities of an existing KG. The multi-relational grounding of unseen entity images into a knowledge graph serves as the description of such an entity. We conduct experiments to demonstrate that the proposed deep architectures in combination with KG embedding objectives can answer the visual-relational queries efficiently and accurately.",
"title": ""
},
{
"docid": "ab2a73c5bf3c8d7c65cdde282de1b62c",
"text": "Centuries of co-evolution between Castanea spp. biodiversity and human populations has resulted in the spread of rich and varied chestnut genetic diversity throughout most of the world, especially in mountainous and forested regions. Its plasticity and adaptability to different pedoclimates and the wide genetic variability of the species determined the spread of many different ecotypes and varieties in the wild. Throughout the centuries, man has used, selected and preserved these different genotypes, vegetatively propagating them by grafting, for many applications: fresh consumption, production of flour, animal nutrition, timber production, thereby actively contributing to the maintenance of the natural biodiversity of the species, and providing an excellent example of conservation horticulture. Nonetheless, currently the genetic variability of the species is critically endangered and hundreds of ecotypes and varieties are at risk of being lost due to a number of phytosanitary problems (canker blight, Chryphonectria parasitica; ink disease, Phytophthora spp.; gall wasp, Dryocosmus kuriphilus), and because of the many years of decline and abandonment of chestnut cultivation, which resulted in the loss of the binomial male chestnut. Recently, several research and experimentation programmes have attempted to develop strategies for the conservation of chestnut biodiversity. The purpose of this paper is to give an overview of the status of biodiversity conservation of the species and to present the results of a 7 year project aimed at the individuation and study of genetic diversity and conservation of Castanea spp. germplasm.",
"title": ""
},
{
"docid": "f104989b26d60908e76e34794cb420af",
"text": "Energy monitoring and conservation holds prime importance in today's world because of the imbalance between power generation and demand. The current scenario says that the power generated, which is primarily contributed by fossil fuels may get exhausted within the next 20 years. Currently, there are very accurate electronic energy monitoring systems available in the market. Most of these monitor the power consumed in a domestic household, in case of residential applications. Many a times, consumers are dissatisfied with the power bill as it does not show the power consumed at the device level. This paper presents the design and implementation of an energy meter using Arduino microcontroller which can be used to measure the power consumed by any individual electrical appliance. Internet of Things (IoT) is an emerging field and IoT based devices have created a revolution in electronics and IT. The main intention of the proposed energy meter is to monitor the power consumption at the device level, upload it to the server and establish remote control of any appliance. The energy monitoring system precisely calculates the power consumed by various electrical devices and displays it through a home energy monitoring website. The advantage of this device is that a user can understand the power consumed by any electrical appliance from the website and can take further steps to control them and thus help in energy conservation. Further on, the users can monitor the power consumption as well as the bill on daily basis.",
"title": ""
},
{
"docid": "da0c8fa769ac7e33cc81ab9ba72d457d",
"text": "Action quality assessment is crucial in areas of sports, surgery and assembly line where action skills can be evaluated. In this paper, we propose the Segment-based P3D-fused network S3D built-upon ED-TCN and push the performance on the UNLV-Dive dataset by a significant margin. We verify that segment-aware training performs better than full-video training which turns out to focus on the water spray. We show that temporal segmentation can be embedded with few efforts.",
"title": ""
},
{
"docid": "75d57c2f82fb7852feef4c7bcde41590",
"text": "This paper studies the causal impact of sibling gender composition on participation in Science, Technology, Engineering, and Mathematics (STEM) education. I focus on a sample of first-born children who all have a younger biological sibling, using rich administrative data on the total Danish population. The randomness of the secondborn siblings’ gender allows me to estimate the causal effect of having an opposite sex sibling relative to a same sex sibling. The results are robust to family size and show that having a second-born opposite sex sibling makes first-born men more and women less likely to enroll in a STEM program. Although sibling gender composition has no impact on men’s probability of actually completing a STEM degree, it has a powerful effect on women’s success within these fields: women with a younger brother are eleven percent less likely to complete any field-specific STEM education relative to women with a sister. I provide evidence that parents of mixed sex children gender-specialize their parenting more than parents of same sex children. These findings indicate that the family environment plays in important role for shaping interests in STEM fields. JEL classification: I2, J1, J3",
"title": ""
},
{
"docid": "5eb526843c41d2549862b60c17110b5b",
"text": "■ Abstract We explore the social dimension that enables adaptive ecosystem-based management. The review concentrates on experiences of adaptive governance of socialecological systems during periods of abrupt change (crisis) and investigates social sources of renewal and reorganization. Such governance connects individuals, organizations, agencies, and institutions at multiple organizational levels. Key persons provide leadership, trust, vision, meaning, and they help transform management organizations toward a learning environment. Adaptive governance systems often self-organize as social networks with teams and actor groups that draw on various knowledge systems and experiences for the development of a common understanding and policies. The emergence of “bridging organizations” seem to lower the costs of collaboration and conflict resolution, and enabling legislation and governmental policies can support self-organization while framing creativity for adaptive comanagement efforts. A resilient social-ecological system may make use of crisis as an opportunity to transform into a more desired state.",
"title": ""
},
{
"docid": "c2c832689f0bfa9dec0b32203ae355d4",
"text": "Steve Jobs, one of the greatest visionaries of our time was quoted in 1996 saying “a lot of times, people don’t know what they want until you show it to them”[38] indicating he advocated products to be developed based on human intuition rather than research. With the advancements of mobile devices, social networks and the Internet of Things (IoT) enormous amounts of complex data, both structured & unstructured are being captured in hope to allow organizations to make better business decisions as data is now vital for an organizations success. These enormous amounts of data are referred to as Big Data, which enables a competitive advantage over rivals when processed and analyzed appropriately. However Big Data Analytics has a few concerns including Management of Datalifecycle, Privacy & Security, and Data Representation. This paper reviews the fundamental concept of Big Data, the Data Storage domain, the MapReduce programming paradigm used in processing these large datasets, and focuses on two case studies showing the effectiveness of Big Data Analytics and presents how it could be of greater good in the future if handled appropriately. Keywords—Big Data; Big Data Analytics; Big Data Inconsistencies; Data Storage; MapReduce; Knowledge-Space",
"title": ""
},
{
"docid": "936c4fb60d37cce15ed22227d766908f",
"text": "English. The SENTIment POLarity Classification Task 2016 (SENTIPOLC), is a rerun of the shared task on sentiment classification at the message level on Italian tweets proposed for the first time in 2014 for the Evalita evaluation campaign. It includes three subtasks: subjectivity classification, polarity classification, and irony detection. In 2016 SENTIPOLC has been again the most participated EVALITA task with a total of 57 submitted runs from 13 different teams. We present the datasets – which includes an enriched annotation scheme for dealing with the impact on polarity of a figurative use of language – the evaluation methodology, and discuss results and participating systems. Italiano. Descriviamo modalità e risultati della seconda edizione della campagna di valutazione di sistemi di sentiment analysis (SENTIment POLarity Classification Task), proposta nel contesto di “EVALITA 2016: Evaluation of NLP and Speech Tools for Italian”. In SENTIPOLC è stata valutata la capacità dei sistemi di riconoscere diversi aspetti del sentiment espresso nei messaggi Twitter in lingua italiana, con un’articolazione in tre sottotask: subjectivity classification, polarity classification e irony detection. La campagna ha suscitato nuovamente grande interesse, con un totale di 57 run inviati da 13 gruppi di partecipanti.",
"title": ""
},
{
"docid": "79eab4c017b0f1fb382617f72bde19e7",
"text": "To perceive the external environment our brain uses multiple sources of sensory information derived from several different modalities, including vision, touch and audition. All these different sources of information have to be efficiently merged to form a coherent and robust percept. Here we highlight some of the mechanisms that underlie this merging of the senses in the brain. We show that, depending on the type of information, different combination and integration strategies are used and that prior knowledge is often required for interpreting the sensory signals.",
"title": ""
},
{
"docid": "3beb3f808af2a2c04b74416fe1acf630",
"text": "A national survey, based on a probability sample of patients admitted to short-term hospitals in the United States during 1973 to 1974 with a discharge diagnosis of an intracranial neoplasm, was conducted in 157 hospitals. The annual incidence was estimated at 17,000 for primary intracranial neoplasms and 17,400 for secondary intracranial neoplasms--8.2 and 8.3 per 100,000 US population, respectively. Rates of primary intracranial neoplasms increased steadily with advancing age. The age-adjusted rates were higher among men than among women (8.5 versus 7.9 per 100,000). However, although men were more susceptible to gliomas and neuronomas, incidence rates for meningiomas and pituitary adenomas were higher among women.",
"title": ""
},
{
"docid": "019d5deed0ed1e5b50097d5dc9121cb6",
"text": "Within interactive narrative research, agency is largely considered in terms of a player's autonomy in a game, defined as theoretical agency. Rather than in terms of whether or not the player feels they have agency, their perceived agency. An effective interactive narrative needs to provide a player a level of agency that satisfies their desires and must do that without compromising its own structure. Researchers frequently turn to techniques for increasing theoretical agency to accomplish this. This paper proposes an approach to categorize and explore techniques in which a player's level of perceived agency is affected without requiring more or less theoretical agency.",
"title": ""
}
] |
scidocsrr
|
e3f786a6662703fb1de849d189870d7f
|
Learning Optimized Features for Hierarchical Models of Invariant Object Recognition
|
[
{
"docid": "3112c11544c9dfc5dc5cf67e74e4ba4b",
"text": "How long does it take for the human visual system to process a complex natural image? Subjectively, recognition of familiar objects and scenes appears to be virtually instantaneous, but measuring this processing time experimentally has proved difficult. Behavioural measures such as reaction times can be used1, but these include not only visual processing but also the time required for response execution. However, event-related potentials (ERPs) can sometimes reveal signs of neural processing well before the motor output2. Here we use a go/no-go categorization task in which subjects have to decide whether a previously unseen photograph, flashed on for just 20 ms, contains an animal. ERP analysis revealed a frontal negativity specific to no-go trials that develops roughly 150 ms after stimulus onset. We conclude that the visual processing needed to perform this highly demanding task can be achieved in under 150 ms.",
"title": ""
}
] |
[
{
"docid": "ee3b9382afc9455e53dd41d3725eb74a",
"text": "Deep convolutional neural networks have liberated its extraordinary power on various tasks. However, it is still very challenging to deploy stateof-the-art models into real-world applications due to their high computational complexity. How can we design a compact and effective network without massive experiments and expert knowledge? In this paper, we propose a simple and effective framework to learn and prune deep models in an end-to-end manner. In our framework, a new type of parameter – scaling factor is first introduced to scale the outputs of specific structures, such as neurons, groups or residual blocks. Then we add sparsity regularizations on these factors, and solve this optimization problem by a modified stochastic Accelerated Proximal Gradient (APG) method. By forcing some of the factors to zero, we can safely remove the corresponding structures, thus prune the unimportant parts of a CNN. Comparing with other structure selection methods that may need thousands of trials or iterative fine-tuning, our method is trained fully end-to-end in one training pass without bells and whistles. We evaluate our method, Sparse Structure Selection with several state-of-the-art CNNs, and demonstrate very promising results with adaptive depth and width selection. Code is available at: https://github.com/huangzehao/ sparse-structure-selection.",
"title": ""
},
{
"docid": "716c6211ed9a70622a3ea8a6defea2cd",
"text": "This paper attempts to provide an overview of the major literature which has developed in the area of agency theory and corporate governance in the 25 years since Jensen and Meckling’s (1976) groundbreaking article proposing their theory of the firm. A discussion is provided as to why such problems arise within the ‘nexus of contracts’ that Jensen and Meckling describe as characterising the modern corporation and how managers and shareholders may act to control these costs to maximise firm value. The major articles covering areas where manager’s interests are likely to diverge from those of the shareholders who employ them are also reviewed. Papers which have both proposed and empirically tested means by which such conflicts can be resolved are also surveyed. This section also attempts to incorporate international comparisons, with particular reference to several recent published and unpublished academic research in the UK. Finally, some concluding remarks are offered along with some suggestions for future research in the area of corporate governance. JEL Classification: G30.",
"title": ""
},
{
"docid": "7cc3d7722f978545a6735ae4982ffc62",
"text": "A multiband printed monopole slot antenna promising for operating as an internal antenna in the thin-profile laptop computer for wireless wide area network (WWAN) operation is presented. The proposed antenna is formed by three monopole slots operated at their quarter-wavelength modes and arranged in a compact planar configuration. A step-shaped microstrip feedline is applied to excite the three monopole slots at their respective optimal feeding position, and two wide operating bands at about 900 and 1900 MHz are obtained for the antenna to cover all the five operating bands of GSM850/900/1800/1900/UMTS for WWAN operation. The antenna is easily printed on a small-size FR4 substrate and shows a length of 60 mm only and a height of 12 mm when mounted at the top edge of the system ground plane or supporting metal frame of the laptop display. Details of the proposed antenna are presented and studied.",
"title": ""
},
{
"docid": "9673939625a3caafecf3da68a19742b0",
"text": "Automatic detection of road regions in aerial images remains a challenging research topic. Most existing approaches work well on the requirement of users to provide some seedlike points/strokes in the road area as the initial location of road regions, or detecting particular roads such as well-paved roads or straight roads. This paper presents a fully automatic approach that can detect generic roads from a single unmanned aerial vehicles (UAV) image. The proposed method consists of two major components: automatic generation of road/nonroad seeds and seeded segmentation of road areas. To know where roads probably are (i.e., road seeds), a distinct road feature is proposed based on the stroke width transformation (SWT) of road image. To the best of our knowledge, it is the first time to introduce SWT as road features, which show the effectiveness on capturing road areas in images in our experiments. Different road features, including the SWT-based geometry information, colors, and width, are then combined to classify road candidates. Based on the candidates, a Gaussian mixture model is built to produce road seeds and background seeds. Finally, starting from these road and background seeds, a convex active contour model segmentation is proposed to extract whole road regions. Experimental results on varieties of UAV images demonstrate the effectiveness of the proposed method. Comparison with existing techniques shows the robustness and accuracy of our method to different roads.",
"title": ""
},
{
"docid": "2e499f9fba02f5b7f0f6861841d74344",
"text": "This paper describes the CMU OAQA system evaluated in the BioASQ 3B Question Answering track. We first present a three-layered architecture, and then describe the components integrated for exact answer generation and retrieval. Using over 400 factoid and list questions from past BioASQ 1B and 2B tasks as background knowledge, we focus on how to learn to answer questions using a gold standard dataset of question-answer pairs, using supervised models for answer type prediction and candidate answer scoring. On the three test sets where the system was evaluated (3, 4, and 5), the official evaluation results have shown that the system achieves an MRR of .1615, .5155, .2727 for factoid questions, and an F-measure of .0969, .3168, .1875 for list questions, respectively; five of these scores were the highest reported among all participating systems.",
"title": ""
},
{
"docid": "26764eb192d4404bda7ebf8c37ba5c4a",
"text": "Two novel gallium nitride-based vertical junction FETs (VJFETs), one with a vertical channel and the other with a lateral channel, are proposed, designed, and modeled to achieve a 1.2 kV normally OFF power switch with very low ON resistance (R<sub>ON</sub>). The 2-D drift diffusion model of the proposed devices was implemented using Silvaco ATLAS. A comprehensive design space was generated for the vertical channel VJFET (VC-VJFET). For a well-designed VC-VJFET, the breakdown voltage (V<sub>BR</sub>) obtained was 1260 V, which is defined in this study as the drain-to-source voltage at an OFF-state current of 1 μA · cm<sup>-2</sup> and a peak electric field not exceeding 2.4 MV/cm. The corresponding R<sub>ON</sub> was 5.2 mΩ · cm<sup>2</sup>. To further improve the switching device figure of merit, a merged lateral-vertical geometry was proposed and modeled in the form of a lateral channel VJFET (LC-VJFET). For the LC-VJFET, a breakdown voltage of 1310 V with a corresponding R<sub>ON</sub> of 1.7 mQ · cm<sup>2</sup> was achieved for similar thicknesses of the drift region. This paper studies the design space in detail and discusses the associated tradeoffs in the R<sub>ON</sub> and V<sub>BR</sub> in conjunction with the threshold voltage (V<sub>T</sub>) desired for the normally OFF operation.",
"title": ""
},
{
"docid": "2a00d77cb75767b3e4516ced59ea84f6",
"text": "Men and women living in a rural community in Bakossiland, Cameroon were asked to rate the attractiveness of images of male or female figures manipulated to vary in somatotype, waist-to-hip ratio (WHR), secondary sexual traits, and other features. In Study 1, women rated mesomorphic (muscular) and average male somatotypes as most attractive, followed by ectomorphic (slim) and endomorphic (heavily built) figures. In Study 2, amount and distribution of masculine trunk (chest and abdominal) hair was altered progressively in a series of front-posed male figures. A significant preference for one of these images was found, but the most hirsute figure was not judged as most attractive. Study 3 assessed attractiveness of front-posed male figures which varied only in length of the non-erect penis. Extremes of penile size (smallest and largest of five images) were rated as significantly less attractive than three intermediate sizes. In Study 4, Bakossi men rated the attractiveness of back-posed female images varying in WHR (from 0.5-1.0). The 0.8 WHR figure was rated markedly more attractive than others. Study 5 rated the attractiveness of female skin color. Men expressed no consistent preference for either lighter or darker female figures. These results are the first of their kind reported for a Central African community and provide a useful cross-cultural perspective to published accounts on sexual selection, human morphology and attractiveness in the U.S., Europe, and elsewhere.",
"title": ""
},
{
"docid": "e26a155425b3691629649cd32aa8648e",
"text": "Technologies in autonomous vehicles have seen dramatic advances in recent years; however, it still lacks of robust perception systems for car detection. With the recent development in deep learning research, in this paper, we propose a LIDAR and vision fusion system for car detection through the deep learning framework. It consists of three major parts. The first part generates seed proposals for potential car locations in the image by taking LIDAR point cloud into account. The second part refines the location of the proposal boxes by exploring multi-layer information in the proposal network and the last part carries out the final detection task through a detection network which shares part of the layers with the proposal network. The evaluation shows that the proposed framework is able to generate high quality proposal boxes more efficiently (77.6% average recall) and detect the car at the state of the art accuracy (89.4% average precision). With further optimization of the framework structure, it has great potentials to be implemented onto the autonomous vehicle.",
"title": ""
},
{
"docid": "72041ae7e06d3c35701726a6c878c081",
"text": "This paper presents a compression algorithm for color filter array (CFA) images in a wireless capsule endoscopy system. The proposed algorithm consists of a new color space transformation (known as YLMN), a raster-order prediction model, and a single context adaptive Golomb–Rice encoder to encode the residual signal with variable length coding. An optimum reversible color transformation derivation model is presented first, which incorporates a prediction model to find the optimum color transformation. After the color transformation, each color component has been independently encoded with a low complexity raster-order prediction model and Golomb–Rice encoder. The algorithm is implemented using a TSMC 65-nm CMOS process, which shows a reduction in gate count by 38.9% and memory requirement by 71.2% compared with existing methods. Performance assessment using CFA database shows the proposed design can outperform existing lossless and near-lossless compression algorithms by a large margin, which makes it suitable for capsule endoscopy application.",
"title": ""
},
{
"docid": "157c36eaad7fe6cb6188a17c1df98507",
"text": "We describe a new method for accurate retinal vessel detection in wide-field fluorescein angiography (FA), which is a challenging problem because of the variations in vasculature between different orientations and large and small vessels, and the changes in the vasculature appearance as the injection of the dye perfuses the retina. Decomposing the original FA image into multiple resolutions, the vessels at each scale are segmented independently by first correcting for inhomogeneous illumination, then applying morphological operations to extract rectilinear structure and finally applying adaptive binarization. Specifically, a modified top-hat filter is applied using linear structuring elements with 9 directions. The maximum value of the resulting response images at each pixel location is then used for adaptive binarization. Final vessel segments are identified by fusing vessel segments at each scale. Quantitative results on VAMPIRE dataset, which includes high resolution wide-field FA images and hand-labeled ground truth vessel segments, demonstrate that the proposed method provides a significant improvement on vessel detection (approximately 10% higher recall, with same precision) than the method originally published with VAMPIRE dataset.",
"title": ""
},
{
"docid": "d7ab37fa0997cfdfd47ab9f21a1e64e5",
"text": "There is renewed interest in tail-sitter airplanes on account of their vertical takeoff and landing capability as well as their efficient horizontal flight capabilities. The transition from a vertical near-hover mode to a horizontal cruise mode is a critical component of the tail-sitter flight profile. In practice, this transition is often achieved by a stall-and-tumble maneuver, which is somewhat risky and therefore not desirable, so alternative maneuvering strategies along controlled trajectories are sought. Accordingly, this paper presents the synthesis and application of a transition controller to a tail-sitter UAV for the first time. For practical reasons, linear controllers are designed using the PID technique and linked by gain scheduling. The limits of the PID controller are complemented by a so-called L1 adaptive controller that considers the coupling effect, reduces the effort for appropriate gain selection, and improves the tracking performance at different points during operation. Each transition trajectory is controlled by the flight velocity and path angle using dynamic inversion. The transition Y. Jung (B) · D. H. Shim Department of Aerospace Engineering, KAIST, Daejeon, South Korea e-mail: jyd28@kaist.ac.kr D. H. Shim e-mail: hcshim@kaist.ac.kr control law is tested on a tail-sitter UAV, an 18kg vehicle that has a 2-m wingspan with an aspect ratio of 4.71 and is powered by a 100-cm3 gasoline engine driving an aft-mounted ducted fan. This paper describes not only the synthesis and the onboard implementation of the control law but also the flight testing of the fixed-wing UAV in hover, transition, and cruise modes.",
"title": ""
},
{
"docid": "27034289da290734ec5136656573ca11",
"text": "Iris recognition as a reliable method for personal identification has been well-studied with the objective to assign the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image to an application specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely Vocabulary Tree (VT), and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantages of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as the benchmark for research of iris liveness detection.",
"title": ""
},
{
"docid": "840712346d5b8896e37966bd9084cb2a",
"text": "In thermodynamic terms, ecosystems are machines supplied with energy from an external source, usually the sun. When the input of energy to an ecosystem is exactly equal to its total output of energy, the state of equilibrium which exists is a special case of the First Law of Thermodynamics. The Second Law is relevant oo. It implies that in every spontaneous process, physical or chemical, the production of 'useful' energy, which could be harnessed in a form such as mechanical work, must be accompanied by a simultaneous 'waste' of heat. No biological system can break or evade this law. The heat produced by a respiring cell is an inescapable component of cellular metabolism, the cost which Nature has to pay for creating biological order out of physical chaos in the environment of plants and animals. Dividing the useful energy of a thermodynamic process by the total energy involved gives a figure for the efficiency of the process, and this procedure has been widely used to analyse the flow of energy in ecosystems. For example, the efficiency with which a stand of plants produces dry matter by photosynthesis can be defined as the ratio of chemical energy stored in the assimilates to radiant energy absorbed by foliage during the period of assimilation. The choice of absorbed energy as a base for calculating efficiency is convenient but arbitrary. To derive an efficiency depending on the environment of a particular site as well as oil the nature of the vegetation, dry matter production can be related to the receipt of solar energy at the top of the earth's atmosphere. This exercise was attempted by Professor William Thomson, later Lord Kelvin, in 1852. 'The author estimates the mechanical value of the solar heat which, were none of it absorbed by the atmosphere, would fall annually on each square foot of land, at 530 000 000 foot pounds; and infers that probably a good deal more, 1/1000 of the solar heat, which actually falls on growing plants, is converted into mechanical effect.' Outside the earth's atmosphere, a surface kept at right angles to the sun's rays receives energy at a mean rate of 1360 W m-2 or 1f36 kJ m-2 s-1, a figure known as the solar constant. As the energy stored by plants is about 17 kJ per gram of dry matter, the solar constant is equivalent to the production of dry matter at a rate of about 1 g m-2 every 12 s, 7 kg m-2 per day, or 2 6 t m-2 year-'. The annual yield of agricultural crops ranges from a maximum of 30-60 t ha-' in field experiments to less than I t ha-' in some forms of subsistence farming. When these rates are expressed as a fraction of the integrated solar constant, the efficiencies of agricultural systems lie between 0-2 and 0 004%, a range including Kelvin's estimate of 0-1%. Conventional estimates of efficiency interms of the amount of solar radiation incident at the earth's surface provide ecologists and agronomists with a method for comparing plant productivity under different systems of land use and management and in different * Opening paper read at IBP/UNESCO Meeting on Productivity ofTropical Ecosystems, Makerere University, Uganda, September 1970.",
"title": ""
},
{
"docid": "2bc86a02909f16ad0372a36dd92c954c",
"text": "Multi-view learning is an emerging direction in machine learning which considers learning with multiple views to improve the generalization performance. Multi-view learning is also known as data fusion or data integration from multiple feature sets. Since the last survey of multi-view machine learning in early 2013, multi-view learning has made great progress and developments in recent years, and is facing new challenges. This overview first reviews theoretical underpinnings to understand the properties and behaviors of multi-view learning. Then multi-view learning methods are described in terms of three classes to offer a neat categorization and organization. For each category, representative algorithms and newly proposed algorithms are presented. The main feature of this survey is that we provide comprehensive introduction for the recent developments of multi-view learning methods on the basis of coherence with early methods. We also attempt to identify promising venues and point out some specific challenges which can hopefully promote further research in this rapidly developing field. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "736b98a5b6a86db837362ab2c7086484",
"text": "This is an in-vitro pilot study which established the effect of radiofrequency radiation (RFR) from 2.4 GHz laptop antenna on human semen. Ten samples of the semen, collected from donors between the ages of 20 and 30 years were exposed when the source of the RFR was in active mode. Sequel to the exposure, both the exposed samples and another ten unexposed samples from same donors were analysed for sperm concentration, motility and morphology grading. A test of significance between results of these semen parameters using Mann-Whitney Utest at 0.05 level of significance showed a significant effect of RFR exposure on the semen parameters considered.",
"title": ""
},
{
"docid": "9a82f33d84cd622ccd66a731fc9755de",
"text": "To discover relationships and associations between pairs of variables in large data sets have become one of the most significant challenges for bioinformatics scientists. To tackle this problem, maximal information coefficient (MIC) is widely applied as a measure of the linear or non-linear association between two variables. To improve the performance of MIC calculation, in this work we present MIC++, a parallel approach based on the heterogeneous accelerators including Graphic Processing Unit (GPU) and Field Programmable Gate Array (FPGA) engines, focusing on both coarse-grained and fine-grained parallelism. As the evaluation of MIC++, we have demonstrated the performance on the state-of-the-art GPU accelerators and the FPGA-based accelerators. Preliminary estimated results show that the proposed parallel implementation can significantly achieve more than 6X-14X speedup using GPU, and 4X-13X using FPGA-based accelerators.",
"title": ""
},
{
"docid": "b4abca5b6a46da1876357ba681c4b249",
"text": "Two different pulsewidth modulation (PWM) schemes for current source inverters (CSI) are described. The first one is based on off-line optimization of individual switching angles and requires a microprocessor for implementation and the second one uses a special subharmonic modulation and could be implemented with analog and medium-scale integration (MSI) digital circuits. When CSI's are used in ac motor drives, the optimal PWM pattern depends on the performance criteria being used, which in turn depend on the drive application. In this paper four different performance criteria are considered: 1) current or torque harmonic elimination, 2) current harmonic minimization, 3) speed ripple minimization, and 4) position error minimization. As an example a self-controlled synchronous motor (SCSM) supplied by the PWM CSI is considered. The performance of the CSI-SCSM with the optimal PWM schemes proposed herein are compared with that using a conventional 120° quasi-square wave current.",
"title": ""
},
{
"docid": "2f9d88a7848fc5954b3f9459d6b6dc59",
"text": "OBJECTIVE\nTo test the feasibility of creating a valid and reliable checklist with the following features: appropriate for assessing both randomised and non-randomised studies; provision of both an overall score for study quality and a profile of scores not only for the quality of reporting, internal validity (bias and confounding) and power, but also for external validity.\n\n\nDESIGN\nA pilot version was first developed, based on epidemiological principles, reviews, and existing checklists for randomised studies. Face and content validity were assessed by three experienced reviewers and reliability was determined using two raters assessing 10 randomised and 10 non-randomised studies. Using different raters, the checklist was revised and tested for internal consistency (Kuder-Richardson 20), test-retest and inter-rater reliability (Spearman correlation coefficient and sign rank test; kappa statistics), criterion validity, and respondent burden.\n\n\nMAIN RESULTS\nThe performance of the checklist improved considerably after revision of a pilot version. The Quality Index had high internal consistency (KR-20: 0.89) as did the subscales apart from external validity (KR-20: 0.54). Test-retest (r 0.88) and inter-rater (r 0.75) reliability of the Quality Index were good. Reliability of the subscales varied from good (bias) to poor (external validity). The Quality Index correlated highly with an existing, established instrument for assessing randomised studies (r 0.90). There was little difference between its performance with non-randomised and with randomised studies. Raters took about 20 minutes to assess each paper (range 10 to 45 minutes).\n\n\nCONCLUSIONS\nThis study has shown that it is feasible to develop a checklist that can be used to assess the methodological quality not only of randomised controlled trials but also non-randomised studies. It has also shown that it is possible to produce a checklist that provides a profile of the paper, alerting reviewers to its particular methodological strengths and weaknesses. Further work is required to improve the checklist and the training of raters in the assessment of external validity.",
"title": ""
},
{
"docid": "750a1dd126b0bb90def0bba34dc73cdd",
"text": "Skinning of skeletally deformable models is extensively used for real-time animation of characters, creatures and similar objects. The standard solution, linear blend skinning, has some serious drawbacks that require artist intervention. Therefore, a number of alternatives have been proposed in recent years. All of them successfully combat some of the artifacts, but none challenge the simplicity and efficiency of linear blend skinning. As a result, linear blend skinning is still the number one choice for the majority of developers. In this article, we present a novel skinning algorithm based on linear combination of dual quaternions. Even though our proposed method is approximate, it does not exhibit any of the artifacts inherent in previous methods and still permits an efficient GPU implementation. Upgrading an existing animation system from linear to dual quaternion skinning is very easy and has a relatively minor impact on runtime performance.",
"title": ""
}
] |
scidocsrr
|
0d336ab6806d9c1cb8e5f815a270fd9e
|
There Is No Free Lunch In Adversarial Robustness (But There Are Unexpected Benefits)
|
[
{
"docid": "17611b0521b69ad2b22eeadc10d6d793",
"text": "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%.In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.",
"title": ""
},
{
"docid": "11a69c06f21e505b3e05384536108325",
"text": "Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.",
"title": ""
},
{
"docid": "40575df81257e0c94fd0b4180b9beb69",
"text": "Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations carefully crafted either at training or at test time can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, have been investigated in the research field of adversarial machine learning. In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering, earlier work on the security of non-deep learning algorithms up to more recent work aimed to understand the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks. We report interesting connections between these apparently-different lines of work, highlighting common misconceptions related to the security evaluation of machine-learning algorithms. We review the main threat models and attacks defined to this end, and discuss the main limitations of current work, along with the corresponding future challenges towards the design of more secure learning algorithms.",
"title": ""
},
{
"docid": "88a1549275846a4fab93f5727b19e740",
"text": "State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.",
"title": ""
}
] |
[
{
"docid": "d3049fee1ed622515f5332bcfa3bdd7b",
"text": "PURPOSE\nTo prospectively analyze, using validated outcome measures, symptom improvement in patients with mild to moderate cubital tunnel syndrome treated with rigid night splinting and activity modifications.\n\n\nMETHODS\nNineteen patients (25 extremities) were enrolled prospectively between August 2009 and January 2011 following a diagnosis of idiopathic cubital tunnel syndrome. Patients were treated with activity modifications as well as a 3-month course of rigid night splinting maintaining 45° of elbow flexion. Treatment failure was defined as progression to operative management. Outcome measures included patient-reported splinting compliance as well as the Quick Disabilities of the Arm, Shoulder, and Hand questionnaire and the Short Form-12. Follow-up included a standardized physical examination. Subgroup analysis included an examination of the association between splinting success and ulnar nerve hypermobility.\n\n\nRESULTS\nTwenty-four of 25 extremities were available at mean follow-up of 2 years (range, 15-32 mo). Twenty-one of 24 (88%) extremities were successfully treated without surgery. We observed a high compliance rate with the splinting protocol during the 3-month treatment period. Quick Disabilities of the Arm, Shoulder, and Hand scores improved significantly from 29 to 11, Short Form-12 physical component summary score improved significantly from 45 to 54, and Short Form-12 mental component summary score improved significantly from 54 to 62. Average grip strength increased significantly from 32 kg to 35 kg, and ulnar nerve provocative testing resolved in 82% of patients available for follow-up examination.\n\n\nCONCLUSIONS\nRigid night splinting when combined with activity modification appears to be a successful, well-tolerated, and durable treatment modality in the management of cubital tunnel syndrome. We recommend that patients presenting with mild to moderate symptoms consider initial treatment with activity modification and rigid night splinting for 3 months based on a high likelihood of avoiding surgical intervention.\n\n\nTYPE OF STUDY/LEVEL OF EVIDENCE\nTherapeutic II.",
"title": ""
},
{
"docid": "d73af831462af9ea510fb9a00c152ab6",
"text": "Cloud computing is a new paradigm for using ICT services— only when needed and for as long as needed, and paying only for service actually consumed. Benchmarking the increasingly many cloud services is crucial for market growth and perceived fairness, and for service design and tuning. In this work, we propose a generic architecture for benchmarking cloud services. Motivated by recent demand for data-intensive ICT services, and in particular by processing of large graphs, we adapt the generic architecture to Graphalytics, a benchmark for distributed and GPU-based graph analytics platforms. Graphalytics focuses on the dependence of performance on the input dataset, on the analytics algorithm, and on the provisioned infrastructure. The benchmark provides components for platform configuration, deployment, and monitoring, and has been tested for a variety of platforms. We also propose a new challenge for the process of benchmarking data-intensive services, namely the inclusion of the data-processing algorithm in the system under test; this increases significantly the relevance of benchmarking results, albeit, at the cost of increased benchmarking duration.",
"title": ""
},
{
"docid": "c7d7922c5e070ff0478a0a650afbc0a3",
"text": "This paper presents a methodology for modeling a biped robot on Matlab/SimMechanics, which supports mathematical model development with time and effort savings. The model used for the biped robot simulation consists of 5-links which are connected through revolute joins. The identical legs have knee joints between the shank and thigh parts, and a rigid body forms the torso. Furthermore, modeling of ground contact forces is described. A PD controller is used on a linear model in state variable form in order to simulate the dynamic of the system. Results obtained from the dynamic simulation are presented.",
"title": ""
},
{
"docid": "01edfc6eb157dc8cf2642f58cf3aba25",
"text": "Understanding developmental processes, especially in non-model crop plants, is extremely important in order to unravel unique mechanisms regulating development. Chickpea (C. arietinum L.) seeds are especially valued for their high carbohydrate and protein content. Therefore, in order to elucidate the mechanisms underlying seed development in chickpea, deep sequencing of transcriptomes from four developmental stages was undertaken. In this study, next generation sequencing platform was utilized to sequence the transcriptome of four distinct stages of seed development in chickpea. About 1.3 million reads were generated which were assembled into 51,099 unigenes by merging the de novo and reference assemblies. Functional annotation of the unigenes was carried out using the Uniprot, COG and KEGG databases. RPKM based digital expression analysis revealed specific gene activities at different stages of development which was validated using Real time PCR analysis. More than 90% of the unigenes were found to be expressed in at least one of the four seed tissues. DEGseq was used to determine differentially expressing genes which revealed that only 6.75% of the unigenes were differentially expressed at various stages. Homology based comparison revealed 17.5% of the unigenes to be putatively seed specific. Transcription factors were predicted based on HMM profiles built using TF sequences from five legume plants and analyzed for their differential expression during progression of seed development. Expression analysis of genes involved in biosynthesis of important secondary metabolites suggested that chickpea seeds can serve as a good source of antioxidants. Since transcriptomes are a valuable source of molecular markers like simple sequence repeats (SSRs), about 12,000 SSRs were mined in chickpea seed transcriptome and few of them were validated. In conclusion, this study will serve as a valuable resource for improved chickpea breeding.",
"title": ""
},
{
"docid": "8d7b5be74cb66d3f8e639fc96ba58692",
"text": "The aim of this paper is to discuss the significance and potential of a mixed methods approach in technology acceptance research. After critically reviewing the dominance of the quantitative survey method in TAMbased research, this paper reports a mixed methods study of user acceptance of emergency alert technology in order to illustrate the benefits of combining qualitative and quantitative techniques in a single study. The main conclusion is that a mixed methods approach provides opportunities to move beyond the vague conceptualizations of “usefulness” and “ease of use” and to advance our understanding of user acceptance of technology in context.",
"title": ""
},
{
"docid": "caef12609ed51707172f63bdc8397d25",
"text": "Reasoning does not work well when done in isolation from its significance, both to the needs and interests of an agent and with respect to the wider world. Moreover, those issues may best be handled with a new sort of data structure that goes beyond the knowledge base and incorporates aspects of perceptual knowledge and even more, in which a kind of anticipatory action may be key. Out of the Ivory Tower Reasoning is one of the oldest topics in artificial intelligence (AI). And it has made lots of progress, in the form of commonsense reasoning (CSR), planning, automated theorem-proving, and more. But I suspect it has hit a barrier that must be surmounted if we are to approach anything like human-level inference. Here I give evidence for such a barrier, and ideas about dealing with it, loosely based on evidence from human behavior. In rough synopsis, reasoning does not work well when done in isolation from its broader significance, both for the needs and interests of an agent and for the wider world. Moreover, those issues may best be handled with a new sort of data structure that goes beyond the knowledge base (KB) and incorporates aspects of perceptual knowledge and even more, in which a kind of anticipatory action many be key. I suspect this has ties with recent calls to “put the Science” back in AI (Levesque 2013, Langley 2012). For what I am arguing, in some sense, is that reasoning should be regarded as “in the wild” as events unfold rather than confined to management of an isolated KB; and that this speaks to an agent interacting with the world, rather than a puzzle in abstract inference (yet I will also argue that even “pure” reasoning as in mathematics hugely benefits from many connections with the world). And finally, we then will end up studying the nature of world-embedded cognitive agents, humans included. But this is very broadbrushed and general, whereas my main point is a technical Copyright © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. suggestion about reasoning informed by meaning, especially meaning concerning experience and action. One quick example at the outset: The Wason Selection Task (Wason 1968) shows that human inference is strongly aided when the details of the task at hand have real meaning that the subjects can relate to in terms of things that matter to them, helping keep attention on what is relevant; and this holds even when the task in the abstract is a matter of so-called pure logic. While this could be seen as a defect in human reasoning, something computers would never trip up on, I think it points in the opposite direction: inference without broader meaning is not worth much, and not worth being good at. I will illustrate my main points with a series of examples based on the activities of proving, planning, and understanding.",
"title": ""
},
{
"docid": "36a6c72e049ce551fcf302e19eb5063b",
"text": "We propose a complete probabilistic discriminative framework for performing sentencelevel discourse analysis. Our framework comprises a discourse segmenter, based on a binary classifier, and a discourse parser, which applies an optimal CKY-like parsing algorithm to probabilities inferred from a Dynamic Conditional Random Field. We show on two corpora that our approach outperforms the state-of-the-art, often by a wide margin.",
"title": ""
},
{
"docid": "8fd43b39e748d47c02b66ee0d8eecc65",
"text": "One standing problem in the area of web-based e-learning is how to support instructional designers to effectively and efficiently retrieve learning materials, appropriate for their educational purposes. Learning materials can be retrieved from structured repositories, such as repositories of Learning Objects and Massive Open Online Courses; they could also come from unstructured sources, such as web hypertext pages. Platforms for distance education often implement algorithms for recommending specific educational resources and personalized learning paths to students. But choosing and sequencing the adequate learning materials to build adaptive courses may reveal to be quite a challenging task. In particular, establishing the prerequisite relationships among learning objects, in terms of prior requirements needed to understand and complete before making use of the subsequent contents, is a crucial step for faculty, instructional designers or automated systems whose goal is to adapt existing learning objects to delivery in new distance courses. Nevertheless, this information is often missing. In this paper, an innovative machine learning-based approach for the identification of prerequisites between text-based resources is proposed. A feature selection methodology allows us to consider the attributes that are most relevant to the predictive modeling problem. These features are extracted from both the input material and weak-taxonomies available on the web. Input data undergoes a Natural language process that makes finding patterns of interest more easy for the applied automated analysis. Finally, the prerequisite identification is cast to a binary statistical classification task. The accuracy of the approach is validated by means of experimental evaluations on real online coursers covering different subjects.",
"title": ""
},
{
"docid": "cfebf44f0d3ec7d1ffe76b832704a6d2",
"text": "In practical scenario the transmission of signal or data from source to destination is very challenging. As there is a lot of surrounding environmental changes which influence the transmitted signal. The ISI, multipath will corrupt the data and this data appears at the receiver or destination. Due to this time varying multipath fading different channel estimation filter at the receiver are used to improve the performance. The performance of LMS and RLS adaptive algorithms are analyzed over a AWGN and Rayleigh channels under different multipath fading environments for estimating the time-varying channel.",
"title": ""
},
{
"docid": "6ea1985fee1b9c37b63b51df4389143b",
"text": "It is often suggested that users are hopelessly lazy and unmotivated on security questions. They chose weak passwords, ignore security warnings, and are oblivious to certificates errors. We argue that users' rejection of the security advice they receive is entirely rational from an economic perspective. The advice offers to shield them from the direct costs of attacks, but burdens them with far greater indirect costs in the form of effort. Looking at various examples of security advice we find that the advice is complex and growing, but the benefit is largely speculative or moot. For example, much of the advice concerning passwords is outdated and does little to address actual treats, and fully 100% of certificate error warnings appear to be false positives. Further, if users spent even a minute a day reading URLs to avoid phishing, the cost (in terms of user time) would be two orders of magnitude greater than all phishing losses. Thus we find that most security advice simply offers a poor cost-benefit tradeoff to users and is rejected. Security advice is a daily burden, applied to the whole population, while an upper bound on the benefit is the harm suffered by the fraction that become victims annually. When that fraction is small, designing security advice that is beneficial is very hard. For example, it makes little sense to burden all users with a daily task to spare 0.01% of them a modest annual pain.",
"title": ""
},
{
"docid": "67689e2efeb9c1320410d6f3c2c7c4d4",
"text": "Semi-supervised learning methods using Generative Adversarial Networks (GANs) have shown promising empirical success recently. Most of these methods use a shared discriminator/classifier which discriminates real examples from fake while also predicting the class label. Motivated by the ability of the GANs generator to capture the data manifold well, we propose to estimate the tangent space to the data manifold using GANs and employ it to inject invariances into the classifier. In the process, we propose enhancements over existing methods for learning the inverse mapping (i.e., the encoder) which greatly improves in terms of semantic similarity of the reconstructed sample with the input sample. We observe considerable empirical gains in semi-supervised learning over baselines, particularly in the cases when the number of labeled examples is low. We also provide insights into how fake examples influence the semi-supervised learning procedure.",
"title": ""
},
{
"docid": "90f6f2b7cd03ab1e5129d77e158a8ee2",
"text": "Educational data mining is an emerging research field concerned with developing methods for exploring the unique types of data that come from educational context. These data allow the educational stakeholders to discover new, interesting and valuable knowledge about students. In this paper, we present a new user-friendly decision support tool for predicting students‟ performance concerning the final examinations of a school year. Our proposed tool is based on a hybrid predicting system incorporating a number of possible machine learning methods and achieves better performance than any examined single learning algorithm. Furthermore, significant advantages of the presented tool are that it has a simple interface and it can be deployed in any platform under any operating system. Our objective is that this work may be used to support student admission procedures and strengthen the service system in educational institutions.",
"title": ""
},
{
"docid": "3ebf234cbd1e0af70b1289d7f2e109d7",
"text": "This article reviews the evolutionary origins and functions of the capacity for anxiety, and relevant clinical and research issues. Normal anxiety is an emotion that helps organisms defend against a wide variety of threats. There is a general capacity for normal defensive arousal, and subtypes of normal anxiety protect against particular kinds of threats. These normal subtypes correspond somewhat to mild forms of various anxiety disorders. Anxiety disorders arise from dysregulation of normal defensive responses, raising the possibility of a bypophobic disorder (too little anxiety). If a drug were discovered that abolished all defensive anxiety, it could do harm as well as good. Factors that have shaped anxiety-regulation mechanisms can explain prepotent and prepared tendencies to associate anxiety more quickly with certain cues than with others. These tendencies lead to excess fear of largely archaic dangers, like snakes, and too little fear of new threats, like cars. An understanding of the evolutionary origins, functions, and mechanisms of anxiety suggests new questions about anxiety disorders.",
"title": ""
},
{
"docid": "c96230b1964434157beb2e866ad84a3a",
"text": "Electromagnetic computation methods (ECMs) have been widely used in analyzing lightning electromagnetic pulses (LEMPs) and lightning-caused surges in various systems. One of the advantages of ECMs, in comparison with circuit simulation methods, is that they allow a self-consistent full-wave solution for both the transient current distribution in a 3D conductor system and resultant electromagnetic fields, although they are computationally expensive. Among ECMs, the finite-difference time-domain (FDTD) method for solving Maxwell's equations has been most frequently used in LEMP and surge simulations. In this paper, we review applications of the FDTD method to LEMP and surge simulations, including (i) lightning electromagnetic fields at close and far distances; (ii) lightning surges on overhead power transmission line conductors and towers, (iii) lightning surges on overhead distribution and telecommunication lines; (iv) lightning electromagnetic environment in power substations; (v) lightning surges in wind-turbine-generator towers; (vi) lightning surges in photovoltaic (PV) arrays; (vii) lightning electromagnetic environment in electric vehicles (EVs); (viii) lightning electromagnetic environment in airborne vehicles; (ix) lightning surges and electromagnetic environment in buildings; and (x) surges on grounding electrodes.",
"title": ""
},
{
"docid": "051b478ddcccbe885778fe545beacc96",
"text": "Among the nature inspired heuristic or metaheuristic optimization algorithms Particle Swarm Optimization(PSO) algorithms are widely used to solve clustering problem. In this paper, a modified multi-objective PSO (MMPSO) algorithm is proposed for data clustering. In the proposed MMPSO, the intra-cluster distance and inter-cluster distance are considered as the objective functions. In-order to improve the convergence rate of the MMPSO, a Cloning concept is introduced. To validate the proposed algorithms, five standard data sets are considered. The result shows that, the proposed method gives better quality solution as compared to some existing well known algorithms. The simulation results infer that the proposed algorithms can be efficiently used for data clustering.",
"title": ""
},
{
"docid": "52755d4ace354c031368167a9da91547",
"text": "One of the serious challenges in computer vision and image classification is learning an accurate classifier for a new unlabeled image dataset, considering that there is no available labeled training data. Transfer learning and domain adaptation are two outstanding solutions that tackle this challenge by employing available datasets, even with significant difference in distribution and properties, and transfer the knowledge from a related domain to the target domain. The main difference between these two solutions is their primary assumption about change in marginal and conditional distributions where transfer learning emphasizes on problems with same marginal distribution and different conditional distribution, and domain adaptation deals with opposite conditions. Most prior works have exploited these two learning strategies separately for domain shift problem where training and test sets are drawn from different distributions. In this paper, we exploit joint transfer learning and domain adaptation to cope with domain shift problem in which the distribution difference is significantly large, particularly vision datasets. We therefore put forward a novel transfer learning and domain adaptation approach, referred to as visual domain adaptation (VDA). Specifically, VDA reduces the joint marginal and conditional distributions across domains in an unsupervised manner where no label is available in test set. Moreover, VDA constructs condensed domain invariant clusters in the embedding representation to separate various classes alongside the domain transfer. In this work, we employ pseudo target labels refinement to iteratively converge to final solution. Employing an iterative procedure along with a novel optimization problem creates a robust and effective representation for adaptation across domains. Extensive experiments on 16 real vision datasets with different difficulties verify that VDA can significantly outperform state-of-the-art methods in image classification problem.",
"title": ""
},
{
"docid": "b584491152ad052b1c0be6ea7088f7c0",
"text": "Recently several hierarchical inverse dynamics controllers based on cascades of quadratic programs have been proposed for application on torque controlled robots. They have important theoretical benefits but have never been implemented on a torque controlled robot where model inaccuracies and real-time computation requirements can be problematic. In this contribution we present an experimental evaluation of these algorithms in the context of balance control for a humanoid robot. The presented experiments demonstrate the applicability of the approach under real robot conditions (i.e. model uncertainty, estimation errors, etc). We propose a simplification of the optimization problem that allows us to decrease computation time enough to implement it in a fast torque control loop. We implement a momentum-based balance controller which shows robust performance in face of unknown disturbances, even when the robot is standing on only one foot. In a second experiment, a tracking task is evaluated to demonstrate the performance of the controller with more complicated hierarchies. Our results show that hierarchical inverse dynamics controllers can be used for feedback control of humanoid robots and that momentum-based balance control can be efficiently implemented on a real robot.",
"title": ""
},
{
"docid": "ba8d73938ea51f1b41add8c572c1667b",
"text": "Traditionally, when storage systems employ erasure codes, they are designed to tolerate the failures of entire disks. However, the most common types of failures are latent sector failures, which only affect individual disk sectors, and block failures which arise through wear on SSD’s. This paper introduces SD codes, which are designed to tolerate combinations of disk and sector failures. As such, they consume far less storage resources than traditional erasure codes. We specify the codes with enough detail for the storage practitioner to employ them, discuss their practical properties, and detail an open-source implementation.",
"title": ""
},
{
"docid": "bd5808b4df3a8dd745971a06de67f251",
"text": "-In this paper we investigate the use of the area under the receiver operating characteristic (ROC) curve (AUC) as a performance measure for machine learning algorithms. As a case study we evaluate six machine learning algorithms (C4.5, Multiscale Classifier, Perceptron, Multi-layer Perceptron, k-Nearest Neighbours, and a Quadratic Discriminant Function) on six \"real world\" medical diagnostics data sets. We compare and discuss the use of AUC to the more conventional overall accuracy and find that AUC exhibits a number of desirable properties when compared to overall accuracy: increased sensitivity in Analysis of Variance (ANOVA) tests; a standard error that decreased as both AUC and the number of test samples increased; decision threshold independent; and it is invafiant to a priori class probabilities. The paper concludes with the recommendation that AUC be used in preference to overall accuracy for \"single number\" evaluation of machine learning algorithms. © 1997 Pattern Recognition Society. Published by Elsevier Science Ltd. The ROC curve Cross-validation The area under the ROC curve (AUC) Wilcoxon statistic Standard error Accuracy measures",
"title": ""
},
{
"docid": "4bf7e46ad4ccceb1fb778d6e8750f05d",
"text": "In this work, we extend a common framework for seeded image segmentation that includes the graph cuts, random walker, and shortest path optimization algorithms. Viewing an image as a weighted graph, these algorithms can be expressed by means of a common energy function with differing choices of a parameter q acting as an exponent on the differences between neighboring nodes. Introducing a new parameter p that fixes a power for the edge weights allows us to also include the optimal spanning forest algorithm for watersheds in this same framework. We then propose a new family of segmentation algorithms that fixes p to produce an optimal spanning forest but varies the power q beyond the usual watershed algorithm, which we term power watersheds. Placing the watershed algorithm in this energy minimization framework also opens new possibilities for using unary terms in traditional watershed segmentation and using watersheds to optimize more general models of use in application beyond image segmentation.",
"title": ""
}
] |
scidocsrr
|
ec2fdb67e01cbebfd56071ced605e9c8
|
Limited Knowledge Shilling Attacks in Collaborative Filtering Systems
|
[
{
"docid": "5fd55cd22aa9fd4df56b212d3d578134",
"text": "Relevance feedback has a history in information retrieval that dates back well over thirty years (c.f. [SL96]). Relevance feedback is typically used for query expansion during short-term modeling of a user’s immediate information need and for user profiling during long-term modeling of a user’s persistent interests and preferences. Traditional relevance feedback methods require that users explicitly give feedback by, for example, specifying keywords, selecting and marking documents, or answering questions about their interests. Such relevance feedback methods force users to engage in additional activities beyond their normal searching behavior. Since the cost to the user is high and the benefits are not always apparent, it can be difficult to collect the necessary data and the effectiveness of explicit techniques can be limited.",
"title": ""
}
] |
[
{
"docid": "4a6d231ce704e4acf9320ac3bd5ade14",
"text": "Despite recent advances in discourse parsing and causality detection, the automatic recognition of argumentation structure of authentic texts is still a very challenging task. To approach this problem, we collected a small corpus of German microtexts in a text generation experiment, resulting in texts that are authentic but of controlled linguistic and rhetoric complexity. We show that trained annotators can determine the argumentation structure on these microtexts reliably. We experiment with different machine learning approaches for automatic argumentation structure recognition on various levels of granularity of the scheme. Given the complex nature of such a discourse understanding tasks, the first results presented here are promising, but invite for further investigation.",
"title": ""
},
{
"docid": "8bed049baa03a11867b0205e16402d0e",
"text": "The paper investigates potential bias in awards of player disciplinary sanctions, in the form of cautions (yellow cards) and dismissals (red cards) by referees in the English Premier League and the German Bundesliga. Previous studies of behaviour of soccer referees have not adequately incorporated within-game information.Descriptive statistics from our samples clearly show that home teams receive fewer yellow and red cards than away teams. These differences may be wrongly interpreted as evidence of bias where the modeller has failed to include withingame events such as goals scored and recent cards issued.What appears as referee favouritism may actually be excessive and illegal aggressive behaviour by players in teams that are behind in score. We deal with these issues by using a minute-by-minute bivariate probit analysis of yellow and red cards issued in games over six seasons in the two leagues. The significance of a variable to denote the difference in score at the time of sanction suggests that foul play that is induced by a losing position is an important influence on the award of yellow and red cards. Controlling for various pre-game and within-game variables, we find evidence that is indicative of home team favouritism induced by crowd pressure: in Germany home teams with running tracks in their stadia attract more yellow and red cards than teams playing in stadia with less distance between the crowd and the pitch. Separating the competing teams in matches by favourite and underdog status, as perceived by the betting market, yields further evidence, this time for both leagues, that the source of home teams receiving fewer cards is not just that they are disproportionately often the favoured team and disproportionately ahead in score.Thus there is evidence that is consistent with pure referee bias in relative treatments of home and away teams.",
"title": ""
},
{
"docid": "a42f7e9efc4c0e2d56107397f98b15f1",
"text": "Recently, much advance has been made in image captioning, and an encoder-decoder framework has achieved outstanding performance for this task. In this paper, we propose an extension of the encoder-decoder framework by adding a component called guiding network. The guiding network models the attribute properties of input images, and its output is leveraged to compose the input of the decoder at each time step. The guiding network can be plugged into the current encoder-decoder framework and trained in an end-to-end manner. Hence, the guiding vector can be adaptively learned according to the signal from the decoder, making itself to embed information from both image and language. Additionally, discriminative supervision can be employed to further improve the quality of guidance. The advantages of our proposed approach are verified by experiments carried out on the MS COCO dataset.",
"title": ""
},
{
"docid": "e57732931a053f73280564270c764f15",
"text": "Neural generative model in question answering (QA) usually employs sequence-to-sequence (Seq2Seq) learning to generate answers based on the user’s questions as opposed to the retrieval-based model selecting the best matched answer from a repository of pre-defined QA pairs. One key challenge of neural generative model in QA lies in generating high-frequency and generic answers regardless of the questions, partially due to optimizing log-likelihood objective function. In this paper, we investigate multitask learning (MTL) in neural network-based method under a QA scenario. We define our main task as agenerative QA via Seq2Seq learning. And we define our auxiliary task as a discriminative QA via binary QAclassification. Both main task and auxiliary task are learned jointly with shared representations, allowing to obtain improved generalization and transferring classification labels as extra evidences to guide the word sequence generation of the answers. Experimental results on both automatic evaluations and human annotations demonstrate the superiorities of our proposed method over baselines.",
"title": ""
},
{
"docid": "db657866610debb4c2f96c98c241b1f2",
"text": "Oxidative stress is viewed as an imbalance between the production of reactive oxygen species (ROS) and their elimination by protective mechanisms, which can lead to chronic inflammation. Oxidative stress can activate a variety of transcription factors, which lead to the differential expression of some genes involved in inflammatory pathways. The inflammation triggered by oxidative stress is the cause of many chronic diseases. Polyphenols have been proposed to be useful as adjuvant therapy for their potential anti-inflammatory effect, associated with antioxidant activity, and inhibition of enzymes involved in the production of eicosanoids. This review aims at exploring the properties of polyphenols in anti-inflammation and oxidation and the mechanisms of polyphenols inhibiting molecular signaling pathways which are activated by oxidative stress, as well as the possible roles of polyphenols in inflammation-mediated chronic disorders. Such data can be helpful for the development of future antioxidant therapeutics and new anti-inflammatory drugs.",
"title": ""
},
{
"docid": "4162c6bbaac397ff24e337fa4af08abd",
"text": "We present a new model called LATTICERNN, which generalizes recurrent neural networks (RNNs) to process weighted lattices as input, instead of sequences. A LATTICERNN can encode the complete structure of a lattice into a dense representation, which makes it suitable to a variety of problems, including rescoring, classifying, parsing, or translating lattices using deep neural networks (DNNs). In this paper, we use LATTICERNNs for a classification task: each lattice represents the output from an automatic speech recognition (ASR) component of a spoken language understanding (SLU) system, and we classify the intent of the spoken utterance based on the lattice embedding computed by a LATTICERNN. We show that making decisions based on the full ASR output lattice, as opposed to 1-best or n-best hypotheses, makes SLU systems more robust to ASR errors. Our experiments yield improvements of 13% over a baseline RNN system trained on transcriptions and 10% over an nbest list rescoring system for intent classification.",
"title": ""
},
{
"docid": "251f5f5af4aa9390f6e144956006097f",
"text": "As algorithms are increasingly used to make important decisions that affect human lives, ranging from social benefit assignment to predicting risk of criminal recidivism, concerns have been raised about the fairness of algorithmic decision making. Most prior works on algorithmic fairness normatively prescribe how fair decisions ought to be made. In contrast, here, we descriptively survey users for how they perceive and reason about fairness in algorithmic decision making. A key contribution of this work is the framework we propose to understand why people perceive certain features as fair or unfair to be used in algorithms. Our framework identifies eight properties of features, such as relevance, volitionality and reliability, as latent considerations that inform people’s moral judgments about the fairness of feature use in decision-making algorithms. We validate our framework through a series of scenario-based surveys with 576 people. We find that, based on a person’s assessment of the eight latent properties of a feature in our exemplar scenario, we can accurately (> 85%) predict if the person will judge the use of the feature as fair. Our findings have important implications. At a high-level, we show that people’s unfairness concerns are multi-dimensional and argue that future studies need to address unfairness concerns beyond discrimination. At a low-level, we find considerable disagreements in people’s fairness judgments. We identify root causes of the disagreements, and note possible pathways to resolve them.",
"title": ""
},
{
"docid": "02647d7ab54cc2ae1af5ce156e63f742",
"text": "In intelligent transportation systems (ITS), transportation infrastructure is complimented with information and communication technologies with the objectives of attaining improved passenger safety, reduced transportation time and fuel consumption and vehicle wear and tear. With the advent of modern communication and computational devices and inexpensive sensors it is possible to collect and process data from a number of sources. Data fusion (DF) is collection of techniques by which information from multiple sources are combined in order to reach a better inference. DF is an inevitable tool for ITS. This paper provides a survey of how DF is used in different areas of ITS. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c61470e2c1310a9c6fa09dc96659d4ab",
"text": "Selenium IDE Locating Elements There is a great responsibility for developers and testers to ensure that web software exhibits high reliability and speed. Somewhat recently, the software community has seen a rise in the usage of AJAX in web software development to achieve this goal. The advantage of AJAX applications is that they are typically very responsive. The vEOC is an Emergency Management Training application which requires this level of interactivity. Selenium is great in that it is an open source testing tool that can handle the amount of JavaScript present in AJAX applications, and even gives the tester the freedom to add their own features. Since web software is so frequently modified, the main goal for any test developer is to create sustainable tests. How can Selenium tests be made more maintainable?",
"title": ""
},
{
"docid": "4aec63cb23b43f4d1d2f7ab53cedbff9",
"text": "Presently, there is no recommendation on how to assess functional status of chronic obstructive pulmonary disease (COPD) patients. This study aimed to summarize and systematically evaluate these measures.Studies on measures of COPD patients' functional status published before the end of January 2015 were included using a search filters in PubMed and Web of Science, screening reference lists of all included studies, and cross-checking against some relevant reviews. After title, abstract, and main text screening, the remaining was appraised using the Consensus-based Standards for the Selection of Health Measurement Instruments (COSMIN) 4-point checklist. All measures from these studies were rated according to best-evidence synthesis and the best-rated measures were selected.A total of 6447 records were found and 102 studies were reviewed, suggesting 44 performance-based measures and 14 patient-reported measures. The majority of the studies focused on internal consistency, reliability, and hypothesis testing, but only 21% of them employed good or excellent methodology. Their common weaknesses include lack of checks for unidimensionality, inadequate sample sizes, no prior hypotheses, and improper methods. On average, patient-reported measures perform better than performance-based measures. The best-rated patient-reported measures are functional performance inventory (FPI), functional performance inventory short form (FPI-SF), living with COPD questionnaire (LCOPD), COPD activity rating scale (CARS), University of Cincinnati dyspnea questionnaire (UCDQ), shortness of breath with daily activities (SOBDA), and short-form pulmonary functional status scale (PFSS-11), and the best-rated performance-based measures are exercise testing: 6-minute walk test (6MWT), endurance treadmill test, and usual 4-meter gait speed (usual 4MGS).Further research is needed to evaluate the reliability and validity of performance-based measures since present studies failed to provide convincing evidence. FPI, FPI-SF, LCOPD, CARS, UCDQ, SOBDA, PFSS-11, 6MWT, endurance treadmill test, and usual 4MGS performed well and are preferable to assess functional status of COPD patients.",
"title": ""
},
{
"docid": "8fa61b7d1844eee81d1e02b12b654b16",
"text": "Time series are ubiquitous, and a measure to assess their similarity is a core part of many computational systems. In particular, the similarity measure is the most essential ingredient of time series clustering and classification systems. Because of this importance, countless approaches to estimate time series similarity have been proposed. However, there is a lack of comparative studies using empirical, rigorous, quantitative, and large-scale assessment strategies. In this article, we provide an extensive evaluation of similarity measures for time series classification following the aforementioned principles. We consider 7 different measures coming from alternative measure ‘families’, and 45 publicly-available time series data sets coming from a wide variety of scientific domains. We focus on out-of-sample classification accuracy, but in-sample accuracies and parameter choices are also discussed. Our work is based on rigorous evaluation methodologies and includes the use of powerful statistical significance tests to derive meaningful conclusions. The obtained results show the equivalence, in terms of accuracy, of a number of measures, but with one single candidate outperforming the rest. Such findings, together with the followed methodology, invite researchers on the field to adopt a more consistent evaluation criteria and a more informed decision regarding the baseline measures to which new developments should be compared.",
"title": ""
},
{
"docid": "085345c30e517a441dd0ada4a2200a8d",
"text": "A slim radio frequency identification (RFID) tag antenna design for metallic objects application is proposed in this letter. It is designed based on a high-impedance surface (HIS) unit cell structure directly rather than adopting a large HIS ground plane. The antenna structure consists of metallic rectangular patches electrically connected through vias to the ground plane to form an RFID tag antenna that is suitable for mounting on metallic objects. The experimental tests show that the maximum read range of RFID tag placed on a metallic object is about 3.1 m and the overall size is 65 times 20 times 1.5 mm3. It is thinner than inverted-F antenna (IFA), planar inverted-F antenna (PIFA), or patch-type antennas for metallic objects. Simulation and measurement results of the proposed RFID tag antenna are also presented in this letter.",
"title": ""
},
{
"docid": "309713b49b8ea3bd6feee408c351467a",
"text": "In this paper we describe a hybrid system that applies maximum entropy model (MaxEnt), language specific rules and gazetteers to the task of named entity recognition (NER) in Indian languages designed for the IJCNLP NERSSEAL shared task. Starting with named entity (NE) annotated corpora and a set of features we first build a baseline NER system. Then some language specific rules are added to the system to recognize some specific NE classes. Also we have added some gazetteers and context patterns to the system to increase the performance. As identification of rules and context patterns requires language knowledge, we were able to prepare rules and identify context patterns for Hindi and Bengali only. For the other languages the system uses the MaxEnt model only. After preparing the one-level NER system, we have applid a set of rules to identify the nested entities. The system is able to recognize 12 classes of NEs with 65.13% f-value in Hindi, 65.96% f-value in Bengali and 44.65%, 18.74%, and 35.47% f-value in Oriya, Telugu and Urdu respectively.",
"title": ""
},
{
"docid": "122a27336317372a0d84ee353bb94a4b",
"text": "Recently, many advanced machine learning approaches have been proposed for coreference resolution; however, all of the discriminatively-trained models reason over mentions rather than entities. That is, they do not explicitly contain variables indicating the “canonical” values for each attribute of an entity (e.g., name, venue, title, etc.). This canonicalization step is typically implemented as a post-processing routine to coreference resolution prior to adding the extracted entity to a database. In this paper, we propose a discriminatively-trained model that jointly performs coreference resolution and canonicalization, enabling features over hypothesized entities. We validate our approach on two different coreference problems: newswire anaphora resolution and research paper citation matching, demonstrating improvements in both tasks and achieving an error reduction of up to 62% when compared to a method that reasons about mentions only.",
"title": ""
},
{
"docid": "96acf51c9f1202558d38f4214d0a36d9",
"text": "We propose a novel crowd counting model that maps a given crowd scene to its density. Crowd analysis is compounded by myriad of factors like inter-occlusion between people due to extreme crowding, high similarity of appearance between people and background elements, and large variability of camera view-points. Current state-of-the art approaches tackle these factors by using multi-scale CNN architectures, recurrent networks and late fusion of features from multi-column CNN with different receptive fields. We propose switching convolutional neural network that leverages variation of crowd density within an image to improve the accuracy and localization of the predicted crowd count. Patches from a grid within a crowd scene are relayed to independent CNN regressors based on crowd count prediction quality of the CNN established during training. The independent CNN regressors are designed to have different receptive fields and a switch classifier is trained to relay the crowd scene patch to the best CNN regressor. We perform extensive experiments on all major crowd counting datasets and evidence better performance compared to current state-of-the-art methods. We provide interpretable representations of the multichotomy of space of crowd scene patches inferred from the switch. It is observed that the switch relays an image patch to a particular CNN column based on density of crowd.",
"title": ""
},
{
"docid": "c4337c7a5b53a07e41f94976418ac293",
"text": "Deep neural network has shown remarkable performance in solving computer vision and some graph evolved tasks, such as node classification and link prediction. However, the vulnerability of deep model has also been revealed by carefully designed adversarial examples generated by various adversarial attack methods. With the wider application of deep model in complex network analysis, in this paper we define and formulate the link prediction adversarial attack problem and put forward a novel iterative gradient attack (IGA) based on the gradient information in trained graph auto-encoder (GAE). To our best knowledge, it is the first time link prediction adversarial attack problem is defined and attack method is brought up. Not surprisingly, GAE was easily fooled by adversarial network with only a few links perturbed on the clean network. By conducting comprehensive experiments on different real-world data sets, we can conclude that most deep model based and other state-of-art link prediction algorithms cannot escape the adversarial attack just like GAE. We can benefit the attack as an efficient privacy protection tool from link prediction unknown violation, on the other hand, link prediction attack can be a robustness evaluation metric for current link prediction algorithm in attack defensibility.",
"title": ""
},
{
"docid": "e106df98a3d0240ed3e10840697bfc74",
"text": "Online question and answer (Q&A) services are facing key challenges to motivate domain experts to provide quick and high-quality answers. Recent systems seek to engage real-world experts by allowing them to set a price on their answers. This leads to a \"targeted\" Q&A model where users to ask questions to a target expert by paying the price. In this paper, we perform a case study on two emerging targeted Q&A systems Fenda (China) and Whale (US) to understand how monetary incentives affect user behavior. By analyzing a large dataset of 220K questions (worth 1 million USD), we find that payments indeed enable quick answers from experts, but also drive certain users to game the system for profits. In addition, this model requires users (experts) to proactively adjust their price to make profits. People who are unwilling to lower their prices are likely to hurt their income and engagement over time.",
"title": ""
},
{
"docid": "26b7d1d79382d61dfcd523864c477e21",
"text": "The vending machine which provides the beverage like snacks, cold drink, it is also used for ticketing. These systems are operated on either coin or note or manually switch operated. This paper presents system which operates not on coin or note, it operates on RFID system. This system gives the access through only RFID which avoid the misuse of machine. A small RFID reader is fitted on the machine. The identity card which contains RFID tag is given to each employee. According to estimation the numbers of cups per day as per client’s requirement are programmed. Then an employee goes to vending machine show his card to the reader then the drink is dispensed. But when employee wants more coffees than fixed number, that person is allow for that but that employee has to pay for extra cups and amount is cut from the salary account. KeywordsRFID, Arduino, Vending machine.",
"title": ""
},
{
"docid": "fb3e2f6c4f790b1f5c30a2d95c8d3eb4",
"text": "Top-N recommender systems typically utilize side information to address the problem of data sparsity. As nowadays side information is growing towards high dimensionality, the performances of existing methods deteriorate in terms of both effectiveness and efficiency, which imposes a severe technical challenge. In order to take advantage of high-dimensional side information, we propose in this paper an embedded feature selection method to facilitate top-N recommendation. In particular, we propose to learn feature weights of side information, where zero-valued features are naturally filtered out. We also introduce non-negativity and sparsity to the feature weights, to facilitate feature selection and encourage low-rank structure. Two optimization problems are accordingly put forward, respectively, where the feature selection is tightly or loosely coupled with the learning procedure. Augmented Lagrange Multiplier and Alternating Direction Method are applied to efficiently solve the problems. Experiment results demonstrate the superior recommendation quality of the proposed algorithm to that of the state-of-the-art alternatives.",
"title": ""
},
{
"docid": "5329edd5259cf65d62922b17765fce0d",
"text": "T emergence of software-based platforms is shifting competition toward platform-centric ecosystems, although this phenomenon has not received much attention in information systems research. Our premise is that the coevolution of the design, governance, and environmental dynamics of such ecosystems influences how they evolve. We present a framework for understanding platform-based ecosystems and discuss five broad research questions that present significant research opportunities for contributing homegrown theory about their evolutionary dynamics to the information systems discipline and distinctive information technology-artifactcentric contributions to the strategy, economics, and software engineering reference disciplines.",
"title": ""
}
] |
scidocsrr
|
379e3dc264eab968a9e4c838fb93ff57
|
Data Transfer From MySQL To Hadoop: Implementers' Perspective
|
[
{
"docid": "31f838fb0c7db7e8b58fb1788d5554c8",
"text": "Today’s smartphones operate independently of each other, using only local computing, sensing, networking, and storage capabilities and functions provided by remote Internet services. It is generally difficult or expensive for one smartphone to share data and computing resources with another. Data is shared through centralized services, requiring expensive uploads and downloads that strain wireless data networks. Collaborative computing is only achieved using ad hoc approaches. Coordinating smartphone data and computing would allow mobile applications to utilize the capabilities of an entire smartphone cloud while avoiding global network bottlenecks. In many cases, processing mobile data in-place and transferring it directly between smartphones would be more efficient and less susceptible to network limitations than offloading data and processing to remote servers. We have developed Hyrax, a platform derived from Hadoop that supports cloud computing on Android smartphones. Hyrax allows client applications to conveniently utilize data and execute computing jobs on networks of smartphones and heterogeneous networks of phones and servers. By scaling with the number of devices and tolerating node departure, Hyrax allows applications to use distributed resources abstractly, oblivious to the physical nature of the cloud. The design and implementation of Hyrax is described, including experiences in porting Hadoop to the Android platform and the design of mobilespecific customizations. The scalability of Hyrax is evaluated experimentally and compared to that of Hadoop. Although the performance of Hyrax is poor for CPU-bound tasks, it is shown to tolerate node-departure and offer reasonable performance in data sharing. A distributed multimedia search and sharing application is implemented to qualitatively evaluate Hyrax from an application development perspective.",
"title": ""
}
] |
[
{
"docid": "4a276fbff2901273b1b0d99c397abcbf",
"text": "A rating system provides relative measures of superiority between adversaries. We propose a novel and simple approach, which we call pi-rating, for dynamically rating Association Football teams solely on the basis of the relative discrepancies in scores through relevant match instances. The pi-rating system is applicable to any other sport where the score is considered as a good indicator for prediction purposes, as well as determining the relative performances between adversaries. In an attempt to examine how well the ratings capture a team’s performance, we have a) assessed them against two recently proposed football ELO rating variants and b) used them as the basis of a football betting strategy against published market odds. The results show that the pi-ratings outperform considerably the widely accepted ELO ratings and, perhaps more importantly, demonstrate profitability over a period of five English Premier League seasons (2007/08 to 2011/12), even allowing for the bookmakers' built-in profit margin. This is the first academic study to demonstrate profitability against market odds using such a relatively simple technique, and the resulting pi-ratings can be incorporated as parameters into other more sophisticated models in an attempt to further enhance forecasting capability.",
"title": ""
},
{
"docid": "ab13636bc7876dd2e7f9b27725f205f1",
"text": "Clickbait is increasingly used by publishers on social media platforms to spark their users’ natural curiosity and to elicit clicks on their content. Every click earns them display advertisement revenue. Social media users who are tricked into clicking may experience a sense of disappointment or agitation, and social media operators have been observing growing amounts of clickbait on their platforms. As largest video-sharing platform on the web, YouTube, too, suffers from clickbait. Many users and YouTubers alike have complained about this development. In this paper, we lay the foundation for crowdsourcing the first YouTube clickbait corpus by (1) augmenting the YouTube 8M dataset with meta data to obtain a large-scale base population of videos, and by (2) studying the task design suitable to manual clickbait identification.",
"title": ""
},
{
"docid": "e3892ec4f055ca731f275cde7ddde1a2",
"text": "Several real world prediction problems involve forecasting rare values of a target variable. When this variable is nominal we have a problem of class imbalance that was already studied thoroughly within machine learning. For regression tasks, where the target variable is continuous, few works exist addressing this type of problem. Still, important application areas involve forecasting rare extreme values of a continuous target variable. This paper describes a contribution to this type of tasks. Namely, we propose to address such tasks by sampling approaches. These approaches change the distribution of the given training data set to decrease the problem of imbalance between the rare target cases and the most frequent ones. We present a modification of the well-known Smote algorithm that allows its use on these regression tasks. In an extensive set of experiments we provide empirical evidence for the superiority of our proposals for these particular regression tasks. The proposed SmoteR method can be used with any existing regression algorithm turning it into a general tool for addressing problems of forecasting rare extreme values of a continuous target variable.",
"title": ""
},
{
"docid": "90753a8c6ea17cc1a857898050e8ffca",
"text": "The panel focuses on blockchain, the technology behind Bitcoin and Ethereum. The topic has drawn much attention recently in both business and academic circles. The blockchain is a distributed, immutable digital record system that is shared among many independent parties and can be updated only by their consensus. If unbiased and incorruptible blockchain-based information systems become prevalent repositories of our records, trusting other humans with constructing and maintaining key records to define the resources at our disposal could become unnecessary. In principle, blockchain could provide a decentralized information infrastructure that no one fully controls, thereby no one has absolute power and no one can distort past or current records. The full potential let alone implications of blockchain is still unknown. The panel explores blockchain challenges and opportunities from the IS research perspective.",
"title": ""
},
{
"docid": "56a3a761606e699c3f21fb0fe1ecbf0a",
"text": "Internet banking (IB) has become one of the widely used banking services among Malaysian retail banking customers in recent years. Despite its attractiveness, customer loyalty towards Internet banking website has become an issue due to stiff competition among the banks in Malaysia. As the development and validation of a customer loyalty model in Internet banking website context in Malaysia had not been addressed by past studies, this study attempts to develop a model based on the usage of Information System (IS), with the purpose to investigate factors influencing customer loyalty towards Internet banking websites. A questionnaire survey was conducted with the sample consisting of Internet banking users in Malaysia. Factors that influence customer loyalty towards Internet banking website in Malaysia have been investigated and tested. The study also attempts to identify the most essential factors among those investigated: service quality, perceived value, trust, habit and reputation of the bank. Based on the findings, trust, habit and reputation are found to have a significant influence on customer loyalty towards individual Internet banking websites in Malaysia. As compared to trust or habit factors, reputation is the strongest influence. The results also indicated that service quality and perceived value are not significantly related to customer loyalty. Service quality is found to be an important factor in influencing the adoption of the technology, but did not have a significant influence in retention of customers. The findings have provided an insight to the internet banking providers on the areas to be focused on in retaining their customers.",
"title": ""
},
{
"docid": "2af36afd2440a4940873fef1703aab3f",
"text": "In recent years researchers have found that alternations in arterial or venular tree of the retinal vasculature are associated with several public health problems such as diabetic retinopathy which is also the leading cause of blindness in the world. A prerequisite for automated assessment of subtle changes in arteries and veins, is to accurately separate those vessels from each other. This is a difficult task due to high similarity between arteries and veins in addition to variation of color and non-uniform illumination inter and intra retinal images. In this paper a novel structural and automated method is presented for artery/vein classification of blood vessels in retinal images. The proposed method consists of three main steps. In the first step, several image enhancement techniques are employed to improve the images. Then a specific feature extraction process is applied to separate major arteries from veins. Indeed, vessels are divided to smaller segments and feature extraction and vessel classification are applied to each small vessel segment instead of each vessel point. Finally, a post processing step is added to improve the results obtained from the previous step using structural characteristics of the retinal vascular network. In the last stage, vessel features at intersection and bifurcation points are processed for detection of arterial and venular sub trees. Ultimately vessel labels are revised by publishing the dominant label through each identified connected tree of arteries or veins. Evaluation of the proposed approach against two different datasets of retinal images including DRIVE database demonstrates the good performance and robustness of the method. The proposed method may be used for determination of arteriolar to venular diameter ratio in retinal images. Also the proposed method potentially allows for further investigation of labels of thinner arteries and veins which might be found by tracing them back to the major vessels.",
"title": ""
},
{
"docid": "db215a998da127466bcb5e80b750cbbb",
"text": "to design and build computing systems capable of running themselves, adjusting to varying circumstances, and preparing their resources to handle most efficiently the workloads we put upon them. These autonomic systems must anticipate needs and allow users to concentrate on what they want to accomplish rather than figuring how to rig the computing systems to get them there. Abtract The performance of current shared-memory multiprocessor systems depends on both the efficient utilization of all the architectural elements in the system (processors, memory, etc), and the workload characteristics. This Thesis has the main goal of improving the execution of workloads of parallel applications in shared-memory multiprocessor systems by using real performance information in the processor scheduling. In multiprocessor systems, users request for resources (processors) to execute their parallel applications. The Operating System is responsible to distribute the available physical resources among parallel applications in the more convenient way for both the system and the application performance. It is a typical practice of users in multiprocessor systems to request for a high number of processors assuming that the higher the processor request, the higher the number of processors allocated, and the higher the speedup achieved by their applications. However, this is not true. Parallel applications have different characteristics with respect to their scalability. Their speedup also depends on run-time parameters such as the influence of the rest of running applications. This Thesis proposes that the system should not base its decisions on the users requests only, but the system must decide, or adjust, its decisions based on real performance information calculated at run-time. The performance of parallel applications is an information that the system can dynamically measure without introducing a significant penalty in the application execution time. Using this information, the processor allocation can be decided, or modified, being robust to incorrect processor requests given by users. We also propose that the system use a target efficiency to ensure the efficient use of processors. This target efficiency is a system parameter and can be dynamically decided as a function of the characteristics of running applications or the number of queued applications. We also propose to coordinate the different scheduling levels that operate in the processor scheduling: the run-time scheduler, the processor scheduler, and the queueing system. We propose to establish an interface between levels to send and receive information, and to take scheduling decisions considering the information provided by the rest of …",
"title": ""
},
{
"docid": "4bde8a8980f75a9edc4897323e6ed3bb",
"text": "Although the possibility of attacking smart-cards by analyzing their electromagnetic power radiation repeatedly appears in research papers, all accessible references evade the essence of reporting conclusive experiments where actual cryptographic algorithms such as des or rsa were successfully attacked. This work describes electromagnetic experiments conducted on three different cmos chips, featuring different hardware protections and executing a des, an alleged comp128 and an rsa. In all cases the complete key material was successfully retrieved.",
"title": ""
},
{
"docid": "5549b770dd97c58e6bc5fc18b316e0e4",
"text": "Due to its rapid speed of information spread, wide user bases, and extreme mobility, Twitter is drawing attention as a potential emergency reporting tool under extreme events. However, at the same time, Twitter is sometimes despised as a citizen based non-professional social medium for propagating misinformation, rumors, and, in extreme case, propaganda. This study explores the working dynamics of the rumor mill by analyzing Twitter data of the Haiti Earthquake in 2010. For this analysis, two key variables of anxiety and informational uncertainty are derived from rumor theory, and their interactive dynamics are measured by both quantitative and qualitative methods. Our research finds that information with credible sources contribute to suppress the level of anxiety in Twitter community, which leads to rumor control and high information quality.",
"title": ""
},
{
"docid": "3633f55c10b3975e212e6452ad999624",
"text": "We propose a method for semantic structure analysis of noun phrases using Abstract Meaning Representation (AMR). AMR is a graph representation for the meaning of a sentence, in which noun phrases (NPs) are manually annotated with internal structure and semantic relations. We extract NPs from the AMR corpus and construct a data set of NP semantic structures. We also propose a transition-based algorithm which jointly identifies both the nodes in a semantic structure tree and semantic relations between them. Compared to the baseline, our method improves the performance of NP semantic structure analysis by 2.7 points, while further incorporating external dictionary boosts the performance by 7.1 points.",
"title": ""
},
{
"docid": "58fbfaf50c785dd575fa82640bb0efe0",
"text": "Question answering (QA) systems rely on both knowledge bases and unstructured text corpora. Domain-specific QA presents a unique challenge, since relevant knowledge bases are often lacking and unstructured text is difficult to query and parse. This project focuses on the QUASAR-S dataset (Dhingra et al., 2017) constructed from the community QA site Stack Overflow. QUASAR-S consists of Cloze-style questions about software entities and a large background corpus of communitygenerated posts, each tagged with relevant software entities. We incorporate the tag entities as context for the QA task and find that modeling co-occurrence of tags and answers in posts leads to significant accuracy gains. To this end, we propose CASE, a hybrid of an RNN language model and a tag-answer co-occurrence model which achieves state-ofthe-art accuracy on the QUASAR-S dataset. We also find that this approach — modeling both question sentences and context-answer co-occurrence — is effective for other QA tasks. Using only language and co-occurrence modeling on the training set, CASE is competitive with the state-of-the-art method on the SPADES dataset (Bisk et al., 2016) which uses a knowledge base.",
"title": ""
},
{
"docid": "e34873c21f9c0dd0705e0496886137df",
"text": "This paper examines two principal categories of manipulative behaviour. The term ‘macro-manipulation’ is used to describe the lobbying of regulators to persuade them to produce regulation that is more favourable to the interests of preparers. ‘Micromanipulation’ describes the management of accounting figures to produce a biased view at the entity level. Both categories of manipulation can be viewed as attempts at creativity by financial statement preparers. The paper analyses two cases of manipulation which are considered in an ethical context. The paper concludes that the manipulations described in it can be regarded as morally reprehensible. They are not fair to users, they involve an unjust exercise of power, and they tend to weaken the authority of accounting regulators.",
"title": ""
},
{
"docid": "82e823324c1717996d09b11bdfdc4a62",
"text": "Deep neural networks (DNNs) have demonstrated dominating performance in many fields; since AlexNet, the neural networks used in practice are going wider and deeper. On the theoretical side, a long line of works have been focusing on why we can train neural networks when there is only one hidden layer. The theory of multi-layer networks remains somewhat unsettled. In this work, we prove why simple algorithms such as stochastic gradient descent (SGD) can find global minima on the training objective of DNNs in polynomial time. We only make two assumptions: the inputs do not degenerate and the network is over-parameterized. The latter means the number of hidden neurons is sufficiently large: polynomial in L, the number of DNN layers and in n, the number of training samples. As concrete examples, on the training set and starting from randomly initialized weights, we show that SGD attains 100% accuracy in classification tasks, or minimizes regression loss in linear convergence speed ε ∝ e−Ω(T , with a number of iterations that only scales polynomial in n and L. Our theory applies to the widely-used but non-smooth ReLU activation, and to any smooth and possibly non-convex loss functions. In terms of network architectures, our theory at least applies to fully-connected neural networks, convolutional neural networks (CNN), and residual neural networks (ResNet). ∗V1 appears on arXiv on this date and no new result is added since then. V2 adds citations and V3/V4 polish writing. This work was done when Yuanzhi Li and Zhao Song were 2018 summer interns at Microsoft Research Redmond. When this work was performed, Yuanzhi Li was also affiliated with Princeton, and Zhao Song was also affiliated with UW and Harvard. We would like to specially thank Greg Yang for many enlightening discussions, thank Ofer Dekel, Sebastien Bubeck, and Harry Shum for very helpful conversations, and thank Jincheng Mei for carefully checking the proofs of this paper. ar X iv :1 81 1. 03 96 2v 4 [ cs .L G ] 4 F eb 2 01 9",
"title": ""
},
{
"docid": "c02e7ece958714df34539a909c2adb7d",
"text": "Despite the growing evidence of the association between shame experiences and eating psychopathology, the specific effect of body image-focused shame memories on binge eating remains largely unexplored. The current study examined this association and considered current body image shame and self-criticism as mediators. A multi-group path analysis was conducted to examine gender differences in these relationships. The sample included 222 women and 109 men from the Portuguese general and college student populations who recalled an early body image-focused shame experience and completed measures of the centrality of the shame memory, current body image shame, binge eating symptoms, depressive symptoms, and self-criticism. For both men and women, the effect of the centrality of shame memories related to body image on binge eating symptoms was fully mediated by body image shame and self-criticism. In women, these effects were further mediated by self-criticism focused on a sense of inadequacy and also on self-hatred. In men, only the form of self-criticism focused on a sense of inadequacy mediated these associations. The present study has important implications for the conceptualization and treatment of binge eating symptoms. Findings suggest that, in both genders, body image-focused shame experiences are associated with binge eating symptoms via their effect on current body image shame and self-criticism.",
"title": ""
},
{
"docid": "1303770cf8d0f1b0f312feb49281aa10",
"text": "A terahertz metamaterial absorber (MA) with properties of broadband width, polarization-insensitive, wide angle incidence is presented. Different from the previous methods to broaden the absorption width, this letter proposes a novel combinatorial way which units a nested structure with multiple metal-dielectric layers. We numerically investigate the proposed MA, and the simulation results show that the absorber achieves a broadband absorption over a frequency range of 0.896 THz with the absorptivity greater than 90%. Moreover, the full-width at half maximum of the absorber is up to 1.224 THz which is 61.2% with respect to the central frequency. The mechanism for the broadband absorption originates from the overlapping of longitudinal coupling between layers and coupling of the nested structure. Importantly, the nested structure makes a great contribution to broaden the absorption width. Thus, constructing a nested structure in a multi-layer absorber may be considered as an effective way to design broadband MAs.",
"title": ""
},
{
"docid": "4575b5c93aa86c150944597638402439",
"text": "Multilayer networks are networks where edges exist in multiple layers that encode different types or sources of interactions. As one of the most important problems in network science, discovering the underlying community structure in multilayer networks has received an increasing amount of attention in recent years. One of the challenging issues is to develop effective community structure quality functions for characterizing the structural or functional properties of the expected community structure. Although several quality functions have been developed for evaluating the detected community structure, little has been explored about how to explicitly bring our knowledge of the desired community structure into such quality functions, in particular for the multilayer networks. To address this issue, we propose the multilayer edge mixture model (MEMM), which is positioned as a general framework that enables us to design a quality function that reflects our knowledge about the desired community structure. The proposed model is based on a mixture of the edges, and the weights reflect their role in the detection process. By decomposing a community structure quality function into the form of MEMM, it becomes clear which type of community structure will be discovered by such quality function. Similarly, after such decomposition we can also modify the weights of the edges to find the desired community structure. In this paper, we apply the quality functions modified with the knowledge of MEMM to different multilayer benchmark networks as well as real-world multilayer networks and the detection results confirm the feasibility of MEMM.",
"title": ""
},
{
"docid": "2444b0ae9920e55cf0e3e329b048a2e8",
"text": "Concurrent Clean is an experimental, lazy, higher-order parallel functional programming language based on term graph rewriting. An important diierence with other languages is that in Clean graphs are manipulated and not terms. This can be used by the programmer to control communication and sharing of computation. Cyclic structures can be deened. Concurrent Clean furthermore allows to control the (parallel) order of evaluation to make eecient evaluation possible. With help of sequential annotations the default lazy evaluation can be locally changed into eager evaluation. The language enables the deenition of partially strict data structures which make a whole new class of algorithms feasible in a functional language. A powerful and fast strictness analyser is incorporated in the system. The quality of the code generated by the Clean compiler has been greatly improved such that it is one of the best code generators for a lazy functional language. Two very powerful parallel annotations enable the programmer to deene concurrent functional programs with arbitrary process topologies. Concurrent Clean is set up in such a way that the eeciency achieved for the sequential case can largely be maintained for a parallel implementation on loosely coupled parallel machine architectures.",
"title": ""
},
{
"docid": "80c3aa4530dfa8c7c909da7dea9bed3a",
"text": "We present a state-of-the-art algorithm for measuring the semantic similarity of word pairs using novel combinations of word embeddings, WordNet, and the concept dictionary 4lang. We evaluate our system on the SimLex-999 benchmark data. Our top score of 0.76 is higher than any published system that we are aware of, well beyond the average inter-annotator agreement of 0.67, and close to the 0.78 average correlation between a human rater and the average of all other ratings, suggesting that our system has achieved nearhuman performance on this benchmark.",
"title": ""
},
{
"docid": "331bb4a2b28c391045bcd74d76dd26fb",
"text": "This paper intends to data analysis for Li-Ion and Lead Acid Batteries. The analysis based on discharge parameters input and output were processed in Simulink MATLAB. The input parameters are nominal voltage, rated capacity, and SOC, while the output parameters consist of maximum capacity, fully charged voltage, nominal discharge current, internal resistance, exponential zone voltage, and exponential zone capacity. Study and investigation of Li-Ion batteries were done by comparing them to the Lead Acids at the voltage and battery capacity of 3.7 V, 1400 mAh and 12V, 100Ah respectively. The result showed that the maximum capacity parameter of Lead Acid batteries equally 104.16% is better than Li-Ions of 100%, while Li-Ion batteries is good for almost all others parameters except internal resistance.",
"title": ""
}
] |
scidocsrr
|
4cd57dbeee4df3dae7a636768e5ff4ef
|
Object detection using Haar-cascade Classifier
|
[
{
"docid": "bdfc21b5ae86711f093806b976258d33",
"text": "A generic and robust approach for the detection of road vehicles from an Unmanned Aerial Vehicle (UAV) is an important goal within the framework of fully autonomous UAV deployment for aerial reconnaissance and surveillance. Here we present a novel approach to the automatic detection of vehicles based on using multiple trained cascaded Haar classifiers (a disjunctive set of cascades). Our approach facilitates the realtime detection of both static and moving vehicles independent of orientation, colour, type and configuration. The results presented show the successful detection of differing vehicle types under varying conditions in both isolated rural and cluttered urban environments with minimal false positive detection. The technique is realised on aerial imagery obtained at 1Hz from an optical camera on the medium UAV B-MAV platform with results presented to include those from the MoD Grand Challenge 2008.",
"title": ""
}
] |
[
{
"docid": "8d210885c21833795e69ab741625f6ac",
"text": "We introduce Picturebook, a large-scale lookup operation to ground language via ‘snapshots’ of our physical world accessed through image search. For each word in a vocabulary, we extract the top-k images from Google image search and feed the images through a convolutional network to extract a word embedding. We introduce a multimodal gating function to fuse our Picturebook embeddings with other word representations. We also introduce Inverse Picturebook, a mechanism to map a Picturebook embedding back into words. We experiment and report results across a wide range of tasks: word similarity, natural language inference, semantic relatedness, sentiment/topic classification, image-sentence ranking and machine translation. We also show that gate activations corresponding to Picturebook embeddings are highly correlated to human judgments of concreteness ratings.",
"title": ""
},
{
"docid": "a8f01a5ddf17bce7bfd83fa3efb9d8d3",
"text": "The evidence of multi-photon absorption enhancement by the dual-wavelength double-pulse laser irradiation in transparent sapphire was demonstrated experimentally and explained theoretically for the first time. Two collinearly combined laser beams with the wavelengths of 1064 nm and 355 nm, inter-pulse delay of 0.1 ns, and pulse duration of 10 ps were used to induce intra-volume modifications in sapphire. The theoretical prediction of using a particular orientation angle of 15 degrees of the half-wave plate for the most efficient absorption of laser irradiation is in good agreement with the experimental data. The new innovative effect of multi-photon absorption enhancement by dual-wavelength double-pulse irradiation allowed utilisation of the laser energy up to four times more efficiently for initiation of internal modifications in sapphire. The new absorption enhancement effect has been used for efficient intra-volume dicing and singulation of transparent sapphire wafers. The dicing speed of 150 mm/s was achieved for the 430 μm thick sapphire wafer by using the laser power of 6.8 W at the repetition rate of 100 kHz. This method opens new opportunities for the manufacturers of the GaN-based light-emitting diodes by fast and precise separation of sapphire substrates.",
"title": ""
},
{
"docid": "7a258a5dd6d18bb159b948205ec89fc6",
"text": "The ErbB receptor tyrosine kinases evolved as key regulatory entities enabling the extracellular milieu to communicate with the intracellular machinery to bring forth the appropriate biological response in an ever-changing environment. Since its discovery, many aspects of the ErbB family have been deciphered, with emphasis on aberration of signaling in human diseases. However, only now, with the availability of the atomic coordinates of these receptors, can we construct a comprehensive model of the mechanisms underlying ligand-induced receptor dimerization and subsequent tyrosine kinase activation. Furthermore, the recent introduction of new high-throughput screening methodologies, combined with the materialization of a systems biology perspective, reveals an overwhelming network complexity, enabling robust signaling and evolvability. This knowledge is likely to impact our view of diseases as system perturbations and resistance to ErbB-targeted therapeutics as manifestations of robustness.",
"title": ""
},
{
"docid": "ef0ce55309cf2e353f58f18d20990cb5",
"text": "The quality of a Neural Machine Translation system depends substantially on the availability of sizable parallel corpora. For low-resource language pairs this is not the case, resulting in poor translation quality. Inspired by work in computer vision, we propose a novel data augmentation approach that targets low-frequency words by generating new sentence pairs containing rare words in new, synthetically created contexts. Experimental results on simulated low-resource settings show that our method improves translation quality by up to 2.9 BLEU points over the baseline and up to 3.2 BLEU over back-translation.",
"title": ""
},
{
"docid": "ca2e577e819ac49861c65bfe8d26f5a1",
"text": "A design of a delay based self-oscillating class-D power amplifier for piezoelectric actuators is presented and modelled. First order and second order configurations are discussed in detail and analytical results reveal the stability criteria of a second order system, which should be respected in the design. It also shows if the second order system converges, it will tend to give a correct pulse modulation regarding to the input modulation index. Experimental results show the effectiveness of this design procedure. For a piezoelectric load of 400 nF, powered by a 150 V 10 kHz sinusoidal signal, a total harmonic distortion (THD) of 4.3% is obtained.",
"title": ""
},
{
"docid": "82dae1a1b6bcd1ca2af690253a6e650a",
"text": "The task of automatic document summarization aims at generating short summaries for originally long documents. A good summary should cover the most important information of the original document or a cluster of documents, while being coherent, non-redundant and grammatically readable. Numerous approaches for automatic summarization have been developed to date. In this paper we give a self-contained, broad overview of recent progress made for document summarization within the last 5 years. Specifically, we emphasize on significant contributions made in recent years that represent the state-of-the-art of document summarization, including progress on modern sentence extraction approaches that improve concept coverage, information diversity and content coherence, as well as attempts from summarization frameworks that integrate sentence compression, and more abstractive systems that are able to produce completely new sentences. In addition, we review progress made for document summarization in domains, genres and applications that are different from traditional settings. We also point out some of the latest trends and highlight a few possible future directions.",
"title": ""
},
{
"docid": "66876eb3710afda075b62b915a2e6032",
"text": "In this paper we analyze the CS Principles project, a proposed Advanced Placement course, by focusing on the second pilot that took place in 2011-2012. In a previous publication the first pilot of the course was explained, but not in a context related to relevant educational research and philosophy. In this paper we analyze the content and the pedagogical approaches used in the second pilot of the project. We include information about the third pilot being conducted in 2012-2013 and the portfolio exam that is part of that pilot. Both the second and third pilots provide evidence that the CS Principles course is succeeding in changing how computer science is taught and to whom it is taught.",
"title": ""
},
{
"docid": "30d191f30f8d0cd0fd0d9b99a440a1df",
"text": "Despite their ubiquitous presence, texture-less objects present significant challenges to contemporary visual object detection and localization algorithms. This paper proposes a practical method for the detection and accurate 3D localization of multiple texture-less and rigid objects depicted in RGB-D images. The detection procedure adopts the sliding window paradigm, with an efficient cascade-style evaluation of each window location. A simple pre-filtering is performed first, rapidly rejecting most locations. For each remaining location, a set of candidate templates (i.e. trained object views) is identified with a voting procedure based on hashing, which makes the method's computational complexity largely unaffected by the total number of known objects. The candidate templates are then verified by matching feature points in different modalities. Finally, the approximate object pose associated with each detected template is used as a starting point for a stochastic optimization procedure that estimates accurate 3D pose. Experimental evaluation shows that the proposed method yields a recognition rate comparable to the state of the art, while its complexity is sub-linear in the number of templates.",
"title": ""
},
{
"docid": "0ad5e71e00b4637483d7676b6ee677db",
"text": "In renal failure metformin can lead to lactic acidosis. Additional inhibition of hepatic gluconeogenesis by accumulation of the drug may aggravate fasting-induced ketoacidosis. We report the occurrence of metformin-associated lactic acidosis (MALA) with concurrent euglycemic ketoacidosis (MALKA) in three patients with renal failure. Patient 1: a 78-year-old woman (pH = 6.89, lactic acid 22 mmol/l, serum ketoacids 7.4 mmol/l and blood glucose 63 mg/dl) on metformin and insulin treatment. Patient 2: a 79-year-old woman on metformin treatment (pH = 6.80, lactic acid 14.7 mmol/l, serum ketoacids 6.4 mmol/l and blood glucose 76 mg/dl). Patient 3: a 71-year-old man on metformin, canagliflozin and liraglutide treatment (pH = 7.21, lactic acid 5.9 mmol/l, serum ketoacids 16 mmol/l and blood glucose 150 mg/dl). In all patients, ketoacidosis receded on glucose infusion and renal replacement therapy. This case series highlights the parallel occurrence of MALA and euglycemic ketoacidosis, the latter exceeding ketosis due to starvation, suggesting a metformin-triggered inhibition of gluconeogenesis. Affected patients benefit from glucose infusion counteracting suppressed hepatic gluconeogenesis.",
"title": ""
},
{
"docid": "7419fa101c2471e225c976da196ed813",
"text": "A 4×40 Gb/s collaborative digital CDR is implemented in 28nm CMOS. The CDR is capable of recovering a low jitter clock from a partially-equalized or un-equalized eye by using a phase detection scheme that inherently filters out ISI edges. The CDR uses split feedback that simultaneously allows wider bandwidth and lower recovered clock jitter. A shared frequency tracking is also introduced that results in lower periodic jitter. Combining these techniques the CDR recovers a 10GHz clock from an eye containing 0.8UIpp DDJ and still achieves 1-10 MHz of tracking bandwidth while adding <; 300fs of jitter. Per lane CDR occupies only .06 mm2 and consumes 175 mW.",
"title": ""
},
{
"docid": "e4ce5d47a095fcdadbe5c16bb90445d4",
"text": "Artificial neural network (ANN) has been widely applied in flood forecasting and got good results. However, it can still not go beyond one or two hidden layers for the problematic non-convex optimization. This paper proposes a deep learning approach by integrating stacked autoencoders (SAE) and back propagation neural networks (BPNN) for the prediction of stream flow, which simultaneously takes advantages of the powerful feature representation capability of SAE and superior predicting capacity of BPNN. To further improve the non-linearity simulation capability, we first classify all the data into several categories by the K-means clustering. Then, multiple SAE-BP modules are adopted to simulate their corresponding categories of data. The proposed approach is respectively compared with the support-vector-machine (SVM) model, the BP neural network model, the RBF neural network model and extreme learning machine (ELM) model. The experimental results show that the SAE-BP integrated algorithm performs much better than other benchmarks.",
"title": ""
},
{
"docid": "e74d1eb4f1d5c45989aff2cb0e79a83e",
"text": "Environmental audio tagging is a newly proposed task to predict the presence or absence of a specific audio event in a chunk. Deep neural network (DNN) based methods have been successfully adopted for predicting the audio tags in the domestic audio scene. In this paper, we propose to use a convolutional neural network (CNN) to extract robust features from mel-filter banks (MFBs), spectrograms or even raw waveforms for audio tagging. Gated recurrent unit (GRU) based recurrent neural networks (RNNs) are then cascaded to model the long-term temporal structure of the audio signal. To complement the input information, an auxiliary CNN is designed to learn on the spatial features of stereo recordings. We evaluate our proposed methods on Task 4 (audio tagging) of the Detection and Classification of Acoustic Scenes and Events 2016 (DCASE 2016) challenge. Compared with our recent DNN-based method, the proposed structure can reduce the equal error rate (EER) from 0.13 to 0.11 on the development set. The spatial features can further reduce the EER to 0.10. The performance of the end-to-end learning on raw waveforms is also comparable. Finally, on the evaluation set, we get the state-of-the-art performance with 0.12 EER while the performance of the best existing system is 0.15 EER.",
"title": ""
},
{
"docid": "6a6063c05941c026b083bfcc573520f8",
"text": "This paper describes how semantic indexing can help to generate a contextual overview of topics and visually compare clusters of articles. The method was originally developed for an innovative information exploration tool, called Ariadne, which operates on bibliographic databases with tens of millions of records (Koopman et al. in Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems. doi: 10.1145/2702613.2732781 , 2015b). In this paper, the method behind Ariadne is further developed and applied to the research question of the special issue “Same data, different results”—the better understanding of topic (re-)construction by different bibliometric approaches. For the case of the Astro dataset of 111,616 articles in astronomy and astrophysics, a new instantiation of the interactive exploring tool, LittleAriadne, has been created. This paper contributes to the overall challenge to delineate and define topics in two different ways. First, we produce two clustering solutions based on vector representations of articles in a lexical space. These vectors are built on semantic indexing of entities associated with those articles. Second, we discuss how LittleAriadne can be used to browse through the network of topical terms, authors, journals, citations and various cluster solutions of the Astro dataset. More specifically, we treat the assignment of an article to the different clustering solutions as an additional element of its bibliographic record. Keeping the principle of semantic indexing on the level of such an extended list of entities of the bibliographic record, LittleAriadne in turn provides a visualization of the context of a specific clustering solution. It also conveys the similarity of article clusters produced by different algorithms, hence representing a complementary approach to other possible means of comparison.",
"title": ""
},
{
"docid": "4e9ca5976fc68c319e8303076ca80dc7",
"text": "A self-driving car, to be deployed in real-world driving environments, must be capable of reliably detecting and effectively tracking of nearby moving objects. This paper presents our new, moving object detection and tracking system that extends and improves our earlier system used for the 2007 DARPA Urban Challenge. We revised our earlier motion and observation models for active sensors (i.e., radars and LIDARs) and introduced a vision sensor. In the new system, the vision module detects pedestrians, bicyclists, and vehicles to generate corresponding vision targets. Our system utilizes this visual recognition information to improve a tracking model selection, data association, and movement classification of our earlier system. Through the test using the data log of actual driving, we demonstrate the improvement and performance gain of our new tracking system.",
"title": ""
},
{
"docid": "7c2425bb7395f17935e7e32122d12cce",
"text": "The development of microwave breast cancer detection and treatment techniques has been driven by reports of substantial contrast in the dielectric properties of malignant and normal breast tissues. However, definitive knowledge of the dielectric properties of normal and diseased breast tissues at microwave frequencies has been limited by gaps and discrepancies across previously published studies. To address these issues, we conducted a large-scale study to experimentally determine the ultrawideband microwave dielectric properties of a variety of normal, malignant and benign breast tissues, measured from 0.5 to 20 GHz using a precision open-ended coaxial probe. Previously, we reported the dielectric properties of normal breast tissue samples obtained from reduction surgeries. Here, we report the dielectric properties of normal (adipose, glandular and fibroconnective), malignant (invasive and non-invasive ductal and lobular carcinomas) and benign (fibroadenomas and cysts) breast tissue samples obtained from cancer surgeries. We fit a one-pole Cole-Cole model to the complex permittivity data set of each characterized sample. Our analyses show that the contrast in the microwave-frequency dielectric properties between malignant and normal adipose-dominated tissues in the breast is considerable, as large as 10:1, while the contrast in the microwave-frequency dielectric properties between malignant and normal glandular/fibroconnective tissues in the breast is no more than about 10%.",
"title": ""
},
{
"docid": "9fb27226848da6b18fdc1e3b3edf79c9",
"text": "In the last few years thousands of scientific papers have investigated sentiment analysis, several startups that measure opinions on real data have emerged and a number of innovative products related to this theme have been developed. There are multiple methods for measuring sentiments, including lexical-based and supervised machine learning methods. Despite the vast interest on the theme and wide popularity of some methods, it is unclear which one is better for identifying the polarity (i.e., positive or negative) of a message. Accordingly, there is a strong need to conduct a thorough apple-to-apple comparison of sentiment analysis methods, as they are used in practice, across multiple datasets originated from different data sources. Such a comparison is key for understanding the potential limitations, advantages, and disadvantages of popular methods. This article aims at filling this gap by presenting a benchmark comparison of twenty-four popular sentiment analysis methods (which we call the state-of-the-practice methods). Our evaluation is based on a benchmark of eighteen labeled datasets, covering messages posted on social networks, movie and product reviews, as well as opinions and comments in news articles. Our results highlight the extent to which the prediction performance of these methods varies considerably across datasets. Aiming at boosting the development of this research area, we open the methods’ codes and datasets used in this article, deploying them in a benchmark system, which provides an open API for accessing and comparing sentence-level sentiment analysis methods.",
"title": ""
},
{
"docid": "27d1b0df0bfa086f1084ccfbffd1a670",
"text": "This paper presents Design of a Microcontroller based Constant Voltage Battery Charger. The circuit is implemented using soft switching buck converter. Solar panels of 75Wp and 37WP are used in parallel for the experimentation and a lead acid battery of 75Ah is used for charging. Microcontroller Atmega16 is used for programming using Win AVR ISP software. It is observed that during 10AM to 2PM, on 1may2012 when there is enough solar radiation at Nagpur, charging current of the battery is almost 7 to 8A.Time taken for charging the battery is 8 to 10 hours depending upon the intensity of solar radiation. The merits of the proposed charger are, highly efficient, simple to design mostly due to not having a transformer, puts minimal stress on the switch, and requires a relatively small output filter for low output ripple.",
"title": ""
},
{
"docid": "9cf81f7fc9fdfcf5718aba0a67b89a45",
"text": "Many modern games provide environments in which agents perform decision making at several levels of granularity. In the domain of real-time strategy games, an effective agent must make high-level strategic decisions while simultaneously controlling individual units in battle. We advocate reactive planning as a powerful technique for building multi-scale game AI and demonstrate that it enables the specification of complex, real-time agents in a unified agent architecture. We present several idioms used to enable authoring of an agent that concurrently pursues strategic and tactical goals, and an agent for playing the real-time strategy game StarCraft that uses these design patterns.",
"title": ""
},
{
"docid": "5e09b2302bc3dc9ca6ae8f4a3812ec1d",
"text": "Learning to Reconstruct 3D Objects",
"title": ""
},
{
"docid": "df09834abe25199ac7b3205d657fffb2",
"text": "In modern wireless communications products it is required to incorporate more and more different functions to comply with current market trends. A very attractive function with steadily growing market penetration is local positioning. To add this feature to low-cost mass-market devices without additional power consumption, it is desirable to use commercial communication chips and standards for localization of the wireless units. In this paper we present a concept to measure the distance between two IEEE 802.15.4 (ZigBee) compliant devices. The presented prototype hardware consists of a low- cost 2.45 GHz ZigBee chipset. For localization we use standard communication packets as transmit signals. Thus simultaneous data transmission and transponder localization is feasible. To achieve high positioning accuracy even in multipath environments, a coherent synthesis of measurements in multiple channels and a special signal phase evaluation concept is applied. With this technique the full available ISM bandwidth of 80 MHz is utilized. In first measurements with two different frequency references-a low-cost oscillator and a temperatur-compensated crystal oscillator-a positioning bias error of below 16 cm and 9 cm was obtained. The standard deviation was less than 3 cm and 1 cm, respectively. It is demonstrated that compared to signal correlation in time, the phase processing technique yields an accuracy improvement of roughly an order of magnitude.",
"title": ""
}
] |
scidocsrr
|
a82c001a054efc065ee2de4c795121ff
|
On the Feasibility of Side-Channel Attacks with Brain-Computer Interfaces
|
[
{
"docid": "ffee60d5f6d862115b7d7d2442e1a1b9",
"text": "Preventing accidents caused by drowsiness has become a major focus of active safety driving in recent years. It requires an optimal technique to continuously detect drivers' cognitive state related to abilities in perception, recognition, and vehicle control in (near-) real-time. The major challenges in developing such a system include: 1) the lack of significant index for detecting drowsiness and 2) complicated and pervasive noise interferences in a realistic and dynamic driving environment. In this paper, we develop a drowsiness-estimation system based on electroencephalogram (EEG) by combining independent component analysis (ICA), power-spectrum analysis, correlation evaluations, and linear regression model to estimate a driver's cognitive state when he/she drives a car in a virtual reality (VR)-based dynamic simulator. The driving error is defined as deviations between the center of the vehicle and the center of the cruising lane in the lane-keeping driving task. Experimental results demonstrate the feasibility of quantitatively estimating drowsiness level using ICA-based multistream EEG spectra. The proposed ICA-based method applied to power spectrum of ICA components can successfully (1) remove most of EEG artifacts, (2) suggest an optimal montage to place EEG electrodes, and estimate the driver's drowsiness fluctuation indexed by the driving performance measure. Finally, we present a benchmark study in which the accuracy of ICA-component-based alertness estimates compares favorably to scalp-EEG based.",
"title": ""
}
] |
[
{
"docid": "f405c62d932eec05c55855eb13ba804c",
"text": "Multilevel converters have been under research and development for more than three decades and have found successful industrial application. However, this is still a technology under development, and many new contributions and new commercial topologies have been reported in the last few years. The aim of this paper is to group and review these recent contributions, in order to establish the current state of the art and trends of the technology, to provide readers with a comprehensive and insightful review of where multilevel converter technology stands and is heading. This paper first presents a brief overview of well-established multilevel converters strongly oriented to their current state in industrial applications to then center the discussion on the new converters that have made their way into the industry. In addition, new promising topologies are discussed. Recent advances made in modulation and control of multilevel converters are also addressed. A great part of this paper is devoted to show nontraditional applications powered by multilevel converters and how multilevel converters are becoming an enabling technology in many industrial sectors. Finally, some future trends and challenges in the further development of this technology are discussed to motivate future contributions that address open problems and explore new possibilities.",
"title": ""
},
{
"docid": "b1bc4cef47dd7ebc2f9d30719b57c5f8",
"text": "This paper discusses our ongoing experiences in developing an interdisciplinary general education course called Sound Thinking that is offered jointly by our Dept. of Computer Science and Dept. of Music. It focuses on the student outcomes we are trying to achieve and the projects we are using to help students realize those outcomes. It explains why we are moving from a web-based environment using HTML and JavaScript to Scratch and discusses the potential for Scratch's \"musical live coding\" capability to reinforce those concepts even more strongly.",
"title": ""
},
{
"docid": "43a84d7fc14e52e93ab2df5db6660a2b",
"text": "The advent of regenerative medicine has brought us the opportunity to regenerate, modify and restore human organs function. Stem cells, a key resource in regenerative medicine, are defined as clonogenic, self-renewing, progenitor cells that can generate into one or more specialized cell types. Stem cells have been classified into three main groups: embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs) and adult/postnatal stem cells (ASCs). The present review focused the attention on ASCs, which have been identified in many perioral tissues such as dental pulp, periodontal ligament, follicle, gingival, alveolar bone and papilla. Human dental pulp stem cells (hDPSCs) are ectodermal-derived stem cells, originating from migrating neural crest cells and possess mesenchymal stem cell properties. During last decade, hDPSCs have received extensive attention in the field of tissue engineering and regenerative medicine due to their accessibility and ability to differentiate in several cell phenotypes. In this review, we have carefully described the potential of hDPSCs to differentiate into odontoblasts, osteocytes/osteoblasts, adipocytes, chondrocytes and neural cells.",
"title": ""
},
{
"docid": "3f36b23dd997649b8df6c7fa7fb73963",
"text": "This paper presents a virtual impedance design and implementation approach for power electronics interfaced distributed generation (DG) units. To improve system stability and prevent power couplings, the virtual impedances can be placed between interfacing converter outputs and the main grid. However, optimal design of the impedance value, robust implementation of the virtual impedance, and proper utilization of the virtual impedance for DG performance enhancement are key for the virtual impedance concept. In this paper, flexible small-signal models of microgrids in different operation modes are developed first. Based on the developed microgrid models, the desired DG impedance range is determined considering the stability, transient response, and power flow performance of DG units. A robust virtual impedance implementation method is also presented, which can alleviate voltage distortion problems caused by harmonic loads compared to the effects of physical impedances. Furthermore, an adaptive impedance concept is proposed to further improve power control performances during the transient and grid faults. Simulation and experimental results are provided to validate the impedance design approach, the virtual impedance implementation method, and the proposed adaptive transient impedance control strategies.",
"title": ""
},
{
"docid": "d72e4df2e396a11ae7130ca7e0b2fb56",
"text": "Advances in location-acquisition and wireless communication technologies have led to wider availability of spatio-temporal (ST) data, which has unique spatial properties (i.e. geographical hierarchy and distance) and temporal properties (i.e. closeness, period and trend). In this paper, we propose a <u>Deep</u>-learning-based prediction model for <u>S</u>patio-<u>T</u>emporal data (DeepST). We leverage ST domain knowledge to design the architecture of DeepST, which is comprised of two components: spatio-temporal and global. The spatio-temporal component employs the framework of convolutional neural networks to simultaneously model spatial near and distant dependencies, and temporal closeness, period and trend. The global component is used to capture global factors, such as day of the week, weekday or weekend. Using DeepST, we build a real-time crowd flow forecasting system called UrbanFlow1. Experiment results on diverse ST datasets verify DeepST's ability to capture ST data's spatio-temporal properties, showing the advantages of DeepST beyond four baseline methods.",
"title": ""
},
{
"docid": "813a3988b84745ec768959d1c98ac0a8",
"text": "To enhance effectiveness, a user's query can be rewritten internally by the search engine in many ways, for example by applying proximity, or by expanding the query with related terms. However, approaches that benefit effectiveness often have a negative impact on efficiency, which has impacts upon the user satisfaction, if the query is excessively slow. In this paper, we propose a novel framework for using the predicted execution time of various query rewritings to select between alternatives on a per-query basis, in a manner that ensures both effectiveness and efficiency. In particular, we propose the prediction of the execution time of ephemeral (e.g., proximity) posting lists generated from uni-gram inverted index posting lists, which are used in establishing the permissible query rewriting alternatives that may execute in the allowed time. Experiments examining both the effectiveness and efficiency of the proposed approach demonstrate that a 49% decrease in mean response time (and 62% decrease in 95th-percentile response time) can be attained without significantly hindering the effectiveness of the search engine.",
"title": ""
},
{
"docid": "6906f5983de48395b043b947b0574d8e",
"text": "As information technology and the popularity of Internet technology and in-depth applications, e-commerce is at unprecedented pace. People become more and more the focus of attention. At present, relatively fast development of e-commerce activities are online sales, online promotions, and online services. Globalization of electronic commerce as the development of enterprises provided many opportunities, but in developing country’s electricity business is still in the initial stage of development, how to improve e-commerce environment of consumer satisfaction, consumer loyalty and thus, related to the electron the performance of business enterprises. Therefore, to the upsurge in e-commerce for more benefits, for many enterprises, there is still need for careful analysis of the business environment electricity consumer behavior, understanding the factors that affect their consumption and thus the basis of network marketing the characteristics of the network setup customer satisfaction evaluation index system, then the theory based on customer satisfaction, on this basis to take corresponding countermeasures, the development of effective and reasonable marketing strategy, E-commerce can improve business performance, and promote the sound development of their self. At present, domestic and international network of scholars on consumer psychology, motivation and behavior have more exposition, however, how in ecommerce environment impact factors of customer satisfaction and how to improve ecommerce customer satisfaction studies are not many see. This article is from the analysis the impact of e-commerce network environment factors in consumer satisfaction.",
"title": ""
},
{
"docid": "957e103d533b3013e24aebd3617edd87",
"text": "The recent success of deep neural networks relies on massive amounts of labeled data. For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. In this paper, we propose a new approach to domain adaptation in deep networks that can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain. We relax a shared-classifier assumption made by previous methods and assume that the source classifier and target classifier differ by a residual function. We enable classifier adaptation by plugging several layers into deep network to explicitly learn the residual function with reference to the target classifier. We fuse features of multiple layers with tensor product and embed them into reproducing kernel Hilbert spaces to match distributions for feature adaptation. The adaptation can be achieved in most feed-forward models by extending them with new residual layers and loss functions, which can be trained efficiently via back-propagation. Empirical evidence shows that the new approach outperforms state of the art methods on standard domain adaptation benchmarks.",
"title": ""
},
{
"docid": "50df49f3c9de66798f89fdeab9d2ae85",
"text": "Predictive modeling is increasingly being employed to assist human decision-makers. One purported advantage of replacing or augmenting human judgment with computer models in high stakes settings– such as sentencing, hiring, policing, college admissions, and parole decisions– is the perceived “neutrality” of computers. It is argued that because computer models do not hold personal prejudice, the predictions they produce will be equally free from prejudice. There is growing recognition that employing algorithms does not remove the potential for bias, and can even amplify it if the training data were generated by a process that is itself biased. In this paper, we provide a probabilistic notion of algorithmic bias. We propose a method to eliminate bias from predictive models by removing all information regarding protected variables from the data to which the models will ultimately be trained. Unlike previous work in this area, our procedure accommodates data on any measurement scale. Motivated by models currently in use in the criminal justice system that inform decisions on pre-trial release and parole, we apply our proposed method to a dataset on the criminal histories of individuals at the time of sentencing to produce “race-neutral” predictions of re-arrest. In the process, we demonstrate that a common approach to creating “race-neutral” models– omitting race as a covariate– still results in racially disparate predictions. We then demonstrate that the application of our proposed method to these data removes racial disparities from predictions with minimal impact on predictive accuracy.",
"title": ""
},
{
"docid": "80752d3e7e5238ae23f90d4eaf492a3c",
"text": "Authorship attribution is associated with important applications in forensics and humanities research. A crucial point in this field is to quantify the personal style of writing, ideally in a way that is not affected by changes in topic or genre. In this paper, we present a novel method that enhances authorship attribution effectiveness by introducing a text distortion step before extracting stylometric measures. The proposed method attempts to mask topicspecific information that is not related to the personal style of authors. Based on experiments on two main tasks in authorship attribution, closed-set attribution and authorship verification, we demonstrate that the proposed approach can enhance existing methods especially under cross-topic conditions, where the training and test corpora do not match in topic.",
"title": ""
},
{
"docid": "c8f10cc90546fe5ffc7ccaabf5d9ccca",
"text": "The game of chess is the longest-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. By contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go by reinforcement learning from self-play. In this paper, we generalize this approach into a single AlphaZero algorithm that can achieve superhuman performance in many challenging games. Starting from random play and given no domain knowledge except the game rules, AlphaZero convincingly defeated a world champion program in the games of chess and shogi (Japanese chess), as well as Go.",
"title": ""
},
{
"docid": "514f8ca4015f7abac2674e209ccc3f51",
"text": "Complex real-world signals, such as images, contain discriminative structures that differ in many aspects including scale, invariance, and data channel. While progress in deep learning shows the importance of learning features through multiple layers, it is equally important to learn features through multiple paths. We propose Multipath Hierarchical Matching Pursuit (M-HMP), a novel feature learning architecture that combines a collection of hierarchical sparse features for image classification to capture multiple aspects of discriminative structures. Our building blocks are MI-KSVD, a codebook learning algorithm that balances the reconstruction error and the mutual incoherence of the codebook, and batch orthogonal matching pursuit (OMP), we apply them recursively at varying layers and scales. The result is a highly discriminative image representation that leads to large improvements to the state-of-the-art on many standard benchmarks, e.g., Caltech-101, Caltech-256, MITScenes, Oxford-IIIT Pet and Caltech-UCSD Bird-200.",
"title": ""
},
{
"docid": "5eeabef9f87bbebcdc9c44a6ceeb1373",
"text": "This paper revisits the classical problem of multi-query optimization in the context of RDF/SPARQL. We show that the techniques developed for relational and semi-structured data/query languages are hard, if not impossible, to be extended to account for RDF data model and graph query patterns expressed in SPARQL. In light of the NP-hardness of the multi-query optimization for SPARQL, we propose heuristic algorithms that partition the input batch of queries into groups such that each group of queries can be optimized together. An essential component of the optimization incorporates an efficient algorithm to discover the common sub-structures of multiple SPARQL queries and an effective cost model to compare candidate execution plans. Since our optimization techniques do not make any assumption about the underlying SPARQL query engine, they have the advantage of being portable across different RDF stores. The extensive experimental studies, performed on three popular RDF stores, show that the proposed techniques are effective, efficient and scalable.",
"title": ""
},
{
"docid": "556e496bd716f46e27c8378066c91521",
"text": "A study is being done into the psychology of crowd behaviour during emergencies, and ways of ensuring safety during mass evacuations by encouraging more altruistic behaviour. Crowd emergencies have previously been understood as involving panic and selfish behaviour. The present study tests the claims that (1) co-operation and altruistic behaviour rather than panic will predominate in mass responses to emergencies, even in situations where there is a clear threat of death; and that this is the case not only because (2) everyday norms and social roles continue to exert an influence, but also because (3) the external threat can create a sense of solidarity amongst strangers. Qualitative analysis of interviews with survivors of different emergencies supports these claims. A second study of the July 7 London bombings is on-going and also supports these claims. While these findings provide support for some existing models of mass emergency evacuation, it also points to the necessity of a new theoretical approach to the phenomena, using Self-Categorization Theory. Practical applications for the future management of crowd emergencies are also considered.",
"title": ""
},
{
"docid": "f32ed82c3ab67c711f50394eea2b9106",
"text": "Concept-to-text generation refers to the task of automatically producing textual output from non-linguistic input. We present a joint model that captures content selection (“what to say”) and surface realization (“how to say”) in an unsupervised domain-independent fashion. Rather than breaking up the generation process into a sequence of local decisions, we define a probabilistic context-free grammar that globally describes the inherent structure of the input (a corpus of database records and text describing some of them). We recast generation as the task of finding the best derivation tree for a set of database records and describe an algorithm for decoding in this framework that allows to intersect the grammar with additional information capturing fluency and syntactic well-formedness constraints. Experimental evaluation on several domains achieves results competitive with state-of-the-art systems that use domain specific constraints, explicit feature engineering or labeled data.",
"title": ""
},
{
"docid": "cdf5426c834a1c904039e66382708467",
"text": "In this paper, we propose a new prototype-based discriminative feature learning (PDFL) method for kinship verification. Unlike most previous kinship verification methods which employ low-level hand-crafted descriptors such as local binary pattern and Gabor features for face representation, this paper aims to learn discriminative mid-level features to better characterize the kin relation of face images for kinship verification. To achieve this, we construct a set of face samples with unlabeled kin relation from the labeled face in the wild dataset as the reference set. Then, each sample in the training face kinship dataset is represented as a mid-level feature vector, where each entry is the corresponding decision value from one support vector machine hyperplane. Subsequently, we formulate an optimization function by minimizing the intraclass samples (with a kin relation) and maximizing the neighboring interclass samples (without a kin relation) with the mid-level features. To better use multiple low-level features for mid-level feature learning, we further propose a multiview PDFL method to learn multiple mid-level features to improve the verification performance. Experimental results on four publicly available kinship datasets show the superior performance of the proposed methods over both the state-of-the-art kinship verification methods and human ability in our kinship verification task.",
"title": ""
},
{
"docid": "a2e0163aebb348d3bfab7ebac119e0c0",
"text": "Herein we report the first study of the oxygen reduction reaction (ORR) catalyzed by a cofacial porphyrin scaffold accessed in high yield (overall 53%) using coordination-driven self-assembly with no chromatographic purification steps. The ORR activity was investigated using chemical and electrochemical techniques on monomeric cobalt(II) tetra(meso-4-pyridyl)porphyrinate (CoTPyP) and its cofacial analogue [Ru8(η6-iPrC6H4Me)8(dhbq)4(CoTPyP)2][OTf]8 (Co Prism) (dhbq = 2,5-dihydroxy-1,4-benzoquinato, OTf = triflate) as homogeneous oxygen reduction catalysts. Co Prism is obtained in one self-assembly step that organizes six total building blocks, two CoTPyP units and four arene-Ru clips, into a cofacial motif previously demonstrated with free-base, Zn(II), and Ni(II) porphyrins. Turnover frequencies (TOFs) from chemical reduction (66 vs 6 h-1) and rate constants of overall homogeneous catalysis (kobs) determined from rotating ring-disk experiments (1.1 vs 0.05 h-1) establish a cofacial enhancement upon comparison of the activities of Co Prism and CoTPyP, respectively. Cyclic voltammetry was used to initially probe the electrochemical catalytic behavior. Rotating ring-disk electrode studies were completed to probe the Faradaic efficiency and obtain an estimate of the rate constant associated with the ORR.",
"title": ""
},
{
"docid": "972ee7027c71364e8fe1894088f79d8a",
"text": "A fully integrated output capacitor-less, nMOS regulation FET low-dropout (LDO) regulator with fast transient response for system-on-chip power regulation applications is presented. The error amplifier (EA) consists of a differential cross-coupled common-gate (CG) input stage achieving twice the transconductance and unity-gain-bandwidth in comparison to a conventional differential common-source stage. The low input resistance of the CG EA improves stability of the LDO over a wide range of load currents. The LDO employs a current-reused dynamic biasing technique to further improve the load transient response, with no extra quiescent current. It is designed and fabricated in a 0.18-<inline-formula> <tex-math notation=\"LaTeX\">${\\mu }\\text{m}$ </tex-math></inline-formula> CMOS technology for an input voltage range of 1.6–1.8 V, and an output voltage range of 1.4–1.6 V. Measured undershoot is 158 mV and settling time is 20 ns for 9–40 mA load change in 250 ps edge-time with zero load capacitance. The LDO core consumes 130 <inline-formula> <tex-math notation=\"LaTeX\">${\\mu }\\text{A}$ </tex-math></inline-formula> of quiescent current, occupies 0.21 mm<sup>2</sup> die area, and sustains 0–50 pF of on-chip load capacitance.",
"title": ""
},
{
"docid": "550a936ec02706a9de94a50abf6f1ac6",
"text": "Motivated by the capability of sparse coding based anomaly detection, we propose a Temporally-coherent Sparse Coding (TSC) where we enforce similar neighbouring frames be encoded with similar reconstruction coefficients. Then we map the TSC with a special type of stacked Recurrent Neural Network (sRNN). By taking advantage of sRNN in learning all parameters simultaneously, the nontrivial hyper-parameter selection to TSC can be avoided, meanwhile with a shallow sRNN, the reconstruction coefficients can be inferred within a forward pass, which reduces the computational cost for learning sparse coefficients. The contributions of this paper are two-fold: i) We propose a TSC, which can be mapped to a sRNN which facilitates the parameter optimization and accelerates the anomaly prediction. ii) We build a very large dataset which is even larger than the summation of all existing dataset for anomaly detection in terms of both the volume of data and the diversity of scenes. Extensive experiments on both a toy dataset and real datasets demonstrate that our TSC based and sRNN based method consistently outperform existing methods, which validates the effectiveness of our method.",
"title": ""
},
{
"docid": "5fc8afbe7d55af3274d849d1576d3b13",
"text": "It is a difficult task to classify images with multiple class labels using only a small number of labeled examples, especially when the label (class) distribution is imbalanced. Emotion classification is such an example of imbalanced label distribution, because some classes of emotions like disgusted are relatively rare comparing to other labels like happy or sad. In this paper, we propose a data augmentation method using generative adversarial networks (GAN). It can complement and complete the data manifold and find better margins between neighboring classes. Specifically, we design a framework using a CNN model as the classifier and a cycle-consistent adversarial networks (CycleGAN) as the generator. In order to avoid gradient vanishing problem, we employ the least-squared loss as adversarial loss. We also propose several evaluation methods on three benchmark datasets to validate GAN’s performance. Empirical results show that we can obtain 5%∼10% increase in the classification accuracy after employing the GAN-based data augmentation techniques.",
"title": ""
}
] |
scidocsrr
|
693e578c14483342cffaa27440f71599
|
Syntax highlighting in business process models
|
[
{
"docid": "8ec4ffa9226b9e6357ba64918f7659e9",
"text": "Purpose – This paper summarizes typical pitfalls as they can be observed in larger process modeling projects. Design/methodology/approach – The identified pitfalls have been derived from a series of focus groups and semi-structured interviews with business process analysts and managers of process management and modeling projects. Findings – The paper provides a list of typical characteristics of unsuccessful process modeling. It covers six pitfalls related to strategy and governance (1-3) and the involved stakeholders (4-6). Further issues related to tools and related requirements (7-10), the practice of modeling (11-16), the way we design to-be models (17-19), and how we deal with success of modeling and maintenance issues (19-21) will be discussed in the second part of this paper. Research limitations/implications – This paper is a personal viewpoint, and does not report on the outcomes of a structured qualitative research project. Practical implications – The provided list of total 22 pitfalls increases the awareness for the main challenges related to process modeling and helps to identify common mistakes. Originality/value – This paper is one of the very few contributions in the area of challenges related to process modeling.",
"title": ""
}
] |
[
{
"docid": "24151cf5d4481ba03e6ffd1ca29f3441",
"text": "The design, fabrication and characterization of 79 GHz slot antennas based on substrate integrated waveguides (SIW) are presented in this paper. All the prototypes are fabricated in a polyimide flex foil using printed circuit board (PCB) fabrication processes. A novel concept is used to minimize the leakage losses of the SIWs at millimeter wave frequencies. Different losses in the SIWs are analyzed. SIW-based single slot antenna, longitudinal and four-by-four slot array antennas are numerically and experimentally studied. Measurements of the antennas show approximately 4.7%, 5.4% and 10.7% impedance bandwidth (S11=-10 dB) with 2.8 dBi, 6.0 dBi and 11.0 dBi maximum antenna gain around 79 GHz, respectively. The measured results are in good agreement with the numerical simulations.",
"title": ""
},
{
"docid": "bb03f7d799b101966b4ea6e75cd17fea",
"text": "Fuzzy decision trees (FDTs) have shown to be an effective solution in the framework of fuzzy classification. The approaches proposed so far to FDT learning, however, have generally neglected time and space requirements. In this paper, we propose a distributed FDT learning scheme shaped according to the MapReduce programming model for generating both binary and multiway FDTs from big data. The scheme relies on a novel distributed fuzzy discretizer that generates a strong fuzzy partition for each continuous attribute based on fuzzy information entropy. The fuzzy partitions are, therefore, used as an input to the FDT learning algorithm, which employs fuzzy information gain for selecting the attributes at the decision nodes. We have implemented the FDT learning scheme on the Apache Spark framework. We have used ten real-world publicly available big datasets for evaluating the behavior of the scheme along three dimensions: 1) performance in terms of classification accuracy, model complexity, and execution time; 2) scalability varying the number of computing units; and 3) ability to efficiently accommodate an increasing dataset size. We have demonstrated that the proposed scheme turns out to be suitable for managing big datasets even with a modest commodity hardware support. Finally, we have used the distributed decision tree learning algorithm implemented in the MLLib library and the Chi-FRBCS-BigData algorithm, a MapReduce distributed fuzzy rule-based classification system, for comparative analysis.",
"title": ""
},
{
"docid": "b43e14cdca5bb58633a8f1530068d9ac",
"text": "Oxygen reduction reaction (ORR) and oxygen evolution reaction (OER) are essential reactions for energy-storage and -conversion devices relying on oxygen electrochemistry. High-performance, nonprecious metal-based hybrid catalysts are developed from postsynthesis integration of dual-phase spinel MnCo2O4 (dp-MnCo2O4) nanocrystals with nanocarbon materials, e.g., carbon nanotube (CNT) and nitrogen-doped reduced graphene oxide (N-rGO). The synergic covalent coupling between dp-MnCo2O4 and nanocarbons effectively enhances both the bifunctional ORR and OER activities of the spinel/nanocarbon hybrid catalysts. The dp-MnCo2O4/N-rGO hybrid catalysts exhibited comparable ORR activity and superior OER activity compared to commercial 30 wt % platinum supported on carbon black (Pt/C). An electrically rechargeable zinc-air battery using dp-MnCo2O4/CNT hybrid catalysts on the cathode was successfully operated for 64 discharge-charge cycles (or 768 h equivalent), significantly outperforming the Pt/C counterpart, which could only survive up to 108 h under similar conditions.",
"title": ""
},
{
"docid": "0742314b8099dce0eadaa12f96579209",
"text": "Smart utility network (SUN) communications are an essential part of the smart grid. Major vendors realized the importance of universal standards and participated in the IEEE802.15.4g standardization effort. Due to the fact that many vendors already have proprietary solutions deployed in the field, the standardization effort was a challenge, but after three years of hard work, the IEEE802.15.4g standard published on April 28th, 2012. The publication of this standard is a first step towards establishing common and consistent communication specifications for utilities deploying smart grid technologies. This paper summaries the technical essence of the standard and how it can be used in smart utility networks.",
"title": ""
},
{
"docid": "0c9a76222f885b95f965211e555e16cd",
"text": "In this paper we address the following question: “Can we approximately sample from a Bayesian posterior distribution if we are only allowed to touch a small mini-batch of data-items for every sample we generate?”. An algorithm based on the Langevin equation with stochastic gradients (SGLD) was previously proposed to solve this, but its mixing rate was slow. By leveraging the Bayesian Central Limit Theorem, we extend the SGLD algorithm so that at high mixing rates it will sample from a normal approximation of the posterior, while for slow mixing rates it will mimic the behavior of SGLD with a pre-conditioner matrix. As a bonus, the proposed algorithm is reminiscent of Fisher scoring (with stochastic gradients) and as such an efficient optimizer during burn-in.",
"title": ""
},
{
"docid": "65031bb814a4812e499a8906d3a67fc4",
"text": "The training process in industries is assisted with computer solutions to reduce costs. Normally, computer systems created to simulate assembly or machine manipulation are implemented with traditional Human-Computer interfaces (keyboard, mouse, etc). But, this usually leads to systems that are far from the real procedures, and thus not efficient in term of training. Two techniques could improve this procedure: mixed-reality and haptic feedback. We propose in this paper to investigate the integration of both of them inside a single framework. We present the hardware used to design our training system. A feasibility study allows one to establish testing protocol. The results of these tests convince us that such system should not try to simulate realistically the interaction between real and virtual objects as if it was only real objects.",
"title": ""
},
{
"docid": "d11d6df22b5c6212b27dad4e3ed96826",
"text": "We propose learning sentiment-specific word embeddings dubbed sentiment embeddings in this paper. Existing word embedding learning algorithms typically only use the contexts of words but ignore the sentiment of texts. It is problematic for sentiment analysis because the words with similar contexts but opposite sentiment polarity, such as good and bad, are mapped to neighboring word vectors. We address this issue by encoding sentiment information of texts (e.g., sentences and words) together with contexts of words in sentiment embeddings. By combining context and sentiment level evidences, the nearest neighbors in sentiment embedding space are semantically similar and it favors words with the same sentiment polarity. In order to learn sentiment embeddings effectively, we develop a number of neural networks with tailoring loss functions, and collect massive texts automatically with sentiment signals like emoticons as the training data. Sentiment embeddings can be naturally used as word features for a variety of sentiment analysis tasks without feature engineering. We apply sentiment embeddings to word-level sentiment analysis, sentence level sentiment classification, and building sentiment lexicons. Experimental results show that sentiment embeddings consistently outperform context-based embeddings on several benchmark datasets of these tasks. This work provides insights on the design of neural networks for learning task-specific word embeddings in other natural language processing tasks.",
"title": ""
},
{
"docid": "17c5f3ca9171cabddc13a6c0ad00e040",
"text": "Contingency planning is the first stage in developing a formal set of production planning and control activities for the reuse of products obtained via return flows in a closed-loop supply chain. The paper takes a contingency approach to explore the factors that impact production planning and control for closed-loop supply chains that incorporate product recovery. A series of three cases are presented, and a framework developed that shows the common activities required for all remanufacturing operations. To build on the similarities and illustrate and integrate the differences in closed-loop supply chains, Hayes and Wheelwright’s product–process matrix is used as a foundation to examine the three cases representing Remanufacture-to-Stock (RMTS), Reassemble-to-Order (RATO), and Remanufacture-to-Order (RMTO). These three cases offer end-points and an intermediate point for closed-loop supply operations. Since they represent different positions on the matrix, characteristics such as returns volume, timing, quality, product complexity, test and evaluation complexity, and remanufacturing complexity are explored. With a contingency theory for closed-loop supply chains that incorporate product recovery in place, past cases can now be reexamined and the potential for generalizability of the approach to similar types of other problems and applications can be assessed and determined. © 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "f82a9c15e88ba24dbf8f5d4678b8dffd",
"text": "Numerous existing object segmentation frameworks commonly utilize the object bounding box as a prior. In this paper, we address semantic segmentation assuming that object bounding boxes are provided by object detectors, but no training data with annotated segments are available. Based on a set of segment hypotheses, we introduce a simple voting scheme to estimate shape guidance for each bounding box. The derived shape guidance is used in the subsequent graph-cut-based figure-ground segmentation. The final segmentation result is obtained by merging the segmentation results in the bounding boxes. We conduct an extensive analysis of the effect of object bounding box accuracy. Comprehensive experiments on both the challenging PASCAL VOC object segmentation dataset and GrabCut-50 image segmentation dataset show that the proposed approach achieves competitive results compared to previous detection or bounding box prior based methods, as well as other state-of-the-art semantic segmentation methods.",
"title": ""
},
{
"docid": "609bbd3b066cf7a56d11ea545c0b0e71",
"text": "Subgingival margins are often required for biologic, mechanical, or esthetic reasons. Several investigations have demonstrated that their use is associated with adverse periodontal reactions, such as inflammation or recession. The purpose of this prospective randomized clinical study was to determine if two different subgingival margin designs influence the periodontal parameters and patient perception. Deep chamfer and feather-edge preparations were compared on 58 patients with 6 months follow-up. Statistically significant differences were present for bleeding on probing, gingival recession, and patient satisfaction. Feather-edge preparation was associated with increased bleeding on probing and deep chamfer with increased recession; improved patient comfort was registered with chamfer margin design. Subgingival margins are technique sensitive, especially when feather-edge design is selected. This margin design may facilitate soft tissue stability but can expose the patient to an increased risk of gingival inflammation.",
"title": ""
},
{
"docid": "c8cd0f14edee76888e4f1fd0ccc72dfa",
"text": "BACKGROUND\nTotal hip and total knee arthroplasties are well accepted as reliable and suitable surgical procedures to return patients to function. Health-related quality-of-life instruments have been used to document outcomes in order to optimize the allocation of resources. The objective of this study was to review the literature regarding the outcomes of total hip and knee arthroplasties as evaluated by health-related quality-of-life instruments.\n\n\nMETHODS\nThe Medline and EMBASE medical literature databases were searched, from January 1980 to June 2003, to identify relevant studies. Studies were eligible for review if they met the following criteria: (1). the language was English or French, (2). at least one well-validated and self-reported health-related quality of life instrument was used, and (3). a prospective cohort study design was used.\n\n\nRESULTS\nOf the seventy-four studies selected for the review, thirty-two investigated both total hip and total knee arthroplasties, twenty-six focused on total hip arthroplasty, and sixteen focused on total knee arthroplasty exclusively. The most common diagnosis was osteoarthritis. The duration of follow-up ranged from seven days to seven years, with the majority of studies describing results at six to twelve months. The Short Form-36 and the Western Ontario and McMaster University Osteoarthritis Index, the most frequently used instruments, were employed in forty and twenty-eight studies, respectively. Seventeen studies used a utility index. Overall, total hip and total knee arthroplasties were found to be quite effective in terms of improvement in health-related quality-of-life dimensions, with the occasional exception of the social dimension. Age was not found to be an obstacle to effective surgery, and men seemed to benefit more from the intervention than did women. When improvement was found to be modest, the role of comorbidities was highlighted. Total hip arthroplasty appears to return patients to function to a greater extent than do knee procedures, and primary surgery offers greater improvement than does revision. Patients who had poorer preoperative health-related quality of life were more likely to experience greater improvement.\n\n\nCONCLUSIONS\nHealth-related quality-of-life data are valuable, can provide relevant health-status information to health professionals, and should be used as a rationale for the implementation of the most adequate standard of care. Additional knowledge and scientific dissemination of surgery outcomes should help to ensure better management of patients undergoing total hip or total knee arthroplasty and to optimize the use of these procedures.",
"title": ""
},
{
"docid": "82f8bfc9bb01105ccab46005d3df18d7",
"text": "This paper presents a comparative study of different classification methodologies for the task of fine-art genre classification. 2-level comparative study is performed for this classification problem. 1st level reviews the performance of discriminative vs. generative models while 2nd level touches the features aspect of the paintings and compares semantic-level features vs low-level and intermediate level features present in the painting.",
"title": ""
},
{
"docid": "5fb640a9081f72fcf994b1691470d7bc",
"text": "Omnidirectional cameras are widely used in such areas as robotics and virtual reality as they provide a wide field of view. Their images are often processed with classical methods, which might unfortunately lead to non-optimal solutions as these methods are designed for planar images that have different geometrical properties than omnidirectional ones. In this paper we study image classification task by taking into account the specific geometry of omnidirectional cameras with graph-based representations. In particular, we extend deep learning architectures to data on graphs; we propose a principled way of graph construction such that convolutional filters respond similarly for the same pattern on different positions of the image regardless of lens distortions. Our experiments show that the proposed method outperforms current techniques for the omnidirectional image classification problem.",
"title": ""
},
{
"docid": "0e5187e6d72082618bd5bda699adab93",
"text": "Many applications of mobile deep learning, especially real-time computer vision workloads, are constrained by computation power. This is particularly true for workloads running on older consumer phones, where a typical device might be powered by a singleor dual-core ARMv7 CPU. We provide an open-source implementation and a comprehensive analysis of (to our knowledge) the state of the art ultra-low-precision (<4 bit precision) implementation of the core primitives required for modern deep learning workloads on ARMv7 devices, and demonstrate speedups of 4x-20x over our additional state-of-the-art float32 and int8 baselines.",
"title": ""
},
{
"docid": "e4f26f4ed55e51fb2a9a55fd0f04ccc0",
"text": "Nowadays, the Web has revolutionized our vision as to how deliver courses in a radically transformed and enhanced way. Boosted by Cloud computing, the use of the Web in education has revealed new challenges and looks forward to new aspirations such as MOOCs (Massive Open Online Courses) as a technology-led revolution ushering in a new generation of learning environments. Expected to deliver effective education strategies, pedagogies and practices, which lead to student success, the massive open online courses, considered as the “linux of education”, are increasingly developed by elite US institutions such MIT, Harvard and Stanford by supplying open/distance learning for large online community without paying any fees, MOOCs have the potential to enable free university-level education on an enormous scale. Nevertheless, a concern often is raised about MOOCs is that a very small proportion of learners complete the course while thousands enrol for courses. In this paper, we present LASyM, a learning analytics system for massive open online courses. The system is a Hadoop based one whose main objective is to assure Learning Analytics for MOOCs’ communities as a mean to help them investigate massive raw data, generated by MOOC platforms around learning outcomes and assessments, and reveal any useful information to be used in designing learning-optimized MOOCs. To evaluate the effectiveness of the proposed system we developed a method to identify, with low latency, online learners more likely to drop out. Keywords—Cloud Computing; MOOCs; Hadoop; Learning",
"title": ""
},
{
"docid": "96669cea810d2918f2d35875f87d45f2",
"text": "In this paper, a new probabilistic tagging method is presented which avoids problems that Markov Model based taggers face, when they have to estimate transition probabilities from sparse data. In this tagging method, transition probabilities are estimated using a decision tree. Based on this method, a part-of-speech tagger (called TreeTagger) has been implemented which achieves 96.36 % accuracy on Penn-Treebank data which is better than that of a trigram tagger (96.06 %) on the same data.",
"title": ""
},
{
"docid": "6649a5635cffce83cc32887e2f6b0b04",
"text": "Alexander Serenko is a Professor at Faculty of Business Administration, Lakehead University, Thunder Bay, Canada. Nick Bontis is an Associate Professor at DeGroote School of Business, McMaster University, Hamilton, Canada. Abstract Purpose – The purpose of this paper is to investigate the impact of exchange modes – negotiated, reciprocal, generalized, and productive – on inter-employee knowledge sharing. Design/methodology/approach – Based on the affect theory of social exchange, a theoretical model was developed and empirically tested using a survey of 691 employees from 15 North American credit unions. Findings – The negotiated mode of knowledge exchange, i.e. when a knowledge contributor explicitly establishes reciprocation conditions with a recipient, develops negative knowledge sharing attitude. The reciprocal mode, i.e. when a knowledge donor assumes that a receiver will reciprocate, has no effect on knowledge sharing attitude. The generalized exchange form, i.e. when a knowledge contributor believes that other organizational members may reciprocate, is weakly related to knowledge sharing attitude. The productive exchange mode, i.e. when a knowledge provider assumes he or she is a responsible citizen within a cooperative enterprise, strongly facilitates the development of knowledge sharing attitude, which, in turn, leads to knowledge sharing intentions. Practical implications – To facilitate inter-employee knowledge sharing, managers should focus on the development of positive knowledge sharing culture when all employees believe they contribute to a common good instead of expecting reciprocal benefits. Originality/value – This is one of the first studies to apply the affect theory of social exchange to study knowledge sharing.",
"title": ""
},
{
"docid": "1e6ea96d9aafb244955ff38423562a1c",
"text": "Many statistical methods rely on numerical optimization to estimate a model’s parameters. Unfortunately, conventional algorithms sometimes fail. Even when they do converge, there is no assurance that they have found the global, rather than a local, optimum. We test a new optimization algorithm, simulated annealing, on four econometric problems and compare it to three common conventional algorithms. Not only can simulated annealing find the global optimum, it is also less likely to fail on difficult functions because it is a very robust algorithm. The promise of simulated annealing is demonstrated on the four econometric problems.",
"title": ""
},
{
"docid": "64770c350dc1d260e24a43760d4e641b",
"text": "A first step in the task of automatically generating questions for testing reading comprehension is to identify questionworthy sentences, i.e. sentences in a text passage that humans find it worthwhile to ask questions about. We propose a hierarchical neural sentence-level sequence tagging model for this task, which existing approaches to question generation have ignored. The approach is fully data-driven — with no sophisticated NLP pipelines or any hand-crafted rules/features — and compares favorably to a number of baselines when evaluated on the SQuAD data set. When incorporated into an existing neural question generation system, the resulting end-to-end system achieves stateof-the-art performance for paragraph-level question generation for reading comprehension.",
"title": ""
}
] |
scidocsrr
|
8b9596c6dc20340812165fbae343f7bf
|
LEAD TIME REDUCTION VIA PREPOSITIONING OF INVENTORY IN AN INDUSTRIAL CONSTRUCTION SUPPLY CHAIN
|
[
{
"docid": "0580342f7efb379fc417d2e5e48c4b73",
"text": "The use of System Dynamics Modeling in Supply Chain Management has only recently re-emerged after a lengthy slack period. Current research on System Dynamics Modelling in supply chain management focuses on inventory decision and policy development, time compression, demand amplification, supply chain design and integration, and international supply chain management. The paper first gives an overview of recent research work in these areas, followed by a discussion of research issues that have evolved, and presents a taxonomy of research and development in System Dynamics Modelling in supply chain management.",
"title": ""
},
{
"docid": "6fd3baefceeb6f341cd5a1c881cab05b",
"text": "Supply chain management (SCM) is a concept that has flourished in manufacturing, originating from Just-In-Time (JIT) production and logistics. Today, SCM represents an autonomous managerial concept, although still largely dominated by logistics. SCM endeavors to observe the entire scope of the supply chain. All issues are viewed and resolved in a supply chain perspective, taking into account the interdependency in the supply chain. SCM offers a methodology to relieve the myopic control in the supply chain that has been reinforcing waste and problems. Construction supply chains are still full of waste and problems caused by myopic control. Comparison of case studies with prior research justifies that waste and problems in construction supply chains are extensively present and persistent, and due to interdependency largely interrelated with causes in other stages of the supply chain. The characteristics of the construction supply chain reinforce the problems in the construction supply chain, and may well hinder the application of SCM to construction. Previous initiatives to advance the construction supply chain have been somewhat partial. The generic methodology offered by SCM contributes to better understanding and resolution of basic problems in construction supply chains, and gives directions for construction supply chain development. The practical solutions offered by SCM, however, have to be developed in construction practice itself, taking into account the specific characteristics and local conditions of construction supply chains.",
"title": ""
}
] |
[
{
"docid": "573dde1b9187a925ddad7e2f1e5102c4",
"text": "Nowadays, the usage of cloud storages to store data is a popular alternative to traditional local storage systems. However, besides the benefits such services can offer, there are also some downsides like vendor lock-in or unavailability. Furthermore, the large number of available providers and their different pricing models can turn the search for the best fitting provider into a tedious and cumbersome task. Furthermore, the optimal selection of a provider may change over time.In this paper, we formalize a system model that uses several cloud storages to offer a redundant storage for data. The according optimization problem considers historic data access patterns and predefined Quality of Service requirements for the selection of the best-fitting storages. Through extensive evaluations we show the benefits of our work and compare the novel approach against a baseline which follows a state-of-the-art approach.",
"title": ""
},
{
"docid": "67825e84cb2e636deead618a0868fa4a",
"text": "Image compression is used specially for the compression of images where tolerable degradation is required. With the wide use of computers and consequently need for large scale storage and transmission of data, efficient ways of storing of data have become necessary. With the growth of technology and entrance into the Digital Age, the world has found itself amid a vast amount of information. Dealing with such enormous information can often present difficulties. Image compression is minimizing the size in bytes of a graphics file without degrading the quality of the image to an unacceptable level. The reduction in file size allows more images to be stored in a given amount of disk or memory space. It also reduces the time required for images to be sent over the Internet or downloaded from Web pages.JPEG and JPEG 2000 are two important techniques used for image compression. In this paper, we discuss about lossy image compression techniques and reviews of different basic lossy image compression methods are considered. The methods such as JPEG and JPEG2000 are considered. A conclusion is derived on the basis of these methods Keywords— Data compression, Lossy image compression, JPEG, JPEG2000, DCT, DWT",
"title": ""
},
{
"docid": "643be78202e4d118e745149ed389b5ef",
"text": "Little clinical research exists on the contribution of the intrinsic foot muscles (IFM) to gait or on the specific clinical evaluation or retraining of these muscles. The purpose of this clinical paper is to review the potential functions of the IFM and their role in maintaining and dynamically controlling the medial longitudinal arch. Clinically applicable methods of evaluation and retraining of these muscles for the effective management of various foot and ankle pain syndromes are discussed.",
"title": ""
},
{
"docid": "0177729f2d7fc610bd8e55a93a93b03b",
"text": "Preference-based recommendation systems have transformed how we consume media. By analyzing usage data, these methods uncover our latent preferences for items (such as articles or movies) and form recommendations based on the behavior of others with similar tastes. But traditional preference-based recommendations do not account for the social aspect of consumption, where a trusted friend might point us to an interesting item that does not match our typical preferences. In this work, we aim to bridge the gap between preference- and social-based recommendations. We develop social Poisson factorization (SPF), a probabilistic model that incorporates social network information into a traditional factorization method; SPF introduces the social aspect to algorithmic recommendation. We develop a scalable algorithm for analyzing data with SPF, and demonstrate that it outperforms competing methods on six real-world datasets; data sources include a social reader and Etsy.",
"title": ""
},
{
"docid": "5c9f03e6f3710005f0e100582849ecc0",
"text": "Fractals have experienced considerable success in quantifying the complex structure exhibited by many natural patterns and have captured the imagination of scientists and artists alike. With ever widening appeal, they have been referred to both as \"fingerprints of nature\" and \"the new aesthetics.\" Our research has shown that the drip patterns of the American abstract painter Jackson Pollock are fractal. In this paper, we consider the implications of this discovery. We first present an overview of our research from the past five years to establish a context for our current investigations of human response to fractals. We discuss results showing that fractal images generated by mathematical, natural and human processes possess a shared aesthetic quality based on visual complexity. In particular, participants in visual perception tests display a preference for fractals with mid-range fractal dimensions. We also present recent preliminary work based on skin conductance measurements that indicate that these mid-range fractals also affect the observer's physiological condition and discuss future directions based on these results.",
"title": ""
},
{
"docid": "d7c0b0261547590d405e118301651b1f",
"text": "This paper reports on the Event StoryLine Corpus (ESC) v0.9, a new benchmark dataset for the temporal and causal relation detection. By developing this dataset, we also introduce a new task, the StoryLine Extraction from news data, which aims at extracting and classifying events relevant for stories, from across news documents spread in time and clustered around a single seminal event or topic. In addition to describing the dataset, we also report on three baselines systems whose results show the complexity of the task and suggest directions for the development of more robust systems.",
"title": ""
},
{
"docid": "d8eb6a9426aafc53411a7b9ecdf63ecb",
"text": "Social media analytics (SMA) uses advanced techniques to analyze patterns in social media data to enable informed and insightful decision-making. It provides organizations with new ways to create value and gain competitive advantage. In this paper, we present a theoretical framework that explains how organizations create value with SMA. We use the framework as a lens for a case study involving a large financial institution (Bankco) that used SMA as a critical component of a major and highly successful marketing campaign. Bankco successfully distinguished itself from its competitors with a marketing campaign based on the creative and innovative use of a number of social media channels. SMA provided Bankco with important insights into customer sentiments, engagement and brand awareness. A number of important lessons learned about effective use of SMA are discussed.",
"title": ""
},
{
"docid": "fc9fe094b3e46a85b7564a89730347fd",
"text": "We present a design study of the TensorFlow Graph Visualizer, part of the TensorFlow machine intelligence platform. This tool helps users understand complex machine learning architectures by visualizing their underlying dataflow graphs. The tool works by applying a series of graph transformations that enable standard layout techniques to produce a legible interactive diagram. To declutter the graph, we decouple non-critical nodes from the layout. To provide an overview, we build a clustered graph using the hierarchical structure annotated in the source code. To support exploration of nested structure on demand, we perform edge bundling to enable stable and responsive cluster expansion. Finally, we detect and highlight repeated structures to emphasize a model's modular composition. To demonstrate the utility of the visualizer, we describe example usage scenarios and report user feedback. Overall, users find the visualizer useful for understanding, debugging, and sharing the structures of their models.",
"title": ""
},
{
"docid": "c63e325e144eccde5ae5382f1ae82817",
"text": "People have self-control problems: We pursue immediate gratification in a way that we ourselves do not appreciate in the long run. Only recently have economists begun to focus on the behavioral and welfare implications of such time-inconsistent preferences. In this paper, we outline a simple formal model of self-control problems, apply this model to some specific economic applications, and discuss some general lessons and open questions in the economic analysis of immediate gratification. We argue that the economic implications of self-control problems depend on the timing of the rewards and costs of an activity, as well as a person's awareness of future self-control problems. We identify situations where knowing about self-control problems can help a person and situations where it can hurt her, and also identify situations where even mild self-control problems can severely damage a person. In the process, we describe some specific implications of self-control problems for addiction, incentive theory, and consumer choice and marketing. Acknowledgments: O'Donoghue thanks the Alfred P. Sloan Foundation and Rabin thanks the Alfred P. Sloan and Russell Sage Foundations for financial support. Some of the research for this project was completed when both authors were visiting the Math Center at Northwestern University. We are extremely grateful for their hospitality, intellectual feedback, and financial support. We thank Erik Eyster for valuable research assistance.",
"title": ""
},
{
"docid": "0eaa21f408844453aab198fbfe666646",
"text": "This article describes the conceptual basis and key elements of a transdisciplinary model for solution-focused coaching in pediatric rehabilitation (SFC-peds). The model exemplifies a strengths-based, relational, and goal-oriented approach to clinical practice. It provides a distinct shift from a problem-oriented, therapist-directed approach to a possibilities-oriented approach where client empowerment takes precedence. The model facilitates client change through a method of working with client strengths and resources that involves the use of strategic questions to co-construct therapy intervention. Through client-therapist collaboration, therapy goals and plans are developed that align with client hopes, priorities, and readiness for change. SFC supports client self-determination and capacity for change through customized therapy goals and plans that are meaningful for the child and family. Implications for therapists include the need for relational expertise, practical coaching skills, and expertise in facilitating change. The need for research on the effectiveness of this approach in pediatric rehabilitation is discussed.",
"title": ""
},
{
"docid": "62fc80e1eb0f22d470286d1b14dd584b",
"text": "This project examines the level of accuracy that can be achieved in precision positioning by using built-in sensors in an Android smartphone. The project is focused in estimating the position of the phone inside a building where the GPS signal is bad or unavailable. The approach is sensor-fusion: by using data from the device’s different sensors, such as accelerometer, gyroscope and wireless adapter, the position is determined. The results show that the technique is promising for future handheld indoor navigation systems that can be used in malls, museums, large office buildings, hospitals, etc.",
"title": ""
},
{
"docid": "d0e51f7cf2731e3ce00b4b53f29e8e5e",
"text": "Loyal customers provide firms a consistent source of revenue (repeat and increased purchases) and for cost reduction (less promotional expenses) that leads to increased profits. Customer loyalty is the result of successful marketing strategy in competitive markets that creates value for consumers. This study examines the mediating role of consumer perceived value in the marketing strategy-customer loyalty relationship. A theoretical framework is established that is supported by empirical evidence. Based on the literature, the findings indicate an inconsistent measure for perceived value that does not fully explain its mediating role. The conclusion is to be valid perceived value should be measured by specific non-monetary scale items.",
"title": ""
},
{
"docid": "8244bb1d75e550beb417049afb1ff9d5",
"text": "Electronically available data on the Web is exploding at an ever increasing pace. Much of this data is unstructured, which makes searching hard and traditional database querying impossible. Many Web documents, however, contain an abundance of recognizable constants that together describe the essence of a document’s content. For these kinds of data-rich, multiple-record documents (e.g. advertisements, movie reviews, weather reports, travel information, sports summaries, financial statements, obituaries, and many others) we can apply a conceptual-modeling approach to extract and structure data automatically. The approach is based on an ontology—a conceptual model instance—that describes the data of interest, including relationships, lexical appearance, and context keywords. By parsing the ontology, we can automatically produce a database scheme and recognizers for constants and keywords, and then invoke routines to recognize and extract data from unstructured documents and structure it according to the generated database scheme. Experiments show that it is possible to achieve good recall and precision ratios for documents that are rich in recognizable constants and narrow in ontological breadth. Our approach is less labor-intensive than other approaches that manually or semiautomatically generate wrappers, and it is generally insensitive to changes in Web-page format.",
"title": ""
},
{
"docid": "39e9fe27f70f54424df1feec453afde3",
"text": "Ontology is a sub-field of Philosophy. It is the study of the nature of existence and a branch of metaphysics concerned with identifying the kinds of things that actually exists and how to describe them. It describes formally a domain of discourse. Ontology is used to capture knowledge about some domain of interest and to describe the concepts in the domain and also to express the relationships that hold between those concepts. Ontology consists of finite list of terms (or important concepts) and the relationships among the terms (or Classes of Objects). Relationships typically include hierarchies of classes. It is an explicit formal specification of conceptualization and the science of describing the kind of entities in the world and how they are related (W3C). Web Ontology Language (OWL) is a language for defining and instantiating web ontologies (a W3C Recommendation). OWL ontology includes description of classes, properties and their instances. OWL is used to explicitly represent the meaning of terms in vocabularies and the relationships between those terms. Such representation of terms and their interrelationships is called ontology. OWL has facilities for expressing meaning and semantics and the ability to represent machine interpretable content on the Web. OWL is designed for use by applications that need to process the content of information instead of just presenting information to humans. This is used for knowledge representation and also is useful to derive logical consequences from OWL formal semantics.",
"title": ""
},
{
"docid": "f5c4b6114795d42d9e5b1f2aec036b43",
"text": "Automotive industry is considered to be one of the main contributors to environmental pollution and global warming. Therefore, many car manufacturers are in near future planning to introduce hybrid electric vehicles (HEV), fuel cell electric vehicles (FCEV) and pure electric vehicles (EV) to make our cars more environmentally friendly. These new vehicles require highly efficient and small power converters. In recent years, considerable improvements were made in designing such converters. In this paper, an approach based on so called Snubber Assisted Zero Voltage and Zero Current Switching topology otherwise also know as SAZZ is presented. This topology has evolved to be one of the leaders in the field of highly efficient converters with high power densities. Evolution and main features of this topology are briefly discussed. Capabilities of the topology are demonstrated on two case study prototypes based on different design approaches. The prototypes are designed to be fully bi-directional for peak power output of 30 kW. Both designs reached efficiencies close to 99 % in wide load range. Power densities over 40 kW/litre are attainable in the same time. Combination of MOSFET technology and SAZZ topology is shown to be very beneficial to converters designed for EV applications.",
"title": ""
},
{
"docid": "58042f8c83e5cc4aa41e136bb4e0dc1f",
"text": "In this paper, we propose wire-free integrated sensors that monitor pulse wave velocity (PWV) and respiration, both non-electrical vital signs, by using an all-electrical method. The key techniques that we employ to obtain all-electrical and wire-free measurement are bio-impedance (BI) and analog-modulated body-channel communication (BCC), respectively. For PWV, time difference between ECG signal from the heart and BI signal from the wrist is measured. To remove wires and avoid sampling rate mismatch between ECG and BI sensors, ECG signal is sent to the BI sensor via analog BCC without any sampling. For respiration measurement, BI sensor is located at the abdomen to detect volume change during inhalation and exhalation. A prototype chip fabricated in 0.11 μm CMOS process consists of ECG, BI sensor and BCC transceiver. Measurement results show that heart rate and PWV are both within their normal physiological range. The chip consumes 1.28 mW at 1.2 V supply while occupying 5 mm×2.5 mm of area.",
"title": ""
},
{
"docid": "b93a949c1c509bf8e5d36a9ec2cb37a5",
"text": "At first glance, agile methods and global software development might seem incompatible. Agile methods stress continuous face-to-face communication, whereas communication has been reported as the biggest problem of global software development. One challenge to solve is how to apply agile practices in settings where continuous face-to-face interaction is missing. However, agile methods have been successfully used in distributed projects, indicating that they could benefit global software development. This paper discusses potential benefits and challenges of adopting agile methods in global software development. The literature on real industrial case studies reporting on experiences of using agile methods in distributed projects is still scarce. Therefore we suggest further research on the topic. We present our plans for research in companies using agile methods in their distributed projects. We also intend to test the use of agile principles in globally distributed student projects developing software for industrial clients",
"title": ""
},
{
"docid": "5cb8b8d4c228d0f75543ae1b4d5a0e5c",
"text": "Clustering is an important data mining task for exploration and visualization of different data types like news stories, scientific publications, weblogs, etc. Due to the evolving nature of these data, evolutionary clustering, also known as dynamic clustering, has recently emerged to cope with the challenges of mining temporally smooth clusters over time. A good evolutionary clustering algorithm should be able to fit the data well at each time epoch, and at the same time results in a smooth cluster evolution that provides the data analyst with a coherent and easily interpretable model. In this paper we introduce the temporal Dirichlet process mixture model (TDPM) as a framework for evolutionary clustering. TDPM is a generalization of the DPM framework for clustering that automatically grows the number of clusters with the data. In our framework, the data is divided into epochs; all data points inside the same epoch are assumed to be fully exchangeable, whereas the temporal order is maintained across epochs. Moreover, The number of clusters in each epoch is unbounded: the clusters can retain, die out or emerge over time, and the actual parameterization of each cluster can also evolve over time in a Markovian fashion. We give a detailed and intuitive construction of this framework using the recurrent Chinese restaurant process (RCRP) metaphor, as well as a Gibbs sampling algorithm to carry out posterior inference in order to determine the optimal cluster evolution. We demonstrate our model over simulated data by using it to build an infinite dynamic mixture of Gaussian factors, and over real dataset by using it to build a simple non-parametric dynamic clustering-topic model and apply it to analyze the NIPS12 document collection.",
"title": ""
},
{
"docid": "a0080a7751287b2ec32409c3cd2e3803",
"text": "Semantic Complex Event Processing (CEP) is a promising approach for analysing streams of social media data in crisis situations. Traditional CEP approaches lack the capability to semantically interpret and analyse data, which Semantic CEP attempts to address, but current approaches have a number of limitations. In this paper we survey four semantic stream processing engines, and discuss them with the specific requirements of CEP for social media monitoring in mind. Current approaches assume well-structured data, known streams and vocabularies, and mainly static event patterns and ontologies, neither of which are realistic assumptions in our scenario. Additionally, the languages commonly used for event pattern detection, i.e., SPARQL extensions, lack several important features that would facilitate more advanced statistical and textual analyses, as well as adequate support for temporal and spatial reasoning. Being able to utilize external tools for processing specific tasks would also be of great value in processing social data streams.",
"title": ""
},
{
"docid": "8f54f2c6e9736a63ea4a99f89090e0a2",
"text": "This article demonstrates how documents prepared in hypertext or word processor format can be saved in portable document format (PDF). These files are self-contained documents that that have the same appearance on screen and in print, regardless of what kind of computer or printer are used, and regardless of what software package was originally used to for their creation. PDF files are compressed documents, invariably smaller than the original files, hence allowing rapid dissemination and download.",
"title": ""
}
] |
scidocsrr
|
950fed6fd9bcb1fe8cfc00a83eda7668
|
MQTT based vehicle accident detection and alert system
|
[
{
"docid": "8cdc70a728191aa25789c6284d581dc0",
"text": "The objective of the smart helmet is to provide a means and apparatus for detecting and reporting accidents. Sensors, Wi-Fi enabled processor, and cloud computing infrastructures are utilised for building the system. The accident detection system communicates the accelerometer values to the processor which continuously monitors for erratic variations. When an accident occurs, the related details are sent to the emergency contacts by utilizing a cloud based service. The vehicle location is obtained by making use of the global positioning system. The system promises a reliable and quick delivery of information relating to the accident in real time and is appropriately named Konnect. Thus, by making use of the ubiquitous connectivity which is a salient feature for the smart cities, a smart helmet for accident detection is built.",
"title": ""
},
{
"docid": "39b072a5adb75eb43561017d53ab6f44",
"text": "The Internet of Things (IoT) is converting the agriculture industry and solving the immense problems or the major challenges faced by the farmers todays in the field. India is one of the 13th countries in the world having scarcity of water resources. Due to ever increasing of world population, we are facing difficulties in the shortage of water resources, limited availability of land, difficult to manage the costs while meeting the demands of increasing consumption needs of a global population that is expected to grow by 70% by the year 2050. The influence of population growth on agriculture leads to a miserable impact on the farmers livelihood. To overcome the problems we design a low cost system for monitoring the agriculture farm which continuously measure the level of soil moisture of the plants and alert the farmers if the moisture content of particular plants is low via sms or an email. This system uses an esp8266 microcontroller and a moisture sensor using Losant platform. Losant is a simple and most powerful IoT cloud platform for the development of coming generation. It offers the real time data visualization of sensors data which can be operate from any part of the world irrespective of the position of field.",
"title": ""
}
] |
[
{
"docid": "4f527bddf622c901a7894ce7cc381ee1",
"text": "Most popular programming languages support situations where a value of one type is converted into a value of another type without any explicit cast. Such implicit type conversions, or type coercions, are a highly controversial language feature. Proponents argue that type coercions enable writing concise code. Opponents argue that type coercions are error-prone and that they reduce the understandability of programs. This paper studies the use of type coercions in JavaScript, a language notorious for its widespread use of coercions. We dynamically analyze hundreds of programs, including real-world web applications and popular benchmark programs. We find that coercions are widely used (in 80.42% of all function executions) and that most coercions are likely to be harmless (98.85%). Furthermore, we identify a set of rarely occurring and potentially harmful coercions that safer subsets of JavaScript or future language designs may want to disallow. Our results suggest that type coercions are significantly less evil than commonly assumed and that analyses targeted at real-world JavaScript programs must consider coercions. 1998 ACM Subject Classification D.3.3 Language Constructs and Features, F.3.2 Semantics of Programming Languages, D.2.8 Metrics",
"title": ""
},
{
"docid": "bc8592866537b13cac47abe621a90d03",
"text": "In the previous paper Ralph Brodd and Martin Winter described the different kinds of batteries and fuel cells. In this paper I will describe lithium batteries in more detail, building an overall foundation for the papers that follow which describe specific components in some depth and usually with an emphasis on the materials behavior. The lithium battery industry is undergoing rapid expansion, now representing the largest segment of the portable battery industry and dominating the computer, cell phone, and camera power source industry. However, the present secondary batteries use expensive components, which are not in sufficient supply to allow the industry to grow at the same rate in the next decade. Moreover, the safety of the system is questionable for the large-scale batteries needed for hybrid electric vehicles (HEV). Another battery need is for a high-power system that can be used for power tools, where only the environmentally hazardous Ni/ Cd battery presently meets the requirements. A battery is a transducer that converts chemical energy into electrical energy and vice versa. It contains an anode, a cathode, and an electrolyte. The anode, in the case of a lithium battery, is the source of lithium ions. The cathode is the sink for the lithium ions and is chosen to optimize a number of parameters, discussed below. The electrolyte provides for the separation of ionic transport and electronic transport, and in a perfect battery the lithium ion transport number will be unity in the electrolyte. The cell potential is determined by the difference between the chemical potential of the lithium in the anode and cathode, ∆G ) -EF. As noted above, the lithium ions flow through the electrolyte whereas the electrons generated from the reaction, Li ) Li+ + e-, go through the external circuit to do work. Thus, the electrode system must allow for the flow of both lithium ions and electrons. That is, it must be both a good ionic conductor and an electronic conductor. As discussed below, many electrochemically active materials are not good electronic conductors, so it is necessary to add an electronically conductive material such as carbon * To whom correspondence should be addressed. Phone and fax: (607) 777-4623. E-mail: stanwhit@binghamton.edu. 4271 Chem. Rev. 2004, 104, 4271−4301",
"title": ""
},
{
"docid": "59e02bc986876edc0ee0a97fd4d12a28",
"text": "CONTEXT\nSocial anxiety disorder is thought to involve emotional hyperreactivity, cognitive distortions, and ineffective emotion regulation. While the neural bases of emotional reactivity to social stimuli have been described, the neural bases of emotional reactivity and cognitive regulation during social and physical threat, and their relationship to social anxiety symptom severity, have yet to be investigated.\n\n\nOBJECTIVE\nTo investigate behavioral and neural correlates of emotional reactivity and cognitive regulation in patients and controls during processing of social and physical threat stimuli.\n\n\nDESIGN\nParticipants were trained to implement cognitive-linguistic regulation of emotional reactivity induced by social (harsh facial expressions) and physical (violent scenes) threat while undergoing functional magnetic resonance imaging and providing behavioral ratings of negative emotion experience.\n\n\nSETTING\nAcademic psychology department.\n\n\nPARTICIPANTS\nFifteen adults with social anxiety disorder and 17 demographically matched healthy controls.\n\n\nMAIN OUTCOME MEASURES\nBlood oxygen level-dependent signal and negative emotion ratings.\n\n\nRESULTS\nBehaviorally, patients reported greater negative emotion than controls during social and physical threat but showed equivalent reduction in negative emotion following cognitive regulation. Neurally, viewing social threat resulted in greater emotion-related neural responses in patients than controls, with social anxiety symptom severity related to activity in a network of emotion- and attention-processing regions in patients only. Viewing physical threat produced no between-group differences. Regulation during social threat resulted in greater cognitive and attention regulation-related brain activation in controls compared with patients. Regulation during physical threat produced greater cognitive control-related response (ie, right dorsolateral prefrontal cortex) in patients compared with controls.\n\n\nCONCLUSIONS\nCompared with controls, patients demonstrated exaggerated negative emotion reactivity and reduced cognitive regulation-related neural activation, specifically for social threat stimuli. These findings help to elucidate potential neural mechanisms of emotion regulation that might serve as biomarkers for interventions for social anxiety disorder.",
"title": ""
},
{
"docid": "af9e3268901a46967da226537eba3cb6",
"text": "Magnetic Resonance Imaging (MRI) is a non-invasive diagnostic tool very frequently used for brain 8 imaging. The classification of MRI images of normal and pathological brain conditions pose a challenge from 9 technological and clinical point of view, since MR imaging focuses on soft tissue anatomy and generates a large 10 information set and these can act as a mirror reflecting the conditions of the brain. A new approach by 11 integrating wavelet entropy based spider web plots and probabilistic neural network is proposed for the 12 classification of MRI brain images. The two step method for classification uses (1) wavelet entropy based spider 13 web plots for the feature extraction and (2) probabilistic neural network for the classification. The spider web 14 plot is a geometric construction drawn using the entropy of the wavelet approximation components and the areas 15 calculated are used as feature set for classification. Probabilistic neural network provides a general solution to 16 the pattern classification problems and the classification accuracy is found to be 100%. 17 Keywords-Magnetic Resonance Imaging (MRI), Wavelet Transformation, Entropy, Spider Web Plots, 18 Probabilistic Neural Network 19",
"title": ""
},
{
"docid": "cebcd53ef867abb158445842cd0f4daf",
"text": "Let [ be a random variable over a finite set with an arbitrary probability distribution. In this paper we make improvements to a fast method of generating sample values for ( in constant time.",
"title": ""
},
{
"docid": "9df51d2e5755caa355869dacb90544c2",
"text": "Deep learning frameworks have been widely deployed on GPU servers for deep learning applications in both academia and industry. In training deep neural networks (DNNs), there are many standard processes or algorithms, such as convolution and stochastic gradient descent (SGD), but the running performance of different frameworks might be different even running the same deep model on the same GPU hardware. In this study, we evaluate the running performance of four state-of-the-art distributed deep learning frameworks (i.e., Caffe-MPI, CNTK, MXNet, and TensorFlow) over single-GPU, multi-GPU, and multi-node environments. We first build performance models of standard processes in training DNNs with SGD, and then we benchmark the running performance of these frameworks with three popular convolutional neural networks (i.e., AlexNet, GoogleNet and ResNet-50), after that, we analyze what factors that result in the performance gap among these four frameworks. Through both analytical and experimental analysis, we identify bottlenecks and overheads which could be further optimized. The main contribution is that the proposed performance models and the analysis provide further optimization directions in both algorithmic design and system configuration.",
"title": ""
},
{
"docid": "912fb50be7a37154259ad3d7f5c4194f",
"text": "This paper presents a novel single-ended disturb-free 9T subthreshold SRAM cell with cross-point data-aware Write word-line structure. The disturb-free feature facilitates bit-interleaving architecture, which can reduce multiple-bit upsets in a single word and enhance soft error immunity by employing Error Checking and Correction (ECC) technique. The proposed 9T SRAM cell is demonstrated by a 72 Kb SRAM macro with a Negative Bit-Line (NBL) Write-assist and an adaptive Read operation timing tracing circuit implemented in 65 nm low-leakage CMOS technology. Measured full Read and Write functionality is error free with VDD down to 0.35 V ( 0.15 V lower than the threshold voltage) with 229 KHz frequency and 4.05 μW power. Data is held down to 0.275 V with 2.29 μW Standby power. The minimum energy per operation is 4.5 pJ at 0.5 V. The 72 Kb SRAM macro has wide operation range from 1.2 V down to 0.35 V, with operating frequency of around 200 MHz for VDD around/above 1.0 V.",
"title": ""
},
{
"docid": "7f6e03069810f9d7ef68d6a775b8849b",
"text": "For more than a century, the déjà vu experience has been examined through retrospective surveys, prospective surveys, and case studies. About 60% of the population has experienced déjà vu, and its frequency decreases with age. Déjà vu appears to be associated with stress and fatigue, and it shows a positive relationship with socioeconomic level and education. Scientific explanations of déjà vu fall into 4 categories: dual processing (2 cognitive processes momentarily out of synchrony), neurological (seizure, disruption in neuronal transmission), memory (implicit familiarity of unrecognized stimuli),and attentional (unattended perception followed by attended perception). Systematic research is needed on the prevalence and etiology of this culturally familiar cognitive experience, and several laboratory models may help clarify this illusion of recognition.",
"title": ""
},
{
"docid": "304b4cee4006e87fc4172a3e9de88ed1",
"text": "Recently, graph neural networks (GNNs) have revolutionized the field of graph representation learning through effectively learned node embeddings, and achieved state-of-the-art results in tasks such as node classification and link prediction. However, current GNN methods are inherently flat and do not learn hierarchical representations of graphs—a limitation that is especially problematic for the task of graph classification, where the goal is to predict the label associated with an entire graph. Here we propose DIFFPOOL, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion. DIFFPOOL learns a differentiable soft cluster assignment for nodes at each layer of a deep GNN, mapping nodes to a set of clusters, which then form the coarsened input for the next GNN layer. Our experimental results show that combining existing GNN methods with DIFFPOOL yields an average improvement of 5–10% accuracy on graph classification benchmarks, compared to all existing pooling approaches, achieving a new state-of-the-art on four out of five benchmark data sets.",
"title": ""
},
{
"docid": "68c1aa2e3d476f1f24064ed6f0f07fb7",
"text": "Granuloma annulare is a benign, asymptomatic, self-limited papular eruption found in patients of all ages. The primary skin lesion usually is grouped papules in an enlarging annular shape, with color ranging from flesh-colored to erythematous. The two most common types of granuloma annulare are localized, which typically is found on the lateral or dorsal surfaces of the hands and feet; and disseminated, which is widespread. Localized disease generally is self-limited and resolves within one to two years, whereas disseminated disease lasts longer. Because localized granuloma annulare is self-limited, no treatment other than reassurance may be necessary. There are no well-designed randomized controlled trials of the treatment of granuloma annulare. Treatment recommendations are based on the pathophysiology of the disease, expert opinion, and case reports only. Liquid nitrogen, injected steroids, or topical steroids under occlusion have been recommended for treatment of localized disease. Disseminated granuloma annulare may be treated with one of several systemic therapies such as dapsone, retinoids, niacinamide, antimalarials, psoralen plus ultraviolet A therapy, fumaric acid esters, tacrolimus, and pimecrolimus. Consultation with a dermatologist is recommended because of the possible toxicities of these agents.",
"title": ""
},
{
"docid": "717bea69015f1c2e9f9909c3510c825a",
"text": "To assess the impact of anti-vaccine movements that targeted pertussis whole-cell vaccines, we compared pertussis incidence in countries where high coverage with diphtheria-tetanus-pertussis vaccines (DTP) was maintained (Hungary, the former East Germany, Poland, and the USA) with countries where immunisation was disrupted by anti-vaccine movements (Sweden, Japan, UK, The Russian Federation, Ireland, Italy, the former West Germany, and Australia). Pertussis incidence was 10 to 100 times lower in countries where high vaccine coverage was maintained than in countries where immunisation programs were compromised by anti-vaccine movements. Comparisons of neighbouring countries with high and low vaccine coverage further underscore the efficacy of these vaccines. Given the safety and cost-effectiveness of whole-cell pertussis vaccines, our study shows that, far from being obsolete, these vaccines continue to have an important role in global immunisation.",
"title": ""
},
{
"docid": "c83eefbe2eadfee71db7faf0238c5023",
"text": "Successful prosthesis use is largely dependent on providing patients with high-quality, individualized pre-prosthetic training, ideally completed under the supervision of a trained therapist. Computer-based training systems, or ‘virtual coaches,’ designed to augment rehabilitation training protocols are an emerging area of research and could be a convenient and low-cost alternative to supplement the therapy received by the patient. In this contribution we completed an iterative needs focus group to determine important design elements required for an effective virtual coach software package.",
"title": ""
},
{
"docid": "44380ea0107c22d3f6412456f4533482",
"text": "Shadow memory is used by dynamic program analysis tools to store metadata for tracking properties of application memory. The efficiency of mapping between application memory and shadow memory has substantial impact on the overall performance of such analysis tools. However, traditional memory mapping schemes that work well on 32-bit architectures cannot easily port to 64-bit architectures due to the much larger 64-bit address space.\n This paper presents EMS64, an efficient memory shadowing scheme for 64-bit architectures. By taking advantage of application reference locality and unused regions in the 64-bit address space, EMS64 provides a fast and flexible memory mapping scheme without relying on any underlying platform features or requiring any specific shadow memory size. Our experiments show that EMS64 is able to reduce the runtime shadow memory translation overhead to 81% on average, which almost halves the overhead of the fastest 64-bit shadow memory system we are aware of.",
"title": ""
},
{
"docid": "52d6711ebbafd94ab5404e637db80650",
"text": "At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using Qlearning with an -greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.",
"title": ""
},
{
"docid": "50f896bba89b1906229c5c9800c8ea7b",
"text": "Intra-regional South-South medical tourism is a vastly understudied subject despite its significance in many parts of the Global South. This paper takes issue with the conventional notion of South Africa purely as a high-end \"surgeon and safari\" destination for medical tourists from the Global North. It argues that South-South movement to South Africa for medical treatment is far more significant, numerically and financially, than North-South movement. The general lack of access to medical diagnosis and treatment in SADC countries has led to a growing temporary movement of people across borders to seek help at South African institutions in border towns and in the major cities. These movements are both formal (institutional) and informal (individual) in nature. In some cases, patients go to South Africa for procedures that are not offered in their own countries. In others, patients are referred by doctors and hospitals to South African facilities. But the majority of the movement is motivated by lack of access to basic healthcare at home. The high demand and large informal flow of patients from countries neighbouring South Africa has prompted the South African government to try and formalise arrangements for medical travel to its public hospitals and clinics through inter-country agreements in order to recover the cost of treating non-residents. The danger, for 'disenfranchised' medical tourists who fall outside these agreements, is that medical xenophobia in South Africa may lead to increasing exclusion and denial of treatment. Medical tourism in this region and South-South medical tourism in general are areas that require much additional research.",
"title": ""
},
{
"docid": "e0f66f533c0af19126565160ff423949",
"text": "Antibiotic resistance, prompted by the overuse of antimicrobial agents, may arise from a variety of mechanisms, particularly horizontal gene transfer of virulence and antibiotic resistance genes, which is often facilitated by biofilm formation. The importance of phenotypic changes seen in a biofilm, which lead to genotypic alterations, cannot be overstated. Irrespective of if the biofilm is single microbe or polymicrobial, bacteria, protected within a biofilm from the external environment, communicate through signal transduction pathways (e.g., quorum sensing or two-component systems), leading to global changes in gene expression, enhancing virulence, and expediting the acquisition of antibiotic resistance. Thus, one must examine a genetic change in virulence and resistance not only in the context of the biofilm but also as inextricably linked pathologies. Observationally, it is clear that increased virulence and the advent of antibiotic resistance often arise almost simultaneously; however, their genetic connection has been relatively ignored. Although the complexities of genetic regulation in a multispecies community may obscure a causative relationship, uncovering key genetic interactions between virulence and resistance in biofilm bacteria is essential to identifying new druggable targets, ultimately providing a drug discovery and development pathway to improve treatment options for chronic and recurring infection.",
"title": ""
},
{
"docid": "a3e383cb19c97af5a4e501c7b13d9088",
"text": "Rapid diagnosis and treatment of acute neurological illnesses such as stroke, hemorrhage, and hydrocephalus are critical to achieving positive outcomes and preserving neurologic function—‘time is brain’1–5. Although these disorders are often recognizable by their symptoms, the critical means of their diagnosis is rapid imaging6–10. Computer-aided surveillance of acute neurologic events in cranial imaging has the potential to triage radiology workflow, thus decreasing time to treatment and improving outcomes. Substantial clinical work has focused on computer-assisted diagnosis (CAD), whereas technical work in volumetric image analysis has focused primarily on segmentation. 3D convolutional neural networks (3D-CNNs) have primarily been used for supervised classification on 3D modeling and light detection and ranging (LiDAR) data11–15. Here, we demonstrate a 3D-CNN architecture that performs weakly supervised classification to screen head CT images for acute neurologic events. Features were automatically learned from a clinical radiology dataset comprising 37,236 head CTs and were annotated with a semisupervised natural-language processing (NLP) framework16. We demonstrate the effectiveness of our approach to triage radiology workflow and accelerate the time to diagnosis from minutes to seconds through a randomized, double-blinded, prospective trial in a simulated clinical environment. A deep-learning algorithm is developed to provide rapid and accurate diagnosis of clinical 3D head CT-scan images to triage and prioritize urgent neurological events, thus potentially accelerating time to diagnosis and care in clinical settings.",
"title": ""
},
{
"docid": "75b0a7b0fa0320a3666fb147471dd45f",
"text": "Maximum power densities by air-driven microbial fuel cells (MFCs) are considerably influenced by cathode performance. We show here that application of successive polytetrafluoroethylene (PTFE) layers (DLs), on a carbon/PTFE base layer, to the air-side of the cathode in a single chamber MFC significantly improved coulombic efficiencies (CEs), maximum power densities, and reduced water loss (through the cathode). Electrochemical tests using carbon cloth electrodes coated with different numbers of DLs indicated an optimum increase in the cathode potential of 117 mV with four-DLs, compared to a <10 mV increase due to the carbon base layer alone. In MFC tests, four-DLs was also found to be the optimum number of coatings, resulting in a 171% increase in the CE (from 19.1% to 32%), a 42% increase in the maximum power density (from 538 to 766 mW m ), and measurable water loss was prevented. The increase in CE due is believed to result from the increased power output and the increased operation time (due to a reduction in aerobic degradation of substrate sustained by oxygen diffusion through the cathode). 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f676c503bcf59a8916995a6db3908792",
"text": "Bone tissue engineering has been increasingly studied as an alternative approach to bone defect reconstruction. In this approach, new bone cells are stimulated to grow and heal the defect with the aid of a scaffold that serves as a medium for bone cell formation and growth. Scaffolds made of metallic materials have preferably been chosen for bone tissue engineering applications where load-bearing capacities are required, considering the superior mechanical properties possessed by this type of materials to those of polymeric and ceramic materials. The space holder method has been recognized as one of the viable methods for the fabrication of metallic biomedical scaffolds. In this method, temporary powder particles, namely space holder, are devised as a pore former for scaffolds. In general, the whole scaffold fabrication process with the space holder method can be divided into four main steps: (i) mixing of metal matrix powder and space-holding particles; (ii) compaction of granular materials; (iii) removal of space-holding particles; (iv) sintering of porous scaffold preform. In this review, detailed procedures in each of these steps are presented. Technical challenges encountered during scaffold fabrication with this specific method are addressed. In conclusion, strategies are yet to be developed to address problematic issues raised, such as powder segregation, pore inhomogeneity, distortion of pore sizes and shape, uncontrolled shrinkage and contamination.",
"title": ""
},
{
"docid": "1d82d994635a0bd0137febd74b8c3835",
"text": "research A. Agrawal J. Basak V. Jain R. Kothari M. Kumar P. A. Mittal N. Modani K. Ravikumar Y. Sabharwal R. Sureka Marketing decisions are typically made on the basis of research conducted using direct mailings, mall intercepts, telephone interviews, focused group discussion, and the like. These methods of marketing research can be time-consuming and expensive, and can require a large amount of effort to ensure accurate results. This paper presents a novel approach for conducting online marketing research based on several concepts such as active learning, matched control and experimental groups, and implicit and explicit experiments. These concepts, along with the opportunity provided by the increasing numbers of online shoppers, enable rapid, systematic, and cost-effective marketing research.",
"title": ""
}
] |
scidocsrr
|
b495f664e9f2408e4a338e5dc3c14456
|
Machine learning based handover management for improved QoE in LTE
|
[
{
"docid": "4c50dd5905ce7e1f772e69673abe1094",
"text": "The wireless industry has been experiencing an explosion of data traffic usage in recent years and is now facing an even bigger challenge, an astounding 1000-fold data traffic increase in a decade. The required traffic increase is in bits per second per square kilometer, which is equivalent to bits per second per Hertz per cell × Hertz × cell per square kilometer. The innovations through higher utilization of the spectrum (bits per second per Hertz per cell) and utilization of more bandwidth (Hertz) are quite limited: spectral efficiency of a point-to-point link is very close to the theoretical limits, and utilization of more bandwidth is a very costly solution in general. Hyper-dense deployment of heterogeneous and small cell networks (HetSNets) that increase cells per square kilometer by deploying more cells in a given area is a very promising technique as it would provide a huge capacity gain by bringing small base stations closer to mobile devices. This article presents a holistic view on hyperdense HetSNets, which include fundamental preference in future wireless systems, and technical challenges and recent technological breakthroughs made in such networks. Advancements in modeling and analysis tools for hyper-dense HetSNets are also introduced with some additional interference mitigation and higher spectrum utilization techniques. This article ends with a promising view on the hyper-dense HetSNets to meet the upcoming 1000× data challenge.",
"title": ""
}
] |
[
{
"docid": "a94d8b425aed0ade657aa1091015e529",
"text": "Generative models for source code are an interesting structured prediction problem, requiring to reason about both hard syntactic and semantic constraints as well as about natural, likely programs. We present a novel model for this problem that uses a graph to represent the intermediate state of the generated output. Our model generates code by interleaving grammar-driven expansion steps with graph augmentation and neural message passing steps. An experimental evaluation shows that our new model can generate semantically meaningful expressions, outperforming a range of strong baselines.",
"title": ""
},
{
"docid": "49880a6cad6b00b9dfbd517c6675338e",
"text": "Associations between large cavum septum pellucidum and functional psychosis disorders, especially schizophrenia, have been reported. We report a case of late-onset catatonia associated with enlarged CSP and cavum vergae. A 66-year-old woman was presented with altered mental status and stereotypic movement. She was initially treated with aripiprazole and lorazepam. After 4 weeks, she was treated with electroconvulsive therapy. By 10 treatments, echolalia vanished, and catatonic behavior was alleviated. Developmental anomalies in the midline structure may increase susceptibility to psychosis, even in the elderly.",
"title": ""
},
{
"docid": "dd5a45464936906e7b4c987274c66839",
"text": "Visual analytic systems, especially mixed-initiative systems, can steer analytical models and adapt views by making inferences from users’ behavioral patterns with the system. Because such systems rely on incorporating implicit and explicit user feedback, they are particularly susceptible to the injection and propagation of human biases. To ultimately guard against the potentially negative effects of systems biased by human users, we must first qualify what we mean by the term bias. Thus, in this paper we describe four different perspectives on human bias that are particularly relevant to visual analytics. We discuss the interplay of human and computer system biases, particularly their roles in mixed-initiative systems. Given that the term bias is used to describe several different concepts, our goal is to facilitate a common language in research and development efforts by encouraging researchers to mindfully choose the perspective(s) considered in their work.",
"title": ""
},
{
"docid": "8dc2f16d4f4ed1aa0acf6a6dca0ccc06",
"text": "This is the second paper in a four-part series detailing the relative merits of the treatment strategies, clinical techniques and dental materials for the restoration of health, function and aesthetics for the dentition. In this paper the management of wear in the anterior dentition is discussed, using three case studies as illustration.",
"title": ""
},
{
"docid": "93c928adef35a409acaa9b371a1498f3",
"text": "The acquisition of a new motor skill is characterized first by a short-term, fast learning stage in which performance improves rapidly, and subsequently by a long-term, slower learning stage in which additional performance gains are incremental. Previous functional imaging studies have suggested that distinct brain networks mediate these two stages of learning, but direct comparisons using the same task have not been performed. Here we used a task in which subjects learn to track a continuous 8-s sequence demanding variable isometric force development between the fingers and thumb of the dominant, right hand. Learning-associated changes in brain activation were characterized using functional MRI (fMRI) during short-term learning of a novel sequence, during short-term learning after prior, brief exposure to the sequence, and over long-term (3 wk) training in the task. Short-term learning was associated with decreases in activity in the dorsolateral prefrontal, anterior cingulate, posterior parietal, primary motor, and cerebellar cortex, and with increased activation in the right cerebellar dentate nucleus, the left putamen, and left thalamus. Prefrontal, parietal, and cerebellar cortical changes were not apparent with short-term learning after prior exposure to the sequence. With long-term learning, increases in activity were found in the left primary somatosensory and motor cortex and in the right putamen. Our observations extend previous work suggesting that distinguishable networks are recruited during the different phases of motor learning. While short-term motor skill learning seems associated primarily with activation in a cortical network specific for the learned movements, long-term learning involves increased activation of a bihemispheric cortical-subcortical network in a pattern suggesting \"plastic\" development of new representations for both motor output and somatosensory afferent information.",
"title": ""
},
{
"docid": "088df7d8d71c00f7129d5249844edbc5",
"text": "Intense multidisciplinary research has provided detailed knowledge of the molecular pathogenesis of Alzheimer disease (AD). This knowledge has been translated into new therapeutic strategies with putative disease-modifying effects. Several of the most promising approaches, such as amyloid-β immunotherapy and secretase inhibition, are now being tested in clinical trials. Disease-modifying treatments might be at their most effective when initiated very early in the course of AD, before amyloid plaques and neurodegeneration become too widespread. Thus, biomarkers are needed that can detect AD in the predementia phase or, ideally, in presymptomatic individuals. In this Review, we present the rationales behind and the diagnostic performances of the core cerebrospinal fluid (CSF) biomarkers for AD, namely total tau, phosphorylated tau and the 42 amino acid form of amyloid-β. These biomarkers reflect AD pathology, and are candidate markers for predicting future cognitive decline in healthy individuals and the progression to dementia in patients who are cognitively impaired. We also discuss emerging plasma and CSF biomarkers, and explore new proteomics-based strategies for identifying additional CSF markers. Furthermore, we outline the roles of CSF biomarkers in drug discovery and clinical trials, and provide perspectives on AD biomarker discovery and the validation of such markers for use in the clinic.",
"title": ""
},
{
"docid": "4bbcaa76b20afecc8e6002d155acf23e",
"text": "We study the problem of learning mixtures of distributions, a natural formalization of clustering. A mixture of distributions is a collection of distributionsD = {D1, . . .DT }, andmixing weights , {w1, . . . , wT } such that",
"title": ""
},
{
"docid": "0a842427c2c03d08f9950765ee0fb625",
"text": "For centuries, several hundred pesticides have been used to control insects. These pesticides differ greatly in their mode of action, uptake by the body, metabolism, elimination from the body, and toxicity to humans. Potential exposure from the environment can be estimated by environmental monitoring. Actual exposure (uptake) is measured by the biological monitoring of human tissues and body fluids. Biomarkers are used to detect the effects of pesticides before adverse clinical health effects occur. Pesticides and their metabolites are measured in biological samples, serum, fat, urine, blood, or breast milk by the usual analytical techniques. Biochemical responses to environmental chemicals provide a measure of toxic effect. A widely used biochemical biomarker, cholinesterase depression, measures exposure to organophosphorus insecticides. Techniques that measure DNA damage (e.g., detection of DNA adducts) provide a powerful tool in measuring environmental effects. Adducts to hemoglobin have been detected with several pesticides. Determination of chromosomal aberration rates in cultured lymphocytes is an established method of monitoring populations occupationally or environmentally exposed to known or suspected mutagenic-carcinogenic agents. There are several studies on the cytogenetic effects of work with pesticide formulations. The majority of these studies report increases in the frequency of chromosomal aberrations and/or sister chromatid exchanges among the exposed workers. Biomarkers will have a major impact on the study of environmental risk factors. The basic aim of scientists exploring these issues is to determine the nature and consequences of genetic change or variation, with the ultimate purpose of predicting or preventing disease.",
"title": ""
},
{
"docid": "a7090eb926dee4b648e307559db4fc36",
"text": "Technology incubators are university-based technology initiatives that should facilitate knowledge flows from the university to the incubator firms. We thus investigate the research question of how knowledge actually flows from universities to incubator firms. Moreover, we assess the effect of these knowledge flows on incubator firm-level differential performance. Based on the resource-based view of the firm and the absorptive capacity construct, we advance the overarching hypothesis that knowledge flows should enhance incubator firm performance. Drawing on longitudinal and fine-grained firm-level data of 79 technology ventures incubated between 1998 and 2003 at the Advanced Technology Development Center, a technology incubator sponsored by the Georgia Institute of Technology, we find some support for knowledge flows from universities to incubator firms. Our evidence suggests that incubator firms’ absorptive capacity is an important factor when transforming university knowledge into",
"title": ""
},
{
"docid": "6954c2a51c589987ba7e37bd81289ba1",
"text": "TYAs paper looks at some of the algorithms that can be used for effective detection and tracking of vehicles, in particular for statistical analysis. The main methods for tracking discussed and implemented are blob analysis, optical flow and foreground detection. A further analysis is also done testing two of the techniques using a number of video sequences that include different levels of difficulties.",
"title": ""
},
{
"docid": "7dcc7cdff8a9196c716add8a1faf0203",
"text": "Power modulators for compact, repetitive systems are continually faced with new requirements as the corresponding system objectives increase. Changes in pulse rate frequency or number of pulses significantly impact the design of the power conditioning system. In order to meet future power supply requirements, we have developed several high voltage (HV) capacitor charging power supplies (CCPS). This effort focuses on a volume of 6\" x 6\" x 14\" and a weight of 25 lbs. The primary focus was to increase the effective capacitor charge rate, or power output, for the given size and weight. Although increased power output was the principal objective, efficiency and repeatability were also considered. A number of DC-DC converter topologies were compared to determine the optimal design. In order to push the limits of output power, numerous resonant converter parameters were examined. Comparisons of numerous topologies, HV transformers and rectifiers, and switching frequency ranges are presented. The impacts of the control system and integration requirements are also considered.",
"title": ""
},
{
"docid": "b6983a5ccdac40607949e2bfe2beace2",
"text": "A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as \"p-hacking,\" occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses.",
"title": ""
},
{
"docid": "36a694668a10bc0475f447adb1e09757",
"text": "Previous findings indicated that when people observe someone’s behavior, they spontaneously infer the traits and situations that cause the target person’s behavior. These inference processes are called spontaneous trait inferences (STIs) and spontaneous situation inferences (SSIs). While both patterns of inferences have been observed, no research has examined the extent to which people from different cultural backgrounds produce these inferences when information affords both trait and situation inferences. Based on the theoretical frameworks of social orientations and thinking styles, we hypothesized that European Canadians would be more likely to produce STIs than SSIs because of the individualistic/independent social orientation and the analytic thinking style dominant in North America, whereas Japanese would produce both STIs and SSIs equally because of the collectivistic/interdependent social orientation and the holistic thinking style dominant in East Asia. Employing the savings-in-relearning paradigm, we presented information that affords both STIs and SSIs and examined cultural differences in the extent of both inferences. The results supported our hypotheses. The relationships between culturally dominant styles of thought and the inference processes in impression formation are discussed.",
"title": ""
},
{
"docid": "9d60842315ad481ac55755160a581d74",
"text": "This paper presents an efficient DNN design with stochastic computing. Observing that directly adopting stochastic computing to DNN has some challenges including random error fluctuation, range limitation, and overhead in accumulation, we address these problems by removing near-zero weights, applying weight-scaling, and integrating the activation function with the accumulator. The approach allows an easy implementation of early decision termination with a fixed hardware design by exploiting the progressive precision characteristics of stochastic computing, which was not easy with existing approaches. Experimental results show that our approach outperforms the conventional binary logic in terms of gate area, latency, and power consumption.",
"title": ""
},
{
"docid": "989bdb2cf2e2587b854d8411f945d4fe",
"text": "In this paper, we propose a combination of mean-shift-based tracking processes to establish migrating cell trajectories through in vitro phase-contrast video microscopy. After a recapitulation on how the mean-shift algorithm permits efficient object tracking we describe the proposed extension and apply it to the in vitro cell tracking problem. In this application, the cells are unmarked (i.e., no fluorescent probe is used) and are observed under classical phase-contrast microscopy. By introducing an adaptive combination of several kernels, we address several problems such as variations in size and shape of the tracked objects (e.g., those occurring in the case of cell membrane extensions), the presence of incomplete (or noncontrasted) object boundaries, partially overlapping objects and object splitting (in the case of cell divisions or mitoses). Comparing the tracking results automatically obtained to those generated manually by a human expert, we tested the stability of the different algorithm parameters and their effects on the tracking results. We also show how the method is resistant to a decrease in image resolution and accidental defocusing (which may occur during long experiments, e.g., dozens of hours). Finally, we applied our methodology on cancer cell tracking and showed that cytochalasin-D significantly inhibits cell motility.",
"title": ""
},
{
"docid": "34e6ff966bead1eb91d1f21209cf992c",
"text": "UR robotic arms are from a series of lightweight, fast, easy to program, flexible, and safe robotic arms with 6 degrees of freedom. The fairly open control structure and low level programming access with high control bandwidth have made them of interest for many researchers. This paper presents a complete set of mathematical kinematic and dynamic, Matlab, and Simmechanics models for the UR5 robot. The accuracy of the developed mathematical models are demonstrated through kinematic and dynamic analysis. The Simmechanics model is developed based on these models to provide high quality visualisation of this robot for simulation of it in Matlab environment. The models are developed for public access and readily usable in Matlab environment. A position control system has been developed to demonstrate the use of the models and for cross validation purpose.",
"title": ""
},
{
"docid": "148b7445ec2cd811d64fd81c61c20e02",
"text": "Using sensors to measure parameters of interest in rotating environments and communicating the measurements in real-time over wireless links, requires a reliable power source. In this paper, we have investigated the possibility to generate electric power locally by evaluating six different energy-harvesting technologies. The applicability of the technology is evaluated by several parameters that are important to the functionality in an industrial environment. All technologies are individually presented and evaluated, a concluding table is also summarizing the technologies strengths and weaknesses. To support the technology evaluation on a more theoretical level, simulations has been performed to strengthen our claims. Among the evaluated and simulated technologies, we found that the variable reluctance-based harvesting technology is the strongest candidate for further technology development for the considered use-case.",
"title": ""
},
{
"docid": "ad0fb1877ac6323a6f17f885295517bc",
"text": "In current business practice, an integrated approach to business and IT is indispensable. Take for example a company that needs to assess the impact of introducing a new product in its portfolio. This may require defining additional business processes, hiring extra personnel, changing the supporting applications, and augmenting the technological infrastructure to support the additional load of these applications. Perhaps this may even require a change of the organizational structure.",
"title": ""
},
{
"docid": "7fe6bba98c9d3bda246d5cc40c62c27d",
"text": "A large proportion of online comments present on public domains are usually constructive, however a significant proportion are toxic in nature. The comments contain lot of typos which increases the number of features manifold, making the ML model difficult to train. Considering the fact that the data scientists spend approximately 80% of their time in collecting, cleaning and organizing their data [1], we explored how much effort should we invest in the preprocessing (transformation) of raw comments before feeding it to the state-of-the-art classification models. With the help of four models on Jigsaw toxic comment classification data, we demonstrated that the training of model without any transformation produce relatively decent model. Applying even basic transformations, in some cases, lead to worse performance and should be applied with caution.",
"title": ""
},
{
"docid": "e9497a16e9d12ea837c7a0ec44d71860",
"text": "This article surveys existing and emerging disaggregation techniques for energy-consumption data and highlights signal features that might be used to sense disaggregated data in an easily installed and cost-effective manner.",
"title": ""
}
] |
scidocsrr
|
0f5f826dced62cc765fa9d8b491c14d9
|
Big Data for Industry 4.0: A Conceptual Framework
|
[
{
"docid": "b206a5f5459924381ef6c46f692c7052",
"text": "The Konstanz Information Miner is a modular environment, which enables easy visual assembly and interactive execution of a data pipeline. It is designed as a teaching, research and collaboration platform, which enables simple integration of new algorithms and tools as well as data manipulation or visualization methods in the form of new modules or nodes. In this paper we describe some of the design aspects of the underlying architecture, briey sketch how new nodes can be incorporated, and highlight some of the new features of version 2.0.",
"title": ""
}
] |
[
{
"docid": "160fefce1158a9a70a61869d54c4c39a",
"text": "We present a new approach for efficient approximate nearest neighbor (ANN) search in high dimensional spaces, extending the idea of Product Quantization. We propose a two level product and vector quantization tree that reduces the number of vector comparisons required during tree traversal. Our approach also includes a novel highly parallelizable re-ranking method for candidate vectors by efficiently reusing already computed intermediate values. Due to its small memory footprint during traversal the method lends itself to an efficient, parallel GPU implementation. This Product Quantization Tree (PQT) approach significantly outperforms recent state of the art methods for high dimensional nearest neighbor queries on standard reference datasets. Ours is the first work that demonstrates GPU performance superior to CPU performance on high dimensional, large scale ANN problems in time-critical real-world applications, like loop-closing in videos.",
"title": ""
},
{
"docid": "b1a0a76e73aa5b0a893e50b2fadf0ad2",
"text": "The field of occupational therapy, as with all facets of health care, has been profoundly affected by the changing climate of health care delivery. The combination of cost-effectiveness and quality of care has become the benchmark for and consequent drive behind the rise of managed health care delivery systems. The spawning of outcomes research is in direct response to the need for comparative databases to provide results of effectiveness in health care treatment protocols, evaluations of health-related quality of life, and cost containment measures. Outcomes management is the application of outcomes research data by all levels of health care providers. The challenges facing occupational therapists include proving our value in an economic trend of downsizing, competing within the medical profession, developing and affiliating with new payer sources, and reengineering our careers to meet the needs of the new, nontraditional health care marketplace.",
"title": ""
},
{
"docid": "3f5083aca7cb8952ba5bf421cb34fab6",
"text": "Thyroid gland is butterfly shaped organ which consists of two cone lobes and belongs to the endocrine system. It lies in front of the neck below the adams apple. Thyroid disorders are some kind of abnormalities in thyroid gland which can give rise to nodules like hypothyroidism, hyperthyroidism, goiter, benign and malignant etc. Ultrasound (US) is one among the hugely used modality to detect the thyroid disorders because it has some benefits over other techniques like non-invasiveness, low cost, free of ionizing radiations etc. This paper provides a concise overview about segmentation of thyroid nodules and importance of neural networks comparative to other techniques.",
"title": ""
},
{
"docid": "3c4a8623330c48558ca178a82b68f06c",
"text": "Humans assimilate information from the traffic environment mainly through visual perception. Obviously, the dominant information required to conduct a vehicle can be acquired with visual sensors. However, in contrast to most other sensor principles, video signals contain relevant information in a highly indirect manner and hence visual sensing requires sophisticated machine vision and image understanding techniques. This paper provides an overview on the state of research in the field of machine vision for intelligent vehicles. The functional spectrum addressed covers the range from advanced driver assistance systems to autonomous driving. The organization of the article adopts the typical order in image processing pipelines that successively condense the rich information and vast amount of data in video sequences. Data-intensive low-level “early vision” techniques first extract features that are later grouped and further processed to obtain information of direct relevance for vehicle guidance. Recognition and classification schemes allow to identify specific objects in a traffic scene. Recently, semantic labeling techniques using convolutional neural networks have achieved impressive results in this field. High-level decisions of intelligent vehicles are often influenced by map data. The emerging role of machine vision in the mapping and localization process is illustrated at the example of autonomous driving. Scene representation methods are discussed that organize the information from all sensors and data sources and thus build the interface between perception and planning. Recently, vision benchmarks have been tailored to various tasks in traffic scene perception that provide a metric for the rich diversity of machine vision methods. Finally, the paper addresses computing architectures suited to real-time implementation. Throughout the paper, numerous specific examples and real world experiments with prototype vehicles are presented.",
"title": ""
},
{
"docid": "3adb2815bceb4a3bf11e5d3a595ac098",
"text": "Orientation estimation using low cost sensors is an important task for Micro Aerial Vehicles (MAVs) in order to obtain a good feedback for the attitude controller. The challenges come from the low accuracy and noisy data of the MicroElectroMechanical System (MEMS) technology, which is the basis of modern, miniaturized inertial sensors. In this article, we describe a novel approach to obtain an estimation of the orientation in quaternion form from the observations of gravity and magnetic field. Our approach provides a quaternion estimation as the algebraic solution of a system from inertial/magnetic observations. We separate the problems of finding the \"tilt\" quaternion and the heading quaternion in two sub-parts of our system. This procedure is the key for avoiding the impact of the magnetic disturbances on the roll and pitch components of the orientation when the sensor is surrounded by unwanted magnetic flux. We demonstrate the validity of our method first analytically and then empirically using simulated data. We propose a novel complementary filter for MAVs that fuses together gyroscope data with accelerometer and magnetic field readings. The correction part of the filter is based on the method described above and works for both IMU (Inertial Measurement Unit) and MARG (Magnetic, Angular Rate, and Gravity) sensors. We evaluate the effectiveness of the filter and show that it significantly outperforms other common methods, using publicly available datasets with ground-truth data recorded during a real flight experiment of a micro quadrotor helicopter.",
"title": ""
},
{
"docid": "441633276271b94dc1bd3e5e28a1014d",
"text": "While a large number of consumers in the US and Europe frequently shop on the Internet, research on what drives consumers to shop online has typically been fragmented. This paper therefore proposes a framework to increase researchers’ understanding of consumers’ attitudes toward online shopping and their intention to shop on the Internet. The framework uses the constructs of the Technology Acceptance Model (TAM) as a basis, extended by exogenous factors and applies it to the online shopping context. The review shows that attitudes toward online shopping and intention to shop online are not only affected by ease of use, usefulness, and enjoyment, but also by exogenous factors like consumer traits, situational factors, product characteristics, previous online shopping experiences, and trust in online shopping.",
"title": ""
},
{
"docid": "52cde6191c79d085127045a62deacf31",
"text": "Deep Reinforcement Learning methods have achieved state of the art performance in learning control policies for the games in the Atari 2600 domain. One of the important parameters in the Arcade Learning Environment (ALE, [Bellemare et al., 2013]) is the frame skip rate. It decides the granularity at which agents can control game play. A frame skip value of k allows the agent to repeat a selected action k number of times. The current state of the art architectures like Deep QNetwork (DQN, [Mnih et al., 2015]) and Dueling Network Architectures (DuDQN, [Wang et al., 2015]) consist of a framework with a static frame skip rate, where the action output from the network is repeated for a fixed number of frames regardless of the current state. In this paper, we propose a new architecture, Dynamic Frame skip Deep Q-Network (DFDQN) which makes the frame skip rate a dynamic learnable parameter. This allows us to choose the number of times an action is to be repeated based on the current state. We show empirically that such a setting improves the performance on relatively harder games like Seaquest.",
"title": ""
},
{
"docid": "374ee37f61ec6ff27e592c6a42ee687f",
"text": "Leaf vein forms the basis of leaf characterization and classification. Different species have different leaf vein patterns. It is seen that leaf vein segmentation will help in maintaining a record of all the leaves according to their specific pattern of veins thus provide an effective way to retrieve and store information regarding various plant species in database as well as provide an effective means to characterize plants on the basis of leaf vein structure which is unique for every species. The algorithm proposes a new way of segmentation of leaf veins with the use of Odd Gabor filters and the use of morphological operations for producing a better output. The Odd Gabor filter gives an efficient output and is robust and scalable as compared with the existing techniques as it detects the fine fiber like veins present in leaves much more efficiently.",
"title": ""
},
{
"docid": "b941dc9133a12aad0a75d41112e91aa8",
"text": "Recurrent neural network grammars (RNNG) are a recently proposed probabilistic generative modeling family for natural language. They show state-ofthe-art language modeling and parsing performance. We investigate what information they learn, from a linguistic perspective, through various ablations to the model and the data, and by augmenting the model with an attention mechanism (GA-RNNG) to enable closer inspection. We find that explicit modeling of composition is crucial for achieving the best performance. Through the attention mechanism, we find that headedness plays a central role in phrasal representation (with the model’s latent attention largely agreeing with predictions made by hand-crafted head rules, albeit with some important differences). By training grammars without nonterminal labels, we find that phrasal representations depend minimally on nonterminals, providing support for the endocentricity hypothesis.",
"title": ""
},
{
"docid": "a31ac47cd08fe2ede7192c1ca572076b",
"text": "Pipes are present in most of the infrastructure around us - in refineries, chemical plants, power plants, not to mention sewer, gas and water distribution networks. Inspection of these pipes is extremely important, as failures may result in catastrophic accidents with loss of lives. However, inspection of small pipes (from 3 to 6 inches) is usually neglected or performed only partially due to the lack of satisfactory tools. This paper introduces a new series of robots named PipeTron, developed especially for inspection of pipes in refineries and power plants. The mobility concept and design of each version will be described, follower by results of field deployment and considerations for future improvements.",
"title": ""
},
{
"docid": "c3a3f4128d4268f174f278be4039f7b0",
"text": "Suicide pacts are uncommon and mainly committed by male-female pairs in a consortial relationship. The victims frequently choose methods such as hanging, poisoning, using a firearm, etc; however, a case of a suicide pact by drowning is rare in forensic literature. We report a case where a male and a female, both young adults, in a relationship of adopted \"brother of convenience\" were found drowned in a river. The victims were bound together at their wrists which helped with our conclusion this was a suicide pact. The medico-legal importance of wrist binding in drowning cases is also discussed in this article.",
"title": ""
},
{
"docid": "a1ed789387713c1351b737f28b4c4eb0",
"text": "Matching local geometric features on real-world depth images is a challenging task due to the noisy, low-resolution, and incomplete nature of 3D scan data. These difficulties limit the performance of current state-of-art methods, which are typically based on histograms over geometric properties. In this paper, we present 3DMatch, a data-driven model that learns a local volumetric patch descriptor for establishing correspondences between partial 3D data. To amass training data for our model, we propose a self-supervised feature learning method that leverages the millions of correspondence labels found in existing RGB-D reconstructions. Experiments show that our descriptor is not only able to match local geometry in new scenes for reconstruction, but also generalize to different tasks and spatial scales (e.g. instance-level object model alignment for the Amazon Picking Challenge, and mesh surface correspondence). Results show that 3DMatch consistently outperforms other state-of-the-art approaches by a significant margin. Code, data, benchmarks, and pre-trained models are available online at http://3dmatch.cs.princeton.edu.",
"title": ""
},
{
"docid": "858651d38d25df7f3c9a5e497b5c3dce",
"text": "Identification and recognition of the cephalic vein in the deltopectoral triangle is of critical importance when considering emergency catheterization procedures. The aim of our study was to conduct a cadaveric study to access data regarding the topography and the distribution patterns of the cephalic vein as it relates to the deltopectoral triangle. One hundred formalin fixed cadavers were examined. The cephalic vein was found in 95% (190 right and left) specimens, while in the remaining 5% (10) the cephalic vein was absent. In 80% (152) of cases the cephalic vein was found emerging superficially in the lateral portion of the deltopectoral triangle. In 30% (52) of these 152 cases the cephalic vein received one tributary within the deltopectoral triangle, while in 70% (100) of the specimens it received two. In the remaining 20% (38) of cases the cephalic vein was located deep to the deltopectoral fascia and fat and did not emerge through the deltopectoral triangle but was identified medially to the coracobrachialis and inferior to the medial border of the deltoid. In addition, in 4 (0.2%) of the specimens the cephalic vein, after crossing the deltopectoral triangle, ascended anterior and superior to the clavicle to drain into the subclavian vein. In these specimens a collateral branch was observed to communicate between the cephalic and external jugular veins. In 65.2% (124) of the cases the cephalic vein traveled with the deltoid branch of the thoracoacromial trunk. The length of the cephalic vein within the deltopectoral triangle ranged from 3.5 cm to 8.2 cm with a mean of 4.8+/-0.7 cm. The morphometric analysis revealed a mean cephalic vein diameter of 0.8+/-0.1 cm with a range of 0.1 cm to 1.2 cm. The cephalic vein is relatively large and constant, usually allowing for easy cannulation.",
"title": ""
},
{
"docid": "3a301b11b704e34af05c9072d8353696",
"text": "Attention-deficit hyperactivity disorder (ADHD) is typically characterized as a disorder of inattention and hyperactivity/impulsivity but there is increasing evidence of deficits in motivation. Using positron emission tomography (PET), we showed decreased function in the brain dopamine reward pathway in adults with ADHD, which, we hypothesized, could underlie the motivation deficits in this disorder. To evaluate this hypothesis, we performed secondary analyses to assess the correlation between the PET measures of dopamine D2/D3 receptor and dopamine transporter availability (obtained with [11C]raclopride and [11C]cocaine, respectively) in the dopamine reward pathway (midbrain and nucleus accumbens) and a surrogate measure of trait motivation (assessed using the Achievement scale on the Multidimensional Personality Questionnaire or MPQ) in 45 ADHD participants and 41 controls. The Achievement scale was lower in ADHD participants than in controls (11±5 vs 14±3, P<0.001) and was significantly correlated with D2/D3 receptors (accumbens: r=0.39, P<0.008; midbrain: r=0.41, P<0.005) and transporters (accumbens: r=0.35, P<0.02) in ADHD participants, but not in controls. ADHD participants also had lower values in the Constraint factor and higher values in the Negative Emotionality factor of the MPQ but did not differ in the Positive Emotionality factor—and none of these were correlated with the dopamine measures. In ADHD participants, scores in the Achievement scale were also negatively correlated with symptoms of inattention (CAARS A, E and SWAN I). These findings provide evidence that disruption of the dopamine reward pathway is associated with motivation deficits in ADHD adults, which may contribute to attention deficits and supports the use of therapeutic interventions to enhance motivation in ADHD.",
"title": ""
},
{
"docid": "cfc3d8ee024928151edb5ee2a1d28c13",
"text": "Objective: In this paper, we present a systematic literature review of motivation in Software Engineering. The objective of this review is to plot the landscape of current reported knowledge in terms of what motivates developers, what de-motivates them and how existing models address motivation. Methods: We perform a systematic literature review of peer reviewed published studies that focus on motivation in Software Engineering. Systematic reviews are well established in medical research and are used to systematically analyse the literature addressing specific research questions. Results: We found 92 papers related to motivation in Software Engineering. Fifty-six percent of the studies reported that Software Engineers are distinguishable from other occupational groups. Our findings suggest that Software Engineers are likely to be motivated according to three related factors: their ‘characteristics’ (for example, their need for variety); internal ‘controls’ (for example, their personality) and external ‘moderators’ (for example, their career stage). The literature indicates that de-motivated engineers may leave the organisation or take more sick-leave, while motivated engineers will increase their productivity and remain longer in the organisation. Aspects of the job that motivate Software Engineers include problem solving, working to benefit others and technical challenge. Our key finding is that the published models of motivation in Software Engineering are disparate and do not reflect the complex needs of Software Engineers in their career stages, cultural and environmental settings. Conclusions: The literature on motivation in Software Engineering presents a conflicting and partial picture of the area. It is clear that motivation is context dependent and varies from one engineer to another. The most commonly cited motivator is the job itself, yet we found very little work on what it is about that job that Software Engineers find motivating. Furthermore, surveys are often aimed at how Software Engineers feel about ‘the organisation’, rather than ‘the profession’. Although models of motivation in Software Engineering are reported in the literature, they do not account for the changing roles and environment in which Software Engineers operate. Overall, our findings indicate that there is no clear understanding of the Software Engineers’ job, what motivates Software Engineers, how they are motivated, or the outcome and benefits of motivating Software Engineers. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5eb1aa594c3c6210f029b5bbf6acc599",
"text": "Intestinal nematodes affecting dogs, i.e. roundworms, hookworms and whipworms, have a relevant health-risk impact for animals and, for most of them, for human beings. Both dogs and humans are typically infected by ingesting infective stages, (i.e. larvated eggs or larvae) present in the environment. The existence of a high rate of soil and grass contamination with infective parasitic elements has been demonstrated worldwide in leisure, recreational, public and urban areas, i.e. parks, green areas, bicycle paths, city squares, playgrounds, sandpits, beaches. This review discusses the epidemiological and sanitary importance of faecal pollution with canine intestinal parasites in urban environments and the integrated approaches useful to minimize the risk of infection in different settings.",
"title": ""
},
{
"docid": "a1fed0bcce198ad333b45bfc5e0efa12",
"text": "Contemporary games are making significant strides towards offering complex, immersive experiences for players. We can now explore sprawling 3D virtual environments populated by beautifully rendered characters and objects with autonomous behavior, engage in highly visceral action-oriented experiences offering a variety of missions with multiple solutions, and interact in ever-expanding online worlds teeming with physically customizable player avatars.",
"title": ""
},
{
"docid": "473968c14db4b189af126936fd5486ca",
"text": "Disclaimer/Complaints regulations If you believe that digital publication of certain material infringes any of your rights or (privacy) interests, please let the Library know, stating your reasons. In case of a legitimate complaint, the Library will make the material inaccessible and/or remove it from the website. Please Ask the Library: http://uba.uva.nl/en/contact, or a letter to: Library of the University of Amsterdam, Secretariat, Singel 425, 1012 WP Amsterdam, The Netherlands. You will be contacted as soon as possible.",
"title": ""
},
{
"docid": "ef925e9d448cf4ca9a889b5634b685cf",
"text": "This paper proposes an ameliorated wheel-based cable inspection robot, which is able to climb up a vertical cylindrical cable on the cable-stayed bridge. The newly-designed robot in this paper is composed of two equally spaced modules, which are joined by connecting bars to form a closed hexagonal body to clasp on the cable. Another amelioration is the newly-designed electric circuit, which is employed to limit the descending speed of the robot during its sliding down along the cable. For the safe landing in case of electricity broken-down, a gas damper with a slider-crank mechanism is introduced to exhaust the energy generated by the gravity when the robot is slipping down. For the present design, with payloads below 3.5 kg, the robot can climb up a cable with diameters varying from 65 mm to 205 mm. The landing system is tested experimentally and a simplified mathematical model is analyzed. Several climbing experiments performed on real cables show the capability of the proposed robot.",
"title": ""
},
{
"docid": "922cc239f2511801da980620aa87ee94",
"text": "Alloying is an effective way to engineer the band-gap structure of two-dimensional transition-metal dichalcogenide materials. Molybdenum and tungsten ditelluride alloyed with sulfur or selenium layers (MX2xTe2(1-x), M = Mo, W and X = S, Se) have a large band-gap tunability from metallic to semiconducting due to the 2H-to-1T' phase transition as controlled by the alloy concentrations, whereas the alloy atom distribution in these two phases remains elusive. Here, combining atomic resolution Z-contrast scanning transmission electron microscopy imaging and density functional theory (DFT), we discovered that anisotropic ordering occurs in the 1T' phase, in sharp contrast to the isotropic alloy behavior in the 2H phase under similar alloy concentration. The anisotropic ordering is presumably due to the anisotropic bonding in the 1T' phase, as further elaborated by DFT calculations. Our results reveal the atomic anisotropic alloyed behavior in 1T' phase layered alloys regardless of their alloy concentration, shining light on fine-tuning their physical properties via engineering the alloyed atomic structure.",
"title": ""
}
] |
scidocsrr
|
c9d8f73da91b19104e3ab129444342ec
|
Sentence Ordering and Coherence Modeling using Recurrent Neural Networks
|
[
{
"docid": "ee46ee9e45a87c111eb14397c99cd653",
"text": "This is a review of unsupervised learning applied to videos with the aim of learning visual representations. We look at different realizations of the notion of temporal coherence across various models. We try to understand the challenges being faced, the strengths and weaknesses of different approaches and identify directions for future work. Unsupervised Learning of Visual Representations using Videos Nitish Srivastava Department of Computer Science, University of Toronto",
"title": ""
},
{
"docid": "e7ac73f581ae7799021374ddd3e4d3a2",
"text": "Table: Coherence evaluation results on Discrimination and Insertion tasks. † indicates a neural model is significantly superior to its non-neural counterpart with p-value < 0.01. Discr. Ins. Acc F1 Random 50.00 50.00 12.60 Graph-based (G&S) 64.23 65.01 11.93 Dist. sentence (L&H) 77.54 77.54 19.32 Grid-all nouns (E&C) 81.58 81.60 22.13 Extended Grid (E&C) 84.95 84.95 23.28 Grid-CNN 85.57† 85.57† 23.12 Extended Grid-CNN 88.69† 88.69† 25.95†",
"title": ""
}
] |
[
{
"docid": "dbf26735e4bba4f1259a876137dd6f0c",
"text": "A complex waveguide to microstrip line transition is proposed for system-on-package (SOP) on low temperature cofired ceramic (LTCC). Transition is designed to operate around 60 GHz and is used to feed the 16 elements microstrip antenna array. Transition includes waveguide to stripline transition, stripline to embedded microstrip line transition, and finally embedded microstrip line to microstrip line transition. Return loss characteristics for single transitions are presented. For the assembled complex transition 10-dB return loss bandwidth is from 52 GHz up to 75 GHz. System with antenna array and feed line has gain more then 17 dB. Analysis has been performed using full-wave simulation software.",
"title": ""
},
{
"docid": "bc58f2f9f6f5773f5f8b2696d9902281",
"text": "Software development is a complicated process and requires careful planning to produce high quality software. In large software development projects, release planning may involve a lot of unique challenges. Due to time, budget and some other constraints, potentially there are many problems that may possibly occur. Subsequently, project managers have been trying to identify and understand release planning, challenges and possible resolutions which might help them in developing more effective and successful software products. This paper presents the findings from an empirical study which investigates release planning challenges. It takes a qualitative approach using interviews and observations with practitioners and project managers at five large software banking projects in Informatics Services Corporation (ISC) in Iran. The main objective of this study is to explore and increase the understanding of software release planning challenges in several software companies in a developing country. A number of challenges were elaborated and discussed in this study within the domain of software banking projects. These major challenges are classified into two main categories: the human-originated including people cooperation, disciplines and abilities; and the system-oriented including systematic approaches, resource constraints, complexity, and interdependency among the systems.",
"title": ""
},
{
"docid": "028eb67d71987c33c4a331cf02c6ff00",
"text": "We explore the feasibility of using crowd workers from Amazon Mechanical Turk to identify and rank sidewalk accessibility issues from a manually curated database of 100 Google Street View images. We examine the effect of three different interactive labeling interfaces (Point, Rectangle, and Outline) on task accuracy and duration. We close the paper by discussing limitations and opportunities for future work.",
"title": ""
},
{
"docid": "1000855a500abc1f8ef93d286208b600",
"text": "Nowadays, the most widely used variable speed machine for wind turbine above 1MW is the doubly fed induction generator (DFIG). As the wind power penetration continues to increase, wind turbines are required to provide Low Voltage Ride-Through (LVRT) capability. Crowbars are commonly used to protect the power converters during voltage dips. Its main drawback is that the DFIG absorbs reactive power from the grid during grid faults. This paper proposes an improved control strategy for the crowbar protection to reduce its operation time. And a simple demagnetization method is adopted to decrease the oscillations of the transient current. Moreover, reactive power can be provided to assist the recovery of the grid voltage. Simulation results show the effectiveness of the proposed control schemes.",
"title": ""
},
{
"docid": "959a43b6b851a4a255466296efac7299",
"text": "Technology in football has been debated by pundits, players and fans all over the world for the past decade. FIFA has recently commissioned the use of ‘Hawk-Eye’ and ‘Goal Ref’ goal line technology systems at the 2014 World Cup in Brazil. This paper gives an in depth evaluation of the possible technologies that could be used in football and determines the potential benefits and implications these systems could have on the officiating of football matches. The use of technology in other sports is analyzed to come to a conclusion as to whether officiating technology should be used in football. Will football be damaged by the loss of controversial incidents such as Frank Lampard’s goal against Germany at the 2010 World Cup? Will cost, accuracy and speed continue to prevent the use of officiating technology in football? Time will tell, but for now, any advancement in the use of technology in football will be met by some with discontent, whilst others see it as moving the sport into the 21 century.",
"title": ""
},
{
"docid": "154c5c644171c63647e5a1c83ed06440",
"text": "Recommender System are new generation internet tool that help user in navigating through information on the internet and receive information related to their preferences. Although most of the time recommender systems are applied in the area of online shopping and entertainment domains like movie and music, yet their applicability is being researched upon in other area as well. This paper presents an overview of the Recommender Systems which are currently working in the domain of online book shopping. This paper also proposes a new book recommender system that combines user choices with not only similar users but other users as well to give diverse recommendation that change over time. The overall architecture of the proposed system is presented and its implementation with a prototype design is described. Lastly, the paper presents empirical evaluation of the system based on a survey reflecting the impact of such diverse recommendations on the user choices. Key-Words: Recommender system; Collaborative filtering; Content filtering; Data mining; Time; Book",
"title": ""
},
{
"docid": "bd3cedfd42e261e9685cf402fc44c914",
"text": "OBJECTIVES\nThe objective of this study was to compile existing scientific evidence regarding the effects of essential oils (EOs) administered via inhalation for the alleviation of nausea and vomiting.\n\n\nMETHODS\nCINAHL, PubMed, and EBSCO Host and Science Direct databases were searched for articles related to the use of EOs and/or aromatherapy for nausea and vomiting. Only articles using English as a language of publication were included. Eligible articles included all forms of evidence (nonexperimental, experimental, case report). Interventions were limited to the use of EOs by inhalation of their vapors to treat symptoms of nausea and vomiting in various conditions regardless of age group. Studies where the intervention did not utilize EOs or were concerned with only alcohol inhalation and trials that combined the use of aromatherapy with other treatments (massage, relaxations, or acupressure) were excluded.\n\n\nRESULTS\nFive (5) articles met the inclusion criteria encompassing trials with 328 respondents. Their results suggest that the inhaled vapor of peppermint or ginger essential oils not only reduced the incidence and severity of nausea and vomiting but also decreased antiemetic requirements and consequently improved patient satisfaction. However, a definitive conclusion could not be drawn due to methodological flaws in the existing research articles and an acute lack of additional research in this area.\n\n\nCONCLUSIONS\nThe existing evidence is encouraging but yet not compelling. Hence, further well-designed large trials are needed before confirmation of EOs effectiveness in treating nausea and vomiting can be strongly substantiated.",
"title": ""
},
{
"docid": "0ffe744bfa62726930406065399e6bca",
"text": "In this paper we present an annotated corpus created with the aim of analyzing the informative behaviour of emoji – an issue of importance for sentiment analysis and natural language processing. The corpus consists of 2475 tweets all containing at least one emoji, which has been annotated using one of the three possible classes: Redundant, Non Redundant, and Non Redundant + POS. We explain how the corpus was collected, describe the annotation procedure and the interface developed for the task. We provide an analysis of the corpus, considering also possible predictive features, discuss the problematic aspects of the annotation, and suggest future improvements.",
"title": ""
},
{
"docid": "9e0186c53e0a55744f60074145d135e3",
"text": "Two new low-power, and high-performance 1bit Full Adder cells are proposed in this paper. These cells are based on low-power XOR/XNOR circuit and Majority-not gate. Majority-not gate, which produces Cout (Output Carry), is implemented with an efficient method, using input capacitors and a static CMOS inverter. This kind of implementation benefits from low power consumption, a high degree of regularity and simplicity. Eight state-of-the-art 1-bit Full Adders and two proposed Full Adders are simulated with HSPICE using 0.18μm CMOS technology at several supply voltages ranging from 2.4v down to 0.8v. Although low power consumption is targeted in implementation of our designs, simulation results demonstrate great improvement in terms of power consumption and also PDP.",
"title": ""
},
{
"docid": "de8661c2e63188464de6b345bfe3a908",
"text": "Modern computer games show potential not just for engaging and entertaining users, but also in promoting learning. Game designers employ a range of techniques to promote long-term user engagement and motivation. These techniques are increasingly being employed in so-called serious games, games that have nonentertainment purposes such as education or training. Although such games share the goal of AIED of promoting deep learner engagement with subject matter, the techniques employed are very different. Can AIED technologies complement and enhance serious game design techniques, or does good serious game design render AIED techniques superfluous? This paper explores these questions in the context of the Tactical Language Training System (TLTS), a program that supports rapid acquisition of foreign language and cultural skills. The TLTS combines game design principles and game development tools with learner modelling, pedagogical agents, and pedagogical dramas. Learners carry out missions in a simulated game world, interacting with non-player characters. A virtual aide assists the learners if they run into difficulties, and gives performance feedback in the context of preparatory exercises. Artificial intelligence plays a key role in controlling the behaviour of the non-player characters in the game; intelligent tutoring provides supplementary scaffolding.",
"title": ""
},
{
"docid": "135e3fa3b9487255b6ee67465b645fc9",
"text": "In the past few decades, the concepts of personalization in the forms of recommender system, information filtering, or customization not only are quickly accepted by the public but also draw considerable attention from enterprises. Therefore, a number of studies based on personalized recommendations have subsequently been produced. Most of these studies apply on E-commerce, website, and information, and some of them apply on teaching, tourism, and TV programs. Because the recent rise of Web 3.0 emphasizes on providing more complete personal information and service through the efficient method, the recommender application gradually develops towards mobile commerce, mobile information, or social network. Many studies have adopted Content-Based (CB), Collaborative Filtering (CF), and hybrid approach as the main recommender style in the analysis. There are few or even no studies that have emphasized on the review of recommendation recently. For this reason, this study aims to collect, analyze, and review the research topics of recommender systems and their application in the past few decades. This study collects the research types and from various researchers. The literature arrangement of this study can help researchers to understand the recommender system researches in a clear sense and in a short time.",
"title": ""
},
{
"docid": "7c8948433cf6c0d35fe29ccfac75d5b5",
"text": "The EMIB dense MCP technology is a new packaging paradigm that provides localized high density interconnects between two or more die on an organic package substrate, opening up new opportunities for heterogeneous on-package integration. This paper provides an overview of EMIB architecture and package capabilities. First, EMIB is compared with other approaches for high density interconnects. Some of the inherent advantages of the technology, such as the ability to cost effectively implement high density interconnects without requiring TSVs, and the ability to support the integration of many large die in an area much greater than the typical reticle size limit are highlighted. Next, the overall EMIB architecture envelope is discussed along with its constituent building blocks, the package construction with the embedded bridge, die to package interconnect features. Next, the EMIB assembly process is described at a high level. Finally, high bandwidth signaling between the die is discussed and the link bandwidth envelope is quantified.",
"title": ""
},
{
"docid": "6e1eee6355865bffd6af4c5c1d4a5d31",
"text": "Most of the prior work on multi-agent reinforcement learning (MARL) achieves optimal collaboration by directly learning a policy for each agent to maximize a common reward. In this paper, we aim to address this from a different angle. In particular, we consider scenarios where there are self-interested agents (i.e., worker agents) which have their own minds (preferences, intentions, skills, etc.) and can not be dictated to perform tasks they do not want to do. For achieving optimal coordination among these agents, we train a super agent (i.e., the manager) to manage them by first inferring their minds based on both current and past observations and then initiating contracts to assign suitable tasks to workers and promise to reward them with corresponding bonuses so that they will agree to work together. The objective of the manager is to maximize the overall productivity as well as minimize payments made to the workers for ad-hoc worker teaming. To train the manager, we propose Mind-aware Multi-agent Management Reinforcement Learning (MRL), which consists of agent modeling and policy learning. We have evaluated our approach in two environments, Resource Collection and Crafting, to simulate multi-agent management problems with various task settings and multiple designs for the worker agents. The experimental results have validated the effectiveness of our approach in modeling worker agents’ minds online, and in achieving optimal ad-hoc teaming with good generalization and fast adaptation.1",
"title": ""
},
{
"docid": "36b0ace93b5a902966e96e4649d83b98",
"text": "We introduce a novel matching algorithm, called DeepMatching, to compute dense correspondences between images. DeepMatching relies on a hierarchical, multi-layer, correlational architecture designed for matching images and was inspired by deep convolutional approaches. The proposed matching algorithm can handle non-rigid deformations and repetitive textures and efficiently determines dense correspondences in the presence of significant changes between images. We evaluate the performance of DeepMatching, in comparison with state-of-the-art matching algorithms, on the Mikolajczyk (Mikolajczyk et al. A comparison of affine region detectors, 2005), the MPI-Sintel (Butler et al. A naturalistic open source movie for optical flow evaluation, 2012) and the Kitti (Geiger et al. Vision meets robotics: The KITTI dataset, 2013) datasets. DeepMatching outperforms the state-of-the-art algorithms and shows excellent results in particular for repetitive textures. We also apply DeepMatching to the computation of optical flow, called DeepFlow, by integrating it in the large displacement optical flow (LDOF) approach of Brox and Malik (Large displacement optical flow: descriptor matching in variational motion estimation, 2011). Additional robustness to large displacements and complex motion is obtained thanks to our matching approach. DeepFlow obtains competitive performance on public benchmarks for optical flow estimation.",
"title": ""
},
{
"docid": "cd29357697fafb5aa5b66807f746b682",
"text": "Autonomous path planning algorithms are significant to planetary exploration rovers, since relying on commands from Earth will heavily reduce their efficiency of executing exploration missions. This paper proposes a novel learning-based algorithm to deal with global path planning problem for planetary exploration rovers. Specifically, a novel deep convolutional neural network with double branches (DB-CNN) is designed and trained, which can plan path directly from orbital images of planetary surfaces without implementing environment mapping. Moreover, the planning procedure requires no prior knowledge about planetary surface terrains. Finally, experimental results demonstrate that DBCNN achieves better performance on global path planning and faster convergence during training compared with the existing Value Iteration Network (VIN).",
"title": ""
},
{
"docid": "a620202abaa0f11d2d324b05a29986dd",
"text": "Haze is an atmospheric phenomenon that significantly degrades the visibility of outdoor scenes. This is mainly due to the atmosphere particles that absorb and scatter the light. This paper introduces a novel single image approach that enhances the visibility of such degraded images. Our method is a fusion-based strategy that derives from two original hazy image inputs by applying a white balance and a contrast enhancing procedure. To blend effectively the information of the derived inputs to preserve the regions with good visibility, we filter their important features by computing three measures (weight maps): luminance, chromaticity, and saliency. To minimize artifacts introduced by the weight maps, our approach is designed in a multiscale fashion, using a Laplacian pyramid representation. We are the first to demonstrate the utility and effectiveness of a fusion-based technique for dehazing based on a single degraded image. The method performs in a per-pixel fashion, which is straightforward to implement. The experimental results demonstrate that the method yields results comparative to and even better than the more complex state-of-the-art techniques, having the advantage of being appropriate for real-time applications.",
"title": ""
},
{
"docid": "3f904e591a46f770e9a1425e6276041b",
"text": "Several decades of research in underwater communication and networking has resulted in novel and innovative solutions to combat challenges such as long delay spread, rapid channel variation, significant Doppler, high levels of non-Gaussian noise, limited bandwidth and long propagation delays. Many of the physical layer solutions can be tested by transmitting carefully designed signals, recording them after passing through the underwater channel, and then processing them offline using appropriate algorithms. However some solutions requiring online feedback to the transmitter cannot be tested without real-time processing capability in the field. Protocols and algorithms for underwater networking also require real-time communication capability for experimental testing. Although many modems are commercially available, they provide limited flexibility in physical layer signaling and sensing. They also provide limited control over the exact timing of transmission and reception, which can be critical for efficient implementation of some networking protocols with strict time constraints. To aid in our physical and higher layer research, we developed the UNET-2 software-defined modem with flexibility and extensibility as primary design objectives. We present the hardware and software architecture of the modem, focusing on the flexibility and adaptability that it provides researchers with. We describe the network stack that the modem uses, and show how it can also be used as a powerful tool for underwater network simulation. We illustrate the flexibility provided by the modem through a number of practical examples and experiments.",
"title": ""
},
{
"docid": "ad7a5bccf168ac3b13e13ccf12a94f7d",
"text": "As one of the most popular social media platforms today, Twitter provides people with an effective way to communicate and interact with each other. Through these interactions, influence among users gradually emerges and changes people's opinions. Although previous work has studied interpersonal influence as the probability of activating others during information diffusion, they ignore an important fact that information diffusion is the result of influence, while dynamic interactions among users produce influence. In this article, the authors propose a novel temporal influence model to learn users' opinion behaviors regarding a specific topic by exploring how influence emerges during communications. The experiments show that their model performs better than other influence models with different influence assumptions when predicting users' future opinions, especially for the users with high opinion diversity.",
"title": ""
},
{
"docid": "c3b652b561e38a51f1fa40483532e22d",
"text": "Vertical integration refers to one of the options that firms make decisions in the supply of oligopoly market. It was impacted by competition game between upstream firms and downstream firms. Based on the game theory and other previous studies,this paper built a dynamic game model of two-stage competition between the oligopoly suppliers of upstream and the vertical integration firms of downstream manufacturers. In the first stage, it analyzed the influences on integration degree by prices of intermediate goods when an oligopoly firm engages in a Bertrand-game if outputs are not limited. Moreover, it analyzed the influences on integration degree by price-diverge of intermediate goods if outputs were not restricted within a Bertrand Duopoly game equilibrium. In the second stage, there is a Cournot duopoly game between downstream specialization firms and downstream integration firms. Their marginal costs are affected by the integration degree and their yields are affected either under indifferent manufacture conditions. Finally, prices of intermediate goods are determined by the competition of upstream firms, the prices of intermediate goods affect the changes of integration degree between upstream firms and downstream firms. The conclusions can be referenced to decision-making of integration in market competition.",
"title": ""
},
{
"docid": "332d517d07187d2403a672b08365e5ef",
"text": "Please cite this article in press as: C. Galleguillos doi:10.1016/j.cviu.2010.02.004 The goal of object categorization is to locate and identify instances of an object category within an image. Recognizing an object in an image is difficult when images include occlusion, poor quality, noise or background clutter, and this task becomes even more challenging when many objects are present in the same scene. Several models for object categorization use appearance and context information from objects to improve recognition accuracy. Appearance information, based on visual cues, can successfully identify object classes up to a certain extent. Context information, based on the interaction among objects in the scene or global scene statistics, can help successfully disambiguate appearance inputs in recognition tasks. In this work we address the problem of incorporating different types of contextual information for robust object categorization in computer vision. We review different ways of using contextual information in the field of object categorization, considering the most common levels of extraction of context and the different levels of contextual interactions. We also examine common machine learning models that integrate context information into object recognition frameworks and discuss scalability, optimizations and possible future approaches. 2010 Elsevier Inc. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
c2647bb383e19ff7339fb31ba13130be
|
Vertical Integration in Building Automation Systems
|
[
{
"docid": "7edb8a803734f4eb9418b8c34b1bf07c",
"text": "Building automation systems (BAS) provide automatic control of the conditions of indoor environments. The historical root and still core domain of BAS is the automation of heating, ventilation and air-conditioning systems in large functional buildings. Their primary goal is to realize significant savings in energy and reduce cost. Yet the reach of BAS has extended to include information from all kinds of building systems, working toward the goal of \"intelligent buildings\". Since these systems are diverse by tradition, integration issues are of particular importance. When compared with the field of industrial automation, building automation exhibits specific, differing characteristics. The present paper introduces the task of building automation and the systems and communications infrastructure necessary to address it. Basic requirements are covered as well as standard application models and typical services. An overview of relevant standards is given, including BACnet, LonWorks and EIB/KNX as open systems of key significance in the building automation domain.",
"title": ""
}
] |
[
{
"docid": "eea9332a263b7e703a60c781766620e5",
"text": "The use of topic models to analyze domainspecific texts often requires manual validation of the latent topics to ensure that they are meaningful. We introduce a framework to support such a large-scale assessment of topical relevance. We measure the correspondence between a set of latent topics and a set of reference concepts to quantify four types of topical misalignment: junk, fused, missing, and repeated topics. Our analysis compares 10,000 topic model variants to 200 expertprovided domain concepts, and demonstrates how our framework can inform choices of model parameters, inference algorithms, and intrinsic measures of topical quality.",
"title": ""
},
{
"docid": "6d728174d576ac785ff093f4cdc16e1b",
"text": "The stress-inducible protein heme oxygenase-1 provides protection against oxidative stress. The anti-inflammatory properties of heme oxygenase-1 may serve as a basis for this cytoprotection. We demonstrate here that carbon monoxide, a by-product of heme catabolism by heme oxygenase, mediates potent anti-inflammatory effects. Both in vivo and in vitro, carbon monoxide at low concentrations differentially and selectively inhibited the expression of lipopolysaccharide-induced pro-inflammatory cytokines tumor necrosis factor-α, interleukin-1β, and macrophage inflammatory protein-1β and increased the lipopolysaccharide-induced expression of the anti-inflammatory cytokine interleukin-10. Carbon monoxide mediated these anti-inflammatory effects not through a guanylyl cyclase–cGMP or nitric oxide pathway, but instead through a pathway involving the mitogen-activated protein kinases. These data indicate the possibility that carbon monoxide may have an important protective function in inflammatory disease states and thus has potential therapeutic uses.",
"title": ""
},
{
"docid": "2a600bc7d6e35335e1514597aa4c7a79",
"text": "Since the 2000s, Business Process Management (BPM) has evolved into a comprehensively studied discipline that goes beyond the boundaries of particular business processes. By also affecting enterprise-wide capabilities (such as an organisational culture and structure that support a processoriented way of working), BPM can now correctly be called Business Process Orientation (BPO). Meanwhile, various maturity models have been developed to help organisations adopt a processoriented way of working based on step-by-step best practices. The present article reports on a case study in which the process portfolio of an organisation is assessed by different maturity models that each cover a different set of process-oriented capabilities. The purpose is to reflect on how business process maturity is currently measured, and to explore relevant considerations for practitioners, scholars and maturity model designers. Therefore, we investigate a possible difference in maturity scores that are obtained based on model-related characteristics (e.g. capabilities, scale and calculation technique) and respondent-related characteristics (e.g. organisational function). For instance, based on an experimental design, the original maturity scores are recalculated for different maturity scales and different calculation techniques. Follow-up research can broaden our experiment from multiple maturity models in a single case to multiple maturity models in multiple cases.",
"title": ""
},
{
"docid": "a6f5061478984636dadf061b9048c0f6",
"text": "Inter-Component Communication (ICC) provides a message passing mechanism for data exchange between Android applications. It has been long believed that inter-app ICCs can be abused by malware writers to launch collusion attacks using two or more apps. However, because of the complexity of performing pairwise program analysis on apps, the scale of existing analyses is too small (e.g., up to several hundred) to produce concrete security evidence. In this paper, we report our findings in the first large-scale detection of collusive and vulnerable apps, based on inter-app ICC data flows among 110,150 real-world apps. Our system design aims to balance the accuracy of static ICC resolution/data-flow analysis and run-time scalability. This large-scale analysis provides real-world evidence and deep insights on various types of inter-app ICC abuse. Besides the empirical findings, we make several technical contributions, including a new open-source ICC resolution tool with improved accuracy over the state-of-the-art, and a large database of inter-app ICCs and their attributes.",
"title": ""
},
{
"docid": "df155f17d4d810779ee58bafcaab6f7b",
"text": "OBJECTIVE\nTo explore the types, prevalence and associated variables of cyberbullying among students with intellectual and developmental disability attending special education settings.\n\n\nMETHODS\nStudents (n = 114) with intellectual and developmental disability who were between 12-19 years of age completed a questionnaire containing questions related to bullying and victimization via the internet and cellphones. Other questions concerned sociodemographic characteristics (IQ, age, gender, diagnosis), self-esteem and depressive feelings.\n\n\nRESULTS\nBetween 4-9% of students reported bullying or victimization of bullying at least once a week. Significant associations were found between cyberbullying and IQ, frequency of computer usage and self-esteem and depressive feelings. No associations were found between cyberbullying and age and gender.\n\n\nCONCLUSIONS\nCyberbullying is prevalent among students with intellectual and developmental disability in special education settings. Programmes should be developed to deal with this issue in which students, teachers and parents work together.",
"title": ""
},
{
"docid": "fd45363f75f9206aa13e139d784e5d52",
"text": "Multivariate pattern recognition methods are increasingly being used to identify multiregional brain activity patterns that collectively discriminate one cognitive condition or experimental group from another, using fMRI data. The performance of these methods is often limited because the number of regions considered in the analysis of fMRI data is large compared to the number of observations (trials or participants). Existing methods that aim to tackle this dimensionality problem are less than optimal because they either over-fit the data or are computationally intractable. Here, we describe a novel method based on logistic regression using a combination of L1 and L2 norm regularization that more accurately estimates discriminative brain regions across multiple conditions or groups. The L1 norm, computed using a fast estimation procedure, ensures a fast, sparse and generalizable solution; the L2 norm ensures that correlated brain regions are included in the resulting solution, a critical aspect of fMRI data analysis often overlooked by existing methods. We first evaluate the performance of our method on simulated data and then examine its effectiveness in discriminating between well-matched music and speech stimuli. We also compared our procedures with other methods which use either L1-norm regularization alone or support vector machine-based feature elimination. On simulated data, our methods performed significantly better than existing methods across a wide range of contrast-to-noise ratios and feature prevalence rates. On experimental fMRI data, our methods were more effective in selectively isolating a distributed fronto-temporal network that distinguished between brain regions known to be involved in speech and music processing. These findings suggest that our method is not only computationally efficient, but it also achieves the twin objectives of identifying relevant discriminative brain regions and accurately classifying fMRI data.",
"title": ""
},
{
"docid": "c24c7131a24b478beff8e682845588ab",
"text": "Modern technologies of mobile computing and wireless sensing prompt the concept of pervasive social network (PSN)-based healthcare. To realize the concept, the core problem is how a PSN node can securely share health data with other nodes in the network. In this paper, we propose a secure system for PSN-based healthcare. Two protocols are designed for the system. The first one is an improved version of the IEEE 802.15.6 display authenticated association. It establishes secure links with unbalanced computational requirements for mobile devices and resource-limited sensor nodes. The second protocol uses blockchain technique to share health data among PSN nodes. We realize a protocol suite to study protocol runtime and other factors. In addition, human body channels are proposed for PSN nodes in some use cases. The proposed system illustrates a potential method of using blockchain for PSN-based applications.",
"title": ""
},
{
"docid": "96d4045130c1fde17505d1e5bd1c537b",
"text": "Current factor VIII (FVIII) products display a half-life (t(1/2)) of ∼ 8-12 hours, requiring frequent intravenous injections for prophylaxis and treatment of patients with hemophilia A. rFVIIIFc is a recombinant fusion protein composed of a single molecule of FVIII covalently linked to the Fc domain of human IgG(1) to extend circulating rFVIII t(1/2). This first-in-human study in previously treated subjects with severe hemophilia A investigated safety and pharmacokinetics of rFVIIIFc. Sixteen subjects received a single dose of rFVIII at 25 or 65 IU/kg followed by an equal dose of rFVIIIFc. Most adverse events were unrelated to study drug. None of the study subjects developed anti-rFVIIIFc antibodies or inhibitors. Across dose levels, compared with rFVIII, rFVIIIFc showed 1.54- to 1.70-fold longer elimination t(1/2), 1.49- to 1.56-fold lower clearance, and 1.48- to 1.56-fold higher total systemic exposure. rFVIII and rFVIIIFc had comparable dose-dependent peak plasma concentrations and recoveries. Time to 1% FVIII activity above baseline was ∼ 1.53- to 1.68-fold longer than rFVIII across dose levels. Each subject showed prolonged exposure to rFVIIIFc relative to rFVIII. Thus, rFVIIIFc may offer a viable therapeutic approach to achieve prolonged hemostatic protection and less frequent dosing in patients with hemophilia A. This trial was registered at www.clinicaltrials.gov as NCT01027377.",
"title": ""
},
{
"docid": "19d83278272386638051f5a3dabc08a9",
"text": "Predicting the depth map of a scene is often a vital component of monocular SLAM pipelines. Depth prediction is fundamentally ill-posed due to the inherent ambiguity in the scene formation process. In recent times, convolutional neural networks (CNNs) that exploit scene geometric constraints have been explored extensively for supervised single-view depth prediction and semi-supervised 2-view depth prediction. In this paper we explore whether recurrent neural networks (RNNs) can learn spatio-temporally accurate monocular depth prediction from video sequences, even without explicit definition of the inter-frame geometric consistency or pose supervision. To this end, we propose a novel convolutional LSTM (ConvLSTM)-based network architecture for depth prediction from a monocular video sequence. In the proposed ConvLSTM network architecture, we harness the ability of long short-term memory (LSTM)-based RNNs to reason sequentially and predict the depth map for an image frame as a function of the appearances of scene objects in the image frame as well as image frames in its temporal neighborhood. In addition, the proposed ConvLSTM network is also shown to be able to make depth predictions for future or unseen image frame(s). We demonstrate the depth prediction performance of the proposed ConvLSTM network on the KITTI dataset and show that it gives results that are superior in terms of accuracy to those obtained via depth-supervised and self-supervised methods and comparable to those generated by state-of-the-art pose-supervised methods.",
"title": ""
},
{
"docid": "0254d49cb759e163a032b6557f969bd3",
"text": "The smart electricity grid enables a two-way flow of power and data between suppliers and consumers in order to facilitate the power flow optimization in terms of economic efficiency, reliability and sustainability. This infrastructure permits the consumers and the micro-energy producers to take a more active role in the electricity market and the dynamic energy management (DEM). The most important challenge in a smart grid (SG) is how to take advantage of the users’ participation in order to reduce the cost of power. However, effective DEM depends critically on load and renewable production forecasting. This calls for intelligent methods and solutions for the real-time exploitation of the large volumes of data generated by a vast amount of smart meters. Hence, robust data analytics, high performance computing, efficient data network management, and cloud computing techniques are critical towards the optimized operation of SGs. This research aims to highlight the big data issues and challenges faced by the DEM employed in SG networks. It also provides a brief description of the most commonly used data processing methods in the literature, and proposes a promising direction for future research in the field.",
"title": ""
},
{
"docid": "598dd39ec35921242b94f17e24b30389",
"text": "In this paper, we present a study on the characterization and the classification of textures. This study is performed using a set of values obtained by the computation of indexes. To obtain these indexes, we extract a set of data with two techniques: the computation of matrices which are statistical representations of the texture and the computation of \"measures\". These matrices and measures are subsequently used as parameters of a function bringing real or discrete values which give information about texture features. A model of texture characterization is built based on this numerical information, for example to classify textures. An application is proposed to classify cells nuclei in order to diagnose patients affected by the Progeria disease.",
"title": ""
},
{
"docid": "ebde7eb6e61bf56f84267b14e913b74a",
"text": "Contraction of want to to wanna is subject to constraints which have been related to the operation of Universal Grammar. Contraction appears to be blocked when the trace of an extracted wh-word intervenes. Evidence for knowledge of these constraints by young English-speaking children in as been taken to show the operation of Universal Grammar in early child language acquisition. The present study investigates the knowledge these constraints in adults, both English native speakers and advanced Korean learners of English. The results of three experiments, using elicited production, oral repair, and grammaticality judgements, confirmed native speaker knowledge of the constraints. A second process of phonological elision may also operate to produce wanna. Learners also showed some differentiation of contexts, but much less clearly than native speakers. We speculate that non-natives may be using rules of complement selection, rather than the constraints of UG, to control contraction. Introduction: wanna contraction and language learnability In English, want to can be contracted to wanna, but not invariably. As first observed by Lakoff (1970) in examples such as (1), in which the object of the infinitival complement of want has been extracted by wh-movement, contraction is possible, but not in (2), in which the subject of the infinitival complement is extracted from the position between want and to. We shall call examples like (1) \"subject extraction questions\" (SEQ) and examples like (2) \"object extraction questions\" (OEQ).",
"title": ""
},
{
"docid": "57c0f9c629e4fdcbb0a4ca2d4f93322f",
"text": "Chronic exertional compartment syndrome and medial tibial stress syndrome are uncommon conditions that affect long-distance runners or players involved in team sports that require extensive running. We report 2 cases of bilateral chronic exertional compartment syndrome, with medial tibial stress syndrome in identical twins diagnosed with the use of a Kodiag monitor (B. Braun Medical, Sheffield, United Kingdom) fulfilling the modified diagnostic criteria for chronic exertional compartment syndrome as described by Pedowitz et al, which includes: (1) pre-exercise compartment pressure level >15 mm Hg; (2) 1 minute post-exercise pressure >30 mm Hg; and (3) 5 minutes post-exercise pressure >20 mm Hg in the presence of clinical features. Both patients were treated with bilateral anterior fasciotomies through minimal incision and deep posterior fasciotomies with tibial periosteal stripping performed through longer anteromedial incisions under direct vision followed by intensive physiotherapy resulting in complete symptomatic recovery. The etiology of chronic exertional compartment syndrome is not fully understood, but it is postulated abnormal increases in intramuscular pressure during exercise impair local perfusion, causing ischemic muscle pain. No familial predisposition has been reported to date. However, some authors have found that no significant difference exists in the relative perfusion, in patients, diagnosed with chronic exertional compartment syndrome. Magnetic resonance images of affected compartments have indicated that the pain is not due to ischemia, but rather from a disproportionate oxygen supply versus demand. We believe this is the first report of chronic exertional compartment syndrome with medial tibial stress syndrome in twins, raising the question of whether there is a genetic predisposition to the causation of these conditions.",
"title": ""
},
{
"docid": "3f55bac8aaba79cdb28284bbdc4c6e8e",
"text": "We present an OpenCL compilation framework to generate high-performance hardware for FPGAs. For an OpenCL application comprising a host program and a set of kernels, it compiles the host program, generates Verilog HDL for each kernel, compiles the circuit using Altera Complete Design Suite 12.0, and downloads the compiled design onto an FPGA.We can then run the application by executing the host program on a Windows(tm)-based machine, which communicates with kernels on an FPGA using a PCIe interface. We implement four applications on an Altera Stratix IV and present the throughput and area results for each application. We show that we can achieve a clock frequency in excess of 160MHz on our benchmarks, and that OpenCL computing paradigm is a viable design entry method for high-performance computing applications on FPGAs.",
"title": ""
},
{
"docid": "67bc81066dbe06ac615df861435fdbd9",
"text": "When a three-dimensional ferromagnetic topological insulator thin film is magnetized out-of-plane, conduction ideally occurs through dissipationless, one-dimensional (1D) chiral states that are characterized by a quantized, zero-field Hall conductance. The recent realization of this phenomenon, the quantum anomalous Hall effect, provides a conceptually new platform for studies of 1D transport, distinct from the traditionally studied quantum Hall effects that arise from Landau level formation. An important question arises in this context: how do these 1D edge states evolve as the magnetization is changed from out-of-plane to in-plane? We examine this question by studying the field-tilt-driven crossover from predominantly edge-state transport to diffusive transport in Crx(Bi,Sb)(2-x)Te3 thin films. This crossover manifests itself in a giant, electrically tunable anisotropic magnetoresistance that we explain by employing a Landauer-Büttiker formalism. Our methodology provides a powerful means of quantifying dissipative effects in temperature and chemical potential regimes far from perfect quantization.",
"title": ""
},
{
"docid": "d6959f0cd5ad7a534e99e3df5fa86135",
"text": "In the course of the project Virtual Try-On new VR technologies have been developed, which form the basis for a realistic, three dimensional, (real-time) simulation and visualization of individualized garments put on by virtual counterparts of real customers. To provide this cloning and dressing of people in VR, a complete process chain is being build up starting with the touchless 3-dimensional scanning of the human body up to a photo-realistic 3-dimensional presentation of the virtual customer dressed in the chosen pieces of clothing. The emerging platform for interactive selection and configuration of virtual garments, the „virtual shop“, will be accessible in real fashion boutiques as well as over the internet, thereby supplementing the conventional distribution channels.",
"title": ""
},
{
"docid": "4b68825f6b65419c4449773fab99375f",
"text": "This letter presents the novel concept of a miniaturized robotized machine tool, Mini-RoboMach, consisting of a walking hexapod robot and a Slender Continuum Arm. By combining the mobility of the walking robot with the positioning accuracy of the machine tool, with its 24 + 25 degrees of freedom, camera-based calibration system, laser scanner, and two end-effectors of opposed orientations, the proposed system can provide a versatile tool for in-situ work (e.g., repair) in hazardous/unreachable locations in large installations.",
"title": ""
},
{
"docid": "4dd61afa86e13270599d4193f8b9bb70",
"text": "The paper deals with the definition of urban mobility, assuming that inside the city, the flow of mobility is deeply connected with the distribution, quality and use of the urban activities that “polarize” different users (residents, commuters, tourists and city users). In this vision, ICT assume a strategic role, but the need to reconsider their role emerges in respect to the concept of a smart city. The consideration that “urban smartness” does not depend exclusively on the ICT component or on the quantitative presence of technologies in the city, in fact, represents a shared opinion within the current scientific debate on the subject of the smart city. The paper assumes that, for the present urban contexts, the smart vision has to be related to an integrated approach, which considers the city as a complex system. Inside the urban system, the networks for both material and immaterial mobility interact with the urban activities that play a supporting role and have characteristics that affect the levels of urban smartness. Changes in urban systems greatly depend on the sorts of innovation technology that have intensely modified the social component to a far greater extent than others. Big Data, for instance, can help with knowledge of urban processes, provided they have to be well-interpreted and managed, and this will be of interest within the interactions among urban systems and the functioning of the system as a whole. Town planning has to take on responsibility in regard to approaching cities according to a different vision and updating its tools in order to steer the urban system steadfastly into a smartness state. In a systemic vision, this transition must be framed within the context of a process of governmental transformation that is carefully oriented towards the individuation of interactions among the different subsystems composing the city. According to this vision, the study of urban mobility can be related to the attractiveness generated by the different urban functions. The formalization of the degree of polarization, activated by urban functions, represents the main objective of this study. Among the urban functions, the study considers tourism as one of the most significant in the formalization of urban mobility flow inside the smart city.",
"title": ""
},
{
"docid": "d9605c1cde4c40d69c2faaea15eb466c",
"text": "A magnetically tunable ferrite-loaded substrate integrated waveguide (SIW) cavity resonator is presented and demonstrated. X-band cavity resonator is operated in the dominant mode and the ferrite slabs are loaded onto the side walls of the cavity where the value of magnetic field is highest. Measured results for single and double ferrite-loaded SIW cavity resonators are presented. Frequency tuning range of more than 6% and 10% for single and double ferrite slabs are obtained. Unloaded Q -factor of more than 200 is achieved.",
"title": ""
},
{
"docid": "a58130841813814dacd7330d04efe735",
"text": "Under-reporting of food intake is one of the fundamental obstacles preventing the collection of accurate habitual dietary intake data. The prevalence of under-reporting in large nutritional surveys ranges from 18 to 54% of the whole sample, but can be as high as 70% in particular subgroups. This wide variation between studies is partly due to different criteria used to identify under-reporters and also to non-uniformity of under-reporting across populations. The most consistent differences found are between men and women and between groups differing in body mass index. Women are more likely to under-report than men, and under-reporting is more common among overweight and obese individuals. Other associated characteristics, for which there is less consistent evidence, include age, smoking habits, level of education, social class, physical activity and dietary restraint. Determining whether under-reporting is specific to macronutrients or food is problematic, as most methods identify only low energy intakes. Studies that have attempted to measure under-reporting specific to macronutrients express nutrients as percentage of energy and have tended to find carbohydrate under-reported and protein over-reported. However, care must be taken when interpreting these results, especially when data are expressed as percentages. A logical conclusion is that food items with a negative health image (e.g. cakes, sweets, confectionery) are more likely to be under-reported, whereas those with a positive health image are more likely to be over-reported (e.g. fruits and vegetables). This also suggests that dietary fat is likely to be under-reported. However, it is necessary to distinguish between under-reporting and genuine under-eating for the duration of data collection. The key to understanding this problem, but one that has been widely neglected, concerns the processes that cause people to under-report their food intakes. The little work that has been done has simply confirmed the complexity of this issue. The importance of obtaining accurate estimates of habitual dietary intakes so as to assess health correlates of food consumption can be contrasted with the poor quality of data collected. This phenomenon should be considered a priority research area. Moreover, misreporting is not simply a nutritionist's problem, but requires a multidisciplinary approach (including psychology, sociology and physiology) to advance the understanding of under-reporting in dietary intake studies.",
"title": ""
}
] |
scidocsrr
|
838c28d6fe4ecc67ed27878193e73776
|
The Rutgers Master II — New Design Force-Feedback Glove
|
[
{
"docid": "c39db00484417af8adbdf2932a4ff198",
"text": "The Virtual Assembly Design Environment (VADE) is a Virtual Reality (VR)-based engineering application that allows engineers to evaluate, analyze, and plan the assembly of mechanical systems. This system focuses on utilizing an immersive, virtual environment tightly coupled with commercial computer aided design (CAD) systems. Salient features of VADE include: 1) data integration (two-way) with a parametric CAD system, 2) realistic interaction of user with parts in the virtual environment, 3) creation of valued design information in the virtual environment, 4) reverse data transfer of design information back to the CAD system, 5) significant interactivity in the virtual environment, 6) collision detection, and 7) physically-based modeling. This paper describes the functionality and applications of VADE. A discussion of the limitations of virtual assembly and a comparison with automated assembly planning systems are presented. Experiments conducted using real-world engineering models are also described.",
"title": ""
}
] |
[
{
"docid": "790ac9330d698cf5d6f3f8fc7891f090",
"text": "It is well known that the convergence rate of the expectation-maximization (EM) algorithm can be faster than those of convention first-order iterative algorithms when the overlap in the given mixture is small. But this argument has not been mathematically proved yet. This article studies this problem asymptotically in the setting of gaussian mixtures under the theoretical framework of Xu and Jordan (1996). It has been proved that the asymptotic convergence rate of the EM algorithm for gaussian mixtures locally around the true solution is o(e0.5()), where > 0 is an arbitrarily small number, o(x) means that it is a higher-order infinitesimal as x 0, and e() is a measure of the average overlap of gaussians in the mixture. In other words, the large sample local convergence rate for the EM algorithm tends to be asymptotically superlinear when e() tends to zero.",
"title": ""
},
{
"docid": "a0d0661f66ceb83bf628fccb11fefcfb",
"text": "In past years a body of data integrity checking techniques have been proposed for securing cloud data services. Most of these works assume that only the data owner can modify cloud-stored data. Recently a few attempts started considering more realistic scenarios by allowing multiple cloud users to modify data with integrity assurance. However, these attempts are still far from practical due to the tremendous computational cost on cloud users. Moreover, collusion between misbehaving cloud servers and revoked users is not considered. This paper proposes a novel data integrity checking scheme characterized by multi-user modification, collusion resistance and a constant computational cost of integrity checking for cloud users, thanks to our novel design of polynomial-based authentication tags and proxy tag update techniques. Our scheme also supports public checking and efficient user revocation and is provably secure. Numerical analysis and extensive experimental results show the efficiency and scalability of our proposed scheme.",
"title": ""
},
{
"docid": "fef45863bc531960dbf2a7783995bfdb",
"text": "The main goal of facial attribute recognition is to determine various attributes of human faces, e.g. facial expressions, shapes of mouth and nose, headwears, age and race, by extracting features from the images of human faces. Facial attribute recognition has a wide range of potential application, including security surveillance and social networking. The available approaches, however, fail to consider the correlations and heterogeneities between different attributes. This paper proposes that by utilizing these correlations properly, an improvement can be achieved on the recognition of different attributes. Therefore, we propose a facial attribute recognition approach based on the grouping of different facial attribute tasks and a multi-task CNN structure. Our approach can fully utilize the correlations between attributes, and achieve a satisfactory recognition result on a large number of attributes with limited amount of parameters. Several modifications to the traditional architecture have been tested in the paper, and experiments have been conducted to examine the effectiveness of our approach.",
"title": ""
},
{
"docid": "1380438b5c7739a77644520ebc744002",
"text": "The present work proposes a review and comparison of different Kernel functionals and neighborhood geometry for Nonlocal Means (NLM) in the task of digital image filtering. Some different alternatives to change the classical exponential kernel function used in NLM methods are explored. Moreover, some approaches that change the geometry of the neighborhood and use dimensionality reduction of the neighborhood or patches onto principal component analysis (PCA) are also analyzed, and their performance is compared with respect to the classic NLM method. Mainly, six approaches were compared using quantitative and qualitative evaluations, to do this an homogeneous framework has been established using the same simulation platform, the same computer, and same conditions for the initializing parameters. According to the obtained comparison, one can say that the NLM filtering could be improved when changing the kernel, particularly for the case of the Tukey kernel. On the other hand, the excellent performance given by recent hybrid approaches such as NLM SAP, NLM PCA (PH), and the BM3D SAPCA lead to establish that significantly improvements to the classic NLM could be obtained. Particularly, the BM3D SAPCA approach gives the best denoising results, however, the computation times were the longest.",
"title": ""
},
{
"docid": "fb4f4d1762535b8afe7feec072f1534e",
"text": "Recently, evaluation of a recommender system has been beyond evaluating just the algorithm. In addition to accuracy of algorithms, user-centric approaches evaluate a system’s e↵ectiveness in presenting recommendations, explaining recommendations and gaining users’ confidence in the system. Existing research focuses on explaining recommendations that are related to user’s current task. However, explaining recommendations can prove useful even when recommendations are not directly related to user’s current task. Recommendations of development environment commands to software developers is an example of recommendations that are not related to the user’s current task, which is primarily focussed on programming, rather than inspecting recommendations. In this dissertation, we study three di↵erent kinds of explanations for IDE commands recommended to software developers. These explanations are inspired by the common approaches based on literature in the domain. We describe a lab-based experimental study with 24 participants where they performed programming tasks on an open source project. Our results suggest that explanations a↵ect users’ trust of recommendations, and explanations reporting the system’s confidence in recommendation a↵ects their trust more. The explanation with system’s confidence rating of the recommendations resulted in more recommendations being investigated. However, explanations did not a↵ect the uptake of the commands. Our qualitative results suggest that recommendations, when not user’s primary focus, should be in context of his task to be accepted more readily.",
"title": ""
},
{
"docid": "73b0a5820c8268bb5911e1b44401273b",
"text": "In typical reinforcement learning (RL), the environment is assumed given and the goal of the learning is to identify an optimal policy for the agent taking actions through its interactions with the environment. In this paper, we extend this setting by considering the environment is not given, but controllable and learnable through its interaction with the agent at the same time. This extension is motivated by environment design scenarios in the realworld, including game design, shopping space design and traffic signal design. Theoretically, we find a dual Markov decision process (MDP) w.r.t. the environment to that w.r.t. the agent, and derive a policy gradient solution to optimizing the parametrized environment. Furthermore, discontinuous environments are addressed by a proposed general generative framework. Our experiments on a Maze game design task show the effectiveness of the proposed algorithms in generating diverse and challenging Mazes against various agent settings.",
"title": ""
},
{
"docid": "3a00a29587af4f7c5ce974a8e6970413",
"text": "After reviewing six senses of abstraction, this article focuses on abstractions that take the form of summary representations. Three central properties of these abstractions are established: ( i ) type-token interpretation; (ii) structured representation; and (iii) dynamic realization. Traditional theories of representation handle interpretation and structure well but are not sufficiently dynamical. Conversely, connectionist theories are exquisitely dynamic but have problems with structure. Perceptual symbol systems offer an approach that implements all three properties naturally. Within this framework, a loose collection of property and relation simulators develops to represent abstractions. Type-token interpretation results from binding a property simulator to a region of a perceived or simulated category member. Structured representation results from binding a configuration of property and relation simulators to multiple regions in an integrated manner. Dynamic realization results from applying different subsets of property and relation simulators to category members on different occasions. From this standpoint, there are no permanent or complete abstractions of a category in memory. Instead, abstraction is the skill to construct temporary online interpretations of a category's members. Although an infinite number of abstractions are possible, attractors develop for habitual approaches to interpretation. This approach provides new ways of thinking about abstraction phenomena in categorization, inference, background knowledge and learning.",
"title": ""
},
{
"docid": "35c5f8dcf914b7381041b2e5f6a17507",
"text": "This paper proposes a new high efficiency single phase transformerless grid-tied photovoltaic (PV) inverter by using super-junction MOSFETs as main power switches. No reverse recovery issues are required for the main power switches and the blocking voltages across the switches are half of the DC input voltage in the proposed topology. Therefore, the super-junction MOSFETs have been used to improve the efficiency. Two additional switches with the conventional full H-bridge topology, make sure the disconnection of PV module from the grid at the freewheeling mode. As a result, the high frequency common mode (CM) voltage which leads leakage current is minimized. PWM dead time is not necessary for the proposed topology which reduces the distortion of the AC output current. The efficiency at light load is increased by using MOSFET as main power switches which increases the European Union (EU) efficiency of the proposed topology. The proposed inverter can also operate with high frequency by retaining high efficiency which enables reduced cooling system. The total semiconductor device losses for the proposed topology and several existing topologies are calculated and compared. Finally, the proposed new topology is simulated by MATLAB/Simulink software to validate the accuracy of the theoretical explanation. It is being manufactured to verify the experimental results.",
"title": ""
},
{
"docid": "e5048285c2616e9bfb28accd91629187",
"text": "Hidden Markov Models (HMMs) are learning methods for pattern recognition. The probabilistic HMMs have been one of the most used techniques based on the Bayesian model. First-order probabilistic HMMs were adapted to the theory of belief functions such that Bayesian probabilities were replaced with mass functions. In this paper, we present a second-order Hidden Markov Model using belief functions. Previous works in belief HMMs have been focused on the first-order HMMs. We extend them to the second-order model.",
"title": ""
},
{
"docid": "5e75a46c36e663791db0f8b45f685cb6",
"text": "This study provides one of very few experimental investigations into the impact of a musical soundtrack on the video gaming experience. Participants were randomly assigned to one of three experimental conditions: game-with-music, game-without-music, or music-only. After playing each of three segments of The Lord of the Rings: The Two Towers (Electronic Arts, 2002)--or, in the music-only condition, listening to the musical score that accompanies the scene--subjects responded on 21 verbal scales. Results revealed that some, but not all, of the verbal scales exhibited a statistically significant difference due to the presence of a musical score. In addition, both gender and age level were shown to be significant factors for some, but not all, of the verbal scales. Details of the specific ways in which music affects the gaming experience are provided in the body of the paper.",
"title": ""
},
{
"docid": "8efe2a6cdea88b9436f714b56dd1e660",
"text": "S INCE ITS INAUGURATION in 1966, the ACM A.M. Tur-ing Award has recognized major contributions of lasting importance to computing. Through the years, it has become the most prestigious award in computing. To help celebrate 50 years of the ACM Turing Award and the visionaries who have received it, ACM has launched a campaign called \" Panels in Print, \" which takes the form of a collection of responses from Turing laureates, ACM award recipients and other ACM experts on a given topic or trend. ACM's celebration of 50 years of the ACM Turing Award will culminate with a conference June 23–24, 2017 at the Westin St. Francis in San Francis-co to highlight the significant impact of the contributions of ACM Turing laureates on computing and society, to look ahead to the future of technology and innovation, and to help inspire the next generation of computer scientists to invent and dream. Grace Murray Hopper recipient PEDRO FELZENSZWALB to respond to several questions about Artificial Intelligence. What have been the biggest breakthroughs in AI in recent years and what impact is it having in the real-world? RAJ REDDY: Ten years ago, I would have said it wouldn't be possible, in my lifetime, to recognize unre-hearsed spontaneous speech from an open population but that's exactly what Siri, Cortana and Alexa do. The same is happening with vision and robotics. We are by no means at the end of the activity in these areas, but we have enough working examples that society can benefit from these breakthroughs. JEFF DEAN: The biggest breakthrough in the last five or so years has been the use of deep learning, a particular kind of machine learning that uses neural networks. Stacking the network into many layers that learn increasingly abstract patterns as you go up the layers seems to be a fundamentally powerful idea, and it's been very successful in a surprisingly wide variety of applications—from speech recognition, to image recognition, to language understanding. What's interesting is we don't seem to be near the limit of what deep learning can do; we'll likely see many more powerful uses of it in the coming years. PEDRO FELZENSZWALB: Among the biggest technical advances I would include the development of scalable machine learning algorithms and the computational infrastructure to process and interact with huge data-sets. The latest example of these advances is deep learning. In computer vision deep learning has …",
"title": ""
},
{
"docid": "aaaa90a881f6d52b02f14a05faa25f4e",
"text": "Studies on human motion have attracted a lot of attentions. Human motion capture data, which much more precisely records human motion than videos do, has been widely used in many areas. Motion segmentation is an indispensable step for many related applications, but current segmentation methods for motion capture data do not effectively model some important characteristics of motion capture data, such as Riemannian manifold structure and containing non-Gaussian noise. In this paper, we convert the segmentation of motion capture data into a temporal subspace clustering problem. Under the framework of sparse subspace clustering, we propose to use the geodesic exponential kernel to model the Riemannian manifold structure, use correntropy to measure the reconstruction error, use the triangle constraint to guarantee temporal continuity in each cluster and use multi-view reconstruction to extract the relations between different joints. Therefore, exploiting some special characteristics of motion capture data, we propose a new segmentation method, which is robust to non-Gaussian noise, since correntropy is a localized similarity measure. We also develop an efficient optimization algorithm based on block coordinate descent method to solve the proposed model. Our optimization algorithm has a linear complexity while sparse subspace clustering is originally a quadratic problem. Extensive experiment results both on simulated noisy data set and real noisy data set demonstrate the advantage of the proposed method.",
"title": ""
},
{
"docid": "2cd3833634cf2dae58ccb268ba85e955",
"text": "We explore the hypothesis that many intuitive physical inferences are based on a mental physics engine that is analogous in many ways to the machine physics engines used in building interactive video games. We describe the key features of game physics engines and their parallels in human mental representation, focusing especially on the intuitive physics of young infants where the hypothesis helps to unify many classic and otherwise puzzling phenomena, and may provide the basis for a computational account of how the physical knowledge of infants develops. This hypothesis also explains several 'physics illusions', and helps to inform the development of artificial intelligence (AI) systems with more human-like common sense.",
"title": ""
},
{
"docid": "637b6abdadd3653e95a127f48dc991db",
"text": "State-of-the-art models for joint entity recognition and relation extraction strongly rely on external natural language processing (NLP) tools such as POS (part-of-speech) taggers and dependency parsers. Thus, the performance of such joint models depends on the quality of the features obtained from these NLP tools. However, these features are not always accurate for various languages and contexts. In this paper, we propose a joint neural model which performs entity recognition and relation extraction simultaneously, without the need of any manually extracted features or the use of any external tool. Specifically, we model the entity recognition task using a CRF (Conditional Random Fields) layer and the relation extraction task as a multi-head selection problem (i.e., potentially identify multiple relations for each entity). We present an extensive experimental setup, to demonstrate the effectiveness of our method using datasets from various contexts (i.e., news, biomedical, real estate) and languages (i.e., English, Dutch). Our model outperforms the previous neural models that use automatically extracted features, while it performs within a reasonable margin of feature-based neural models, or even beats them.",
"title": ""
},
{
"docid": "d939f9e7b3229b654d5a1d331376eca1",
"text": "Knowledge graph embedding aims to represent entities and relations of a knowledge graph in continuous vector spaces. It has increasingly drawn attention for its ability to encode semantics in low dimensional vectors as well as its outstanding performance on many applications, such as question answering systems and information retrieval tasks. Existing methods often handle each triple independently, without considering context information of a triple in the knowledge graph, such an information can be useful for inference of new knowledge. Moreover, the relations and paths between an entity pair also provide information for inference. In this paper, we define a novel context-dependent knowledge graph representation model named triple-context-based knowledge embedding, which is based on the notion of triple context used for embedding entities and relations. For each triple, the triple context is composed of two kinds of graph structured information: one is a set of neighboring entities along with their outgoing relations, the other is a set of relation paths which contain a pair of target entities. Our embedding method is designed to utilize the triple context of each triple while learning embeddings of entities and relations. The method is evaluated on multiple tasks in the paper. Experimental results reveal that our method achieves significant improvements over the state-of-the-art methods.",
"title": ""
},
{
"docid": "510881bfca7005dcc32fce2162e7e225",
"text": "Across many disciplines, interest is increasing in the use of computational text analysis in the service of social science questions. We survey the spectrum of current methods, which lie on two dimensions: (1) computational and statistical model complexity; and (2) domain assumptions. This comparative perspective suggests directions of research to better align new methods with the goals of social scientists. 1 Use cases for computational text analysis in the social sciences The use of computational methods to explore research questions in the social sciences and humanities has boomed over the past several years, as the volume of data capturing human communication (including text, audio, video, etc.) has risen to match the ambitious goal of understanding the behaviors of people and society [1]. Automated content analysis of text, which draws on techniques developed in natural language processing, information retrieval, text mining, and machine learning, should be properly understood as a class of quantitative social science methodologies. Employed techniques range from simple analysis of comparative word frequencies to more complex hierarchical admixture models. As this nascent field grows, it is important to clearly present and characterize the assumptions of techniques currently in use, so that new practitioners can be better informed as to the range of available models. To illustrate the breadth of current applications, we list a sampling of substantive questions and studies that have developed or applied computational text analysis to address them. • Political Science: How do U.S. Senate speeches reflect agendas and attention? How are Senate institutions changing [27]? What are the agendas expressed in Senators’ press releases [28]? Do U.S. Supreme Court oral arguments predict justices’ voting behavior [29]? Does social media reflect public political opinion, or forecast elections [12, 30]? What determines international conflict and cooperation [31, 32, 33]? How much did racial attitudes affect voting in the 2008 U.S. presidential election [34]? • Economics: How does sentiment in the media affect the stock market [2, 3]? Does sentiment in social media associate with stocks [4, 5, 6]? Do a company’s SEC filings predict aspects of stock performance [7, 8]? What determines a customer’s trust in an online merchant [9]? How can we measure macroeconomic variables with search queries and social media text [10, 11, 12]? How can we forecast consumer demand for movies [13, 14]? • Psychology: How does a person’s mental and affective state manifest in their language [15]? Are diurnal and seasonal mood cycles cross-cultural [16]?",
"title": ""
},
{
"docid": "2910fe6ac9958d9cbf9014c5d3140030",
"text": "We present a novel variational approach to estimate dense depth maps from multiple images in real-time. By using robust penalizers for both data term and regularizer, our method preserves discontinuities in the depth map. We demonstrate that the integration of multiple images substantially increases the robustness of estimated depth maps to noise in the input images. The integration of our method into recently published algorithms for camera tracking allows dense geometry reconstruction in real-time using a single handheld camera. We demonstrate the performance of our algorithm with real-world data.",
"title": ""
},
{
"docid": "fdbdac5f319cd46aeb73be06ed64cbb9",
"text": "Recently deep neural networks (DNNs) have been used to learn speaker features. However, the quality of the learned features is not sufficiently good, so a complex back-end model, either neural or probabilistic, has to be used to address the residual uncertainty when applied to speaker verification. This paper presents a convolutional time-delay deep neural network structure (CT-DNN) for speaker feature learning. Our experimental results on the Fisher database demonstrated that this CT-DNN can produce high-quality speaker features: even with a single feature (0.3 seconds including the context), the EER can be as low as 7.68%. This effectively confirmed that the speaker trait is largely a deterministic short-time property rather than a longtime distributional pattern, and therefore can be extracted from just dozens of frames.",
"title": ""
},
{
"docid": "02781a25d8fb7ed69480f944d63b56ae",
"text": "Technology-supported learning systems have proved to be helpful in many learning situations. These systems require an appropriate representation of the knowledge to be learned, the Domain Module. The authoring of the Domain Module is cost and labor intensive, but its development cost might be lightened by profiting from semiautomatic Domain Module authoring techniques and promoting knowledge reuse. DOM-Sortze is a system that uses natural language processing techniques, heuristic reasoning, and ontologies for the semiautomatic construction of the Domain Module from electronic textbooks. To determine how it might help in the Domain Module authoring process, it has been tested with an electronic textbook, and the gathered knowledge has been compared with the Domain Module that instructional designers developed manually. This paper presents DOM-Sortze and describes the experiment carried out.",
"title": ""
},
{
"docid": "bc3e2c94cd53f472f36229e2d9c5d69e",
"text": "The problem of planning safe tra-jectories for computer controlled manipulators with two movable links and multiple degrees of freedom is analyzed, and a solution to the problem proposed. The key features of the solution are: 1. the identification of trajectory primitives and a hierarchy of abstraction spaces that permit simple manip-ulator models, 2. the characterization of empty space by approximating it with easily describable entities called charts-the approximation is dynamic and can be selective, 3. a scheme for planning motions close to obstacles that is computationally viable, and that suggests how proximity sensors might be used to do the planning, and 4. the use of hierarchical decomposition to reduce the complexity of the planning problem. 1. INTRODUCTION the 2D and 3D solution noted, it is easy to visualize the solution for the 3D manipulator. Section 2 of this paper presents an example, and Section 3 a statement and analysis of the problem. Sections 4 and 5 present the solution. Section 6 summarizes the key ideas in the solution and indicates areas for future work. 2. AN EXAMPLE This section describes an example (Figure 2.1) of the collision detection and avoidance problem for a two-dimensional manipulator. The example highlights features of the problem and its solution. 2.1 The Problem The manipulator has two links and three degrees of freedom. The larger link, called the boom, slides back and forth and can rotate about the origin. The smaller link, called the forearm, has a rotational degree of freedom about the tip of the boom. The tip of the forearm is called the hand. S and G are the initial and final configurations of the manipulator. Any real manipulator's links will have physical dimensions. The line segment representation of the link is an abstraction; the physical dimensions can be accounted for and how this is done is described later. The problem of planning safe trajectories for computer controlled manipulators with two movable links and multiple degrees of freedom is analyzed, and a solution to the problem is presented. The trajectory planning system is initialized with a description of the part of the environment that the manipulator is to maneuver in. When given the goal position and orientation of the hand, the system plans a complete trajectory that will safely maneuver the manipulator into the goal configuration. The executive system in charge of operating the hardware uses this trajectory to physically move the manipulator. …",
"title": ""
}
] |
scidocsrr
|
eab78763d4ff827b888a9cc6fa1b796e
|
Immune responses against human papillomavirus (HPV) infection and evasion of host defense in cervical cancer.
|
[
{
"docid": "b6b9e1eaf17f6cdbc9c060e467021811",
"text": "Tumour-associated viruses produce antigens that, on the face of it, are ideal targets for immunotherapy. Unfortunately, these viruses are experts at avoiding or subverting the host immune response. Cervical-cancer-associated human papillomavirus (HPV) has a battery of immune-evasion mechanisms at its disposal that could confound attempts at HPV-directed immunotherapy. Other virally associated human cancers might prove similarly refractive to immuno-intervention unless we learn how to circumvent their strategies for immune evasion.",
"title": ""
}
] |
[
{
"docid": "60a7d21510c3c3861d49c6294859c8b7",
"text": "As mobile applications become more complex, specific development tools and frameworks as well as cost effective testing techniques and tools will be essential to assure the development of secure, high-quality mobile applications. This paper addresses the problem of automatic testing of mobile applications developed for the Google Android platform, and presents a technique for rapid crash testing and regression testing of Android applications. The technique is based on a crawler that automatically builds a model of the application GUI and obtains test cases that can be automatically executed. The technique is supported by a tool for both crawling the application and generating the test cases. In the paper we present an example of using the technique and the tool for testing a real small size Android application that preliminary shows the effectiveness and usability of the proposed testing approach.",
"title": ""
},
{
"docid": "f917a32b3bfed48dfe14c05d248ef53f",
"text": "Recently Adleman has shown that a small traveling salesman problem can be solved by molecular operations. In this paper we show how the same principles can be applied to breaking the Data Encryption Standard (DES). We describe in detail a library of operations which are useful when working with a molecular computer. We estimate that given one arbitrary (plain-text, cipher-text) pair, one can recover the DES key in about 4 months of work. Furthermore, we show that under chosen plain-text attack it is possible to recover the DES key in one day using some preprocessing. Our method can be generalized to break any cryptosystem which uses keys of length less than 64 bits.",
"title": ""
},
{
"docid": "2c37693a40584e60a182c8c7f448455d",
"text": "With the rise in popularity of public social media and micro-blogging services, most notably Twitter, the people have found a venue to hear and be heard by their peers without an intermediary. As a consequence, and aided by the public nature of Twitter, political scientists now potentially have the means to analyse and understand the narratives that organically form, spread and decline among the public in a political campaign. However, the volume and diversity of the conversation on Twitter, combined with its noisy and idiosyncratic nature, make this a hard task. Thus, advanced data mining and language processing techniques are required to process and analyse the data. In this paper, we present and evaluate a technical framework, based on recent advances in deep neural networks, for identifying and analysing election-related conversation on Twitter on a continuous, longitudinal basis. Our models can detect election-related tweets with an F-score of 0.92 and can categorize these tweets into 22 topics with an F-score of 0.90.",
"title": ""
},
{
"docid": "dd4a53afc6af03fc323139b29dc024c5",
"text": "Log management and log auditing have become increasingly crucial for enterprises in this era of information and technology explosion. The log analysis technique is useful for discovering possible problems in business processes and preventing illegal-intrusion attempts and data-tampering attacks. Because of the complexity of the dynamically changing environment, auditing a tremendous number of data is a challenging issue. We provide a real-time audit mechanism to improve the aforementioned problems in log auditing. This mechanism was developed based on the Lempel-Ziv-Welch (LZW) compression technique to facilitate effective compression and provide reliable auditing log entries. The mechanism can be used to predict unusual activities when compressing the log data according to pre-defined auditing rules. Auditors using real-time and continuous monitoring can perceive instantly the most likely anomalies or exceptions that could cause problems. We also designed a user interface that allows auditors to define the various compression and audit parameters, using real log cases in the experiment to verify the feasibility and effectiveness of this proposed audit mechanism. In summary, this mechanism changes the log access method and improves the efficiency of log analysis. This mechanism greatly simplifies auditing so that auditors must only trace the sources and causes of the problems related to the detected anomalies. This greatly reduces the processing time of analytical audit procedures and the manual checking time, and improves the log audit efficiency.",
"title": ""
},
{
"docid": "59c68b4e5399fbfd3f74952258c807b0",
"text": "Quaternions have been a popular tool in 3D computer graphics for more than 20 years. However, classical quaternions are restricted to the representation of rotations, whereas in graphical applications we typically work with rotation composed with translation (i.e., a rigid transformation). Dual quaternions represent rigid transformations in the same way as classical quaternions represent rotations. In this paper we show how to generalize established techniques for blending of rotations to include all rigid transformations. Algorithms based on dual quaternions are computationally more efficient than previous solutions and have better properties (constant speed, shortest path and coordinate invariance). For the specific example of skinning, we demonstrate that problems which required considerable research effort recently are trivial to solve using our dual quaternion formulation. However, skinning is only one application of dual quaternions, so several further promising research directions are suggested in the paper. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling – Geometric Transformations— [I.3.7]: Computer Graphics—Three-Dimensional Graphics and Realism – Animation",
"title": ""
},
{
"docid": "e0cf83bcc9830f2a94af4822576e4167",
"text": "Multiple kernel learning (MKL) optimally combines the multiple channels of each sample to improve classification performance. However, existing MKL algorithms cannot effectively handle the situation where some channels are missing, which is common in practical applications. This paper proposes an absent MKL (AMKL) algorithm to address this issue. Different from existing approaches where missing channels are firstly imputed and then a standard MKL algorithm is deployed on the imputed data, our algorithm directly classifies each sample with its observed channels. In specific, we define a margin for each sample in its own relevant space, which corresponds to the observed channels of that sample. The proposed AMKL algorithm then maximizes the minimum of all sample-based margins, and this leads to a difficult optimization problem. We show that this problem can be reformulated as a convex one by applying the representer theorem. This makes it readily be solved via existing convex optimization packages. Extensive experiments are conducted on five MKL benchmark data sets to compare the proposed algorithm with existing imputation-based methods. As observed, our algorithm achieves superior performance and the improvement is more significant with the increasing missing ratio. Disciplines Engineering | Science and Technology Studies Publication Details Liu, X., Wang, L., Yin, J., Dou, Y. & Zhang, J. (2015). Absent multiple kernel learning. Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (pp. 2807-2813). United States: IEEE. This conference paper is available at Research Online: http://ro.uow.edu.au/eispapers/5373 Absent Multiple Kernel Learning Xinwang Liu School of Computer National University of Defense Technology Changsha, China, 410073 Lei Wang School of Computer Science and Software Engineering University of Wollongong NSW, Australia, 2522 Jianping Yin, Yong Dou School of Computer National University of Defense Technology Changsha, China, 410073 Jian Zhang Faculty of Engineering and Information Technology University of Technology Sydney NSW, Australia, 2007",
"title": ""
},
{
"docid": "6eb2fa2b7b95b571b1a8be42a5412f70",
"text": "Generic object level saliency detection is important for many vision tasks. Previous approaches are mostly built on the prior that “appearance contrast between objects and backgrounds is high”. Although various computational models have been developed, the problem remains challenging and huge behavioral discrepancies between previous approaches can be observed. This suggest that the problem may still be highly ill-posed by using this prior only. In this work, we tackle the problem from a different viewpoint: we focus more on the background instead of the object. We exploit two common priors about backgrounds in natural images, namely boundary and connectivity priors, to provide more clues for the problem. Accordingly, we propose a novel saliency measure called geodesic saliency. It is intuitive, easy to interpret and allows fast implementation. Furthermore, it is complementary to previous approaches, because it benefits more from background priors while previous approaches do not. Evaluation on two databases validates that geodesic saliency achieves superior results and outperforms previous approaches by a large margin, in both accuracy and speed (2 ms per image). This illustrates that appropriate prior exploitation is helpful for the ill-posed saliency detection problem.",
"title": ""
},
{
"docid": "0bd7956dbee066a5b7daf4cbd5926f35",
"text": "Computer networks lack a general control paradigm, as traditional networks do not provide any networkwide management abstractions. As a result, each new function (such as routing) must provide its own state distribution, element discovery, and failure recovery mechanisms. We believe this lack of a common control platform has significantly hindered the development of flexible, reliable and feature-rich network control planes. To address this, we present Onix, a platform on top of which a network control plane can be implemented as a distributed system. Control planes written within Onix operate on a global view of the network, and use basic state distribution primitives provided by the platform. Thus Onix provides a general API for control plane implementations, while allowing them to make their own trade-offs among consistency, durability, and scalability.",
"title": ""
},
{
"docid": "78829447a6cbf0aa020ef098a275a16d",
"text": "Black soldier fly (BSF), Hermetia illucens (L.) is widely used in bio-recycling of human food waste and manure of livestock. Eggs of BSF were commonly collected by egg-trapping technique for mass rearing. To find an efficient lure for BSF egg-trapping, this study compared the number of egg batch trapped by different lures, including fruit, food waste, chicken manure, pig manure, and dairy manure. The result showed that fruit wastes are the most efficient on trapping BSF eggs. To test the effects of fruit species, number of egg batch trapped by three different fruit species, papaya, banana, and pineapple were compared, and no difference were found among fruit species. Environmental factors including temperature, relative humidity, and light intensity were measured and compared in different study sites to examine their effects on egg-trapping. The results showed no differences on temperature, relative humidity, and overall light intensity between sites, but the stability of light environment differed between sites. BSF tend to lay more eggs in site with stable light environment.",
"title": ""
},
{
"docid": "17bd8497b30045267f77572c9bddb64f",
"text": "0007-6813/$ see front matter D 200 doi:10.1016/j.bushor.2004.11.006 * Corresponding author. E-mail addresses: cseelos@sscg.org jmair@iese.edu (J. Mair).",
"title": ""
},
{
"docid": "b24e5a512306f24568f3e21af08a1faf",
"text": "We propose an object detection method that improves the accuracy of the conventional SSD (Single Shot Multibox Detector), which is one of the top object detection algorithms in both aspects of accuracy and speed. The performance of a deep network is known to be improved as the number of feature maps increases. However, it is difficult to improve the performance by simply raising the number of feature maps. In this paper, we propose and analyze how to use feature maps effectively to improve the performance of the conventional SSD. The enhanced performance was obtained by changing the structure close to the classifier network, rather than growing layers close to the input data, e.g., by replacing VGGNet with ResNet. The proposed network is suitable for sharing the weights in the classifier networks, by which property, the training can be faster with better generalization power. For the Pascal VOC 2007 test set trained with VOC 2007 and VOC 2012 training sets, the proposed network with the input size of 300×300 achieved 78.5% mAP (mean average precision) at the speed of 35.0 FPS (frame per second), while the network with a 512×512 sized input achieved 80.8% mAP at 16.6 FPS using Nvidia Titan X GPU. The proposed network shows state-of-the-art mAP, which is better than those of the conventional SSD, YOLO, Faster-RCNN and RFCN. Also, it is faster than Faster-RCNN and RFCN.",
"title": ""
},
{
"docid": "d6b46b598f4fcbee933c1d0aff29c96c",
"text": "Neural network based sequence-to-sequence models in an encoder-decoder framework have been successfully applied to solve Question Answering (QA) problems, predicting answers from statements and questions. However, almost all previous models have failed to consider detailed context information and unknown states under which systems do not have enough information to answer given questions. These scenarios with incomplete or ambiguous information are very common in the setting of Interactive Question Answering (IQA). To address this challenge, we develop a novel model, employing context-dependent word-level attention for more accurate statement representations and question-guided sentence-level attention for better context modeling. We also generate unique IQA datasets to test our model, which will be made publicly available. Employing these attention mechanisms, our model accurately understands when it can output an answer or when it requires generating a supplementary question for additional input depending on different contexts. When available, user's feedback is encoded and directly applied to update sentence-level attention to infer an answer. Extensive experiments on QA and IQA datasets quantitatively demonstrate the effectiveness of our model with significant improvement over state-of-the-art conventional QA models.",
"title": ""
},
{
"docid": "01bb8e6af86aa1545958a411653e014c",
"text": "Estimating the tempo of a musical piece is a complex problem, which has received an increasing amount of attention in the past few years. The problem consists of estimating the number of beats per minute (bpm) at which the music is played and identifying exactly when these beats occur. Commercial devices already exist that attempt to extract a musical instrument digital interface (MIDI) clock from an audio signal, indicating both the tempo and the actual location of the beat. Such MIDI clocks can then be used to synchronize other devices (such as drum machines and audio effects) to the audio source, enabling a new range of \" beat-synchronized \" audio processing. Beat detection can also simplify the usually tedious process of manipulating audio material in audio-editing software. Cut and paste operations are made considerably easier if markers are positioned at each beat or at bar boundaries. Looping a drum track over two bars becomes trivial once the location of the beats is known. A third range of applications is the fairly new area of automatic playlist generation, where a computer is given the task to choose a series of audio tracks from a track database in a way similar to what a human deejay would do. The track tempo is a very important selection criterion in this context , as deejays will tend to string tracks with similar tempi back to back. Furthermore, deejays also tend to perform beat-synchronous crossfading between successive tracks manually, slowing down or speeding up one of the tracks so that the beats in the two tracks line up exactly during the crossfade. This can easily be done automatically once the beats are located in the two tracks. The tempo detection systems commercially available appear to be fairly unsophisticated, as they rely mostly on the presence of a strong and regular bass-drum kick at every beat, an assumption that holds mostly with modern musical genres such as techno or drums and bass. For music with a less pronounced tempo such techniques fail miserably and more sophisticated algorithms are needed. This paper describes an off-line tempo detection algorithm , able to estimate a time-varying tempo from an audio track stored, for example, on an audio CD or on a computer hard disk. The technique works in three successive steps: 1) an \" energy flux \" signal is extracted from the track, 2) at each tempo-analysis time, several …",
"title": ""
},
{
"docid": "d7780a122b51adc30f08eeb13af78bd1",
"text": "Malware sandboxes, widely used by antivirus companies, mobile application marketplaces, threat detection appliances, and security researchers, face the challenge of environment-aware malware that alters its behavior once it detects that it is being executed on an analysis environment. Recent efforts attempt to deal with this problem mostly by ensuring that well-known properties of analysis environments are replaced with realistic values, and that any instrumentation artifacts remain hidden. For sandboxes implemented using virtual machines, this can be achieved by scrubbing vendor-specific drivers, processes, BIOS versions, and other VM-revealing indicators, while more sophisticated sandboxes move away from emulation-based and virtualization-based systems towards bare-metal hosts. We observe that as the fidelity and transparency of dynamic malware analysis systems improves, malware authors can resort to other system characteristics that are indicative of artificial environments. We present a novel class of sandbox evasion techniques that exploit the \"wear and tear\" that inevitably occurs on real systems as a result of normal use. By moving beyond how realistic a system looks like, to how realistic its past use looks like, malware can effectively evade even sandboxes that do not expose any instrumentation indicators, including bare-metal systems. We investigate the feasibility of this evasion strategy by conducting a large-scale study of wear-and-tear artifacts collected from real user devices and publicly available malware analysis services. The results of our evaluation are alarming: using simple decision trees derived from the analyzed data, malware can determine that a system is an artificial environment and not a real user device with an accuracy of 92.86%. As a step towards defending against wear-and-tear malware evasion, we develop statistical models that capture a system's age and degree of use, which can be used to aid sandbox operators in creating system images that exhibit a realistic wear-and-tear state.",
"title": ""
},
{
"docid": "36ea5beaaa58f781eaff21a372dbf6cf",
"text": "With the increasing scale of deployment of Internet of Things (IoT), concerns about IoT security have become more urgent. In particular, memory corruption attacks play a predominant role as they allow remote compromise of IoT devices. Control-flow integrity (CFI) is a promising and generic defense technique against these attacks. However, given the nature of IoT deployments, existing protection mechanisms for traditional computing environments (including CFI) need to be adapted to the IoT setting. In this paper, we describe the challenges of enabling CFI on microcontroller (MCU) based IoT devices. We then present CaRE, the first interrupt-aware CFI scheme for low-end MCUs. CaRE uses a novel way of protecting the CFI metadata by leveraging TrustZone-M security extensions introduced in the ARMv8-M architecture. Its binary instrumentation approach preserves the memory layout of the target MCU software, allowing pre-built bare-metal binary code to be protected by CaRE. We describe our implementation on a Cortex-M Prototyping System and demonstrate that CaRE is secure while imposing acceptable performance and memory impact.",
"title": ""
},
{
"docid": "2e2b079c456e89f8103185caaf1fc97f",
"text": "The imbalance problem is receiving an increasing attention in the literature. Studies in binary cases are recurrent, however there still are several real world problems with more than two classes. The known solutions for binary datasets may not be applicable in this case. Some efforts are being applied in decomposition techniques which transforms a multiclass problem into some binary problems. However it is also possible to face a multiclass problem with an ad hoc approach, i.e., a classifier able to handle all classes at once. In this work a method able to handle several classes is proposed. This new method is based on the Voronoi diagram. We try to dynamically divide the feature space into several regions, each one assigned to a different class. It is expected for the method to be able to construct a complex classification model. However, as it is in its beginning, some tests need to be performed in order to evaluate its feasibility. Experiments with some classical classifiers confirm its feasibility, and comparisons with ad hoc methods found in literature show its potentiality.",
"title": ""
},
{
"docid": "22eaa11a92848cd673e97437e6048421",
"text": "The fault diagnosis of hydraulic pump is always a challenging issue in the field of machinery fault diagnosis. Not only is manual feature extraction time-consuming and laborious, but also the diagnostic result is easily affected by subjective experience. The stacked autoencoders (SAE) which has powerful learning and representation ability is applied in hydraulic pump fault diagnosis, and it is directly used for training and identification based on vibration signals, so that we don't need to extract features manually. To avoid gradient vanishing and to improve the performance for small training set, ReLU activation function and Dropout strategy are both introduced into SAE. Validated by experiments, proposed SAE is superior to the BP, SVM and traditional SAE. And it can recognize hydraulic pump condition accurately even if the training set is small, which satisfies the engineering application.",
"title": ""
},
{
"docid": "627f96528a712e606e15ff312a26f021",
"text": "Traveling wave slot array on cylindrical substrate integrated waveguide (CSIW) is designed, fabricated and measured at K-band. CSIW is formed by wrapping the substrate integrated waveguide (SIW) around the cylinder in the circumferential direction. 16 element longitudinal slot array on the broad wall of single CSIW is designed by the Elliot's design procedure. The spacings between the slot elements are determined to reduce the half power beam width (HPBW) and to obtain good matching at 25 GHz. A 4 × 16 slot array is formed using 4 CSIW array each having 16 traveling wave longitudinal slots and the array is fed by 1 × 4 SIW power divider structure. About 10 ° beam steering is achieved when frequency is swept from 24 to 26 GHz. Gain of the antenna is 14 dB. Very good agreement between the simulated and measured results is obtained.",
"title": ""
},
{
"docid": "85a01086e72befaccff9b8741b920fdf",
"text": "While search engines are the major sources of content discovery on online content providers and e-commerce sites, their capability is limited since textual descriptions cannot fully describe the semantic of content such as videos. Recommendation systems are now widely used in online content providers and e-commerce sites and play an important role in discovering content. In this paper, we describe how one can boost the popularity of a video through the recommendation system in YouTube. We present a model that captures the view propagation between videos through the recommendation linkage and quantifies the influence that a video has on the popularity of another video. Furthermore, we identify that the similarity in titles and tags is an important factor in forming the recommendation linkage between videos. This suggests that one can manipulate the metadata of a video to boost its popularity.",
"title": ""
},
{
"docid": "5fe5fd5d74517282956137169e36743d",
"text": "This paper presents an attempt at using the syntactic structure in natural language for improved language models for speech recognition. The structured language model merges techniques in automatic parsing and language modeling using an original probabilistic parameterization of a shift-reduce parser. A maximum likelihood re-estimation procedure belonging to the class of expectation-maximization algorithms is employed for training the model. Experiments on theWall Street JournalandSwitchboardcorpora show improvement in both perplexity and word error rate—word lattice rescoring—over the standard 3-gram language model. c © 2000 Academic Press",
"title": ""
}
] |
scidocsrr
|
315025d0cb659bcb820d9b1393503b08
|
Efficient placement of multi-component applications in edge computing systems
|
[
{
"docid": "bbf5561f88f31794ca95dd991c074b98",
"text": "O CTO B E R 2014 | Volume 18, Issue 4 GetMobile Every time you use a voice command on your smartphone, you are benefitting from a technique called cloud offload. Your speech is captured by a microphone, pre-processed, then sent over a wireless network to a cloud service that converts speech to text. The result is then forwarded to another cloud service or sent back to your mobile device, depending on the application. Speech recognition and many other resource-intensive mobile services require cloud offload. Otherwise, the service would be too slow and drain too much of your battery. Research projects on cloud offload are hot today, with MAUI [4] in 2010, Odessa [13] and CloneCloud [2] in 2011, and COMET [8] in 2012. These build on a rich heritage of work dating back to the mid-1990s on a theme that is broadly characterized as cyber foraging. They are also relevant to the concept of cloudlets [18] that has emerged as an important theme in mobile-cloud convergence. Reflecting my participation in this evolution from its origins, this article is a personal account of the key developments in this research area. It focuses on mobile computing, ignoring many other uses of remote execution since the 1980s such as distributed processing, query processing, distributed object systems, and distributed partitioning.",
"title": ""
}
] |
[
{
"docid": "1e82d6acef7e5b5f0c2446d62cf03415",
"text": "The purpose of this research is to characterize and model the self-heating effect of multi-finger n-channel MOSFETs. Self-heating effect (SHE) does not need to be analyzed for single-finger bulk CMOS devices. However, it should be considered for multi-finger n-channel MOSFETs that are mainly used for RF-CMOS applications. The SHE mechanism was analyzed based on a two-dimensional device simulator. A compact model, which is a BSIM6 model with additional equations, was developed and implemented in a SPICE simulator with Verilog-A language. Using the proposed model and extracted parameters excellent agreements have been obtained between measurements and simulations in DC and S-parameter domain whereas the original BSIM6 shows inconsistency between static DC and small signal AC simulations due to the lack of SHE. Unlike the generally-used sub-circuits based SHE models including in BSIMSOI models, the proposed SHE model can converge in large scale circuits.",
"title": ""
},
{
"docid": "bc49930fa967b93ed1e39b3a45237652",
"text": "In gene expression data, a bicluster is a subset of the genes exhibiting consistent patterns over a subset of the conditions. We propose a new method to detect significant biclusters in large expression datasets. Our approach is graph theoretic coupled with statistical modelling of the data. Under plausible assumptions, our algorithm is polynomial and is guaranteed to find the most significant biclusters. We tested our method on a collection of yeast expression profiles and on a human cancer dataset. Cross validation results show high specificity in assigning function to genes based on their biclusters, and we are able to annotate in this way 196 uncharacterized yeast genes. We also demonstrate how the biclusters lead to detecting new concrete biological associations. In cancer data we are able to detect and relate finer tissue types than was previously possible. We also show that the method outperforms the biclustering algorithm of Cheng and Church (2000).",
"title": ""
},
{
"docid": "b56d61ac3e807219b3caa9ed4362abd9",
"text": "Secure communication is critical in military environments where the network infrastructure is vulnerable to various attacks and compromises. A conventional centralized solution breaks down when the security servers are destroyed by the enemies. In this paper we design and evaluate a security framework for multi-layer ad-hoc wireless networks with unmanned aerial vehicles (UAVs). In battlefields, the framework adapts to the contingent damages on the network infrastructure. Depending on the availability of the network infrastructure, our design is composed of two modes. In infrastructure mode, security services, specifically the authentication services, are implemented on UAVs that feature low overhead and flexible managements. When the UAVs fail or are destroyed, our system seamlessly switches to infrastructureless mode, a backup mechanism that maintains comparable security services among the surviving units. In the infrastructureless mode, the security services are localized to each node’s vicinity to comply with the ad-hoc communication mechanism in the scenario. We study the instantiation of these two modes and the transitions between them. Our implementation and simulation measurements confirm the effectiveness of our design.",
"title": ""
},
{
"docid": "59a16f229e5c205176639843521310d0",
"text": "In the ancient Egypt seven goddesses, represented by seven cows, composed the celestial herd that provides the nourishment to her worshippers. This herd is observed in the sky as a group of stars, the Pleiades, close to Aldebaran, the main star in the Taurus constellation. For many ancient populations, Pleiades were relevant stars and their rising was marked as a special time of the year. In this paper, we will discuss the presence of these stars in ancient cultures. Moreover, we will report some results of archeoastronomy on the role for timekeeping of these stars, results which show that for hunter-gatherers at Palaeolithic times, they were linked to the seasonal cycles of aurochs.",
"title": ""
},
{
"docid": "98a647d378a06c0314a60e220d10976a",
"text": "Driven by the confluence between the need to collect data about people's physical, physiological, psychological, cognitive, and behavioral processes in spaces ranging from personal to urban and the recent availability of the technologies that enable this data collection, wireless sensor networks for healthcare have emerged in the recent years. In this review, we present some representative applications in the healthcare domain and describe the challenges they introduce to wireless sensor networks due to the required level of trustworthiness and the need to ensure the privacy and security of medical data. These challenges are exacerbated by the resource scarcity that is inherent with wireless sensor network platforms. We outline prototype systems spanning application domains from physiological and activity monitoring to large-scale physiological and behavioral studies and emphasize ongoing research challenges.",
"title": ""
},
{
"docid": "760f9f91a845726bc79b874978d5b9ab",
"text": "Data sharing is increasingly recognized as critical to cross-disciplinary research and to assuring scientific validity. Despite National Institutes of Health and National Science Foundation policies encouraging data sharing by grantees, little data sharing of clinical data has in fact occurred. A principal reason often given is the potential of inadvertent violation of the Health Insurance Portability and Accountability Act privacy regulations. While regulations specify the components of private health information that should be protected, there are no commonly accepted methods to de-identify clinical data objects such as images. This leads institutions to take conservative risk-averse positions on data sharing. In imaging trials, where images are coded according to the Digital Imaging and Communications in Medicine (DICOM) standard, the complexity of the data objects and the flexibility of the DICOM standard have made it especially difficult to meet privacy protection objectives. The recent release of DICOM Supplement 142 on image de-identification has removed much of this impediment. This article describes the development of an open-source software suite that implements DICOM Supplement 142 as part of the National Biomedical Imaging Archive (NBIA). It also describes the lessons learned by the authors as NBIA has acquired more than 20 image collections encompassing over 30 million images.",
"title": ""
},
{
"docid": "d59e21319b9915c2f6d7a8931af5503c",
"text": "The effect of directional antenna elements in uniform circular arrays (UCAs) for direction of arrival (DOA) estimation is studied in this paper. While the vast majority of previous work assumes isotropic antenna elements or omnidirectional dipoles, this work demonstrates that improved DOA estimation accuracy and increased bandwidth is achievable with appropriately-designed directional antennas. The Cramer-Rao Lower Bound (CRLB) is derived for UCAs with directional antennas and is compared to isotropic antennas for 4- and 8-element arrays using a theoretical radiation pattern. The directivity that minimizes the CRLB is identified and microstrip patch antennas approximating the optimal theoretical gain pattern are designed to compare the resulting DOA estimation accuracy with a UCA using dipole antenna elements. Simulation results show improved DOA estimation accuracy and robustness using microstrip patch antennas as opposed to conventional dipoles. Additionally, it is shown that the bandwidth of a UCA for DOA estimation is limited only by the broadband characteristics of the directional antenna elements and not by the electrical size of the array as is the case with omnidirectional antennas.",
"title": ""
},
{
"docid": "4122fb29bb82d4432391f4362ddcf512",
"text": "In this paper we propose three techniques to improve the performance of one of the major algorithms for large scale continuous global function optimization. Multilevel Cooperative Co-evolution (MLCC) is based on a Cooperative Co-evolutionary framework and employs a technique called random grouping in order to group interacting variables in one subcomponent. It also uses another technique called adaptive weighting for co-adaptation of subcomponents. We prove that the probability of grouping interacting variables in one subcomponent using random grouping drops significantly as the number of interacting variables increases. This calls for more frequent random grouping of variables. We show how to increase the frequency of random grouping without increasing the number of fitness evaluations. We also show that adaptive weighting is ineffective and in most cases fails to improve the quality of found solution, and hence wastes considerable amount of CPU time by extra evaluations of objective function. Finally we propose a new technique for self-adaptation of the subcomponent sizes in CC. We demonstrate how a substantial improvement can be gained by applying these three techniques.",
"title": ""
},
{
"docid": "d580f60d48331b37c55f1e9634b48826",
"text": "The fifth generation (5G) wireless network technology is to be standardized by 2020, where main goals are to improve capacity, reliability, and energy efficiency, while reducing latency and massively increasing connection density. An integral part of 5G is the capability to transmit touch perception type real-time communication empowered by applicable robotics and haptics equipment at the network edge. In this regard, we need drastic changes in network architecture including core and radio access network (RAN) for achieving end-to-end latency on the order of 1 ms. In this paper, we present a detailed survey on the emerging technologies to achieve low latency communications considering three different solution domains: 1) RAN; 2) core network; and 3) caching. We also present a general overview of major 5G cellular network elements such as software defined network, network function virtualization, caching, and mobile edge computing capable of meeting latency and other 5G requirements.",
"title": ""
},
{
"docid": "fae3b6d1415e5f1d95aa2126c14e7a09",
"text": "This paper presents an active RF phase shifter with 10 bit control word targeted toward the upcoming 5G wireless systems. The circuit is designed and fabricated using 45 nm CMOS SOI technology. An IQ vector modulator (IQVM) topology is used which provides both amplitude and phase control. The design is programmable with exhaustive digital controls available for parameters like bias voltage, resonance frequency, and gain. The frequency of operation is tunable from 12.5 GHz to 15.7 GHz. The mean angular separation between phase points is 1.5 degree at optimum amplitude levels. The rms phase error over the operating band is as low as 0.8 degree. Active area occupied is 0.18 square millimeter. The total DC power consumed from 1 V supply is 75 mW.",
"title": ""
},
{
"docid": "37bdc258e652fb4a21d9516400428f8b",
"text": "In many Internet of Things (IoT) applications, large numbers of small sensor data are delivered in the network, which may cause heavy traffics. To reduce the number of messages delivered from the sensor devices to the IoT server, a promising approach is to aggregate several small IoT messages into a large packet before they are delivered through the network. When the packets arrive at the destination, they are disaggregated into the original IoT messages. In the existing solutions, packet aggregation/disaggregation is performed by software at the server, which results in long delays and low throughputs. To resolve the above issue, this paper utilizes the programmable Software Defined Networking (SDN) switch to program quick packet aggregation and disaggregation. Specifically, we consider the Programming Protocol-Independent Packet Processor (P4) technology. We design and develop novel P4 programs for aggregation and disaggregation in commercial P4 switches. Our study indicates that packet aggregation can be achieved in a P4 switch with its line rate (without extra packet processing cost). On the other hand, to disaggregate a packet that combines N IoT messages, the processing time is about the same as processing N individual IoT messages. Our implementation conducts IoT message aggregation at the highest bit rate (100 Gbps) that has not been found in the literature. We further propose to provide a small buffer in the P4 switch to significantly reduce the processing power for disaggregating a packet.",
"title": ""
},
{
"docid": "c091e5b24dc252949b3df837969e263a",
"text": "The emergence of powerful portable computers, along with advances in wireless communication technologies, has made mobile computing a reality. Among the applications that are finding their way to the market of mobile computingthose that involve data managementhold a prominent position. In the past few years, there has been a tremendous surge of research in the area of data management in mobile computing. This research has produced interesting results in areas such as data dissemination over limited bandwith channels, location-dependent querying of data, and advanced interfaces for mobile computers. This paper is an effort to survey these techniques and to classify this research in a few broad areas.",
"title": ""
},
{
"docid": "b91f54fd70da385625d9df127834d8c7",
"text": "This commentary was stimulated by Yeping Li’s first editorial (2014) citing one of the journal’s goals as adding multidisciplinary perspectives to current studies of single disciplines comprising the focus of other journals. In this commentary, I argue for a greater focus on STEM integration, with a more equitable representation of the four disciplines in studies purporting to advance STEM learning. The STEM acronym is often used in reference to just one of the disciplines, commonly science. Although the integration of STEM disciplines is increasingly advocated in the literature, studies that address multiple disciplines appear scant with mixed findings and inadequate directions for STEM advancement. Perspectives on how discipline integration can be achieved are varied, with reference to multidisciplinary, interdisciplinary, and transdisciplinary approaches adding to the debates. Such approaches include core concepts and skills being taught separately in each discipline but housed within a common theme; the introduction of closely linked concepts and skills from two or more disciplines with the aim of deepening understanding and skills; and the adoption of a transdisciplinary approach, where knowledge and skills from two or more disciplines are applied to real-world problems and projects with the aim of shaping the total learning experience. Research that targets STEM integration is an embryonic field with respect to advancing curriculum development and various student outcomes. For example, we still need more studies on how student learning outcomes arise not only from different forms of STEM integration but also from the particular disciplines that are being integrated. As noted in this commentary, it seems that mathematics learning benefits less than the other disciplines in programs claiming to focus on STEM integration. Factors contributing to this finding warrant more scrutiny. Likewise, learning outcomes for engineering within K-12 integrated STEM programs appear under-researched. This commentary advocates a greater focus on these two disciplines within integrated STEM education research. Drawing on recommendations from the literature, suggestions are offered for addressing the challenges of integrating multiple disciplines faced by the STEM community.",
"title": ""
},
{
"docid": "46209913057e33c17d38a565e50097a3",
"text": "Power-on reset circuits are available as discrete devices as well as on-chip solutions and are indispensable to initialize some critical nodes of analog and digital designs during power-on. In this paper, we present a power-on reset circuit specifically designed for on-chip applications. The mentioned POR circuit should meet certain design requirements necessary to be integrated on-chip, some of them being area-efficiency, power-efficiency, supply rise-time insensitivity and ambient temperature insensitivity. The circuit is implemented within a small area (60mum times 35mum) using the 2.5V tolerant MOSFETs of a 0.28mum CMOS technology. It has a maximum quiescent current consumption of 40muA and works over infinite range of supply rise-times and ambient temperature range of -40degC to 150degC",
"title": ""
},
{
"docid": "ac4d208a022717f6389d8b754abba80b",
"text": "This paper presents a new approach to detect tabular structures present in document images and in low resolution video images. The algorithm for table detection is based on identifying the unique table start pattern and table trailer pattern. We have formulated perceptual attributes to characterize the patterns. The performance of our table detection system is tested on a set of document images picked from UW-III (University of Washington) dataset, UNLV dataset, video images of NPTEL videos, and our own dataset. Our approach demonstrates improved detection for different types of table layouts, with or without ruling lines. We have obtained correct table localization on pages with multiple tables aligned side-by-side.",
"title": ""
},
{
"docid": "e49ea1a6aa8d7ffec9ca16ac18cfc43a",
"text": "Simultaneous Localization And Mapping (SLAM) is a fundamental problem in mobile robotics. While point-based SLAM methods provide accurate camera localization, the generated maps lack semantic information. On the other hand, state of the art object detection methods provide rich information about entities present in the scene from a single image. This work marries the two and proposes a method for representing generic objects as quadrics which allows object detections to be seamlessly integrated in a SLAM framework. For scene coverage, additional dominant planar structures are modeled as infinite planes. Experiments show that the proposed points-planes-quadrics representation can easily incorporate Manhattan and object affordance constraints, greatly improving camera localization and leading to semantically meaningful maps. The performance of our SLAM system is demonstrated in https://youtu.be/dR-rB9keF8M.",
"title": ""
},
{
"docid": "3ff55193d10980cbb8da5ec757b9161c",
"text": "The growth of social web contributes vast amount of user generated content such as customer reviews, comments and opinions. This user generated content can be about products, people, events, etc. This information is very useful for businesses, governments and individuals. While this content meant to be helpful analyzing this bulk of user generated content is difficult and time consuming. So there is a need to develop an intelligent system which automatically mine such huge content and classify them into positive, negative and neutral category. Sentiment analysis is the automated mining of attitudes, opinions, and emotions from text, speech, and database sources through Natural Language Processing (NLP). The objective of this paper is to discover the concept of Sentiment Analysis in the field of Natural Language Processing, and presents a comparative study of its techniques in this field. Keywords— Natural Language Processing, Sentiment Analysis, Sentiment Lexicon, Sentiment Score.",
"title": ""
},
{
"docid": "da4b2452893ca0734890dd83f5b63db4",
"text": "Diabetic retinopathy is when damage occurs to the retina due to diabetes, which affects up to 80 percent of all patients who have had diabetes for 10 years or more. The expertise and equipment required are often lacking in areas where diabetic retinopathy detection is most needed. Most of the work in the field of diabetic retinopathy has been based on disease detection or manual extraction of features, but this paper aims at automatic diagnosis of the disease into its different stages using deep learning. This paper presents the design and implementation of GPU accelerated deep convolutional neural networks to automatically diagnose and thereby classify high-resolution retinal images into 5 stages of the disease based on severity. The single model accuracy of the convolutional neural networks presented in this paper is 0.386 on a quadratic weighted kappa metric and ensembling of three such similar models resulted in a score of 0.3996.",
"title": ""
},
{
"docid": "948295ca3a97f7449548e58e02dbdd62",
"text": "Neural computations are often compared to instrument-measured distance or duration, and such relationships are interpreted by a human observer. However, neural circuits do not depend on human-made instruments but perform computations relative to an internally defined rate-of-change. While neuronal correlations with external measures, such as distance or duration, can be observed in spike rates or other measures of neuronal activity, what matters for the brain is how such activity patterns are utilized by downstream neural observers. We suggest that hippocampal operations can be described by the sequential activity of neuronal assemblies and their internally defined rate of change without resorting to the concept of space or time.",
"title": ""
},
{
"docid": "4b95b6d7991ea1b774ac8730df6ec21c",
"text": "We address the problem of automatically learning the main steps to complete a certain task, such as changing a car tire, from a set of narrated instruction videos. The contributions of this paper are three-fold. First, we develop a new unsupervised learning approach that takes advantage of the complementary nature of the input video and the associated narration. The method solves two clustering problems, one in text and one in video, applied one after each other and linked by joint constraints to obtain a single coherent sequence of steps in both modalities. Second, we collect and annotate a new challenging dataset of real-world instruction videos from the Internet. The dataset contains about 800,000 frames for five different tasks1 that include complex interactions between people and objects, and are captured in a variety of indoor and outdoor settings. Third, we experimentally demonstrate that the proposed method can automatically discover, in an unsupervised manner, the main steps to achieve the task and locate the steps in the input videos.",
"title": ""
}
] |
scidocsrr
|
72931c3b66f1c91dcb617a852d9edad5
|
Deep Spatio-Temporal Features for Multimodal Emotion Recognition
|
[
{
"docid": "cd4f6f8478bfd5fac7b80f14371f21a2",
"text": "In this paper, we present human emotion recognition systems based on audio and spatio-temporal visual features. The proposed system has been tested on audio visual emotion data set with different subjects for both genders. The mel-frequency cepstral coefficient (MFCC) and prosodic features are first identified and then extracted from emotional speech. For facial expressions spatio-temporal features are extracted from visual streams. Principal component analysis (PCA) is applied for dimensionality reduction of the visual features and capturing 97 % of variances. Codebook is constructed for both audio and visual features using Euclidean space. Then occurrences of the histograms are employed as input to the state-of-the-art SVM classifier to realize the judgment of each classifier. Moreover, the judgments from each classifier are combined using Bayes sum rule (BSR) as a final decision step. The proposed system is tested on public data set to recognize the human emotions. Experimental results and simulations proved that using visual features only yields on average 74.15 % accuracy, while using audio features only gives recognition average accuracy of 67.39 %. Whereas by combining both audio and visual features, the overall system accuracy has been significantly improved up to 80.27 %.",
"title": ""
}
] |
[
{
"docid": "54a47a57296658ca0e8bae74fd99e8f0",
"text": "Road traffic accidents are among the top leading causes of deaths and injuries of various levels. Ethiopia is experiencing highest rate of such accidents resulting in fatalities and various levels of injuries. Addis Ababa, the capital city of Ethiopia, takes the lion’s share of the risk having higher number of vehicles and traffic and the cost of these fatalities and injuries has a great impact on the socio-economic development of a society. This research is focused on developing adaptive regression trees to build a decision support system to handle road traffic accident analysis for Addis Ababa city traffic office. The study focused on injury severity levels resulting from an accident using real data obtained from the Addis Ababa traffic office. Empirical results show that the developed models could classify accidents within reasonable accuracy.",
"title": ""
},
{
"docid": "f7c9cf0cef0a24ba199401adc2a7260c",
"text": "MOBA (Multiplayer Online Battle Arena) games are currently one of the most popular online video game genres. This paper discusses implementation of a typical MOBA game prototype for Windows platform in a popular game engine Unity 5. The focus is put on using the built-in Unity components in a MOBA setting, developing additional behaviours using Unity's Scripting API for C# and integrating third party components such as the networking engine, 3D models, and particle systems created for use with Unity and available through the Unity Asset Store. A brief overview of useful programming design patterns as well as design patterns already used in Unity is given. Various game state synchronization mechanisms available in the chosen networking engine, Photon Unity Networking, and their usage when synchronizing different types of game information over multiple clients are also discussed. The implemented game retains most of the main features of the modern MOBA games such as heroes with different play styles, skills, team versus team competition, resource collection and consumption, varied maps and defensive structures. The paper concludes with comments on Unity 5 as a MOBA game development environment and execution engine.",
"title": ""
},
{
"docid": "30ec2dafe3b931a1dd126a73ae6f1bc3",
"text": "In 1970, David Raskin, a psychologist and researcher at the University of Utah, began a study of the probable lie comparison question polygraph technique. Raskin and his colleagues systematically refined the elements of polygraphy by determining what aspects of the technique could be scientifically proven to increase validity and reliability (Raskin & Honts 2002). Their efforts culminated in the creation of what is known today as the Utah approach to the Comparison Question Test (CQT), an empirically consistent and unified approach to polygraphy. The Utah-CQT, was traditionally employed as a single issue Zone Comparison Test (ZCT). It is amenable to other uses including multi-facet testing of a single crime issue, as a Modified General Question Technique (MGQT) format, or as a multiple-issue (mixed-issue) General Question Technique (GQT). The Utah-CQT and the corresponding Utah Numerical Scoring System (Bell, Raskin, Honts & Kircher, 1999; Handler, 2006) resulted from over 30 years of scientific research and scientific peerreview. The resulting technique provides some of the highest rates of criterion accuracy and interrater reliability of any polygraph examination protocol (Senter, Dollins & Krapohl, 2004; Krapohl, 2006) when applied in an event-specific testing situation. The authors discuss the UtahCQT using the Probable Lie Test (PLT) as well as the lesser known Directed Lie Test (DLT) and review some of the possible benefits offered by each method. Test Structure and Administration The Utah-CQT begins as other testing procedures do, with the pre-test interview, accomplished in a non-accusatory manner. The examiner should obtain the necessary test release that includes a brief statement of allegations or issues to be resolved, and if applicable, a statutory rights waiver and then collects general biographical and medical information from the test subject. Rapportbuilding discussion gives the examiner a chance to evaluate the test subject’s suitability for the examination. Interaction with the test subject also gives the examiner the chance to do a rough assessment of the test subject’s verbal and mental abilities that will later be used to help word the examination questions. In the PLT version, the examiner uses this period of conversation to develop material for comparison questions to be used during the testing phase of the examination, although the nature of the issues to be resolved usually dictates the general content of the comparison questions. The examiner does not, however, lecture the test subject regarding past transgressions during this comparison question material review. This portion of the interview is conducted with open-ended questions and the careful use of suggestions as opposed to an interrogation of past deeds. The version of this paper originally published in Polygraph was rewritten with greater detail for the journal European Polygraph, and the authors recommended the more detailed article for republication in Polygraph for this special edition. It appears here with the kind permission of the authors and the Editor of European Polygraph. The citation is: Handler, M. & Nelson, R., (2008). Utah approach to comparison question polygraph testing. European Polygraph, 2(2), 83-110. The authors thank David Raskin, Charles Honts, Don Krapohl, John Kircher and Frank Horvath for their thoughtful reviews and comments to an earlier drafts of and revisions to this paper. 
The examiner points out any monitoring or recording devices in the examination room and explains the purpose for having the exam monitored and/or recorded. In the Utah-CQT approach all examinations should be recorded in their entirety. In an age in which video and audio recording technology is easily available and fully integrated into all modern field polygraph systems, there is no reason to forgo the advantages of a complete video and audio recording of all polygraph examinations. It is only through complete recordings that meaningful quality assurance is possible. Frankness regarding monitoring devices helps assure the test subject the test will be conducted in a professional manner and may assist in convincing the test subject that the examiner is being open and truthful. Brief explanation of any quality assurance program also assists in establishing a professional and trustworthy atmosphere. The examiner advises the test subject of the general nature of the allegations and the specific issues to be resolved by the examination. The test subject is then given the opportunity to provide a “free narrative” to discuss his or her knowledge of and/or role in the incident. The goal of the free narrative discussion is to obtain information from the test subject without confrontation or undue stress. In general the examiner should allow the test subject to tell his or her story without interruption. The examiner informs the test subject of the case facts in a low-key approach and should advise the test subject that these are allegations and ensure the test subject understands the difference between allegations and facts known to be true. The examiner should note inconsistencies or other matters to which he or she may wish to return once the test subject finishes the narrative. The examiner does not argue with the test subject nor does the examiner challenge the test subject’s version of the case facts. The examiner encourages the test subject to be candid in order to formulate the test questions in a succinct and clear manner. In polygraph screening or monitoring programs (i.e., LEPET, security, PCSOT), the Utah-CQT may be used as a mixed-issue (multiple-issue) examination, similar to the AFMGQT, in the absence of a known allegation or known incident. In these programs discussion of the known allegation or known incident will be replaced with a structured interview protocol, which addresses content areas pertinent to the risk or compliance issues under investigation. It should be noted that these applications of polygraph testing have not been investigated as thoroughly as other uses, and scientific investigation and verification of such uses are more limited. This low key, non-accusatory approach presents the examiner as a neutral seeker of the truth and helps to allay fears of preconceived guilt. If there are inconsistencies or other matters that require follow-up or clarification before the examination, they are discussed at this time in a nonconfrontational fashion. After the narrative and the discussion of any other issues, the components are placed on the test subject. During this process, the functions of various polygraph component sensors are discussed, and a general explanation of the psychophysiology that underlies the polygraph test is provided. This may be done through a general discussion of the anecdotes that illustrate psychophysiological responding and various possible causes of arousal (Handler & Honts, 2007).
The goal of this portion of the interview is to ensure in the test subject an understanding that lying will inevitably be associated with physiological response. Once the components are placed on the test subject, the examiner conducts an acquaintance test. The acquaintance test is generally a known solution peak of tension test that is used to demonstrate the efficacy of the polygraph examination. Other approaches to the acquaintance test are not prohibited and would not invalidate an examination. In the known-solution acquaintance test, the test subject is told to select a number such that there will be some additional or padding questions before and after the selected number. This can be accomplished by directing the test subject to select a number between 3 and 6 and write that number on a piece of paper. The paper may then be",
"title": ""
},
{
"docid": "384a0a9d9613750892225562cb5ff113",
"text": "Large scale, high concurrency, and vast amount of data are important trends for the new generation of website. Node.js becomes popular and successful to build data-intensive web applications. To study and compare the performance of Node.js, Python-Web and PHP, we used benchmark tests and scenario tests. The experimental results yield some valuable performance data, showing that PHP and Python-Web handle much less requests than that of Node.js in a certain time. In conclusion, our results clearly demonstrate that Node.js is quite lightweight and efficient, which is an idea fit for I/O intensive websites among the three, while PHP is only suitable for small and middle scale applications, and Python-Web is developer friendly and good for large web architectures. To the best of our knowledge, this is the first paper to evaluate these Web programming technologies with both objective systematic tests (benchmark) and realistic user behavior tests (scenario), especially taking Node.js as the main topic to discuss.",
"title": ""
},
{
"docid": "2d02bf71ee22e062d12ce4ec0b53d4c9",
"text": "BACKGROUND\nTherapies that maintain remission for patients with Crohn's disease are essential. Stable remission rates have been demonstrated for up to 2 years in adalimumab-treated patients with moderately to severely active Crohn's disease enrolled in the CHARM and ADHERE clinical trials.\n\n\nAIM\nTo present the long-term efficacy and safety of adalimumab therapy through 4 years of treatment.\n\n\nMETHODS\nRemission (CDAI <150), response (CR-100) and corticosteroid-free remission over 4 years, and maintenance of these endpoints beyond 1 year were assessed in CHARM early responders randomised to adalimumab. Corticosteroid-free remission was also assessed in all adalimumab-randomised patients using corticosteroids at baseline. Fistula healing was assessed in adalimumab-randomised patients with fistula at baseline. As observed, last observation carried forward and a hybrid nonresponder imputation analysis for year 4 (hNRI) were used to report efficacy. Adverse events were reported for any patient receiving at least one dose of adalimumab.\n\n\nRESULTS\nOf 329 early responders randomised to adalimumab induction therapy, at least 30% achieved remission (99/329) or CR-100 (116/329) at year 4 of treatment (hNRI). The majority of patients (54%) with remission at year 1 maintained this endpoint at year 4 (hNRI). At year 4, 16% of patients taking corticosteroids at baseline were in corticosteroid-free remission and 24% of patients with fistulae at baseline had healed fistulae. The incidence rates of adverse events remained stable over time.\n\n\nCONCLUSIONS\nProlonged adalimumab therapy maintained clinical remission and response in patients with moderately to severely active Crohn's disease for up to 4 years. No increased risk of adverse events or new safety signals were identified with long-term maintenance therapy. (clinicaltrials.gov number: NCT00077779).",
"title": ""
},
{
"docid": "ea12c2b64eab8fdaed954450875effa8",
"text": "Transformation of experience into memories that can guide future behavior is a common ability across species. However, only humans can declare their perceptions and memories of experienced events (episodes). The medial temporal lobe (MTL) is central to episodic memory, yet the neuronal code underlying the translation from sensory information to memory remains unclear. Recordings from neurons within the brain in patients who have electrodes implanted for clinical reasons provide an opportunity to bridge physiology with cognitive theories. Recent evidence illustrates several striking response properties of MTL neurons. Responses are selective yet invariant, associated with conscious perception, can be internally generated and modulated, and spontaneously retrieved. Representation of information by these neurons is highly explicit, suggesting abstraction of information for future conscious recall.",
"title": ""
},
{
"docid": "78952b9185a7fb1d8e7bd7723bb1021b",
"text": "We develop and apply two new methods for analyzing file system behavior and evaluating file system changes. First, semantic block-level analysis (SBA) combines knowledge of on-disk data structures with a trace of disk traffic to infer file syste m behavior; in contrast to standard benchmarking approaches, S BA enables users to understand why the file system behaves as it does. Second, semantic trace playback (STP) enables traces of disk traffic to be easily modified to represent changes in the fi le system implementation; in contrast to directly modifying t he file system, STP enables users to rapidly gauge the benefits of new policies. We use SBA to analyze Linux ext3, ReiserFS, JFS, and Windows NTFS; in the process, we uncover many strengths and weaknesses of these journaling file systems. We also appl y STP to evaluate several modifications to ext3, demonstratin g the benefits of various optimizations without incurring the cos ts of a real implementation.",
"title": ""
},
{
"docid": "114e6cde6a38bcbb809f19b80110c16f",
"text": "This paper proposes a neural semantic parsing approach – Sequence-to-Action, which models semantic parsing as an endto-end semantic graph generation process. Our method simultaneously leverages the advantages from two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which has a tight-coupling with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose a RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.",
"title": ""
},
{
"docid": "72e9e772ede3d757122997d525d0f79c",
"text": "Deep learning systems, such as Convolutional Neural Networks (CNNs), can infer a hierarchical representation of input data that facilitates categorization. In this paper, we propose to learn affect-salient features for Speech Emotion Recognition (SER) using semi-CNN. The training of semi-CNN has two stages. In the first stage, unlabeled samples are used to learn candidate features by contractive convolutional neural network with reconstruction penalization. The candidate features, in the second step, are used as the input to semi-CNN to learn affect-salient, discriminative features using a novel objective function that encourages the feature saliency, orthogonality and discrimination. Our experiment results on benchmark datasets show that our approach leads to stable and robust recognition performance in complex scenes (e.g., with speaker and environment distortion), and outperforms several well-established SER features.",
"title": ""
},
{
"docid": "63046d1ca19a158052a62c8719f5f707",
"text": "Cloud machine learning (CML) techniques offer contemporary machine learning services, with pre-trained models and a service to generate own personalized models. This paper presents a completely unique emotional modeling methodology for incorporating human feeling into intelligent systems. The projected approach includes a technique to elicit emotion factors from users, a replacement illustration of emotions and a framework for predicting and pursuit user’s emotional mechanical phenomenon over time. The neural network based CML service has better training concert and enlarged exactness compare to other large scale deep learning systems. Opinions are important to almost all human activities and cloud based sentiment analysis is concerned with the automatic extraction of sentiment related information from text. With the rising popularity and availability of opinion rich resources such as personal blogs and online appraisal sites, new opportunities and issues arise as people now, actively use information technologies to explore and capture others opinions. In the existing system, a segmentation ranking model is designed to score the usefulness of a segmentation candidate for sentiment classification. A classification model is used for predicting the sentiment polarity of segmentation. The joint framework is trained directly using the sentences annotated with only sentiment polarity, without the use of any syntactic or sentiment annotations in segmentation level. However the existing system still has issue with classification accuracy results. To improve the classification performance, in the proposed system, cloud integrate the support vector machine, naive bayes and neural network algorithms along with joint segmentation approaches has been proposed to classify the very positive, positive, neutral, negative and very negative features more effectively using important feature selection. Also to handle the outliers we apply modified k-means clustering method on the given dataset. It is used to cloud cluster the outliers and hence the label as well as unlabeled features is handled efficiently. From the experimental result, we conclude that the proposed system yields better performance than the existing system.",
"title": ""
},
{
"docid": "9d30a391706acd3d5b34dff504f3f3a6",
"text": "A major determinant of forgetting in memory is the presence of interference in the retrieval context. Previous research has shown that proactive interference has less impact for emotional than neutral study material (Levens & Phelps, 2008). However, it is unclear how emotional content affects the impact of interference in memory. Emotional content could directly affect the buildup of interference, leading to reduced levels of interference. Alternatively, emotional content could affect the controlled processes that resolve interference. The present study employed the response deadline speed-accuracy trade-off procedure to independently test these hypotheses. Participants studied 3-item lists consisting of emotional or neutral images, immediately followed by a recognition probe. Results indicated a slower rate of accrual for interfering material (lures from previous study list) and lower levels of interference for emotional than neutral stimuli, suggesting a direct impact of emotion on the buildup of interference. In contrast to this beneficiary effect, resolution of interference for emotional material was less effective than neutral material. These findings can provide insight into the interactions of emotion and memory processes.",
"title": ""
},
{
"docid": "9a6ee40c3cd66ade4c9e1401505ec321",
"text": "Secretion of saliva to aid swallowing and digestion is an important physiological function found in many vertebrates and invertebrates. Pavlov reported classical conditioning of salivation in dogs a century ago. Conditioning of salivation, however, has been so far reported only in dogs and humans, and its underlying neural mechanisms remain elusive because of the complexity of the mammalian brain. We previously reported that, in cockroaches Periplaneta americana, salivary neurons that control salivation exhibited increased responses to an odor after conditioning trials in which the odor was paired with sucrose solution. However, no direct evidence of conditioning of salivation was obtained. In this study, we investigated the effects of conditioning trials on the level of salivation. Untrained cockroaches exhibited salivary responses to sucrose solution applied to the mouth but not to peppermint or vanilla odor applied to an antenna. After differential conditioning trials in which an odor was paired with sucrose solution and another odor was presented without pairing with sucrose solution, sucrose-associated odor induced an increase in the level of salivation, but the odor presented alone did not. The conditioning effect lasted for one day after conditioning trials. This study demonstrates, for the first time, classical conditioning of salivation in species other than dogs and humans, thereby providing the first evidence of sophisticated neural control of autonomic function in insects. The results provide a useful model system for studying cellular basis of conditioning of salivation in the simpler nervous system of insects.",
"title": ""
},
{
"docid": "da7f6149def6f7bfbb968af0a9f88705",
"text": "Deep Matching (DM) is a popular high-quality method for quasi-dense image matching. Despite its name, however, the original DM formulation does not yield a deep neural network that can be trained end-to-end via backpropagation. In this paper, we remove this limitation by rewriting the complete DM algorithm as a convolutional neural network. This results in a novel deep architecture for image matching that involves a number of new layer types and that, similar to recent networks for image segmentation, has a U-topology. We demonstrate the utility of the approach by improving the performance of DM by learning it end-to-end on an image matching task.",
"title": ""
},
{
"docid": "2d57ab9827a0dde1b35f0739588f1eee",
"text": "Probabilistic topic models could be used to extract lowdimension topics from document collections. However, such models without any human knowledge often produce topics that are not interpretable. In recent years, a number of knowledge-based topic models have been proposed, but they could not process fact-oriented triple knowledge in knowledge graphs. Knowledge graph embeddings, on the other hand, automatically capture relations between entities in knowledge graphs. In this paper, we propose a novel knowledge-based topic model by incorporating knowledge graph embeddings into topic modeling. By combining latent Dirichlet allocation, a widely used topic model with knowledge encoded by entity vectors, we improve the semantic coherence significantly and capture a better representation of a document in the topic space. Our evaluation results will demonstrate the effectiveness of our method.",
"title": ""
},
{
"docid": "d63a760289c7ecb903ad26db7b0b838d",
"text": "A new gain linearized varactor bank suitable for wideband voltage-controlled oscillators (VCOs) is presented in this paper. The VCO tuning gain linearized techniques, namely the gain variation compensation and linear tuning range extension techniques, are used in the proposed varactor bank to achieve a further reduced VCO tuning gain with low variation. The phase noise from Amplitude Modulation to Phase Modulation up conversion is considerably improved thanks to the reduced VCO tuning gain. Fabricated in a 0.18-μm CMOS technology, a 3-bits VCO prototype employing the proposed varactor bank achieves <;5% gain variation at the output frequency from 4.1 to 5 GHz, and exhibits maximum power consumption of 7.2 mW at its peak frequency, 5 GHz.",
"title": ""
},
{
"docid": "3230ef371e7475cfa82c7ab240fdd610",
"text": "After a decade of fundamental interdisciplinary research in machine learning, the spadework in this field has been done; the 1990s should see the widespread exploitation of knowledge discovery as an aid to assembling knowledge bases. The contributors to the AAAI Press book Knowledge Discovery in Databases were excited at the potential benefits of this research. The editors hope that some of this excitement will communicate itself to \"AI Magazine readers of this article.",
"title": ""
},
{
"docid": "9cb16594b916c5d11c189e80c0ac298a",
"text": "This paper describes the design of an innovative and low cost self-assistive technology that is used to facilitate the control of a wheelchair and home appliances by using advanced voice commands of the disabled people. This proposed system will provide an alternative to the physically challenged people with quadriplegics who is permanently unable to move their limbs (but who is able to speak and hear) and elderly people in controlling the motion of the wheelchair and home appliances using their voices to lead an independent, confident and enjoyable life. The performance of this microcontroller based and voice integrated design is evaluated in terms of accuracy and velocity in various environments. The results show that it could be part of an assistive technology for the disabled persons without any third person’s assistance.",
"title": ""
},
{
"docid": "4348f2af97c7a02f988df350a0729040",
"text": "Societies are complex systems, which tend to polarize into subgroups of individuals with dramatically opposite perspectives. This phenomenon is reflected-and often amplified-in online social networks, where, however, humans are no longer the only players and coexist alongside with social bots-that is, software-controlled accounts. Analyzing large-scale social data collected during the Catalan referendum for independence on October 1, 2017, consisting of nearly 4 millions Twitter posts generated by almost 1 million users, we identify the two polarized groups of Independentists and Constitutionalists and quantify the structural and emotional roles played by social bots. We show that bots act from peripheral areas of the social system to target influential humans of both groups, bombarding Independentists with violent contents, increasing their exposure to negative and inflammatory narratives, and exacerbating social conflict online. Our findings stress the importance of developing countermeasures to unmask these forms of automated social manipulation.",
"title": ""
},
{
"docid": "74260280ebe49952537858ba82c3cbfc",
"text": "Pretarsal roll augmentation with dermal hyaluronic acid filler injection focuses on restoring pretarsal fullness. This study aimed to introduce a method of pretarsal roll augmentation with dermal hyaluronic acid filler injection and establish the level of difficulty, safety, and effectiveness of this method. Eighty female patients were enrolled in this study. Hyaluronic acid filler was used to perform pretarsal roll augmentation. Physician and patient satisfaction at 1 month and 4 months after surgery was investigated. The level of satisfaction was graded from points 1 to 5. The patient satisfaction and physician scores were 4.7 ± 1.1 (mean ± standard deviation) points at 1 month and 4.8 ± 0.9 points at 4 months and 4.6 ± 0.9 points at 1 month and 4.8 ± 1.0 points at 4 months, respectively. No major complications were observed. Our technique provided a natural and younger appearance with pretarsal fullness. This technique was easy to perform for the restoration of pretarsal fullness, and it improved periorbital contouring, rejuvenated the pretarsal roll, and provided excellent esthetic results. Level of Evidence: Level V, therapeutic study.",
"title": ""
},
{
"docid": "6c0f290498aef3ebef654a495d1af04e",
"text": "The Bristol LoRaWAN Network is a low power radio network for the Internet of Things, based on LoRaWAN and utilising The Things Network. LoRaWAN is a Low Power Wide Area Network (LPWAN) specification intended for wireless battery operated Things in regional, national, or global networks. LoRaWAN targets key requirements of internet of things such as secure bi-directional communication, mobility and localisation services. A 2015 pilot programme in the city of Amsterdam aimed to cover the entire city with just 10 wireless gateways. LoRaWAN Bristol aims to replicate this experiment in the UK.",
"title": ""
}
] |
scidocsrr
|
e3159ac94777391ad11d0995b718c746
|
Bridging Physical and Virtual Worlds: Complex Event Processing for RFID Data Streams
|
[
{
"docid": "c02a55b5a3536f3ab12c65dd0d3037ef",
"text": "The emergence of large-scale receptor-based systems has enabled applications to execute complex business logic over data generated from monitoring the physical world. An important functionality required by these applications is the detection and response to complex events, often in real-time. Bridging the gap between low-level receptor technology and such high-level needs of applications remains a significant challenge.We demonstrate our solution to this problem in the context of HiFi, a system we are building to solve the data management problems of large-scale receptor-based systems. Specifically, we show how HiFi generates simple events out of receptor data at its edges and provides high-functionality complex event processing mechanisms for sophisticated event detection using a real-world library scenario.",
"title": ""
}
] |
[
{
"docid": "86bc723bb07eaf07c424d3f089d5e310",
"text": "We experimentally demonstrate an optical system that uses a semiconductor optical amplifier (SOA) to perform adaptive, analog self-interference cancellation for radio-frequency signals. The system subtracts a known interference signal from a corrupted received signal to recover a weak signal of interest. The SOA uses a combination of slow and fast light and cross-gain modulation to perform precise amplitude and phase matching to cancel the interference. The system achieves 38 dB of cancellation across 60-MHz instantaneous bandwidth and 56 dB of narrowband cancellation, limited by noise. The Nelder-Mead simplex algorithm is used to adaptively minimize the interference power through the control of the semiconductor's bias current and input optical power.",
"title": ""
},
{
"docid": "6a66a990b38422abaf46a126fcb61543",
"text": "We quantify the lexical subjectivity of adjectives using a corpus-based method, and show for the first time that it correlates with noun concreteness in large corpora. These cognitive dimensions together influence how word meanings combine, and we exploit this fact to achieve performance improvements on the semantic classification of adjective-noun pairs.",
"title": ""
},
{
"docid": "c433a12078d0933baa7c5f5c812a0ecd",
"text": "OBJECTIVES\nOur objective was to estimate the incidence of recent burnout in a large sample of Taiwanese physicians and analyze associations with job related satisfaction and medical malpractice experience.\n\n\nMETHODS\nWe performed a cross-sectional survey. Physicians were asked to fill out a questionnaire that included demographic information, practice characteristics, burnout, medical malpractice experience, job satisfaction, and medical error experience. There are about 2% of total physicians. Physicians who were members of the Taiwan Society of Emergency Medicine, Taiwan Surgical Association, Taiwan Association of Obstetrics and Gynecology, The Taiwan Pediatric Association, and Taiwan Stroke Association, and physicians of two medical centers, three metropolitan hospitals, and two local community hospitals were recruited.\n\n\nRESULTS\nThere is high incidence of burnout among Taiwan physicians. In our research, Visiting staff (VS) and residents were more likely to have higher level of burnout of the emotional exhaustion (EE) and depersonalization (DP), and personal accomplishment (PA). There was no difference in burnout types in gender. Married had higher-level burnout in EE. Physicians who were 20~30 years old had higher burnout levels in EE, those 31~40 years old had higher burnout levels in DP, and PA. Physicians who worked in medical centers had a higher rate in EE, DP, and who worked in metropolitan had higher burnout in PA. With specialty-in-training, physicians had higher-level burnout in EE and DP, but lower burnout in PA. Physicians who worked 13-17hr continuously had higher-level burnout in EE. Those with ≥41 times/week of being on call had higher-level burnout in EE and DP. Physicians who had medical malpractice experience had higher-level burnout in EE, DP, and PA. Physicians who were not satisfied with physician-patient relationships had higher-level burnout than those who were satisfied.\n\n\nCONCLUSION\nPhysicians in Taiwan face both burnout and a high risk in medical malpractice. There is high incidence of burnout among Taiwan physicians. This can cause shortages in medical care human resources and affect patient safety. We believe that high burnout in physicians was due to long working hours and several other factors, like mental depression, the evaluation assessment system, hospital culture, patient-physician relationships, and the environment. This is a very important issue on public health that Taiwanese authorities need to deal with.",
"title": ""
},
{
"docid": "a00ac4cefbb432ffcc6535dd8fd56880",
"text": "Mobile activity recognition focuses on inferring current user activities by leveraging sensory data available on today's sensor rich mobile phones. Supervised learning with static models has been applied pervasively for mobile activity recognition. In this paper, we propose a novel phone-based dynamic recognition framework with evolving data streams for activity recognition. The novel framework incorporates incremental and active learning for real-time recognition and adaptation in streaming settings. While stream evolves, we refine, enhance and personalise the learning model in order to accommodate the natural drift in a given data stream. Extensive experimental results using real activity recognition data have evidenced that the novel dynamic approach shows improved performance of recognising activities especially across different users. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "817c30996704fa58d8eb527fced31630",
"text": "Image classification, a complex perceptual task with many real life important applications, faces a major challenge in presence of noise. Noise degrades the performance of the classifiers and makes them less suitable in real life scenarios. To solve this issue, several researches have been conducted utilizing denoising autoencoder (DAE) to restore original images from noisy images and then Convolutional Neural Network (CNN) is used for classification. The existing models perform well only when the noise level present in the training set and test set are same or differs only a little. To fit a model in real life applications, it should be independent to level of noise. The aim of this study is to develop a robust image classification system which performs well at regular to massive noise levels. The proposed method first trains a DAE with low-level noise-injected images and a CNN with noiseless native images independently. Then it arranges these two trained models in three different combinational structures: CNN, DAE-CNN, and DAE-DAECNN to classify images corrupted with zero, regular and massive noises, accordingly. Final system outcome is chosen by applying the winner-takes-all combination on individual outcomes of the three structures. Although proposed system consists of three DAEs and three CNNs in different structure layers, the DAEs and CNNs are the copy of same DAE and CNN trained initially which makes it computationally efficient as well. In DAE-DAECNN, two identical DAEs are arranged in a cascaded structure to make the structure well suited for classifying massive noisy data while the DAE is trained with low noisy image data. The proposed method is tested with MNIST handwritten numeral dataset with different noise levels. Experimental results revealed the effectiveness of the proposed method showing better results than individual structures as well as the other related methods. Keywords—Image denoising; denoising autoencoder; cascaded denoising autoencoder; convolutional neural network",
"title": ""
},
{
"docid": "8726e80818f0619f5157ad2295dee7df",
"text": "The OptaSense® Distributed Acoustic Sensing (DAS) system is an acoustic and seismic sensing capability that uses simple fibre optic communications cables as the sensor. Using existing or new cables, it can provide low-cost and high-reliability surface crossing and tunnel construction detection, with power and communications services needed only every 80-100 km. The technology has been proven in worldwide security operations at over one hundred locations in a variety of industries including oil and gas pipelines, railways, and high-value facility perimeters - a total of 100,000,000 kilometre-hours of linear asset protection. The system reliably detects a variety of border threats with very few nuisance alarms. It can work in concert with existing border surveillance technologies to provide security personnel a new value proposition for fighting trans-border crime. Its ability to detect, classify and locate activity over hundreds of kilometres and provide information in an accurate and actionable way has proven OptaSense to be a cost-effective solution for monitoring long borders. It has been scaled to cover 1500 km controlled by a single central monitoring station in pipeline applications.",
"title": ""
},
{
"docid": "97672636ef85a0bb489e61f8e65b28e3",
"text": "In the legal domain it is important to differentiate between words in general, and afterwards to link the occurrences of the same entities. The topic to solve these challenges is called Named-Entity Linking (NEL). Current supervised neural networks designed for NEL use publicly available datasets for training and testing. However, this paper focuses especially on the aspect of applying transfer learning approach using networks trained for NEL to legal documents. Experiments show consistent improvement in the legal datasets that were created from the European Union law in the scope of this research. Using transfer learning approach, we reached F1-score of 98.90% and 98.01% on the legal small and large test dataset.",
"title": ""
},
{
"docid": "5a0da0bad12a1f0e9a5a2a272519c49e",
"text": "Recurrent neural networks have been very successful at pred icting sequences of words in tasks such as language modeling. However, all such m odels are based on the conventional classification framework, where model is t rained against one-hot targets, and each word is represented both as an input and as a output in isolation. This causes inefficiencies in learning both in terms of utili zing all of the information and in terms of the number of parameters needed to train. We introduce a novel theoretical framework that facilitates better learn ing in language modeling, and show that our framework leads to tying together the input embedding and the output projection matrices, greatly reducing the numbe r of trainable variables. Our LSTM model lowers the state of the art word-level perplex ity on the Penn Treebank to 68.5.",
"title": ""
},
{
"docid": "38012834c3e533adad68fb0d8377f7db",
"text": "Undersampling the k -space data is widely adopted for acceleration of Magnetic Resonance Imaging (MRI). Current deep learning based approaches for supervised learning of MRI image reconstruction employ real-valued operations and representations by treating complex valued k-space/spatial-space as real values. In this paper, we propose complex dense fully convolutional neural network (CDFNet) for learning to de-alias the reconstruction artifacts within undersampled MRI images. We fashioned a densely-connected fully convolutional block tailored for complex-valued inputs by introducing dedicated layers such as complex convolution, batch normalization, non-linearities etc. CDFNet leverages the inherently complex-valued nature of input k -space and learns richer representations. We demonstrate improved perceptual quality and recovery of anatomical structures through CDFNet in contrast to its realvalued counterparts.",
"title": ""
},
{
"docid": "ec52b4c078c14a0d564577438846f178",
"text": "Millions of students across the United States cannot benefit fully from a traditional educational program because they have a disability that impairs their ability to participate in a typical classroom environment. For these students, computer-based technologies can play an especially important role. Not only can computer technology facilitate a broader range of educational activities to meet a variety of needs for students with mild learning disorders, but adaptive technology now exists than can enable even those students with severe disabilities to become active learners in the classroom alongside their peers who do not have disabilities. This article provides an overview of the role computer technology can play in promoting the education of children with special needs within the regular classroom. For example, use of computer technology for word processing, communication, research, and multimedia projects can help the three million students with specific learning and emotional disorders keep up with their nondisabled peers. Computer technology has also enhanced the development of sophisticated devices that can assist the two million students with more severe disabilities in overcoming a wide range of limitations that hinder classroom participation--from speech and hearing impairments to blindness and severe physical disabilities. However, many teachers are not adequately trained on how to use technology effectively in their classrooms, and the cost of the technology is a serious consideration for all schools. Thus, although computer technology has the potential to act as an equalizer by freeing many students from their disabilities, the barriers of inadequate training and cost must first be overcome before more widespread use can become a reality.",
"title": ""
},
{
"docid": "90489f48161a13734cb91da56d4fad87",
"text": "Given that the neural and connective tissues of the optic nerve head (ONH) exhibit complex morphological changes with the development and progression of glaucoma, their simultaneous isolation from optical coherence tomography (OCT) images may be of great interest for the clinical diagnosis and management of this pathology. A deep learning algorithm was designed and trained to digitally stain (i.e. highlight) 6 ONH tissue layers by capturing both the local (tissue texture) and contextual information (spatial arrangement of tissues). The overall dice coefficient (mean of all tissues) was 0.91 ± 0.05 when assessed against manual segmentations performed by an expert observer. We offer here a robust segmentation framework that could be extended for the automated parametric study of the ONH tissues.",
"title": ""
},
{
"docid": "5365f6f5174c3d211ea562c8a7fa0aab",
"text": "Generative Adversarial Networks (GANs) have become one of the dominant methods for fitting generative models to complicated real-life data, and even found unusual uses such as designing good cryptographic primitives. In this talk, we will first introduce the ba- sics of GANs and then discuss the fundamental statistical question about GANs — assuming the training can succeed with polynomial samples, can we have any statistical guarantees for the estimated distributions? In the work with Arora, Ge, Liang, and Zhang, we suggested a dilemma: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse. Such a conundrum may be solved or alleviated by designing discrimina- tor class with strong distinguishing power against the particular generator class (instead of against all possible generators.)",
"title": ""
},
{
"docid": "58e176bb818efed6de7224d7088f2487",
"text": "In the context of marketing, attribution is the process of quantifying the value of marketing activities relative to the final outcome. It is a topic rapidly growing in importance as acknowledged by the industry. However, despite numerous tools and techniques designed for its measurement, the absence of a comprehensive assessment and classification scheme persists. Thus, we aim to bridge this gap by providing an academic review to accumulate and comprehend current knowledge in attribution modeling, leading to a road map to guide future research, expediting new knowledge creation.",
"title": ""
},
{
"docid": "45ec4615b6cc593011eb9a7b714fb325",
"text": "There has been a drive recently to make sensor data accessible on the Web. However, because of the vast number of sensors collecting data about our environment, finding relevant sensors on the Web is a non-trivial challenge. In this paper, we present an approach to discovering sensors through a standard service interface over Linked Data. This is accomplished with a semantic sensor network middleware that includes a sensor registry on Linked Data and a sensor discovery service that extends the OGC Sensor Web Enablement. With this approach, we are able to access and discover sensors that are positioned near named-locations of interest.",
"title": ""
},
{
"docid": "4403c6e57037d74d305462faf1f4840e",
"text": "Most of the existing monocular visual-inertial SLAM techniques assume that the camera-IMU extrinsic parameters are known, therefore these methods merely estimate the initial values of velocity, visual scale, gravity, biases of gyroscope and accelerometer in the initialization stage. However, it's usually a professional work to carefully calibrate the extrinsic parameters, and it is required to repeat this work once the mechanical configuration of the sensor suite changes slightly. To tackle this problem, we propose an online initialization method to automatically estimate the initial values and the extrinsic parameters without knowing the mechanical configuration. The biases of gyroscope and accelerometer are considered in our method, and a convergence criteria for both orientation and translation calibration is introduced to identify the convergence and to terminate the initialization procedure. In the three processes of our method, an iterative strategy is firstly introduced to iteratively estimate the gyroscope bias and the extrinsic orientation. Secondly, the scale factor, gravity, and extrinsic translation are approximately estimated without considering the accelerometer bias. Finally, these values are further optimized by a refinement algorithm in which the accelerometer bias and the gravitational magnitude are taken into account. Extensive experimental results show that our method achieves competitive accuracy compared with the state-of-the-art with less calculation.",
"title": ""
},
{
"docid": "d96237fca40ac097e52146549672fbdf",
"text": "Cannabidiol (CBD) is a phytocannabinoid with therapeutic properties for numerous disorders exerted through molecular mechanisms that are yet to be completely identified. CBD acts in some experimental models as an anti-inflammatory, anticonvulsant, anti-oxidant, anti-emetic, anxiolytic and antipsychotic agent, and is therefore a potential medicine for the treatment of neuroinflammation, epilepsy, oxidative injury, vomiting and nausea, anxiety and schizophrenia, respectively. The neuroprotective potential of CBD, based on the combination of its anti-inflammatory and anti-oxidant properties, is of particular interest and is presently under intense preclinical research in numerous neurodegenerative disorders. In fact, CBD combined with Δ(9)-tetrahydrocannabinol is already under clinical evaluation in patients with Huntington's disease to determine its potential as a disease-modifying therapy. The neuroprotective properties of CBD do not appear to be exerted by the activation of key targets within the endocannabinoid system for plant-derived cannabinoids like Δ(9)-tetrahydrocannabinol, i.e. CB(1) and CB(2) receptors, as CBD has negligible activity at these cannabinoid receptors, although certain activity at the CB(2) receptor has been documented in specific pathological conditions (i.e. damage of immature brain). Within the endocannabinoid system, CBD has been shown to have an inhibitory effect on the inactivation of endocannabinoids (i.e. inhibition of FAAH enzyme), thereby enhancing the action of these endogenous molecules on cannabinoid receptors, which is also noted in certain pathological conditions. CBD acts not only through the endocannabinoid system, but also causes direct or indirect activation of metabotropic receptors for serotonin or adenosine, and can target nuclear receptors of the PPAR family and also ion channels.",
"title": ""
},
{
"docid": "193c60c3a14fe3d6a46b2624d45b70aa",
"text": "*Corresponding author: Shirin Sadat Ghiasi. Faculty of Medicine, Mashhad University of Medical Sciences, Mahshhad, Iran. E-mail: shirin.ghiasi@gmail.com Tel:+989156511388 This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons. org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. A Review Study on the Prenatal Diagnosis of Congenital Heart Disease Using Fetal Echocardiography",
"title": ""
},
{
"docid": "ac65c09468cd88765009abe49d9114cf",
"text": "It is known that head gesture and brain activity can reflect some human behaviors related to a risk of accident when using machine-tools. The research presented in this paper aims at reducing the risk of injury and thus increase worker safety. Instead of using camera, this paper presents a Smart Safety Helmet (SSH) in order to track the head gestures and the brain activity of the worker to recognize anomalous behavior. Information extracted from SSH is used for computing risk of an accident (a safety level) for preventing and reducing injuries or accidents. The SSH system is an inexpensive, non-intrusive, non-invasive, and non-vision-based system, which consists of an Inertial Measurement Unit (IMU) and dry EEG electrodes. A haptic device, such as vibrotactile motor, is integrated to the helmet in order to alert the operator when computed risk level (fatigue, high stress or error) reaches a threshold. Once the risk level of accident breaks the threshold, a signal will be sent wirelessly to stop the relevant machine tool or process.",
"title": ""
},
{
"docid": "8ac596c8360e2d56b24fee750d58a8b8",
"text": "Stemming is a process of reducing inflected words to their stem or root from a generally written word form. This process is used in many text mining application as a feature selection technique. Moreover, Arabic text summarization has increasingly become an important task in natural language processing area (NLP). Therefore, the aim of this paper is to evaluate the impact of three different Arabic stemmers (i.e. Khoja, Larekey and Alkhalil's stemmer) on the text summarization performance for Arabic language. The evaluation of the proposed system, with the three different stemmers and without stemming, on the dataset used shows that the best performance was achieved by Khoja stemmer in term of recall, precision and F1-measure. The evaluation also shows that the performances of the proposed system are significantly improved by applying the stemming process in the pre-processing stage.",
"title": ""
},
{
"docid": "0b3875a3447ff6e4e4415feefcb4c98b",
"text": "Although 'diseases of affluence', such as diabetes and cardiovascular disease, are increasing in developing countries, infectious diseases still impose the greatest health burden. Annually, just under 1 million people die from malaria, 4.3 million from acute respiratory infections, 2.9 million from enteric infections and 5 million from AIDS and tuberculosis. Other sexually transmitted infections and tropical parasitic infections are responsible for hundreds of thousands of deaths and an enormous burden of morbidity. More than 95% of these deaths occur in developing countries. Simple, accurate and stable diagnostic tests are essential to combat these diseases, but are usually unavailable or inaccessible to those who need them.",
"title": ""
}
] |
scidocsrr
|
3deb1e177be03258d46216481d401d2a
|
Sentiment Analysis of Yelp ‘ s Ratings Based on Text
|
[
{
"docid": "8a2586b1059534c5a23bac9c1cc59906",
"text": "The web contains a wealth of product reviews, but sifting through them is a daunting task. Ideally, an opinion mining tool would process a set of search results for a given item, generating a list of product attributes (quality, features, etc.) and aggregating opinions about each of them (poor, mixed, good). We begin by identifying the unique properties of this problem and develop a method for automatically distinguishing between positive and negative reviews. Our classifier draws on information retrieval techniques for feature extraction and scoring, and the results for various metrics and heuristics vary depending on the testing situation. The best methods work as well as or better than traditional machine learning. When operating on individual sentences collected from web searches, performance is limited due to noise and ambiguity. But in the context of a complete web-based tool and aided by a simple method for grouping sentences into attributes, the results are qualitatively quite useful.",
"title": ""
}
] |
[
{
"docid": "1313fbdd0721b58936a05da5080239df",
"text": "Bug tracking systems are valuable assets for managing maintenance activities. They are widely used in open-source projects as well as in the software industry. They collect many different kinds of issues: requests for defect fixing, enhancements, refactoring/restructuring activities and organizational issues. These different kinds of issues are simply labeled as \"bug\" for lack of a better classification support or of knowledge about the possible kinds.\n This paper investigates whether the text of the issues posted in bug tracking systems is enough to classify them into corrective maintenance and other kinds of activities.\n We show that alternating decision trees, naive Bayes classifiers, and logistic regression can be used to accurately distinguish bugs from other kinds of issues. Results from empirical studies performed on issues for Mozilla, Eclipse, and JBoss indicate that issues can be classified with between 77% and 82% of correct decisions.",
"title": ""
},
{
"docid": "7456af2a110a0f05b39d7d72e64ab553",
"text": "Initially mobile phones were developed only for voice communication but now days the scenario has changed, voice communication is just one aspect of a mobile phone. There are other aspects which are major focus of interest. Two such major factors are web browser and GPS services. Both of these functionalities are already implemented but are only in the hands of manufacturers not in the hands of users because of proprietary issues, the system does not allow the user to access the mobile hardware directly. But now, after the release of android based open source mobile phone a user can access the hardware directly and design customized native applications to develop Web and GPS enabled services and can program the other hardware components like camera etc. In this paper we will discuss the facilities available in android platform for implementing LBS services (geo-services).",
"title": ""
},
{
"docid": "41b83a85c1c633785766e3f464cbd7a6",
"text": "Distributed systems are easier to build than ever with the emergence of new, data-centric abstractions for storing and computing over massive datasets. However, similar abstractions do not exist for storing and accessing meta-data. To fill this gap, Tango provides developers with the abstraction of a replicated, in-memory data structure (such as a map or a tree) backed by a shared log. Tango objects are easy to build and use, replicating state via simple append and read operations on the shared log instead of complex distributed protocols; in the process, they obtain properties such as linearizability, persistence and high availability from the shared log. Tango also leverages the shared log to enable fast transactions across different objects, allowing applications to partition state across machines and scale to the limits of the underlying log without sacrificing consistency.",
"title": ""
},
{
"docid": "a73917d842c18ed9c36a13fe9187ea4c",
"text": "Brain Magnetic Resonance Image (MRI) plays a non-substitutive role in clinical diagnosis. The symptom of many diseases corresponds to the structural variants of brain. Automatic structure segmentation in brain MRI is of great importance in modern medical research. Some methods were developed for automatic segmenting of brain MRI but failed to achieve desired accuracy. In this paper, we proposed a new patch-based approach for automatic segmentation of brain MRI using convolutional neural network (CNN). Each brain MRI acquired from a small portion of public dataset is firstly divided into patches. All of these patches are then used for training CNN, which is used for automatic segmentation of brain MRI. Experimental results showed that our approach achieved better segmentation accuracy compared with other deep learning methods.",
"title": ""
},
{
"docid": "071b46c04389b6fe3830989a31991d0d",
"text": "Direct slicing of CAD models to generate process planning instructions for solid freeform fabrication may overcome inherent disadvantages of using stereolithography format in terms of the process accuracy, ease of file management, and incorporation of multiple materials. This paper will present the results of our development of a direct slicing algorithm for layered freeform fabrication. The direct slicing algorithm was based on a neutral, international standard (ISO 10303) STEP-formatted non-uniform rational B-spline (NURBS) geometric representation and is intended to be independent of any commercial CAD software. The following aspects of the development effort will be presented: (1) determination of optimal build direction based upon STEP-based NURBS models; (2) adaptive subdivision of NURBS data for geometric refinement; and (3) ray-casting slice generation into sets of raster patterns. The development also provides for multi-material slicing and will provide an effective tool in heterogeneous slicing processes. q 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "619b39299531f126769aa96b3e0e84e1",
"text": "In this paper, we focus on the opinion target extraction as part of the opinion mining task. We model the problem as an information extraction task, which we address based on Conditional Random Fields (CRF). As a baseline we employ the supervised algorithm by Zhuang et al. (2006), which represents the state-of-the-art on the employed data. We evaluate the algorithms comprehensively on datasets from four different domains annotated with individual opinion target instances on a sentence level. Furthermore, we investigate the performance of our CRF-based approach and the baseline in a singleand cross-domain opinion target extraction setting. Our CRF-based approach improves the performance by 0.077, 0.126, 0.071 and 0.178 regarding F-Measure in the single-domain extraction in the four domains. In the crossdomain setting our approach improves the performance by 0.409, 0.242, 0.294 and 0.343 regarding F-Measure over the baseline.",
"title": ""
},
{
"docid": "2f8361f2943ff90bf98c6b8a207086c4",
"text": "Real-life bugs are successful because of their unfailing ability to adapt. In particular this applies to their ability to adapt to strategies that are meant to eradicate them as a species. Software bugs have some of these same traits. We will discuss these traits, and consider what we can do about them.",
"title": ""
},
{
"docid": "2d8f76cef3d0c11441bbc8f5487588cb",
"text": "Abstract. It seems natural to assume that the more It seems natural to assume that the more closely robots come to resemble people, the more likely they are to elicit the kinds of responses people direct toward each other. However, subtle flaws in appearance and movement only seem eerie in very humanlike robots. This uncanny phenomenon may be symptomatic of entities that elicit a model of a human other but do not measure up to it. If so, a very humanlike robot may provide the best means of finding out what kinds of behavior are perceived as human, since deviations from a human other are more obvious. In pursuing this line of inquiry, it is essential to identify the mechanisms involved in evaluations of human likeness. One hypothesis is that an uncanny robot elicits an innate fear of death and culturally-supported defenses for coping with death’s inevitability. An experiment, which borrows from the methods of terror management research, was performed to test this hypothesis. Across all questions subjects who were exposed to a still image of an uncanny humanlike robot had on average a heightened preference for worldview supporters and a diminished preference for worldview threats relative to the control group.",
"title": ""
},
{
"docid": "48560dec9177dd68e6a2827395370a4e",
"text": "We present Segment-level Neural CRF, which combines neural networks with a linear chain CRF for segment-level sequence modeling tasks such as named entity recognition (NER) and syntactic chunking. Our segment-level CRF can consider higher-order label dependencies compared with conventional word-level CRF. Since it is difficult to consider all possible variable length segments, our method uses segment lattice constructed from the word-level tagging model to reduce the search space. Performing experiments on NER and chunking, we demonstrate that our method outperforms conventional word-level CRF with neural networks.",
"title": ""
},
{
"docid": "4d1eae0f247f1c2db9e3c544a65c041f",
"text": "This papers presents a new system using circular markers to estimate the pose of a camera. Contrary to most markersbased systems using square markers, we advocate the use of circular markers, as we believe that they are easier to detect and provide a pose estimate that is more robust to noise. Unlike existing systems using circular markers, our method computes the exact pose from one single circular marker, and do not need specific points being explicitly shown on the marker (like center, or axes orientation). Indeed, the center and orientation is encoded directly in the marker’s code. We can thus use the entire marker surface for the code design. After solving the back projection problem for one conic correspondence, we end up with two possible poses. We show how to find the marker’s code, rotation and final pose in one single step, by using a pyramidal cross-correlation optimizer. The marker tracker runs at 100 frames/second on a desktop PC and 30 frames/second on a hand-held UMPC.",
"title": ""
},
{
"docid": "51e3a023053b628de30adeb0730d3832",
"text": "The frequency characteristics of subpixel-based decimation with RGB vertical stripe and RGBX square-shaped subpixel arrangements are studied. To achieve higher apparent resolution than pixel-based decimation, the sampling locations are specially chosen for each of two subpixel arrangements, resulting in relatively small magnitudes of horizontal and vertical aliasing spectra in frequency domain. Thanks to 2-D RGBX square-shaped subpixel arrangement, all the horizontal, vertical, diagonal and anti-diagonal aliasing spectra merely contain low-frequency information, indicating that subpixel-based decimation with RGBX square-shaped panel is more effective in retaining original high frequency details than RGB vertical stripe subpixel arrangement.",
"title": ""
},
{
"docid": "6a6191695c948200658ad6020f21f203",
"text": "Given a random pair of images, an arbitrary style transfer method extracts the feel from the reference image to synthesize an output based on the look of the other content image. Recent arbitrary style transfer methods transfer second order statistics from reference image onto content image via a multiplication between content image features and a transformation matrix, which is computed from features with a pre-determined algorithm. These algorithms either require computationally expensive operations, or fail to model the feature covariance and produce artifacts in synthesized images. Generalized from these methods, in this work, we derive the form of transformation matrix theoretically and present an arbitrary style transfer approach that learns the transformation matrix with a feed-forward network. Our algorithm is highly efficient yet allows a flexible combination of multi-level styles while preserving content affinity during style transfer process. We demonstrate the effectiveness of our approach on four tasks: artistic style transfer, video and photo-realistic style transfer as well as domain adaptation, including comparisons with the stateof-the-art methods.",
"title": ""
},
{
"docid": "3df76261ff7981794e9c3d1332efe023",
"text": "The complete sequence of the 16,569-base pair human mitochondrial genome is presented. The genes for the 12S and 16S rRNAs, 22 tRNAs, cytochrome c oxidase subunits I, II and III, ATPase subunit 6, cytochrome b and eight other predicted protein coding genes have been located. The sequence shows extreme economy in that the genes have none or only a few noncoding bases between them, and in many cases the termination codons are not coded in the DNA but are created post-transcriptionally by polyadenylation of the mRNAs.",
"title": ""
},
{
"docid": "4cc52c8b6065d66472955dff9200b71f",
"text": "Over the past few years there has been an increasing focus on the development of features for resource management within the Linux kernel. The addition of the fair group scheduler has enabled the provisioning of proportional CPU time through the specification of group weights. Since the scheduler is inherently workconserving in nature, a task or a group can consume excess CPU share in an otherwise idle system. There are many scenarios where this extra CPU share can cause unacceptable utilization or latency. CPU bandwidth provisioning or limiting approaches this problem by providing an explicit upper bound on usage in addition to the lower bound already provided by shares. There are many enterprise scenarios where this functionality is useful. In particular are the cases of payper-use environments, and latency provisioning within non-homogeneous environments. This paper details the requirements behind this feature, the challenges involved in incorporating into CFS (Completely Fair Scheduler), and the future development road map for this feature. 1 CPU as a manageable resource Before considering the aspect of bandwidth provisioning let us first review some of the basic existing concepts currently arbitrating entity management within the scheduler. There are two major scheduling classes within the Linux CPU scheduler, SCHED_RT and SCHED_NORMAL. When runnable, entities from the former, the real-time scheduling class, will always be elected to run over those from the normal scheduling class. Prior to v2.6.24, the scheduler had no notion of any entity larger than that of single task1. The available management APIs reflected this and the primary control of bandwidth available was nice(2). In v2.6.24, the completely fair scheduler (CFS) was merged, replacing the existing SCHED_NORMAL scheduling class. This new design delivered weight based scheduling of CPU bandwidth, enabling arbitrary partitioning. This allowed support for group scheduling to be added, managed using cgroups through the CPU controller sub-system. This support allows for the flexible creation of scheduling groups, allowing the fraction of CPU resources received by a group of tasks to be arbitrated as a whole. The addition of this support has been a major step in scheduler development, enabling Linux to align more closely with enterprise requirements for managing this resouce. The hierarchies supported by this model are flexible, and groups may be nested within groups. Each group entity’s bandwidth is provisioned using a corresponding shares attribute which defines its weight. Similarly, the nice(2) API was subsumed to control the weight of an individual task entity. Figure 1 shows the hierarchical groups that might be created in a typical university server to differentiate CPU bandwidth between users such as professors, students, and different departments. One way to think about shares is that it provides lowerbound provisioning. When CPU bandwidth is scheduled at capacity, all runnable entities will receive bandwidth in accordance with the ratio of their share weight. It’s key to observe here that not all entities may be runnable 1Recall that under Linux any kernel-backed thread is considered individual task entity, there is no typical notion of a process in scheduling context.",
"title": ""
},
{
"docid": "c2802496761276ddc99949f8c5667bbc",
"text": "A long-term goal of AI is to produce agents that can learn a diversity of skills throughout their lifetimes and continuously improve those skills via experience. A longstanding obstacle towards that goal is catastrophic forgetting, which is when learning new information erases previously learned information. Catastrophic forgetting occurs in artificial neural networks (ANNs), which have fueled most recent advances in AI. A recent paper proposed that catastrophic forgetting in ANNs can be reduced by promoting modularity, which can limit forgetting by isolating task information to specific clusters of nodes and connections (functional modules). While the prior work did show that modular ANNs suffered less from catastrophic forgetting, it was not able to produce ANNs that possessed task-specific functional modules, thereby leaving the main theory regarding modularity and forgetting untested. We introduce diffusion-based neuromodulation, which simulates the release of diffusing, neuromodulatory chemicals within an ANN that can modulate (i.e. up or down regulate) learning in a spatial region. On the simple diagnostic problem from the prior work, diffusion-based neuromodulation 1) induces task-specific learning in groups of nodes and connections (task-specific localized learning), which 2) produces functional modules for each subtask, and 3) yields higher performance by eliminating catastrophic forgetting. Overall, our results suggest that diffusion-based neuromodulation promotes task-specific localized learning and functional modularity, which can help solve the challenging, but important problem of catastrophic forgetting.",
"title": ""
},
{
"docid": "eebf03df49eb4a99f61d371e059ef43e",
"text": "In theoretical cognitive science, there is a tension between highly structured models whose parameters have a direct psychological interpretation and highly complex, general-purpose models whose parameters and representations are difficult to interpret. The former typically provide more insight into cognition but the latter often perform better. This tension has recently surfaced in the realm of educational data mining, where a deep learning approach to estimating student proficiency, termed deep knowledge tracing or DKT [17], has demonstrated a stunning performance advantage over the mainstay of the field, Bayesian knowledge tracing or BKT [3].",
"title": ""
},
{
"docid": "0cb2c9d4f7c54450bddd84eed70ed403",
"text": "The well-known Mori-Zwanzig theory tells us that model reduction leads to memory effect. For a long time, modeling the memory effect accurately and efficiently has been an important but nearly impossible task in developing a good reduced model. In this work, we explore a natural analogy between recurrent neural networks and the Mori-Zwanzig formalism to establish a systematic approach for developing reduced models with memory. Two training models-a direct training model and a dynamically coupled training model-are proposed and compared. We apply these methods to the Kuramoto-Sivashinsky equation and the Navier-Stokes equation. Numerical experiments show that the proposed method can produce reduced model with good performance on both short-term prediction and long-term statistical properties. In science and engineering, many high-dimensional dynamical systems are too complicated to solve in detail. Nor is it necessary since usually we are only interested in a small subset of the variables representing the gross behavior of the system. Therefore, it is useful to develop reduced models which can approximate the variables of interest without solving the full system. This is the celebrated model reduction problem. Even though model reduction has been widely explored in many fields, to this day there is still a lack of systematic and reliable methodologies for model reduction. One has to rely on uncontrolled approximations in order to move things forward. On the other hand, there is in principle a rather solid starting point, the Mori-Zwanzig (M-Z) theory, for performing model reduction [1], [2]. In M-Z, the effect of unresolved variables on resolved ones is represented as a memory and a noise term, giving rise to the so-called generalized Langevin equation (GLE). Solving the GLE accurately is almost equivalent to solving the full system, because the memory kernel and noise terms contain the full information for the unresolved variables. This means that the M-Z theory does not directly lead to a reduction of complexity or the computational cost. However, it does provide a starting point for making approximations. In this regard, we mention in particular the t-model proposed by Chorin et al [3]. In [4] reduced models of the viscous Burgers equation and 3-dimensional Navier-Stokes equation were developed by analytically approximating the memory kernel in the GLE using the trapezoidal integration scheme. Li and E [5] developed approximate boundary conditions for molecular dynamics using linear approximation of the M-Z formalism. In [6], auxiliary variables are used to deal with the non-Markovian dynamics of the GLE. Despite all of these efforts, it is fair to say that there is still a lack of systematic and reliable procedure for approximating the GLE. In fact, dealing with the memory terms explicitly does not seem to be a promising approach for deriving systematic and reliable approximations to the GLE. ∗The Program in Applied and Computational Mathematics, Princeton University, Princeton, NJ 08544, USA †Department of Mechanics and Aerospace Engineering, Southern University of Science and Technology, Shenzhen 518055, Peoples Republic of China ‡Beijing Institute of Big Data Research, Beijing, 100871, P.R. China 1 ar X iv :1 80 8. 04 25 8v 1 [ cs .L G ] 1 0 A ug 2 01 8 One of the most successful approaches for representing memory effects has been the recurrent neural networks (RNN) in machine learning. Indeed there is a natural analogy between RNN and M-Z. 
The hidden states in RNN can be viewed as a reduced representation of the unresolved variables in M-Z. We can then view RNN as a way of performing dimension reduction in the space of the unresolved variables. In this paper, we explore the possibility of performing model reduction using RNNs. We will limit ourselves to the situation when the original model is in the form of a conservative partial differential equation (PDE), the reduced model is an averaged version of the original PDE. The crux of the matter is then the accurate representation of the unresolved flux term. We propose two kinds of models. In the first kind, the unresolved flux terms in the equation are learned from data. This flux model is then used in the averaged equation to form the reduced model. We call this the direct training model. A second approach, which we call the coupled training model, is to train the neural network together with the averaged equation. From the viewpoint of machine learning, the objective in the direct training model is to fit the unresolved flux. The objective in the coupled training model is to fit the resolved variables (the averaged quantities). For application, we focus on the Kuramoto-Sivashinsky (K-S) equation and the Navier-Stokes (N-S) equation. The K-S equation writes as ∂u ∂t + 1 2 ∂u ∂x + ∂u ∂x2 + ∂u ∂x4 = 0, x ∈ R, t > 0; (1) u(x, t) = u(x+ L, t), u(x, 0) = g(x). (2) We are interested in a low-pass filtered solution of the K-S equation, ū, and want to develop a reduced system for ū. In general, ū can be written as the convolution of u with a low pass filter G(y):",
"title": ""
},
{
"docid": "e48f641ad2ca9a61611b48e1a6f82a52",
"text": "We present a methodology to design cavity-excited omega-bianisotropic metasurface (O-BMS) antennas capable of producing arbitrary radiation patterns, prescribed by antenna array theory. The method relies on previous work, in which we proved that utilizing the three O-BMS degrees of freedom, namely, electric and magnetic polarizabilities, and magnetoelectric coupling, any field transformation that obeys local power conservation can be implemented via passive lossless components. When the O-BMS acts as the top cover of a metallic cavity excited by a point source, this property allows optimization of the metasurface modal reflection coefficients to establish any desirable power profile on the aperture. Matching in this way the excitation profile to the target power profile corresponding to the desirable aperture fields allows emulation of arbitrary discrete antenna array radiation patterns. The resultant low-profile probed-fed cavity-excited O-BMS antennas offer a new means for meticulous pattern control, without requiring complex, expensive, and often lossy, feed networks.",
"title": ""
},
{
"docid": "7edfde7d7875d88702db2aabc4ac2883",
"text": "This paper proposes a novel approach to build integer multiplication circuits based on speculation, a technique which performs a faster-but occasionally wrong-operation resorting to a multi-cycle error correction circuit only in the rare case of error. The proposed speculative multiplier uses a novel speculative carry-save reduction tree using three steps: partial products recoding, partial products partitioning, speculative compression. The speculative tree uses speculative (m:2) counters, with m > 3, that are faster than a conventional tree using full-adders and half-adders. A technique to automatically choose the suitable speculative counters, taking into accounts both error probability and delay, is also presented in the paper. The speculative tree is completed with a fast speculative carry-propagate adder and an error correction circuit. We have synthesized speculative multipliers for several operand lengths using the UMC 65 nm library. Comparisons with conventional multipliers show that speculation is effective when high speed is required. Speculative multipliers allow reaching a higher speed compared with conventional counterparts and are also quite effective in terms of power dissipation, when a high speed operation is required.",
"title": ""
}
] |
scidocsrr
|
375936b0be0975ef5785ad0efeb91109
|
Design and Validation of a Reconfigurable Single Varactor-Tuned Reflectarray
|
[
{
"docid": "7760a3074983f36e385299706ed9a927",
"text": "A reflectarray antenna monolithically integrated with 90 RF MEMS switches has been designed and fabricated to achieve switching of the main beam. Aperture coupled microstrip patch antenna (ACMPA) elements are used to form a 10 × 10 element reconfigurable reflectarray antenna operating at 26.5 GHz. The change in the progressive phase shift between the elements is obtained by adjusting the length of the open ended transmission lines in the elements with the RF MEMS switches. The reconfigurable reflectarray is monolithically fabricated with the RF MEMS switches in an area of 42.46 cm2 using an in-house surface micromachining and wafer bonding process. The measurement results show that the main beam can be switched between broadside and 40° in the H-plane at 26.5 GHz.",
"title": ""
}
] |
[
{
"docid": "f4cc2848713439b162dc5fc255c336d2",
"text": "We consider the problem of waveform design for multiple input/multiple output (MIMO) radars, where the transmit waveforms are adjusted based on target and clutter statistics. A model for the radar returns which incorporates the transmit waveforms is developed. The target detection problem is formulated for that model. Optimal and suboptimal algorithms are derived for designing the transmit waveforms under different assumptions regarding the statistical information available to the detector. The performance of these algorithms is illustrated by computer simulation.",
"title": ""
},
{
"docid": "adcbc47e18f83745f776dec84d09559f",
"text": "Adaptive and flexible production systems require modular and reusable software especially considering their long-term life cycle of up to 50 years. SWMAT4aPS, an approach to measure Software Maturity for automated Production Systems is introduced. The approach identifies weaknesses and strengths of various companies’ solutions for modularity of software in the design of automated Production Systems (aPS). At first, a self-assessed questionnaire is used to evaluate a large number of companies concerning their software maturity. Secondly, we analyze PLC code, architectural levels, workflows and abilities to configure code automatically out of engineering information in four selected companies. In this paper, the questionnaire results from 16 German world-leading companies in machine and plant manufacturing and four case studies validating the results from the detailed analyses are introduced to prove the applicability of the approach and give a survey of the state of the art in industry. Keywords—factory automation, automated production systems, maturity, modularity, control software, Programmable Logic Controller.",
"title": ""
},
{
"docid": "3505170ccc81058b75e2073f8080b799",
"text": "Indoor Location Based Services (LBS), such as indoor navigation and tracking, still have to deal with both technical and non-technical challenges. For this reason, they have not yet found a prominent position in people’s everyday lives. Reliability and availability of indoor positioning technologies, the availability of up-to-date indoor maps, and privacy concerns associated with location data are some of the biggest challenges to their development. If these challenges were solved, or at least minimized, there would be more penetration into the user market. This paper studies the requirements of LBS applications, through a survey conducted by the authors, identifies the current challenges of indoor LBS, and reviews the available solutions that address the most important challenge, that of providing seamless indoor/outdoor positioning. The paper also looks at the potential of emerging solutions and the technologies that may help to handle this challenge.",
"title": ""
},
{
"docid": "70e82da805e5bb21d35d552afe68bc61",
"text": "The consumption of pomegranate juice (PJ), a rich source of antioxidant polyphenols, has grown tremendously due to its reported health benefits. Pomegranate extracts, which incorporate the major antioxidants found in pomegranates, namely, ellagitannins, have been developed as botanical dietary supplements to provide an alternative convenient form for consuming the bioactive polyphenols found in PJ. Despite the commercial availability of pomegranate extract dietary supplements, there have been no studies evaluating their safety in human subjects. A pomegranate ellagitannin-enriched polyphenol extract (POMx) was prepared for dietary supplement use and evaluated in two pilot clinical studies. Study 1 was designed for safety assessment in 64 overweight individuals with increased waist size. The subjects consumed either one or two POMx capsules per day providing 710 mg (435 mg of gallic acid equivalents, GAEs) or 1420 mg (870 mg of GAEs) of extracts, respectively, and placebo (0 mg of GAEs). Safety laboratory determinations, including complete blood count (CBC), chemistry, and urinalysis, were made at each of three visits. Study 2 was designed for antioxidant activity assessment in 22 overweight subjects by administration of two POMx capsules per day providing 1000 mg (610 mg of GAEs) of extract versus baseline measurements. Measurement of antioxidant activity as evidenced by thiobarbituric acid reactive substances (TBARS) in plasma were measured before and after POMx supplementation. There was evidence of antioxidant activity through a significant reduction in TBARS linked with cardiovascular disease risk. There were no serious adverse events in any subject studied at either site. These studies demonstrate the safety of a pomegranate ellagitannin-enriched polyphenol dietary supplement in humans and provide evidence of antioxidant activity in humans.",
"title": ""
},
{
"docid": "0b4f44030a922ba2c970c263583e8465",
"text": "BACKGROUND\nSmoking remains one of the few potentially preventable factors associated with low birthweight, preterm birth and perinatal death.\n\n\nOBJECTIVES\nTo assess the effects of smoking cessation programs implemented during pregnancy on the health of the fetus, infant, mother, and family.\n\n\nSEARCH STRATEGY\nWe searched the Cochrane Pregnancy and Childbirth Group trials register and the Cochrane Tobacco Addiction Group trials register (July 2003), MEDLINE (January 2002 to July 2003), EMBASE (January 2002 to July 2003), PsychLIT (January 2002 to July 2003), CINAHL (January 2002 to July 2003), and AUSTHEALTH (January 2002 to 2003). We contacted trial authors to locate additional unpublished data. We handsearched references of identified trials and recent obstetric journals.\n\n\nSELECTION CRITERIA\nRandomised and quasi-randomised trials of smoking cessation programs implemented during pregnancy.\n\n\nDATA COLLECTION AND ANALYSIS\nFour reviewers assessed trial quality and extracted data independently.\n\n\nMAIN RESULTS\nThis review included 64 trials. Fifty-one randomised controlled trials (20,931 women) and six cluster-randomised trials (over 7500 women) provided data on smoking cessation and/or perinatal outcomes. Despite substantial variation in the intensity of the intervention and the extent of reminders and reinforcement through pregnancy, there was an increase in the median intensity of both 'usual care' and interventions over time. There was a significant reduction in smoking in the intervention groups of the 48 trials included: (relative risk (RR) 0.94, 95% confidence interval (CI) 0.93 to 0.95), an absolute difference of six in 100 women continuing to smoke. The 36 trials with validated smoking cessation had a similar reduction (RR 0.94, 95% CI 0.92 to 0.95). Smoking cessation interventions reduced low birthweight (RR 0.81, 95% CI 0.70 to 0.94) and preterm birth (RR 0.84, 95% CI 0.72 to 0.98), and there was a 33 g (95% CI 11 g to 55 g) increase in mean birthweight. There were no statistically significant differences in very low birthweight, stillbirths, perinatal or neonatal mortality but these analyses had very limited power. One intervention strategy, rewards plus social support (two trials), resulted in a significantly greater smoking reduction than other strategies (RR 0.77, 95% CI 0.72 to 0.82). Five trials of smoking relapse prevention (over 800 women) showed no statistically significant reduction in relapse.\n\n\nREVIEWERS' CONCLUSIONS\nSmoking cessation programs in pregnancy reduce the proportion of women who continue to smoke, and reduce low birthweight and preterm birth. The pooled trials have inadequate power to detect reductions in perinatal mortality or very low birthweight.",
"title": ""
},
{
"docid": "0eb4a0cb4a40407aea3025e0a3e1b534",
"text": "Telling the story of \"Moana\" became one of the most ambitious things we've ever done at the Walt Disney Animation Studios. We felt a huge responsibility to properly celebrate the culture and mythology of the Pacific Islands, in an epic tale involving demigods, monsters, vast ocean voyages, beautiful lush islands, and a sweeping musical visit to the village and people of Motunui. Join us as we discuss our partnership with our Pacific Islands consultants, known as our \"Oceanic Story Trust,\" the research and development we pursued, and the tremendous efforts of our team of engineers, artists and storytellers who brought the world of \"Moana\" to life.",
"title": ""
},
{
"docid": "bd7a011f47fd48e19e2bbdb2f426ae1d",
"text": "In social networks, link prediction predicts missing links in current networks and new or dissolution links in future networks, is important for mining and analyzing the evolution of social networks. In the past decade, many works have been done about the link prediction in social networks. The goal of this paper is to comprehensively review, analyze and discuss the state-of-the-art of the link prediction in social networks. A systematical category for link prediction techniques and problems is presented. Then link prediction techniques and problems are analyzed and discussed. Typical applications of link prediction are also addressed. Achievements and roadmaps of some active research groups are introduced. Finally, some future challenges of the link prediction in social networks are discussed. 对社交网络中的链接预测研究现状进行系统回顾、分析和讨论, 并指出未来研究挑战. 在动态社交网络中, 链接预测是挖掘和分析网络演化的一项重要任务, 其目的是预测当前未知的链接以及未来链接的变化. 过去十余年中, 在社交网络链接预测问题上已有大量研究工作. 本文旨在对该问题的研究现状和趋势进行全面回顾、分析和讨论. 提出一种分类法组织链接预测技术和问题. 详细分析和讨论了链接预测的技术、问题和应用. 介绍了该问题的活跃研究组. 分析和讨论了社交网络链接预测研究的未来挑战.",
"title": ""
},
{
"docid": "065b0af0f1ed195ac90fa3ad041fa4c4",
"text": "We present CapWidgets, passive tangible controls for capacitive touch screens. CapWidgets bring back physical controls to off-the-shelf multi-touch surfaces as found in mobile phones and tablet computers. While the user touches the widget, the surface detects the capacitive marker on the widget's underside. We study the relative performance of this tangible interaction with direct multi-touch interaction and our experimental results show that user performance and preferences are not automatically in favor of tangible widgets and careful design is necessary to validate their properties.",
"title": ""
},
{
"docid": "3c455cc1c98d1379adefc80bd9d792b6",
"text": "Ferroptosis is a non-apoptotic form of cell death induced by small molecules in specific tumour types, and in engineered cells overexpressing oncogenic RAS. Yet, its relevance in non-transformed cells and tissues is unexplored and remains enigmatic. Here, we provide direct genetic evidence that the knockout of glutathione peroxidase 4 (Gpx4) causes cell death in a pathologically relevant form of ferroptosis. Using inducible Gpx4−/− mice, we elucidate an essential role for the glutathione/Gpx4 axis in preventing lipid-oxidation-induced acute renal failure and associated death. We furthermore systematically evaluated a library of small molecules for possible ferroptosis inhibitors, leading to the discovery of a potent spiroquinoxalinamine derivative called Liproxstatin-1, which is able to suppress ferroptosis in cells, in Gpx4−/− mice, and in a pre-clinical model of ischaemia/reperfusion-induced hepatic damage. In sum, we demonstrate that ferroptosis is a pervasive and dynamic form of cell death, which, when impeded, promises substantial cytoprotection.",
"title": ""
},
{
"docid": "fa54ab205ab9c48a1c2e4a99131b614c",
"text": "The popularity of smart objects in our daily life fosters a new generation of applications under the umbrella of the Internet of Things (IoT). Such applications are built on a distributed network of heterogeneous context-aware devices, where localization is a key issue. The localization problem is further magnified by IoT challenges such as scalability, mobility and the heterogeneity of objects. In existing localization systems using RFID technology, there is a lack of systems that localize mobile tags using heterogeneous mobile readers in a distributed manner. In this paper, we propose the GOSSIPY system for localizing mobile RFID tags using a group of ad hoc heterogeneous mobile RFID readers. The system depends on cooperation of mobile readers through time-constrained interleaving processes. Readers in a neighborhood share interrogation information, estimate tag locations accordingly and employ both proactive and reactive protocols to ensure timely dissemination of location information. We evaluate the proposed system and present its performance through extensive simulation experiments using ns-3.",
"title": ""
},
{
"docid": "8dc50e5d77db50332c06684cac3e5c01",
"text": "BACKGROUND\nRhodiola rosea (R. rosea) is grown at high altitudes and northern latitudes. Due to its purported adaptogenic properties, it has been studied for its performance-enhancing capabilities in healthy populations and its therapeutic properties in a number of clinical populations. To systematically review evidence of efficacy and safety of R. rosea for physical and mental fatigue.\n\n\nMETHODS\nSix electronic databases were searched to identify randomized controlled trials (RCTs) and controlled clinical trials (CCTs), evaluating efficacy and safety of R. rosea for physical and mental fatigue. Two reviewers independently screened the identified literature, extracted data and assessed risk of bias for included studies.\n\n\nRESULTS\nOf 206 articles identified in the search, 11 met inclusion criteria for this review. Ten were described as RCTs and one as a CCT. Two of six trials examining physical fatigue in healthy populations report R. rosea to be effective as did three of five RCTs evaluating R. rosea for mental fatigue. All of the included studies exhibit either a high risk of bias or have reporting flaws that hinder assessment of their true validity (unclear risk of bias).\n\n\nCONCLUSION\nResearch regarding R. rosea efficacy is contradictory. While some evidence suggests that the herb may be helpful for enhancing physical performance and alleviating mental fatigue, methodological flaws limit accurate assessment of efficacy. A rigorously-designed well reported RCT that minimizes bias is needed to determine true efficacy of R. rosea for fatigue.",
"title": ""
},
{
"docid": "b181559966c55d90741f62e645b7d2f7",
"text": "BACKGROUND AND AIMS\nPsychological stress is associated with inflammatory bowel disease [IBD], but the nature of this relationship is complex. At present, there is no simple tool to screen for stress in IBD clinical practice or assess stress repeatedly in longitudinal studies. Our aim was to design a single-question 'stressometer' to rapidly measure stress and validate this in IBD patients.\n\n\nMETHODS\nIn all, 304 IBD patients completed a single-question 'stressometer'. This was correlated with stress as measured by the Depression Anxiety Stress Scales [DASS-21], quality of life, and disease activity. Test-retest reliability was assessed in 31 patients who completed the stressometer and the DASS-21 on two occasions 4 weeks apart.\n\n\nRESULTS\nStressometer levels correlated with the DASS-21 stress dimension in both Crohn's disease [CD] (Spearman's rank correlation coefficient [rs] 0.54; p < 0.001) and ulcerative colitis [UC] [rs 0.59; p < 0.001]. Stressometer levels were less closely associated with depression and anxiety [rs range 0.36 to 0.49; all p-values < 0.001]. Stressometer scores correlated with all four Short Health Scale quality of life dimensions in both CD and UC [rs range 0.35 to 0.48; all p-values < 0.001] and with disease activity in Crohn's disease [rs 0.46; p < 0.001] and ulcerative colitis [rs 0.20; p = 0.02]. Responsiveness was confirmed with a test-retest correlation of 0.43 [p = 0.02].\n\n\nCONCLUSIONS\nThe stressometer is a simple, valid, and responsive measure of psychological stress in IBD patients and may be a useful patient-reported outcome measure in future IBD clinical and research assessments.",
"title": ""
},
{
"docid": "644936acfe1f9ffa0b5f3e8751015d86",
"text": "The use of electromagnetic induction lamps without electrodes has increased because of their long life and energy efficiency. The control of the ignition and luminosity of the lamp is provided by an electronic ballast. Beyond that, the electronic ballast also provides a power factor correction, allowing the minimizing of the lamps impact on the quality of service of the electrical network. The electronic ballast includes several blocks, namely a bridge rectifier, a power factor correcting circuit (PFC), an asymmetric half-bridge inverter with a resonant filter on the inverter output, and a circuit to control the conduction time ot the ballast transistors. Index Terms – SEPIC, PFC, electrodeless lamp, ressonant filter,",
"title": ""
},
{
"docid": "ee561508df2500fbd86f9f4dfe6726bd",
"text": "Cutaneous manifestations are a diagnostic criterion of Ehlers-Danlos syndrome, hypermobility type (EDS-HT) and joint hypermobility syndrome (JHS). These two conditions, originally considered different disorders, are now accepted as clinically indistinguishable and often segregate as a single-familial trait. EDS-HT and JHS are still exclusion diagnoses not supported by any specific laboratory test. Accuracy of clinical diagnosis is, therefore, crucial for appropriate patients' classification and management, but it is actually hampered by the low consistency of many applied criteria including the cutaneous one. We report on mucocutaneous findings in 277 patients with JHS/EDS-HT with both sexes and various ages. Sixteen objective and five anamnestic items were selected and ascertained in two specialized outpatient clinics. Feature rates were compared by sex and age by a series of statistical tools. Data were also used for a multivariate correspondence analysis with the attempt to identify non-causal associations of features depicting recognizable phenotypic clusters. Our findings identified a few differences between sexes and thus indicated an attenuated sexual dimorphism for mucocutaneous features in JHS/EDS-HT. Ten features showed significantly distinct rates at different ages and this evidence corroborated the concept of an evolving phenotype in JHS/EDS-HT also affecting the skin. Multivariate correspondence analysis identified three relatively discrete phenotypic profiles, which may represent the cutaneous counterparts of the three disease phases previously proposed for JHS/EDS-HT. These findings could be used for revising the cutaneous criterion in a future consensus for the clinical diagnosis of JHS/EDS-HT.",
"title": ""
},
{
"docid": "e3459bb93bb6f7af75a182472bb42b3e",
"text": "We consider the algorithmic problem of selecting a set of target nodes that cause the biggest activation cascade in a network. In case when the activation process obeys the diminishing return property, a simple hill-climbing selection mechanism has been shown to achieve a provably good performance. Here we study models of influence propagation that exhibit critical behavior and where the property of diminishing returns does not hold. We demonstrate that in such systems the structural properties of networks can play a significant role. We focus on networks with two loosely coupled communities and show that the double-critical behavior of activation spreading in such systems has significant implications for the targeting strategies. In particular, we show that simple strategies that work well for homogenous networks can be overly suboptimal and suggest simple modification for improving the performance by taking into account the community structure.",
"title": ""
},
{
"docid": "d3d100398f51e0e87728b77a5e3ee1b8",
"text": "Morality is among the most sophisticated features of human judgement, behaviour and, ultimately, mind. An individual who behaves immorally may violate ethical rules and civil rights, and may threaten others' individual liberty, sometimes becoming violent and aggressive. In recent years, neuroscience has shown a growing interest in human morality, and has advanced our understanding of the cognitive and emotional processes involved in moral decisions, their anatomical substrates and the neurology of abnormal moral behaviour. In this article, we review research findings that have provided a key insight into the functional and clinical neuroanatomy of the brain areas involved in normal and abnormal moral behaviour. The 'moral brain' consists of a large functional network including both cortical and subcortical anatomical structures. Because morality is a complex process, some of these brain structures share their neural circuits with those controlling other behavioural processes, such as emotions and theory of mind. Among the anatomical structures implicated in morality are the frontal, temporal and cingulate cortices. The prefrontal cortex regulates activity in subcortical emotional centres, planning and supervising moral decisions, and when its functionality is altered may lead to impulsive aggression. The temporal lobe is involved in theory of mind and its dysfunction is often implicated in violent psychopathy. The cingulate cortex mediates the conflict between the emotional and the rational components of moral reasoning. Other important structures contributing to moral behaviour include the subcortical nuclei such as the amygdala, hippocampus and basal ganglia. Brain areas participating in moral processing can be influenced also by genetic, endocrine and environmental factors. Hormones can modulate moral behaviour through their effects on the brain. Finally, genetic polymorphisms can predispose to aggressivity and violence, arguing for a genetic-based predisposition to morality. Because abnormal moral behaviour can arise from both functional and structural brain abnormalities that should be diagnosed and treated, the neurology of moral behaviour has potential implications for clinical practice and raises ethical concerns. Last, since research has developed several neuromodulation techniques to improve brain dysfunction (deep brain stimulation, transcranial magnetic stimulation and transcranial direct current stimulation), knowing more about the 'moral brain' might help to develop novel therapeutic strategies for neurologically based abnormal moral behaviour.",
"title": ""
},
{
"docid": "9f8c05f7825067ca86caa16547f709e7",
"text": "We consider video object cut as an ensemble of frame-level background-foreground object classifiers which fuses information across frames and refine their segmentation results in a collaborative and iterative manner. Our approach addresses the challenging issues of modeling of background with dynamic textures and segmentation of foreground objects from cluttered scenes. We construct patch-level bag-of-words background models to effectively capture the background motion and texture dynamics. We propose a foreground salience graph (FSG) to characterize the similarity of an image patch to the bag-of-words background models in the temporal domain and to neighboring image patches in the spatial domain. We incorporate this similarity information into a graph-cut energy minimization framework for foreground object segmentation. The background-foreground classification results at neighboring frames are fused together to construct a foreground probability map to update the graph weights. The resulting object shapes at neighboring frames are also used as constraints to guide the energy minimization process during graph cut. Our extensive experimental results and performance comparisons over a diverse set of challenging videos with dynamic scenes, including the new Change Detection Challenge Dataset, demonstrate that the proposed ensemble video object cut method outperforms various state-of-the-art algorithms.",
"title": ""
},
{
"docid": "5fe8142a25953d5f168adc21e621175a",
"text": "In order to reduce the security risk of a commercial aircraft, passengers are not allowed to take certain items in their carry-on baggage. For this reason, human operators are trained to detect prohibited items using a manually controlled baggage screening process. The inspection process, however, is highly complex as hazardous items are very difficult to detect when placed in close packed bags, superimposed by other objects, and/or rotated showing an unrecognizable profile. In this paper, we review certain advances achieved by our research group in this field. Our methodology is based on multiple view analysis, because it can be a powerful tool for examining complex objects in cases in which uncertainty can lead to misinterpretation. In our approach, multiple views (taken from fixed points of view, or using an active vision approach in which the best views are automated selected) are analyzed in the detection of regular objects. In order to illustrate the effectiveness of the proposed method, experimental results on recognizing guns, razor blades, pins, clips and springs in baggage inspection are presented achieving around 90% accuracy. We believe that it would be possible to design an automated aid in a target detection task using the proposed algorithm.",
"title": ""
},
{
"docid": "384a321db8cdf144dc9a0be88930424b",
"text": "We introduce a method for evaluating the relevance of all visible components of a Web search results page, in the context of that results page. Contrary to Cranfield-style evaluation methods, our approach recognizes that a user's initial search interaction is with the result page produced by a search system, not the landing pages linked from it. Our key contribution is that the method allows us to investigate aspects of component relevance that are difficult or impossible to judge in isolation. Such contextual aspects include component-level information redundancy and cross-component coherence. We report on how the method complements traditional document relevance measurement and its support for comparative relevance assessment across multiple search engines. We also study possible issues with applying the method, including brand presentation effects, inter-judge agreement, and comparisons with document-based relevance judgments. Our findings show this is a useful method for evaluating the dominant user experience in interacting with search systems.",
"title": ""
}
] |
scidocsrr
|
1858e8fa3f0ff4249bd007abf7679481
|
The effectiveness of mindfulness based programs in reducing stress experienced by nurses in adult hospital settings: a systematic review of quantitative evidence protocol.
|
[
{
"docid": "e4628211d0d2657db387c093228e9b9b",
"text": "BACKGROUND\nMindfulness-based stress reduction (MBSR) is a clinically standardized meditation that has shown consistent efficacy for many mental and physical disorders. Less attention has been given to the possible benefits that it may have in healthy subjects. The aim of the present review and meta-analysis is to better investigate current evidence about the efficacy of MBSR in healthy subjects, with a particular focus on its benefits for stress reduction.\n\n\nMATERIALS AND METHODS\nA literature search was conducted using MEDLINE (PubMed), the ISI Web of Knowledge, the Cochrane database, and the references of retrieved articles. The search included articles written in English published prior to September 2008, and identified ten, mainly low-quality, studies. Cohen's d effect size between meditators and controls on stress reduction and spirituality enhancement values were calculated.\n\n\nRESULTS\nMBSR showed a nonspecific effect on stress reduction in comparison to an inactive control, both in reducing stress and in enhancing spirituality values, and a possible specific effect compared to an intervention designed to be structurally equivalent to the meditation program. A direct comparison study between MBSR and standard relaxation training found that both treatments were equally able to reduce stress. Furthermore, MBSR was able to reduce ruminative thinking and trait anxiety, as well as to increase empathy and self-compassion.\n\n\nCONCLUSIONS\nMBSR is able to reduce stress levels in healthy people. However, important limitations of the included studies as well as the paucity of evidence about possible specific effects of MBSR in comparison to other nonspecific treatments underline the necessity of further research.",
"title": ""
}
] |
[
{
"docid": "460a296de1bd13378d71ce19ca5d807a",
"text": "Many books discuss applications of data mining. For financial data analysis and financial modeling, see Benninga and Czaczkes [BC00] and Higgins [Hig03]. For retail data mining and customer relationship management, see books by Berry and Linoff [BL04] and Berson, Smith, and Thearling [BST99], and the article by Kohavi [Koh01]. For telecommunication-related data mining, see the book by Mattison [Mat97]. Chen, Hsu, and Dayal [CHD00] reported their work on scalable telecommunication tandem traffic analysis under a data warehouse/OLAP framework. For bioinformatics and biological data analysis, there are a large number of introductory references and textbooks. An introductory overview of bioinformatics for computer scientists was presented by Cohen [Coh04]. Recent textbooks on bioinformatics include Krane and Raymer [KR03], Jones and Pevzner [JP04], Durbin, Eddy, Krogh and Mitchison [DEKM98], Setubal and Meidanis [SM97], Orengo, Jones, and Thornton [OJT03], and Pevzner [Pev03]. Summaries of biological data analysis methods and algorithms can also be found in many other books, such as Gusfield [Gus97], Waterman [Wat95], Baldi and Brunak [BB01], and Baxevanis and Ouellette [BO04]. There are many books on scientific data analysis, such as Grossman, Kamath, Kegelmeyer, et al. (eds.) [GKK01]. For geographic data mining, see the book edited by Miller and Han [MH01]. Valdes-Perez [VP99] discusses the principles of human-computer collaboration for knowledge discovery in science. For intrusion detection, see Barbará [Bar02] and Northcutt and Novak [NN02].",
"title": ""
},
{
"docid": "cf1e0d6a07674aa0b4c078550b252104",
"text": "Industry-practiced agile methods must become an integral part of a software engineering curriculum. It is essential that graduates of such programs seeking careers in industry understand and have positive attitudes toward agile principles. With this knowledge they can participate in agile teams and apply these methods with minimal additional training. However, learning these methods takes experience and practice, both of which are difficult to achieve in a direct manner within the constraints of an academic program. This paper presents a novel, immersive boot camp approach to learning agile software engineering concepts with LEGO® bricks as the medium. Students construct a physical product while inductively learning the basic principles of agile methods. The LEGO®-based approach allows for multiple iterations in an active learning environment. In each iteration, students inductively learn agile concepts through their experiences and mistakes. Subsequent iterations then ground these concepts, visibly leading to an effective process. We assessed this approach using a combination of quantitative and qualitative methods. Our assessment shows that the students demonstrated positive attitudes toward the boot-camp approach compared to lecture-based instruction. However, the agile boot camp did not have an effect on the students' recall on class tests when compared to their recall of concepts taught in lecture-based instruction.",
"title": ""
},
{
"docid": "66844a6bce975f8e3e32358f0e0d1fb7",
"text": "The recent advent of DNA sequencing technologies facilitates the use of genome sequencing data that provide means for more informative and precise classification and identification of members of the Bacteria and Archaea. Because the current species definition is based on the comparison of genome sequences between type and other strains in a given species, building a genome database with correct taxonomic information is of paramount need to enhance our efforts in exploring prokaryotic diversity and discovering novel species as well as for routine identifications. Here we introduce an integrated database, called EzBioCloud, that holds the taxonomic hierarchy of the Bacteria and Archaea, which is represented by quality-controlled 16S rRNA gene and genome sequences. Whole-genome assemblies in the NCBI Assembly Database were screened for low quality and subjected to a composite identification bioinformatics pipeline that employs gene-based searches followed by the calculation of average nucleotide identity. As a result, the database is made of 61 700 species/phylotypes, including 13 132 with validly published names, and 62 362 whole-genome assemblies that were identified taxonomically at the genus, species and subspecies levels. Genomic properties, such as genome size and DNA G+C content, and the occurrence in human microbiome data were calculated for each genus or higher taxa. This united database of taxonomy, 16S rRNA gene and genome sequences, with accompanying bioinformatics tools, should accelerate genome-based classification and identification of members of the Bacteria and Archaea. The database and related search tools are available at www.ezbiocloud.net/.",
"title": ""
},
{
"docid": "d470122d50dbb118ae9f3068998f8e14",
"text": "Tumor heterogeneity presents a challenge for inferring clonal evolution and driver gene identification. Here, we describe a method for analyzing the cancer genome at a single-cell nucleotide level. To perform our analyses, we first devised and validated a high-throughput whole-genome single-cell sequencing method using two lymphoblastoid cell line single cells. We then carried out whole-exome single-cell sequencing of 90 cells from a JAK2-negative myeloproliferative neoplasm patient. The sequencing data from 58 cells passed our quality control criteria, and these data indicated that this neoplasm represented a monoclonal evolution. We further identified essential thrombocythemia (ET)-related candidate mutations such as SESN2 and NTRK1, which may be involved in neoplasm progression. This pilot study allowed the initial characterization of the disease-related genetic architecture at the single-cell nucleotide level. Further, we established a single-cell sequencing method that opens the way for detailed analyses of a variety of tumor types, including those with high genetic complex between patients.",
"title": ""
},
{
"docid": "16560cdfe50fc908ae46abf8b82e620f",
"text": "While there seems to be a general agreement that next years' systems will include many processing cores, it is often overlooked that these systems will also include an increasing number of different cores (we already see dedicated units for graphics or network processing). Orchestrating the diversity of processing functionality is going to be a major challenge in the upcoming years, be it to optimize for performance or for minimal energy consumption.\n We expect field-programmable gate arrays (FPGAs or \"programmable hardware\") to soon play the role of yet another processing unit, found in commodity computers. It is clear that the new resource is going to be too precious to be ignored by database systems, but it is unclear how FPGAs could be integrated into a DBMS. With a focus on database use, this tutorial introduces into the emerging technology, demonstrates its potential, but also pinpoints some challenges that need to be addressed before FPGA-accelerated database systems can go mainstream. Attendees will gain an intuition of an FPGA development cycle, receive guidelines for a \"good\" FPGA design, but also learn the limitations that hardware-implemented database processing faces. Our more high-level ambition is to spur a broader interest in database processing on novel hardware technology.",
"title": ""
},
{
"docid": "08f45368b85de5e6036fd4309f7c7a05",
"text": "Inflammatory bowel disease (IBD) is a group of diseases characterized by inflammation of the small and large intestine and primarily includes ulcerative colitis and Crohn’s disease. Although the etiology of IBD is not fully understood, it is believed to result from the interaction of genetic, immunological, and environmental factors, including gut microbiota. Recent studies have shown a correlation between changes in the composition of the intestinal microbiota and IBD. Moreover, it has been suggested that probiotics and prebiotics influence the balance of beneficial and detrimental bacterial species, and thereby determine homeostasis versus inflammatory conditions. In this review, we focus on recent advances in the understanding of the role of prebiotics, probiotics, and synbiotics in functions of the gastrointestinal tract and the induction and maintenance of IBD remission. We also discuss the role of psychobiotics, which constitute a novel class of psychotropic agents that affect the central nervous system by influencing gut microbiota. (Inflamm Bowel Dis 2015;21:1674–1682)",
"title": ""
},
{
"docid": "8016e80e506dcbae5c85fdabf1304719",
"text": "We introduce globally normalized convolutional neural networks for joint entity classification and relation extraction. In particular, we propose a way to utilize a linear-chain conditional random field output layer for predicting entity types and relations between entities at the same time. Our experiments show that global normalization outperforms a locally normalized softmax layer on a benchmark dataset.",
"title": ""
},
{
"docid": "2545af6c324fa7fb0e766bf6d68dfd90",
"text": "Evidence of aberrant hypothalamic-pituitary-adrenocortical (HPA) activity in many psychiatric disorders, although not universal, has sparked long-standing interest in HPA hormones as biomarkers of disease or treatment response. HPA activity may be chronically elevated in melancholic depression, panic disorder, obsessive-compulsive disorder, and schizophrenia. The HPA axis may be more reactive to stress in social anxiety disorder and autism spectrum disorders. In contrast, HPA activity is more likely to be low in PTSD and atypical depression. Antidepressants are widely considered to inhibit HPA activity, although inhibition is not unanimously reported in the literature. There is evidence, also uneven, that the mood stabilizers lithium and carbamazepine have the potential to augment HPA measures, while benzodiazepines, atypical antipsychotics, and to some extent, typical antipsychotics have the potential to inhibit HPA activity. Currently, the most reliable use of HPA measures in most disorders is to predict the likelihood of relapse, although changes in HPA activity have also been proposed to play a role in the clinical benefits of psychiatric treatments. Greater attention to patient heterogeneity and more consistent approaches to assessing treatment effects on HPA function may solidify the value of HPA measures in predicting treatment response or developing novel strategies to manage psychiatric disease.",
"title": ""
},
{
"docid": "37a8ec11d92dd8a83d757fa27b8f4118",
"text": "Weed control is necessary in rice cultivation, but the excessive use of herbicide treatments has led to serious agronomic and environmental problems. Suitable site-specific weed management (SSWM) is a solution to address this problem while maintaining the rice production quality and quantity. In the context of SSWM, an accurate weed distribution map is needed to provide decision support information for herbicide treatment. UAV remote sensing offers an efficient and effective platform to monitor weeds thanks to its high spatial resolution. In this work, UAV imagery was captured in a rice field located in South China. A semantic labeling approach was adopted to generate the weed distribution maps of the UAV imagery. An ImageNet pre-trained CNN with residual framework was adapted in a fully convolutional form, and transferred to our dataset by fine-tuning. Atrous convolution was applied to extend the field of view of convolutional filters; the performance of multi-scale processing was evaluated; and a fully connected conditional random field (CRF) was applied after the CNN to further refine the spatial details. Finally, our approach was compared with the pixel-based-SVM and the classical FCN-8s. Experimental results demonstrated that our approach achieved the best performance in terms of accuracy. Especially for the detection of small weed patches in the imagery, our approach significantly outperformed other methods. The mean intersection over union (mean IU), overall accuracy, and Kappa coefficient of our method were 0.7751, 0.9445, and 0.9128, respectively. The experiments showed that our approach has high potential in accurate weed mapping of UAV imagery.",
"title": ""
},
{
"docid": "85736b2fd608e3d109ce0f3c46dda9ac",
"text": "The WHO (2001) recommends exclusive breast-feeding and delaying the introduction of solid foods to an infant's diet until 6 months postpartum. However, in many countries, this recommendation is followed by few mothers, and earlier weaning onto solids is a commonly reported global practice. Therefore, this prospective, observational study aimed to assess compliance with the WHO recommendation and examine weaning practices, including the timing of weaning of infants, and to investigate the factors that predict weaning at ≤ 12 weeks. From an initial sample of 539 pregnant women recruited from the Coombe Women and Infants University Hospital, Dublin, 401 eligible mothers were followed up at 6 weeks and 6 months postpartum. Quantitative data were obtained on mothers' weaning practices using semi-structured questionnaires and a short dietary history of the infant's usual diet at 6 months. Only one mother (0.2%) complied with the WHO recommendation to exclusively breastfeed up to 6 months. Ninety-one (22.6%) infants were prematurely weaned onto solids at ≤ 12 weeks with predictive factors after adjustment, including mothers' antenatal reporting that infants should be weaned onto solids at ≤ 12 weeks, formula feeding at 12 weeks and mothers' reporting of the maternal grandmother as the principal source of advice on infant feeding. Mothers who weaned their infants at ≤ 12 weeks were more likely to engage in other sub-optimal weaning practices, including the addition of non-recommended condiments to their infants' foods. Provision of professional advice and exploring antenatal maternal misperceptions are potential areas for targeted interventions to improve compliance with the recommended weaning practices.",
"title": ""
},
{
"docid": "80fe141d88740955f189e8e2bf4c2d89",
"text": "Predictions concerning development, interrelations, and possible independence of working memory, inhibition, and cognitive flexibility were tested in 325 participants (roughly 30 per age from 4 to 13 years and young adults; 50% female). All were tested on the same computerized battery, designed to manipulate memory and inhibition independently and together, in steady state (single-task blocks) and during task-switching, and to be appropriate over the lifespan and for neuroimaging (fMRI). This is one of the first studies, in children or adults, to explore: (a) how memory requirements interact with spatial compatibility and (b) spatial incompatibility effects both with stimulus-specific rules (Simon task) and with higher-level, conceptual rules. Even the youngest children could hold information in mind, inhibit a dominant response, and combine those as long as the inhibition required was steady-state and the rules remained constant. Cognitive flexibility (switching between rules), even with memory demands minimized, showed a longer developmental progression, with 13-year-olds still not at adult levels. Effects elicited only in Mixed blocks with adults were found in young children even in single-task blocks; while young children could exercise inhibition in steady state it exacted a cost not seen in adults, who (unlike young children) seemed to re-set their default response when inhibition of the same tendency was required throughout a block. The costs associated with manipulations of inhibition were greater in young children while the costs associated with increasing memory demands were greater in adults. Effects seen only in RT in adults were seen primarily in accuracy in young children. Adults slowed down on difficult trials to preserve accuracy; but the youngest children were impulsive; their RT remained more constant but at an accuracy cost on difficult trials. Contrary to our predictions of independence between memory and inhibition, when matched for difficulty RT correlations between these were as high as 0.8, although accuracy correlations were less than half that. Spatial incompatibility effects and global and local switch costs were evident in children and adults, differing only in size. Other effects (e.g., asymmetric switch costs and the interaction of switching rules and switching response-sites) differed fundamentally over age.",
"title": ""
},
{
"docid": "0a0cc3c3d3cd7e7c3e8b409554daa5a3",
"text": "Purpose: We investigate the extent of voluntary disclosures in UK higher education institutions’ (HEIs) annual reports and examine whether internal governance structures influence disclosure in the period following major reform and funding constraints. Design/methodology/approach: We adopt a modified version of Coy and Dixon’s (2004) public accountability index, referred to in this paper as a public accountability and transparency index (PATI), to measure the extent of voluntary disclosures in 130 UK HEIs’ annual reports. Informed by a multitheoretical framework drawn from public accountability, legitimacy, resource dependence and stakeholder perspectives, we propose that the characteristics of governing and executive structures in UK universities influence the extent of their voluntary disclosures. Findings: We find a large degree of variability in the level of voluntary disclosures by universities and an overall relatively low level of PATI (44%), particularly with regards to the disclosure of teaching/research outcomes. We also find that audit committee quality, governing board diversity, governor independence, and the presence of a governance committee are associated with the level of disclosure. Finally, we find that the interaction between executive team characteristics and governance variables enhances the level of voluntary disclosures, thereby providing support for the continued relevance of a ‘shared’ leadership in the HEIs’ sector towards enhancing accountability and transparency in HEIs. Research limitations/implications: In spite of significant funding cuts, regulatory reforms and competitive challenges, the level of voluntary disclosure by UK HEIs remains low. Whilst the role of selected governance mechanisms and ‘shared leadership’ in improving disclosure, is asserted, the varying level and selective basis of the disclosures across the surveyed HEIs suggest that the public accountability motive is weaker relative to the other motives underpinned by stakeholder, legitimacy and resource dependence perspectives. Originality/value: This is the first study which explores the association between HEI governance structures, managerial characteristics and the level of disclosure in UK HEIs.",
"title": ""
},
{
"docid": "727a97b993098aa1386e5bfb11a99d4b",
"text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance.",
"title": ""
},
{
"docid": "fe23c80ef28f59066b6574e9c0f8578b",
"text": "Received: 1 September 2008 Revised: 30 May 2009 2nd Revision: 10 October 2009 3rd Revision: 17 December 2009 4th Revision: 28 September 2010 Accepted: 1 November 2010 Abstract This paper applies the technology acceptance model to explore the digital divide and transformational government (t-government) in the United States. Successful t-government is predicated on citizen adoption and usage of e-government services. The contribution of this research is to enhance our understanding of the factors associated with the usage of e-government services among members of a community on the unfortunate side of the divide. A questionnaire was administered to members, of a techno-disadvantaged public housing community and neighboring households, who partook in training or used the community computer lab. The results indicate that perceived access barriers and perceived ease of use (PEOU) are significantly associated with usage, while perceived usefulness (PU) is not. Among the demographic characteristics, educational level, employment status, and household income all have a significant impact on access barriers and employment is significantly associated with PEOU. Finally, PEOU is significantly related to PU. Overall, the results emphasize that t-government cannot cross the digital divide without accompanying employment programs and programs that enhance citizens’ ease in using such services. European Journal of Information Systems (2011) 20, 308–328. doi:10.1057/ejis.2010.64; published online 28 December 2010",
"title": ""
},
{
"docid": "e9676faf7e8d03c64fdcf6aa5e09b008",
"text": "In this paper, a novel subspace method called diagonal principal component analysis (DiaPCA) is proposed for face recognition. In contrast to standard PCA, DiaPCA directly seeks the optimal projective vectors from diagonal face images without image-to-vector transformation. While in contrast to 2DPCA, DiaPCA reserves the correlations between variations of rows and those of columns of images. Experiments show that DiaPCA is much more accurate than both PCA and 2DPCA. Furthermore, it is shown that the accuracy can be further improved by combining DiaPCA with 2DPCA.",
"title": ""
},
{
"docid": "d1c4e0da79ceb8893f63aa8ea7c8041c",
"text": "This paper describes the GOLD (Generic Obstacle and Lane Detection) system, a stereo vision-based hardware and software architecture developed to increment road safety of moving vehicles: it allows to detect both generic obstacles (without constraints on symmetry or shape) and the lane position in a structured environment (with painted lane markings). It has been implemented on the PAPRICA system and works at a rate of 10 Hz.",
"title": ""
},
{
"docid": "7a1f409eea5e0ff89b51fe0a26d6db8d",
"text": "A multi-agent system consisting of <inline-formula><tex-math notation=\"LaTeX\">$N$</tex-math></inline-formula> agents is considered. The problem of steering each agent from its initial position to a desired goal while avoiding collisions with obstacles and other agents is studied. This problem, referred to as the <italic>multi-agent collision avoidance problem</italic>, is formulated as a differential game. Dynamic feedback strategies that approximate the feedback Nash equilibrium solutions of the differential game are constructed and it is shown that, provided certain assumptions are satisfied, these guarantee that the agents reach their targets while avoiding collisions.",
"title": ""
},
{
"docid": "3e0a731c76324ad0cea438a1d9907b68",
"text": "ance. In addition, the salt composition of the soil water influences the composition of cations on the exchange Due in large measure to the prodigious research efforts of Rhoades complex of soil particles, which influences soil permeand his colleagues at the George E. Brown, Jr., Salinity Laboratory ability and tilth, depending on salinity level and exover the past two decades, soil electrical conductivity (EC), measured changeable cation composition. Aside from decreasing using electrical resistivity and electromagnetic induction (EM), is among the most useful and easily obtained spatial properties of soil crop yield and impacting soil hydraulics, salinity can that influences crop productivity. As a result, soil EC has become detrimentally impact ground water, and in areas where one of the most frequently used measurements to characterize field tile drainage occurs, drainage water can become a disvariability for application to precision agriculture. The value of spatial posal problem as demonstrated in the southern San measurements of soil EC to precision agriculture is widely acknowlJoaquin Valley of central California. edged, but soil EC is still often misunderstood and misinterpreted. From a global perspective, irrigated agriculture makes To help clarify misconceptions, a general overview of the application an essential contribution to the food needs of the world. of soil EC to precision agriculture is presented. The following areas While only 15% of the world’s farmland is irrigated, are discussed with particular emphasis on spatial EC measurements: roughly 35 to 40% of the total supply of food and fiber a brief history of the measurement of soil salinity with EC, the basic comes from irrigated agriculture (Rhoades and Lovetheories and principles of the soil EC measurement and what it actually day, 1990). However, vast areas of irrigated land are measures, an overview of the measurement of soil salinity with various threatened by salinization. Although accurate worldEC measurement techniques and equipment (specifically, electrical wide data are not available, it is estimated that roughly resistivity with the Wenner array and EM), examples of spatial EC half of all existing irrigation systems (totaling about 250 surveys and their interpretation, applications and value of spatial measurements of soil EC to precision agriculture, and current and million ha) are affected by salinity and waterlogging future developments. Precision agriculture is an outgrowth of techno(Rhoades and Loveday, 1990). logical developments, such as the soil EC measurement, which faciliSalinity within irrigated soils clearly limits productivtate a spatial understanding of soil–water–plant relationships. The ity in vast areas of the USA and other parts of the world. future of precision agriculture rests on the reliability, reproducibility, It is generally accepted that the extent of salt-affected and understanding of these technologies. soil is increasing. In spite of the fact that salinity buildup on irrigated lands is responsible for the declining resource base for agriculture, we do not know the exact T predominant mechanism causing the salt accuextent to which soils in our country are salinized, the mulation in irrigated agricultural soils is evapotransdegree to which productivity is being reduced by salinpiration. 
The salt contained in the irrigation water is ity, the increasing or decreasing trend in soil salinity left behind in the soil as the pure water passes back to development, and the location of contributory sources the atmosphere through the processes of evaporation of salt loading to ground and drainage waters. Suitable and plant transpiration. The effects of salinity are manisoil inventories do not exist and until recently, neither fested in loss of stand, reduced rates of plant growth, did practical techniques to monitor salinity or assess the reduced yields, and in severe cases, total crop failure (Rhoades and Loveday, 1990). Salinity limits water upAbbreviations: EC, electrical conductivity; ECa, apparent soil electritake by plants by reducing the osmotic potential and cal conductivity; ECe, electrical conductivity of the saturated soil paste thus the total soil water potential. Salinity may also extract; ECw, electrical conductivity of soil water; EM, electromagnetic cause specific ion toxicity or upset the nutritional balinduction; EMavg, the geometric mean of the vertical and horizontal electromagnetic induction readings; EMh, electromagnetic induction measurement in the horizontal coil-mode configuration; EMv, electroUSDA-ARS, George E. Brown, Jr., Salinity Lab., 450 West Big magnetic induction measurement in the vertical coil-mode configuraSprings Rd., Riverside, CA 92507-4617. Received 23 Apr. 2001. *Cortion; GIS, geographical information system; GPS, global positioning responding author (dcorwin@ussl.ars.usda.gov). systems; NPS, nonpoint source; SP, saturation percentage; TDR, time domain reflectometry; w, total volumetric water content. Published in Agron. J. 95:455–471 (2003).",
"title": ""
},
{
"docid": "8c301956112a9bfb087ae9921d80134a",
"text": "This paper presents an operation analysis of a high frequency three-level (TL) PWM inverter applied for an induction heating applications. The feature of TL inverter is to achieve zero-voltage switching (ZVS) at above the resonant frequency. The circuit has been modified from the full-bridge inverter to reach high-voltage with low-harmonic output. The device voltage stresses are controlled in a half of the DC input voltage. The prototype operated between 70 and 78 kHz at the DC voltage rating of 580 V can supply the output power rating up to 3000 W. The iron has been heated and hardened at the temperature up to 800degC. In addition, the experiments have been successfully tested and compared with the simulations",
"title": ""
}
] |
scidocsrr
|
e3743032e23258c4b1874b76ac169833
|
Cloud computing for Internet of Things & sensing based applications
|
[
{
"docid": "00614d23a028fe88c3f33db7ace25a58",
"text": "Cloud Computing and The Internet of Things are the two hot points in the Internet field. The application of the two new technologies is in hot discussion and research, but quite less on the field of agriculture and forestry. Thus, in this paper, we analyze the study and application of Cloud Computing and The Internet of Things on agriculture and forestry. Then we put forward an idea that making a combination of the two techniques and analyze the feasibility, applications and future prospect of the combination.",
"title": ""
}
] |
[
{
"docid": "9490ca6447448c0aba919871b1fa9791",
"text": "The study's goal was to examine the socially responsible power use in the context of ethical leadership as an explanatory mechanism of the ethical leadership-follower outcomes link. Drawing on the attachment theory (Bowlby, 1969/1982), we explored a power-based process model, which assumes that a leader's personal power is an intervening variable in the relationship between ethical leadership and follower outcomes, while incorporating the moderating role of followers' moral identity in this transformation process. The results of a two-wave field study (N = 235) that surveyed employees and a scenario experiment (N = 169) fully supported the proposed (moderated) mediation models, as personal power mediated the positive relationship between ethical leadership and a broad range of tested follower outcomes (i.e., leader effectiveness, follower extra effort, organizational commitment, job satisfaction, and work engagement), as well as the interactive effects of ethical leadership and follower moral identity on these follower outcomes. Theoretical and practical implications are discussed.",
"title": ""
},
{
"docid": "e9a154af3a041cadc5986b7369ce841b",
"text": "Metrological characterization of high-performance ΔΣ Analog-to-Digital Converters (ADCs) poses severe challenges to reference instrumentation and standard methods. In this paper, most important tests related to noise and effective resolution, nonlinearity, environmental uncertainty, and stability are proved and validated in the specific case of a high-performance ΔΣ ADC. In particular, tests setups are proposed and discussed and the definitions used to assess the performance are clearly stated in order to identify procedures and guidelines for high-resolution ADCs characterization. An experimental case study of the high-performance ΔΣ ADC DS-22 developed at CERN is reported and discussed by presenting effective alternative test setups. Experimental results show that common characterization methods by the IEEE standards 1241 [1] and 1057 [2] cannot be used and alternative strategies turn out to be effective.",
"title": ""
},
{
"docid": "012bcbc6b5e7b8aaafd03f100489961c",
"text": "DNA is an attractive medium to store digital information. Here we report a storage strategy, called DNA Fountain, that is highly robust and approaches the information capacity per nucleotide. Using our approach, we stored a full computer operating system, movie, and other files with a total of 2.14 × 106 bytes in DNA oligonucleotides and perfectly retrieved the information from a sequencing coverage equivalent to a single tile of Illumina sequencing. We also tested a process that can allow 2.18 × 1015 retrievals using the original DNA sample and were able to perfectly decode the data. Finally, we explored the limit of our architecture in terms of bytes per molecule and obtained a perfect retrieval from a density of 215 petabytes per gram of DNA, orders of magnitude higher than previous reports.",
"title": ""
},
{
"docid": "65dd0e6e143624c644043507cf9465a7",
"text": "Let G \" be a non-directed graph having n vertices, without parallel edges and slings. Let the vertices of Gn be denoted by F 1 ,. . ., Pn. Let v(P j) denote the valency of the point P i and put (0. 1) V(G,) = max v(Pj). 1ninn Let E(G.) denote the number of edges of Gn. Let H d (n, k) denote the set of all graphs Gn for which V (G n) = k and the diameter D (Gn) of which is-d, In the present paper we shall investigate the quantity (0 .2) Thus we want to determine the minimal number N such that there exists a graph having n vertices, N edges and diameter-d and the maximum of the valencies of the vertices of the graph is equal to k. To help the understanding of the problem let us consider the following interpretation. Let be given in a country n airports ; suppose we want to plan a network of direct flights between these airports so that the maximal number of airports to which a given airport can be connected by a direct flight should be equal to k (i .e. the maximum of the capacities of the airports is prescribed), further it should be possible to fly from every airport to any other by changing the plane at most d-1 times ; what is the minimal number of flights by which such a plan can be realized? For instance, if n = 7, k = 3, d= 2 we have F2 (7, 3) = 9 and the extremal graph is shown by Fig. 1. The problem of determining Fd (n, k) has been proposed and discussed recently by two of the authors (see [1]). In § 1 we give a short summary of the results of the paper [1], while in § 2 and 3 we give some new results which go beyond those of [1]. Incidentally we solve a long-standing problem about the maximal number of edges of a graph not containing a cycle of length 4. In § 4 we mention some unsolved problems. Let us mention that our problem can be formulated also in terms of 0-1 matrices as follows : Let M=(a il) be a symmetrical n by n zero-one matrix such 2",
"title": ""
},
{
"docid": "c3ee2beee84cd32e543c4b634062eeac",
"text": "In this paper, a hierarchical feature extraction method is proposed for image recognition. The key idea of the proposed method is to extract an effective feature, called local neural response (LNR), of the input image with nontrivial discrimination and invariance properties by alternating between local coding and maximum pooling operation. The local coding, which is carried out on the locally linear manifold, can extract the salient feature of image patches and leads to a sparse measure matrix on which maximum pooling is carried out. The maximum pooling operation builds the translation invariance into the model. We also show that other invariant properties, such as rotation and scaling, can be induced by the proposed model. In addition, a template selection algorithm is presented to reduce computational complexity and to improve the discrimination ability of the LNR. Experimental results show that our method is robust to local distortion and clutter compared with state-of-the-art algorithms.",
"title": ""
},
{
"docid": "9dd8ab91929e3c4e7ddd90919eb79d22",
"text": "–Graphs are currently becoming more important in modeling and demonstrating information. In the recent years, graph mining is becoming an interesting field for various processes such as chemical compounds, protein structures, social networks and computer networks. One of the most important concepts in graph mining is to find frequent subgraphs. The major advantage of utilizing subgraphs is speeding up the search for similarities, finding graph specifications and graph classifications. In this article we classify the main algorithms in the graph mining field. Some fundamental algorithms are reviewed and categorized. Some issues for any algorithm are graph representation, search strategy, nature of input and completeness of output that are discussed in this article. Keywords––Frequent subgraph, Graph mining, Graph mining algorithms",
"title": ""
},
{
"docid": "dff0752eace9db08e25904a844533338",
"text": "The authors investigated whether accuracy in identifying deception from demeanor in high-stake lies is specific to those lies or generalizes to other high-stake lies. In Experiment 1, 48 observers judged whether 2 different groups of men were telling lies about a mock theft (crime scenario) or about their opinion (opinion scenario). The authors found that observers' accuracy in judging deception in the crime scenario was positively correlated with their accuracy in judging deception in the opinion scenario. Experiment 2 replicated the results of Experiment 1, as well as P. Ekman and M. O'Sullivan's (1991) finding of a positive correlation between the ability to detect deceit and the ability to identify micromomentary facial expressions of emotion. These results show that the ability to detect high-stake lies generalizes across high-stake situations and is most likely due to the presence of emotional clues that betray deception in high-stake lies.",
"title": ""
},
{
"docid": "88615ac1788bba148f547ca52bffc473",
"text": "This paper describes a probabilistic framework for faithful reproduction of dynamic facial expressions on a synthetic face model with MPEG-4 facial animation parameters (FAPs) while achieving very low bitrate in data transmission. The framework consists of a coupled Bayesian network (BN) to unify the facial expression analysis and synthesis into one coherent structure. At the analysis end, we cast the FAPs and facial action coding system (FACS) into a dynamic Bayesian network (DBN) to account for uncertainties in FAP extraction and to model the dynamic evolution of facial expressions. At the synthesizer, a static BN reconstructs the FAPs and their intensity. The two BNs are connected statically through a data stream link. Using the coupled BN to analyze and synthesize the dynamic facial expressions is the major novelty of this work. The novelty brings about several benefits. First, very low bitrate (9 bytes per frame) in data transmission can be achieved. Second, a facial expression is inferred through both spatial and temporal inference so that the perceptual quality of animation is less affected by the misdetected FAPs. Third, more realistic looking facial expressions can be reproduced by modelling the dynamics of human expressions.",
"title": ""
},
{
"docid": "b477893ecccb3aee1de3b6f12f3186ca",
"text": "Obesity is a global health problem characterized as an increase in the mass of adipose tissue. Adipogenesis is one of the key pathways that increases the mass of adipose tissue, by which preadipocytes mature into adipocytes through cell differentiation. Peroxisome proliferator-activated receptor γ (PPARγ), the chief regulator of adipogenesis, has been acutely investigated as a molecular target for natural products in the development of anti-obesity treatments. In this review, the regulation of PPARγ expression by natural products through inhibition of CCAAT/enhancer-binding protein β (C/EBPβ) and the farnesoid X receptor (FXR), increased expression of GATA-2 and GATA-3 and activation of the Wnt/β-catenin pathway were analyzed. Furthermore, the regulation of PPARγ transcriptional activity associated with natural products through the antagonism of PPARγ and activation of Sirtuin 1 (Sirt1) and AMP-activated protein kinase (AMPK) were discussed. Lastly, regulation of mitogen-activated protein kinase (MAPK) by natural products, which might regulate both PPARγ expression and PPARγ transcriptional activity, was summarized. Understanding the role natural products play, as well as the mechanisms behind their regulation of PPARγ activity is critical for future research into their therapeutic potential for fighting obesity.",
"title": ""
},
{
"docid": "e7ae72f3bb2c24259dd122bff0f5d04e",
"text": "In this paper we introduce a novel linear precoding technique. The approach used for the design of the precoding matrix is general and the resulting algorithm can address several optimization criteria with an arbitrary number of antennas at the user terminals. We have achieved this by designing the precoding matrices in two steps. In the first step we minimize the overlap of the row spaces spanned by the effective channel matrices of different users using a new cost function. In the next step, we optimize the system performance with respect to specific optimization criteria assuming a set of parallel single- user MIMO channels. By combining the closed form solution with Tomlinson-Harashima precoding we reach the maximum sum-rate capacity when the total number of antennas at the user terminals is less or equal to the number of antennas at the base station. By iterating the closed form solution with appropriate power loading we are able to extract the full diversity in the system and reach the maximum sum-rate capacity in case of high multi-user interference. Joint processing over a group of multi-user MIMO channels in different frequency and time slots yields maximum diversity regardless of the level of multi-user interference.",
"title": ""
},
{
"docid": "d9617ed486a1b5488beab08652f736e0",
"text": "The paper shows how Combinatory Categorial Grammar (CCG) can be adapted to take advantage of the extra resourcesensitivity provided by the Categorial Type Logic framework. The resulting reformulation, Multi-Modal CCG, supports lexically specified control over the applicability of combinatory rules, permitting a universal rule component and shedding the need for language-specific restrictions on rules. We discuss some of the linguistic motivation for these changes, define the Multi-Modal CCG system and demonstrate how it works on some basic examples. We furthermore outline some possible extensions and address computational aspects of Multi-Modal CCG.",
"title": ""
},
{
"docid": "869cc834f84bc88a258b2d9d9d4f3096",
"text": "Obesity is a multifactorial disease characterized by an excessive weight for height due to an enlarged fat deposition such as adipose tissue, which is attributed to a higher calorie intake than the energy expenditure. The key strategy to combat obesity is to prevent chronic positive impairments in the energy equation. However, it is often difficult to maintain energy balance, because many available foods are high-energy yielding, which is usually accompanied by low levels of physical activity. The pharmaceutical industry has invested many efforts in producing antiobesity drugs; but only a lipid digestion inhibitor obtained from an actinobacterium is currently approved and authorized in Europe for obesity treatment. This compound inhibits the activity of pancreatic lipase, which is one of the enzymes involved in fat digestion. In a similar way, hundreds of extracts are currently being isolated from plants, fungi, algae, or bacteria and screened for their potential inhibition of pancreatic lipase activity. Among them, extracts isolated from common foodstuffs such as tea, soybean, ginseng, yerba mate, peanut, apple, or grapevine have been reported. Some of them are polyphenols and saponins with an inhibitory effect on pancreatic lipase activity, which could be applied in the management of the obesity epidemic.",
"title": ""
},
{
"docid": "bb774fed5d447fdc181cb712c74925c2",
"text": "Test-driven development is a discipline that helps professional software developers ship clean, flexible code that works, on time. In this article, the author discusses how test-driven development can help software developers achieve a higher degree of professionalism",
"title": ""
},
{
"docid": "c94d01ee0aaa8a70ce4e3441850316a6",
"text": "Convolutional neural networks (CNNs) are inherently subject to invariable filters that can only aggregate local inputs with the same topological structures. It causes that CNNs are allowed to manage data with Euclidean or grid-like structures (e.g., images), not ones with non-Euclidean or graph structures (e.g., traffic networks). To broaden the reach of CNNs, we develop structure-aware convolution to eliminate the invariance, yielding a unified mechanism of dealing with both Euclidean and non-Euclidean structured data. Technically, filters in the structure-aware convolution are generalized to univariate functions, which are capable of aggregating local inputs with diverse topological structures. Since infinite parameters are required to determine a univariate function, we parameterize these filters with numbered learnable parameters in the context of the function approximation theory. By replacing the classical convolution in CNNs with the structure-aware convolution, Structure-Aware Convolutional Neural Networks (SACNNs) are readily established. Extensive experiments on eleven datasets strongly evidence that SACNNs outperform current models on various machine learning tasks, including image classification and clustering, text categorization, skeleton-based action recognition, molecular activity detection, and taxi flow prediction.",
"title": ""
},
{
"docid": "5d5014506bdf0c16b566edc8bba3b730",
"text": "This paper surveys recent literature in the domain of machine learning techniques and artificial intelligence used to predict stock market movements. Artificial Neural Networks (ANNs) are identified to be the dominant machine learning technique in stock market prediction area. Keywords— Artificial Neural Networks (ANNs); Stock Market; Prediction",
"title": ""
},
{
"docid": "8b2c83868c16536910e7665998b2d87e",
"text": "Nowadays organizations turn to any standard procedure to gain a competitive advantage. If sustainable, competitive advantage can bring about benefit to the organization. The aim of the present study was to introduce competitive advantage as well as to assess the impacts of the balanced scorecard as a means to measure the performance of organizations. The population under study included employees of organizations affiliated to the Social Security Department in North Khorasan Province, of whom a total number of 120 employees were selected as the participants in the research sample. Two researcher-made questionnaires with a 5-point Likert scale were used to measure the competitive advantage and the balanced scorecard. Besides, Cronbach's alpha coefficient was used to measure the reliability of the instruments that was equal to 0.74 and 0.79 for competitive advantage and the balanced scorecard, respectively. The data analysis was performed using the structural equation modeling and the results indicated the significant and positive impact of the implementation of the balanced scorecard on the sustainable competitive advantage. © 2015 AESS Publications. All Rights Reserved.",
"title": ""
},
{
"docid": "50edb29954ee6cbb3e38055d7b01e99a",
"text": "Security has becoming an important issue everywhere. Home security is becoming necessary nowadays as the possibilities of intrusion are increasing day by day. Safety from theft, leaking of raw gas and fire are the most important requirements of home security system for people. A traditional home security system gives the signals in terms of alarm. However, the GSM (Global System for Mobile communications) based security systems provides enhanced security as whenever a signal from sensor occurs, a text message is sent to a desired number to take necessary actions. This paper suggests two methods for home security system. The first system uses web camera. Whenever there is a motion in front of the camera, it gives security alert in terms of sound and a mail is delivered to the owner. The second method sends SMS which uses GSMGPS Module (sim548c) and Atmega644p microcontroller, sensors, relays and buzzers.",
"title": ""
},
{
"docid": "0b79fc06afe7782e7bdcdbd96cc1c1a0",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/annals.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.",
"title": ""
},
{
"docid": "92ff221950df6e7fd266926c305200cd",
"text": "The authors provide a didactic treatment of nonlinear (categorical) principal components analysis (PCA). This method is the nonlinear equivalent of standard PCA and reduces the observed variables to a number of uncorrelated principal components. The most important advantages of nonlinear over linear PCA are that it incorporates nominal and ordinal variables and that it can handle and discover nonlinear relationships between variables. Also, nonlinear PCA can deal with variables at their appropriate measurement level; for example, it can treat Likert-type scales ordinally instead of numerically. Every observed value of a variable can be referred to as a category. While performing PCA, nonlinear PCA converts every category to a numeric value, in accordance with the variable's analysis level, using optimal quantification. The authors discuss how optimal quantification is carried out, what analysis levels are, which decisions have to be made when applying nonlinear PCA, and how the results can be interpreted. The strengths and limitations of the method are discussed. An example applying nonlinear PCA to empirical data using the program CATPCA (J. J. Meulman, W. J. Heiser, & SPSS, 2004) is provided.",
"title": ""
},
{
"docid": "981cbb9140570a6a6f3d4f4f49cd3654",
"text": "OBJECTIVES\nThe study sought to evaluate clinical outcomes in clinical practice with rhythm control versus rate control strategy for management of atrial fibrillation (AF).\n\n\nBACKGROUND\nRandomized trials have not demonstrated significant differences in stroke, heart failure, or mortality between rhythm and rate control strategies. The comparative outcomes in contemporary clinical practice are not well described.\n\n\nMETHODS\nPatients managed with a rhythm control strategy targeting maintenance of sinus rhythm were retrospectively compared with a strategy of rate control alone in a AF registry across various U.S. practice settings. Unadjusted and adjusted (inverse-propensity weighted) outcomes were estimated.\n\n\nRESULTS\nThe overall study population (N = 6,988) had a median of 74 (65 to 81) years of age, 56% were males, 77% had first detected or paroxysmal AF, and 68% had CHADS2 score ≥2. In unadjusted analyses, rhythm control was associated with lower all-cause death, cardiovascular death, first stroke/non-central nervous system systemic embolization/transient ischemic attack, or first major bleeding event (all p < 0.05); no difference in new onset heart failure (p = 0.28); and more frequent cardiovascular hospitalizations (p = 0.0006). There was no difference in the incidence of pacemaker, defibrillator, or cardiac resynchronization device implantations (p = 0.99). In adjusted analyses, there were no statistical differences in clinical outcomes between rhythm control and rate control treated patients (all p > 0.05); however, rhythm control was associated with more cardiovascular hospitalizations (hazard ratio: 1.24; 95% confidence interval: 1.10 to 1.39; p = 0.0003).\n\n\nCONCLUSIONS\nAmong patients with AF, rhythm control was not superior to rate control strategy for outcomes of stroke, heart failure, or mortality, but was associated with more cardiovascular hospitalizations.",
"title": ""
}
] |
scidocsrr
|
5ff6288bf1a883014805687745c56ca8
|
Effects of missing data in social networks
|
[
{
"docid": "236896835b48994d7737b9152c0e435f",
"text": "A network is said to show assortative mixing if the nodes in the network that have many connections tend to be connected to other nodes with many connections. Here we measure mixing patterns in a variety of networks and find that social networks are mostly assortatively mixed, but that technological and biological networks tend to be disassortative. We propose a model of an assortatively mixed network, which we study both analytically and numerically. Within this model we find that networks percolate more easily if they are assortative and that they are also more robust to vertex removal.",
"title": ""
}
] |
[
{
"docid": "4ba3ac9a0ef8f46fe92401843b1eaba7",
"text": "This paper explores gender-based differences in multimodal deception detection. We introduce a new large, gender-balanced dataset, consisting of 104 subjects with 520 different responses covering multiple scenarios, and perform an extensive analysis of different feature sets extracted from the linguistic, physiological, and thermal data streams recorded from the subjects. We describe a multimodal deception detection system, and show how the two genders achieve different detection rates for different individual and combined feature sets, with accuracy figures reaching 80%. Our experiments and results allow us to make interesting observations concerning the differences in the multimodal detection of deception in males and females.",
"title": ""
},
{
"docid": "7182921f825bd924be6e6441f1fa6433",
"text": "Word embeddings are increasingly being used as a tool to study word associations in specific corpora. However, it is unclear whether such embeddings reflect enduring properties of language or if they are sensitive to inconsequential variations in the source documents. We find that nearest-neighbor distances are highly sensitive to small changes in the training corpus for a variety of algorithms. For all methods, including specific documents in the training set can result in substantial variations. We show that these effects are more prominent for smaller training corpora. We recommend that users never rely on single embedding models for distance calculations, but rather average over multiple bootstrap samples, especially for small corpora.",
"title": ""
},
{
"docid": "4b6a4f9d91bc76c541f4879a1a684a3f",
"text": "Query auto-completion (QAC) is one of the most prominent features of modern search engines. The list of query candidates is generated according to the prefix entered by the user in the search box and is updated on each new key stroke. Query prefixes tend to be short and ambiguous, and existing models mostly rely on the past popularity of matching candidates for ranking. However, the popularity of certain queries may vary drastically across different demographics and users. For instance, while instagram and imdb have comparable popularities overall and are both legitimate candidates to show for prefix i, the former is noticeably more popular among young female users, and the latter is more likely to be issued by men.\n In this paper, we present a supervised framework for personalizing auto-completion ranking. We introduce a novel labelling strategy for generating offline training labels that can be used for learning personalized rankers. We compare the effectiveness of several user-specific and demographic-based features and show that among them, the user's long-term search history and location are the most effective for personalizing auto-completion rankers. We perform our experiments on the publicly available AOL query logs, and also on the larger-scale logs of Bing. The results suggest that supervised rankers enhanced by personalization features can significantly outperform the existing popularity-based base-lines, in terms of mean reciprocal rank (MRR) by up to 9%.",
"title": ""
},
{
"docid": "f0f16472cdb6b52b05d1d324e55da081",
"text": "We propose a new distributed algorithm for empirical risk minimization in machine learning. The algorithm is based on an inexact damped Newton method, where the inexact Newton steps are computed by a distributed preconditioned conjugate gradient method. We analyze its iteration complexity and communication efficiency for minimizing self-concordant empirical loss functions, and discuss the results for distributed ridge regression, logistic regression and binary classification with a smoothed hinge loss. In a standard setting for supervised learning, where the n data points are i.i.d. sampled and when the regularization parameter scales as 1/ √ n, we show that the proposed algorithm is communication efficient: the required round of communication does not increase with the sample size n, and only grows slowly with the number of machines.",
"title": ""
},
{
"docid": "6ab38099b989f1d9bdc504c9b50b6bbe",
"text": "Users' search tactics often appear naïve. Much research has endeavored to understand the rudimentary query typically seen in log analyses and user studies. Researchers have tested a number of approaches to supporting query development, including information literacy training and interaction design these have tried and often failed to induce users to use more complex search strategies. To further investigate this phenomenon, we combined established HCI methods with models from cultural studies, and observed customers' mediated searches for books in bookstores. Our results suggest that sophisticated search techniques demand mental models that many users lack.",
"title": ""
},
{
"docid": "7c5f2c92cb3d239674f105a618de99e0",
"text": "We consider the isolated spelling error correction problem as a specific subproblem of the more general string-to-string translation problem. In this context, we investigate four general string-to-string transformation models that have been suggested in recent years and apply them within the spelling error correction paradigm. In particular, we investigate how a simple ‘k-best decoding plus dictionary lookup’ strategy performs in this context and find that such an approach can significantly outdo baselines such as edit distance, weighted edit distance, and the noisy channel Brill and Moore model to spelling error correction. We also consider elementary combination techniques for our models such as language model weighted majority voting and center string combination. Finally, we consider real-world OCR post-correction for a dataset sampled from medieval Latin texts.",
"title": ""
},
{
"docid": "db8cd5dad5c3d3bda0f10f3369351bbd",
"text": "The massive diffusion of online social media allows for the rapid and uncontrolled spreading of conspiracy theories, hoaxes, unsubstantiated claims, and false news. Such an impressive amount of misinformation can influence policy preferences and encourage behaviors strongly divergent from recommended practices. In this paper, we study the statistical properties of viral misinformation in online social media. By means of methods belonging to Extreme Value Theory, we show that the number of extremely viral posts over time follows a homogeneous Poisson process, and that the interarrival times between such posts are independent and identically distributed, following an exponential distribution. Moreover, we characterize the uncertainty around the rate parameter of the Poisson process through Bayesian methods. Finally, we are able to derive the predictive posterior probability distribution of the number of posts exceeding a certain threshold of shares over a finite interval of time.",
"title": ""
},
{
"docid": "1df9ac95778bbe7ad750810e9b5a9756",
"text": "To characterize muscle synergy organization underlying multidirectional control of stance posture, electromyographic activity was recorded from 11 lower limb and trunk muscles of 7 healthy subjects while they were subjected to horizontal surface translations in 12 different, randomly presented directions. The latency and amplitude of muscle responses were quantified for each perturbation direction. Tuning curves for each muscle were examined to relate the amplitude of the muscle response to the direction of surface translation. The latencies of responses for the shank and thigh muscles were constant, regardless of perturbation direction. In contrast, the latencies for another thigh [tensor fascia latae (TFL)] and two trunk muscles [rectus abdominis (RAB) and erector spinae (ESP)] were either early or late, depending on the perturbation direction. These three muscles with direction-specific latencies may play different roles in postural control as prime movers or as stabilizers for different translation directions, depending on the timing of recruitment. Most muscle tuning curves were within one quadrant, having one direction of maximal activity, generally in response to diagonal surface translations. Two trunk muscles (RAB and ESP) and two lower limb muscles (semimembranosus and peroneus longus) had bipolar tuning curves, with two different directions of maximal activity, suggesting that these muscle can play different roles as part of different synergies, depending on translation direction. Muscle tuning curves tended to group into one of three regions in response to 12 different directions of perturbations. Two muscles [rectus femoris (RFM) and TFL] were maximally active in response to lateral surface translations. The remaining muscles clustered into one of two diagonal regions. The diagonal regions corresponded to the two primary directions of active horizontal force vector responses. Two muscles (RFM and adductor longus) were maximally active orthogonal to their predicted direction of maximal activity based on anatomic orientation. Some of the muscles in each of the synergic regions were not anatomic synergists, suggesting a complex central organization for recruitment of muscles. The results suggest that neither a simple reflex mechanism nor a fixed muscle synergy organization is adequate to explain the muscle activation patterns observed in this postural control task. Our results are consistent with a centrally mediated pattern of muscle latencies combined with peripheral influence on muscle magnitude. We suggest that a flexible continuum of muscle synergies that are modifiable in a task-dependent manner be used for equilibrium control in stance.",
"title": ""
},
{
"docid": "b0cba371bb9628ac96a9ae2bb228f5a9",
"text": "Graph-based recommendation approaches can model associations between users and items alongside additional contextual information. Recent studies demonstrated that representing features extracted from social media (SM) auxiliary data, like friendships, jointly with traditional users/items ratings in the graph, contribute to recommendation accuracy. In this work, we take a step further and propose an extended graph representation that includes socio-demographic and personal traits extracted from the content posted by the user on SM. Empirical results demonstrate that processing unstructured textual information collected from Twitter and representing it in structured form in the graph improves recommendation performance, especially in cold start conditions.",
"title": ""
},
{
"docid": "9a3a73f35b27d751f237365cc34c8b28",
"text": "The development of brain metastases in patients with advanced stage melanoma is common, but the molecular mechanisms responsible for their development are poorly understood. Melanoma brain metastases cause significant morbidity and mortality and confer a poor prognosis; traditional therapies including whole brain radiation, stereotactic radiotherapy, or chemotherapy yield only modest increases in overall survival (OS) for these patients. While recently approved therapies have significantly improved OS in melanoma patients, only a small number of studies have investigated their efficacy in patients with brain metastases. Preliminary data suggest that some responses have been observed in intracranial lesions, which has sparked new clinical trials designed to evaluate the efficacy in melanoma patients with brain metastases. Simultaneously, recent advances in our understanding of the mechanisms of melanoma cell dissemination to the brain have revealed novel and potentially therapeutic targets. In this review, we provide an overview of newly discovered mechanisms of melanoma spread to the brain, discuss preclinical models that are being used to further our understanding of this deadly disease and provide an update of the current clinical trials for melanoma patients with brain metastases.",
"title": ""
},
{
"docid": "08844c98f9d6b92f84d272516af64281",
"text": "This paper describes the synthesis of Dynamic Differential Logic to increase the resistance of FPGA implementations against Differential Power Analysis. The synthesis procedure is developed and a detailed description is given of how EDA tools should be used appropriately to implement a secure digital design flow. Compared with an existing technique to implement Dynamic Differential Logic on FPGA, the technique saves a factor 2 in slice utilization. Experimental results also indicate that a secure version of the AES encryption algorithm can now be implemented with a mere 50% increase in time delay and 90% increase in slice utilization when compared with a normal non-secure single ended implementation.",
"title": ""
},
{
"docid": "43c9afd57b35c2db2c285b9c0b79b81a",
"text": "We present SfSNet, an end-to-end learning framework for producing an accurate decomposition of an unconstrained image of a human face into shape, reflectance and illuminance. Our network is designed to reflect a physical lambertian rendering model. SfSNet learns from a mixture of labeled synthetic and unlabeled real world images. This allows the network to capture low frequency variations from synthetic images and high frequency details from real images through the photometric reconstruction loss. SfSNet consists of a new decomposition architecture with residual blocks that learns a complete separation of albedo and normal. This is used along with the original image to predict lighting. SfSNet produces significantly better quantitative and qualitative results than state-of-the-art methods for inverse rendering and independent normal and illumination estimation.",
"title": ""
},
{
"docid": "9309ce05609d1cbdadcdc89fe8937473",
"text": "There is an increase use of ontology-driven approaches to support requirements engineering (RE) activities, such as elicitation, analysis, specification, validation and management of requirements. However, the RE community still lacks a comprehensive understanding of how ontologies are used in RE process. Thus, the main objective of this work is to investigate and better understand how ontologies support RE as well as identify to what extent they have been applied to this field. In order to meet our goal, we conducted a systematic literature review (SLR) to identify the primary studies on the use of ontologies in RE, following a predefined review protocol. We then identified the main RE phases addressed, the requirements modelling styles that have been used in conjunction with ontologies, the types of requirements that have been supported by the use of ontologies and the ontology languages that have been adopted. We also examined the types of contributions reported and looked for evidences of the benefits of ontology-driven RE. In summary, the main findings of this work are: (1) there are empirical evidences of the benefits of using ontologies in RE activities both in industry and academy, specially for reducing ambiguity, inconsistency and incompleteness of requirements; (2) the majority of studies only partially address the RE process; (3) there is a great diversity of RE modelling styles supported by ontologies; (4) most studies addressed only functional requirements; (5) several studies describe the use/development of tools to support different types of ontology-driven RE approaches; (6) about half of the studies followed W3C recommendations on ontology-related languages; and (7) a great variety of RE ontologies were identified; nevertheless, none of them has been broadly adopted by the community. Finally, we conclude this work by showing several promising research opportunities that are quite important and interesting but underexplored in current research and practice.",
"title": ""
},
{
"docid": "ac8a620e752144e3f4e20c16efb56ebc",
"text": "or as ventricular fibrillation, the circulation must be restored promptly; otherwise anoxia will result in irreversible damage. There are two techniques that may be used to meet the emergency: one is to open the chest and massage the heart directly and the other is to accomplish the same end by a new method of closed-chest cardiac massage. The latter method is described in this communication. The closed-chest alternating current defibrillator ' that",
"title": ""
},
{
"docid": "bf11d9a1ef46b24f5d13dc119e715005",
"text": "This paper explores the relationship between the three beliefs about online shopping ie. perceived usefulness, perceived ease of use and perceived enjoyment and intention to shop online. A sample of 150 respondents was selected using a purposive sampling method whereby the respondents have to be Internet users to be included in the survey. A structured, self-administered questionnaire was used to elicit responses from these respondents. The findings indicate that perceived ease of use (β = 0.70, p<0.01) and perceived enjoyment (β = 0.32, p<0.05) were positively related to intention to shop online whereas perceived usefulness was not significantly related to intention to shop online. Furthermore, perceived ease of use (β = 0.78, p<0.01) was found to be a significant predictor of perceived usefulness. This goes to show that ease of use and enjoyment are the 2 main drivers of intention to shop online. Implications of the findings for developers are discussed further.",
"title": ""
},
{
"docid": "18c885e8cb799086219585e419140ba5",
"text": "Reaction-time and eye-fixation data are analyzed to investigate how people infer the kinematics of simple mechanical systems (pulley systems) from diagrams showing their static configuration. It is proposed that this mental animation process involves decomposing the representation of a pulley system into smaller units corresponding to the machine components and animating these components in a sequence corresponding to the causal sequence of events in the machine's operation. Although it is possible for people to make inferences against the chain of causality in the machine, these inferences are more difficult, and people have a preference for inferences in the direction of causality. The mental animation process reflects both capacity limitations and limitations of mechanical knowledge.",
"title": ""
},
{
"docid": "13c2c1a1bd4ff886f93d8f89a14e39e2",
"text": "One of the key elements in qualitative data analysis is the systematic coding of text (Strauss and Corbin 1990:57%60; Miles and Huberman 1994:56). Codes are the building blocks for theory or model building and the foundation on which the analyst’s arguments rest. Implicitly or explicitly, they embody the assumptions underlying the analysis. Given the context of the interdisciplinary nature of research at the Centers for Disease Control and Prevention (CDC), we have sought to develop explicit guidelines for all aspects of qualitative data analysis, including codebook development.",
"title": ""
},
{
"docid": "f41f4e3b27bda4b3000f3ab5ae9ef22a",
"text": "This paper, first analysis the performance of image segmentation techniques; K-mean clustering algorithm and region growing for cyst area extraction from liver images, then enhances the performance of K-mean by post-processing. The K-mean algorithm makes the clusters effectively. But it could not separate out the desired cluster (cyst) from the image. So, to enhance its performance for cyst region extraction, morphological opening-by-reconstruction is applied on the output of K-mean clustering algorithm. The results are presented both qualitatively and quantitatively, which demonstrate the superiority of enhanced K-mean as compared to standard K-mean and region growing algorithm.",
"title": ""
},
{
"docid": "c8d2092150e1e50232a5bc3847520d19",
"text": "Thermoregulation disorders are associated with Body temperature fluctuation. Both hyper- and hypothermia are evidence of an ongoing pathological process. Contralateral symmetry in the Body heat spread is considered normal, while asymmetry, if above a certain level, implies an underlying pathology. Infrared thermography (IRT) is employed in many medical fields including ophthalmology. The earliest attempts of eye surface temperature evaluation were made in the 19th century. Over the last 50 years, different authors have been using this method to assess ocular adnexa, however, the technique remains insufficiently studied. The reported IRT data is often contradictory, which may be due to heterogeneity (in terms of severity) of patient groups and disparities between research parameters.",
"title": ""
},
{
"docid": "af5a8f2811ff334d742f802c6c1b7833",
"text": "Kalman filter extensions are commonly used algorithms for nonlinear state estimation in time series. The structure of the state and measurement models in the estimation problem can be exploited to reduce the computational demand of the algorithms. We review algorithms that use different forms of structure and show how they can be combined. We show also that the exploitation of the structure of the problem can lead to improved accuracy of the estimates while reducing the computational load.",
"title": ""
}
] |
scidocsrr
|
f18f0915d7a78b25f326e41eb015a216
|
3D-Aided Face Recognition Robust to Expression and Pose Variations
|
[
{
"docid": "2d9b42c47dcf18ed83244c65384a8599",
"text": "A new 3D face database that includes a rich set of expressions, systematic variation of poses and different types of occlusions is presented in this paper. This database is unique from three aspects: i) the facial expressions are composed of judiciously selected subset of Action Units as well as the six basic emotions, and many actors/actresses are incorporated to obtain more realistic expression data; ii) a rich set of head pose variations are available; and iii) different types of face occlusions are included. Hence, this new database can be a very valuable resource for development and evaluation of algorithms on face recognition under adverse conditions and facial expression analysis as well as for facial expression synthesis.",
"title": ""
}
] |
[
{
"docid": "bf6c25593274cebad438a3f44f31f44a",
"text": "It has been observed that there is a great growth of the market share of PLB in developed countries. Earlier most of the people were using branded clothes, but now a days the companies have introduced their own private brands to increase their popularity and more profit. Companies are providing more discounts on private brands to get more customers. Retailers have not only customized and localized the products as per customer’s tastes and preference but also created PLBs’. At present customers are more intelligent and smart, they look for the product which gives them value, so today’s private label brands are more competitive, reasonable price and of better quality. The consumers prefer private label brands heavily because they can save money. The apparels are second most demanded product after FMCG. So, here we are focusing on the private label apparels brands. Big companies like Pantaloons, Wills Lifestyle, Reliance and many more retailers having their own brands. The researcher aimed to find and analyze the effect of various brand related attributes (Brand knowledge (brand image, brand awareness) on consumer’s purchase intention towards private label brands in apparels. Most of the customer purchase depends upon its brand image. The study is carried at some reputed stores of Ahmedabad like Pantaloons, Westside and Will Lifestyle. It tries to establish the relationship between brands related factors and their impact on consumers purchase",
"title": ""
},
{
"docid": "d2b7e61ecedf80f613d25c4f509ddaf6",
"text": "We present a new image editing method, particularly effective for sharpening major edges by increasing the steepness of transition while eliminating a manageable degree of low-amplitude structures. The seemingly contradictive effect is achieved in an optimization framework making use of L0 gradient minimization, which can globally control how many non-zero gradients are resulted in to approximate prominent structure in a sparsity-control manner. Unlike other edge-preserving smoothing approaches, our method does not depend on local features, but instead globally locates important edges. It, as a fundamental tool, finds many applications and is particularly beneficial to edge extraction, clip-art JPEG artifact removal, and non-photorealistic effect generation.",
"title": ""
},
{
"docid": "0fdc468347fc6c50767687d5364a098e",
"text": "We study a generalization of the setting of regenerating codes, motivated by applications to storage systems consisting of clusters of storage nodes. There are n clusters in total, with m nodes per cluster. A data file is coded and stored across the mn nodes, with each node storing α symbols. For availability of data, we demand that the file is retrievable by downloading the entire content from any subset of k clusters. Nodes represent entities that can fail, and here we distinguish between intra-cluster and inter-cluster bandwidth-costs during node repair. Node-repair is accomplished by downloading β symbols each from any set of d other clusters. The replacement-node also downloads content from any set of ` surviving nodes in the same cluster during the repair process. We identity the optimal trade-off between storage-overhead and inter-cluster (IC) repair-bandwidth under functional repair, and also present optimal exact-repair code constructions for a class of parameters. Our results imply that it is possible to simultaneously achieve both optimal storage overhead and optimal minimum IC bandwidth, for sufficiently large values of nodes per cluster. The simultaneous optimality comes at the expense of intra-cluster bandwidth, and we obtain lower bounds on the necessary intra-cluster repair-bandwidth. Simulation results based on random linear network codes suggest optimality of the bounds on intra-cluster repair-bandwidth.",
"title": ""
},
{
"docid": "b0e81e112b9aa7ebf653243f00b21f23",
"text": "Recent research indicates that toddlers and infants succeed at various non-verbal spontaneous-response false-belief tasks; here we asked whether toddlers would also succeed at verbal spontaneous-response false-belief tasks that imposed significant linguistic demands. We tested 2.5-year-olds using two novel tasks: a preferential-looking task in which children listened to a false-belief story while looking at a picture book (with matching and non-matching pictures), and a violation-of-expectation task in which children watched an adult 'Subject' answer (correctly or incorrectly) a standard false-belief question. Positive results were obtained with both tasks, despite their linguistic demands. These results (1) support the distinction between spontaneous- and elicited-response tasks by showing that toddlers succeed at verbal false-belief tasks that do not require them to answer direct questions about agents' false beliefs, (2) reinforce claims of robust continuity in early false-belief understanding as assessed by spontaneous-response tasks, and (3) provide researchers with new experimental tasks for exploring early false-belief understanding in neurotypical and autistic populations.",
"title": ""
},
{
"docid": "0c28741df3a9bf999f4abe7b840cfb26",
"text": "In this work, we analyze taxi-GPS traces collected in Lisbon, Portugal. We perform an exploratory analysis to visualize the spatiotemporal variation of taxi services; explore the relationships between pick-up and drop-off locations; and analyze the behavior in downtime (between the previous drop-off and the following pick-up). We also carry out the analysis of predictability of taxi trips for the next pick-up area type given history of taxi flow in time and space.",
"title": ""
},
{
"docid": "e96bf66f084be015b11a2d12f22fdabe",
"text": "This paper addresses the loop closure detection problem in slam, and presents a method for solving the problem using pairwise comparison of point clouds in both 2D and 3D. The point clouds are mathematically described using features that capture important geometric and statistical properties. The features are used as input to the machine learning algorithm AdaBoost, which is used to build a non-linear classifier capable of detecting loop closure from pairs of point clouds. Vantage point dependency in the detection process is eliminated by only using rotation invariant features, thus loop closure can be detected from arbitrary direction. The classifier is evaluated using publicly available data, and is shown to generalise well between environments. Detection rates of 66%, 63% and 53% for 0% false alarm rate are achieved for 2D outdoor data, 3D outdoor data and 3D indoor data, respectively. In both 2D and 3D, experiments are performed using publicly available data, showing that the proposed algorithm compares favourably to related work.",
"title": ""
},
{
"docid": "53d54256640089e41b676da7c28b65ff",
"text": "published a document in 2002 called the NSW Curriculum Framework for Children's Services: A practice of relationships (.pdf 1.4 MB). This document has some interesting perspectives of the role of child development and developmental norms. It is important to consider multiple A basic introduction to child development theories",
"title": ""
},
{
"docid": "ac168ff92c464cb90a9a4ca0eb5bfa5c",
"text": "Path computing is a new paradigm that generalizes the edge computing vision into a multi-tier cloud architecture deployed over the geographic span of the network. Path computing supports scalable and localized processing by providing storage and computation along a succession of datacenters of increasing sizes, positioned between the client device and the traditional wide-area cloud data-center. CloudPath is a platform that implements the path computing paradigm. CloudPath consists of an execution environment that enables the dynamic installation of light-weight stateless event handlers, and a distributed eventual consistent storage system that replicates application data on-demand. CloudPath handlers are small, allowing them to be rapidly instantiated on demand on any server that runs the CloudPath execution framework. In turn, CloudPath automatically migrates application data across the multiple datacenter tiers to optimize access latency and reduce bandwidth consumption.",
"title": ""
},
{
"docid": "1245c626f26dd7fe799d862b6f56a6af",
"text": "The emergence of cloud services brings new possibilities for constructing and using HPC platforms. However, while cloud services provide the flexibility and convenience of customized, pay-as-you-go parallel computing, multiple previous studies in the past three years have indicated that cloud-based clusters need a significant performance boost to become a competitive choice, especially for tightly coupled parallel applications.\n In this work, we examine the feasibility of running HPC applications in clouds. This study distinguishes itself from existing investigations in several ways: 1) We carry out a comprehensive examination of issues relevant to the HPC community, including performance, cost, user experience, and range of user activities. 2) We compare an Amazon EC2-based platform built upon its newly available HPC-oriented virtual machines with typical local cluster and supercomputer options, using benchmarks and applications with scale and problem size unprecedented in previous cloud HPC studies. 3) We perform detailed performance and scalability analysis to locate the chief limiting factors of the state-of-the-art cloud based clusters. 4) We present a case study on the impact of per-application parallel I/O system configuration uniquely enabled by cloud services. Our results reveal that though the scalability of EC2-based virtual clusters still lags behind traditional HPC alternatives, they are rapidly gaining in overall performance and cost-effectiveness, making them feasible candidates for performing tightly coupled scientific computing. In addition, our detailed benchmarking and profiling discloses and analyzes several problems regarding the performance and performance stability on EC2.",
"title": ""
},
{
"docid": "8fd43b39e748d47c02b66ee0d8eecc65",
"text": "One standing problem in the area of web-based e-learning is how to support instructional designers to effectively and efficiently retrieve learning materials, appropriate for their educational purposes. Learning materials can be retrieved from structured repositories, such as repositories of Learning Objects and Massive Open Online Courses; they could also come from unstructured sources, such as web hypertext pages. Platforms for distance education often implement algorithms for recommending specific educational resources and personalized learning paths to students. But choosing and sequencing the adequate learning materials to build adaptive courses may reveal to be quite a challenging task. In particular, establishing the prerequisite relationships among learning objects, in terms of prior requirements needed to understand and complete before making use of the subsequent contents, is a crucial step for faculty, instructional designers or automated systems whose goal is to adapt existing learning objects to delivery in new distance courses. Nevertheless, this information is often missing. In this paper, an innovative machine learning-based approach for the identification of prerequisites between text-based resources is proposed. A feature selection methodology allows us to consider the attributes that are most relevant to the predictive modeling problem. These features are extracted from both the input material and weak-taxonomies available on the web. Input data undergoes a Natural language process that makes finding patterns of interest more easy for the applied automated analysis. Finally, the prerequisite identification is cast to a binary statistical classification task. The accuracy of the approach is validated by means of experimental evaluations on real online coursers covering different subjects.",
"title": ""
},
{
"docid": "d505a0fe73296fe19f0f683773c9520d",
"text": "Abstractive text summarization is a complex task whose goal is to generate a concise version of a text without necessarily reusing the sentences from the original source, but still preserving the meaning and the key contents. In this position paper we address this issue by modeling the problem as a sequence to sequence learning and exploiting Recurrent Neural Networks (RNN). Moreover, we discuss the idea of combining RNNs and probabilistic models in a unified way in order to incorporate prior knowledge, such as linguistic features. We believe that this approach can obtain better performance than the state-of-the-art models for generating well-formed summaries.",
"title": ""
},
{
"docid": "81f5c17e5b0b52bb55a27733a198be51",
"text": "This paper uses the 'lens' of integrated and sustainable waste management (ISWM) to analyse the new data set compiled on 20 cities in six continents for the UN-Habitat flagship publication Solid Waste Management in the World's Cities. The comparative analysis looks first at waste generation rates and waste composition data. A process flow diagram is prepared for each city, as a powerful tool for representing the solid waste system as a whole in a comprehensive but concise way. Benchmark indicators are presented and compared for the three key physical components/drivers: public health and collection; environment and disposal; and resource recovery--and for three governance strategies required to deliver a well-functioning ISWM system: inclusivity; financial sustainability; and sound institutions and pro-active policies. Key insights include the variety and diversity of successful models - there is no 'one size fits all'; the necessity of good, reliable data; the importance of focusing on governance as well as technology; and the need to build on the existing strengths of the city. An example of the latter is the critical role of the informal sector in the cities in many developing countries: it not only delivers recycling rates that are comparable with modern Western systems, but also saves the city authorities millions of dollars in avoided waste collection and disposal costs. This provides the opportunity for win-win solutions, so long as the related wider challenges can be addressed.",
"title": ""
},
{
"docid": "4b47c2f98ebc8f7b19f90fdf1edcb2ee",
"text": "Prevalent theories about consciousness propose a causal relation between lack of spatial coding and absence of conscious experience: The failure to code the position of an object is assumed to prevent this object from entering consciousness. This is consistent with influential theories of unilateral neglect following brain damage, according to which spatial coding of neglected stimuli is defective, and this would keep their processing at the nonconscious level. Contrary to this view, we report evidence showing that spatial coding and consciousness can dissociate. A patient with left neglect, who was not aware of contralesional stimuli, was able to process their color and position. However, in contrast to (ipsilesional) consciously perceived stimuli, color and position of neglected stimuli were processed separately. We propose that individual object features, including position, can be processed without attention and consciousness and that conscious perception of an object depends on the binding of its features into an integrated percept.",
"title": ""
},
{
"docid": "5f419f75e2f6399e6a1a456f78d0e48e",
"text": "We present an attention-based bidirectional LSTM approach to improve the target-dependent sentiment classification. Our method learns the alignment between the target entities and the most distinguishing features. We conduct extensive experiments on a real-life dataset. The experimental results show that our model achieves state-of-the-art results.",
"title": ""
},
{
"docid": "ed35d80dd3af3acbe75e5122b2378756",
"text": "We present a system whereby the human voice may specify continuous control signals to manipulate a simulated 2D robotic arm and a real 3D robotic arm. Our goal is to move towards making accessible the manipulation of everyday objects to individuals with motor impairments. Using our system, we performed several studies using control style variants for both the 2D and 3D arms. Results show that it is indeed possible for a user to learn to effectively manipulate real-world objects with a robotic arm using only non-verbal voice as a control mechanism. Our results provide strong evidence that the further development of non-verbal voice controlled robotics and prosthetic limbs will be successful.",
"title": ""
},
{
"docid": "64facdd87a992ae923b2a468f1e29ade",
"text": "This paper focuses on the problem of controlling DC-to-DC switched power converter of Boost type. The system nonlinear feature is coped with by resorting to the backstepping control approach. Both adaptive and nonadaptive versions are designed and shown to yield quite interesting tracking and robustness performances. A comparison study shows that backstepping nonlinear controllers perform as well as passivity- based controllers. For both the choice of design parameters proves to be crucial to ensure robustness with respect to load resistance variations. From this viewpoint, adaptive backstepping controllers are more interesting as they prove to be less sensitive to design parameters.",
"title": ""
},
{
"docid": "9f34152d5dd13619d889b9f6e3dfd5c3",
"text": "Nichols, M. (2003). A theory for eLearning. Educational Technology & Society, 6(2), 1-10, Available at http://ifets.ieee.org/periodical/6-2/1.html ISSN 1436-4522. © International Forum of Educational Technology & Society (IFETS). The authors and the forum jointly retain the copyright of the articles. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear the full citation on the first page. Copyrights for components of this work owned by others than IFETS must be honoured. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from the editors at kinshuk@massey.ac.nz. A theory for eLearning",
"title": ""
},
{
"docid": "391fb9de39cb2d0635f2329362db846e",
"text": "In recent years, there has been an explosion of interest in mining time series databases. As with most computer science problems, representation of the data is the key to efficient and effective solutions. One of the most commonly used representations is piecewise linear approximation. This representation has been used by various researchers to support clustering, classification, indexing and association rule mining of time series data. A variety of algorithms have been proposed to obtain this representation, with several algorithms having been independently rediscovered several times. In this paper, we undertake the first extensive review and empirical comparison of all proposed techniques. We show that all these algorithms have fatal flaws from a data mining perspective. We introduce a novel algorithm that we empirically show to be superior to all others in the literature.",
"title": ""
},
{
"docid": "c102e00d44d335b344b56423bd16e7c5",
"text": "PURPOSE\nTo evaluate the association between social networking site (SNS) use and depression in older adolescents using an experience sample method (ESM) approach.\n\n\nMETHODS\nOlder adolescent university students completed an online survey containing the Patient Health Questionnaire-9 depression screen (PHQ) and a week-long ESM data collection period to assess SNS use.\n\n\nRESULTS\nParticipants (N = 190) included in the study were 58% female and 91% Caucasian. The mean age was 18.9 years (standard deviation = .8). Most used SNSs for either <30 minutes (n = 100, 53%) or between 30 minutes and 2 hours (n = 74, 39%); a minority of participants reported daily use of SNS >2 hours (n = 16, 8%). The mean PHQ score was 5.4 (standard deviation = 4.2). No associations were seen between SNS use and either any depression (p = .519) or moderate to severe depression (p = .470).\n\n\nCONCLUSIONS\nWe did not find evidence supporting a relationship between SNS use and clinical depression. Counseling patients or parents regarding the risk of \"Facebook Depression\" may be premature.",
"title": ""
}
] |
scidocsrr
|
c9e696fb2294b3479ab3d1fdb73de187
|
A biophysically-based neuromorphic model of spike rate- and timing-dependent plasticity.
|
[
{
"docid": "e7465b565ad849f3b7deb4fde2e86c0e",
"text": "Neuromorphic silicoN NeuroNs: state of the art Complementary metal-oxide-semiconductor (CMOS) transistors are commonly used in very-large-scale-integration (VLSI) digital circuits as a basic binary switch that turns on or off as the transistor gate voltage crosses some threshold. Carver Mead first noted that CMOS transistor circuits operating below this threshold in current mode have strikingly similar sigmoidal current– voltage relationships as do neuronal ion channels and consume little power; hence they are ideal analogs of neuronal function (Mead, 1989). This unique device physics led to the advent of “neuromorphic” silicon neurons (SiNs) which allow neuronal spiking dynamics to be directly emulated on analog VLSI chips without the need for digital software simulation (Mahowald and Douglas, 1991). In the inaugural issue of this Journal, Indiveri et al. (2011) review the current state of the art in CMOS-based neuromorphic neuron circuit designs that have evolved over the past two decades. The comprehensive appraisal delineates and compares the latest SiN design techniques as applied to varying types of spiking neuron models ranging from realistic conductancebased Hodgkin–Huxley models to simple yet versatile integrate-and-fire models. The timely and much needed compendium is a tour de force that will certainly provide a valuable guidepost for future SiN designs and applications.",
"title": ""
},
{
"docid": "342b57da0f0fcf190f926dfe0744977d",
"text": "Spike timing-dependent plasticity (STDP) as a Hebbian synaptic learning rule has been demonstrated in various neural circuits over a wide spectrum of species, from insects to humans. The dependence of synaptic modification on the order of pre- and postsynaptic spiking within a critical window of tens of milliseconds has profound functional implications. Over the past decade, significant progress has been made in understanding the cellular mechanisms of STDP at both excitatory and inhibitory synapses and of the associated changes in neuronal excitability and synaptic integration. Beyond the basic asymmetric window, recent studies have also revealed several layers of complexity in STDP, including its dependence on dendritic location, the nonlinear integration of synaptic modification induced by complex spike trains, and the modulation of STDP by inhibitory and neuromodulatory inputs. Finally, the functional consequences of STDP have been examined directly in an increasing number of neural circuits in vivo.",
"title": ""
}
] |
[
{
"docid": "b9b135e9ac811d360aee3aa9a7cec375",
"text": "BACKGROUND\nSecondary thrombocytosis is associated with a variety of clinical conditions, one of which is lower respiratory tract infection. However, reports on thrombocytosis induced by viral infections are scarce.\n\n\nOBJECTIVES\nTo assess the rate of thrombocytosis (platelet count > 500 x 10(9)/L) in hospitalized infants with bronchiolitis and to investigate its potential role as an early marker of respiratory syncytial virus infection.\n\n\nMETHODS\nClinical data on 469 infants aged < or = 4 months who were hospitalized for bronchiolitis were collected prospectively and compared between RSV-positive and RSV-negative infants.\n\n\nRESULTS\nThe rate of thrombocytosis was significantly higher in RSV-positive than RSV-negative infants (41.3% vs. 29.2%, P=0.031). The odds ratio of an infant with bronchiolitis and thrombocytosis to have a positive RSV infection compared to an infant with bronchiolitis and a normal platelet count was 1.7 (P= 0.023, 95% confidence interval 1.07-2.72). There was no significant difference in mean platelet count between the two groups.\n\n\nCONCLUSIONS\nRSV-positive bronchiolitis in hospitalized young infants is associated with thrombocytosis.",
"title": ""
},
{
"docid": "4f00d8fecd12179899ece621f44c4032",
"text": "In this paper we present a deployed, scalable optical character recognition (OCR) system, which we call Rosetta , designed to process images uploaded daily at Facebook scale. Sharing of image content has become one of the primary ways to communicate information among internet users within social networks such as Facebook, and the understanding of such media, including its textual information, is of paramount importance to facilitate search and recommendation applications. We present modeling techniques for efficient detection and recognition of text in images and describe Rosetta 's system architecture. We perform extensive evaluation of presented technologies, explain useful practical approaches to build an OCR system at scale, and provide insightful intuitions as to why and how certain components work based on the lessons learnt during the development and deployment of the system.",
"title": ""
},
{
"docid": "45d49bbbc2d763effed6c7dc03ee3ce4",
"text": "IMPORTANCE\nDespite research showing no link between the measles-mumps-rubella (MMR) vaccine and autism spectrum disorders (ASD), beliefs that the vaccine causes autism persist, leading to lower vaccination levels. Parents who already have a child with ASD may be especially wary of vaccinations.\n\n\nOBJECTIVE\nTo report ASD occurrence by MMR vaccine status in a large sample of US children who have older siblings with and without ASD.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nA retrospective cohort study using an administrative claims database associated with a large commercial health plan. Participants included children continuously enrolled in the health plan from birth to at least 5 years of age during 2001-2012 who also had an older sibling continuously enrolled for at least 6 months between 1997 and 2012.\n\n\nEXPOSURES\nMMR vaccine receipt (0, 1, 2 doses) between birth and 5 years of age.\n\n\nMAIN OUTCOMES AND MEASURES\nASD status defined as 2 claims with a diagnosis code in any position for autistic disorder or other specified pervasive developmental disorder (PDD) including Asperger syndrome, or unspecified PDD (International Classification of Diseases, Ninth Revision, Clinical Modification 299.0x, 299.8x, 299.9x).\n\n\nRESULTS\nOf 95,727 children with older siblings, 994 (1.04%) were diagnosed with ASD and 1929 (2.01%) had an older sibling with ASD. Of those with older siblings with ASD, 134 (6.9%) had ASD, vs 860 (0.9%) children with unaffected siblings (P < .001). MMR vaccination rates (≥1 dose) were 84% (n = 78,564) at age 2 years and 92% (n = 86,063) at age 5 years for children with unaffected older siblings, vs 73% (n = 1409) at age 2 years and 86% (n = 1660) at age 5 years for children with affected siblings. MMR vaccine receipt was not associated with an increased risk of ASD at any age. For children with older siblings with ASD, at age 2, the adjusted relative risk (RR) of ASD for 1 dose of MMR vaccine vs no vaccine was 0.76 (95% CI, 0.49-1.18; P = .22), and at age 5, the RR of ASD for 2 doses compared with no vaccine was 0.56 (95% CI, 0.31-1.01; P = .052). For children whose older siblings did not have ASD, at age 2, the adjusted RR of ASD for 1 dose was 0.91 (95% CI, 0.67-1.20; P = .50) and at age 5, the RR of ASD for 2 doses was 1.12 (95% CI, 0.78-1.59; P = .55).\n\n\nCONCLUSIONS AND RELEVANCE\nIn this large sample of privately insured children with older siblings, receipt of the MMR vaccine was not associated with increased risk of ASD, regardless of whether older siblings had ASD. These findings indicate no harmful association between MMR vaccine receipt and ASD even among children already at higher risk for ASD.",
"title": ""
},
{
"docid": "332981832176f90e4fc99f0c93cfb5d7",
"text": "Though silicon tunnel field effect transistor (TFET) has attracted attention for sub-60mV/decade subthreshold swing and very small OFF current (IOFF), its practical application is questionable due to low ON current (ION) and complicated fabrication process steps. In this paper, a new n-type classical-MOSFET-alike tunnel FET architecture is proposed, which offers sub-60mV/decade subthreshold swing along with a significant improvement in ION. The enhancement in ION is achieved by introducing a thin strained SiGe layer on top of the silicon source. Through 2D simulations it is observed that the device is nearly free from short channel effect (SCE) and its immunity towards drain induced barrier lowering (DIBL) increases with increasing germanium mole fraction. It is also found that the body bias does not change the drive current but after body current gets affected. An ION of 0:58mA=mm and a minimum average subthreshold swing of 13mV/decade is achieved for 100 nm channel length device with 1.2V supply voltage and 0.7 Ge mole fraction, while maintaining the IOFF in fA range. r 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "444a4b700fb2e3e60647150d37127d4b",
"text": "-Suggested by the structure of the visual nervous system, a new algorithm is proposed for pattern recognlt~on This algorithm can be reahzed with a multllayered network consisting of neuron-hke cells The network, \"neocognltron\", is self-organized by unsupervised learnmg, and acquires the abdlty to recognize stimulus patterns according to the &fferences in their shapes Any patterns which we human beings judge to be ahke are also judged to be of the same category by the neocognltron The neocognltron recognizes stimulus patterns correctly without being affected by shifts m position or even by considerable d~stortlons in shape of the stimulus patterns Visual pattern recognition Unsupervised learning Neural network model Deformatmn-reslstant Self-orgamzatmn Visual nervous system Posltlon-mvarlant Multdayered network Simulation",
"title": ""
},
{
"docid": "692207fdd7e27a04924000648f8b1bbf",
"text": "Many animals, on air, water, or land, navigate in three-dimensional (3D) environments, yet it remains unclear how brain circuits encode the animal's 3D position. We recorded single neurons in freely flying bats, using a wireless neural-telemetry system, and studied how hippocampal place cells encode 3D volumetric space during flight. Individual place cells were active in confined 3D volumes, and in >90% of the neurons, all three axes were encoded with similar resolution. The 3D place fields from different neurons spanned different locations and collectively represented uniformly the available space in the room. Theta rhythmicity was absent in the firing patterns of 3D place cells. These results suggest that the bat hippocampus represents 3D volumetric space by a uniform and nearly isotropic rate code.",
"title": ""
},
{
"docid": "31beba3bcdd3451ace78460fd7adb67f",
"text": "Distributed dense word vectors have been shown to be effective at capturing tokenlevel semantic and syntactic regularities in language, while topic models can form interpretable representations over documents. In this work, we describe lda2vec, a model that learns dense word vectors jointly with Dirichlet-distributed latent document-level mixtures of topic vectors. In contrast to continuous dense document representations, this formulation produces sparse, interpretable document mixtures through a non-negative simplex constraint. Our method is simple to incorporate into existing automatic differentiation frameworks and allows for unsupervised document representations geared for use by scientists while simultaneously learning word vectors and the linear relationships between them.",
"title": ""
},
{
"docid": "fbda5771eb59ef5abf6810b47412452d",
"text": "We demonstrate the Task Completion Platform (TCP); a multi-domain dialogue platform that can host and execute large numbers of goal-orientated dialogue tasks. The platform features a task configuration language, TaskForm, that allows the definition of each individual task to be decoupled from the overarching dialogue policy used by the platform to complete those tasks. This separation allows for simple and rapid authoring of new tasks, while dialogue policy and platform functionality evolve independent of the tasks. The current platform includes machine learnt models that provide contextual slot carry-over, flexible item selection, and task selection/switching. Any new task immediately gains the benefit of these pieces of built-in platform functionality. The platform is used to power many of the multi-turn dialogues supported by the Cortana personal assistant.",
"title": ""
},
{
"docid": "f1fe8a9d2e4886f040b494d76bc4bb78",
"text": "The benefits of enhanced condition monitoring in the asset management of the electricity transmission infrastructure are increasingly being exploited by the grid operators. Adding more sensors helps to track the plant health more accurately. However, the installation or operating costs of any additional sensors could outweigh the benefits they bring due to the requirement for new cabling or battery maintenance. Energy harvesting devices are therefore being proposed to power a new generation of wireless sensors. The harvesting devices could enable the sensors to be maintenance free over their lifetime and substantially reduce the cost of installing and operating a condition monitoring system.",
"title": ""
},
{
"docid": "74ccb28a31d5a861bea1adfaab2e9bf1",
"text": "For many decades CMOS devices have been successfully scaled down to achieve higher speed and increased performance of integrated circuits at lower cost. Today’s charge-based CMOS electronics encounters two major challenges: power dissipation and variability. Spintronics is a rapidly evolving research and development field, which offers a potential solution to these issues by introducing novel ‘more than Moore’ devices. Spin-based magnetoresistive random-access memory (MRAM) is already recognized as one of the most promising candidates for future universal memory. Magnetic tunnel junctions, the main elements of MRAM cells, can also be used to build logic-in-memory circuits with non-volatile storage elements on top of CMOS logic circuits, as well as versatile compact on-chip oscillators with low power consumption. We give an overview of CMOS-compatible spintronics applications. First, we present a brief introduction to the physical background considering such effects as magnetoresistance, spin-transfer torque (STT), spin Hall effect, and magnetoelectric effects. We continue with a comprehensive review of the state-of-the-art spintronic devices for memory applications (STT-MRAM, domain wallmotion MRAM, and spin–orbit torque MRAM), oscillators (spin torque oscillators and spin Hall nano-oscillators), logic (logic-in-memory, all-spin logic, and buffered magnetic logic gate grid), sensors, and random number generators. Devices with different types of resistivity switching are analyzed and compared, with their advantages highlighted and challenges revealed. CMOScompatible spintronic devices are demonstrated beginning with predictive simulations, proceeding to their experimental confirmation and realization, and finalized by the current status of application in modern integrated systems and circuits. We conclude the review with an outlook, where we share our vision on the future applications of the prospective devices in the area.",
"title": ""
},
{
"docid": "f1086cf6c27d39ea5e4a4c9b2522c74f",
"text": "This paper talks about the relationship between conceptual metaphor and semantic motivation of English and Chinese idioms from three aspects, namely, structural metaphor, orientation metaphor and ontological metaphor. Based on that, the author puts forward applying conceptual metaphor theory to English and Chinese idiom teaching.",
"title": ""
},
{
"docid": "bac5b36d7da7199c1bb4815fa0d5f7de",
"text": "During quadrupedal trotting, diagonal pairs of limbs are set down in unison and exert forces on the ground simultaneously. Ground-reaction forces on individual limbs of trotting dogs were measured separately using a series of four force platforms. Vertical and fore-aft impulses were determined for each limb from the force/time recordings. When mean fore-aft acceleration of the body was zero in a given trotting step (steady state), the fraction of vertical impulse on the forelimb was equal to the fraction of body weight supported by the forelimbs during standing (approximately 60 %). When dogs accelerated or decelerated during a trotting step, the vertical impulse was redistributed to the hindlimb or forelimb, respectively. This redistribution of the vertical impulse is due to a moment exerted about the pitch axis of the body by fore-aft accelerating and decelerating forces. Vertical forces exerted by the forelimb and hindlimb resist this pitching moment, providing stability during fore-aft acceleration and deceleration.",
"title": ""
},
{
"docid": "5027641226096e156f745cc4d2bbcb5a",
"text": "High resolution depth-maps, obtained by upsampling sparse range data from a 3D-LIDAR, find applications in many fields ranging from sensory perception to semantic segmentation and object detection. Upsampling is often based on combining data from a monocular camera to compensate the low-resolution of a LIDAR. This paper, on the other hand, introduces a novel framework to obtain dense depth-map solely from a single LIDAR point cloud; which is a research direction that has been barely explored. The formulation behind the proposed depth-mapping process relies on local spatial interpolation, using sliding-window (mask) technique, and on the Bilateral Filter (BF) where the variable of interest, the distance from the sensor, is considered in the interpolation problem. In particular, the BF is conveniently modified to perform depth-map upsampling such that the edges (foreground-background discontinuities) are better preserved by means of a proposed method which influences the range-based weighting term. Other methods for spatial upsampling are discussed, evaluated and compared in terms of different error measures. This paper also researches the role of the mask's size in the performance of the implemented methods. Quantitative and qualitative results from experiments on the KITTI Database, using LIDAR point clouds only, show very satisfactory performance of the approach introduced in this work.",
"title": ""
},
{
"docid": "d8da6bebb1ca8f00b176e1493ded4b9c",
"text": "This paper presents an efficient technique for the evaluation of different types of losses in substrate integrated waveguide (SIW). This technique is based on the Boundary Integral-Resonant Mode Expansion (BI-RME) method in conjunction with a perturbation approach. This method also permits to derive automatically multimodal and parametric equivalent circuit models of SIW discontinuities, which can be adopted for an efficient design of complex SIW circuits. Moreover, a comparison of losses in different types of planar interconnects (SIW, microstrip, coplanar waveguide) is presented.",
"title": ""
},
{
"docid": "8e03f4410676fb4285596960880263e9",
"text": "Fuzzy computing (FC) has made a great impact in capturing human domain knowledge and modeling non-linear mapping of input-output space. In this paper, we describe the design and implementation of FC systems for detection of money laundering behaviors in financial transactions and monitoring of distributed storage system load. Our objective is to demonstrate the power of FC for real-world applications which are characterized by imprecise, uncertain data, and incomplete domain knowledge. For both applications, we designed fuzzy rules based on experts’ domain knowledge, depending on money laundering scenarios in transactions or the “health” of a distributed storage system. In addition, we developped a generic fuzzy inference engine and contributed to the open source community.",
"title": ""
},
{
"docid": "a5ba65ad4e5b33be89904d75ba01029c",
"text": "A fast and efficient approach for color image segmentation is proposed. In this work, a new quantization technique for HSV color space is implemented to generate a color histogram and a gray histogram for K-Means clustering, which operates across different dimensions in HSV color space. Compared with the traditional K-Means clustering, the initialization of centroids and the number of cluster are automatically estimated in the proposed method. In addition, a filter for post-processing is introduced to effectively eliminate small spatial regions. Experiments show that the proposed segmentation algorithm achieves high computational speed, and salient regions of images can be effectively extracted. Moreover, the segmentation results are close to human perceptions.",
"title": ""
},
{
"docid": "6fae12410517c4a91559d9704e814809",
"text": "The concept of smart cities has gained relevance over the years. City leaders plan investments with the aim of evolving the city towards a smart city. Several models and frameworks, of which maturity models, provide directions or support such investment decisions. Nevertheless, it is not always clear whether the maturity models developed so far are able to fulfil their proposed objectives. This paper identifies smart city maturity models and assesses them, taking into account an approach based on the design principles framework proposed by Pöppelbuß & Röglinger [1]. The main objective of this paper is to infer on the relevance of current maturity models for smart cities, taking into account their purpose. Furthermore, it aims at creating awareness towards the need for completeness when developing a maturity model.",
"title": ""
},
{
"docid": "60bb725cf5f0923101949fc11e93502a",
"text": "An important ability of cognitive systems is the ability to familiarize themselves with the properties of objects and their environment as well as to develop an understanding of the consequences of their own actions on physical objects. Developing developmental approaches that allow cognitive systems to familiarize with objects in this sense via guided self-exploration is an important challenge within the field of developmental robotics. In this paper we present a novel approach that allows cognitive systems to familiarize themselves with the properties of objects and the effects of their actions on them in a self-exploration fashion. Our approach is inspired by developmental studies that hypothesize that infants have a propensity to systematically explore the connection between own actions and their perceptual consequences in order to support inter-modal calibration of their bodies. We propose a reinforcement-based approach operating in a continuous state space in which the function predicting cumulated future rewards is learned via a deep Q-network. We investigate the impact of the structure of rewards, the impact of different regularization approaches as well as the impact of different exploration strategies.",
"title": ""
},
{
"docid": "c102e00d44d335b344b56423bd16e7c5",
"text": "PURPOSE\nTo evaluate the association between social networking site (SNS) use and depression in older adolescents using an experience sample method (ESM) approach.\n\n\nMETHODS\nOlder adolescent university students completed an online survey containing the Patient Health Questionnaire-9 depression screen (PHQ) and a week-long ESM data collection period to assess SNS use.\n\n\nRESULTS\nParticipants (N = 190) included in the study were 58% female and 91% Caucasian. The mean age was 18.9 years (standard deviation = .8). Most used SNSs for either <30 minutes (n = 100, 53%) or between 30 minutes and 2 hours (n = 74, 39%); a minority of participants reported daily use of SNS >2 hours (n = 16, 8%). The mean PHQ score was 5.4 (standard deviation = 4.2). No associations were seen between SNS use and either any depression (p = .519) or moderate to severe depression (p = .470).\n\n\nCONCLUSIONS\nWe did not find evidence supporting a relationship between SNS use and clinical depression. Counseling patients or parents regarding the risk of \"Facebook Depression\" may be premature.",
"title": ""
},
{
"docid": "adb6144e24291071f6c80e1190582f4e",
"text": "Molecular docking is an important method in computational drug discovery. In large-scale virtual screening, millions of small drug-like molecules (chemical compounds) are compared against a designated target protein (receptor). Depending on the utilized docking algorithm for screening, this can take several weeks on conventional HPC systems. However, for certain applications including large-scale screening tasks for newly emerging infectious diseases such high runtimes can be highly prohibitive. In this paper, we investigate how the massively parallel neo-heterogeneous architecture of Tianhe-2 Supercomputer consisting of thousands of nodes comprising CPUs and MIC coprocessors that can efficiently be used for virtual screening tasks. Our proposed approach is based on a coordinated parallel framework called mD3DOCKxb in which CPUs collaborate with MICs to achieve high hardware utilization. mD3DOCKxb comprises a novel efficient communication engine for dynamic task scheduling and load balancing between nodes in order to reduce communication and I/O latency. This results in a highly scalable implementation with parallel efficiency of over 84% (strong scaling) when executing on 8,000 Tianhe-2 nodes comprising 192,000 CPU cores and 1,368,000 MIC cores.",
"title": ""
}
] |
scidocsrr
|
a746849703daae985e9d1c5a62d6b9d3
|
t-FFD: free-form deformation by using triangular mesh
|
[
{
"docid": "7d741e9073218fa073249e512161748d",
"text": "Free-form deformation (FFD) is a powerful modeling tool, but controlling the shape of an object under complex deformations is often difficult. The interface to FFD in most conventional systems simply represents the underlying mathematics directly; users describe deformations by manipulating control points. The difficulty in controlling shape precisely is largely due to the control points being extraneous to the object; the deformed object does not follow the control points exactly. In addition, the number of degrees of freedom presented to the user can be overwhelming. We present a method that allows a user to control a free-form deformation of an object by manipulating the object directly, leading to better control of the deformation and a more intuitive interface. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling Curve, Surface, Solid, and Object Representations; I.3.6 [Computer Graphics]: Methodology and Techniques Interaction Techniques. Additional",
"title": ""
}
] |
[
{
"docid": "b5c7b9f1f57d3d79d3fc8a97eef16331",
"text": "This paper presents an end-to-end convolutional neural network (CNN) for 2D-3D exemplar detection. We demonstrate that the ability to adapt the features of natural images to better align with those of CAD rendered views is critical to the success of our technique. We show that the adaptation can be learned by compositing rendered views of textured object models on natural images. Our approach can be naturally incorporated into a CNN detection pipeline and extends the accuracy and speed benefits from recent advances in deep learning to 2D-3D exemplar detection. We applied our method to two tasks: instance detection, where we evaluated on the IKEA dataset [36], and object category detection, where we out-perform Aubry et al. [3] for \"chair\" detection on a subset of the Pascal VOC dataset.",
"title": ""
},
{
"docid": "2ce21d12502577882ced4813603e9a72",
"text": "Positive psychology is the scientific study of positive experiences and positive individual traits, and the institutions that facilitate their development. A field concerned with well-being and optimal functioning, positive psychology aims to broaden the focus of clinical psychology beyond suffering and its direct alleviation. Our proposed conceptual framework parses happiness into three domains: pleasure, engagement, and meaning. For each of these constructs, there are now valid and practical assessment tools appropriate for the clinical setting. Additionally, mounting evidence demonstrates the efficacy and effectiveness of positive interventions aimed at cultivating pleasure, engagement, and meaning. We contend that positive interventions are justifiable in their own right. Positive interventions may also usefully supplement direct attempts to prevent and treat psychopathology and, indeed, may covertly be a central component of good psychotherapy as it is done now.",
"title": ""
},
{
"docid": "b7aea71af6c926344286fbfa214c4718",
"text": "Semantic segmentation is a task that covers most of the perception needs of intelligent vehicles in an unified way. ConvNets excel at this task, as they can be trained end-to-end to accurately classify multiple object categories in an image at the pixel level. However, current approaches normally involve complex architectures that are expensive in terms of computational resources and are not feasible for ITS applications. In this paper, we propose a deep architecture that is able to run in real-time while providing accurate semantic segmentation. The core of our ConvNet is a novel layer that uses residual connections and factorized convolutions in order to remain highly efficient while still retaining remarkable performance. Our network is able to run at 83 FPS in a single Titan X, and at more than 7 FPS in a Jetson TX1 (embedded GPU). A comprehensive set of experiments demonstrates that our system, trained from scratch on the challenging Cityscapes dataset, achieves a classification performance that is among the state of the art, while being orders of magnitude faster to compute than other architectures that achieve top precision. This makes our model an ideal approach for scene understanding in intelligent vehicles applications.",
"title": ""
},
{
"docid": "ac5c015aa485084431b8dba640f294b5",
"text": "In human sentence processing, cognitive load can be defined many ways. This report considers a definition of cognitive load in terms of the total probability of structural options that have been disconfirmed at some point in a sentence: the surprisal of word wi given its prefix w0...i−1 on a phrase-structural language model. These loads can be efficiently calculated using a probabilistic Earley parser (Stolcke, 1995) which is interpreted as generating predictions about reading time on a word-by-word basis. Under grammatical assumptions supported by corpusfrequency data, the operation of Stolcke’s probabilistic Earley parser correctly predicts processing phenomena associated with garden path structural ambiguity and with the subject/object relative asymmetry.",
"title": ""
},
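The record above defines per-word cognitive load as surprisal, i.e. the negative log probability of a word given its prefix under a language model (there computed with a probabilistic Earley parser over a phrase-structure grammar). As a minimal hedged sketch of the quantity itself, the Python below computes word-by-word surprisal under a smoothed bigram model; the toy corpus, smoothing constant, and the use of a bigram model in place of the paper's parser are illustrative assumptions.

import math
from collections import Counter

# Toy corpus standing in for the phrase-structural language model in the record;
# a bigram model is used here purely for illustration.
corpus = "the horse raced past the barn fell".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def surprisal(prev, word, alpha=1.0):
    """Surprisal in bits of `word` given the previous word, with add-alpha smoothing."""
    vocab = len(unigrams)
    p = (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab)
    return -math.log2(p)

for prev, word in zip(corpus, corpus[1:]):
    print(f"{word:>6}: {surprisal(prev, word):.2f} bits")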
{
"docid": "6bafdd357ad44debeda78d911a69da90",
"text": "We present a framework to tackle combinatorial optimization problems using neural networks and reinforcement learning. We focus on the traveling salesman problem (TSP) and train a recurrent neural network that, given a set of city coordinates, predicts a distribution over different city permutations. Using negative tour length as the reward signal, we optimize the parameters of the recurrent neural network using a policy gradient method. Without much engineering and heuristic designing, Neural Combinatorial Optimization achieves close to optimal results on 2D Euclidean graphs with up to 100 nodes. These results, albeit still quite far from state-of-the-art, give insights into how neural networks can be used as a general tool for tackling combinatorial optimization problems.",
"title": ""
},
{
"docid": "69ad6c10f8a7ae4629ff2aee38da0ddb",
"text": "A new hybrid security algorithm is presented for RSA cryptosystem named as Hybrid RSA. The system works on the concept of using two different keys- a private and a public for decryption and encryption processes. The value of public key (P) and private key (Q) depends on value of M, where M is the product of four prime numbers which increases the factorizing of variable M. moreover, the computation of P and Q involves computation of some more factors which makes it complex. This states that the variable x or M is transferred during encryption and decryption process, where x represents the multiplication of two prime numbers A and B. thus, it provides more secure path for encryption and decryption process. The proposed system is compared with the RSA and enhanced RSA (ERSA) algorithms to measure the key generation time, encryption and decryption time which is proved to be more efficient than RSA and ERSA.",
"title": ""
},
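The record above describes an RSA variant whose modulus M is the product of four primes. The sketch below is a textbook-style four-prime RSA toy in Python, intended only to make the "modulus from four primes" idea concrete; the tiny primes, the exponent choice, and the plain pow-based encryption are illustrative assumptions and omit the paper's additional factors, padding, and any real security considerations. It requires Python 3.8+ for the modular inverse via pow(e, -1, phi).

from math import gcd

# Toy four-prime RSA-style key generation; primes are tiny and purely illustrative.
primes = [61, 53, 67, 71]
M = 1
phi = 1
for prime in primes:
    M *= prime
    phi *= prime - 1

e = 65537 % phi              # hypothetical public exponent
while gcd(e, phi) != 1:      # ensure e is invertible modulo phi
    e += 2
d = pow(e, -1, phi)          # private exponent (modular inverse, Python 3.8+)

msg = 42
cipher = pow(msg, e, M)
plain = pow(cipher, d, M)
assert plain == msg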
{
"docid": "4789f548800a38c11f0fa2f91efc95c9",
"text": "Most of the Low Dropout Regulators (LDRs) have limited operation range of load current due to their stability problem. This paper proposes a new frequency compensation scheme for LDR to optimize the regulator performance over a wide load current range. By introducing a tracking zero to cancel out the regulator output pole, the frequency response of the feedback loop becomes load current independent. The open-loop DC gain is boosted up by a low frequency dominant pole, which increases the regulator accuracy. To demonstrate the feasibility of the proposed scheme, a LDR utilizing the new frequency compensation scheme is designed and fabricated using TSMC 0.3511~1 digital CMOS process. Simulation results show that with output current from 0 pA to 100 mA the bandwidth variation is only 2.3 times and the minimum DC gain is 72 dB. Measurement of the dynamic response matches well with simulation.",
"title": ""
},
{
"docid": "d0811a8c8b760b8dadfa9a51df568bd9",
"text": "A strain of the microalga Chlorella pyrenoidosa F-9 in our laboratory showed special characteristics when transferred from autotrophic to heterotrophic culture. In order to elucidate the possible metabolic mechanism, the gene expression profiles of the autonomous organelles in the green alga C. pyrenoidosa under autotrophic and heterotrophic cultivation were compared by suppression subtractive hybridization technology. Two subtracted libraries of autotrophic and heterotrophic C. pyrenoidosa F-9 were constructed, and 160 clones from the heterotrophic library were randomly selected for DNA sequencing. Dot blot hybridization showed that the ratio of positivity was 70.31% from the 768 clones. Five chloroplast genes (ftsH, psbB, rbcL, atpB, and infA) and two mitochondrial genes (cox2 and nad6) were selected to verify their expression levels by real-time quantitative polymerase chain reaction. Results showed that the seven genes were abundantly expressed in the heterotrophic culture. Among the seven genes, the least increment of gene expression was ftsH, which was expressed 1.31-1.85-fold higher under heterotrophy culture than under autotrophy culture, and the highest increment was psbB, which increased 28.07-39.36 times compared with that under autotrophy conditions. The expression levels of the other five genes were about 10 times higher in heterotrophic algae than in autotrophic algae. In inclusion, the chloroplast and mitochondrial genes in C. pyrenoidosa F-9 might be actively involved in heterotrophic metabolism.",
"title": ""
},
{
"docid": "f7c4b71b970b7527cd2650ce1e05ab1b",
"text": "BACKGROUND\nPhysician burnout has reached epidemic levels, as documented in national studies of both physicians in training and practising physicians. The consequences are negative effects on patient care, professionalism, physicians' own care and safety, and the viability of health-care systems. A more complete understanding than at present of the quality and outcomes of the literature on approaches to prevent and reduce burnout is necessary.\n\n\nMETHODS\nIn this systematic review and meta-analysis, we searched MEDLINE, Embase, PsycINFO, Scopus, Web of Science, and the Education Resources Information Center from inception to Jan 15, 2016, for studies of interventions to prevent and reduce physician burnout, including single-arm pre-post comparison studies. We required studies to provide physician-specific burnout data using burnout measures with validity support from commonly accepted sources of evidence. We excluded studies of medical students and non-physician health-care providers. We considered potential eligibility of the abstracts and extracted data from eligible studies using a standardised form. Outcomes were changes in overall burnout, emotional exhaustion score (and high emotional exhaustion), and depersonalisation score (and high depersonalisation). We used random-effects models to calculate pooled mean difference estimates for changes in each outcome.\n\n\nFINDINGS\nWe identified 2617 articles, of which 15 randomised trials including 716 physicians and 37 cohort studies including 2914 physicians met inclusion criteria. Overall burnout decreased from 54% to 44% (difference 10% [95% CI 5-14]; p<0·0001; I2=15%; 14 studies), emotional exhaustion score decreased from 23·82 points to 21·17 points (2·65 points [1·67-3·64]; p<0·0001; I2=82%; 40 studies), and depersonalisation score decreased from 9·05 to 8·41 (0·64 points [0·15-1·14]; p=0·01; I2=58%; 36 studies). High emotional exhaustion decreased from 38% to 24% (14% [11-18]; p<0·0001; I2=0%; 21 studies) and high depersonalisation decreased from 38% to 34% (4% [0-8]; p=0·04; I2=0%; 16 studies).\n\n\nINTERPRETATION\nThe literature indicates that both individual-focused and structural or organisational strategies can result in clinically meaningful reductions in burnout among physicians. Further research is needed to establish which interventions are most effective in specific populations, as well as how individual and organisational solutions might be combined to deliver even greater improvements in physician wellbeing than those achieved with individual solutions.\n\n\nFUNDING\nArnold P Gold Foundation Research Institute.",
"title": ""
},
{
"docid": "274485dd39c0727c99fcc0a07d434b25",
"text": "Fetal mortality rate is considered a good measure of the quality of health care in a country or a medical facility. If we look at the current scenario, we find that we have focused more on child mortality rate than on fetus mortality. Even it is a same situation in developed country. Our aim is to provide technological solution to help decrease the fetal mortality rate. Also if we consider pregnant women, they have to come to hospital 2-3 times a week for their regular checkups. It becomes a problem for working women and women having diabetes or other disease. For these reasons it would be very helpful if they can do this by themselves at home. This will reduce the frequency of their visit to the hospital at same time cause no compromise in the wellbeing of both the mother and the child. The end to end system consists of wearable sensors, built into a fabric belt, that collects and sends vital signs of patients via bluetooth to smart mobile phones for further processing and made available to required personnel allowing efficient monitoring and alerting when attention is required in often challenging and chaotic scenarios.",
"title": ""
},
{
"docid": "b27ab468a885a3d52ec2081be06db2ef",
"text": "The beautification of human photos usually requires professional editing softwares, which are difficult for most users. In this technical demonstration, we propose a deep face beautification framework, which is able to automatically modify the geometrical structure of a face so as to boost the attractiveness. A learning based approach is adopted to capture the underlying relations between the facial shape and the attractiveness via training the Deep Beauty Predictor (DBP). Relying on the pre-trained DBP, we construct the BeAuty SHaper (BASH) to infer the \"flows\" of landmarks towards the maximal aesthetic level. BASH modifies the facial landmarks with the direct guidance of the beauty score estimated by DBP.",
"title": ""
},
{
"docid": "1709f180c56cab295bf9fd9c3e35d4ef",
"text": "Harmonic radar systems provide an effective modality for tracking insect behavior. This letter presents a harmonic radar system proposed to track the migration of the Emerald Ash Borer (EAB). The system offers a unique combination of portability, low power and small tag design. It is comprised of a compact radar unit and a passive RF tag for mounting on the insect. The radar unit transmits a 5.96 GHz signal and detects at the 11.812 GHz band. A prototype of the radar unit was built and tested, and a new small tag was designed for the application. The new tag offers improved harmonic conversion efficiency and much smaller size as compared to previous harmonic radar systems for tracking insects. Unlike RFID detectors whose sensitivity allows detection up to a few meters, the developed radar can detect a tagged insect up to 58 m (190 ft).",
"title": ""
},
{
"docid": "a600a19440b8e6799e0e603cf56ff141",
"text": "In this work, we address the problem of distributed expert finding using chains of social referrals and profile matching with only local information in online social networks. By assuming that users are selfish, rational, and have privately known cost of participating in the referrals, we design a novel truthful efficient mechanism in which an expert-finding query will be relayed by intermediate users. When receiving a referral request, a participant will locally choose among her neighbors some user to relay the request. In our mechanism, several closely coupled methods are carefully designed to improve the performance of distributed search, including, profile matching, social acquaintance prediction, score function for locally choosing relay neighbors, and budget estimation. We conduct extensive experiments on several data sets of online social networks. The extensive study of our mechanism shows that the success rate of our mechanism is about 90 percent in finding closely matched experts using only local search and limited budget, which significantly improves the previously best rate 20 percent. The overall cost of finding an expert by our truthful mechanism is about 20 percent of the untruthful methods, e.g., the method that always selects high-degree neighbors. The median length of social referral chains is 6 using our localized search decision, which surprisingly matches the well-known small-world phenomenon of global social structures.",
"title": ""
},
{
"docid": "fd91f09861da433d27d4db3f7d2a38a6",
"text": "Herbert Simon’s research endeavor aimed to understand the processes that participate in human decision making. However, despite his effort to investigate this question, his work did not have the impact in the “decision making” community that it had in other fields. His rejection of the assumption of perfect rationality, made in mainstream economics, led him to develop the concept of bounded rationality. Simon’s approach also emphasized the limitations of the cognitive system, the change of processes due to expertise, and the direct empirical study of cognitive processes involved in decision making. In this article, we argue that his subsequent research program in problem solving and expertise offered critical tools for studying decision-making processes that took into account his original notion of bounded rationality. Unfortunately, these tools were ignored by the main research paradigms in decision making, such as Tversky and Kahneman’s biased rationality approach (also known as the heuristics and biases approach) and the ecological approach advanced by Gigerenzer and others. We make a proposal of how to integrate Simon’s approach with the main current approaches to decision making. We argue that this would lead to better models of decision making that are more generalizable, have higher ecological validity, include specification of cognitive processes, and provide a better understanding of the interaction between the characteristics of the cognitive system and the contingencies of the environment.",
"title": ""
},
{
"docid": "39eac1617b9b68f68022577951460fb5",
"text": "Web services support software architectures that can evolve dynamically. In particular, here we focus on architectures where services are composed (orchestrated) through a workflow described in the BPEL language. We assume that the resulting composite service refers to external services through assertions that specify their expected functional and non-functional properties. Based on these assertions, the composite service may be verified at design time by checking that it ensures certain relevant properties. Because of the dynamic nature of Web services and the multiple stakeholders involved in their provision, however, the external services may evolve dynamically, and even unexpectedly. They may become inconsistent with respect to the assertions against which the workflow was verified during development. As a consequence, validation of the composition must extend to run time. We introduce an assertion language, called ALBERT, which can be used to specify both functional and non-functional properties. We also describe an environment which supports design-time verification of ALBERT assertions for BPEL workflows via model checking. At run time, the assertions can be turned into checks that a software monitor performs on the composite system to verify that it continues to guarantee its required properties. A TeleAssistance application is provided as a running example to illustrate our validation framework.",
"title": ""
},
{
"docid": "2ecfc909301dcc6241bec2472b4d4135",
"text": "Previous work on text mining has almost exclusively focused on a single stream. However, we often have available multiple text streams indexed by the same set of time points (called coordinated text streams), which offer new opportunities for text mining. For example, when a major event happens, all the news articles published by different agencies in different languages tend to cover the same event for a certain period, exhibiting a correlated bursty topic pattern in all the news article streams. In general, mining correlated bursty topic patterns from coordinated text streams can reveal interesting latent associations or events behind these streams. In this paper, we define and study this novel text mining problem. We propose a general probabilistic algorithm which can effectively discover correlated bursty patterns and their bursty periods across text streams even if the streams have completely different vocabularies (e.g., English vs Chinese). Evaluation of the proposed method on a news data set and a literature data set shows that it can effectively discover quite meaningful topic patterns from both data sets: the patterns discovered from the news data set accurately reveal the major common events covered in the two streams of news articles (in English and Chinese, respectively), while the patterns discovered from two database publication streams match well with the major research paradigm shifts in database research. Since the proposed method is general and does not require the streams to share vocabulary, it can be applied to any coordinated text streams to discover correlated topic patterns that burst in multiple streams in the same period.",
"title": ""
},
{
"docid": "301ce75026839f85bc15100a9a7cc5ca",
"text": "This paper presents a novel visual-inertial integration system for human navigation in free-living environments, where the measurements from wearable inertial and monocular visual sensors are integrated. The preestimated orientation, obtained from magnet, angular rate, and gravity sensors, is used to estimate the translation based on the data from the visual and inertial sensors. This has a significant effect on the performance of the fusion sensing strategy and makes the fusion procedure much easier, because the gravitational acceleration can be correctly removed from the accelerometer measurements before the fusion procedure, where a linear Kalman filter is selected as the fusion estimator. Furthermore, the use of preestimated orientation can help to eliminate erroneous point matches based on the properties of the pure camera translation and thus the computational requirements can be significantly reduced compared with the RANdom SAmple Consensus algorithm. In addition, an adaptive-frame rate single camera is selected to not only avoid motion blur based on the angular velocity and acceleration after compensation, but also to make an effect called visual zero-velocity update for the static motion. Thus, it can recover a more accurate baseline and meanwhile reduce the computational requirements. In particular, an absolute scale factor, which is usually lost in monocular camera tracking, can be obtained by introducing it into the estimator. Simulation and experimental results are presented for different environments with different types of movement and the results from a Pioneer robot are used to demonstrate the accuracy of the proposed method.",
"title": ""
},
{
"docid": "1968573cf98307276bf0f10037aa3623",
"text": "In many imaging applications, the continuous phase information of the measured signal is wrapped to a single period of 2π, resulting in phase ambiguity. In this paper we consider the two-dimensional phase unwrapping problem and propose a Maximum a Posteriori (MAP) framework for estimating the true phase values based on the wrapped phase data. In particular, assuming a joint Gaussian prior on the original phase image, we show that the MAP formulation leads to a binary quadratic minimization problem. The latter can be efficiently solved by semidefinite relaxation (SDR). We compare the performances of our proposed method with the existing L1/L2-norm minimization approaches. The numerical results demonstrate that the SDR approach significantly outperforms the existing phase unwrapping methods.",
"title": ""
},
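The record above concerns recovering continuous phase from values wrapped into a single 2π period. The short Python sketch below only illustrates the wrapping ambiguity and the simple 1D baseline (numpy's unwrap); the paper's actual contribution, a 2D MAP formulation solved by semidefinite relaxation, is not reproduced here, and the linearly increasing test phase is an invented example.

import numpy as np

# Illustration of the phase-wrapping ambiguity (1D toy case).
true_phase = np.linspace(0, 6 * np.pi, 50)        # continuous phase
wrapped = np.angle(np.exp(1j * true_phase))       # wrapped into (-pi, pi]
recovered = np.unwrap(wrapped)                    # simple 1D unwrapping baseline
print(np.allclose(recovered, true_phase))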
{
"docid": "b85e9ef3652a99e55414d95bfed9cc0d",
"text": "Regulatory T cells (Tregs) prevail as a specialized cell lineage that has a central role in the dominant control of immunological tolerance and maintenance of immune homeostasis. Thymus-derived Tregs (tTregs) and their peripherally induced counterparts (pTregs) are imprinted with unique Forkhead box protein 3 (Foxp3)-dependent and independent transcriptional and epigenetic characteristics that bestows on them the ability to suppress disparate immunological and non-immunological challenges. Thus, unidirectional commitment and the predominant stability of this regulatory lineage is essential for their unwavering and robust suppressor function and has clinical implications for the use of Tregs as cellular therapy for various immune pathologies. However, recent studies have revealed considerable heterogeneity or plasticity in the Treg lineage, acquisition of alternative effector or hybrid fates, and promotion rather than suppression of inflammation in extreme contexts. In addition, the absolute stability of Tregs under all circumstances has been questioned. Since these observations challenge the safety and efficacy of human Treg therapy, the issue of Treg stability versus plasticity continues to be enthusiastically debated. In this review, we assess our current understanding of the defining features of Foxp3(+) Tregs, the intrinsic and extrinsic cues that guide development and commitment to the Treg lineage, and the phenotypic and functional heterogeneity that shapes the plasticity and stability of this critical regulatory population in inflammatory contexts.",
"title": ""
},
{
"docid": "d7ab8b7604d90e1a3bb6b4c1e54833a0",
"text": "Invisibility devices have captured the human imagination for many years. Recent theories have proposed schemes for cloaking devices using transformation optics and conformal mapping. Metamaterials, with spatially tailored properties, have provided the necessary medium by enabling precise control over the flow of electromagnetic waves. Using metamaterials, the first microwave cloaking has been achieved but the realization of cloaking at optical frequencies, a key step towards achieving actual invisibility, has remained elusive. Here, we report the first experimental demonstration of optical cloaking. The optical 'carpet' cloak is designed using quasi-conformal mapping to conceal an object that is placed under a curved reflecting surface by imitating the reflection of a flat surface. The cloak consists only of isotropic dielectric materials, which enables broadband and low-loss invisibility at a wavelength range of 1,400-1,800 nm.",
"title": ""
}
] |
scidocsrr
|
9cf6ce4318504a40e6c0623e0f80e9db
|
Good Friends, Bad News - Affect and Virality in Twitter
|
[
{
"docid": "59af45fa33fd70d044f9749e59ba3ca7",
"text": "Retweeting is the key mechanism for information diffusion in Twitter. It emerged as a simple yet powerful way of disseminating useful information. Even though a lot of information is shared via its social network structure in Twitter, little is known yet about how and why certain information spreads more widely than others. In this paper, we examine a number of features that might affect retweetability of tweets. We gathered content and contextual features from 74M tweets and used this data set to identify factors that are significantly associated with retweet rate. We also built a predictive retweet model. We found that, amongst content features, URLs and hashtags have strong relationships with retweetability. Amongst contextual features, the number of followers and followees as well as the age of the account seem to affect retweetability, while, interestingly, the number of past tweets does not predict retweetability of a user’s tweet. We believe that this research would inform the design of sensemaking tools for Twitter streams as well as other general social media collections. Keywords-Twitter; retweet; tweet; follower; social network; social media; factor analysis",
"title": ""
},
{
"docid": "53477003e3c57381201a69e7cc54cfc9",
"text": "Twitter - a microblogging service that enables users to post messages (\"tweets\") of up to 140 characters - supports a variety of communicative practices; participants use Twitter to converse with individuals, groups, and the public at large, so when conversations emerge, they are often experienced by broader audiences than just the interlocutors. This paper examines the practice of retweeting as a way by which participants can be \"in a conversation.\" While retweeting has become a convention inside Twitter, participants retweet using different styles and for diverse reasons. We highlight how authorship, attribution, and communicative fidelity are negotiated in diverse ways. Using a series of case studies and empirical data, this paper maps out retweeting as a conversational practice.",
"title": ""
}
] |
[
{
"docid": "75fda2fa6c35c915dede699c12f45d84",
"text": "This work presents an open-source framework called systemc-clang for analyzing SystemC models that consist of a mixture of register-transfer level, and transaction-level components. The framework statically parses mixed-abstraction SystemC models, and represents them using an intermediate representation. This intermediate representation captures the structural information about the model, and certain behavioural semantics of the processes in the model. This representation can be used for multiple purposes such as static analysis of the model, code transformations, and optimizations. We describe with examples, the key details in implementing systemc-clang, and show an example of constructing a plugin that analyzes the intermediate representation to discover opportunities for parallel execution of SystemC processes. We also experimentally evaluate the capabilities of this framework with a subset of examples from the SystemC distribution including register-transfer, and transaction-level models.",
"title": ""
},
{
"docid": "e6953902f5fc0bb9f98d9c632b2ac26e",
"text": "In high voltage (HV) flyback charging circuits, the importance of transformer parasitics holds a significant part in the overall system parasitics. The HV transformers have a larger number of turns on the secondary side that leads to higher self-capacitance which is inevitable. The conventional wire-wound transformer (CWT) has limitation over the design with larger self-capacitance including increased size and volume. For capacitive load in flyback charging circuit these self-capacitances on the secondary side gets added with device capacitances and dominates the load. For such applications the requirement is to have a transformer with minimum self-capacitances and low profile. In order to achieve the above requirements Planar Transformer (PT) design can be implemented with windings as tracks in Printed Circuit Boards (PCB) each layer is insulated by the FR4 material which aids better insulation. Finite Element Model (FEM) has been developed to obtain the self-capacitance in between the layers for larger turns on the secondary side. The modelled hardware prototype of the Planar Transformer has been characterised for open circuit and short circuit test using Frequency Response Analyser (FRA). The results obtained from FEM and FRA are compared and presented.",
"title": ""
},
{
"docid": "5732967997a3914e0a9ef37305d18ee4",
"text": "Protein palmitoylation is an essential post-translational lipid modification of proteins, and reversibly orchestrates a variety of cellular processes. Identification of palmitoylated proteins with their sites is the foundation for understanding molecular mechanisms and regulatory roles of palmitoylation. Contrasting to the labor-intensive and time-consuming experimental approaches, in silico prediction of palmitoylation sites has attracted much attention as a popular strategy. In this work, we updated our previous CSS-Palm into version 2.0. An updated clustering and scoring strategy (CSS) algorithm was employed with great improvement. The leave-one-out validation and 4-, 6-, 8- and 10-fold cross-validations were adopted to evaluate the prediction performance of CSS-Palm 2.0. Also, an additional new data set not included in training was used to test the robustness of CSS-Palm 2.0. By comparison, the performance of CSS-Palm was much better than previous tools. As an application, we performed a small-scale annotation of palmitoylated proteins in budding yeast. The online service and local packages of CSS-Palm 2.0 were freely available at: http://bioinformatics.lcd-ustc.org/css_palm.",
"title": ""
},
{
"docid": "5e5e2d038ae29b4c79c79abe3d20ae40",
"text": "Article history: Received 28 February 2013 Accepted 26 July 2013 Available online 11 October 2013 Fault diagnosis of Discrete Event Systems has become an active research area in recent years. The research activity in this area is driven by the needs of many different application domains such as manufacturing, process control, control systems, transportation, communication networks, software engineering, and others. The aim of this paper is to review the state-of the art of methods and techniques for fault diagnosis of Discrete Event Systems based on models that include faulty behaviour. Theoretical and practical issues related to model description tools, diagnosis processing structure, sensor selection, fault representation and inference are discussed. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "eed788297c1b49895f8f19012b6231f2",
"text": "Can the choice of words and tone used by the authors of financial news articles correlate to measurable stock price movements? If so, can the magnitude of price movement be predicted using these same variables? We investigate these questions using the Arizona Financial Text (AZFinText) system, a financial news article prediction system, and pair it with a sentiment analysis tool. Through our analysis, we found that subjective news articles were easier to predict in price direction (59.0% versus 50.0% of chance alone) and using a simple trading engine, subjective articles garnered a 3.30% return. Looking further into the role of author tone in financial news articles, we found that articles with a negative sentiment were easiest to predict in price direction (50.9% versus 50.0% of chance alone) and a 3.04% trading return. Investigating negative sentiment further, we found that our system was able to predict price decreases in articles of a positive sentiment 53.5% of the time, and price increases in articles of a negative",
"title": ""
},
{
"docid": "94277962f6f6e0667851600e851e7dad",
"text": "Self-introduction of foreign bodies along the penile shaft has been reported in several ethnic and social groups, mainly in Asia, and recently has been described in Europe. We present the case of a 34-year-old homeless Russian immigrant who had an abdominal CT performed during an emergency department visit. On the CT scan, several hyperdense, well-demarcated subcutaneous nodules along the penile shaft were noted. Following a focused history and physical examination, the nodules were found to represent artificial foreign bodies made of glass, which were self-introduced by the patient in order to allegedly increase the pleasure of sexual partners. Penile nodules may be a manifestation of diverse pathological entities including infectious, inflammatory, and neoplastic processes. It is important for the radiologist to be familiar with this social phenomenon and its radiological appearance in order to avoid erroneous diagnosis.",
"title": ""
},
{
"docid": "98aaa75d102a76840de89d4876643943",
"text": "DeviceNet and ControlNet are two well known industrial networks based on the CIP protocol (CIP = Control and Information Protocol). Both networks have been developed by Rockwell Automation, but are now owned and maintained by the two manufacturers organizations ODVA (Open DeviceNet Vendors Association) and ControlNet International. ODVA and ControlNet International have introduced the newest member of this family-Ethernet/IP (\"IP\" stands for \"Industrial Protocol\"). The paper describes the techniques and mechanisms that are used to implement a fully consistent set of services and data objects on a TCP/UDP/IP based Ethernet network.",
"title": ""
},
{
"docid": "c6bfdc5c039de4e25bb5a72ec2350223",
"text": "Free-energy-based reinforcement learning (FERL) can handle Markov decision processes (MDPs) with high-dimensional state spaces by approximating the state-action value function with the negative equilibrium free energy of a restricted Boltzmann machine (RBM). In this study, we extend the FERL framework to handle partially observable MDPs (POMDPs) by incorporating a recurrent neural network that learns a memory representation sufficient for predicting future observations and rewards. We demonstrate that the proposed method successfully solves POMDPs with high-dimensional observations without any prior knowledge of the environmental hidden states and dynamics. After learning, task structures are implicitly represented in the distributed activation patterns of hidden nodes of the RBM.",
"title": ""
},
{
"docid": "2c3566048334e60ae3f30bd631e4da87",
"text": "The Indian Railways is world's fourth largest railway network in the world after USA, Russia and China. There is a severe problem of collisions of trains. So Indian railway is working in this aspect to promote the motto of "SAFE JOURNEY". A RFID based railway track finding system for railway has been proposed in this paper. In this system the RFID tags and reader are used which are attached in the tracks and engine consecutively. So Train engine automatically get the data of path by receiving it from RFID tag and detect it. If path is correct then train continue to run on track and if it is wrong then a signal is generated and sent to the control station and after this engine automatically stop in a minimum time and the display of LCD show the "WRONG PATH". So the collision and accident of train can be avoided. With the help of this system the train engine would be programmed to move according to the requirement. The another feature of this system is automatic track changer by which the track jointer would move automatically according to availability of trains.",
"title": ""
},
{
"docid": "b39904ccd087e59794cf2cc02e5d2644",
"text": "In this paper, we propose a novel walking method for torque controlled robots. The method is able to produce a wide range of speeds without requiring off-line optimizations and re-tuning of parameters. We use a quadratic whole-body optimization method running online which generates joint torques, given desired Cartesian accelerations of center of mass and feet. Using a dynamics model of the robot inside this optimizer, we ensure both compliance and tracking, required for fast locomotion. We have designed a foot-step planner that uses a linear inverted pendulum as simplified robot internal model. This planner is formulated as a quadratic convex problem which optimizes future steps of the robot. Fast libraries help us performing these calculations online. With very few parameters to tune and no perception, our method shows notable robustness against strong external pushes, relatively large terrain variations, internal noises, model errors and also delayed communication.",
"title": ""
},
{
"docid": "cac7822c1a40b406c998449e2664815f",
"text": "This paper demonstrates the possibility and feasibility of an ultralow-cost antenna-in-package (AiP) solution for the upcoming generation of wireless local area networks (WLANs) denoted as IEEE802.11ad. The iterative design procedure focuses on maximally alleviating the inherent disadvantages of high-volume FR4 process at 60 GHz such as its relatively high material loss and fabrication restrictions. Within the planar antenna package, the antenna element, vertical transition, antenna feedline, and low- and high-speed interfaces are allocated in a vertical schematic. A circular stacked patch antenna renders the antenna package to exhibit 10-dB return loss bandwidth from 57-66 GHz. An embedded coplanar waveguide (CPW) topology is adopted for the antenna feedline and features less than 0.24 dB/mm in unit loss, which is extracted from measured parametric studies. The fabricated single antenna package is 9 mm × 6 mm × 0.404 mm in dimension. A multiple-element antenna package is fabricated, and its feasibility for future phase array applications is studied. Far-field radiation measurement using an inhouse radio-frequency (RF) probe station validates the single-antenna package to exhibit more than 4.1-dBi gain and 76% radiation efficiency.",
"title": ""
},
{
"docid": "e4000835f1870399c4270492fb81694b",
"text": "In this paper, a new design of mm-Wave phased array 5G antenna for multiple-input multiple-output (MIMO) applications has been introduced. Two identical linear phased arrays with eight leaf-shaped bow-tie antenna elements have been used at different sides of the mobile-phone PCB. An Arlon AR 350 dielectric with properties of h=0.5 mm, ε=3.5, and δ=0.0026 has been used as a substrate of the proposed design. The antenna is working in the frequency range of 25 to 40 GHz (more than 45% FBW) and can be easily fit into current handheld devices. The proposed MIMO antenna has good radiation performances at 28 and 38 GHz which both are powerful candidates to be the carrier frequency of the future 5G cellular networks.",
"title": ""
},
{
"docid": "d8247467dfe5c3bf21d3588b7af0ff71",
"text": "Self-improving software has been a goal of computer scientists since the founding of the field of Artificial Intelligence. In this work we analyze limits on computation which might restrict recursive self-improvement. We also introduce Convergence Theory which aims to predict general behavior of RSI systems.",
"title": ""
},
{
"docid": "31cbc31b3da263ec1ec5060343f16cac",
"text": "We used convolutional neural networks (CNNs) for automatic sleep stage scoring based on single-channel electroencephalography (EEG) to learn task-specific filters for classification without using prior domain knowledge. We used an openly available dataset from 20 healthy young adults for evaluation and applied 20-fold crossvalidation. We used class-balanced random sampling within the stochastic gradient descent (SGD) optimization of the CNN to avoid skewed performance in favor of the most represented sleep stages. We achieved high mean F1-score (81%, range 79–83%), mean accuracy across individual sleep stages (82%, range 80–84%) and overall accuracy (74%, range 71–76%) over all subjects. By analyzing and visualizing the filters that our CNN learns, we found that rules learned by the filters correspond to sleep scoring criteria in the American Academy of Sleep Medicine (AASM) manual that human experts follow. Our method’s performance is balanced across classes and our results are comparable to state-of-the-art methods with hand-engineered features. We show that, without using prior domain knowledge, a CNN can automatically learn to distinguish among different normal sleep stages.",
"title": ""
},
{
"docid": "1913c6ce69e543a3ae9a90b73c9efddd",
"text": "Cooperative Intelligent Transportation Systems, mainly represented by vehicular ad hoc networks (VANETs), are among the key components contributing to the Smart City and Smart World paradigms. Based on the continuous exchange of both periodic and event triggered messages, smart vehicles can enhance road safety, while also providing support for comfort applications. In addition to the different communication protocols, securing such communications and establishing a certain trustiness among vehicles are among the main challenges to address, since the presence of dishonest peers can lead to unwanted situations. To this end, existing security solutions are typically divided into two main categories, cryptography and trust, where trust appeared as a complement to cryptography on some specific adversary models and environments where the latter was not enough to mitigate all possible attacks. In this paper, we provide an adversary-oriented survey of the existing trust models for VANETs. We also show when trust is preferable to cryptography, and the opposite. In addition, we show how trust models are usually evaluated in VANET contexts, and finally, we point out some critical scenarios that existing trust models cannot handle, together with some possible solutions.",
"title": ""
},
{
"docid": "02469f669769f5c9e2a9dc49cee20862",
"text": "In this work we study the use of 3D hand poses to recognize first-person dynamic hand actions interacting with 3D objects. Towards this goal, we collected RGB-D video sequences comprised of more than 100K frames of 45 daily hand action categories, involving 26 different objects in several hand configurations. To obtain hand pose annotations, we used our own mo-cap system that automatically infers the 3D location of each of the 21 joints of a hand model via 6 magnetic sensors and inverse kinematics. Additionally, we recorded the 6D object poses and provide 3D object models for a subset of hand-object interaction sequences. To the best of our knowledge, this is the first benchmark that enables the study of first-person hand actions with the use of 3D hand poses. We present an extensive experimental evaluation of RGB-D and pose-based action recognition by 18 baselines/state-of-the-art approaches. The impact of using appearance features, poses, and their combinations are measured, and the different training/testing protocols are evaluated. Finally, we assess how ready the 3D hand pose estimation field is when hands are severely occluded by objects in egocentric views and its influence on action recognition. From the results, we see clear benefits of using hand pose as a cue for action recognition compared to other data modalities. Our dataset and experiments can be of interest to communities of 3D hand pose estimation, 6D object pose, and robotics as well as action recognition.",
"title": ""
},
{
"docid": "ca29fee64e9271e8fce675e970932af1",
"text": "This paper considers univariate online electricity demand forecasting for lead times from a half-hour-ahead to a day-ahead. A time series of demand recorded at half-hourly intervals contains more than one seasonal pattern. A within-day seasonal cycle is apparent from the similarity of the demand profile from one day to the next, and a within-week seasonal cycle is evident when one compares the demand on the corresponding day of adjacent weeks. There is strong appeal in using a forecasting method that is able to capture both seasonalities. The multiplicative seasonal ARIMA model has been adapted for this purpose. In this paper, we adapt the Holt-Winters exponential smoothing formulation so that it can accommodate two seasonalities. We correct for residual autocorrelation using a simple autoregressive model. The forecasts produced by the new double seasonal Holt-Winters method outperform those from traditional Holt-Winters and from a well-specified multiplicative double seasonal ARIMA model.",
"title": ""
},
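The record above extends Holt-Winters smoothing with two multiplicative seasonal cycles (within-day and within-week) for half-hourly electricity demand. The Python sketch below is a minimal, hedged rendition of such double seasonal recursions with periods 48 and 336; the smoothing parameters, the naive initialization of level, trend and seasonal indices, the synthetic demand series, and the omission of the residual autoregressive adjustment are all simplifications for illustration, not the formulation fitted in the paper.

import numpy as np

# Double seasonal exponential smoothing sketch: within-day (s1) and within-week (s2) cycles.
def dshw_forecast(y, s1=48, s2=336, alpha=0.1, gamma=0.01, delta=0.2, omega=0.2, horizon=48):
    level, trend = y[0], 0.0
    day = np.ones(s1)      # within-day seasonal indices (circular buffer indexed by t mod s1)
    week = np.ones(s2)     # within-week seasonal indices
    for t in range(1, len(y)):
        d, w = day[t % s1], week[t % s2]        # indices from one day / one week earlier
        prev_level = level
        level = alpha * y[t] / (d * w) + (1 - alpha) * (level + trend)
        trend = gamma * (level - prev_level) + (1 - gamma) * trend
        day[t % s1] = delta * y[t] / (level * w) + (1 - delta) * d
        week[t % s2] = omega * y[t] / (level * d) + (1 - omega) * w
    t = len(y)
    return [(level + k * trend) * day[(t + k - 1) % s1] * week[(t + k - 1) % s2]
            for k in range(1, horizon + 1)]

# Synthetic half-hourly demand with daily and weekly shape, eight days long.
steps = np.arange(48 * 8)
demand = 100 + 20 * np.sin(2 * np.pi * steps / 48) + 10 * np.sin(2 * np.pi * steps / 336) + np.random.randn(len(steps))
print(dshw_forecast(demand)[:4])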
{
"docid": "4a21e3015f4fb63f25fd214eaa68ed87",
"text": "We describe our submission to the Brain Tumor Segmentation Challenge (BraTS) at MICCAI 2013. This segmentation approach is based on similarities between multi-channel patches. After patches are extracted from several MR channels for a test case, similar patches are found in training images for which label maps are known. These labels maps are then combined to result in a segmentation map for the test case. The labelling is performed, in a leave-one-out scheme, for each case of a publicly available training set, which consists of 30 real cases (20 highgrade gliomas, 10 low-grade gliomas) and 50 synthetic cases (25 highgrade gliomas, 25 low-grade gliomas). Promising results are shown on the training set, and we believe this algorithm would perform favourably well in comparison to the state of the art on a testing set.",
"title": ""
},
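The record above labels each test patch by retrieving similar multi-channel patches from training images with known label maps and combining their labels. The Python sketch below shows that retrieve-and-fuse idea with brute-force nearest neighbours and a majority vote; the random arrays standing in for MR patches, the patch size, the three-class label set, and k = 5 are placeholders, not the paper's data or settings.

import numpy as np
from sklearn.neighbors import NearestNeighbors

# Toy stand-in for multi-channel patch matching: each row is a flattened patch.
rng = np.random.default_rng(0)
train_patches = rng.normal(size=(5000, 4 * 5 * 5))   # 4 MR channels, 5x5 patches (illustrative)
train_labels = rng.integers(0, 3, size=5000)          # e.g. 0 = background, 1 = edema, 2 = tumor core
test_patches = rng.normal(size=(100, 4 * 5 * 5))

nn = NearestNeighbors(n_neighbors=5).fit(train_patches)
_, idx = nn.kneighbors(test_patches)

# Fuse the labels of the k most similar training patches by majority vote.
fused = np.array([np.bincount(train_labels[row]).argmax() for row in idx])
print(fused[:10])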
{
"docid": "bb44f0851cd6bb09a074fb34a1d4976c",
"text": "The global emergence and spread of the pathogenic, virulent, and highly transmissible fungus Batrachochytrium dendrobatidis, resulting in the disease chytridiomycosis, has caused the decline or extinction of up to about 200 species of frogs. Key postulates for this theory have been completely or partially fulfilled. In the absence of supportive evidence for alternative theories despite decades of research, it is important for the scientific community and conservation agencies to recognize and manage the threat of chytridiomycosis to remaining species of frogs, especially those that are naive to the pathogen. The impact of chytridiomycosis on frogs is the most spectacular loss of vertebrate biodiversity due to disease in recorded history.",
"title": ""
},
{
"docid": "a41dfbce4138a8422bc7ddfac830e557",
"text": "This paper is the second part in a series that provides a comprehensive survey of the problems and techniques of tracking maneuvering targets in the absence of the so-called measurement-origin uncertainty. It surveys motion models of ballistic targets used for target tracking. Models for all three phases (i.e., boost, coast, and reentry) of motion are covered.",
"title": ""
}
] |
scidocsrr
|
d789563a8f22c7749d20801e317d040a
|
Augmented Variational Autoencoders for Collaborative Filtering with Auxiliary Information
|
[
{
"docid": "eae92d06d00d620791e6b247f8e63c36",
"text": "Tagging systems have become major infrastructures on the Web. They allow users to create tags that annotate and categorize content and share them with other users, very helpful in particular for searching multimedia content. However, as tagging is not constrained by a controlled vocabulary and annotation guidelines, tags tend to be noisy and sparse. Especially new resources annotated by only a few users have often rather idiosyncratic tags that do not reflect a common perspective useful for search. In this paper we introduce an approach based on Latent Dirichlet Allocation (LDA) for recommending tags of resources in order to improve search. Resources annotated by many users and thus equipped with a fairly stable and complete tag set are used to elicit latent topics to which new resources with only a few tags are mapped. Based on this, other tags belonging to a topic can be recommended for the new resource. Our evaluation shows that the approach achieves significantly better precision and recall than the use of association rules, suggested in previous work, and also recommends more specific tags. Moreover, extending resources with these recommended tags significantly improves search for new resources.",
"title": ""
},
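The record above maps sparsely tagged resources onto latent topics learned from well-tagged resources and recommends further tags from those topics. The Python sketch below mirrors that pipeline with scikit-learn's LatentDirichletAllocation over a toy tag corpus; the example tag documents, the two topics, and the way remaining tags are ranked from the dominant topic are illustrative assumptions rather than the authors' exact model (a recent scikit-learn is assumed for get_feature_names_out).

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy tag documents: each "document" is the tag set of a well-annotated resource.
tag_docs = [
    "python programming tutorial code",
    "python machinelearning data code",
    "travel photography landscape nature",
    "photography camera landscape nature",
]
vec = CountVectorizer()
X = vec.fit_transform(tag_docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# New resource with only a couple of tags; map it to the latent topics.
new_resource = vec.transform(["python code"])
topic_dist = lda.transform(new_resource)[0]

# Recommend the highest-weight tags of its dominant topic that it does not have yet.
tags = np.array(vec.get_feature_names_out())
top_topic = topic_dist.argmax()
ranked = tags[lda.components_[top_topic].argsort()[::-1]]
have = {"python", "code"}
print([t for t in ranked if t not in have][:3])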
{
"docid": "1a65a6e22d57bb9cd15ba01943eeaa25",
"text": "+ optimal local factor – expensive for general obs. + exploit conj. graph structure + arbitrary inference queries + natural gradients – suboptimal local factor + fast for general obs. – does all local inference – limited inference queries – no natural gradients ± optimal given conj. evidence + fast for general obs. + exploit conj. graph structure + arbitrary inference queries + some natural gradients",
"title": ""
}
] |
[
{
"docid": "8b4a7b4e74b0fbedbc44c1c4410af9f2",
"text": "In 2006 the US National Vital Statistics Report recorded 33,300 suicides in the United States, of which hanging, strangulation and suffocation combined to account for 7,491 (22.5%) of the cases. Self strangulation by ligature is uncommon and in the majority of cases, scarves, belts, neckties and rope are used. We report three instances where cable ties were secured around the neck in order to commit suicide. All had a history of depression. One was a 37-year-old man who used a belt to complete the act after an unsuccessful attempt to use cable ties. The second was a 63-year old woman who used multiple cable ties to accomplish her goal. In the third case a tensioning tool was used by a 54-year old man to tighten a cable tie around his neck during self strangulation. Utilization of a tool to tighten the cable ties has not previously been reported.",
"title": ""
},
{
"docid": "3a47a157127d32094a20a895d4c2d8e2",
"text": "In this paper we present an optimisation model for airport taxi scheduling. We introduce a mixed-integer programming formulation to represent the movement of aircraft on the surface of the airport. In the optimal schedule delays due to taxi conflicts are minimised. We discuss implementation issues for solving this optimisation problem. Numerical results with real data of Amsterdam Airport Schiphol demonstrate that the algorithms lead to significant improvements of the efficiency with reasonable computational effort.",
"title": ""
},
{
"docid": "8186333a9ca2af805fa5261783bfdb55",
"text": "M are very interested in word-of-mouth communication because they believe that a product’s success is related to the word of mouth that it generates. However, there are at least three significant challenges associated with measuring word of mouth. First, how does one gather the data? Because the information is exchanged in private conversations, direct observation traditionally has been difficult. Second, what aspect of these conversations should one measure? The third challenge comes from the fact that word of mouth is not exogenous. While the mapping from word of mouth to future sales is of great interest to the firm, we must also recognize that word of mouth is an outcome of past sales. Our primary objective is to address these challenges. As a context for our study, we have chosen new television (TV) shows during the 1999–2000 seasons. Our source of word-of-mouth conversations is Usenet, a collection of thousands of newsgroups with diverse topics. We find that online conversations may offer an easy and cost-effective opportunity to measure word of mouth. We show that a measure of the dispersion of conversations across communities has explanatory power in a dynamic model of TV ratings.",
"title": ""
},
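The record above reports that the dispersion of conversations across communities helps explain TV ratings, but the passage does not give the exact formula. One common way to quantify such dispersion, used here purely as a hedged illustration, is the normalized entropy of post counts across newsgroups; the newsgroup names and counts below are hypothetical.

import math

# Hypothetical counts of posts about one TV show across Usenet newsgroups.
posts_per_group = {"rec.arts.tv": 40, "alt.tv.x": 25, "alt.fan.y": 5}

def dispersion_entropy(counts):
    """Normalized entropy of conversation counts across communities (0 = concentrated, 1 = even)."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(len(counts)) if len(counts) > 1 else 0.0

print(round(dispersion_entropy(list(posts_per_group.values())), 3))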
{
"docid": "fc66ced7b3faad64621722ab30cd5cc9",
"text": "In this paper, we present a novel framework for urban automated driving based 1 on multi-modal sensors; LiDAR and Camera. Environment perception through 2 sensors fusion is key to successful deployment of automated driving systems, 3 especially in complex urban areas. Our hypothesis is that a well designed deep 4 neural network is able to end-to-end learn a driving policy that fuses LiDAR and 5 Camera sensory input, achieving the best out of both. In order to improve the 6 generalization and robustness of the learned policy, semantic segmentation on 7 camera is applied, in addition to applying our new LiDAR post processing method; 8 Polar Grid Mapping (PGM). The system is evaluated on the recently released urban 9 car simulator, CARLA. The evaluation is measured according to the generalization 10 performance from one environment to another. The experimental results show that 11 the best performance is achieved by fusing the PGM and semantic segmentation. 12",
"title": ""
},
{
"docid": "86feba94dcc3e89097af2e50e5b7e908",
"text": "Concerned about the Turing test’s ability to correctly evaluate if a system exhibits human-like intelligence, the Winograd Schema Challenge (WSC) has been proposed as an alternative. A Winograd Schema consists of a sentence and a question. The answers to the questions are intuitive for humans but are designed to be difficult for machines, as they require various forms of commonsense knowledge about the sentence. In this paper we demonstrate our progress towards addressing the WSC. We present an approach that identifies the knowledge needed to answer a challenge question, hunts down that knowledge from text repositories, and then reasons with them to come up with the answer. In the process we develop a semantic parser (www.kparser.org). We show that our approach works well with respect to a subset of Winograd schemas.",
"title": ""
},
{
"docid": "80912c6ff371cdc47ef92e793f2497a0",
"text": "Since the explosion of the Web as a business medium, one of its primary uses has been for marketing. Soon, the Web will become a critical distribution channel for the majority of successful enterprises. The mass media, consumer marketers and advertising agencies seem to be in the midst of Internet discovery and exploitation. Before a company can envision what might sell online in the coming years, it must ®rst understand the attitudes and behaviour of its potential customers. Hence, this study examines attitudes toward various aspects of online shopping and provides a better understanding of the potential of electronic commerce for both researchers and practitioners.",
"title": ""
},
{
"docid": "dfea0aadb35d2984040938c7b9b1d633",
"text": "While Agile methods were originally introduced for small, tightly coupled teams, leaner ways of working are becoming a practical method to run entire enterprises. As the emphasis of user experience work has inherently been on the early phases before starting the development, it also needs to be adapted to the Agile way of working. To improve the current practices in Agile user experience work, we determined the present state of a multi-continental software development organization that already had a functioning user experience team. In this paper, we describe the most prevalent issues regarding the interaction of user experience design and software development activities, and suggest improvements to fix those. Most of the observed problems were related to communication issues and to the service mode of the user experience team. The user experience team was operating between management and development organizations trying to adapt to the dissimilar practices of both the disciplines.",
"title": ""
},
{
"docid": "5b5d600ae3c62da4ba2679e132cf9219",
"text": "TCP/IP protocol gradually exposes many shortcomings such as poor scalability and mobility. Content-Centric Networking is a new architecture which cares about the content itself rather than its source. Therefore, this paper proposes a novel IoV architecture which based on Content-Centric Networking and tests its transmission interference time, transmission delay, and throughout in network layer. The experimental results show that the novel architecture is superior to the current IoV in the communication performance.",
"title": ""
},
{
"docid": "77ece03721c0bf08484e64b405523e04",
"text": "Video content providers put stringent requirements on the quality assessment methods realized on their services. They need to be accurate, real-time, adaptable to new content, and scalable as the video set grows. In this letter, we introduce a novel automated and computationally efficient video assessment method. It enables accurate real-time (online) analysis of delivered quality in an adaptable and scalable manner. Offline deep unsupervised learning processes are employed at the server side and inexpensive no-reference measurements at the client side. This provides both real-time assessment and performance comparable to the full reference counterpart, while maintaining its no-reference characteristics. We tested our approach on the LIMP Video Quality Database (an extensive packet loss impaired video set) obtaining a correlation between <inline-formula><tex-math notation=\"LaTeX\">$78\\%$</tex-math> </inline-formula> and <inline-formula><tex-math notation=\"LaTeX\">$91\\%$</tex-math></inline-formula> to the FR benchmark (the video quality metric). Due to its unsupervised learning essence, our method is flexible and dynamically adaptable to new content and scalable with the number of videos.",
"title": ""
},
{
"docid": "727a97b993098aa1386e5bfb11a99d4b",
"text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance.",
"title": ""
},
{
"docid": "cec6e899c23dd65881f84cca81205eb0",
"text": "A fuzzy graph (f-graph) is a pair G : ( σ, μ) where σ is a fuzzy subset of a set S and μ is a fuzzy relation on σ. A fuzzy graph H : ( τ, υ) is called a partial fuzzy subgraph of G : (σ, μ) if τ (u) ≤ σ(u) for every u and υ (u, v) ≤ μ(u, v) for every u and v . In particular we call a partial fuzzy subgraph H : ( τ, υ) a fuzzy subgraph of G : ( σ, μ ) if τ (u) = σ(u) for every u in τ * and υ (u, v) = μ(u, v) for every arc (u, v) in υ*. A connected f-graph G : ( σ, μ) is a fuzzy tree(f-tree) if it has a fuzzy spannin g subgraph F : (σ, υ), which is a tree, where for all arcs (x, y) not i n F there exists a path from x to y in F whose strength is more than μ(x, y). A path P of length n is a sequence of disti nct nodes u0, u1, ..., un such that μ(ui−1, ui) > 0, i = 1, 2, ..., n and the degree of membershi p of a weakest arc is defined as its strength. If u 0 = un and n≥ 3, then P is called a cycle and a cycle P is called a fuzzy cycle(f-cycle) if it cont ains more than one weakest arc . The strength of connectedness between two nodes x and y is efined as the maximum of the strengths of all paths between x and y and is denot ed by CONNG(x, y). An x − y path P is called a strongest x − y path if its strength equal s CONNG(x, y). An f-graph G : ( σ, μ) is connected if for every x,y in σ ,CONNG(x, y) > 0. In this paper, we offer a survey of selected recent results on fuzzy graphs.",
"title": ""
},
{
"docid": "c7e584bca061335c8cd085511f4abb3b",
"text": "The application of boosting technique to regression problems has received relatively little attention in contrast to research aimed at classification problems. This letter describes a new boosting algorithm, AdaBoost.RT, for regression problems. Its idea is in filtering out the examples with the relative estimation error that is higher than the preset threshold value, and then following the AdaBoost procedure. Thus, it requires selecting the suboptimal value of the error threshold to demarcate examples as poorly or well predicted. Some experimental results using the M5 model tree as a weak learning machine for several benchmark data sets are reported. The results are compared to other boosting methods, bagging, artificial neural networks, and a single M5 model tree. The preliminary empirical comparisons show higher performance of AdaBoost.RT for most of the considered data sets.",
"title": ""
},
{
"docid": "92c72aa180d3dccd5fcc5504832780e9",
"text": "The site of S1-S2 root activation following percutaneous high-voltage electrical (ES) and magnetic stimulation were located by analyzing the variations of the time interval from M to H soleus responses elicited by moving the stimulus point from lumbar to low thoracic levels. ES was effective in activating S1-S2 roots at their origin. However supramaximal motor root stimulation required a dorsoventral montage, the anode being a large, circular surface electrode placed ventrally, midline between the apex of the xiphoid process and the umbilicus. Responses to magnetic stimuli always resulted from the activation of a fraction of the fiber pool, sometimes limited to the low-thresholds afferent component, near its exit from the intervertebral foramina, or even more distally. Normal values for conduction velocity in motor and 1a afferent fibers in the proximal nerve tract are provided.",
"title": ""
},
{
"docid": "3564cf609cf1b9666eaff7edcd12a540",
"text": "Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.",
"title": ""
},
{
"docid": "ed3ed757804a423eef8b7394b64a971a",
"text": "This work is part of an eort aimed at developing computer-based systems for language instruction; we address the task of grading the pronunciation quality of the speech of a student of a foreign language. The automatic grading system uses SRI's Decipher continuous speech recognition system to generate phonetic segmentations. Based on these segmentations and probabilistic models we produce dierent pronunciation scores for individual or groups of sentences that can be used as predictors of the pronunciation quality. Dierent types of these machine scores can be combined to obtain a better prediction of the overall pronunciation quality. In this paper we review some of the bestperforming machine scores and discuss the application of several methods based on linear and nonlinear mapping and combination of individual machine scores to predict the pronunciation quality grade that a human expert would have given. We evaluate these methods in a database that consists of pronunciation-quality-graded speech from American students speaking French. With predictors based on spectral match and on durational characteristics, we ®nd that the combination of scores improved the prediction of the human grades and that nonlinear mapping and combination methods performed better than linear ones. Characteristics of the dierent nonlinear methods studied are discussed. Ó 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "7794902fc9408b431b01f9822328053e",
"text": "Echo state networks (ESNs) constitute a novel approach to recurrent neural network (RNN) training, with an RNN (the reservoir) being generated randomly, and only a readout being trained using a simple computationally efficient algorithm. ESNs have greatly facilitated the practical application of RNNs, outperforming classical approaches on a number of benchmark tasks. In this paper, we introduce a novel Bayesian approach toward ESNs, the echo state Gaussian process (ESGP). The ESGP combines the merits of ESNs and Gaussian processes to provide a more robust alternative to conventional reservoir computing networks while also offering a measure of confidence on the generated predictions (in the form of a predictive distribution). We exhibit the merits of our approach in a number of applications, considering both benchmark datasets and real-world applications, where we show that our method offers a significant enhancement in the dynamical data modeling capabilities of ESNs. Additionally, we also show that our method is orders of magnitude more computationally efficient compared to existing Gaussian process-based methods for dynamical data modeling, without compromises in the obtained predictive performance.",
"title": ""
},
{
"docid": "ac4edd65e7d81beb66b2f9d765b4ad30",
"text": "This paper is concerned with actively predicting search intent from user browsing behavior data. In recent years, great attention has been paid to predicting user search intent. However, the prediction was mostly passive because it was performed only after users submitted their queries to search engines. It is not considered why users issued these queries, and what triggered their information needs. According to our study, many information needs of users were actually triggered by what they have browsed. That is, after reading a page, if a user found something interesting or unclear, he/she might have the intent to obtain further information and accordingly formulate a search query. Actively predicting such search intent can benefit both search engines and their users. In this paper, we propose a series of technologies to fulfill this task. First, we extract all the queries that users issued after reading a given page from user browsing behavior data. Second, we learn a model to effectively rank these queries according to their likelihoods of being triggered by the page. Third, since search intents can be quite diverse even if triggered by the same page, we propose an optimization algorithm to diversify the ranked list of queries obtained in the second step, and then suggest the list to users. We have tested our approach on large-scale user browsing behavior data obtained from a commercial search engine. The experimental results have shown that our approach can predict meaningful queries for a given page, and the search performance for these queries can be significantly improved by using the triggering page as contextual information.",
"title": ""
},
{
"docid": "6e653e8c6b0074d065b02af81ddcc627",
"text": "The existing research on lone wolf terrorists and case experience are reviewed and interpreted through the lens of psychoanalytic theory. A number of characteristics of the lone wolf are enumerated: a personal grievance and moral outrage; the framing of an ideology; failure to affiliate with an extremist group; dependence on a virtual community found on the Internet; the thwarting of occupational goals; radicalization fueled by changes in thinking and emotion - including cognitive rigidity, clandestine excitement, contempt, and disgust - regardless of the particular ideology; the failure of sexual pair bonding and the sexualization of violence; the nexus of psychopathology and ideology; greater creativity and innovation than terrorist groups; and predatory violence sanctioned by moral (superego) authority. A concluding psychoanalytic formulation is offered.",
"title": ""
},
{
"docid": "56efa93dba9296c0c20fc4edd1e31504",
"text": "This paper introduces an analytical framework for evaluating the vulnerability of people and places to environmental and social forces. The framework represents the relative vulnerability of a variable of concern (e.g. such as agricultural yield) to a set of disturbing forces (e.g. climate change, market fluctuations) by a position on a three-dimensional analytical surface, where vulnerability is defined as a function of sensitivity, exposure, and the state relative to a threshold of damage. The surface is presented as a tool to help identify relative vulnerability in order to prioritize actions and assess the vulnerability implications of management and policy decisions. r 2005 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
5b9ef8869ec99a3bd72717e816355793
|
Voice Conversion Based on Maximum-Likelihood Estimation of Spectral Parameter Trajectory
|
[
{
"docid": "69e0179971396fcaf09c9507735a8d5b",
"text": "In this paper, we describe a statistical approach to both an articulatory-to-acoustic mapping and an acoustic-to-articulatory inversion mapping without using phonetic information. The joint probability density of an articulatory parameter and an acoustic parameter is modeled using a Gaussian mixture model (GMM) based on a parallel acoustic-articulatory speech database. We apply the GMM-based mapping using the minimum mean-square error (MMSE) criterion, which has been proposed for voice conversion, to the two mappings. Moreover, to improve the mapping performance, we apply maximum likelihood estimation (MLE) to the GMM-based mapping method. The determination of a target parameter trajectory having appropriate static and dynamic properties is obtained by imposing an explicit relationship between static and dynamic features in the MLE-based mapping. Experimental results demonstrate that the MLE-based mapping with dynamic features can significantly improve the mapping performance compared with the MMSE-based mapping in both the articulatory-to-acoustic mapping and the inversion mapping.",
"title": ""
}
] |
[
{
"docid": "00527294606231986ba34d68e847e01a",
"text": "In this paper, we describe a new scheme to learn dynamic user's interests in an automated information filtering and gathering system running on the Internet. Our scheme is aimed to handle multiple domains of long-term and short-term user's interests simultaneously, which is learned through positive and negative user's relevance feedback. We developed a 3-descriptor approach to represent the user's interest categories. Using a learning algorithm derived for this representation, our scheme adapts quickly to significant changes in user interest, and is also able to learn exceptions to interest categories.",
"title": ""
},
{
"docid": "932934a4362bd671427954d0afb61459",
"text": "On the basis of the similarity between spinel and rocksalt structures, it is shown that some spinel oxides (e.g., MgCo2O4, etc) can be cathode materials for Mg rechargeable batteries around 150 °C. The Mg insertion into spinel lattices occurs via \"intercalation and push-out\" process to form a rocksalt phase in the spinel mother phase. For example, by utilizing the valence change from Co(III) to Co(II) in MgCo2O4, Mg insertion occurs at a considerably high potential of about 2.9 V vs. Mg2+/Mg, and similarly it occurs around 2.3 V vs. Mg2+/Mg with the valence change from Mn(III) to Mn(II) in MgMn2O4, being comparable to the ab initio calculation. The feasibility of Mg insertion would depend on the phase stability of the counterpart rocksalt XO of MgO in Mg2X2O4 or MgX3O4 (X = Co, Fe, Mn, and Cr). In addition, the normal spinel MgMn2O4 and MgCr2O4 can be demagnesiated to some extent owing to the robust host structure of Mg1-xX2O4, where the Mg extraction/insertion potentials for MgMn2O4 and MgCr2O4 are both about 3.4 V vs. Mg2+/Mg. Especially, the former \"intercalation and push-out\" process would provide a safe and stable design of cathode materials for polyvalent cations.",
"title": ""
},
{
"docid": "cca664cf201c79508a266a34646dba01",
"text": "Scholars have argued that online social networks and personalized web search increase ideological segregation. We investigate the impact of these potentially polarizing channels on news consumption by examining web browsing histories for 50,000 U.S.-located users who regularly read online news. We find that individuals indeed exhibit substantially higher segregation when reading articles shared on social networks or returned by search engines, a pattern driven by opinion pieces. However, these polarizing articles from social media and web search constitute only 2% of news consumption. Consequently, while recent technological changes do increase ideological segregation, the magnitude of the effect is limited. JEL: D83, L86, L82",
"title": ""
},
{
"docid": "d1cde8ce9934723224ecf21c3cab6615",
"text": "Deep Neural Networks (DNNs) denote multilayer artificial neural networks with more than one hidden layer and millions of free parameters. We propose a Generalized Discriminant Analysis (GerDA) based on DNNs to learn discriminative features of low dimension optimized with respect to a fast classification from a large set of acoustic features for emotion recognition. On nine frequently used emotional speech corpora, we compare the performance of GerDA features and their subsequent linear classification with previously reported benchmarks obtained using the same set of acoustic features classified by Support Vector Machines (SVMs). Our results impressively show that low-dimensional GerDA features capture hidden information from the acoustic features leading to a significantly raised unweighted average recall and considerably raised weighted average recall.",
"title": ""
},
{
"docid": "6057638a2a1cfd07ab2e691baf93a468",
"text": "Cybersecurity in smart grids is of critical importance given the heavy reliance of modern societies on electricity and the recent cyberattacks that resulted in blackouts. The evolution of the legacy electric grid to a smarter grid holds great promises but also comes up with an increasesd attack surface. In this article, we review state of the art developments in cybersecurity for smart grids, both from a standardization as well technical perspective. This work shows the important areas of future research for academia, and collaboration with government and industry stakeholders to enhance smart grid cybersecurity and make this new paradigm not only beneficial and valuable but also safe and secure.",
"title": ""
},
{
"docid": "d6f322f4dd7daa9525f778ead18c8b5e",
"text": "Face perception, perhaps the most highly developed visual skill in humans, is mediated by a distributed neural system in humans that is comprised of multiple, bilateral regions. We propose a model for the organization of this system that emphasizes a distinction between the representation of invariant and changeable aspects of faces. The representation of invariant aspects of faces underlies the recognition of individuals, whereas the representation of changeable aspects of faces, such as eye gaze, expression, and lip movement, underlies the perception of information that facilitates social communication. The model is also hierarchical insofar as it is divided into a core system and an extended system. The core system is comprised of occipitotemporal regions in extrastriate visual cortex that mediate the visual analysis of faces. In the core system, the representation of invariant aspects is mediated more by the face-responsive region in the fusiform gyrus, whereas the representation of changeable aspects is mediated more by the face-responsive region in the superior temporal sulcus. The extended system is comprised of regions from neural systems for other cognitive functions that can be recruited to act in concert with the regions in the core system to extract meaning from faces.",
"title": ""
},
{
"docid": "5d63815adaad5d2c1b80ddd125157842",
"text": "We consider the problem of building scalable semantic parsers for Freebase, and present a new approach for learning to do partial analyses that ground as much of the input text as possible without requiring that all content words be mapped to Freebase concepts. We study this problem on two newly introduced large-scale noun phrase datasets, and present a new semantic parsing model and semi-supervised learning approach for reasoning with partial ontological support. Experiments demonstrate strong performance on two tasks: referring expression resolution and entity attribute extraction. In both cases, the partial analyses allow us to improve precision over strong baselines, while parsing many phrases that would be ignored by existing techniques.",
"title": ""
},
{
"docid": "ea2af110b27015b83659182948a32b36",
"text": "BACKGROUND\nDescent of the lateral aspect of the brow is one of the earliest signs of aging. The purpose of this study was to describe an open surgical technique for lateral brow lifts, with the goal of achieving reliable, predictable, and long-lasting results.\n\n\nMETHODS\nAn incision was made behind and parallel to the temporal hairline, and then extended deeper through the temporoparietal fascia to the level of the deep temporal fascia. Dissection was continued anteriorly on the surface of the deep temporal fascia and subperiosteally beyond the temporal crest, to the level of the superolateral orbital rim. Fixation of the lateral brow and tightening of the orbicularis oculi muscle was achieved with the placement of sutures that secured the tissue directly to the galea aponeurotica on the lateral aspect of the incision. An additional fixation was made between the temporoparietal fascia and the deep temporal fascia, as well as between the temporoparietal fascia and the galea aponeurotica. The excess skin in the temporal area was excised and the incision was closed.\n\n\nRESULTS\nA total of 519 patients were included in the study. Satisfactory lateral brow elevation was obtained in most of the patients (94.41%). The following complications were observed: total relapse (n=8), partial relapse (n=21), neurapraxia of the frontal branch of the facial nerve (n=5), and limited alopecia in the temporal incision (n=9).\n\n\nCONCLUSIONS\nWe consider this approach to be a safe and effective procedure, with long-lasting results.",
"title": ""
},
{
"docid": "0dcfd4e14b9a86a6ca96df31d39cadfb",
"text": "Tumor angiogenesis is recognized as a major therapeutic target in the fight against cancer. The key involvement of angiogenesis in tumor growth and metastasis has started to redefine chemotherapy and new protocols have emerged. Metronomic chemotherapy, which is intended to prevent tumor angiogenesis, is based on more frequent and low-dose drug administrations compared with conventional chemotherapy. The potential of metronomic chemotherapy was revealed in animal models a decade ago and the efficacy of this approach has been confirmed in the clinic. In the past 5 years, multiple clinical trials have investigated the safety and efficacy of metronomic chemotherapy in a variety of human cancers. While the results have been variable, clinical studies have shown that these new treatment protocols represent an interesting alternative for either primary systemic therapy or maintenance therapy. We review the latest clinical trials of metronomic chemotherapy in adult and pediatric cancer patients. Accumulating evidence suggests that the efficacy of such treatment may not only rely on anti-angiogenic activity. Potential new mechanisms of action, such as restoration of anticancer immune response and induction of tumor dormancy are discussed. Finally, we highlight the research efforts that need to be made to facilitate the optimal development of metronomic chemotherapy.",
"title": ""
},
{
"docid": "5f39990b87532cd3189c7d4adb2cd144",
"text": "The abundance of data in the context of smart cities yields huge potential for data-driven businesses but raises unprecedented challenges on data privacy and security. Some of these challenges can be addressed merely through appropriate technical measures, while other issues can only be solved through strategic organizational decisions. In this paper, we present few cases from a real smart city project. We outline some exemplary data analytics scenarios and describe the measures that we adopt for a secure handling of data. Finally, we show how the chosen solutions impact the awareness of the public and acceptability of the project.",
"title": ""
},
{
"docid": "809b40cd0089410592d7b7f77f04c8e4",
"text": "This paper presents a new method for segmentation and interpretation of 3D point clouds from mobile LIDAR data. The main contribution of this work is the automatic detection and classification of artifacts located at the ground level. The detection is based on Top-Hat of hole filling algorithm of range images. Then, several features are extracted from the detected connected components (CCs). Afterward, a stepwise forward variable selection by using Wilk's Lambda criterion is performed. Finally, CCs are classified in four categories (lampposts, pedestrians, cars, the others) by using a SVM machine learning method.",
"title": ""
},
{
"docid": "5ced8b93ad1fb80bb0c5324d34af9269",
"text": "This paper introduces a novel methodology for training an event-driven classifier within a Spiking Neural Network (SNN) System capable of yielding good classification results when using both synthetic input data and real data captured from Dynamic Vision Sensor (DVS) chips. The proposed supervised method uses the spiking activity provided by an arbitrary topology of prior SNN layers to build histograms and train the classifier in the frame domain using the stochastic gradient descent algorithm. In addition, this approach can cope with leaky integrate-and-fire neuron models within the SNN, a desirable feature for real-world SNN applications, where neural activation must fade away after some time in the absence of inputs. Consequently, this way of building histograms captures the dynamics of spikes immediately before the classifier. We tested our method on the MNIST data set using different synthetic encodings and real DVS sensory data sets such as N-MNIST, MNIST-DVS, and Poker-DVS using the same network topology and feature maps. We demonstrate the effectiveness of our approach by achieving the highest classification accuracy reported on the N-MNIST (97.77%) and Poker-DVS (100%) real DVS data sets to date with a spiking convolutional network. Moreover, by using the proposed method we were able to retrain the output layer of a previously reported spiking neural network and increase its performance by 2%, suggesting that the proposed classifier can be used as the output layer in works where features are extracted using unsupervised spike-based learning methods. In addition, we also analyze SNN performance figures such as total event activity and network latencies, which are relevant for eventual hardware implementations. In summary, the paper aggregates unsupervised-trained SNNs with a supervised-trained SNN classifier, combining and applying them to heterogeneous sets of benchmarks, both synthetic and from real DVS chips.",
"title": ""
},
{
"docid": "8767019d309eb2cbb1f7f20d67894fce",
"text": "The problem of scheduling a parallel program represented by a weighted directed acyclic graph (DAG) to a set of homogeneous processors for minimizing the completion time of the program has been extensively studied. The NP-completeness of the problem has stimulated researchers to propose a myriad of heuristic algorithms. While most of these algorithms are reported to be efficient, it is not clear how they compare against each other. A meaningful performance evaluation and comparison of these algorithms is a complex task and it must take into account a number of issues. First, most scheduling algorithms are based upon diverse assumptions, making the performance comparison rather meaningless. Second, there does not exist a standard set of benchmarks to examine these algorithms. Third, most algorithms are evaluated using small problem sizes, and, therefore, their scalability is unknown. In this paper, we first provide a taxonomy for classifying various algorithms into distinct categories according to their assumptions and functionalities. We then propose a set of benchmarks that are based on diverse structures and are not biased toward a particular scheduling technique. We have implemented 15 scheduling algorithms and compared them on a common platform by using the proposed benchmarks, as well as by varying important problem parameters. We interpret the results based upon the design philosophies and principles behind these algorithms, drawing inferences why some algorithms perform better than others. We Article ID jpdc.1999.1578, available online at http: www.idealibrary.com on",
"title": ""
},
{
"docid": "5ae44d8b815bfaf6d6e20b38ba735d72",
"text": "Big data are coming to the study of bipolar disorder and all of psychiatry. Data are coming from providers and payers (including EMR, imaging, insurance claims and pharmacy data), from omics (genomic, proteomic, and metabolomic data), and from patients and non-providers (data from smart phone and Internet activities, sensors and monitoring tools). Analysis of the big data will provide unprecedented opportunities for exploration, descriptive observation, hypothesis generation, and prediction, and the results of big data studies will be incorporated into clinical practice. Technical challenges remain in the quality, analysis and management of big data. This paper discusses some of the fundamental opportunities and challenges of big data for psychiatry.",
"title": ""
},
{
"docid": "1b030e734e3ddfb5e612b1adc651b812",
"text": "Clustering1is an essential task in many areas such as machine learning, data mining and computer vision among others. Cluster validation aims to assess the quality of partitions obtained by clustering algorithms. Several indexes have been developed for cluster validation purpose. They can be external or internal depending on the availability of ground truth clustering. This paper deals with the issue of cluster validation of large data set. Indeed, in the era of big data this task becomes even more difficult to handle and requires parallel and distributed approaches. In this work, we are interested in external validation indexes. More specifically, this paper proposes a model for purity based cluster validation in parallel and distributed manner using Map-Reduce paradigm in order to be able to scale with increasing dataset sizes.\n The experimental results show that our proposed model is valid and achieves properly cluster validation of large datasets.",
"title": ""
},
{
"docid": "2fea6378ac23711ffa492a4b9c7dac06",
"text": "This paper proposes an acceleration-based robust controller for the motion control problem, i.e., position and force control problems, of a novel series elastic actuator (SEA). A variable stiffness SEA is designed by using soft and hard springs in series so as to relax the fundamental performance limitation of conventional SEAs. Although the proposed SEA intrinsically has several superiorities in force control, its motion control problem, especially position control problem, is harder than conventional stiff and SEAs due to its special mechanical structure. It is shown that the performance of the novel SEA is limited when conventional motion control methods are used. The performance of the steady-state response is significantly improved by using disturbance observer (DOb), i.e., improving the robustness; however, it degrades the transient response by increasing the vibration at tip point. The vibration of the novel SEA and external disturbances are suppressed by using resonance ratio control (RRC) and arm DOb, respectively. The proposed method can be used in the motion control problem of conventional SEAs as well. The intrinsically safe mechanical structure and high-performance motion control system provide several benefits in industrial applications, e.g., robots can perform dexterous and versatile industrial tasks alongside people in a factory setting. The experimental results show viability of the proposals.",
"title": ""
},
{
"docid": "70a970138428aeb06c139abb893a56a9",
"text": "Two sequentially rotated, four stage, wideband circularly polarized high gain microstrip patch array antennas at Ku-band are investigated and compared by incorporating both unequal and equal power division based feeding networks. Four stages of sequential rotation is used to create 16×16 patch array which provides wider common bandwidth between the impedance matching (S11 < −10dB), 3dB axial ratio and 3dB gain of 12.3% for the equal power divider based feed array and 13.2% for the unequal power divider based feed array in addition to high polarization purity. The high peak gain of 28.5dBic is obtained for the unequal power division feed based array antennas compared to 26.8dBic peak gain in the case of the equal power division based feed array antennas. The additional comparison between two feed networks based arrays reveals that the unequal power divider based array antennas provide better array characteristics than the equal power divider based feed array antennas.",
"title": ""
},
{
"docid": "304bf3c44e2946025370283e5c71ffbe",
"text": "Van Gog and Sweller (2015) claim that there is no testing effect—no benefit of practicing retrieval—for complex materials. We show that this claim is incorrect on several grounds. First, Van Gog and Sweller’s idea of “element interactivity” is not defined in a quantitative, measurable way. As a consequence, the idea is applied inconsistently in their literature review. Second, none of the experiments on retrieval practice with worked-example materials manipulated element interactivity. Third, Van Gog and Sweller’s literature review omitted several studies that have shown retrieval practice effects with complex materials, including studies that directly manipulated the complexity of the materials. Fourth, the experiments that did not show retrieval practice effects, which were emphasized by Van Gog and Sweller, either involved retrieval of isolated words in individual sentences or required immediate, massed retrieval practice. The experiments failed to observe retrieval practice effects because of the retrieval tasks, not because of the complexity of the materials. Finally, even though the worked-example experiments emphasized by Van Gog and Sweller have methodological problems, they do not show strong evidence favoring the null. Instead, the data provide evidence that there is indeed a small positive effect of retrieval practice with worked examples. Retrieval practice remains an effective way to improve meaningful learning of complex materials.",
"title": ""
},
{
"docid": "93177b2546e8efa1eccad4c81468f9fe",
"text": "Online Transaction Processing (OLTP) databases include a suite of features - disk-resident B-trees and heap files, locking-based concurrency control, support for multi-threading - that were optimized for computer technology of the late 1970's. Advances in modern processors, memories, and networks mean that today's computers are vastly different from those of 30 years ago, such that many OLTP databases will now fit in main memory, and most OLTP transactions can be processed in milliseconds or less. Yet database architecture has changed little.\n Based on this observation, we look at some interesting variants of conventional database systems that one might build that exploit recent hardware trends, and speculate on their performance through a detailed instruction-level breakdown of the major components involved in a transaction processing database system (Shore) running a subset of TPC-C. Rather than simply profiling Shore, we progressively modified it so that after every feature removal or optimization, we had a (faster) working system that fully ran our workload. Overall, we identify overheads and optimizations that explain a total difference of about a factor of 20x in raw performance. We also show that there is no single \"high pole in the tent\" in modern (memory resident) database systems, but that substantial time is spent in logging, latching, locking, B-tree, and buffer management operations.",
"title": ""
},
{
"docid": "a737a19e46e8af284ba0445b0d35c1ab",
"text": "A Web page has huge information and the information in the Web pages is useful in real world applications. The additional contents in the Web page like links, footers, headers and advertisements may cause the content extraction to be complicated. Irrelevant content in the Web page is treated as noisy content. A method is necessary to extract the informative content and discard the noisy content from Web pages. An integration of textual and visual importance is used to extract the informative content from Web pages. Initially a Web page is converted in to DOM (Document Object Model) tree. For each node in the DOM tree, textual and visual importance is calculated. Textual importance and visual importance is combined to form hybrid density. Density sum is calculated and used in content extraction algorithm to extract the informative content from Web pages. Performance of Web content extraction is obtained by calculating precision, recall, f-measure and accuracy.",
"title": ""
}
] |
scidocsrr
|
4f0d769ca2da804bc41fcccd4669cedb
|
AUTOMATIC WORD SENSE DISAMBIGUATION ( WSD ) SYSTEM
|
[
{
"docid": "f6ca9d3f880176bd692c0f5f5ca262e2",
"text": "This paper describes an experimental comparison of seven different learning algorithms on the problem of learning to disambiguate the meaning of a word from context. The algorithms tested include statistical, neural-network, decision-tree, rule-based, and case-based classification techniques. The specific problem tested involves disambiguating six senses of the word \"line\" using the words in the current and proceeding sentence as context. The statistical and neural-network methods perform the best on this particular problem and we discuss a potential reason for this observed difference. We also discuss the role of bias in machine learning and its importance in explaining performance differences observed on specific problems. I n t r o d u c t i o n Recent research in empirical (corpus-based) natural language processing has explored a number of different methods for learning from data. Three general approaches are statistical, neural-network, and symbolic machine learning and numerous specific methods have been developed under each of these paradigms (Wermter, Riloff, & Scheler, 1996; Charniak, 1993; Reilly & Sharkey, 1992). An important question is whether some methods perform significantly better than others on particular types of problems. Unfortunately, there have been very few direct comparisons of alternative methods on identical test data. A somewhat indirect comparison of applying stochastic context-free grammars (Periera & Shabes, 1992), a transformation-based method (Brill, 1993), and inductive logic programming (Zelle & Mooney, 1994) to parsing the ATIS (Airline Travel Information Service) corpus from the Penn Treebank (Marcus, Santorini, & Marcinkiewicz, 1993) indicates fairly similar performance for these three very different methods. Also, comparisons of Bayesian, informationretrieval, neural-network, and case-based methods on word-sense disambiguation have also demonstrated similar performance (Leacock, Towell, & Voorhees, 1993b; Lehman, 1994). However, in a comparison of neural-network and decision-tree methods on learning to generate the past tense of an English verb, decision trees performed significantly better (Ling & Marinov, 1993; Ling, 1994). Subsequent experiments on this problem have demonstrated that an inductive logic programming method produces even better results than decision trees (Mooney & Califf, 1995). In this paper, we present direct comparisons of a fairly wide range of general learning algorithms on the problem of discriminating six senses of the word \"line\" from context, using data assembled by Leacock et al. (1993b). We compare a naive Bayesian classifier (Duda & Hart, 1973), a perceptron (Rosenblatt, 1962), a decision-tree learner (Quinlan, 1993), a k nearest-neighbor classifier (Cover & Hart, 1967), logic-based DNF (disjunctive normal form) and CNF (conjunctive normal form) learners (Mooney, 1995) and a decisionlist learner (Rivest, 1987). Tests on all methods used identical training and test sets, and ten separate random trials were run in order to measure average performance and allow statistical testing of the significance of any observed differences. On this particular task, we found that the Bayesian and perceptron methods perform significantly better than the remaining methods and discuss a potential reason for this observed difference. We also discuss the role of bias in machine learning and its importance in explaining the observed differences in the performance of alternative methods on specific problems. 
Background on Machine Learning",
"title": ""
}
] |
[
{
"docid": "6dc4e4949d4f37f884a23ac397624922",
"text": "Research indicates that maladaptive patterns of Internet use constitute behavioral addiction. This article explores the research on the social effects of Internet addiction. There are four major sections. The Introduction section overviews the field and introduces definitions, terminology, and assessments. The second section reviews research findings and focuses on several key factors related to Internet addiction, including Internet use and time, identifiable problems, gender differences, psychosocial variables, and computer attitudes. The third section considers the addictive potential of the Internet in terms of the Internet, its users, and the interaction of the two. The fourth section addresses current and projected treatments of Internet addiction, suggests future research agendas, and provides implications for educational psychologists.",
"title": ""
},
{
"docid": "783c347d3d4f5a191508f005b362164b",
"text": "Workspace awareness is knowledge about others’ interaction with a shared workspace. Groupware systems provide only limited information about other participants, often compromising workspace awareness. This paper describes a usability study of several widgets designed to help maintain awareness in a groupware workspace. These widgets include a miniature view, a radar view, a multiuser scrollbar, a glance function, and a “what you see is what I do” view. The study examined the widgets’ information content, how easily people could interpret them, and whether they were useful or distracting. Observations, questionnaires, and interviews indicate that the miniature and radar displays are useful and valuable for tasks involving spatial manipulation of artifacts.",
"title": ""
},
{
"docid": "5bd483e895de779f8b91ca8537950a2f",
"text": "To evaluate the efficacy of pregabalin in facilitating taper off chronic benzodiazepines, outpatients (N = 106) with a lifetime diagnosis of generalized anxiety disorder (current diagnosis could be subthreshold) who had been treated with a benzodiazepine for 8-52 weeks were stabilized for 2-4 weeks on alprazolam in the range of 1-4 mg/day. Patients were then randomized to 12 weeks of double-blind treatment with either pregabalin 300-600 mg/day or placebo while undergoing a gradual benzodiazepine taper at a rate of 25% per week, followed by a 6-week benzodiazepine-free phase during which they continued double-blind study treatment. Outcome measures included ability to remain benzodiazepine-free (primary) as well as changes in Hamilton Anxiety Rating Scale (HAM)-A and Physician Withdrawal Checklist (PWC). At endpoint, a non-significant higher proportion of patients remained benzodiazepine-free receiving pregabalin compared with placebo (51.4% vs 37.0%). Treatment with pregabalin was associated with significantly greater endpoint reduction in the HAM-A total score versus placebo (-2.5 vs +1.3; p < 0.001), and lower endpoint mean PWC scores (6.5 vs 10.3; p = 0.012). Thirty patients (53%) in the pregabalin group and 19 patients (37%) in the placebo group completed the study, reducing the power to detect a significant difference on the primary outcome. The results on the anxiety and withdrawal severity measures suggest that switching to pregabalin may be a safe and effective method for discontinuing long-term benzodiazepine therapy.",
"title": ""
},
{
"docid": "bf37ea1cfab3b13ffd1bead9d9ead0e7",
"text": "We present a new tool for training neural network language mo dels (NNLMs), scoring sentences, and generating text. The to ol has been written using Python library Theano, which allows r esearcher to easily extend it and tune any aspect of the traini ng process. Regardless of the flexibility, Theano is able to gen erate extremely fast native code that can utilize a GPU or multi ple CPU cores in order to parallelize the heavy numerical com putations. The tool has been evaluated in difficult Finnish a nd English conversational speech recognition tasks, and sign ifica t improvement was obtained over our best back-off n-gram models. The results that we obtained in the Finnish task were com pared to those from existing RNNLM and RWTHLM toolkits, and found to be as good or better, while training times were an order of magnitude shorter.",
"title": ""
},
{
"docid": "a62a23df11fd72522a3d9726b60d4497",
"text": "In this paper, a simple single-phase grid-connected photovoltaic (PV) inverter topology consisting of a boost section, a low-voltage single-phase inverter with an inductive filter, and a step-up transformer interfacing the grid is considered. Ideally, this topology will not inject any lower order harmonics into the grid due to high-frequency pulse width modulation operation. However, the nonideal factors in the system such as core saturation-induced distorted magnetizing current of the transformer and the dead time of the inverter, etc., contribute to a significant amount of lower order harmonics in the grid current. A novel design of inverter current control that mitigates lower order harmonics is presented in this paper. An adaptive harmonic compensation technique and its design are proposed for the lower order harmonic compensation. In addition, a proportional-resonant-integral (PRI) controller and its design are also proposed. This controller eliminates the dc component in the control system, which introduces even harmonics in the grid current in the topology considered. The dynamics of the system due to the interaction between the PRI controller and the adaptive compensation scheme is also analyzed. The complete design has been validated with experimental results and good agreement with theoretical analysis of the overall system is observed.",
"title": ""
},
{
"docid": "4fe6cd07c801b7a15e7874eb0b359e25",
"text": "In this paper, we evaluate how the performance of a wearable context recognition system is affected by the sampling frequency and the resolution of the sensor signals used for the classification. We introduce our method for this evaluation and present the results for a widely studied activity recognition task: the classification of human modes of locomotion using body-worn acceleration sensors. With this example we show that both the sampling frequency and the resolution can be significantly reduced without much impact on the recognition performance. While many of the published approaches in this domain rely on higher sampling frequencies and signal resolutions, we show that good recognition performance can already be achieved with 20 Hz and 2 bit resolution.",
"title": ""
},
{
"docid": "254c3fd35436b95a2ec56693042fc1da",
"text": "Car detection and identification is an important task in the area of traffic control and management. Typically, to tackle this task, large datasets and domain-specific features are used to best fit the data. In our project, we implement, train, and test several state-of-the-art classifiers trained on domain-general datasets for the task of identifying the make and models of cars from various angles and different settings, with the added constraint of limited data and time. We experiment with different levels of transfer learning for fitting these models over to our domain. We report and compare these results to that of baseline models, and discuss the advantages of this approach.",
"title": ""
},
{
"docid": "cd545436dc62cc32f960a09442242eb2",
"text": "BACKGROUND\nSocial networking services (SNSs) contain abundant information about the feelings, thoughts, interests, and patterns of behavior of adolescents that can be obtained by analyzing SNS postings. An ontology that expresses the shared concepts and their relationships in a specific field could be used as a semantic framework for social media data analytics.\n\n\nOBJECTIVE\nThe aim of this study was to refine an adolescent depression ontology and terminology as a framework for analyzing social media data and to evaluate description logics between classes and the applicability of this ontology to sentiment analysis.\n\n\nMETHODS\nThe domain and scope of the ontology were defined using competency questions. The concepts constituting the ontology and terminology were collected from clinical practice guidelines, the literature, and social media postings on adolescent depression. Class concepts, their hierarchy, and the relationships among class concepts were defined. An internal structure of the ontology was designed using the entity-attribute-value (EAV) triplet data model, and superclasses of the ontology were aligned with the upper ontology. Description logics between classes were evaluated by mapping concepts extracted from the answers to frequently asked questions (FAQs) onto the ontology concepts derived from description logic queries. The applicability of the ontology was validated by examining the representability of 1358 sentiment phrases using the ontology EAV model and conducting sentiment analyses of social media data using ontology class concepts.\n\n\nRESULTS\nWe developed an adolescent depression ontology that comprised 443 classes and 60 relationships among the classes; the terminology comprised 1682 synonyms of the 443 classes. In the description logics test, no error in relationships between classes was found, and about 89% (55/62) of the concepts cited in the answers to FAQs mapped onto the ontology class. Regarding applicability, the EAV triplet models of the ontology class represented about 91.4% of the sentiment phrases included in the sentiment dictionary. In the sentiment analyses, \"academic stresses\" and \"suicide\" contributed negatively to the sentiment of adolescent depression.\n\n\nCONCLUSIONS\nThe ontology and terminology developed in this study provide a semantic foundation for analyzing social media data on adolescent depression. To be useful in social media data analysis, the ontology, especially the terminology, needs to be updated constantly to reflect rapidly changing terms used by adolescents in social media postings. In addition, more attributes and value sets reflecting depression-related sentiments should be added to the ontology.",
"title": ""
},
{
"docid": "ac8d66a387f3c2b7fc6c579e33b27c64",
"text": "We revisit the relation between stock market volatility and macroeconomic activity using a new class of component models that distinguish short-run from long-run movements. We formulate models with the long-term component driven by inflation and industrial production growth that are in terms of pseudo out-of-sample prediction for horizons of one quarter at par or outperform more traditional time series volatility models at longer horizons. Hence, imputing economic fundamentals into volatility models pays off in terms of long-horizon forecasting. We also find that macroeconomic fundamentals play a significant role even at short horizons.",
"title": ""
},
{
"docid": "eb5fbb3b7e421466f313dd9e4da39491",
"text": "We propose two dynamic indexing schemes for shortest-path and distance queries on large time-evolving graphs, which are useful in a wide range of important applications such as real-time network-aware search and network evolution analysis. To the best of our knowledge, these methods are the first practical exact indexing methods to efficiently process distance queries and dynamic graph updates. We first propose a dynamic indexing scheme for queries on the last snapshot. The scalability and efficiency of its offline indexing algorithm and query algorithm are competitive even with previous static methods. Meanwhile, the method is dynamic, that is, it can incrementally update indices as the graph changes over time. Then, we further design another dynamic indexing scheme that can also answer two kinds of historical queries with regard to not only the latest snapshot but also previous snapshots.\n Through extensive experiments on real and synthetic evolving networks, we show the scalability and efficiency of our methods. Specifically, they can construct indices from large graphs with millions of vertices, answer queries in microseconds, and update indices in milliseconds.",
"title": ""
},
{
"docid": "2b491f3c06f91e62e07b43c68bec0801",
"text": "Sissay M.M., 2007. Helminth parasites of sheep and goats in eastern Ethiopia: Epidemiology, and anthelmintic resistance and its management. Doctoral thesis, Swedish University of Agricultural Sciences, Uppsala, Sweden. ISSN 1652-6880, ISBN 978-91-576-7351-0 A two-year epidemiology study of helminths of small ruminants involved the collection of viscera from 655 sheep and 632 goats from 4 abattoirs in eastern Ethiopia. A further more detailed epidemiology study of gastro-intestinal nematode infections used the Haramaya University (HU) flock of 60 Black Head Ogaden sheep. The parasitological data included numbers of nematode eggs per gram of faeces (EPG), faecal culture L3 larvae, packed red cell volume (PCV), adult worm and early L4 counts, and FAMACHA eye-colour score estimates, along with animal performance (body weight change). There were 13 species of nematodes and 4 species of flukes present in the sheep and goats, with Haemonchus contortus being the most prevalent (65–80%), followed by Trichostrongylus spp. The nematode infection levels of both sheep and goats followed the bi-modal annual rainfall pattern, with the highest worm burdens occurring during the two rain seasons (peaks in May and September). There were significant differences in worm burdens between the 4 geographic locations for both sheep and goats. Similar seasonal but not geographical variations occurred in the prevalence of flukes. There were significant correlations between EPG and PCV, EPG and FAMACHA scores, and PCV and FAMACHA scores. Moreover, H. contortus showed an increased propensity for arrested development during the dry seasons. Faecal egg count reduction tests (FECRT) conducted on the HU flocks, and flocks in surrounding small-holder communities, evaluated the efficacy of commonly used anthelmintics, including albendazole (ABZ), tetramisole (TET), a combination (ABZ + TET) and ivermectin (IVM). Initially, high levels of resistance to all of the anthelmintics were found in the HU goat flock but not in the sheep. In an attempt to restore the anthelmintic efficacy a new management system was applied to the HU goat flock, including: eliminating the existing parasite infections in the goats, exclusion from the traditional goat pastures, and initiation of communal grazing of the goats with the HU sheep and animals of the local small-holder farmers. Subsequent FECRTs revealed high levels of efficacy of all three drugs in the goat and sheep flocks, demonstrating that anthelmintic efficacy can be restored by exploiting refugia. Individual FECRTs were also conducted on 8 sheep and goat flocks owned by neighbouring small-holder farmers, who received breeding stock from the HU. In each FECRT, 50 local breed sheep and goats, 6–9 months old, were divided into 5 treatment groups: ABZ, TET, ABZ + TET, IVM and untreated control. There was no evidence of anthelmintic resistance in the nematodes, indicating that dilution of resistant parasites, which are likely to be imported with introduced breeding goats, and the low selection pressure imposed by the small-holder farmers, had prevented anthelmintic resistance from emerging.",
"title": ""
},
{
"docid": "7e4c283766a18a12bda4c5990a5ae310",
"text": "In Genome Projects, biological sequences are aligned thousands of times, in a daily basis. The Smith-Waterman algorithm is able to retrieve the optimal local alignment with quadratic time and space complexity. So far, aligning huge sequences, such as whole chromosomes, with the Smith-Waterman algorithm has been regarded as unfeasible, due to huge computing and memory requirements. However, high-performance computing platforms such as GPUs are making it possible to obtain the optimal result for huge sequences in reasonable time. In this paper, we propose and evaluate CUDAlign 2.1, a parallel algorithm that uses GPU to align huge sequences, executing the Smith-Waterman algorithm combined with Myers-Miller, with linear space complexity. In order to achieve that, we propose optimizations which are able to reduce significantly the amount of data processed, while enforcing full parallelism most of the time. Using the NVIDIA GTX 560 Ti board and comparing real DNA sequences that range from 162 KBP (Thousand Base Pairs) to 59 MBP (Million Base Pairs), we show that CUDAlign 2.1 is scalable. Also, we show that CUDAlign 2.1 is able to produce the optimal alignment between the chimpanzee chromosome 22 (33 MBP) and the human chromosome 21 (47 MBP) in 8.4 hours and the optimal alignment between the chimpanzee chromosome Y (24 MBP) and the human chromosome Y (59 MBP) in 13.1 hours.",
"title": ""
},
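Editor's note: for reference, a plain CPU Smith-Waterman local-alignment score computation in Python (quadratic time, linear space when only the score is needed). The GPU and Myers-Miller traceback machinery described above is not reproduced here, and the scoring parameters are arbitrary.

```python
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    """Best local alignment score between strings a and b (score only, linear space)."""
    prev = [0] * (len(b) + 1)
    best = 0
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            cur[j] = max(0,
                         prev[j - 1] + s,   # diagonal: match/mismatch
                         prev[j] + gap,     # gap in b
                         cur[j - 1] + gap)  # gap in a
            best = max(best, cur[j])
        prev = cur
    return best

print(smith_waterman_score("GATTACA", "GCATGCU"))
```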
{
"docid": "31555a5981fd234fe9dce3ed47f690f2",
"text": "An accredited biennial 2012 study by the Association of Certified Fraud Examiners claims that on average 5% of a company’s revenue is lost because of unchecked fraud every year. The reason for such heavy losses are that it takes around 18 months for a fraud to be caught and audits catch only 3% of the actual fraud. This begs the need for better tools and processes to be able to quickly and cheaply identify potential malefactors. In this paper, we describe a robust tool to identify procurement related fraud/risk, though the general design and the analytical components could be adapted to detecting fraud in other domains. Besides analyzing standard transactional data, our solution analyzes multiple public and private data sources leading to wider coverage of fraud types than what generally exists in the marketplace. Moreover, our approach is more principled in the sense that the learning component, which is based on investigation feedback has formal guarantees. Though such a tool is ever evolving, an initial deployment of this tool over the past 6 months has found many interesting cases from compliance risk and fraud point of view, increasing the number of true positives found by over 80% compared with other state-of-the-art tools that the domain experts were previously using.",
"title": ""
},
{
"docid": "de6de62ab783eb1b0a9347a6fa8dcacb",
"text": "The human face is among the most significant objects in an image or video, it contains many important information and specifications, also is required to be the cause of almost all achievable look variants caused by changes in scale, location, orientation, pose, facial expression, lighting conditions and partial occlusions. It plays a key role in face recognition systems and many other face analysis applications. We focus on the feature based approach because it gave great results on detect the human face. Face feature detection techniques can be mainly divided into two kinds of approaches are Feature base and image base approach. Feature base approach tries to extract features and match it against the knowledge of the facial features. This paper gives the idea about challenging problems in the field of human face analysis and as such, as it has achieved a great attention over the last few years because of its many applications in various domains. Furthermore, several existing face detection approaches are analyzed and discussed and attempt to give the issues regarding key technologies of feature base methods, we had gone direct comparisons of the method's performance are made where possible and the advantages/ disadvantages of different approaches are discussed.",
"title": ""
},
{
"docid": "98cb348fd15ba046fff0890a906dd98e",
"text": "Individual and Corporate Social Responsibility Society’s demands for individual and corporate social responsibility as an alternative response to market and distributive failures are becoming increasingly prominent. We first draw on recent developments in the “psychology and economics” of prosocial behavior to shed light on this trend, which reflects a complex interplay of genuine altruism, social or self image concerns, and material incentives. We then link individual concerns to corporate social responsibility, contrasting three possible understandings of the term: the adoption of a more long-term perspective by firms, the delegated exercise of prosocial behavior on behalf of stakeholders, and insider-initiated corporate philanthropy. For both individuals and firms we discuss the benefits, costs and limits of socially responsible behavior as a means to further societal goals. JEL Classification: D64, D78, H41, L31",
"title": ""
},
{
"docid": "c5d379c307a7b8dd172d973d023d57d4",
"text": "In this tutorial we present dynamic adaptive streaming over HTTP ranging from content creation to consumption. It particular, it provides an overview of the recently ratified MPEG-DASH standard, how to create content to be delivered using DASH, its consumption, and the evaluation thereof with respect to competing industry solutions. The tutorial can be roughly clustered into three parts. In part I we will provide an introduction to DASH, part II covers content creation, delivery, and consumption, and, finally, part III deals with the evaluation of existing (open source) MPEG-DASH implementations compared to state-of-art deployed industry solutions.",
"title": ""
},
{
"docid": "376471fa0c721de5a319e990a5dbccc8",
"text": "The basal ganglia are thought to play an important role in regulating motor programs involved in gait and in the fluidity and sequencing of movement. We postulated that the ability to maintain a steady gait, with low stride-to-stride variability of gait cycle timing and its subphases, would be diminished with both Parkinson's disease (PD) and Huntington's disease (HD). To test this hypothesis, we obtained quantitative measures of stride-to-stride variability of gait cycle timing in subjects with PD (n = 15), HD (n = 20), and disease-free controls (n = 16). All measures of gait variability were significantly increased in PD and HD. In subjects with PD and HD, gait variability measures were two and three times that observed in control subjects, respectively. The degree of gait variability correlated with disease severity. In contrast, gait speed was significantly lower in PD, but not in HD, and average gait cycle duration and the time spent in many subphases of the gait cycle were similar in control subjects, HD subjects, and PD subjects. These findings are consistent with a differential control of gait variability, speed, and average gait cycle timing that may have implications for understanding the role of the basal ganglia in locomotor control and for quantitatively assessing gait in clinical settings.",
"title": ""
},
{
"docid": "39c1be028688904914fb8d7be729a272",
"text": "Projections of computer technology forecast processors with peak performance of 1,000 MIPS in the relatively near future. These processors could easily lose half or more of their performance in the memory hierarchy if the hierarchy design is based on conventional caching techniques. This paper presents hardware techniques to improve the performance of caches. Miss caching places a small fully-associative cache between a cache and its refill path. Misses in the cache that hit in the miss cache have only a one cycle miss penalty, as opposed to a many cycle miss penalty without the miss cache. Small miss caches of 2 to 5 entries are shown to be very effective in removing mapping conflict misses in first-level direct-mapped caches. Victim caching is an improvement to miss caching that loads the small fully-associative cache with the victim of a miss and not the requested line. Small victim caches of 1 to 5 entries are even more effective at removing conflict misses than miss caching. Stream buffers prefetch cache lines starting at a cache miss address. The prefetched data is placed in the buffer and not in the cache. Stream buffers are useful in removing capacity and compulsory cache misses, as well as some instruction cache conflict misses. Stream buffers are more effective than previously investigated prefetch techniques at using the next slower level in the memory hierarchy when it is pipelined. An extension to the basic stream buffer, called multi-way stream buffers, is introduced. Multi-way stream buffers are useful for prefetching along multiple intertwined data reference streams. Together, victim caches and stream buffers reduce the miss rate of the first level in the cache hierarchy by a factor of two to three on a set of six large benchmarks. Copyright 1990 Digital Equipment Corporation",
"title": ""
},
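Editor's note: a toy trace-driven simulation of the victim-cache idea from the passage above — a direct-mapped L1 backed by a small fully-associative victim buffer that absorbs conflict misses. The sizes, the address trace and the replacement details are illustrative only, not the paper's evaluated configuration.

```python
from collections import OrderedDict

class DirectMappedWithVictim:
    def __init__(self, n_sets, victim_entries):
        self.sets = [None] * n_sets              # one tag per direct-mapped set
        self.victim = OrderedDict()              # (tag, set) -> True, kept in LRU order
        self.victim_entries = victim_entries
        self.hits = self.victim_hits = self.misses = 0

    def access(self, addr):
        idx, tag = addr % len(self.sets), addr // len(self.sets)
        if self.sets[idx] == tag:
            self.hits += 1
            return
        if (tag, idx) in self.victim:            # one-cycle miss: swap victim line into L1
            self.victim_hits += 1
            del self.victim[(tag, idx)]
        else:
            self.misses += 1                     # full miss penalty
        evicted = self.sets[idx]
        self.sets[idx] = tag
        if evicted is not None:
            self.victim[(evicted, idx)] = True
            if len(self.victim) > self.victim_entries:
                self.victim.popitem(last=False)  # drop least-recently-used victim

cache = DirectMappedWithVictim(n_sets=4, victim_entries=2)
for a in [0, 4, 0, 4, 0, 4]:                     # 0 and 4 conflict in the same set
    cache.access(a)
print(cache.hits, cache.victim_hits, cache.misses)  # -> 0 4 2
```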
{
"docid": "de34cb3489e58366f4aff7f05ba558c9",
"text": "Current initiatives in the field of Business Process Management (BPM) strive for the development of a BPM standard notation by pushing the Business Process Modeling Notation (BPMN). However, such a proposed standard notation needs to be carefully examined. Ontological analysis is an established theoretical approach to evaluating modelling techniques. This paper reports on the outcomes of an ontological analysis of BPMN and explores identified issues by reporting on interviews conducted with BPMN users in Australia. Complementing this analysis we consolidate our findings with previous ontological analyses of process modelling notations to deliver a comprehensive assessment of BPMN.",
"title": ""
},
{
"docid": "671bcd8c52fd6ad3cb2806ffa0cedfda",
"text": "In this paper we present a class of soft-robotic systems with superior load bearing capacity and expanded degrees of freedom. Spatial parallel soft robotic systems utilize spatial arrangement of soft actuators in a manner similar to parallel kinematic machines. In this paper we demonstrate that such an arrangement of soft actuators enhances stiffness and yield dramatic motions. The current work utilizes tri-chamber actuators made from silicone rubber to demonstrate the viability of the concept.",
"title": ""
}
] |
scidocsrr
|
e1bfbec4d77e0fd9cbeaeadaa36f3267
|
Compressing Convolutional Neural Networks in the Frequency Domain
|
[
{
"docid": "8207f59dab8704d14874417f6548c0a7",
"text": "The fully-connected layers of deep convolutional neural networks typically contain over 90% of the network parameters. Reducing the number of parameters while preserving predictive performance is critically important for training big models in distributed systems and for deployment in embedded devices. In this paper, we introduce a novel Adaptive Fastfood transform to reparameterize the matrix-vector multiplication of fully connected layers. Reparameterizing a fully connected layer with d inputs and n outputs with the Adaptive Fastfood transform reduces the storage and computational costs costs from O(nd) to O(n) and O(n log d) respectively. Using the Adaptive Fastfood transform in convolutional networks results in what we call a deep fried convnet. These convnets are end-to-end trainable, and enable us to attain substantial reductions in the number of parameters without affecting prediction accuracy on the MNIST and ImageNet datasets.",
"title": ""
}
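Editor's note: a minimal NumPy sketch of the structured random projection that Fastfood-style reparameterizations build on (Hadamard, diagonal and permutation factors instead of a dense matrix). This is a generic illustration under stated assumptions, not the paper's Adaptive Fastfood layer; it omits the scaling matrix and the learning of the diagonal parameters.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform along the last axis (length must be a power of two)."""
    x = x.copy()
    n = x.shape[-1]
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            a = x[..., i:i + h].copy()
            b = x[..., i + h:i + 2 * h].copy()
            x[..., i:i + h] = a + b
            x[..., i + h:i + 2 * h] = a - b
        h *= 2
    return x

def fastfood_like(x, rng):
    """y = H G P H B x, computed in O(d log d) instead of a dense d x d multiply."""
    d = x.shape[-1]
    B = rng.choice([-1.0, 1.0], size=d)       # random sign flips
    P = rng.permutation(d)                     # random permutation
    G = rng.standard_normal(d)                 # random Gaussian scaling
    y = fwht(x * B)
    y = y[..., P] * G
    return fwht(y) / d                         # normalize the two unnormalized Hadamard passes

rng = np.random.default_rng(0)
x = rng.standard_normal(8)                     # d must be a power of two in this sketch
print(fastfood_like(x, rng))
```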
] |
[
{
"docid": "eb2d29417686cc86a45c33694688801f",
"text": "We present a method to incorporate global orientation information from the sun into a visual odometry pipeline using only the existing image stream, where the sun is typically not visible. We leverage recent advances in Bayesian Convolutional Neural Networks to train and implement a sun detection model that infers a three-dimensional sun direction vector from a single RGB image. Crucially, our method also computes a principled uncertainty associated with each prediction, using a Monte Carlo dropout scheme. We incorporate this uncertainty into a sliding window stereo visual odometry pipeline where accurate uncertainty estimates are critical for optimal data fusion. Our Bayesian sun detection model achieves a median error of approximately 12 degrees on the KITTI odometry benchmark training set, and yields improvements of up to 42% in translational ARMSE and 32% in rotational ARMSE compared to standard VO. An open source implementation of our Bayesian CNN sun estimator (Sun-BCNN) using Caffe is available at https://github.com/utiasSTARS/sun-bcnn-vo.",
"title": ""
},
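Editor's note: a NumPy-only illustration of the Monte Carlo dropout idea the passage above relies on — keep dropout active at test time, run several stochastic forward passes, and read the spread of the outputs as a rough predictive uncertainty. The two-layer toy network and all sizes are invented for the example and have nothing to do with the actual Sun-BCNN architecture.

```python
import numpy as np

rng = np.random.default_rng(42)
W1, b1 = rng.standard_normal((16, 4)), np.zeros(16)    # toy 2-layer regressor
W2, b2 = rng.standard_normal((3, 16)), np.zeros(3)     # 3 outputs, e.g. a direction vector

def forward(x, p_drop=0.5):
    h = np.maximum(0.0, W1 @ x + b1)                    # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop                 # dropout stays ON at test time
    h = h * mask / (1.0 - p_drop)
    return W2 @ h + b2

x = rng.standard_normal(4)
samples = np.stack([forward(x) for _ in range(100)])    # Monte Carlo forward passes
mean, std = samples.mean(axis=0), samples.std(axis=0)
print("prediction:", mean, "uncertainty:", std)
```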
{
"docid": "04b62ed72ddf8f97b9cb8b4e59a279c1",
"text": "This paper aims to explore some of the manifold and changing links that official Pakistani state discourses forged between women and work from the 1940s to the late 2000s. The focus of the analysis is on discursive spaces that have been created for women engaged in non-domestic work. Starting from an interpretation of the existing academic literature, this paper argues that Pakistani women’s non-domestic work has been conceptualised in three major ways: as a contribution to national development, as a danger to the nation, and as non-existent. The paper concludes that although some conceptualisations of work have been more powerful than others and, at specific historical junctures, have become part of concrete state policies, alternative conceptualisations have always existed alongside them. Disclosing the state’s implication in the discursive construction of working women’s identities might contribute to the destabilisation of hegemonic concepts of gendered divisions of labour in Pakistan. DOI: https://doi.org/10.1016/j.wsif.2013.05.007 Posted at the Zurich Open Repository and Archive, University of Zurich ZORA URL: https://doi.org/10.5167/uzh-78605 Accepted Version Originally published at: Grünenfelder, Julia (2013). Discourses of gender identities and gender roles in Pakistan: Women and non-domestic work in political representations. Women’s Studies International Forum, 40:68-77. DOI: https://doi.org/10.1016/j.wsif.2013.05.007",
"title": ""
},
{
"docid": "91811c07f246e979401937aca9b66f7e",
"text": "Extraction of complex head and hand movements along with their constantly changing shapes for recognition of sign language is considered a difficult problem in computer vision. This paper proposes the recognition of Indian sign language gestures using a powerful artificial intelligence tool, convolutional neural networks (CNN). Selfie mode continuous sign language video is the capture method used in this work, where a hearing-impaired person can operate the SLR mobile application independently. Due to non-availability of datasets on mobile selfie sign language, we initiated to create the dataset with five different subjects performing 200 signs in 5 different viewing angles under various background environments. Each sign occupied for 60 frames or images in a video. CNN training is performed with 3 different sample sizes, each consisting of multiple sets of subjects and viewing angles. The remaining 2 samples are used for testing the trained CNN. Different CNN architectures were designed and tested with our selfie sign language data to obtain better accuracy in recognition. We achieved 92.88% recognition rate compared to other classifier models reported on the same dataset.",
"title": ""
},
{
"docid": "e1060ca6a60857a995fb22b6c773ebe1",
"text": "Fast and robust pupil detection is an essential prerequisite for video-based eye-tracking in real-world settings. Several algorithms for image-based pupil detection have been proposed in the past, their applicability, however, is mostly limited to laboratory conditions. In real-world scenarios, automated pupil detection has to face various challenges, such as illumination changes, reflections (on glasses), make-up, non-centered eye recording, and physiological eye characteristics. We propose ElSe, a novel algorithm based on ellipse evaluation of a filtered edge image. We aim at a robust, inexpensive approach that can be integrated in embedded architectures, e.g., driving. The proposed algorithm was evaluated against four state-of-the-art methods on over 93,000 hand-labeled images from which 55,000 are new eye images contributed by this work. On average, the proposed method achieved a 14.53% improvement on the detection rate relative to the best state-of-the-art performer. Algorithm and data sets are available for download: ftp://emmapupildata@messor.informatik.unituebingen.de (password:eyedata).",
"title": ""
},
{
"docid": "d1a9ac5a11d1f9fbd9b9ee24a199cb70",
"text": "In this paper, we proposed a new robust twin support vector machine (called R-TWSVM) via second order cone programming formulations for classification, which can deal with data with measurement noise efficiently. Preliminary experiments confirm the robustness of the proposed method and its superiority to the traditional robust SVM in both computation time and classification accuracy. Remarkably, since there are only inner products about inputs in our dual problems, this makes us apply kernel trick directly for nonlinear cases. Simultaneously we does not need to solve the extra inverse of matrices, which is totally different with existing TWSVMs. In addition, we also show that the TWSVMs are the special case of our robust model and simultaneously give a new dual form of TWSVM by degenerating R-TWSVM, which successfully overcomes the existing shortcomings of TWSVM. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c7eca07c70cab1eca77de2e10fc53a72",
"text": "The revolutionary concept of Software Defined Networks (SDNs) potentially provides flexible and wellmanaged next-generation networks. All the hype surrounding the SDNs is predominantly because of its centralized management functionality, the separation of the control plane from the data forwarding plane, and enabling innovation through network programmability. Despite the promising architecture of SDNs, security was not considered as part of the initial design. Moreover, security concerns are potentially augmented considering the logical centralization of network intelligence. Furthermore, the security and dependability of the SDN has largely been a neglected topic and remains an open issue. The paper presents a broad overview of the security implications of each SDN layer/interface. This paper contributes further by devising a contemporary layered/interface taxonomy of the reported security vulnerabilities, attacks, and challenges of SDN. We also highlight and analyze the possible threats on each layer/interface of SDN to help design secure SDNs. Moreover, the ensuing paper contributes by presenting the state-ofthe-art SDNs security solutions. The categorization of solutions is followed by a critical analysis and discussion to devise a comprehensive thematic taxonomy. We advocate the production of secure and dependable SDNs by presenting potential requirements and key enablers. Finally, in an effort to anticipate secure and dependable SDNs, we present the ongoing open security issues, challenges and future research directions. & 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "28d89bf52b1955de36474fc247a381cf",
"text": "Cannabis has been employed medicinally throughout history, but its recent legal prohibition, biochemical complexity and variability, quality control issues, previous dearth of appropriately powered randomised controlled trials, and lack of pertinent education have conspired to leave clinicians in the dark as to how to advise patients pursuing such treatment. With the advent of pharmaceutical cannabis-based medicines (Sativex/nabiximols and Epidiolex), and liberalisation of access in certain nations, this ignorance of cannabis pharmacology and therapeutics has become untenable. In this article, the authors endeavour to present concise data on cannabis pharmacology related to tetrahydrocannabinol (THC), cannabidiol (CBD) et al., methods of administration (smoking, vaporisation, oral), and dosing recommendations. Adverse events of cannabis medicine pertain primarily to THC, whose total daily dose-equivalent should generally be limited to 30mg/day or less, preferably in conjunction with CBD, to avoid psychoactive sequelae and development of tolerance. CBD, in contrast to THC, is less potent, and may require much higher doses for its adjunctive benefits on pain, inflammation, and attenuation of THC-associated anxiety and tachycardia. Dose initiation should commence at modest levels, and titration of any cannabis preparation should be undertaken slowly over a period of as much as two weeks. Suggestions are offered on cannabis-drug interactions, patient monitoring, and standards of care, while special cases for cannabis therapeutics are addressed: epilepsy, cancer palliation and primary treatment, chronic pain, use in the elderly, Parkinson disease, paediatrics, with concomitant opioids, and in relation to driving and hazardous activities.",
"title": ""
},
{
"docid": "8e5573b7ab9789a73d431b666bfb3c8a",
"text": "Automated question answering has been a topic of research and development since the earliest AI applications. Computing power has increased since the first such systems were developed, and the general methodology has changed from the use of hand-encoded knowledge bases about simple domains to the use of text collections as the main knowledge source over more complex domains. Still, many research issues remain. The focus of this article is on the use of restricted domains for automated question answering. The article contains a historical perspective on question answering over restricted domains and an overview of the current methods and applications used in restricted domains. A main characteristic of question answering in restricted domains is the integration of domain-specific information that is either developed for question answering or that has been developed for other purposes. We explore the main methods developed to leverage this domain-specific information.",
"title": ""
},
{
"docid": "d7e2ab4a70dee48770a1ed9ccbeba08f",
"text": "Brezonik, P., K.D. Menken and M. Bauer. 2005. Landsat-based remote sensing of lake water quality characteristics, including chlorophyll and colored dissolved organic matter (CDOM). Lake and Reserv. Manage. 21(4):373-382. Ground-based measurements on 15 Minnesota lakes with wide ranges of optical properties and Landsat TM data from the same lakes were used to evaluate the effect of humic color on satellite-inferred water quality conditions. Color (C440), as measured by absorbance at 440 nm, causes only small biases in estimates of Secchi disk transparency (SDT) from Landsat TM data, except at very high values (> ~ 300 chloroplatinate units, CPU). Similarly, when chlorophyll a (chl a) levels are moderate or high (> 10 μg/L), low-to-moderate levels of humic color have only a small influence on the relationship between SDT and chl a concentration, but it has a pronounced influence at high levels of C440 (e.g., > ~200 CPU). However, deviations from the general chl a-SDT relationship occur at much lower C440 values (~ 60 CPU) when chl a levels are low. Good statistical relationships were found between optical properties of lake water generally associated with algal abundance (SDT, chl a, turbidity) and measured brightness of various Landsat TM bands. The best relationships for chl a (based on R2 and absence of statistical outliers or lakes with large leverage) were combinations of bands 1, 2, or 4 with the band ratio 1:3 (R2 = 0.88). Although TM bands 1-4 individually or as simple ratios were poor predictors of C440, multiple regression analyses between ln(C440) and combinations of bands 1-4 and band ratios yielded several relationships with R2 ≥ 0.70, suggesting that C440 can be estimated with fair reliability from Landsat TM data.",
"title": ""
},
{
"docid": "f38709ee76dd9988b36812a7801f7336",
"text": "BACKGROUND\nMost individuals with mood disorders experience psychiatric and/or medical comorbidity. Available treatment guidelines for major depressive disorder (MDD) and bipolar disorder (BD) have focused on treating mood disorders in the absence of comorbidity. Treating comorbid conditions in patients with mood disorders requires sufficient decision support to inform appropriate treatment.\n\n\nMETHODS\nThe Canadian Network for Mood and Anxiety Treatments (CANMAT) task force sought to prepare evidence- and consensus-based recommendations on treating comorbid conditions in patients with MDD and BD by conducting a systematic and qualitative review of extant data. The relative paucity of studies in this area often required a consensus-based approach to selecting and sequencing treatments.\n\n\nRESULTS\nSeveral principles emerge when managing comorbidity. They include, but are not limited to: establishing the diagnosis, risk assessment, establishing the appropriate setting for treatment, chronic disease management, concurrent or sequential treatment, and measurement-based care.\n\n\nCONCLUSIONS\nEfficacy, effectiveness, and comparative effectiveness research should emphasize treatment and management of conditions comorbid with mood disorders. Clinicians are encouraged to screen and systematically monitor for comorbid conditions in all individuals with mood disorders. The common comorbidity in mood disorders raises fundamental questions about overlapping and discrete pathoetiology.",
"title": ""
},
{
"docid": "7334904bb8b95fbf9668c388d30d4d72",
"text": "Write-optimized data structures like Log-Structured Merge-tree (LSM-tree) and its variants are widely used in key-value storage systems like Big Table and Cassandra. Due to deferral and batching, the LSM-tree based storage systems need background compactions to merge key-value entries and keep them sorted for future queries and scans. Background compactions play a key role on the performance of the LSM-tree based storage systems. Existing studies about the background compaction focus on decreasing the compaction frequency, reducing I/Os or confining compactions on hot data key-ranges. They do not pay much attention to the computation time in background compactions. However, the computation time is no longer negligible, and even the computation takes more than 60% of the total compaction time in storage systems using flash based SSDs. Therefore, an alternative method to speedup the compaction is to make good use of the parallelism of underlying hardware including CPUs and I/O devices. In this paper, we analyze the compaction procedure, recognize the performance bottleneck, and propose the Pipelined Compaction Procedure (PCP) to better utilize the parallelism of CPUs and I/O devices. Theoretical analysis proves that PCP can improve the compaction bandwidth. Furthermore, we implement PCP in real system and conduct extensive experiments. The experimental results show that the pipelined compaction procedure can increase the compaction bandwidth and storage system throughput by 77% and 62% respectively.",
"title": ""
},
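Editor's note: a schematic Python pipeline mirroring the idea in the passage above — overlap the read, merge and write phases of a compaction using separate worker threads connected by bounded queues. The file handling, stage granularity and data are placeholders; real LSM-tree engines pipeline at a much finer grain than this sketch.

```python
import heapq
import queue
import threading

read_q, write_q = queue.Queue(maxsize=4), queue.Queue(maxsize=4)

def reader(runs):
    """Stage 1 (I/O): pull sorted runs from 'storage' and hand them to the merger."""
    read_q.put([iter(run) for run in runs])
    read_q.put(None)

def merger():
    """Stage 2 (CPU): merge-sort the runs while I/O proceeds in parallel."""
    while (iters := read_q.get()) is not None:
        for key in heapq.merge(*iters):
            write_q.put(key)
    write_q.put(None)

def writer(out):
    """Stage 3 (I/O): flush merged keys back to 'storage'."""
    while (key := write_q.get()) is not None:
        out.append(key)

runs = [[1, 4, 7], [2, 5, 8], [3, 6, 9]]     # already-sorted input runs
out = []
threads = [threading.Thread(target=reader, args=(runs,)),
           threading.Thread(target=merger),
           threading.Thread(target=writer, args=(out,))]
for t in threads: t.start()
for t in threads: t.join()
print(out)                                    # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```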
{
"docid": "d9356e0a1e207c53301d776b0895bcd3",
"text": "Neurodegenerative diseases are a common cause of morbidity and cognitive impairment in older adults. Most clinicians who care for the elderly are not trained to diagnose these conditions, perhaps other than typical Alzheimer's disease (AD). Each of these disorders has varied epidemiology, clinical symptomatology, laboratory and neuroimaging features, neuropathology, and management. Thus, it is important that clinicians be able to differentiate and diagnose these conditions accurately. This review summarizes and highlights clinical aspects of several of the most commonly encountered neurodegenerative diseases, including AD, frontotemporal dementia (FTD) and its variants, progressive supranuclear palsy (PSP), corticobasal degeneration (CBD), Parkinson's disease (PD), dementia with Lewy bodies (DLB), multiple system atrophy (MSA), and Huntington's disease (HD). For each condition, we provide a brief overview of the epidemiology, defining clinical symptoms and diagnostic criteria, relevant imaging and laboratory features, genetics, pathology, treatments, and differential diagnosis.",
"title": ""
},
{
"docid": "4a31889cf90d39b7c49d02174a425b5b",
"text": "Inter-vehicle communication (IVC) protocols have the potential to increase the safety, efficiency, and convenience of transportation systems involving planes, trains, automobiles, and robots. The applications targeted include peer-to-peer networks for web surfing, coordinated braking, runway incursion prevention, adaptive traffic control, vehicle formations, and many others. The diversity of the applications and their potential communication protocols has challenged a systematic literature survey. We apply a classification technique to IVC applications to provide a taxonomy for detailed study of their communication requirements. The applications are divided into type classes which share common communication organization and performance requirements. IVC protocols are surveyed separately and their fundamental characteristics are revealed. The protocol characteristics are then used to determine the relevance of specific protocols to specific types of IVC applications.",
"title": ""
},
{
"docid": "8774c5a504e2d04e8a49e3625327828a",
"text": "Forest fire prediction constitutes a significant component of forest fire management. It plays a major role in resource allocation, mitigation and recovery efforts. This paper presents a description and analysis of forest fire prediction methods based on artificial intelligence. A novel forest fire risk prediction algorithm, based on support vector machines, is presented. The algorithm depends on previous weather conditions in order to predict the fire hazard level of a day. The implementation of the algorithm using data from Lebanon demonstrated its ability to accurately predict the hazard of fire occurrence.",
"title": ""
},
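Editor's note: the passage above describes an SVM fire-hazard predictor driven by previous weather conditions. Below is a minimal scikit-learn sketch of that kind of setup on synthetic data; the feature names, thresholds and the toy data generator are invented, whereas the authors' actual features and labels come from Lebanese weather records.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 500
# columns: mean temperature (C), relative humidity (%), days since last rain
X = np.column_stack([rng.normal(25, 7, n),
                     rng.uniform(20, 90, n),
                     rng.integers(0, 30, n)])
# synthetic label: "high hazard" when it is hot, dry and has not rained for a while
y = ((X[:, 0] > 28) & (X[:, 1] < 45) & (X[:, 2] > 7)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```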
{
"docid": "ec7c9fa71dcf32a3258ee8712ccb95c1",
"text": "Fuzzy graph is now a very important research area due to its wide application. Fuzzy multigraph and fuzzy planar graphs are two subclasses of fuzzy graph theory. In this paper, we define both of these graphs and studied a lot of properties. A very close association of fuzzy planar graph is fuzzy dual graph. This is also defined and studied several properties. The relation between fuzzy planar graph and fuzzy dual graph is also established.",
"title": ""
},
{
"docid": "e0092f7964604f7adbe9f010bbac4871",
"text": "In the last decade, Web 2.0 services such as blogs, tweets, forums, chats, email etc. have been widely used as communication media, with very good results. Sharing knowledge is an important part of learning and enhancing skills. Furthermore, emotions may affect decisionmaking and individual behavior. Bitcoin, a decentralized electronic currency system, represents a radical change in financial systems, attracting a large number of users and a lot of media attention. In this work, we investigated if the spread of the Bitcoin’s price is related to the volumes of tweets or Web Search media results. We compared trends of price with Google Trends data, volume of tweets and particularly with those that express a positive sentiment. We found significant cross correlation values, especially between Bitcoin price and Google Trends data, arguing our initial idea based on studies about trends in stock and goods market.",
"title": ""
},
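Editor's note: a small NumPy sketch of the lagged cross-correlation analysis described above, applied to two synthetic daily series standing in for Bitcoin price and search/tweet volume. The data are fabricated; only the computation is illustrative.

```python
import numpy as np

def cross_correlation(x, y, max_lag):
    """Pearson correlation between x[t] and y[t + lag] for each lag in [-max_lag, max_lag]."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            a, b = x[-lag:], y[:lag]
        elif lag > 0:
            a, b = x[:-lag], y[lag:]
        else:
            a, b = x, y
        out[lag] = np.corrcoef(a, b)[0, 1]
    return out

rng = np.random.default_rng(1)
volume = rng.standard_normal(200).cumsum()             # stand-in for search/tweet volume
price = np.roll(volume, 3) + rng.normal(0, 0.5, 200)   # price trails volume by 3 "days"
corr = cross_correlation(volume, price, max_lag=7)
print(max(corr, key=corr.get), max(corr.values()))     # strongest lag should be about 3
```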
{
"docid": "dd4860e8dfe73c56c7bd30863ca626b4",
"text": "Terrain rendering is an important component of many GIS applications and simulators. Most methods rely on heightmap-based terrain which is simple to acquire and handle, but has limited capabilities for modeling features like caves, steep cliffs, or overhangs. In contrast, volumetric terrain models, e.g. based on isosurfaces can represent arbitrary topology. In this paper, we present a fast, practical and GPU-friendly level of detail algorithm for large scale volumetric terrain that is specifically designed for real-time rendering applications. Our algorithm is based on a longest edge bisection (LEB) scheme. The resulting tetrahedral cells are subdivided into four hexahedra, which form the domain for a subsequent isosurface extraction step. The algorithm can be used with arbitrary volumetric models such as signed distance fields, which can be generated from triangle meshes or discrete volume data sets. In contrast to previous methods our algorithm does not require any stitching between detail levels. It generates crack free surfaces with a good triangle quality. Furthermore, we efficiently extract the geometry at runtime and require no preprocessing, which allows us to render infinite procedural content with low memory",
"title": ""
},
{
"docid": "77f795e245cd0c358ad42b11199167e1",
"text": "Object recognition and pedestrian detection are of crucial importance to autonomous driving applications. Deep learning based methods have exhibited very large improvements in accuracy and fast decision in real time applications thanks to CUDA support. In this paper, we propose two Convolutions Neural Networks (CNNs) architectures with different layers. We extract the features obtained from the proposed CNN, CNN in AlexNet architecture, and Bag of visual Words (BOW) approach by using SURF, HOG and k-means. We use linear SVM classifiers for training the features. In the experiments, we carried out object recognition and pedestrian detection tasks using the benchmark the Caltech 101 and the Caltech Pedestrian Detection datasets.",
"title": ""
},
{
"docid": "e757ff7aa63b4fea854641ff97de6fb9",
"text": "It is well known that natural images admit sparse representations by redundant dictionaries of basis functions such as Gabor-like wavelets. However, it is still an open question as to what the next layer of representational units above the layer of wavelets should be. We address this fundamental question by proposing a sparse FRAME (Filters, Random field, And Maximum Entropy) model for representing natural image patterns. Our sparse FRAME model is an inhomogeneous generalization of the original FRAME model. It is a non-stationary Markov random field model that reproduces the observed statistical properties of filter responses at a subset of selected locations, scales and orientations. Each sparse FRAME model is intended to represent an object pattern and can be considered a deformable template. The sparse FRAME model can be written as a shared sparse coding model, which motivates us to propose a two-stage algorithm for learning the model. The first stage selects the subset of wavelets from the dictionary by a shared matching pursuit algorithm. The second stage then estimates the parameters of the model given the selected wavelets. Our experiments show that the sparse FRAME models are capable of representing a wide variety of object patterns in natural images and that the learned models are useful for object classification.",
"title": ""
},
{
"docid": "e520b7a8c9f323c92a7e0fa52f38f16d",
"text": "BACKGROUND\nRecent research has revealed concerning rates of anxiety and depression among university students. Nevertheless, only a small percentage of these students receive treatment from university health services. Universities are thus challenged with instituting preventative programs that address student stress and reduce resultant anxiety and depression.\n\n\nMETHOD\nA systematic review of the literature and meta-analysis was conducted to examine the effectiveness of interventions aimed at reducing stress in university students. Studies were eligible for inclusion if the assignment of study participants to experimental or control groups was by random allocation or parallel cohort design.\n\n\nRESULTS\nRetrieved studies represented a variety of intervention approaches with students in a broad range of programs and disciplines. Twenty-four studies, involving 1431 students were included in the meta-analysis. Cognitive, behavioral and mindfulness interventions were associated with decreased symptoms of anxiety. Secondary outcomes included lower levels of depression and cortisol.\n\n\nLIMITATIONS\nIncluded studies were limited to those published in peer reviewed journals. These studies over-represent interventions with female students in Western countries. Studies on some types of interventions such as psycho-educational and arts based interventions did not have sufficient data for inclusion in the meta-analysis.\n\n\nCONCLUSION\nThis review provides evidence that cognitive, behavioral, and mindfulness interventions are effective in reducing stress in university students. Universities are encouraged to make such programs widely available to students. In addition however, future work should focus on developing stress reduction programs that attract male students and address their needs.",
"title": ""
}
] |
scidocsrr
|
0fc7cf48da43ab10d584b87d8c593354
|
Access control in IoT: Survey & state of the art
|
[
{
"docid": "7e152f2fcd452e67f52b4a5165950f2d",
"text": "This paper describes a framework that allows fine-grained and flexible access control to connected devices with very limited processing power and memory. We propose a set of security and performance requirements for this setting and derive an authorization framework distributing processing costs between constrained devices and less constrained back-end servers while keeping message exchanges with the constrained devices at a minimum. As a proof of concept we present performance results from a prototype implementing the device part of the framework.",
"title": ""
}
] |
[
{
"docid": "d8255047dc2e28707d711f6d6ff19e30",
"text": "This paper discusses the design of a 10 kV and 200 A hybrid dc circuit breaker suitable for the protection of the dc power systems in electric ships. The proposed hybrid dc circuit breaker employs a Thompson coil based ultrafast mechanical switch (MS) with the assistance of two additional solid-state power devices. A low-voltage (80 V) metal–oxide–semiconductor field-effect transistors (MOSFETs)-based commutating switch (CS) is series connected with the MS to realize the zero current turn-OFF of the MS. In this way, the arcing issue with the MS is avoided. A 15 kV SiC emitter turn-OFF thyristor-based main breaker (MB) is parallel connected with the MS and CS branch to interrupt the fault current. A stack of MOVs parallel with the MB are used to clamp the voltage across the hybrid dc circuit breaker during interruption. This paper focuses on the electronic parts of the hybrid dc circuit breaker, and a companion paper will elucidate the principle and operation of the fast acting MS and the overall operation of the hybrid dc circuit breaker. The selection and design of both the high-voltage and low-voltage electronic components in the hybrid dc circuit breaker are presented in this paper. The turn-OFF capability of the MB with and without snubber circuit is experimentally tested, validating its suitability for the hybrid dc circuit breaker application. The CSs’ conduction performances are tested up to 200 A, and its current commutating during fault current interruption is also analyzed. Finally, the hybrid dc circuit breaker demonstrated a fast current interruption within 2 ms at 7 kV and 100 A.",
"title": ""
},
{
"docid": "63a29e42a28698339d7d1f5e1a2fabcc",
"text": "(n) k edges have equal probabilities to be chosen as the next one . We shall 2 study the \"evolution\" of such a random graph if N is increased . In this investigation we endeavour to find what is the \"typical\" structure at a given stage of evolution (i . e . if N is equal, or asymptotically equal, to a given function N(n) of n) . By a \"typical\" structure we mean such a structure the probability of which tends to 1 if n -* + when N = N(n) . If A is such a property that lim Pn,N,(n ) ( A) = 1, we shall say that „almost all\" graphs Gn,N(n) n--possess this property .",
"title": ""
},
{
"docid": "c7daf28d656a9e51e5a738e70beeadcf",
"text": "We present a taxonomy for Information Visualization (IV) that characterizes it in terms of data, task, skill and context, as well as a number of dimensions that relate to the input and output hardware, the software tools, as well as user interactions and human perceptual abil ities. We il lustrate the utilit y of the taxonomy by focusing particularly on the information retrieval task and the importance of taking into account human perceptual capabiliti es and limitations. Although the relevance of Psychology to IV is often recognised, we have seen relatively littl e translation of psychological results and theory to practical IV applications. This paper targets the better development of information visualizations through the introduction of a framework delineating the major factors in interface development. We believe that higher quality visualizations will result from structured developments that take into account these considerations and that the framework will also serve to assist the development of effective evaluation and assessment processes.",
"title": ""
},
{
"docid": "be1965fb5a8c15b07e2b6f9895d383b2",
"text": "Although braided pneumatic actuators are capable of producing phenomenal forces compared to their weight, they have yet to see mainstream use due to their relatively short fatigue lives. By improving manufacturing techniques, actuator lifetime was extended by nearly an order of magnitude. Another concern is that their response times may be too long for control of legged robots. In addition, the frequency response of these actuators was found to be similar to that of human muscle.",
"title": ""
},
{
"docid": "54c8a8669b133e23035d93aabdc01a54",
"text": "The proposed antenna topology is an interesting radiating element, characterized by broadband or multiband capabilities. The exponential and soft/tapered design of the edge transitions and feeding makes it a challenging item to design and tune, leading though to impressive results. The antenna is build on Rogers RO3010 material. The bands in which the antenna works are GPS and Galileo (1.57 GHz), UMTS (1.8–2.17 GHz) and ISM 2.4 GHz (Bluetooth WiFi). The purpose of such an antenna is to be embedded in an Assisted GPS (A-GPS) reference station. Such a device serves as a fix GPS reference distributing the positioning information to mobile device users and delivering at the same time services via GSM network standards or via Wi-Fi / Bluetooth connections.",
"title": ""
},
{
"docid": "e1b536458ddc8603b281bac69e6bd2e8",
"text": "We present highly integrated sensor-actuator-controller units (SAC units), addressing the increasing need for easy to use components in the design of modern high-performance robotic systems. Following strict design principles and an electro-mechanical co-design from the beginning on, our development resulted in highly integrated SAC units. Each SAC unit includes a motor, a gear unit, an IMU, sensors for torque, position and temperature as well as all necessary embedded electronics for control and communication over a high-speed EtherCAT bus. Key design considerations were easy to use interfaces and a robust cabling system. Using slip rings to electrically connect the input and output side, the units allow continuous rotation even when chained along a robotic arm. The experimental validation shows the potential of the new SAC units regarding the design of humanoid robots.",
"title": ""
},
{
"docid": "95452e8b73a19500b1820665d2ad50b5",
"text": "Voltage noise not only detracts from reliability and performance, but has been used to attack system security. Most systems are completely unaware of fluctuations occurring on nanosecond time scales. This paper quantifies the threat to FPGA-based systems and presents a solution approach. Novel measurements of transients on 28nm FPGAs show that extreme activity in the fabric can cause enormous undershoot and overshoot, more than 10× larger than what is allowed by the specification. An existing voltage sensor is evaluated and shown to be insufficient. Lastly, a sensor design using reconfigurable logic is presented; its time-to-digital converter enables sample rates 500× faster than the 28nm Xilinx ADC. This enables quick characterization of transients that would normally go undetected, thereby providing potentially useful data for system optimization and helping to defend against supply voltage attacks.",
"title": ""
},
{
"docid": "2ceedf1be1770938c94892c80ae956e4",
"text": "Although there is interest in the educational potential of online multiplayer games and virtual worlds, there is still little evidence to explain specifically what and how people learn from these environments. This paper addresses this issue by exploring the experiences of couples that play World of Warcraft together. Learning outcomes were identified (involving the management of ludic, social and material resources) along with learning processes, which followed Wenger’s model of participation in Communities of Practice. Comparing this with existing literature suggests that productive comparisons can be drawn with the experiences of distance education students and the social pressures that affect their participation. Introduction Although there is great interest in the potential that computer games have in educational settings (eg, McFarlane, Sparrowhawk & Heald, 2002), and their relevance to learning more generally (eg, Gee, 2003), there has been relatively little in the way of detailed accounts of what is actually learnt when people play (Squire, 2002), and still less that relates such learning to formal education. In this paper, we describe a study that explores how people learn when they play the massively multiplayer online role-playing game (MMORPG), World of Warcraft. Detailed, qualitative research was undertaken with couples to explore their play, adopting a social perspective on learning. The paper concludes with a discussion that relates this to formal curricula and considers the implications for distance learning. British Journal of Educational Technology Vol 40 No 3 2009 444–457 doi:10.1111/j.1467-8535.2009.00948.x © 2009 The Authors. Journal compilation © 2009 Becta. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. Background Researchers have long been interested in games and learning. There is, for example, a tradition of work within psychology exploring what makes games motivating, and relating this to learning (eg, Malone & Lepper, 1987). Games have been recently featured in mainstream educational policy (eg, DfES, 2005), and it has been suggested (eg, Gee, 2003) that they provide a model that should inform educational practice more generally. However, research exploring how games can be used in formal education suggests that the potential value of games to support learning is not so easy to realise. McFarlane et al (2002, p. 16), for example, argued that ‘the greatest obstacle to integrating games into the curriculum is the mismatch between the skills and knowledge developed in games, and those recognised explicitly within the school system’. Mitchell and Savill-Smith (2004) noted that although games have been used to support various kinds of learning (eg, recall of content, computer literacy, strategic skills), such uses were often problematic, being complicated by the need to integrate games into existing educational contexts. Furthermore, games specifically designed to be educational were ‘typically disliked’ (p. 44) as well as being expensive to produce. Until recently, research on the use of games in education tended to focus on ‘stand alone’ or single player games. Such games can, to some extent, be assessed in terms of their content coverage or instructional design processes, and evaluated for their ‘fit’ with a given curriculum (eg, Kirriemuir, 2002). 
Gaming, however, is generally a social activity, and this is even more apparent when we move from a consideration of single player games to a focus on multiplayer, online games. Viewing games from a social perspective opens the possibility of understanding learning as a social achievement, not just a process of content acquisition or skills development (Squire, 2002). In this study, we focus on a particular genre of online, multiplayer game: an MMORPG. MMORPGs incorporate structural elements drawn from table-top role-playing games (Dungeons & Dragons being the classic example). Play takes place in an expansive and persistent graphically rendered world. Players form teams and guilds, undertake group missions, meet in banks and auction houses, chat, congregate in virtual cities and engage in different modes of play, which involve various forms of collaboration and competition. As Squire noted (2002), socially situated accounts of actual learning in games (as opposed to what they might, potentially, help people to learn) have been lacking, partly because the topic is so complex. How, indeed, should the ‘game’ be understood—is it limited to the rules, or the player’s interactions with these rules? Does it include other players, and all possible interactions, and extend to out-of-game related activities and associated materials such as fan forums? Such questions have methodological implications, and hint at the ambiguities that educators working with virtual worlds might face (Carr, Oliver & Burn, 2008). Learning in virtual worlds 445 © 2009 The Authors. Journal compilation © 2009 Becta. Work in this area is beginning to emerge, particularly in relation to the learning and mentoring that takes place within player ‘guilds’ and online clans (see Galarneau, 2005; Steinkuehler, 2005). However, it is interesting to note that the research emerging from a digital game studies perspective, including much of the work cited thus far, is rarely utilised by educators researching the pedagogic potentials of virtual worlds such as Second Life. This study is informed by and attempts to speak to both of these communities. Methodology The purpose of this study was to explore how people learn in such virtual worlds in general. It was decided that focusing on a MMORPG such as World of Warcraft would be practical and offer a rich opportunity to study learning. MMORPGs are games; they have rules and goals, and particular forms of progression. Expertise in a virtual world such as Second Life is more dispersed, because the range of activities is that much greater (encompassing building, playing, scripting, creating machinima or socialising, for instance). Each of these activities would involve particular forms of expertise. The ‘curriculum’ proposed by World of Warcraft is more specified. It was important to approach learning practices in this game without divorcing such phenomena from the real-world contexts in which play takes place. In order to study players’ accounts of learning and the links between their play and other aspects of their social lives, we sought participants who would interact with each other both in the context of the game and outside of it. To this end, we recruited couples that play together in the virtual environment of World of Warcraft, while sharing real space. 
This decision was taken to manage the potential complexity of studying social settings: couples were the simplest stable social formation that we could identify who would interact both in the context of the game and outside of this too. Interviews were conducted with five couples. These were theoretically sampled, to maximise diversity in players’ accounts (as with any theoretically sampled study, this means that no claims can be made about prevalence or typicality). Players were recruited through online guilds and real-world social networks. The first two sets of participants were sampled for convenience (two heterosexual couples); the rest were invited to participate in order to broaden this sample (one couple was chosen because they shared a single account, one where a partner had chosen to stop playing and one mother–son pairing). All participants were adults, and conventional ethical procedures to ensure informed consent were followed, as specified in the British Educational Research Association guidelines. The couples were interviewed in the game world at a location of their choosing. The interviews, which were semi-structured, were chat-logged and each lasted 60–90 minutes. The resulting transcripts were split into self-contained units (typically a single statement, or a question and answer, or a short exchange) and each was categorised 446 British Journal of Educational Technology Vol 40 No 3 2009 © 2009 The Authors. Journal compilation © 2009 Becta. thematically. The initial categories were then jointly reviewed in order to consolidate and refine them, cross-checking them with the source transcripts to ensure their relevance and coherence. At this stage, the categories included references to topics such as who started first, self-assessments of competence, forms of help, guilds, affect, domestic space and assets, ‘alts’ (multiple characters) and so on. These were then reviewed to develop a single category that might provide an overview or explanation of the process. It should be noted that although this approach was informed by ‘grounded theory’ processes as described in Glaser and Strauss (1967), it does not share their positivistic stance on the status of the model that has been developed. Instead, it accords more closely with the position taken by Charmaz (2000), who recognises the central role of the researcher in shaping the data collected and making sense of it. What is produced therefore is seen as a socially constructed model, based on personal narratives, rather than an objective account of an independent reality. Reviewing the categories that emerged in this case led to ‘management of resources’ being selected as a general marker of learning. As players moved towards greater competence, they identified and leveraged an increasingly complex array of in-game resources, while also negotiating real-world resources and demands. To consider this framework in greater detail, ‘management of resources’ was subdivided into three categories: ludic (concerning the skills, knowledge and practices of game play), social and material (concerning physical resources such as the embodied setting for play) (see Carr & Oliver, 2008). Using this explanation of learning, the transcripts were re-reviewed in order to ",
"title": ""
},
{
"docid": "62766b08b1666085543b732cf839dec0",
"text": "The research area of evolutionary multiobjective optimization (EMO) is reaching better understandings of the properties and capabilities of EMO algorithms, and accumulating much evidence of their worth in practical scenarios. An urgent emerging issue is that the favoured EMO algorithms scale poorly when problems have \"many\" (e.g. five or more) objectives. One of the chief reasons for this is believed to be that, in many-objective EMO search, populations are likely to be largely composed of nondominated solutions. In turn, this means that the commonly-used algorithms cannot distinguish between these for selective purposes. However, there are methods that can be used validly to rank points in a nondominated set, and may therefore usefully underpin selection in EMO search. Here we discuss and compare several such methods. Our main finding is that simple variants of the often-overlooked \"Average Ranking\" strategy usually outperform other methods tested, covering problems with 5-20 objectives and differing amounts of inter-objective correlation.",
"title": ""
},
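Editor's note: as a concrete reading of the "Average Ranking" strategy discussed in the passage above, here is a small Python sketch that ranks a population separately on each objective (lower is better) and orders candidates by their mean rank. The tie handling and the toy population are arbitrary choices for the example, not the exact variant evaluated in the paper.

```python
def average_ranking(population):
    """population: list of objective vectors (minimization). Returns indices sorted by mean rank."""
    n, m = len(population), len(population[0])
    ranks = [[0] * m for _ in range(n)]
    for obj in range(m):
        order = sorted(range(n), key=lambda i: population[i][obj])
        for rank, i in enumerate(order):
            ranks[i][obj] = rank            # 0 = best on this objective
    avg = [sum(r) / m for r in ranks]
    return sorted(range(n), key=lambda i: avg[i])

pop = [(0.1, 0.9, 0.5), (0.4, 0.4, 0.4), (0.9, 0.1, 0.8), (0.2, 0.8, 0.9)]
print(average_ranking(pop))   # indices from most to least preferred under average ranking
```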
{
"docid": "f2c345550dae6b6da01b4ce335173693",
"text": "The key or the scale information of a piece of music provides important clues on its high level musical content, like harmonic and melodic context, which can be useful for music classification, retrieval or further content analysis. Researchers have previously addressed the issue of finding the key for symbolically encoded music (MIDI); however, very little work has been done on key detection for acoustic music. In this paper, we present a method for estimating the root of diatonic scale and the key directly from acoustic signals (waveform) of popular and classical music. We propose a method to extract pitch profile features from the audio signal, which characterizes the tone distribution in the music. The diatonic scale root and key are estimated based on the extracted pitch profile by using a tone clustering algorithm and utilizing the tone structure of keys. Experiments on 72 music pieces have been conducted to evaluate the proposed techniques. The success rate of scale root detection for pop music pieces is above 90%.",
"title": ""
},
{
"docid": "a7f1565d548359c9f19bed304c2fbba6",
"text": "Handwritten character generation is a popular research topic with various applications. Various methods have been proposed in the literatures which are based on methods such as pattern recognition, machine learning, deep learning or others. However, seldom method could generate realistic and natural handwritten characters with a built-in determination mechanism to enhance the quality of generated image and make the observers unable to tell whether they are written by a person. To address these problems, in this paper, we proposed a novel generative adversarial network, multi-scale multi-class generative adversarial network (MSMC-CGAN). It is a neural network based on conditional generative adversarial network (CGAN), and it is designed for realistic multi-scale character generation. MSMC-CGAN combines the global and partial image information as condition, and the condition can also help us to generate multi-class handwritten characters. Our model is designed with unique neural network structures, image features and training method. To validate the performance of our model, we utilized it in Chinese handwriting generation, and an evaluation method called mean opinion score (MOS) was used. The MOS results show that MSMC-CGAN achieved good performance.",
"title": ""
},
{
"docid": "81ea96fd08b41ce6e526d614e9e46a7e",
"text": "BACKGROUND\nChronic alcoholism is known to impair the functioning of episodic and working memory, which may consequently reduce the ability to learn complex novel information. Nevertheless, semantic and cognitive procedural learning have not been properly explored at alcohol treatment entry, despite its potential clinical relevance. The goal of the present study was therefore to determine whether alcoholic patients, immediately after the weaning phase, are cognitively able to acquire complex new knowledge, given their episodic and working memory deficits.\n\n\nMETHODS\nTwenty alcoholic inpatients with episodic memory and working memory deficits at alcohol treatment entry and a control group of 20 healthy subjects underwent a protocol of semantic acquisition and cognitive procedural learning. The semantic learning task consisted of the acquisition of 10 novel concepts, while subjects were administered the Tower of Toronto task to measure cognitive procedural learning.\n\n\nRESULTS\nAnalyses showed that although alcoholic subjects were able to acquire the category and features of the semantic concepts, albeit slowly, they presented impaired label learning. In the control group, executive functions and episodic memory predicted semantic learning in the first and second halves of the protocol, respectively. In addition to the cognitive processes involved in the learning strategies invoked by controls, alcoholic subjects seem to attempt to compensate for their impaired cognitive functions, invoking capacities of short-term passive storage. Regarding cognitive procedural learning, although the patients eventually achieved the same results as the controls, they failed to automate the procedure. Contrary to the control group, the alcoholic groups' learning performance was predicted by controlled cognitive functions throughout the protocol.\n\n\nCONCLUSION\nAt alcohol treatment entry, alcoholic patients with neuropsychological deficits have difficulty acquiring novel semantic and cognitive procedural knowledge. Compared with controls, they seem to use more costly learning strategies, which are nonetheless less efficient. These learning disabilities need to be considered when treatment requiring the acquisition of complex novel information is envisaged.",
"title": ""
},
{
"docid": "1f9940ff3e31267cfeb62b2a7915aba9",
"text": "Infrared vein detection is one of the newest biomedical techniques researched today. Basic principal behind this is, when IR light transmitted on palm it passes through tissue and veins absorbs that light and the vein appears darker than surrounding tissue. This paper presents vein detection system using strong IR light source, webcam, Matlab based image processing algorithm. Using the Strong IR light source consisting of high intensity led and webcam camera we captured transillumination image of palm. Image processing algorithm is used to separate out the veins from palm.",
"title": ""
},
{
"docid": "53518256d6b4f3bb4e8dcf28a35f9284",
"text": "Customers often evaluate products at brick-and-mortar stores to identify their “best fit” product but buy it for a lower price at a competing online retailer. This free-riding behavior by customers is referred to as “showrooming” and we show that this is detrimental to the profits of the brick-and-mortar stores. We first analyze price matching as a short-term strategy to counter showrooming. Since customers purchase from the store at lower than store posted price when they ask for price-matching, one would expect the price matching strategy to be less effective as the fraction of customers who seek the matching increases. However, our results show that with an increase in the fraction of customers who seek price matching, the stores profits initially decrease and then increase. While price-matching could be used even when customers do not exhibit showrooming behavior, we find that it is more effective when customers do showrooming. We then study exclusivity of product assortments as a long-term strategy to counter showrooming. This strategy can be implemented in two different ways. One, by arranging for exclusivity of known brands (e.g. Macy’s has such an arrangement with Tommy Hilfiger), or, two, through creation of store brands at the brick-and-mortar store (T.J.Maxx uses a large number of store brands). Our analysis suggests that implementing exclusivity through store brands is better than exclusivity through known brands when the product category has few digital attributes. However, when customers do not showroom, the known brand strategy dominates the store brand strategy.",
"title": ""
},
{
"docid": "30e0918ec670bdab298f4f5bb59c3612",
"text": "Consider a single hard disk drive (HDD) composed of rotating platters and a single magnetic head. We propose a simple internal coding framework for HDDs that uses coding across drive blocks to reduce average block seek times. In particular, instead of the HDD controller seeking individual blocks, the drive performs coded-seeking: It seeks the closest subset of coded blocks, where a coded block contains partial information from multiple uncoded blocks. Coded-seeking is a tool that relaxes the scheduling of a full traveling salesman problem (TSP) on an HDD into a k-TSP. This may provide opportunities for new scheduling algorithms and to reduce average read times.",
"title": ""
},
{
"docid": "9e669f91dcce29a497c8524fccc1380d",
"text": "Increased serum cholesterol and decreased high-density lipoprotein (HDL) cholesterol level in serum and cerebro-spinal fluid is a risk factor for the development of Alzheimer disease, and also a predictor of cardiovascular events and stroke in epidemiologic studies. Niacin (vitamin B 3 or nicotinic acid) is the most effective medication in current clinical use for increasing HDL cholesterol and it substantially lowers triglycerides and LDL cholesterol. This review provides an update on the role of the increasing HDL cholesterol agent, niacin, as a neuroprotective and neurorestorative agent which promotes angiogenesis and arteriogenesis after stroke and improves neurobehavioral recovery following central nervous system diseases such as stroke, Alzheimer’s disease and multiple sclerosis. The mechanisms underlying the niacin induced neuroprotective and neurorestorative effects after stroke are discussed. The primary focus of this review is on stroke, with shorter discussion on Alzheimer disease and multiple sclerosis.",
"title": ""
},
{
"docid": "dc8180cdc6344f1dc5bfa4dbf048912c",
"text": "Image analysis is a key area in the computer vision domain that has many applications. Genetic Programming (GP) has been successfully applied to this area extensively, with promising results. Highlevel features extracted from methods such as Speeded Up Robust Features (SURF) and Histogram of Oriented Gradients (HoG) are commonly used for object detection with machine learning techniques. However, GP techniques are not often used with these methods, despite being applied extensively to image analysis problems. Combining the training process of GP with the powerful features extracted by SURF or HoG has the potential to improve the performance by generating high-level, domaintailored features. This paper proposes a new GP method that automatically detects di↵erent regions of an image, extracts HoG features from those regions, and simultaneously evolves a classifier for image classification. By extending an existing GP region selection approach to incorporate the HoG algorithm, we present a novel way of using high-level features with GP for image classification. The ability of GP to explore a large search space in an e cient manner allows all stages of the new method to be optimised simultaneously, unlike in existing approaches. The new approach is applied across a range of datasets, with promising results when compared to a variety of well-known machine learning techniques. Some high-performing GP individuals are analysed to give insight into how GP can e↵ectively be used with high-level features for image classification.",
"title": ""
},
{
"docid": "ac2d4f4e6c73c5ab1734bfeae3a7c30a",
"text": "While neural, encoder-decoder models have had significant empirical success in text generation, there remain several unaddressed problems with this style of generation. Encoderdecoder models are largely (a) uninterpretable, and (b) difficult to control in terms of their phrasing or content. This work proposes a neural generation system using a hidden semimarkov model (HSMM) decoder, which learns latent, discrete templates jointly with learning to generate. We show that this model learns useful templates, and that these templates make generation both more interpretable and controllable. Furthermore, we show that this approach scales to real data sets and achieves strong performance nearing that of encoderdecoder text generation models.",
"title": ""
},
{
"docid": "c352aff803967465db59c44801d4368c",
"text": "A voltage reference was developed using a 0.18 μm standard CMOS process technology, which is compatible with high power supply rejection ratio (PSRR) and low power consumption. The proposed reference circuit operating with all transistors biased in subthreshold region, which provide a reference voltage of 256 mV. The temperature coefficient (TC) was 5 ppm/°C at best and 6.6 ppm/°C on average, in a range from 0 to 140 °C. The line sensitivity was 163 ppm/V in a supply voltage range of 0.8 to 3.2 V, and the power supply rejection was 82 dB at 100 Hz. The current consumption is 30 nA at 140 °C. The chip area was 0.0042mm2.",
"title": ""
},
{
"docid": "844c75292441af560ed2d2abc1d175f6",
"text": "Completion rates for massive open online classes (MOOCs) are notoriously low, but learner intent is an important factor. By studying students who drop out despite their intent to complete the MOOC, it may be possible to develop interventions to improve retention and learning outcomes. Previous research into predicting MOOC completion has focused on click-streams, demographics, and sentiment analysis. This study uses natural language processing (NLP) to examine if the language in the discussion forum of an educational data mining MOOC is predictive of successful class completion. The analysis is applied to a subsample of 320 students who completed at least one graded assignment and produced at least 50 words in discussion forums. The findings indicate that the language produced by students can predict with substantial accuracy (67.8 %) whether students complete the MOOC. This predictive power suggests that NLP can help us both to understand student retention in MOOCs and to develop automated signals of student success.",
"title": ""
}
] |
scidocsrr
|
c921c3d5ca50dd48c3f3052eaeb079fd
|
Parasol and GreenSwitch: managing datacenters powered by renewable energy
|
[
{
"docid": "7d896fc0defac1bd5f11d19f555536cc",
"text": "Distributed processing frameworks, such as Yahoo!'s Hadoop and Google's MapReduce, have been successful at harnessing expansive datacenter resources for large-scale data analysis. However, their effect on datacenter energy efficiency has not been scrutinized. Moreover, the filesystem component of these frameworks effectively precludes scale-down of clusters deploying these frameworks (i.e. operating at reduced capacity). This paper presents our early work on modifying Hadoop to allow scale-down of operational clusters. We find that running Hadoop clusters in fractional configurations can save between 9% and 50% of energy consumption, and that there is a tradeoff between performance energy consumption. We also outline further research into the energy-efficiency of these frameworks.",
"title": ""
}
] |
[
{
"docid": "cab874a37c348491c85bfacb46d669b8",
"text": "Recent advances in meta-learning are providing the foundations to construct meta-learning assistants and task-adaptive learners. The goal of this special issue is to foster an interest in meta-learning by compiling representative work in the field. The contributions to this special issue provide strong insights into the construction of future meta-learning tools. In this introduction we present a common frame of reference to address work in meta-learning through the concept of meta-knowledge. We show how meta-learning can be simply defined as the process of exploiting knowledge about learning that enables us to understand and improve the performance of learning algorithms.",
"title": ""
},
{
"docid": "03a18f34ee67c579b4dd785e3ebd9baa",
"text": "Building a complete inertial navigation system using the limited quality data provided by current smartphones has been regarded challenging, if not impossible. This paper shows that by careful crafting and accounting for the weak information in the sensor samples, smartphones are capable of pure inertial navigation. We present a probabilistic approach for orientation and use-case free inertial odometry, which is based on double-integrating rotated accelerations. The strength of the model is in learning additive and multiplicative IMU biases online. We are able to track the phone position, velocity, and pose in realtime and in a computationally lightweight fashion by solving the inference with an extended Kalman filter. The information fusion is completed with zero-velocity updates (if the phone remains stationary), altitude correction from barometric pressure readings (if available), and pseudo-updates constraining the momentary speed. We demonstrate our approach using an iPad and iPhone in several indoor dead-reckoning applications and in a measurement tool setup.",
"title": ""
},
{
"docid": "39d2ffb43de7ef26495b38fbc7c7e394",
"text": "Rugby is a very popular sport and is played from primary school to senior level in more than a hundred countries worldwide. Certain anthropometric, physical, motor abilities and game-specific variables can distinguish between talented and less talented rugby players. However, a void still exists as to how these abilities change in growing and developing rugby players. At present the positional selection of players is left to the coaches and teachers, who do not necessarily possess the experience or knowledge for proper positional selections. The possibility to identify positional requirements by using a scientifically compiled test battery for rugby players will assist coaches and teachers in the correct positional selection of players at specific ages. The aim of this study was to compare playing groups in terms of anthropometric, rugby-specific skills, physical and motor components among U 13 (n=21), U 16 (n=22), U 18 (n=18) and U 19 (n=19) elite rugby players. These age groups were divided in four positional groups: tight forwards (props, hooker, locks), loose forwards (flankers, eight-man), halves (scrumand fly half) and back-line (centres, wings and full back). Research on talent identification normally uses small groups because elite athletes represent only the talented or gifted players. An analysis of variance (ANOVA) was performed to establish any significant differences (d-value) between playing groups in terms of anthropometric, rugby-specific skills, physical and motor components. In conclusion it seems that forwards, and many coaches are of this opinion, develop much later in terms of anthropometric components. The back-line players reveal many more differences in terms of rugby-specific skills, physical and motor components. It is also interesting to note that the older the players, the fewer the differences that were apparent in terms of rugby-specific skills, physical and motor components. It thus seems that the positional requirements of adolescent rugby players differ among age groups, as well as among adult rugby players. Therefore it is necessary to compile scientific test batteries specifically for each age group. This might be due to better physical and motor conditioning as well as coaching of all players, irrelevant of positional groups.",
"title": ""
},
{
"docid": "34e8cbfa11983f896d9e159daf08cc27",
"text": "XtratuM is an hypervisor designed to meet safety critical requirements. Initially designed for x86 architectures (version 2.0), it has been strongly redesigned for SPARC v8 arquitecture and specially for the to the LEON2 processor. Current version 2.2, includes all the functionalities required to build safety critical systems based on ARINC 653, AUTOSTAR and other standards. Although XtratuMdoes not provides a compliant API with these standards, partitions can offer easily the appropriated API to the applications. XtratuM is being used by the aerospace sector to build software building blocks of future generic on board software dedicated to payloads management units in aerospace. XtratuM provides ARINC 653 scheduling policy, partition management, inter-partition communications, health monitoring, logbooks, traces, and other services to easily been adapted to the ARINC standard. The configuration of the system is specified in a configuration file (XML format) and it is compiled to achieve a static configuration of the final container (XtratuM and the partition’s code) to be deployed to the hardware board. As far as we know, XtratuM is the first hypervisor for the SPARC v8 arquitecture. In this paper, the main design aspects are discussed and the internal architecture described. An evaluation of the most significant metrics is also provided. This evaluation permits to affirm that the overhead of a hypervisor is lower than 3% if the slot duration is higher than 1 millisecond.",
"title": ""
},
{
"docid": "ac7789e3e36716496ed01800f4099412",
"text": "Dietary assessment is essential for understanding the link between diet and health. We develop a context based image analysis system for dietary assessment to automatically segment, identify and quantify food items from images. In this paper, we describe image segmentation and object classification methods used in our system to detect and identify food items. We then use context information to refine the classification results. We define contextual dietary information as the data that is not directly produced by the visual appearance of an object in the image, but yields information about a user’s diet or can be used for diet planning. We integrate contextual dietary information that a user supplies to the system either explicitly or implicitly to correct potential misclassifications. We evaluate our models using food image datasets collected during dietary assessment studies from natural eating events.",
"title": ""
},
{
"docid": "7cc3d7722f978545a6735ae4982ffc62",
"text": "A multiband printed monopole slot antenna promising for operating as an internal antenna in the thin-profile laptop computer for wireless wide area network (WWAN) operation is presented. The proposed antenna is formed by three monopole slots operated at their quarter-wavelength modes and arranged in a compact planar configuration. A step-shaped microstrip feedline is applied to excite the three monopole slots at their respective optimal feeding position, and two wide operating bands at about 900 and 1900 MHz are obtained for the antenna to cover all the five operating bands of GSM850/900/1800/1900/UMTS for WWAN operation. The antenna is easily printed on a small-size FR4 substrate and shows a length of 60 mm only and a height of 12 mm when mounted at the top edge of the system ground plane or supporting metal frame of the laptop display. Details of the proposed antenna are presented and studied.",
"title": ""
},
{
"docid": "1f1a6df3b85a35af375a47a93584f498",
"text": "Natural language generation (NLG) is an important component of question answering(QA) systems which has a significant impact on system quality. Most tranditional QA systems based on templates or rules tend to generate rigid and stylised responses without the natural variation of human language. Furthermore, such methods need an amount of work to generate the templates or rules. To address this problem, we propose a Context-Aware LSTM model for NLG. The model is completely driven by data without manual designed templates or rules. In addition, the context information, including the question to be answered, semantic values to be addressed in the response, and the dialogue act type during interaction, are well approached in the neural network model, which enables the model to produce variant and informative responses. The quantitative evaluation and human evaluation show that CA-LSTM obtains state-of-the-art performance.",
"title": ""
},
{
"docid": "3f83d41f66b2c3b6b62afb3d3a3d8562",
"text": "Many recommendation algorithms suffer from popularity bias in their output: popular items are recommended frequently and less popular ones rarely, if at all. However, less popular, long-tail items are precisely those that are often desirable recommendations. In this paper, we introduce a flexible regularization-based framework to enhance the long-tail coverage of recommendation lists in a learning-to-rank algorithm. We show that regularization provides a tunable mechanism for controlling the trade-off between accuracy and coverage. Moreover, the experimental results using two data sets show that it is possible to improve coverage of long tail items without substantial loss of ranking performance.",
"title": ""
},
{
"docid": "bb6e240e97edf5e8ff33b2e1af04cd7b",
"text": "BACKGROUND\nThe techniques of lower blepharoplasty are evolving to reflect the concept that the lower eyelid contour does not stop at the inferior orbital rim, and that the lid-cheek junction must often be modified to restore the midface to a youthful configuration. Multiple procedures have been proposed to smooth the lid-cheek junction and tear trough. The author proposes a technique of carbon dioxide laser lysis of the orbicularis retaining ligament and of the orbicularis oculi insertion onto the maxilla to release the tethering of the lower lid and cheek and allow recontouring of the lid-cheek junction in an extended transcutaneous lower blepharoplasty.\n\n\nMETHODS\nRetrospective review of 80 extended lower blepharoplasty procedures with carbon dioxide laser lysis of the orbicularis retaining ligament and of the orbicularis oculi insertion performed in the past 3 years was undertaken. Follow-up ranged from 4 to 26 months, with an average of 7.2 months. The efficacy, risks, and complications of this procedure were assessed.\n\n\nRESULTS\nThe complication rate for this procedure is not significantly higher than that for standard transcutaneous blepharoplasty, and the procedure allows significant improvement of the lid-cheek junction and rejuvenation of the upper midface.\n\n\nCONCLUSIONS\nLysis of the orbicularis retaining ligament and lower orbicularis oculi insertion is a safe and effective adjunct to lower blepharoplasty. It is a powerful modality that allows significant rejuvenation of the lid-cheek complex and upper cheek.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, IV.",
"title": ""
},
{
"docid": "547af58bf28ad045b773584fc964de2f",
"text": "Building automation systems (BAS) are widely deployed in modern buildings. They are typically engineered adhering to the classical, hierarchical 3-layer model, which has served well in the past but is reaching its limits in complex BAS. The move to more integrated building services also requires tighter integration of the mostly heterogeneous technologies. Vertical integration promises seamless communication from the individual sensor up to IT systems. The hierarchy levels are eliminated in favor of a service-oriented model. Several approaches promise to accomplish this goal: integration of network infrastructure, convergence of different protocols, distribution of services and integration to IT systems.",
"title": ""
},
{
"docid": "8217042c3779267570276664dc960612",
"text": "We introduce a taxonomy that reflects the theoretical contribution of empirical articles along two dimensions: theory building and theory testing. We used that taxonomy to track trends in the theoretical contributions offered by articles over the past five decades. Results based on data from a sample of 74 issues of the Academy of Management Journal reveal upward trends in theory building and testing over time. In addition, the levels of theory building and testing within articles are significant predictors of citation rates. In particular, articles rated moderate to high on both dimensions enjoyed the highest levels of citations.",
"title": ""
},
{
"docid": "f75b11bc21dc711b76a7a375c2a198d3",
"text": "In many application areas like e-science and data-warehousing detailed information about the origin of data is required. This kind of information is often referred to as data provenance or data lineage. The provenance of a data item includes information about the processes and source data items that lead to its creation and current representation. The diversity of data representation models and application domains has lead to a number of more or less formal definitions of provenance. Most of them are limited to a special application domain, data representation model or data processing facility. Not surprisingly, the associated implementations are also restricted to some application domain and depend on a special data model. In this paper we give a survey of data provenance models and prototypes, present a general categorization scheme for provenance models and use this categorization scheme to study the properties of the existing approaches. This categorization enables us to distinguish between different kinds of provenance information and could lead to a better understanding of provenance in general. Besides the categorization of provenance types, it is important to include the storage, transformation and query requirements for the different kinds of provenance information and application domains in our considerations. The analysis of existing approaches will assist us in revealing open research problems in the area of data provenance.",
"title": ""
},
{
"docid": "3117a335e4324b151f25d0d3b4279b3c",
"text": "Finding more effective solution and tools for complicated managerial problems is one of the most important and dominant subjects in management studies. With the advancement of computer and communication technology, the tools that are using for management decisions have undergone a massive change. Artificial Neural Networks (ANNs) are one of these tools that have become a critical component of business intelligence. In this article we describe the basic of neural networks as well as a review of selected works done in application of ANNs in management sciences.",
"title": ""
},
{
"docid": "2f9998cc4bd8fb6a15baf03686e34353",
"text": "The CRISPR (clustered regularly interspaced short palindromic repeat)-Cas9 (CRISPR-associated nuclease 9) system is a versatile tool for genome engineering that uses a guide RNA (gRNA) to target Cas9 to a specific sequence. This simple RNA-guided genome-editing technology has become a revolutionary tool in biology and has many innovative applications in different fields. In this review, we briefly introduce the Cas9-mediated genome-editing method, summarize the recent advances in CRISPR/Cas9 technology, and discuss their implications for plant research. To date, targeted gene knockout using the Cas9/gRNA system has been established in many plant species, and the targeting efficiency and capacity of Cas9 has been improved by optimizing its expression and that of its gRNA. The CRISPR/Cas9 system can also be used for sequence-specific mutagenesis/integration and transcriptional control of target genes. We also discuss off-target effects and the constraint that the protospacer-adjacent motif (PAM) puts on CRISPR/Cas9 genome engineering. To address these problems, a number of bioinformatic tools are available to help design specific gRNAs, and new Cas9 variants and orthologs with high fidelity and alternative PAM specificities have been engineered. Owing to these recent efforts, the CRISPR/Cas9 system is becoming a revolutionary and flexible tool for genome engineering. Adoption of the CRISPR/Cas9 technology in plant research would enable the investigation of plant biology at an unprecedented depth and create innovative applications in precise crop breeding.",
"title": ""
},
{
"docid": "ced911b92e427c1d58be739e20f47fcd",
"text": "Software defined networking and network function virtualization are widely deemed two critical pillars of the future service provider network. The expectation for significant operations cost savings from network programmability, open APIs, and operations automation is frequently mentioned as one of the primary benefits of the NFV/SDN vision. Intuitively, the flexibility and simplification values attributed to NFV/SDN lead the industry to the conclusion that operating expenses will decrease. This article provides a view into the operational costs of a typical service provider and discusses how the NFV/SDN attributes can be expected to influence the business equation. The drivers of OPEX change, the directionality of the change, and the categories of OPEX most affected based on our analysis from interactions with a number of service providers worldwide are presented in a structured analysis.",
"title": ""
},
{
"docid": "896edd4e7b3db05d67035a7159b927d6",
"text": "Chronic rhinosinusitis (CRS) is a heterogeneous disease characterized by local inflammation of the upper airways and sinuses which persists for at least 12 weeks. CRS can be divided into two phenotypes dependent on the presence of nasal polyps (NPs); CRS with NPs (CRSwNP) and CRS without NPs (CRSsNP). Immunological patterns in the two diseases are known to be different. Inflammation in CRSsNP is rarely investigated and limited studies show that CRSsNP is characterized by type 1 inflammation. Inflammation in CRSwNP is well investigated and CRSwNP in Western countries shows type 2 inflammation and eosinophilia in NPs. In contrast, mixed inflammatory patterns are found in CRSwNP in Asia and the ratio of eosinophilic NPs and non-eosinophilic NPs is almost 50:50 in these countries. Inflammation in eosinophilic NPs is mainly controlled by type 2 cytokines, IL-5 and IL-13, which can be produced from several immune cells including Th2 cells, mast cells and group 2 innate lymphoid cells (ILC2s) that are all elevated in eosinophilic NPs. IL-5 strongly induces eosinophilia. IL-13 activates macrophages, B cells and epithelial cells to induce recruitment of eosinophils and Th2 cells, IgE mediated reactions and remodeling. Epithelial derived cytokines, TSLP, IL-33 and IL-1 can directly and indirectly control type 2 cytokine production from these cells in eosinophilic NPs. Recent clinical trials showed the beneficial effect on eosinophilic NPs and/or asthma by monoclonal antibodies against IL-5, IL-4Rα, IgE and TSLP suggesting that they can be therapeutic targets for eosinophilic CRSwNP.",
"title": ""
},
{
"docid": "3bf35473dbed1029c9ed1e75470b7af1",
"text": "Swarm intelligence (SI)-based metaheuristics are well applied to solve real-time optimization problems of efficient node clustering and energy-aware data routing in wireless sensor networks. This paper presents another superior approach for these optimization problems based on an artificial bee colony metaheuristic. The proposed clustering algorithm presents an efficient cluster formation mechanism with improved cluster head selection criteria based on a multi-objective fitness function, whereas the routing algorithm is devised to consume minimum energy with least hop-count for data transmission. Extensive evaluation and comparison of the proposed approach with existing wellknown SI-based algorithms demonstrate its superiority over others in terms of packet delivery ratio, average energy consumed, average throughput and network life.",
"title": ""
},
{
"docid": "34546e42bd78161259d2bc190e36c9f7",
"text": "Peer to Peer networks are the leading cause for music piracy but also used for music sampling prior to purchase. In this paper we investigate the relations between music file sharing and sales (both physical and digital)using large Peer-to-Peer query database information. We compare file sharing information on songs to their popularity on the Billboard Hot 100 and the Billboard Digital Songs charts, and show that popularity trends of songs on the Billboard have very strong correlation (0.88-0.89) to their popularity on a Peer-to-Peer network. We then show how this correlation can be utilized by common data mining algorithms to predict a song's success in the Billboard in advance, using Peer-to-Peer information.",
"title": ""
},
{
"docid": "c43137ad8361804cf3683fb675340430",
"text": "Variable neighborhood search (VNS) is a metaheuristic, or framework for building heuristics, based upon systematic change of neighborhoods both in a decent phase to find a local minimum, and in a perturbation phase to get out of the corresponding valley. It was first proposed in 1997 and since has rapidly developed both in its methods and its applications. Both of these aspects are thouroughly reviewed and a large bibliography is given.",
"title": ""
},
{
"docid": "8822138c493df786296c02315bea5802",
"text": "Photodefinable Polyimides (PI) and polybenz-oxazoles (PBO) which have been widely used for various electronic applications such as buffer coating, interlayer dielectric and protection layer usually need high temperature cure condition over 300 °C to complete the cyclization and achieve good film properties. In addition, PI and PBO are also utilized recently for re-distribution layer of wafer level package. In this application, lower temperature curability is strongly required in order to prevent the thermal damage of the semi-conductor device and the other packaging material. Then, to meet this requirement, we focused on pre-cyclized polyimide with phenolic hydroxyl groups since this polymer showed the good solubility to aqueous TMAH and there was no need to apply high temperature cure condition. As a result of our study, the positive-tone photodefinable material could be obtained by using DNQ and combination of epoxy cross-linker enabled to enhance the chemical and PCT resistance of the cured film made even at 170 °C. Furthermore, the adhesion to copper was improved probably due to secondary hydroxyl groups which were generated from reacted epoxide groups. In this report, we introduce our concept of novel photodefinable positive-tone polyimide for low temperature cure.",
"title": ""
}
] |
scidocsrr
|
954a411bf58312459ac38b4b9d4d3bf1
|
Foresight: Rapid Data Exploration Through Guideposts
|
[
{
"docid": "299242a092512f0e9419ab6be13f9b93",
"text": "In this paper, we present ForeCache, a general-purpose tool for exploratory browsing of large datasets. ForeCache utilizes a client-server architecture, where the user interacts with a lightweight client-side interface to browse datasets, and the data to be browsed is retrieved from a DBMS running on a back-end server. We assume a detail-on-demand browsing paradigm, and optimize the back-end support for this paradigm by inserting a separate middleware layer in front of the DBMS. To improve response times, the middleware layer fetches data ahead of the user as she explores a dataset.\n We consider two different mechanisms for prefetching: (a) learning what to fetch from the user's recent movements, and (b) using data characteristics (e.g., histograms) to find data similar to what the user has viewed in the past. We incorporate these mechanisms into a single prediction engine that adjusts its prediction strategies over time, based on changes in the user's behavior. We evaluated our prediction engine with a user study, and found that our dynamic prefetching strategy provides: (1) significant improvements in overall latency when compared with non-prefetching systems (430% improvement); and (2) substantial improvements in both prediction accuracy (25% improvement) and latency (88% improvement) relative to existing prefetching techniques.",
"title": ""
},
{
"docid": "6103a365705a6083e40bb0ca27f6ca78",
"text": "Confirmation bias, as the term is typically used in the psychological literature, connotes the seeking or interpreting of evidence in ways that are partial to existing beliefs, expectations, or a hypothesis in hand. The author reviews evidence of such a bias in a variety of guises and gives examples of its operation in several practical contexts. Possible explanations are considered, and the question of its utility or disutility is discussed.",
"title": ""
},
{
"docid": "467c2a106b6fd5166f3c2a44d655e722",
"text": "AutoVis is a data viewer that responds to content – text, relational tables, hierarchies, streams, images – and displays the information appropriately (that is, as an expert would). Its design rests on the grammar of graphics, scagnostics and a modeler based on the logic of statistical analysis. We distinguish an automatic visualization system (AVS) from an automated visualization system. The former automatically makes decisions about what is to be visualized. The latter is a programming system for automating the production of charts, graphs and visualizations. An AVS is designed to provide a first glance at data before modeling and analysis are done. AVS is designed to protect researchers from ignoring missing data, outliers, miscodes and other anomalies that can violate statistical assumptions or otherwise jeopardize the validity of models. The design of this system incorporates several unique features: (1) a spare interface – analysts simply drag a data source into an empty window, (2) a graphics generator that requires no user definitions to produce graphs, (3) a statistical analyzer that protects users from false conclusions, and (4) a pattern recognizer that responds to the aspects (density, shape, trend, and so on) that professional statisticians notice when investigating data sets.",
"title": ""
}
] |
[
{
"docid": "4d1ae6893fa8b19d05da5794a3fb7978",
"text": "This study analyzes the influence of IT governance on IT investment performance. IT investment performance is known to vary widely across firms. Prior studies find that the variations are often due to the lack of investments in complementary organizational capitals. The presence of complementarities between IT and organizational capitals suggests that IT investment decisions should be made at the right organizational level to ensure that both IT and organizational factors are taken into consideration. IT governance, which determines the allocation of IT decision rights within a firm, therefore, plays an important role in IT investment performance. This study tests this proposition by using a sample dataset from Fortune 1000 firms. A key challenge in this study is that the appropriate IT governance mode varies across firms as well as across business units within a firm. We address this challenge by developing an empirical model of IT governance that is based on earlier studies on multiple contingency factors of IT governance. We use the empirical model to predict the appropriate IT governance mode for each business unit within a firm and use the difference between the predicted and observed IT governance mode to derive a measure of a firm’s IT governance misalignment. We find that firms with high IT governance misalignment receive no benefits from their IT investments; whereas firms with low IT governance misalignment obtain two to three times the value from their IT investments compared to firms with average IT governance misalignment. Our results highlight the importance of IT governance in realizing value from IT investments and confirm the validity of using the multiple contingency factor model in assessing IT governance decisions.",
"title": ""
},
{
"docid": "972ef2897c352ad384333dd88588f0e6",
"text": "We describe a vision-based obstacle avoidance system for of f-road mobile robots. The system is trained from end to end to map raw in put images to steering angles. It is trained in supervised mode t predict the steering angles provided by a human driver during training r uns collected in a wide variety of terrains, weather conditions, lighting conditions, and obstacle types. The robot is a 50cm off-road truck, with two f orwardpointing wireless color cameras. A remote computer process es the video and controls the robot via radio. The learning system is a lar ge 6-layer convolutional network whose input is a single left/right pa ir of unprocessed low-resolution images. The robot exhibits an excell ent ability to detect obstacles and navigate around them in real time at spe ed of 2 m/s.",
"title": ""
},
{
"docid": "0f0305afce53933df1153af6a31c09fb",
"text": "In the study of indoor simultaneous localization and mapping (SLAM) problems using a stereo camera, two types of primary features-point and line segments-have been widely used to calculate the pose of the camera. However, many feature-based SLAM systems are not robust when the camera moves sharply or turns too quickly. In this paper, an improved indoor visual SLAM method to better utilize the advantages of point and line segment features and achieve robust results in difficult environments is proposed. First, point and line segment features are automatically extracted and matched to build two kinds of projection models. Subsequently, for the optimization problem of line segment features, we add minimization of angle observation in addition to the traditional re-projection error of endpoints. Finally, our model of motion estimation, which is adaptive to the motion state of the camera, is applied to build a new combinational Hessian matrix and gradient vector for iterated pose estimation. Furthermore, our proposal has been tested on EuRoC MAV datasets and sequence images captured with our stereo camera. The experimental results demonstrate the effectiveness of our improved point-line feature based visual SLAM method in improving localization accuracy when the camera moves with rapid rotation or violent fluctuation.",
"title": ""
},
{
"docid": "9c9e3261c293aedea006becd2177a6d5",
"text": "This paper proposes a motion-focusing method to extract key frames and generate summarization synchronously for surveillance videos. Within each pre-segmented video shot, the proposed method focuses on one constant-speed motion and aligns the video frames by fixing this focused motion into a static situation. According to the relative motion theory, the other objects in the video are moving relatively to the selected kind of motion. This method finally generates a summary image containing all moving objects and embedded with spatial and motional information, together with key frames to provide details corresponding to the regions of interest in the summary image. We apply this method to the lane surveillance system and the results provide us a new way to understand the video efficiently.",
"title": ""
},
{
"docid": "36874bcbbea1563542265cf2c6261ede",
"text": "Given the tremendous growth of online videos, video thumbnail, as the common visualization form of video content, is becoming increasingly important to influence user's browsing and searching experience. However, conventional methods for video thumbnail selection often fail to produce satisfying results as they ignore the side semantic information (e.g., title, description, and query) associated with the video. As a result, the selected thumbnail cannot always represent video semantics and the click-through rate is adversely affected even when the retrieved videos are relevant. In this paper, we have developed a multi-task deep visual-semantic embedding model, which can automatically select query-dependent video thumbnails according to both visual and side information. Different from most existing methods, the proposed approach employs the deep visual-semantic embedding model to directly compute the similarity between the query and video thumbnails by mapping them into a common latent semantic space, where even unseen query-thumbnail pairs can be correctly matched. In particular, we train the embedding model by exploring the large-scale and freely accessible click-through video and image data, as well as employing a multi-task learning strategy to holistically exploit the query-thumbnail relevance from these two highly related datasets. Finally, a thumbnail is selected by fusing both the representative and query relevance scores. The evaluations on 1,000 query-thumbnail dataset labeled by 191 workers in Amazon Mechanical Turk have demonstrated the effectiveness of our proposed method.",
"title": ""
},
{
"docid": "48b78cae830b76b85c5205a9728244be",
"text": "The striking ability of music to elicit emotions assures its prominent status in human culture and every day life. Music is often enjoyed and sought for its ability to induce or convey emotions, which may manifest in anything from a slight variation in mood, to changes in our physical condition and actions. Consequently, research on how we might associate musical pieces with emotions and, more generally, how music brings about an emotional response is attracting ever increasing attention. First, this paper provides a thorough review of studies on the relation of music and emotions from di↵erent disciplines. We then propose new insights to enhance automated music emotion recognition models using recent results from psychology, musicology, a↵ective computing, semantic technologies and music information retrieval.",
"title": ""
},
{
"docid": "709c06739d20fe0a5ba079b21e5ad86d",
"text": "Bug triaging refers to the process of assigning a bug to the most appropriate developer to fix. It becomes more and more difficult and complicated as the size of software and the number of developers increase. In this paper, we propose a new framework for bug triaging, which maps the words in the bug reports (i.e., the term space) to their corresponding topics (i.e., the topic space). We propose a specialized topic modeling algorithm named <italic> multi-feature topic model (MTM)</italic> which extends Latent Dirichlet Allocation (LDA) for bug triaging. <italic>MTM </italic> considers product and component information of bug reports to map the term space to the topic space. Finally, we propose an incremental learning method named <italic>TopicMiner</italic> which considers the topic distribution of a new bug report to assign an appropriate fixer based on the affinity of the fixer to the topics. We pair <italic> TopicMiner</italic> with MTM (<italic>TopicMiner<inline-formula><tex-math notation=\"LaTeX\">$^{MTM}$</tex-math> <alternatives><inline-graphic xlink:href=\"xia-ieq1-2576454.gif\"/></alternatives></inline-formula></italic>). We have evaluated our solution on 5 large bug report datasets including GCC, OpenOffice, Mozilla, Netbeans, and Eclipse containing a total of 227,278 bug reports. We show that <italic>TopicMiner<inline-formula><tex-math notation=\"LaTeX\"> $^{MTM}$</tex-math><alternatives><inline-graphic xlink:href=\"xia-ieq2-2576454.gif\"/></alternatives></inline-formula> </italic> can achieve top-1 and top-5 prediction accuracies of 0.4831-0.6868, and 0.7686-0.9084, respectively. We also compare <italic>TopicMiner<inline-formula><tex-math notation=\"LaTeX\">$^{MTM}$</tex-math><alternatives> <inline-graphic xlink:href=\"xia-ieq3-2576454.gif\"/></alternatives></inline-formula></italic> with Bugzie, LDA-KL, SVM-LDA, LDA-Activity, and Yang et al.'s approach. The results show that <italic>TopicMiner<inline-formula> <tex-math notation=\"LaTeX\">$^{MTM}$</tex-math><alternatives><inline-graphic xlink:href=\"xia-ieq4-2576454.gif\"/> </alternatives></inline-formula></italic> on average improves top-1 and top-5 prediction accuracies of Bugzie by 128.48 and 53.22 percent, LDA-KL by 262.91 and 105.97 percent, SVM-LDA by 205.89 and 110.48 percent, LDA-Activity by 377.60 and 176.32 percent, and Yang et al.'s approach by 59.88 and 13.70 percent, respectively.",
"title": ""
},
{
"docid": "4cc3f3a5e166befe328b6e18bc836e89",
"text": "Virtual human characters are found in a broad range of applications, from movies, games and networked virtual environments to teleconferencing and tutoring applications. Such applications are available on a variety of platforms, from desktop and web to mobile devices. High-quality animation is an essential prerequisite for realistic and believable virtual characters. Though researchers and application developers have ample animation techniques for virtual characters at their disposal, implementation of these techniques into an existing application tends to be a daunting and time-consuming task. In this paper we present visage|SDK, a versatile framework for real-time character animation based on MPEG-4 FBA standard that offers a wide spectrum of features that includes animation playback, lip synchronization and facial motion tracking, while facilitating rapid production of art assets and easy integration with existing graphics engines.",
"title": ""
},
{
"docid": "002fe3efae0fc9f88690369496ce5e7d",
"text": "Experimental evidence suggests that emotions can both speed-up and slow-down the internal clock. Speeding up has been observed for to-be-timed emotional stimuli that have the capacity to sustain attention, whereas slowing down has been observed for to-be-timed neutral stimuli that are presented in the context of emotional distractors. These effects have been explained by mechanisms that involve changes in bodily arousal, attention, or sentience. A review of these mechanisms suggests both merits and difficulties in the explanation of the emotion-timing link. Therefore, a hybrid mechanism involving stimulus-specific sentient representations is proposed as a candidate for mediating emotional influences on time. According to this proposal, emotional events enhance sentient representations, which in turn support temporal estimates. Emotional stimuli with a larger share in ones sentience are then perceived as longer than neutral stimuli with a smaller share.",
"title": ""
},
{
"docid": "782346defc00d03c61fb8f694d612653",
"text": "We present PrologCheck, an automatic tool for propertybased testing of programs in the logic programming language Prolog with randomised test data generation. The tool is inspired by the well known QuickCheck, originally designed for the functional programming language Haskell. It includes features that deal with specific characteristics of Prolog such as its relational nature (as opposed to Haskell) and the absence of a strong type discipline. PrologCheck expressiveness stems from describing properties as Prolog goals. It enables the definition of custom test data generators for random testing tailored for the property to be tested. Further, it allows the use of a predicate specification language that supports types, modes and constraints on the number of successful computations. We evaluate our tool on a number of examples and apply it successfully to debug a Prolog library for AVL search trees.",
"title": ""
},
{
"docid": "ba4df2305d4f292a6ee0f033e58d7a16",
"text": "Reliable and real-time 3D reconstruction and localization functionality is a crucial prerequisite for the navigation of actively controlled capsule endoscopic robots as an emerging, minimally invasive diagnostic and therapeutic technology for use in the gastrointestinal (GI) tract. In this study, we propose a fully dense, non-rigidly deformable, strictly real-time, intraoperative map fusion approach for actively controlled endoscopic capsule robot applications which combines magnetic and vision-based localization, with non-rigid deformations based frame-to-model map fusion. The performance of the proposed method is evaluated using four different ex-vivo porcine stomach models. Across different trajectories of varying speed and complexity, and four different endoscopic cameras, the root mean square surface reconstruction errors vary from 1.58 to 2.17 cm.",
"title": ""
},
{
"docid": "a931f939e2e0c0f2f8940796ee23e957",
"text": "PURPOSE OF REVIEW\nMany patients requiring cardiac arrhythmia device surgery are on chronic oral anticoagulation therapy. The periprocedural management of their anticoagulation presents a dilemma to physicians, particularly in the subset of patients with moderate-to-high risk of arterial thromboembolic events. Physicians have responded by treating patients with bridging anticoagulation while oral anticoagulation is temporarily discontinued. However, there are a number of downsides to bridging anticoagulation around device surgery; there is a substantial risk of significant device pocket hematoma with important clinical sequelae; bridging anticoagulation may lead to more arterial thromboembolic events and bridging anticoagulation is expensive.\n\n\nRECENT FINDINGS\nIn response to these issues, a number of centers have explored the option of performing device surgery without cessation of oral anticoagulation. The observational data suggest a greatly reduced hematoma rate with this strategy. Despite these encouraging results, most physicians are reluctant to move to operating on continued Coumadin in the absence of confirmatory data from a randomized trial.\n\n\nSUMMARY\nWe have designed a prospective, single-blind, randomized, controlled trial to address this clinical question. In the conventional arm, patients will be bridged. In the experimental arm, patients will continue on oral anticoagulation and the primary outcome is clinically significant hematoma. Our study has clinical relevance to at least 70 000 patients per year in North America.",
"title": ""
},
{
"docid": "c196444f2093afc3092f85b8fbb67da5",
"text": "The objective of this paper is to evaluate “human action recognition without human”. Motion representation is frequently discussed in human action recognition. We have examined several sophisticated options, such as dense trajectories (DT) and the two-stream convolutional neural network (CNN). However, some features from the background could be too strong, as shown in some recent studies on human action recognition. Therefore, we considered whether a background sequence alone can classify human actions in current large-scale action datasets (e.g., UCF101). In this paper, we propose a novel concept for human action analysis that is named “human action recognition without human”. An experiment clearly shows the effect of a background sequence for understanding an action label.",
"title": ""
},
{
"docid": "8b45d7f55e7968a203da2eb09c712858",
"text": "The importance of demonstrating the value achieved from IT investments is long established in the Computer Science (CS) and Information Systems (IS) literature. However, emerging technologies such as the ever-changing complex area of cloud computing present new challenges and opportunities for demonstrating how IT investments lead to business value. This paper conducts a multidisciplinary systematic literature review drawing from CS, IS, and Business disciplines to understand the current evidence on the quantification of financial value from cloud computing investments. The study identified 53 articles, which were coded in an analytical framework across six themes (measurement type, costs, benefits, adoption type, actor and service model). Future research directions were presented for each theme. The review highlights the need for multi-disciplinary research which both explores and further develops the conceptualization of value in cloud computing research, and research which investigates how IT value manifests itself across the chain of service provision and in inter-organizational scenarios.",
"title": ""
},
{
"docid": "fd2e7025271565927f43784f0c69c3fb",
"text": "In this paper, we have proposed a fingerprint orientation model based on 2D Fourier expansions (FOMFE) in the phase plane. The FOMFE does not require prior knowledge of singular points (SPs). It is able to describe the overall ridge topology seamlessly, including the SP regions, even for noisy fingerprints. Our statistical experiments on a public database show that the proposed FOMFE can significantly improve the accuracy of fingerprint feature extraction and thus that of fingerprint matching. Moreover, the FOMFE has a low-computational cost and can work very efficiently on large fingerprint databases. The FOMFE provides a comprehensive description for orientation features, which has enabled its beneficial use in feature-related applications such as fingerprint indexing. Unlike most indexing schemes using raw orientation data, we exploit FOMFE model coefficients to generate the feature vector. Our indexing experiments show remarkable results using different fingerprint databases",
"title": ""
},
{
"docid": "4bf253b2349978d17fd9c2400df61d21",
"text": "This paper proposes an architecture for the mapping between syntax and phonology – in particular, that aspect of phonology that determines the linear ordering of words. We propose that linearization is restricted in two key ways. (1) the relative ordering of words is fixed at the end of each phase, or ‘‘Spell-out domain’’; and (2) ordering established in an earlier phase may not be revised or contradicted in a later phase. As a consequence, overt extraction out of a phase P may apply only if the result leaves unchanged the precedence relations established in P. We argue first that this architecture (‘‘cyclic linearization’’) gives us a means of understanding the reasons for successive-cyclic movement. We then turn our attention to more specific predictions of the proposal: in particular, the e¤ects of Holmberg’s Generalization on Scandinavian Object Shift; and also the Inverse Holmberg Effects found in Scandinavian ‘‘Quantifier Movement’’ constructions (Rögnvaldsson (1987); Jónsson (1996); Svenonius (2000)) and in Korean scrambling configurations (Ko (2003, 2004)). The cyclic linearization proposal makes predictions that cross-cut the details of particular syntactic configurations. For example, whether an apparent case of verb fronting results from V-to-C movement or from ‘‘remnant movement’’ of a VP whose complements have been removed by other processes, the verb should still be required to precede its complements after fronting if it preceded them before fronting according to an ordering established at an earlier phase. We argue that ‘‘cross-construction’’ consistency of this sort is in fact found.",
"title": ""
},
{
"docid": "95db9ce9faaf13e8ff8d5888a6737683",
"text": "Measurements of pH, acidity, and alkalinity are commonly used to describe water quality. The three variables are interrelated and can sometimes be confused. The pH of water is an intensity factor, while the acidity and alkalinity of water are capacity factors. More precisely, acidity and alkalinity are defined as a water’s capacity to neutralize strong bases or acids, respectively. The term “acidic” for pH values below 7 does not imply that the water has no alkalinity; likewise, the term “alkaline” for pH values above 7 does not imply that the water has no acidity. Water with a pH value between 4.5 and 8.3 has both total acidity and total alkalinity. The definition of pH, which is based on logarithmic transformation of the hydrogen ion concentration ([H+]), has caused considerable disagreement regarding the appropriate method of describing average pH. The opinion that pH values must be transformed to [H+] values before averaging appears to be based on the concept of mixing solutions of different pH. In practice, however, the averaging of [H+] values will not provide the correct average pH because buffers present in natural waters have a greater effect on final pH than does dilution alone. For nearly all uses of pH in fisheries and aquaculture, pH values may be averaged directly. When pH data sets are transformed to [H+] to estimate average pH, extreme pH values will distort the average pH. Values of pH conform more closely to a normal distribution than do values of [H+], making the pH values more acceptable for use in statistical analysis. Moreover, electrochemical measurements of pH and many biological responses to [H+] are described by the Nernst equation, which states that the measured or observed response is linearly related to 10-fold changes in [H+]. Based on these considerations, pH rather than [H+] is usually the most appropriate variable for use in statistical analysis. *Corresponding author: boydce1@auburn.edu Received November 2, 2010; accepted February 7, 2011 Published online September 27, 2011 Temperature, salinity, hardness, pH, acidity, and alkalinity are fundamental variables that define the quality of water. Although all six variables have precise, unambiguous definitions, the last three variables are often misinterpreted in aquaculture and fisheries studies. In this paper, we explain the concepts of pH, acidity, and alkalinity, and we discuss practical relationships among those variables. We also discuss the concept of pH averaging as an expression of the central tendency of pH measurements. The concept of pH averaging is poorly understood, if not controversial, because many believe that pH values, which are log-transformed numbers, cannot be averaged directly. We argue that direct averaging of pH values is the simplest and most logical approach for most uses and that direct averaging is based on sound practical and statistical principles. THE pH CONCEPT The pH is an index of the hydrogen ion concentration ([H+]) in water. The [H+] affects most chemical and biological processes; thus, pH is an important variable in water quality endeavors. Water temperature probably is the only water quality variable that is measured more commonly than pH. The pH concept has its basis in the ionization of water:",
"title": ""
},
{
"docid": "99bac31f4d0df12cf25f081c96d9a81a",
"text": "Residual networks, which use a residual unit to supplement the identity mappings, enable very deep convolutional architecture to operate well, however, the residual architecture has been proved to be diverse and redundant, which may leads to low-efficient modeling. In this work, we propose a competitive squeeze-excitation (SE) mechanism for the residual network. Re-scaling the value for each channel in this structure will be determined by the residual and identity mappings jointly, and this design enables us to expand the meaning of channel relationship modeling in residual blocks. Modeling of the competition between residual and identity mappings cause the identity flow to control the complement of the residual feature maps for itself. Furthermore, we design a novel inner-imaging competitive SE block to shrink the consumption and re-image the global features of intermediate network structure, by using the inner-imaging mechanism, we can model the channel-wise relations with convolution in spatial. We carry out experiments on the CIFAR, SVHN, and ImageNet datasets, and the proposed method can challenge state-of-the-art results.",
"title": ""
},
{
"docid": "f0846b4e74110ed469704c4a24407cc6",
"text": "Presently, a very large number of public and private data sets are available from local governments. In most cases, they are not semantically interoperable and a huge human effort would be needed to create integrated ontologies and knowledge base for smart city. Smart City ontology is not yet standardized, and a lot of research work is needed to identify models that can easily support the data reconciliation, the management of the complexity, to allow the data reasoning. In this paper, a system for data ingestion and reconciliation of smart cities related aspects as road graph, services available on the roads, traffic sensors etc., is proposed. The system allows managing a big data volume of data coming from a variety of sources considering both static and dynamic data. These data are mapped to a smart-city ontology, called KM4City (Knowledge Model for City), and stored into an RDF-Store where they are available for applications via SPARQL queries to provide new services to the users via specific applications of public administration and enterprises. The paper presents the process adopted to produce the ontology and the big data architecture for the knowledge base feeding on the basis of open and private data, and the mechanisms adopted for the data verification, reconciliation and validation. Some examples about the possible usage of the coherent big data knowledge base produced are also offered and are accessible from the RDF-store and related services. The article also presented the work performed about reconciliation algorithms and their comparative assessment and selection. & 2014 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).",
"title": ""
}
] |
scidocsrr
|
85de5777be2312f1a8f4d6c544e169f5
|
Miniaturized Dual-Band Wilkinson Power Divider With Self-Compensation Structure
|
[
{
"docid": "61f77a2ce189b9be9d4b8c7cc392f361",
"text": "In this paper, a Wilkinson power divider operating at two arbitrary different frequencies is presented. The structure of this power divider and the formulas used to determine the design parameters have been given. Experimental results show that all the features of a conventional Wilkinson power divider, such as an equal power split, impedance matching at all ports, and a good isolation between the two output ports can be fulfilled at two arbitrary given frequencies simultaneously",
"title": ""
},
{
"docid": "e342178b5c8ee8a48add15fefa0ef5f8",
"text": "A new scheme is proposed for the dual-band operation of the Wilkinson power divider/combiner. The dual band operation is achieved by attaching two central transmission line stubs to the conventional Wilkinson divider. It has simple structure and is suitable for distributed circuit implementation.",
"title": ""
},
{
"docid": "3512a29d291015e2f7df196ff437daa2",
"text": "A generalized model of a two-way dual-band Wilkinson power divider (WPD) with a parallel LC circuit at midpoints of two-segment transformers is proposed and compared with that of a conventional two-way dual-band WPD with a parallel LC circuit at the ends of two-segment transformers. The sum of power reflected at an output port and power transmitted to an isolation port from another isolation port in the proposed divider is smaller than that in the conventional divider. Therefore, wide bandwidths for S22, S33, and S32 can be expected for proposed dividers. In the case of equal power division, frequency characteristics of return loss at output ports and isolation of the proposed divider are wider than those of the convention one. The resonant frequencies of LC circuits in the proposed divider and a conventional divider are equal; however, the inductance L used in the proposed divider is always smaller than that in the conventional divider. Design charts and calculated bandwidths as a function of frequency ratio from 1 to 7 are presented. In experiments, two symmetrical and two asymmetrical circuits were fabricated. The experimental results showed good agreement with theoretical results.",
"title": ""
}
] |
[
{
"docid": "dbc28fb8fe14ac5fcfe5a1c52df5b8f0",
"text": "Wireless Local Area Networks frequently referred to as WLANs or Wi-Fi networks are all the vehemence in recent times. People are installing these in houses, institutions, offices and hotels etc, without any vain. In search of fulfilling the wireless demands, Wi-Fi product vendors and service contributors are exploding up as quickly as possible. Wireless networks offer handiness, mobility, and can even be less expensive to put into practice than wired networks in many cases. With the consumer demand, vendor solutions and industry standards, wireless network technology is factual and is here to stay. But how far this technology is going provide a protected environment in terms of privacy is again an anonymous issue. Realizing the miscellaneous threats and vulnerabilities associated with 802.11-based wireless networks and ethically hacking them to make them more secure is what this paper is all about. On this segment, we'll seize a look at common threats, vulnerabilities related with wireless networks. And also we have discussed the entire process of cracking WEP (Wired Equivalent Privacy) encryption of WiFi, focusing the necessity to become familiar with scanning tools like Cain, NetStumbler, Kismet and MiniStumbler to help survey the area and tests we should run so as to strengthen our air signals.",
"title": ""
},
{
"docid": "b9da9cc9d7583c5b72daf8a25a3145f5",
"text": "The purpose of this article is to review literature that is relevant to the social scientific study of ethics and leadership, as well as outline areas for future study. We first discuss ethical leadership and then draw from emerging research on \"dark side\" organizational behavior to widen the boundaries of the review to include ««ethical leadership. Next, three emerging trends within the organizational behavior literature are proposed for a leadership and ethics research agenda: 1 ) emotions, 2) fit/congruence, and 3) identity/ identification. We believe each shows promise in extending current thinking. The review closes with discussion of important issues that are relevant to the advancement of research on leadership and ethics. T IMPORTANCE OF LEADERSHIP in promoting ethical conduct in organizations has long been understood. Within a work environment, leaders set the tone for organizational goals and behavior. Indeed, leaders are often in a position to control many outcomes that affect employees (e.g., strategies, goal-setting, promotions, appraisals, resources). What leaders incentivize communicates what they value and motivates employees to act in ways to achieve such rewards. It is not surprising, then, that employees rely on their leaders for guidance when faced with ethical questions or problems (Treviño, 1986). Research supports this contention, and shows that employees conform to the ethical values of their leaders (Schminke, Wells, Peyrefitte, & Sabora, 2002). Furthermore, leaders who are perceived as ethically positive influence productive employee work behavior (Mayer, Kuenzi, Greenbaum, Bardes, & Salvador, 2009) and negatively influence counterproductive work behavior (Brown & Treviño, 2006b; Mayer et al., 2009). Recently, there has been a surge of empirical research seeking to understand the influence of leaders on building ethical work practices and employee behaviors (see Brown & Treviño, 2006a for a review). Initial theory and research (Bass & Steidlemeier, 1999; Brown, Treviño, & Harrison, 2005; Ciulla, 2004; Treviño, Brown, & Hartman, 2003; Treviño, Hartman, & Brown, 2000) sought to define ethical leadership from both normative and social scientific (descriptive) approaches to business ethics. The normative perspective is rooted in philosophy and is concerned with prescribing how individuals \"ought\" or \"should\" behave in the workplace. For example, normative scholarship on ethical leadership (Bass & Steidlemeier, 1999; Ciulla, 2004) examines ethical decision making from particular philosophical frameworks, evaluates the ethicality of particular leaders, and considers the degree to which certain styles of leadership or influence tactics are ethical. ©2010 Business Ethics Quarterly 20:4 (October 2010); ISSN 1052-150X pp. 583-616 584 BUSINESS ETHICS QUARTERLY In contrast, our article emphasizes a social scientific approach to ethical leadership (e.g.. Brown et al., 2005; Treviño et al., 2000; Treviño et al, 2003). This approach is rooted in disciplines such as psychology, sociology, and organization science, and it attempts to understand how people perceive ethical leadership and investigates the antecedents, outcomes, and potential boundary conditions of those perceptions. This research has focused on investigating research questions such as: What is ethical leadership (Brown et al., 2005; Treviño et al., 2003)? What traits are associated with perceived ethical leadership (Walumbwa & Schaubroeck, 2009)? 
How does ethical leadership flow through various levels of management within organizations (Mayer et al., 2009)? And, does ethical leadership help or hurt a leader's promotability within organizations (Rubin, Dierdorff, & Brown, 2010)? The purpose of our article is to review literature that is relevant to the descriptive study of ethics and leadership, as well as outhne areas for future empirical study. We first discuss ethical leadership and then draw from emerging research on what often is called \"dark\" (destructive) organizational behavior, so as to widen the boundaries of our review to also include ««ethical leadership. Next, we discuss three emerging trends within the organizational behavior literature—1) emotions, 2) fit/congruence, and 3) identity/identification—that we believe show promise in extending current thinking on the influence of leadership (both positive and negative) on organizational ethics. We conclude with a discussion of important issues that are relevant to the advancement of research in this domain. A REVIEW OF SOCIAL SCIENTIFIC ETHICAL LEADERSHIP RESEARCH The Concept of Ethical Leadership Although the topic of ethical leadership has long been considered by scholars, descriptive research on ethical leadership is relatively new. Some of the first formal investigations focused on defining ethical leadership from a descriptive perspective and were conducted by Treviño and colleagues (Treviño et al., 2000, 2003). Their qualitative research revealed that ethical leaders were best described along two related dimensions: moral person and moral manager. The moral person dimension refers to the qualities of the ethical leader as a person. Strong moral persons are honest and trustworthy. They demonstrate a concern for other people and are also seen as approachable. Employees can come to these individuals with problems and concerns, knowing that they will be heard. Moral persons have a reputation for being fair and principled. Lastly, riioral persons are seen as consistently moral in both their personal and professional lives. The moral manager dimension refers to how the leader uses the tools of the position of leadership to promote ethical conduct at work. Strong moral managers see themselves as role models in the workplace. They make ethics salient by modeling ethical conduct to their employees. Moral managers set and communicate ethical standards and use rewards and punishments to ensure those standards are followed. In sum, leaders who are moral managers \"walk the talk\" and \"talk the walk,\" patterning their behavior and organizational processes to meet moral standards. ETHICAL AND UNETHICAL LEADERSHIP 585 Treviño and colleagues (Treviño et al., 2000, 2003) argued that individuals in power must be both strong moral persons and moral managers in order to be seen as ethical leaders by those around them. Strong moral managers who are weak moral persons are likely to be seen as hypocrites, failing to practice what they preach. Hypocritical leaders talk about the importance of ethics, but their actions show them to be dishonest and unprincipled. Conversely, a strong moral person who is a weak moral manager runs the risk of being seen as an ethically \"neutral\" leader. That is, the leader is perceived as being silent on ethical issues, suggesting to employees that the leader does not really care about ethics. 
Subsequent research by Brown, Treviño, and Harrison (2005:120) further clarified the construct and provided a formal definition of ethical leadership as \"the demonstration of normatively appropriate conduct through personal actions and interpersonal relationships, and the promotion of such conduct to followers through two-way communication, reinforcement, and decision-making.\" They noted that \"the term normatively appropriate is 'deliberately vague'\" (Brown et al., 2005: 120) because norms vary across organizations, industries, and cultures. Brown et al. (2005) ground their conceptualization of ethical leadership in social learning theory (Bandura, 1977, 1986). This theory suggests individuals can learn standards of appropriate behavior by observing how role models (like teachers, parents, and leaders) behave. Accordingly, ethical leaders \"teach\" ethical conduct to employees through their own behavior. Ethical leaders are relevant role models because they occupy powerful and visible positions in organizational hierarchies that allow them to capture their follower's attention. They communicate ethical expectations through formal processes (e.g., rewards, policies) and personal example (e.g., interpersonal treatment of others). Effective \"ethical\" modeling, however, requires more than power and visibility. For social learning of ethical behavior to take place, role models must be credible in terms of moral behavior. By treating others fairly, honestly, and considerately, leaders become worthy of emulation by others. Otherwise, followers might ignore a leader whose behavior is inconsistent with his/her ethical pronouncements or who fails to interact with followers in a caring, nurturing style (Yussen & Levy, 1975). Outcomes of Ethical Leadership Researchers have used both social learning theory (Bandura, 1977,1986) and social exchange theory (Blau, 1964) to explain the effects of ethical leadership on important outcomes (Brown et al., 2005; Brown & Treviño, 2006b; Mayer et al , 2009; Walumbwa & Schaubroeck, 2009). According to principles of reciprocity in social exchange theory (Blau, 1964; Gouldner, 1960), individuals feel obligated to return beneficial behaviors when they believe another has been good and fair to them. In line with this reasoning, researchers argue and find that employees feel indebted to ethical leaders because of their trustworthy and fair nature; consequently, they reciprocate with beneficial work behavior (e.g., higher levels of ethical behavior and citizenship behaviors) and refrain from engaging in destructive behavior (e.g., lower levels of workplace deviance). 586 BUSINESS ETHICS QUARTERLY Emerging research has found that ethical leadership is related to important follower outcomes, such as employees' job satisfaction, organizational commitment, willingness to report problems to supervisors, willingness to put in extra effort on the job, voice behavior (i.e., expression of constructive suggestions intended to improve standard procedures), and perceptions of organizational culture and ethical climate (Brown et al., 2005; Neubert, Carlson, Kacmar, Roberts,",
"title": ""
},
{
"docid": "57c705e710f99accab3d9242fddc5ac8",
"text": "Although much research has been conducted in the area of organizational commitment, few studies have explicitly examined how organizations facilitate commitment among members. Using a sample of 291 respondents from 45 firms, the results of this study show that rigorous recruitment and selection procedures and a strong, clear organizational value system are associated with higher levels of employee commitment based on internalization and identification. Strong organizational career and reward systems are related to higher levels of instrumental or compliance-based commitment.",
"title": ""
},
{
"docid": "17de31cccc12b401a949ff5660d4f4c6",
"text": "In this paper we propose a system that automates the whole process of taking attendance and maintaining its records in an academic institute. Managing people is a difficult task for most of the organizations, and maintaining the attendance record is an important factor in people management. When considering academic institutes, taking the attendance of students on daily basis and maintaining the records is a major task. Manually taking the attendance and maintaining it for a long time adds to the difficulty of this task as well as wastes a lot of time. For this reason an efficient system is designed. This system takes attendance electronically with the help of a fingerprint sensor and all the records are saved on a computer server. Fingerprint sensors and LCD screens are placed at the entrance of each room. In order to mark the attendance, student has to place his/her finger on the fingerprint sensor. On identification student’s attendance record is updated in the database and he/she is notified through LCD screen. No need of all the stationary material and special personal for keeping the records. Furthermore an automated system replaces the manual system.",
"title": ""
},
{
"docid": "c22c34214e0f3c4d80be81d706233f96",
"text": "An alternating-current light-emitting diode (AC-LED) driver is implemented between the grid and lamp to eliminate the disadvantages of a directly grid-tied AC-LED lamp. In order to highlight the benefits of AC-LED technology, a single-stage converter with few components is adopted. A high power-factor (PF) single-stage bridgeless AC/AC converter is proposed with higher efficiency, greater power factor, less harmonics to pass IEC 61000-3-2 class C, and better regulation of output current. The brightness and flicker frequency issues caused by a low-frequency sinusoidal input are surpassed by the implementation of a high-frequency square-wave output current. In addition, the characteristics of the proposed circuit are discussed and analyzed in order to design the AC-LED driver. Finally, some simulation and experimental results are shown to verify this proposed scheme.",
"title": ""
},
{
"docid": "36460eda2098bdcf3810828f54ee7d2b",
"text": "[This corrects the article on p. 662 in vol. 60, PMID: 27729694.].",
"title": ""
},
{
"docid": "3cc0707cec7af22db42e530399e762a8",
"text": "While watching television, people increasingly consume additional content related to what they are watching. We consider the task of finding video content related to a live television broadcast for which we leverage the textual stream of subtitles associated with the broadcast. We model this task as a Markov decision process and propose a method that uses reinforcement learning to directly optimize the retrieval effectiveness of queries generated from the stream of subtitles. Our dynamic query modeling approach significantly outperforms state-of-the-art baselines for stationary query modeling and for text-based retrieval in a television setting. In particular we find that carefully weighting terms and decaying these weights based on recency significantly improves effectiveness. Moreover, our method is highly efficient and can be used in a live television setting, i.e., in near real time.",
"title": ""
},
{
"docid": "07f7a4fe69f6c4a1180cc3ca444a363a",
"text": "With the popularization of IoT (Internet of Things) devices and the continuous development of machine learning algorithms, learning-based IoT malicious traffic detection technologies have gradually matured. However, learning-based IoT traffic detection models are usually very vulnerable to adversarial samples. There is a great need for an automated testing framework to help security analysts to detect errors in learning-based IoT traffic detection systems. At present, most methods for generating adversarial samples require training parameters of known models and are only applicable to image data. To address the challenge, we propose a testing framework for learning-based IoT traffic detection systems, TLTD. By introducing genetic algorithms and some technical improvements, TLTD can generate adversarial samples for IoT traffic detection systems and can perform a black-box test on the systems.",
"title": ""
},
{
"docid": "4bcb62b8ca73fe841908e24c5c454a89",
"text": "Neural network based models have achieved impressive results on various specific tasks. However, in previous works, most models are learned separately based on single-task supervised objectives, which often suffer from insufficient training data. In this paper, we propose two deep architectures which can be trained jointly on multiple related tasks. More specifically, we augment neural model with an external memory, which is shared by several tasks. Experiments on two groups of text classification tasks show that our proposed architectures can improve the performance of a task with the help of other related tasks.",
"title": ""
},
{
"docid": "abb69c3653dd2a960f8719739032a080",
"text": "This paper sets out a moderate version of metaphysical structural realism that stands in contrast to both the epistemic structural realism of Worrall and the—radical—ontic structural realism of French and Ladyman. According to moderate structural realism, objects and relations (structure) are on the same ontological footing, with the objects being characterized only by the relations in which they stand. We show how this position fares well as regards philosophical arguments, avoiding the objections against the other two versions of structural realism. In particular, we set out how this position can be applied to space-time, providing for a convincing understanding of space-time points in the standard tensor formulation of general relativity as well as in the fibre bundle formulation.",
"title": ""
},
{
"docid": "e71860d5882f9b7b7f9ca1e209d4ac9d",
"text": "In-wheel motors for electric vehicles (EVs) have a high outer diameter (D) to axial length (L) ratio. In such applications, axial flux machines are preferred over radial flux machines due to high power density. Moreover, permanent magnet (PM)-less machines are gaining interest due to increase in cost of rare-earth PM materials. In view of this, axial flux switched reluctance motor (AFSRM) is considered as a possible option for EV propulsion. Two topologies namely, toothed and segmented rotor AFSRM are designed and compared for the same overall volume. These topologies have a three-phase, 12/16 pole single-stator, dual outer-rotor configuration along with non-overlapping winding arrangement. Analytical expressions for phase inductance and average torque are derived. To verify the performance of both the topologies a finite element method (FEM) based simulation study is carried out and its results are verified with the analytical values. It is observed from simulation that the average torque is 16.2% higher and torque ripple is 17.9% lower for segmented rotor AFSRM as compared to toothed rotor AFSRM.",
"title": ""
},
{
"docid": "0f6806c44bf6fa7e6a2c3fb02ef8781b",
"text": "Air quality has been negatively affected by industrial activities, which have caused imbalances in nature. The issue of air pollution has become a big concern for many people, especially those living in industrial areas. Air pollution levels can be measured using smart sensors. Additionally, Internet of Things (IoT) technology can be integrated to remotely detect pollution without any human interaction. The data gathered by such a system can be transmitted instantly to a web-based application to facilitate monitoring real time data and allow immediate risk management. In this paper, we describe an entire Internet of Things (IoT) system that monitors air pollution by collecting real-time data in specific locations. This data is analyzed and measured against a predetermined threshold. The collected data is sent to the concerned official organization to notify them in case of any violation so that they can take the necessary measures. Furthermore, if the value of the measured pollutants exceeds the threshold, an alarm system is triggered taking several actions to warn the surrounding people.",
"title": ""
},
{
"docid": "30d0453033d3951f5b5faf3213eacb89",
"text": "Semantic mapping is the incremental process of “mapping” relevant information of the world (i.e., spatial information, temporal events, agents and actions) to a formal description supported by a reasoning engine. Current research focuses on learning the semantic of environments based on their spatial location, geometry and appearance. Many methods to tackle this problem have been proposed, but the lack of a uniform representation, as well as standard benchmarking suites, prevents their direct comparison. In this paper, we propose a standardization in the representation of semantic maps, by defining an easily extensible formalism to be used on top of metric maps of the environments. Based on this, we describe the procedure to build a dataset (based on real sensor data) for benchmarking semantic mapping techniques, also hypothesizing some possible evaluation metrics. Nevertheless, by providing a tool for the construction of a semantic map ground truth, we aim at the contribution of the scientific community in acquiring data for populating the dataset.",
"title": ""
},
{
"docid": "5898a24a260d2c653c1ec7d798a1024c",
"text": "In this paper we present results for two tasks: social event detection and social network extraction from a literary text, Alice in Wonderland. For the first task, our system trained on a news corpus using tree kernels and support vector machines beats the baseline systems by a statistically significant margin. Using this system we extract a social network from Alice in Wonderland. We show that while we achieve an F-measure of about 61% on social event detection, our extracted unweighted network is not statistically distinguishable from the un-weighted gold network according to popularly used network measures.",
"title": ""
},
{
"docid": "7fe86801de04054ffca61eb1b3334872",
"text": "Images rendered with traditional computer graphics techniques, such as scanline rendering and ray tracing, appear focused at all depths. However, there are advantages to having blur, such as adding realism to a scene or drawing attention to a particular place in a scene. In this paper we describe the optics underlying camera models that have been used in computer graphics, and present object space techniques for rendering with those models. In our companion paper [3], we survey image space techniques to simulate these models. These techniques vary in both speed and accuracy.",
"title": ""
},
{
"docid": "d95cd76008dd65d5d7f00c82bad013d3",
"text": "Though data analysis tools continue to improve, analysts still expend an inordinate amount of time and effort manipulating data and assessing data quality issues. Such \"data wrangling\" regularly involves reformatting data values or layout, correcting erroneous or missing values, and integrating multiple data sources. These transforms are often difficult to specify and difficult to reuse across analysis tasks, teams, and tools. In response, we introduce Wrangler, an interactive system for creating data transformations. Wrangler combines direct manipulation of visualized data with automatic inference of relevant transforms, enabling analysts to iteratively explore the space of applicable operations and preview their effects. Wrangler leverages semantic data types (e.g., geographic locations, dates, classification codes) to aid validation and type conversion. Interactive histories support review, refinement, and annotation of transformation scripts. User study results show that Wrangler significantly reduces specification time and promotes the use of robust, auditable transforms instead of manual editing.",
"title": ""
},
{
"docid": "ff4d7c7aa17f5925e5514aef9e0963f9",
"text": "We present a novel concept for wideband, wide-scan phased array applications. The array is composed by connected-slot elements loaded with artificial dielectric superstrates. The proposed solution consists of a single multi-layer planar printed circuit board (PCB) and does not require the typically employed vertical arrangement of multiple PCBs. This offers advantages in terms of complexity of the assembly and cost of the array. We developed an analytical method for the prediction of the array performance, in terms of active input impedance. This method allows to estimate the relevant parameters of the array with a negligible computational cost. A design example with a bandwidth exceeding one octave (VSWR<;2 from 6.5 to 14.3) and scanning up to 50 degrees for all azimuth planes is presented.",
"title": ""
},
{
"docid": "989a16f498eaaa62d5578cc1bcc8bc04",
"text": "UML activity diagram is widely used to describe the behavior of the software system. Unfortunately, there is still no practical tool to verify the UML diagrams automatically. This paper proposes an alternative to translate UML activity diagram into a colored petri nets with inscription. The model translation rules are proposed to guide the automatic translation of the activity diagram with atomic action into a CPN model. Moreover, the relevant basic arc inscriptions are generated without manual elaboration. The resulting CPN with inscription is correctly verified as expected.",
"title": ""
},
{
"docid": "a83fcfc62bdf0f58335e0853c006eaff",
"text": "Compressed sensing (CS) in magnetic resonance imaging (MRI) enables the reconstruction of MR images from highly undersampled k-spaces, and thus substantial reduction of data acquisition time. In this context, edge-preserving and sparsity-promoting regularizers are used to exploit the prior knowledge that MR images are sparse or compressible in a given transform domain and thus to regulate the solution space. In this study, we introduce a new regularization scheme by iterative linearization of the non-convex clipped absolute deviation (SCAD) function in an augmented Lagrangian framework. The performance of the proposed regularization, which turned out to be an iteratively weighted total variation (TV) regularization, was evaluated using 2D phantom simulations and 3D retrospective undersampling of clinical MRI data by different sampling trajectories. It was demonstrated that the proposed regularization technique substantially outperforms conventional TV regularization, especially at reduced sampling rates.",
"title": ""
}
] |
scidocsrr
|
61290fc1aa8836e245109969b9aaec02
|
Review of the Impact of Vehicle-to-Grid Technologies on Distribution Systems and Utility Interfaces
|
[
{
"docid": "1cef757143fc21e712f47b29ee72dfe8",
"text": "Large-scale sustainable energy systems will be necessary for substantial reduction of CO2. However, large-scale implementation faces two major problems: (1) we must replace oil in the transportation sector, and (2) since today’s inexpensive and abundant renewable energy resources have fluctuating output, to increase the fraction of electricity from them, we must learn to maintain a balance between demand and supply. Plug-in electric vehicles (EVs) could reduce or eliminate oil for the light vehicle fleet. Adding ‘‘vehicle-to-grid’’ (V2G) technology to EVs can provide storage, matching the time of generation to time of load. Two national energy systems are modelled, one for Denmark, including combined heat and power (CHP) and the other a similarly sized country without CHP (the latter being more typical of other industrialized countries). The model (EnergyPLAN) integrates energy for electricity, transport and heat, includes hourly fluctuations in human needs and the environment (wind resource and weather-driven need for heat). Four types of vehicle fleets are modelled, under levels of wind penetration varying from 0% to 100%. EVs were assumed to have high power (10 kW) connections, which provide important flexibility in time and duration of charging. We find that adding EVs and V2G to these national energy systems allows integration of much higher levels of wind electricity without excess electric production, and also greatly reduces national CO2 emissions. & 2008 Published by Elsevier Ltd. 67",
"title": ""
}
] |
[
{
"docid": "f7b369690fa93420baa7bb43aa75ffec",
"text": "Total Quality Management (TQM) and Kaizena continuous change toward betterment are two fundamental concepts directly dealing with continuous improvement of quality of processes and performance of an organization to achieve positive transformation in mindset and action of employees and management. For clear understanding and to get maximum benefit from both of these concepts, as such it becomes mandatory to precisely differentiate between TQM and Kaizen. TQM features primarily focus on customer’s satisfaction through improvement of quality. It is both a top down and bottom up approach whereas kaizen is processes focused and a bottom up approach of small incremental changes. Implementation of TQM is more costly as compared to Kaizen. Through kaizen, improvements are made using organization’s available resources. For the effective implementation of kaizen, the culture of the organization must be supportive and the result of continuous improvement should be communicated to the whole organization for motivation of all employees and for the success of continuous improvement program in the organization. This paper focuses on analyzing the minute differences between TQM and Kaizen. It also discusses the different tools and techniques under the umbrella of kaizen and TQM Philosophy. This paper will elucidate the differences in both these concepts as far as their inherent characteristics and practical implementations are concerned. In spite of differences in methodology, focus and scale of operation in both the concept, it can be simply concluded that Kaizen is one of the Technique of the T QM for continuous improvement of quality, process and performance of the organization. [Muhammad Saleem, Nawar Khan, Shafqat Hameed, M Abbas Ch. An Analysis of Relationship between Total Quality Management and Kaizen. Life Science Journal. 2012;9(3):31-40] (ISSN:1097-8135). http://www.lifesciencesite.com. 5 Key Worlds: Total Quality Management, Kaizen Technique, Continuous Improvement (CI), Tools & Techniques",
"title": ""
},
{
"docid": "ad2efda03f2657ff73cac8cb992eba8e",
"text": "This paper investigates the effects of grounding the p-type gate-oxide protection layer called bottom p-well (BPW) of a trench-gate SiC-MOSFET on the short-circuit ruggedness of the device. The BPW is grounded by forming ground contacts in various cell layouts, and the layout of the contact cells is found to be a significant factor that determines the short-circuit safe operation area (SCSOA) of a device. By grounding the BPW in an optimized cell layout, an SCSOA of over 10 μs is obtained at room temperature. Further investigation revealed that minimizing the distance between the ground contacts for the BPW is a key to developing a highly-robust, high-performance power device.",
"title": ""
},
{
"docid": "8694f84e4e2bd7da1e678a3b38ccd447",
"text": "This paper describes a general methodology for extracting attribute-value pairs from web pages. It consists of two phases: candidate generation, in which syntactically likely attribute-value pairs are annotated; and candidate filtering, in which semantically improbable annotations are removed. We describe three types of candidate generators and two types of candidate filters, all of which are designed to be massively parallelizable. Our methods can handle 1 billion web pages in less than 6 hours with 1,000 machines. The best generator and filter combination achieves 70% F-measure compared to a hand-annotated corpus.",
"title": ""
},
{
"docid": "bbd64fe2f05e53ca14ad1623fe51cd1c",
"text": "Virtual assistants are the cutting edge of end user interaction, thanks to endless set of capabilities across multiple services. The natural language techniques thus need to be evolved to match the level of power and sophistication that users expect from virtual assistants. In this report we investigate an existing deep learning model for semantic parsing, and we apply it to the problem of converting natural language to trigger-action programs for the Almond virtual assistant. We implement a one layer seq2seq model with attention layer, and experiment with grammar constraints and different RNN cells. We take advantage of its existing dataset and we experiment with different ways to extend the training set. Our parser shows mixed results on the different Almond test sets, performing better than the state of the art on synthetic benchmarks by about 10% but poorer on realistic user data by about 15%. Furthermore, our parser is shown to be extensible to generalization, as well as or better than the current system employed by Almond.",
"title": ""
},
{
"docid": "8955c715c0341057b471eeed90c9c82d",
"text": "The letter presents an exact small-signal discrete-time model for digitally controlled pulsewidth modulated (PWM) dc-dc converters operating in constant frequency continuous conduction mode (CCM) with a single effective A/D sampling instant per switching period. The model, which is based on well-known approaches to discrete-time modeling and the standard Z-transform, takes into account sampling, modulator effects and delays in the control loop, and is well suited for direct digital design of digital compensators. The letter presents general results valid for any CCM converter with leading or trailing edge PWM. Specific examples, including approximate closed-form expressions for control-to-output transfer functions are given for buck and boost converters. The model is verified in simulation using an independent system identification approach.",
"title": ""
},
{
"docid": "5c4f313482543223306be014cff0cc2e",
"text": "Transformer inrush currents are high-magnitude, harmonic rich currents generated when transformer cores are driven into saturation during energization. These currents have undesirable effects, including potential damage or loss-of-life of transformer, protective relay miss operation, and reduced power quality on the system. This paper explores the theoretical explanations of inrush currents and explores different factors that have influences on the shape and magnitude of those inrush currents. PSCAD/EMTDC is used to investigate inrush currents phenomena by modeling a practical power system circuit for single phase transformer",
"title": ""
},
{
"docid": "c7c103a48a80ffee561a120913855758",
"text": "We study parameter estimation in Nonlinear Factor Analysis (NFA) where the generative model is parameterized by a deep neural network. Recent work has focused on learning such models using inference (or recognition) networks; we identify a crucial problem when modeling large, sparse, highdimensional datasets – underfitting. We study the extent of underfitting, highlighting that its severity increases with the sparsity of the data. We propose methods to tackle it via iterative optimization inspired by stochastic variational inference (Hoffman et al. , 2013) and improvements in the sparse data representation used for inference. The proposed techniques drastically improve the ability of these powerful models to fit sparse data, achieving state-of-the-art results on a benchmark textcount dataset and excellent results on the task of top-N recommendation.",
"title": ""
},
{
"docid": "59639429e45dc75e0b8db773d112f994",
"text": "Vector modulators are a key component in phased array antennas and communications systems. The paper describes a novel design methodology for a bi-directional, reflection-type balanced vector modulator using metal-oxide-semiconductor field-effect (MOS) transistors as active loads, which provides an improved constellation quality. The fabricated IC occupies 787 × 1325 μm2 and exhibits a minimum transmission loss of 9 dB and return losses better than 14 dB. As an application example, its use in a 16-QAM modulator is verified.",
"title": ""
},
{
"docid": "256b22fd89c0f7311e043efd2dd142f9",
"text": "Suicide rates are higher in later life than in any other age group. The design of effective suicide prevention strategies hinges on the identification of specific, quantifiable risk factors. Methodological challenges include the lack of systematically applied terminology in suicide and risk factor research, the low base rate of suicide, and its complex, multidetermined nature. Although variables in mental, physical, and social domains have been correlated with completed suicide in older adults, controlled studies are necessary to test hypothesized risk factors. Prospective cohort and retrospective case control studies indicate that affective disorder is a powerful independent risk factor for suicide in elders. Other mental illnesses play less of a role. Physical illness and functional impairment increase risk, but their influence appears to be mediated by depression. Social ties and their disruption are significantly and independently associated with risk for suicide in later life, relationships between which may be moderated by a rigid, anxious, and obsessional personality style. Affective illness is a highly potent risk factor for suicide in later life with clear implications for the design of prevention strategies. Additional research is needed to define more precisely the interactions between emotional, physical, and social factors that determine risk for suicide in the older adult.",
"title": ""
},
{
"docid": "924e10782437c323b8421b156db50584",
"text": "Ontology Learning greatly facilitates the construction of ontologies by the ontology engineer. The notion of ontology learning that we propose here includes a number of complementary disciplines that feed on different types of unstructured and semi-structured data in order to support a semi-automatic, cooperative ontology engineering process. Our ontology learning framework proceeds through ontology import, extraction, pruning, and refinement, giving the ontology engineer a wealth of coordinated tools for ontology modelling. Besides of the general architecture, we show in this paper some exemplary techniques in the ontology learning cycle that we have implemented in our ontology learning environment, KAON Text-To-Onto.",
"title": ""
},
{
"docid": "dd9f6ef9eafdef8b29c566bcea8ded57",
"text": "A recent trend in saliency algorithm development is large-scale benchmarking and algorithm ranking with ground truth provided by datasets of human fixations. In order to accommodate the strong bias humans have toward central fixations, it is common to replace traditional ROC metrics with a shuffled ROC metric which uses randomly sampled fixations from other images in the database as the negative set. However, the shuffled ROC introduces a number of problematic elements, including a fundamental assumption that it is possible to separate visual salience and image spatial arrangement. We argue that it is more informative to directly measure the effect of spatial bias on algorithm performance rather than try to correct for it. To capture and quantify these known sources of bias, we propose a novel metric for measuring saliency algorithm performance: the spatially binned ROC (spROC). This metric provides direct in-sight into the spatial biases of a saliency algorithm without sacrificing the intuitive raw performance evaluation of traditional ROC measurements. By quantitatively measuring the bias in saliency algorithms, researchers will be better equipped to select and optimize the most appropriate algorithm for a given task. We use a baseline measure of inherent algorithm bias to show that Adaptive Whitening Saliency (AWS) [14], Attention by Information Maximization (AIM) [8], and Dynamic Visual Attention (DVA) [20] provide the least spatially biased results, suiting them for tasks in which there is no information about the underlying spatial bias of the stimuli, whereas algorithms such as Graph Based Visual Saliency (GBVS) [18] and Context-Aware Saliency (CAS) [15] have a significant inherent central bias.",
"title": ""
},
{
"docid": "7178130e1a69bb93c4dc6b90b2c98bb2",
"text": "Leprosy is caused by Mycobacterium leprae bacillus and despite recommendation of multidrug therapy by World Health Organisation in 1981 and eradication programme in various countries; disease prevails and new cases added annually. Variable clinical presentation ranges from limited tuberculoid to widespread lepromatous leprosy. The neuritic presentation varies from mononeuropathy to mononeuropathy multiplex. The disease commonly affects the ulnar, radial in upper and common peroneal, posterior tibial in lower extremity. The neuritic leprosy is easily suspected when there is hypoanesthetic skin lesion with thickened and tender nerve. Involvement of uncommon nerve and pure neuritic presentation, a rare form of leprosy in which skin is spared often leads to diagnostic challenge. Biopsy is not needed to initiate treatment but sometimes required to rule our other diseases. We report a rare case of isolated thickening of greater auricular nerve and diagnostic dilemma encountered in the era of evidence-based medicine.",
"title": ""
},
{
"docid": "babe85fa78ea1f4ce46eb0cfd77ae2b8",
"text": "x + a1x + · · ·+ an = 0. On s’interesse surtout à la résolution “par radicaux”, c’est-à-dire à la résolution qui n’utilise que des racines m √ a. Il est bien connu depuis le 16 siècle que l’on peut résoudre par radicaux des équations de degré n ≤ 4. Par contre, selon un résultat célèbre d’Abel, l’équation générale de degré n ≥ 5 n’est pas résoluble par radicaux. L’idée principale de la théorie de Galois est d’associer à chaque équation son groupe de symétrie. Cette construction permet de traduire des propriétés de l’équation (telles que la résolubilité par radicaux) aux propriétés du groupe associé. Le cours ne suivra pas le chemin historique. L’ouvrage [Ti 1, 2] est une référence agréable pour l’histoire du sujet.",
"title": ""
},
{
"docid": "7643347a62e8835b5cc4b1b432f504c1",
"text": "Simulation systems have become an essential component in the development and validation of autonomous driving technologies. The prevailing state-of-the-art approach for simulation is to use game engines or high-fidelity computer graphics (CG) models to create driving scenarios. However, creating CG models and vehicle movements (e.g., the assets for simulation) remains a manual task that can be costly and time-consuming. In addition, the fidelity of CG images still lacks the richness and authenticity of real-world images and using these images for training leads to degraded performance. In this paper we present a novel approach to address these issues: Augmented Autonomous Driving Simulation (AADS). Our formulation augments real-world pictures with a simulated traffic flow to create photo-realistic simulation images and renderings. More specifically, we use LiDAR and cameras to scan street scenes. From the acquired trajectory data, we generate highly plausible traffic flows for cars and pedestrians and compose them into the background. The composite images can be re-synthesized with different viewpoints and sensor models (camera or LiDAR). The resulting images are photo-realistic, fully annotated, and ready for end-to-end training and testing of autonomous driving systems from perception to planning. We explain our system design and validate our algorithms with a number of autonomous driving tasks from detection to segmentation and predictions. Compared to traditional approaches, our method offers unmatched scalability and realism. Scalability is particularly important for AD simulation and we believe the complexity and diversity of the real world cannot be realistically captured in a virtual environment. Our augmented approach combines the flexibility in a virtual environment (e.g., vehicle movements) with the richness of the real world to allow effective simulation of anywhere on earth.",
"title": ""
},
{
"docid": "56e47efe6efdb7819c6a2e87e8fbb56e",
"text": "Recent investigations of Field Programmable Gate Array (FPGA)-based time-to-digital converters (TDCs) have predominantly focused on improving the time resolution of the device. However, the monolithic integration of multi-channel TDCs and the achievement of high measurement throughput remain challenging issues for certain applications. In this paper, the potential of the resources provided by the Kintex-7 Xilinx FPGA is fully explored, and a new design is proposed for the implementation of a high performance multi-channel TDC system on this FPGA. Using the tapped-delay-line wave union TDC architecture, in which a negative pulse is triggered by the hit signal propagating along the carry chain, two time measurements are performed in a single carry chain within one clock cycle. The differential non-linearity and time resolution can be significantly improved by realigning the bins. The on-line calibration and on-line updating of the calibration table reduce the influence of variations of environmental conditions. The logic resources of the 6-input look-up tables in the FPGA are employed for hit signal edge detection and bubble-proof encoding, thereby allowing the TDC system to operate at the maximum allowable clock rate of the FPGA and to achieve the maximum possible measurement throughput. This resource-efficient design, in combination with a modular implementation, makes the integration of multiple channels in one FPGA practicable. Using our design, a 128-channel TDC with a dead time of 1.47 ns, a dynamic range of 360 ns, and a root-mean-square resolution of less than 10 ps was implemented in a single Kintex-7 device.",
"title": ""
},
{
"docid": "397f6c39825a5d8d256e0cc2fbba5d15",
"text": "This paper presents a video-based motion modeling technique for capturing physically realistic human motion from monocular video sequences. We formulate the video-based motion modeling process in an image-based keyframe animation framework. The system first computes camera parameters, human skeletal size, and a small number of 3D key poses from video and then uses 2D image measurements at intermediate frames to automatically calculate the \"in between\" poses. During reconstruction, we leverage Newtonian physics, contact constraints, and 2D image measurements to simultaneously reconstruct full-body poses, joint torques, and contact forces. We have demonstrated the power and effectiveness of our system by generating a wide variety of physically realistic human actions from uncalibrated monocular video sequences such as sports video footage.",
"title": ""
},
{
"docid": "38b5917f30f33c55d3af42022dcb28d7",
"text": "We present a new algorithm that significantly improves the efficiency of exploration for deep Q-learning agents in dialogue systems. Our agents explore via Thompson sampling, drawing Monte Carlo samples from a Bayes-by-Backprop neural network. Our algorithm learns much faster than common exploration strategies such as -greedy, Boltzmann, bootstrapping, and intrinsic-reward-based ones. Additionally, we show that spiking the replay buffer with experiences from just a few successful episodes can make Q-learning feasible when it might otherwise fail.",
"title": ""
},
{
"docid": "2e2a21ca1be2da2d30b1b2a92cd49628",
"text": "A new form of cloud computing, serverless computing, is drawing attention as a new way to design micro-services architectures. In a serverless computing environment, services are developed as service functional units. The function development environment of all serverless computing framework at present is CPU based. In this paper, we propose a GPU-supported serverless computing framework that can deploy services faster than existing serverless computing framework using CPU. Our core approach is to integrate the open source serverless computing framework with NVIDIA-Docker and deploy services based on the GPU support container. We have developed an API that connects the open source framework to the NVIDIA-Docker and commands that enable GPU programming. In our experiments, we measured the performance of the framework in various environments. As a result, developers who want to develop services through the framework can deploy high-performance micro services and developers who want to run deep learning programs without a GPU environment can run code on remote GPUs with little performance degradation.",
"title": ""
},
{
"docid": "4c1c72fde3bbe25f6ff3c873a87b86ba",
"text": "The purpose of this study was to translate the Foot Function Index (FFI) into Italian, to perform a cross-cultural adaptation and to evaluate the psychometric properties of the Italian version of FFI. The Italian FFI was developed according to the recommended forward/backward translation protocol and evaluated in patients with foot and ankle diseases. Feasibility, reliability [intraclass correlation coefficient (ICC)], internal consistency [Cronbach’s alpha (CA)], construct validity (correlation with the SF-36 and a visual analogue scale (VAS) assessing for pain), responsiveness to surgery were assessed. The standardized effect size and standardized response mean were also evaluated. A total of 89 patients were recruited (mean age 51.8 ± 13.9 years, range 21–83). The Italian version of the FFI consisted in 18 items separated into a pain and disability subscales. CA value was 0.95 for both the subscales. The reproducibility was good with an ICC of 0.94 and 0.91 for pain and disability subscales, respectively. A strong correlation was found between the FFI and the scales of the SF-36 and the VAS with related content, particularly in the areas of physical function and pain was observed indicating good construct validity. After surgery, the mean FFI improved from 55.9 ± 24.8 to 32.4 ± 26.3 for the pain subscale and from 48.8 ± 28.8 to 24.9 ± 23.7 for the disability subscale (P < 0.01). The Italian version of the FFI showed satisfactory psychometric properties in Italian patients with foot and ankle diseases. Further testing in different and larger samples is required in order to ensure the validity and reliability of this score.",
"title": ""
}
] |
scidocsrr
|
ef7ed2d04a880140702a03734104a0d0
|
Testing: a roadmap
|
[
{
"docid": "78e5f70a0037cd14a5bb991d89a2940e",
"text": "Regression testing is a necessary but expensive maintenance activity aimed at showing that code has not been adversely affected by changes. Regression test selection techniques reuse tests from an existing test suite to test a modified program. Many regression test selection techniques have been proposed; however, it is difficult to compare and evaluate these techniques because they have different goals. This paper outlines the issues relevant to regression test selection techniques, and uses these issues as the basis for a framework within which to evaluate the techniques. We illustrate the application of our framework by using it to evaluate existing regression test selection techniques. The evaluation reveals the strengths and weaknesses of existing techniques, and highlights some problems that future work in this area should address.",
"title": ""
}
] |
[
{
"docid": "c8740613051e391a2e64a531dc0ff50d",
"text": "As an important feature, orientation field describes the global structure of fingerprints. It provides robust discriminatory information other than traditional widely-used minutiae points. However, there are few works explicitly incorporating this information into fingerprint matching stage, partly due to the difficulty of saving the orientation field in the feature template. In this paper, we propose a novel representation for fingerprints which includes both minutiae and model-based orientation field. Then, fingerprint matching can be done by combining the decisions of the matchers based on the global structure (orientation field) and the local cue (minutiae). We have conducted a set of experiments on large-scale databases and made thorough comparisons with the state-of-the-arts. Extensive experimental results show that combining these local and global discriminative information can largely improve the performance. The proposed system is more robust and accurate than conventional minutiae-based methods, and also better than the previous works which implicitly incorporate the orientation information. In this system, the feature template takes less than 420 bytes, and the feature extraction and matching procedures can be done in about 0.30 s. We also show that the global orientation field is beneficial to the alignment of the fingerprints which are either incomplete or poor-qualitied.",
"title": ""
},
{
"docid": "08bd0aefcb1510c497a2aa98e94d4a99",
"text": "Integrating Information and Communication Technology (ICT) into teaching and learning is a growing area that has attracted many educators’ efforts in recent years. Based on the scope of content covered, ICT integration can happen in three different areas: curriculum, topic, and lesson. This paper elaborates upon the concept of ICT integration, and presents a systematic planning model for guiding ICT integration in the topic area. A sample of an ICT integration plan is described in this paper to demonstrate how this model can be applied in practice.",
"title": ""
},
{
"docid": "dd0ee90650ffd3e0b6b5815ba23ba655",
"text": "This paper presents the design of a single pole 6 throw antenna switch able to manage all the four GSM standards, i.e. 850-900-1800-1900 MHz. The switch has been integrated in a 0.13mum CMOS SOI process with high resistivity substrate and a thick oxide (50Aring) option. The use of high resistivity substrate allows a good loss (IL)-isolation trade-off: IL is kept in the range of 0.55-0.8 dB for the RXs and at 0.7 dB for the TXs, while isolation varies from 40 dB at 900 MHz to 30 dB at 1900 MHz. Power handling capability is well compatible with GSM standards since an ICP0.1dB of 36 dBm has been measured and harmonics distortion is below -39 dBm for an input power of 34 dBm. Robustness to antenna mismatching condition has been successfully demonstrated up to a VSWR of 10:1. The chip size is of 1.23 mm2 and the power consumption is below 10muA and 0.5 mA respectively in stand by mode and during switching, under 2.5 voltage supply",
"title": ""
},
{
"docid": "e6a60fab31af5985520cc64b93b5deb0",
"text": "BACKGROUND\nGenital warts may mimic a variety of conditions, thus complicating their diagnosis and treatment. The recognition of early flat lesions presents a diagnostic challenge.\n\n\nOBJECTIVE\nWe sought to describe the dermatoscopic features of genital warts, unveiling the possibility of their diagnosis by dermatoscopy.\n\n\nMETHODS\nDermatoscopic patterns of 61 genital warts from 48 consecutively enrolled male patients were identified with their frequencies being used as main outcome measures.\n\n\nRESULTS\nThe lesions were examined dermatoscopically and further classified according to their dermatoscopic pattern. The most frequent finding was an unspecific pattern, which was found in 15/61 (24.6%) lesions; a fingerlike pattern was observed in 7 (11.5%), a mosaic pattern in 6 (9.8%), and a knoblike pattern in 3 (4.9%) cases. In almost half of the lesions, pattern combinations were seen, of which a fingerlike/knoblike pattern was the most common, observed in 11/61 (18.0%) cases. Among the vascular features, glomerular, hairpin/dotted, and glomerular/dotted vessels were the most frequent finding seen in 22 (36.0%), 15 (24.6%), and 10 (16.4%) of the 61 cases, respectively. In 10 (16.4%) lesions no vessels were detected. Hairpin vessels were more often seen in fingerlike (χ(2) = 39.31, P = .000) and glomerular/dotted vessels in knoblike/mosaic (χ(2) = 9.97, P = .008) pattern zones; vessels were frequently missing in unspecified (χ(2) = 8.54, P = .014) areas.\n\n\nLIMITATIONS\nOnly male patients were examined.\n\n\nCONCLUSIONS\nThere is a correlation between dermatoscopic patterns and vascular features reflecting the life stages of genital warts; dermatoscopy may be useful in the diagnosis of early-stage lesions.",
"title": ""
},
{
"docid": "ab7db4c786d2f5b084bf9dd2529baed6",
"text": "New protocols for Internet inter-domain routing struggle to get widely adopted. Because the Internet consists of more than 50,000 autonomous systems (ASes), deployment of a new routing protocol has to be incremental. In this work, we study such incremental deployment. We first formulate the routing problem in regard to a metric of routing cost. Then, the paper proposes and rigorously defines a statistical notion of protocol ignorance that quantifies the inability of a routing protocol to accurately determine routing prices with respect to the metric of interest. The proposed protocol-ignorance model of a routing protocol is fairly generic and can be applied to routing in both inter-domain and intra-domain settings, as well as to transportation and other types of networks. Our model of protocol deployment makes our study specific to Internet interdomain routing. Through a combination of mathematical analysis and simulation, we demonstrate that the benefits from adopting a new inter-domain protocol accumulate smoothly during its incremental deployment. In particular, the simulation shows that decreasing the routing price by 25% requires between 43% and 53% of all nodes to adopt the new protocol. Our findings elucidate the deployment struggle of new inter-domain routing protocols and indicate that wide deployment of such a protocol necessitates involving a large number of relevant ASes into a coordinated effort to adopt the new protocol.",
"title": ""
},
{
"docid": "d9ddbac5032e7ff445ea57ac3fdfe8a9",
"text": "Blood-brain barrier disruption, microglial activation and neurodegeneration are hallmarks of multiple sclerosis. However, the initial triggers that activate innate immune responses and their role in axonal damage remain unknown. Here we show that the blood protein fibrinogen induces rapid microglial responses toward the vasculature and is required for axonal damage in neuroinflammation. Using in vivo two-photon microscopy, we demonstrate that microglia form perivascular clusters before myelin loss or paralysis onset and that, of the plasma proteins, fibrinogen specifically induces rapid and sustained microglial responses in vivo. Fibrinogen leakage correlates with areas of axonal damage and induces reactive oxygen species release in microglia. Blocking fibrin formation with anticoagulant treatment or genetically eliminating the fibrinogen binding motif recognized by the microglial integrin receptor CD11b/CD18 inhibits perivascular microglial clustering and axonal damage. Thus, early and progressive perivascular microglial clustering triggered by fibrinogen leakage upon blood-brain barrier disruption contributes to axonal damage in neuroinflammatory disease.",
"title": ""
},
{
"docid": "482fb0c3b5ead028180c57466f3a092e",
"text": "Separating text lines in handwritten documents remains a challenge because the text lines are often ununiformly skewed and curved. In this paper, we propose a novel text line segmentation algorithm based on Minimal Spanning Tree (MST) clustering with distance metric learning. Given a distance metric, the connected components of document image are grouped into a tree structure. Text lines are extracted by dynamically cutting the edges of the tree using a new objective function. For avoiding artificial parameters and improving the segmentation accuracy, we design the distance metric by supervised learning. Experiments on handwritten Chinese documents demonstrate the superiority of the approach.",
"title": ""
},
{
"docid": "0904545d069ac10ff9783cd9647d4066",
"text": "Technological advances are taking a major role in every field of our life. Today, younger generation is more attached to technology, immerging it mostly for social purposes. Therefore, the importance of its existence cannot be ignored. For that, it is the time for every mentor to apply technology to education. Instructors from different majors need to realize that integrating technology into education is a powerful tool that helps them moderate their course, but never a replacement to their existence. This paper’s interest is to deliver a personal experience to other instructors on how to correctly use technology for educational purposes. One way of clarifying this point is to shed light on a very common social application that is WhatsApp. It is a social application available on every smartphone that is usually used as a social medium among users from different generation. This paper used WhatsApp as an application that can associate technology with learning and teachers’ moderation and collaboration under one roof, and that is by applying Mobile learning. One main question to rise at this point is whether students are going to be collaborative or not with their teacher in applying technology into education. There will be an anticipated approach from this paper on both Mobile learning and WhatsApp, that is to reach an agreement that Mobile learning is essential and adds value to the educational material we have in hand. Great examples from my own data are going to be presented to encourage others to predict new ways that can be added to my effort and others as well. The result hoped for after this paper is to be able to answer any digital immigrants’ questions and help them to be more confident with technology.",
"title": ""
},
{
"docid": "05eb344fb8b671542f6f0228774a5524",
"text": "This paper presents an improved hardware structure for the computation of the Whirlpool hash function. By merging the round key computation with the data compression and by using embedded memories to perform part of the Galois Field (28) multiplication, a core can be implemented in just 43% of the area of the best current related art while achieving a 12% higher throughput. The proposed core improves the Throughput per Slice compared to the state of the art by 160%, achieving a throughput of 5.47 Gbit/s with 2110 slices and 32 BRAMs on a VIRTEX II Pro FPGA. Results for a real application are also presented by considering a polymorphic computational approach.",
"title": ""
},
{
"docid": "a4c312bfe90cecb0b999d6b1c8548fd8",
"text": "Wireless Mesh Networks (WMNs) introduce a new paradigm of wireless broadband Internet access by providing high data rate service, scalability, and self-healing abilities at reduced cost. Obtaining high throughput for multi-cast applications (e.g. video streaming broadcast) in WMNs is challenging due to the interference and the change of channel quality. To overcome this issue, cross-layer has been proposed to improve the performance of WMNs. Network coding is a powerful coding technique that has been proven to be the very effective in achieving the maximum multi-cast throughput. In addition to achieving the multi-cast throughput, network coding offers other benefits such as load balancing and saves bandwidth consumption. This paper presents a review the fundamental concept types of medium access control (MAC) layer, routing protocols, cross-layer and network coding for wireless mesh networks. Finally, a list of directions for further research is considered. ",
"title": ""
},
{
"docid": "9a438856b2cce32bf4e9bcbdc93795a2",
"text": "By balancing the spacing effect against the effects of recency and frequency, this paper explains how practice may be scheduled to maximize learning and retention. In an experiment, an optimized condition using an algorithm determined with this method was compared with other conditions. The optimized condition showed significant benefits with large effect sizes for both improved recall and recall latency. The optimization method achieved these benefits by using a modeling approach to develop a quantitative algorithm, which dynamically maximizes learning by determining for each item when the balance between increasing temporal spacing (that causes better long-term recall) and decreasing temporal spacing (that reduces the failure related time cost of each practice) means that the item is at the spacing interval where long-term gain per unit of practice time is maximal. As practice repetitions accumulate for each item, items become stable in memory and this optimal interval increases.",
"title": ""
},
{
"docid": "63ca8787121e3b392e130f9d451b11ea",
"text": "Frank K.Y. Chan Hong Kong University of Science and Technology",
"title": ""
},
{
"docid": "e615ff8da6cdd43357e41aa97df88cc0",
"text": "In recent years, increasing numbers of people have been choosing herbal medicines or products to improve their health conditions, either alone or in combination with others. Herbs are staging a comeback and herbal \"renaissance\" occurs all over the world. According to the World Health Organization, 75% of the world's populations are using herbs for basic healthcare needs. Since the dawn of mankind, in fact, the use of herbs/plants has offered an effective medicine for the treatment of illnesses. Moreover, many conventional/pharmaceutical drugs are derived directly from both nature and traditional remedies distributed around the world. Up to now, the practice of herbal medicine entails the use of more than 53,000 species, and a number of these are facing the threat of extinction due to overexploitation. This paper aims to provide a review of the history and status quo of Chinese, Indian, and Arabic herbal medicines in terms of their significant contribution to the health promotion in present-day over-populated and aging societies. Attention will be focused on the depletion of plant resources on earth in meeting the increasing demand for herbs.",
"title": ""
},
{
"docid": "5dc4d740028b009f60c24d3107632aa7",
"text": "Modern technology has allowed real-time data collection in a variety of domains, ranging from environmental monitoring to healthcare. Consequently, there is a growing need for algorithms capable of performing inferential tasks in an online manner, continuously revising their estimates to reflect the current status of the underlying process. In particular, we are interested in constructing online and temporally adaptive classifiers capable of handling the possibly drifting decision boundaries arising in streaming environments. We first make a quadratic approximation to the log-likelihood that yields a recursive algorithm for fitting logistic regression online. We then suggest a novel way of equipping this framework with self-tuning forgetting factors. The resulting scheme is capable of tracking changes in the underlying probability distribution, adapting the decision boundary appropriately and hence maintaining high classification accuracy in dynamic or unstable environments. We demonstrate the scheme’s effectiveness in both real and simulated streaming environments.",
"title": ""
},
{
"docid": "1a9be0a664da314c143ca430bd6f4502",
"text": "Fingerprint image quality is an important factor in the perf ormance of Automatic Fingerprint Identification Systems(AFIS). It is used to evaluate the system performance, assess enrollment acceptability, and evaluate fingerprint sensors. This paper presents a novel methodology for fingerp rint image quality measurement. We propose limited ring-wedge spectral measu r to estimate the global fingerprint image features, and inhomogeneity with d rectional contrast to estimate local fingerprint image features. Experimental re sults demonstrate the effectiveness of our proposal.",
"title": ""
},
{
"docid": "31dd02e0a38cfba5ed153c79e434173c",
"text": "Linguistic typology studies the range of structures present in human language. The main goal of the field is to discover which sets of possible phenomena are universal, and which are merely frequent. For example, all languages have vowels, while most—but not all—languages have an [u] sound. In this paper we present the first probabilistic treatment of a basic question in phonological typology: What makes a natural vowel inventory? We introduce a series of deep stochastic point processes, and contrast them with previous computational, simulation-based approaches. We provide a comprehensive suite of experiments on over 200 distinct languages.",
"title": ""
},
{
"docid": "6050bd9f60b92471866d2935d42fce2d",
"text": "As one of the successful forms of using Wisdom of Crowd, crowdsourcing, has been widely used for many human intrinsic tasks, such as image labeling, natural language understanding, market predication and opinion mining. Meanwhile, with advances in pervasive technology, mobile devices, such as mobile phones and tablets, have become extremely popular. These mobile devices can work as sensors to collect multimedia data(audios, images and videos) and location information. This power makes it possible to implement the new crowdsourcing mode: spatial crowdsourcing. In spatial crowdsourcing, a requester can ask for resources related a specific location, the mobile users who would like to take the task will travel to that place and get the data. Due to the rapid growth of mobile device uses, spatial crowdsourcing is likely to become more popular than general crowdsourcing, such as Amazon Turk and Crowdflower. However, to implement such a platform, effective and efficient solutions for worker incentives, task assignment, result aggregation and data quality control must be developed. In this demo, we will introduce gMission, a general spatial crowdsourcing platform, which features with a collection of novel techniques, including geographic sensing, worker detection, and task recommendation. We introduce the sketch of system architecture and illustrate scenarios via several case analysis.",
"title": ""
},
{
"docid": "b52b27e83adf3c7466ab481092969f2e",
"text": "Test suite maintenance tends to have the biggest impact on the overall cost of test automation. Frequently modifications applied on a web application lead to have one or more test cases broken and repairing the test suite is a time-consuming and expensive task. \n This paper reports on an industrial case study conducted in a small Italian company investigating on the analysis of the effort to repair web test suites implemented using different UI locators (e.g., Identifiers and XPath). \n The results of our case study indicate that ID locators used in conjunction with LinkText is the best solution among the considered ones in terms of time required (and LOCs to modify) to repair the test suite to the new release of the application.",
"title": ""
},
{
"docid": "8dc130466a3ab4f9b932fdc5a0a9e991",
"text": "MyMediaLite is a fast and scalable, multi-purpose library of recommender system algorithms, aimed both at recommender system researchers and practitioners. It addresses two common scenarios in collaborative filtering: rating prediction (e.g. on a scale of 1 to 5 stars) and item prediction from positive-only implicit feedback (e.g. from clicks or purchase actions). The library offers state-of-the-art algorithms for those two tasks. Programs that expose most of the library's functionality, plus a GUI demo, are included in the package. Efficient data structures and a common API are used by the implemented algorithms, and may be used to implement further algorithms. The API also contains methods for real-time updates and loading/storing of already trained recommender models.\n MyMediaLite is free/open source software, distributed under the terms of the GNU General Public License (GPL). Its methods have been used in four different industrial field trials of the MyMedia project, including one trial involving over 50,000 households.",
"title": ""
},
{
"docid": "cc7b9d8bc0036b842f3c1f492998abc7",
"text": "This paper presents a new approach called Hierarchical Support Vector Machines (HSVM), to address multiclass problems. The method solves a series of maxcut problems to hierarchically and recursively partition the set of classes into two-subsets, till pure leaf nodes that have only one class label, are obtained. The SVM is applied at each internal node to construct the discriminant function for a binary metaclass classifier. Because maxcut unsupervised decomposition uses distance measures to investigate the natural class groupings. HSVM has a fast and intuitive SVM training process that requires little tuning and yields both high accuracy levels and good generalization. The HSVM method was applied to Hyperion hyperspectral data collected over the Okavango Delta of Botswana. Classification accuracies and generalization capability are compared to those achieved by the Best Basis Binary Hierarchical Classifier, a Random Forest CART binary decision tree classifier and Binary Hierarchical Support Vector Machines.",
"title": ""
}
] |
scidocsrr
|
cf5b3f2365b2309103e40d16f8d04e75
|
Pink Breast Milk: Serratia marcescens Colonization
|
[
{
"docid": "f6f1efae0bd6c6a8a9405814005a8352",
"text": "BACKGROUND\nSerratia marcescens, a known pathogen associated with postpartum mastitis, may be identified by its characteristic pigmentation.\n\n\nCASE\nA 36-year-old P0102 woman presented postpartum and said that her breast pump tubing had turned bright pink. S marcescens was isolated, indicating colonization. She was started on antibiotics. After viewing an Internet report in which a patient nearly died from a Serratia infection, she immediately stopped breastfeeding.\n\n\nCONCLUSION\nSerratia colonization may be noted before the development of overt infection. Because this pathogen can be associated with mastitis, physicians should be ready to treat and should encourage patients to continue nursing after clearance of the organism. Exposure to sensational Internet reports may make treatment recommendations difficult.",
"title": ""
}
] |
[
{
"docid": "3ada908b539c3ca23adda1b0791de211",
"text": "Two competing explanations for deviant employee responses to supervisor abuse are tested. A self-gain view is compared with a self-regulation impairment view. The self-gain view suggests that distributive justice (DJ) will weaken the abusive supervision-employee deviance relationship, as perceptions of fair rewards offset costs of abuse. Conversely, the self-regulation impairment view suggests that DJ will strengthen the relationship, as experiencing abuse drains self-resources needed to maintain appropriate behavior, and this effect intensifies when employees receive inconsistent information about their organizational membership (fair outcomes). Three field studies using different samples, measures, and designs support the self-regulation impairment view. Two studies found that the Abusive Supervision × DJ interaction was mediated by self-regulation impairment variables (ego depletion and intrusive thoughts). Implications for theory and research are discussed.",
"title": ""
},
{
"docid": "c06c067294cbb7bbc129324591d2636c",
"text": "In this article, we propose a new method for localizing optic disc in retinal images. Localizing the optic disc and its center is the first step of most vessel segmentation, disease diagnostic, and retinal recognition algorithms. We use optic disc of the first four retinal images in DRIVE dataset to extract the histograms of each color component. Then, we calculate the average of histograms for each color as template for localizing the center of optic disc. The DRIVE, STARE, and a local dataset including 273 retinal images are used to evaluate the proposed algorithm. The success rate was 100, 91.36, and 98.9%, respectively.",
"title": ""
},
{
"docid": "a910a28224ac10c8b4d2781a73849499",
"text": "The computing machine Z3, buHt by Konrad Zuse from 1938 to 1941, could only execute fixed sequences of floating-point arithmetical operations (addition, subtraction, multiplication, division and square root) coded in a punched tape. We show in this paper that a single program loop containing this type of instructions can simulate any Turing machine whose tape is of bounded size. This is achieved by simulating conditional branching and indirect addressing by purely arithmetical means. Zuse's Z3 is therefore, at least in principle, as universal as today's computers which have a bounded memory size. This result is achieved at the cost of blowing up the size of the program stored on punched tape. Universal Machines and Single Loops Nobody has ever built a universal computer. The reason is that a universal computer consists, in theory, of a fixed processor and a memory of unbounded size. This is the case of Turing machines with their unbounded tapes. In the theory of general recursive functions there is also a small set of rules and some predefined functions, but there is no upper bound on the size of intermediate reduction terms. Modern computers are only potentially universal: They can perform any computation that a Turing machine with a bounded tape can perform. If more storage is required, more can be added without having to modify the processor (provided that the extra memory is still addressable).",
"title": ""
},
{
"docid": "893683af36eea6e8ab03e3dcd1429ad4",
"text": "Obtaining a good baseline between different video frames is one of the key elements in vision-based monocular SLAM systems. However, if the video frames contain only a few 2D feature correspondences with a good baseline, or the camera only rotates without sufficient translation in the beginning, tracking and mapping becomes unstable. We introduce a real-time visual SLAM system that incrementally tracks individual 2D features, and estimates camera pose by using matched 2D features, regardless of the length of the baseline. Triangulating 2D features into 3D points is deferred until key frames with sufficient baseline for the features are available. Our method can also deal with pure rotational motions, and fuse the two types of measurements in a bundle adjustment step. Adaptive criteria for key frame selection are also introduced for efficient optimization and dealing with multiple maps. We demonstrate that our SLAM system improves camera pose estimates and robustness, even with purely rotational motions.",
"title": ""
},
{
"docid": "14b36f57ccc2d4814e8855fd7e3b102c",
"text": "The functions of Klotho (KL) are multifaceted and include the regulation of aging and mineral metabolism. It was originally identified as the gene responsible for premature aging-like symptoms in mice and was subsequently shown to function as a coreceptor in the fibroblast growth factor (FGF) 23 signaling pathway. The discovery of KL as a partner for FGF23 led to significant advances in understanding of the molecular mechanisms underlying phosphate and vitamin D metabolism, and simultaneously clarified the pathogenic roles of the FGF23 signaling pathway in human diseases. These novel insights led to the development of new strategies to combat disorders associated with the dysregulated metabolism of phosphate and vitamin D, and clinical trials on the blockade of FGF23 signaling in X-linked hypophosphatemic rickets are ongoing. Molecular and functional insights on KL and FGF23 have been discussed in this review and were extended to how dysregulation of the FGF23/KL axis causes human disorders associated with abnormal mineral metabolism.",
"title": ""
},
{
"docid": "9fefe5e216dec9b11f389c7d62175742",
"text": "Physical interaction in robotics is a complex problem that requires not only accurate reproduction of the kinematic trajectories but also of the forces and torques exhibited during the movement. We base our approach on Movement Primitives (MP), as MPs provide a framework for modelling complex movements and introduce useful operations on the movements, such as generalization to novel situations, time scaling, and others. Usually, MPs are trained with imitation learning, where an expert demonstrates the trajectories. However, MPs used in physical interaction either require additional learning approaches, e.g., reinforcement learning, or are based on handcrafted solutions. Our goal is to learn and generate movements for physical interaction that are learned with imitation learning, from a small set of demonstrated trajectories. The Probabilistic Movement Primitives (ProMPs) framework is a recent MP approach that introduces beneficial properties, such as combination and blending of MPs, and represents the correlations present in the movement. The ProMPs provides a variable stiffness controller that reproduces the movement but it requires a dynamics model of the system. Learning such a model is not a trivial task, and, therefore, we introduce the model-free ProMPs, that are learning jointly the movement and the necessary actions from a few demonstrations. We derive a variable stiffness controller analytically. We further extent the ProMPs to include force and torque signals, necessary for physical interaction. We evaluate our approach in simulated and real robot tasks.",
"title": ""
},
{
"docid": "32fcdb98d3c022262ddc487db5e4d27f",
"text": "Music recommendation is receiving increasing attention as the music industry develops venues to deliver music over the Internet. The goal of music recommendation is to present users lists of songs that they are likely to enjoy. Collaborative-filtering and content-based recommendations are two widely used approaches that have been proposed for music recommendation. However, both approaches have their own disadvantages: collaborative-filtering methods need a large collection of user history data and content-based methods lack the ability of understanding the interests and preferences of users. To overcome these limitations, this paper presents a novel dynamic music similarity measurement strategy that utilizes both content features and user access patterns. The seamless integration of them significantly improves the music similarity measurement accuracy and performance. Based on this strategy, recommended songs are obtained by a means of label propagation over a graph representing music similarity. Experimental results on a real data set collected from http://www.newwisdom.net demonstrate the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "52bee48854d8eaca3b119eb71d79c22d",
"text": "In this paper, we present a new combined approach for feature extraction, classification, and context modeling in an iterative framework based on random decision trees and a huge amount of features. A major focus of this paper is to integrate different kinds of feature types like color, geometric context, and auto context features in a joint, flexible and fast manner. Furthermore, we perform an in-depth analysis of multiple feature extraction methods and different feature types. Extensive experiments are performed on challenging facade recognition datasets, where we show that our approach significantly outperforms previous approaches with a performance gain of more than 15% on the most difficult dataset.",
"title": ""
},
{
"docid": "90ca045940f1bc9517c64bd93fd33d37",
"text": "We present a new algorithm for encoding low dynamic range images into fixed-rate texture compression formats. Our approach provides orders of magnitude improvements in speed over existing publicly-available compressors, while generating high quality results. The algorithm is applicable to any fixed-rate texture encoding scheme based on Block Truncation Coding and we use it to compress images into the OpenGL BPTC format. The underlying technique uses an axis-aligned bounding box to estimate the proper partitioning of a texel block and performs a generalized cluster fit to compute the endpoint approximation. This approximation can be further refined using simulated annealing. The algorithm is inherently parallel and scales with the number of processor cores. We highlight its performance on low-frequency game textures and the high frequency Kodak Test Image Suite.",
"title": ""
},
{
"docid": "6d26012bd529735410477c9f389bbf73",
"text": "Most current planners assume complete domain models and focus on generating correct plans. Unfortunately, domain modeling is a laborious and error-prone task, thus real world agents have to plan with incomplete domain models. While domain experts cannot guarantee completeness, often they are able to circumscribe the incompleteness of the model by providing annotations as to which parts of the domain model may be incomplete. In this paper, we study planning problems with incomplete domain models where the annotations specify possible preconditions and effects of actions. We show that the problem of assessing the quality of a plan, or its plan robustness, is #P -complete, establishing its equivalence with the weighted model counting problems. We present two approaches to synthesizing robust plans. While the method based on the compilation to conformant probabilistic planning is much intuitive, its performance appears to be limited to only small problem instances. Our second approach based on stochastic heuristic search works well for much larger problems. It aims to use the robustness measure directly for estimating heuristic distance, which is then used to guide the search. Our planning system, PISA, outperforms a state-of-the-art planner handling incomplete domain models in most of the tested domains, both in terms of plan quality and planning time. Finally, we also present an extension of PISA called CPISA that is able to exploit the available of past successful plan traces to both improve the robustness of the synthesized plans and reduce the domain modeling burden.",
"title": ""
},
{
"docid": "c89ce2fb6180961cdfee8120b0c17dd8",
"text": "Anti-forensics (AF) is a multi-headed demon with a range of weapons in its arsenal. Sarah Hilley looks at a set of hell-raising attacks directed at prominent forensic tools. Major forensic programs have started to attract unwanted attention from hackers aka security researchers of a type that have plagued mainstream software developers for years. This report focuses on the development of the Metasploit Anti-Forensic Investigation Arsenal (MAFIA).",
"title": ""
},
{
"docid": "326493520ccb5c8db07362f412f57e62",
"text": "This paper introduces Rank-based Interactive Evolution (RIE) which is an alternative to interactive evolution driven by computational models of user preferences to generate personalized content. In RIE, the computational models are adapted to the preferences of users which, in turn, are used as fitness functions for the optimization of the generated content. The preference models are built via ranking-based preference learning, while the content is generated via evolutionary search. The proposed method is evaluated on the creation of strategy game maps, and its performance is tested using artificial agents. Results suggest that RIE is both faster and more robust than standard interactive evolution and outperforms other state-of-the-art interactive evolution approaches.",
"title": ""
},
{
"docid": "7f8211ed8d7c8145f370c46b5bba3ddb",
"text": "The adjectives of quantity (Q-adjectives) many, few, much and little stand out from other quantity expressions on account of their syntactic flexibility, occurring in positions that could be called quantificational (many students attended), predicative (John’s friends were many), attributive (the many students), differential (much more than a liter) and adverbial (slept too much). This broad distribution poses a challenge for the two leading theories of this class, which treat them as either quantifying determiners or predicates over individuals. This paper develops an analysis of Q-adjectives as gradable predicates of sets of degrees or (equivalently) gradable quantifiers over degrees. It is shown that this proposal allows a unified analysis of these items across the positions in which they occur, while also overcoming several issues facing competing accounts, among others the divergences between Q-adjectives and ‘ordinary’ adjectives, the operator-like behavior of few and little, and the use of much as a dummy element. Overall the findings point to the central role of degrees in the semantics of quantity.",
"title": ""
},
{
"docid": "33f6fb40035058f4842d25dae2443167",
"text": "All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the author.",
"title": ""
},
{
"docid": "8cfdd59ba7271d48ea0d41acc2ef795a",
"text": "The Cole single-dispersion impedance model is based upon a constant phase element (CPE), a conductance parameter as a dependent parameter and a characteristic time constant as an independent parameter. Usually however, the time constant of tissue or cell suspensions is conductance dependent, and so the Cole model is incompatible with general relaxation theory and not a model of first choice. An alternative model with conductance as a free parameter influencing the characteristic time constant of the biomaterial has been analyzed. With this free-conductance model it is possible to separately follow CPE and conductive processes, and the nominal time constant no longer corresponds to the apex of the circular arc in the complex plane.",
"title": ""
},
{
"docid": "6947f9e3da52e03e867a0c8c015c17df",
"text": "Graphs are a powerful and versatile tool useful in various subfields of science and engineering. In many applications, for example, in pattern recognition and computer vision, it is required to measure the similarity of objects. When graphs are used for the representation of structured objects, then the problem of measuring object similarity turns into the problem of computing the similarity of graphs, which is also known as graph matching. In this paper, similarity measures on graphs and related algorithms will be reviewed. Applications of graph matching will be demonstrated giving examples from the fields of pattern recognition and computer vision. Also recent theoretical work showing various relations between different similarity measures will be discussed.",
"title": ""
},
{
"docid": "149073f577d0e1fb380ae395ff1ca0c5",
"text": "A complete kinematic model of the 5 DOF-Mitsubishi RV-M1 manipulator is presented in this paper. The forward kinematic model is based on the Modified Denavit-Hartenberg notation, and the inverse one is derived in closed form by fixing the orientation of the tool. A graphical interface is developed using MATHEMATICA software to illustrate the forward and inverse kinematics, allowing student or researcher to have hands-on of virtual graphical model that fully describe both the robot's geometry and the robot's motion in its workspace before to tackle any real task.",
"title": ""
},
{
"docid": "6b5950c88c8cb414a124e74e9bc2ed00",
"text": "As most regular readers of this TRANSACTIONS know, the development of digital signal processing techniques for applications involving image or picture data has been an increasingly active research area for the past decade. Collectively, t h s work is normally characterized under the generic heading “digital image processing.” Interestingly, the two books under review here share this heading as their title. Both are quite ambitious undertakings in that they attempt to integrate contributions from many disciplines (classical systems theory, digital signal processing, computer science, statistical communications, etc.) into unified, comprehensive presentations. In this regard it can be said that both are to some extent successful, although in quite different ways. Why the unusual step of a joint review? A brief overview of the two books reveals that they share not only a common title, but also similar objectives/purposes, intended audiences, structural organizations, and lists of topics considered. A more careful study reveals that substantial differences do exist, however, in the style and depth of subject treatment (as reflected in the difference in their lengths). Given their almost simultaneous publication, it seems appropriate to discuss these similarities/differences in a common setting. After much forethought (and two drafts), the reviewer decided to structure this review by describing the general topical material in their (joint) major sections, with supplementary comments directed toward the individual texts. It is hoped that this will provide the reader with a brief survey of the books’ contents and some flavor of their contrasting approaches. To avoid the identity problems of the joint title, each book will be subsequently referred to using the respective authors’ names: Gonzalez/Wintz and Pratt. Subjects will be correlated with chapter number(s) and approximate l ngth of coverage.",
"title": ""
},
{
"docid": "62ea6783f6a3e6429621286b4a1f068d",
"text": "Aviation delays inconvenience travelers and result in financial losses for stakeholders. Without complex data pre-processing, delay data collected by the existing IATA delay coding system are inadequate to support advanced delay analytics, e.g. large-scale delay propagation tracing in an airline network. Consequently, we developed three new coding schemes aiming at improving the current IATA system. These schemes were tested with specific analysis tasks using simulated delay data and were benchmarked against the IATA system. It was found that a coding scheme with a well-designed reporting style can facilitate automated data analytics and data mining, and an improved grouping of delay codes can minimise potential confusion at the data entry and recording stages. © 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "44272dd2c30ada5b63cc6244c194c43f",
"text": "This paper proposes a method to achieve fast and fluid human-robot interaction by estimating the progress of the movement of the human. The method allows the progress, also referred to as the phase of the movement, to be estimated even when observations of the human are partial and occluded; a problem typically found when using motion capture systems in cluttered environments. By leveraging on the framework of Interaction Probabilistic Movement Primitives (ProMPs), phase estimation makes it possible to classify the human action, and to generate a corresponding robot trajectory before the human finishes his/her movement. The method is therefore suited for semiautonomous robots acting as assistants and coworkers. Since observations may be sparse, our method is based on computing the probability of different phase candidates to find the phase that best aligns the Interaction ProMP with the current observations. The method is fundamentally different from approaches based on Dynamic Time Warping (DTW) that must rely on a consistent stream of measurements at runtime. The phase estimation algorithm can be seamlessly integrated into Interaction ProMPs such that robot trajectory coordination, phase estimation, and action recognition can all be achieved in a single probabilistic framework. We evaluated the method using a 7-DoF lightweight robot arm equipped with a 5-finger hand in single and multi-task collaborative experiments. We compare the accuracy achieved by phase estimation with our previous method based on DTW.",
"title": ""
}
] |
scidocsrr
|
4da19684f8282cca31c25868fefacab5
|
TripPlanner: Personalized Trip Planning Leveraging Heterogeneous Crowdsourced Digital Footprints
|
[
{
"docid": "71e9bb057e90f754f658c736e4f02b7a",
"text": "When tourists visit a city or region, they cannot visit every point of interest available, as they are constrained in time and budget. Tourist recommender applications help tourists by presenting a personal selection. Providing adequate tour scheduling support for these kinds of applications is a daunting task for the application developer. The objective of this paper is to demonstrate how existing models from the field of Operations Research (OR) fit this scheduling problem, and enable a wide range of tourist trip planning functionalities. Using the Orienteering Problem (OP) and its extensions to model the tourist trip planning problem, allows to deal with a vast number of practical planning problems.",
"title": ""
}
] |
[
{
"docid": "fff21e37244f5c097dc9e8935bb92939",
"text": "For the purpose of enhancing the search ability of the cuckoo search (CS) algorithm, an improved robust approach, called HS/CS, is put forward to address the optimization problems. In HS/CS method, the pitch adjustment operation in harmony search (HS) that can be considered as a mutation operator is added to the process of the cuckoo updating so as to speed up convergence. Several benchmarks are applied to verify the proposed method and it is demonstrated that, in most cases, HS/CS performs better than the standard CS and other comparative methods. The parameters used in HS/CS are also investigated by various simulations.",
"title": ""
},
{
"docid": "387c2b51fcac3c4f822ae337cf2d3f8d",
"text": "This paper directly follows and extends, where a novel method for measurement of extreme impedances is described theoretically. In this paper experiments proving that the method can significantly improve stability of a measurement system are described. Using Agilent PNA E8364A vector network analyzer (VNA) the method is able to measure reflection coefficient with stability improved 36-times in magnitude and 354-times in phase compared to the classical method of reflection coefficient measurement. Further, validity of the error model and related equations stated in are verified by real measurement of SMD resistors (size 0603) in microwave test fixture. Values of the measured SMD resistors range from 12 kOmega up to 330 kOmega. A novel calibration technique using three different resistors as calibration standards is used. The measured values of impedances reasonably agree with assumed values.",
"title": ""
},
{
"docid": "6057638a2a1cfd07ab2e691baf93a468",
"text": "Cybersecurity in smart grids is of critical importance given the heavy reliance of modern societies on electricity and the recent cyberattacks that resulted in blackouts. The evolution of the legacy electric grid to a smarter grid holds great promises but also comes up with an increasesd attack surface. In this article, we review state of the art developments in cybersecurity for smart grids, both from a standardization as well technical perspective. This work shows the important areas of future research for academia, and collaboration with government and industry stakeholders to enhance smart grid cybersecurity and make this new paradigm not only beneficial and valuable but also safe and secure.",
"title": ""
},
{
"docid": "305f877227516eded75819bdf48ab26d",
"text": "Deep generative models have been successfully applied to many applications. However, existing works experience limitations when generating large images (the literature usually generates small images, e.g. 32× 32 or 128× 128). In this paper, we propose a novel scheme, called deep tensor adversarial generative nets (TGAN), that generates large high-quality images by exploring tensor structures. Essentially, the adversarial process of TGAN takes place in a tensor space. First, we impose tensor structures for concise image representation, which is superior in capturing the pixel proximity information and the spatial patterns of elementary objects in images, over the vectorization preprocess in existing works. Secondly, we propose TGAN that integrates deep convolutional generative adversarial networks and tensor super-resolution in a cascading manner, to generate high-quality images from random distributions. More specifically, we design a tensor super-resolution process that consists of tensor dictionary learning and tensor coefficients learning. Finally, on three datasets, the proposed TGAN generates images with more realistic textures, compared with state-of-the-art adversarial autoencoders. The size of the generated images is increased by over 8.5 times, namely 374× 374 in PASCAL2.",
"title": ""
},
{
"docid": "dac5cebcbc14b82f7b8df977bed0c9d8",
"text": "While blockchain services hold great promise to improve many different industries, there are significant cybersecurity concerns which must be addressed. In this paper, we investigate security considerations for an Ethereum blockchain hosting a distributed energy management application. We have simulated a microgrid with ten buildings in the northeast U.S., and results of the transaction distribution and electricity utilization are presented. We also present the effects on energy distribution when one or two smart meters have their identities corrupted. We then propose a new approach to digital identity management that would require smart meters to authenticate with the blockchain ledger and mitigate identity-spoofing attacks. Applications of this approach to defense against port scans and DDoS, attacks are also discussed.",
"title": ""
},
{
"docid": "6e80065ade40ada9efde1f58859498bc",
"text": "Neural networks, as powerful tools for data mining and knowledge engineering, can learn from data to build feature-based classifiers and nonlinear predictive models. Training neural networks involves the optimization of nonconvex objective functions, and usually, the learning process is costly and infeasible for applications associated with data streams. A possible, albeit counterintuitive, alternative is to randomly assign a subset of the networks’ weights so that the resulting optimization task can be formulated as a linear least-squares problem. This methodology can be applied to both feedforward and recurrent networks, and similar techniques can be used to approximate kernel functions. Many experimental results indicate that such randomized models can reach sound performance compared to fully adaptable ones, with a number of favorable benefits, including (1) simplicity of implementation, (2) faster learning with less intervention from human beings, and (3) possibility of leveraging overall linear regression and classification algorithms (e.g., l1 norm minimization for obtaining sparse formulations). This class of neural networks attractive and valuable to the data mining community, particularly for handling large scale data mining in real-time. However, the literature in the field is extremely vast and fragmented, with many results being reintroduced multiple times under different names. This overview aims to provide a self-contained, uniform introduction to the different ways in which randomization can be applied to the design of neural networks and kernel functions. A clear exposition of the basic framework underlying all these approaches helps to clarify innovative lines of research, open problems, and most importantly, foster the exchanges of well-known results throughout different communities. © 2017 John Wiley & Sons, Ltd",
"title": ""
},
{
"docid": "c4df97f3db23c91f0ce02411d2e1e999",
"text": "One important challenge for probabilistic logics is reasoning with very large knowledge bases (KBs) of imperfect information, such as those produced by modern web-scale information extraction systems. One scalability problem shared by many probabilistic logics is that answering queries involves “grounding” the query—i.e., mapping it to a propositional representation—and the size of a “grounding” grows with database size. To address this bottleneck, we present a first-order probabilistic language called ProPPR in which approximate “local groundings” can be constructed in time independent of database size. Technically, ProPPR is an extension to stochastic logic programs that is biased towards short derivations; it is also closely related to an earlier relational learning algorithm called the path ranking algorithm. We show that the problem of constructing proofs for this logic is related to computation of personalized PageRank on a linearized version of the proof space, and based on this connection, we develop a provably-correct approximate grounding scheme, based on the PageRank–Nibble algorithm. Building on this, we develop a fast and easily-parallelized weight-learning algorithm for ProPPR. In our experiments, we show that learning for ProPPR is orders of magnitude faster than learning for Markov logic networks; that allowing mutual recursion (joint learning) in KB inference leads to improvements in performance; and that ProPPR can learn weights for a mutually recursive program with hundreds of clauses defining scores of interrelated predicates over a KB containing one million entities.",
"title": ""
},
{
"docid": "0e2b885774f69342ade2b9ad1bc84835",
"text": "History repeatedly demonstrates that rural communities have unique technological needs. Yet, we know little about how rural communities use modern technologies, so we lack knowledge on how to design for them. To address this gap, our empirical paper investigates behavioral differences between more than 3,000 rural and urban social media users. Using a dataset collected from a broadly popular social network site, we analyze users' profiles, 340,000 online friendships and 200,000 interpersonal messages. Using social capital theory, we predict differences between rural and urban users and find strong evidence supporting our hypotheses. Namely, rural people articulate far fewer friends online, and those friends live much closer to home. Our results also indicate that the groups have substantially different gender distributions and use privacy features differently. We conclude by discussing design implications drawn from our findings; most importantly, designers should reconsider the binary friend-or-not model to allow for incremental trust-building.",
"title": ""
},
{
"docid": "44928aa4c5b294d1b8f24eaab14e9ce7",
"text": "Most exact algorithms for solving partially observable Markov decision processes (POMDPs) are based on a form of dynamic programming in which a piecewise-linear and convex representation of the value function is updated at every iteration to more accurately approximate the true value function. However, the process is computationally expensive, thus limiting the practical application of POMDPs in planning. To address this current limitation, we present a parallel distributed algorithm based on the Restricted Region method proposed by Cassandra, Littman and Zhang [1]. We compare performance of the parallel algorithm against a serial implementation Restricted Region.",
"title": ""
},
{
"docid": "d83031118ea8c9bcdfc6df0d26b87e15",
"text": "Camera-based motion tracking has become a popular enabling technology for gestural human-computer interaction. However, the approach suffers from several limitations, which have been shown to be particularly problematic when employed within musical contexts. This paper presents Leimu, a wrist mount that couples a Leap Motion optical sensor with an inertial measurement unit to combine the benefits of wearable and camera-based motion tracking. Leimu is designed, developed and then evaluated using discourse and statistical analysis methods. Qualitative results indicate that users consider Leimu to be an effective interface for gestural music interaction and the quantitative results demonstrate that the interface offers improved tracking precision over a Leap Motion positioned on a table top.",
"title": ""
},
{
"docid": "8e3bf062119c6de9fa5670ce4b00764b",
"text": "Heating red phosphorus in sealed ampoules in the presence of a Sn/SnI4 catalyst mixture has provided bulk black phosphorus at much lower pressures than those required for allotropic conversion by anvil cells. Herein we report the growth of ultra-long 1D red phosphorus nanowires (>1 mm) selectively onto a wafer substrate from red phosphorus powder and a thin film of red phosphorus in the present of a Sn/SnI4 catalyst. Raman spectra and X-ray diffraction characterization suggested the formation of crystalline red phosphorus nanowires. FET devices constructed with the red phosphorus nanowires displayed a typical I-V curve similar to that of black phosphorus and a similar mobility reaching 300 cm(2) V(-1) s with an Ion /Ioff ratio approaching 10(2) . A significant response to infrared light was observed from the FET device.",
"title": ""
},
{
"docid": "914b38c4a5911a481bf9088f75adef30",
"text": "This paper presents a mixed-integer LP approach to the solution of the long-term transmission expansion planning problem. In general, this problem is large-scale, mixed-integer, nonlinear, and nonconvex. We derive a mixed-integer linear formulation that considers losses and guarantees convergence to optimality using existing optimization software. The proposed model is applied to Garver’s 6-bus system, the IEEE Reliability Test System, and a realistic Brazilian system. Simulation results show the accuracy as well as the efficiency of the proposed solution technique.",
"title": ""
},
{
"docid": "de9ed927d395f78459e84b1c27f9c746",
"text": "JuMP is an open-source modeling language that allows users to express a wide range of optimization problems (linear, mixed-integer, quadratic, conic-quadratic, semidefinite, and nonlinear) in a high-level, algebraic syntax. JuMP takes advantage of advanced features of the Julia programming language to offer unique functionality while achieving performance on par with commercial modeling tools for standard tasks. In this work we will provide benchmarks, present the novel aspects of the implementation, and discuss how JuMP can be extended to new problem classes and composed with state-of-the-art tools for visualization and interactivity.",
"title": ""
},
{
"docid": "2488c17b39dd3904e2f17448a8519817",
"text": "Young healthy participants spontaneously use different strategies in a virtual radial maze, an adaptation of a task typically used with rodents. Functional magnetic resonance imaging confirmed previously that people who used spatial memory strategies showed increased activity in the hippocampus, whereas response strategies were associated with activity in the caudate nucleus. Here, voxel based morphometry was used to identify brain regions covarying with the navigational strategies used by individuals. Results showed that spatial learners had significantly more gray matter in the hippocampus and less gray matter in the caudate nucleus compared with response learners. Furthermore, the gray matter in the hippocampus was negatively correlated to the gray matter in the caudate nucleus, suggesting a competitive interaction between these two brain areas. In a second analysis, the gray matter of regions known to be anatomically connected to the hippocampus, such as the amygdala, parahippocampal, perirhinal, entorhinal and orbitofrontal cortices were shown to covary with gray matter in the hippocampus. Because low gray matter in the hippocampus is a risk factor for Alzheimer's disease, these results have important implications for intervention programs that aim at functional recovery in these brain areas. In addition, these data suggest that spatial strategies may provide protective effects against degeneration of the hippocampus that occurs with normal aging.",
"title": ""
},
{
"docid": "fdc4d23fa336ca122fdfb12818901180",
"text": "Concept of communication systems, which use smart antennas is based on digital signal processing algorithms. Thus, the smart antennas system becomes capable to locate and track signals by the both: users and interferers and dynamically adapts the antenna pattern to enhance the reception in Signal-Of-Interest direction and minimizing interference in Signal-Of-Not-Interest direction. Hence, Space Division Multiple Access system, which uses smart antennas, is being used more often in wireless communications, because it shows improvement in channel capacity and co-channel interference. However, performance of smart antenna system greatly depends on efficiency of digital signal processing algorithms. The algorithm uses the Direction of Arrival (DOA) algorithms to estimate the number of incidents plane waves on the antenna array and their angle of incidence. This paper investigates performance of the DOA algorithms like MUSIC, ESPRIT and ROOT MUSIC on the uniform linear array in the presence of white noise. The simulation results show that MUSIC algorithm is the best. The resolution of the DOA techniques improves as number of snapshots, number of array elements and signalto-noise ratio increases.",
"title": ""
},
{
"docid": "1ab4f605d67dabd3b2815a39b6123aa4",
"text": "This paper examines and provides the theoretical evidence of the feasibility of 60 GHz mmWave in wireless body area networks (WBANs), by analyzing its properties. It has been shown that 60 GHz based communication could better fit WBANs compared to traditional 2.4 GHz based communication because of its compact network coverage, miniaturized devices, superior frequency reuse, multi-gigabyte transmission rate and the therapeutic merits for human health. Since allowing coexistence among the WBANs can enhance the efficiency of the mmWave based WBANs, we formulated the coexistence problem as a non-cooperative distributed power control game. This paper proves the existence of Nash equilibrium (NE) and derives the best response move as a solution. The efficiency of the NE is also improved by modifying the utility function and introducing a pair of pricing factors. Our simulation results indicate that the proposed pricing policy significantly improves the efficiency in terms of Pareto optimality and social optimality.",
"title": ""
},
{
"docid": "e38cbee5c03319d15086e9c39f7f8520",
"text": "In this paper we describe COLIN, a forward-chaining heuristic search planner, capable of reasoning with COntinuous LINear numeric change, in addition to the full temporal semantics of PDDL2.1. Through this work we make two advances to the state-of-the-art in terms of expressive reasoning capabilities of planners: the handling of continuous linear change, and the handling of duration-dependent effects in combination with duration inequalities, both of which require tightly coupled temporal and numeric reasoning during planning. COLIN combines FF-style forward chaining search, with the use of a Linear Program (LP) to check the consistency of the interacting temporal and numeric constraints at each state. The LP is used to compute bounds on the values of variables in each state, reducing the range of actions that need to be considered for application. In addition, we develop an extension of the Temporal Relaxed Planning Graph heuristic of CRIKEY3, to support reasoning directly with continuous change. We extend the range of task variables considered to be suitable candidates for specifying the gradient of the continuous numeric change effected by an action. Finally, we explore the potential for employing mixed integer programming as a tool for optimising the timestamps of the actions in the plan, once a solution has been found. To support this, we further contribute a selection of extended benchmark domains that include continuous numeric effects. We present results for COLIN that demonstrate its scalability on a range of benchmarks, and compare to existing state-of-the-art planners.",
"title": ""
},
{
"docid": "16a8fc39efe95c05a25deba4da6aa806",
"text": "Although effective treatments for obsessive-compulsive disorder (OCD) exist, there are significant barriers to receiving evidence-based care. Mobile health applications (Apps) offer a promising way of overcoming these barriers by increasing access to treatment. The current study investigated the feasibility, acceptability, and preliminary efficacy of LiveOCDFree, an App designed to help OCD patients conduct exposure and response prevention (ERP). Twenty-one participants with mild to moderate symptoms of OCD were enrolled in a 12-week open trial of App-guided self-help ERP. Self-report assessments of OCD, depression, anxiety, and quality of life were completed at baseline, mid-treatment, and post-treatment. App-guided ERP was a feasible and acceptable self-help intervention for individuals with OCD, with high rates of retention and satisfaction. Participants reported significant improvement in OCD and anxiety symptoms pre- to post-treatment. Findings suggest that LiveOCDFree is a feasible and acceptable self-help intervention for OCD. Preliminary efficacy results are encouraging and point to the potential utility of mobile Apps in expanding the reach of existing empirically supported treatments.",
"title": ""
},
{
"docid": "9b4c240bd55523360e92dbed26cb5dc2",
"text": "CBT has been seen as an alternative to the unmanageable population of undergraduate students in Nigerian universities. This notwithstanding, the peculiar nature of some courses hinders its total implementation. This study was conducted to investigate the students’ perception of CBT for undergraduate chemistry courses in University of Ilorin. To this end, it examined the potential for using student feedback in the validation of assessment. A convenience sample of 48 students who had taken test on CBT in chemistry was surveyed and questionnaire was used for data collection. Data analysis demonstrated an auspicious characteristics of the target context for the CBT implementation as majority (95.8%) of students said they were competent with the use of computers and 75% saying their computer anxiety was only mild or low but notwithstanding they have not fully accepted the testing mode with only 29.2% in favour of it, due to the impaired validity of the test administration which they reported as being many erroneous chemical formulas, equations and structures in the test items even though they have nonetheless identified the achieved success the testing has made such as immediate scoring, fastness and transparency in marking. As quality of designed items improves and sufficient time is allotted according to the test difficulty, the test experience will become favourable for students and subsequently CBT will gain its validation in this particular context.",
"title": ""
},
{
"docid": "b53f1a0b71fe5588541195d405b4a104",
"text": "We propose a neural machine-reading model that constructs dynamic knowledge graphs from procedural text. It builds these graphs recurrently for each step of the described procedure, and uses them to track the evolving states of participant entities. We harness and extend a recently proposed machine reading comprehension (MRC) model to query for entity states, since these states are generally communicated in spans of text and MRC models perform well in extracting entity-centric spans. The explicit, structured, and evolving knowledge graph representations that our model constructs can be used in downstream question answering tasks to improve machine comprehension of text, as we demonstrate empirically. On two comprehension tasks from the recently proposed PROPARA dataset (Dalvi et al., 2018), our model achieves state-of-the-art results. We further show that our model is competitive on the RECIPES dataset (Kiddon et al., 2015), suggesting it may be generally applicable. We present some evidence that the model’s knowledge graphs help it to impose commonsense constraints on its predictions.",
"title": ""
}
] |
scidocsrr
|
eb78f2f66c5e7e2e7817d8c15b672e06
|
A deep representation for depth images from synthetic data
|
[
{
"docid": "01534202e7db5d9059651290e1720bf0",
"text": "The objective of this paper is the effective transfer of the Convolutional Neural Network (CNN) feature in image search and classification. Systematically, we study three facts in CNN transfer. 1) We demonstrate the advantage of using images with a properly large size as input to CNN instead of the conventionally resized one. 2) We benchmark the performance of different CNN layers improved by average/max pooling on the feature maps. Our observation suggests that the Conv5 feature yields very competitive accuracy under such pooling step. 3) We find that the simple combination of pooled features extracted across variou s CNN layers is effective in collecting evidences from both low and high level descriptors. Following these good practices, we are capable of improving the state of the art on a number of benchmarks to a large margin.",
"title": ""
}
] |
[
{
"docid": "b716af4916ac0e4a0bf0b040dccd352b",
"text": "Modeling visual attention-particularly stimulus-driven, saliency-based attention-has been a very active research area over the past 25 years. Many different models of attention are now available which, aside from lending theoretical contributions to other fields, have demonstrated successful applications in computer vision, mobile robotics, and cognitive systems. Here we review, from a computational perspective, the basic concepts of attention implemented in these models. We present a taxonomy of nearly 65 models, which provides a critical comparison of approaches, their capabilities, and shortcomings. In particular, 13 criteria derived from behavioral and computational studies are formulated for qualitative comparison of attention models. Furthermore, we address several challenging issues with models, including biological plausibility of the computations, correlation with eye movement datasets, bottom-up and top-down dissociation, and constructing meaningful performance measures. Finally, we highlight current research trends in attention modeling and provide insights for future.",
"title": ""
},
{
"docid": "020970e68281409d378e6682a780f54c",
"text": "Lung Carcinoma is a disease of uncontrolled growth of cancerous cells in the tissues of the lungs. The early detection of lung cancer is the key of its cure. Early diagnosis of the disease saves enormous lives, failing in which may lead to other severe problems causing sudden fatal death. In general, a measure for early stage diagnosis mainly includes X-rays, CT-images, MRI’s, etc. In this system first we would use some techniques that are essential for the task of medical image mining such as Data Preprocessing, Training and testing of samples, Classification using Backpropagation Neural Network which would classify the digital X-ray, CT-images, MRI’s, etc. as normal or abnormal. The normal state is the one that characterizes a healthy patient. The abnormal image will be further considered for the feature analysis. Further for optimized analysis of features Genetic Algorithm will be used that would extract as well as select features on the basis of the fitness of the features extracted. The selected features would be further classified as cancerous or noncancerous for the images classified as abnormal before. Hence this system will help to draw an appropriate decision about a particular patient’s state. Keywords—BackpopagationNeuralNetworks,Classification, Genetic Algorithm, Lung Cancer, Medical Image Mining.",
"title": ""
},
{
"docid": "9ec39badc92094783fcaaa28c2eb2f7a",
"text": "In trying to solve multiobjective optimization problems, many traditional methods scalarize the objective vector into a single objective. In those cases, the obtained solution is highly sensitive to the weight vector used in the scalarization process and demands that the user have knowledge about the underlying problem. Moreover, in solving multiobjective problems, designers may be interested in a set of Pareto-optimal points, instead of a single point. Since genetic algorithms (GAs) work with a population of points, it seems natural to use GAs in multiobjective optimization problems to capture a number of solutions simultaneously. Although a vector evaluated GA (VEGA) has been implemented by Schaffer and has been tried to solve a number of multiobjective problems, the algorithm seems to have bias toward some regions. In this paper, we investigate Goldberg's notion of nondominated sorting in GAs along with a niche and speciation method to find multiple Pareto-optimal points simultaneously. The proof-of-principle results obtained on three problems used by Schaffer and others suggest that the proposed method can be extended to higher dimensional and more difficult multiobjective problems. A number of suggestions for extension and application of the algorithm are also discussed.",
"title": ""
},
{
"docid": "344112b4ecf386026fd4c4714f0f3087",
"text": "This paper deals with easy programming methods of dual-arm manipulation tasks for humanoid robots. Hereby a programming by demonstration system is used in order to observe, learn and generalize tasks performed by humans. A classification for dual-arm manipulations is introduced, enabling a segmentation of tasks into adequate subtasks. Further it is shown how the generated programs are mapped on and executed by a humanoid robot.",
"title": ""
},
{
"docid": "f465415bf9cc982b4eb75ee9a02b1468",
"text": "After the demise of the Industrial Age, we currently live in an 'Information Age' fuelled mainly by the Internet, with an ever-increasing medically and dentally literate population. The media has played its role by reporting scientific advances, as well as securitising medical and dental practices. Reality television such as 'Extreme makeovers' has also raised public awareness of body enhancements, with a greater number of people seeking such procedures. To satiate this growing demand, the dental industry has flourished by introducing novel cosmetic products such as bleaching kits, tooth coloured filling materials and a variety of dental ceramics. In addition, one only has to browse through a dental journal to notice innumerable courses and lectures on techniques for providing cosmetic dentistry. The incessant public interest, combined with unrelenting marketing by companies is gradually shifting the balance of dental care from a healing to an enhancement profession. The purpose of this article is to endeavour to answer questions such as, What is aesthetic or cosmetic dentistry? Why do patients seek cosmetic dentistry? Are enhancement procedures a part of dental practice? What, if any, ethical guidelines and constraints apply to elective enhancement procedures? What is the role of the dentist in providing or encouraging this type of 'therapy'? What treatment modalities are available for aesthetic dental treatment?",
"title": ""
},
{
"docid": "db8325925cb9fd1ebdcf7480735f5448",
"text": "A general nonparametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure, the mean shift. We prove for discrete data the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and thus its utility in detecting the modes of the density. The equivalence of the mean shift procedure to the Nadaraya–Watson estimator from kernel regression and the robust M-estimators of location is also established. Algorithms for two low-level vision tasks, discontinuity preserving smoothing and image segmentation are described as applications. In these algorithms the only user set parameter is the resolution of the analysis, and either gray level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.",
"title": ""
},
{
"docid": "ff71838a3f8f44e30dc69ed2f9371bfc",
"text": "The idea that video games or computer-based applications can improve cognitive function has led to a proliferation of programs claiming to \"train the brain.\" However, there is often little scientific basis in the development of commercial training programs, and many research-based programs yield inconsistent or weak results. In this study, we sought to better understand the nature of cognitive abilities tapped by casual video games and thus reflect on their potential as a training tool. A moderately large sample of participants (n=209) played 20 web-based casual games and performed a battery of cognitive tasks. We used cognitive task analysis and multivariate statistical techniques to characterize the relationships between performance metrics. We validated the cognitive abilities measured in the task battery, examined a task analysis-based categorization of the casual games, and then characterized the relationship between game and task performance. We found that games categorized to tap working memory and reasoning were robustly related to performance on working memory and fluid intelligence tasks, with fluid intelligence best predicting scores on working memory and reasoning games. We discuss these results in the context of overlap in cognitive processes engaged by the cognitive tasks and casual games, and within the context of assessing near and far transfer. While this is not a training study, these findings provide a methodology to assess the validity of using certain games as training and assessment devices for specific cognitive abilities, and shed light on the mixed transfer results in the computer-based training literature. Moreover, the results can inform design of a more theoretically-driven and methodologically-sound cognitive training program.",
"title": ""
},
{
"docid": "9760e3676a7df5e185ec35089d06525e",
"text": "This paper examines the sufficiency of existing e-Learning standards for facilitating and supporting the introduction of adaptive techniques in computer-based learning systems. To that end, the main representational and operational requirements of adaptive learning environments are examined and contrasted against current eLearning standards. The motivation behind this preliminary analysis is attainment of: interoperability between adaptive learning systems; reuse of adaptive learning materials; and, the facilitation of adaptively supported, distributed learning activities.",
"title": ""
},
{
"docid": "3ed927f16de87a753fd7c1cc2cce7cef",
"text": "The state-of-the-art in securing mobile software systems are substantially intended to detect and mitigate vulnerabilities in a single app, but fail to identify vulnerabilities that arise due to the interaction of multiple apps, such as collusion attacks and privilege escalation chaining, shown to be quite common in the apps on the market. This paper demonstrates COVERT, a novel approach and accompanying tool-suite that relies on a hybrid static analysis and lightweight formal analysis technique to enable compositional security assessment of complex software. Through static analysis of Android application packages, it extracts relevant security specifications in an analyzable formal specification language, and checks them as a whole for inter-app vulnerabilities. To our knowledge, COVERT is the first formally-precise analysis tool for automated compositional analysis of Android apps. Our study of hundreds of Android apps revealed dozens of inter-app vulnerabilities, many of which were previously unknown. A video highlighting the main features of the tool can be found at: http://youtu.be/bMKk7OW7dGg.",
"title": ""
},
{
"docid": "80c1f7e845e21513fc8eaf644b11bdc5",
"text": "We describe survey results from a representative sample of 1,075 U. S. social network users who use Facebook as their primary network. Our results show a strong association between low engagement and privacy concern. Specifically, users who report concerns around sharing control, comprehension of sharing practices or general Facebook privacy concern, also report consistently less time spent as well as less (self-reported) posting, commenting and \"Like\"ing of content. The limited evidence of other significant differences between engaged users and others suggests that privacy-related concerns may be an important gate to engagement. Indeed, privacy concern and network size are the only malleable attributes that we find to have significant association with engagement. We manually categorize the privacy concerns finding that many are nonspecific and not associated with negative personal experiences. Finally, we identify some education and utility issues associated with low social network activity, suggesting avenues for increasing engagement amongst current users.",
"title": ""
},
{
"docid": "23d1534a9daee5eeefaa1fdc8a5db0aa",
"text": "Obtaining a protein’s 3D structure is crucial to the understanding of its functions and interactions with other proteins. It is critical to accelerate the protein crystallization process with improved accuracy for understanding cancer and designing drugs. Systematic high-throughput approaches in protein crystallization have been widely applied, generating a large number of protein crystallization-trial images. Therefore, an efficient and effective automatic analysis for these images is a top priority. In this paper, we present a novel system, CrystalNet, for automatically labeling outcomes of protein crystallization-trial images. CrystalNet is a deep convolutional neural network that automatically extracts features from X-ray protein crystallization images for classification. We show that (1) CrystalNet can provide real-time labels for crystallization images effectively, requiring approximately 2 seconds to provide labels for all 1536 images of crystallization microassay on each plate; (2) compared with the stateof-the-art classification systems in crystallization image analysis, our technique demonstrates an improvement of 8% in accuracy, and achieve 90.8% accuracy in classification. As a part of the high-throughput pipeline which generates millions of images a year, CrystalNet can lead to a substantial reduction of labor-intensive screening.",
"title": ""
},
{
"docid": "af6c9c39b9d1be54ccc6e2478823df16",
"text": "Mobile security threats have recently emerged because of the fast growth in mobile technologies and the essential role that mobile devices play in our daily lives. For that, and to particularly address threats associated with malware, various techniques are developed in the literature, including ones that utilize static, dynamic, on-device, off-device, and hybrid approaches for identifying, classifying, and defend against mobile threats. Those techniques fail at times, and succeed at other times, while creating a trade-off of performance and operation. In this paper, we contribute to the mobile security defense posture by introducing Andro-AutoPsy, an anti-malware system based on similarity matching of malware-centric and malware creator-centric information. Using Andro-AutoPsy, we detect and classify malware samples into similar subgroups by exploiting the profiles extracted from integrated footprints, which are implicitly equivalent to distinct characteristics. The experimental results demonstrate that Andro-AutoPsy is scalable, performs precisely in detecting and classifying malware with low false positives and false negatives, and is capable of identifying zero-day mobile malware. © 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e083b5fdf76bab5cdc8fcafc77db23f7",
"text": "Working under a model of privacy in which data remains private even from the statistician, we study the tradeoff between privacy guarantees and the risk of the resulting statistical estimators. We develop private versions of classical information-theoretic bounds, in particular those due to Le Cam, Fano, and Assouad. These inequalities allow for a precise characterization of statistical rates under local privacy constraints and the development of provably (minimax) optimal estimation procedures. We provide a treatment of several canonical families of problems: mean estimation and median estimation, multinomial probability estimation, and nonparametric density estimation. For all of these families, we provide lower and upper bounds that match up to constant factors, and exhibit new (optimal) privacy-preserving mechanisms and computationally efficient estimators that achieve the bounds. Additionally, we present a variety of experimental results for estimation problems involving sensitive data, including salaries, censored blog posts and articles, and drug abuse; these experiments demonstrate the importance of deriving optimal procedures.",
"title": ""
},
{
"docid": "7ec7b4783afb72ff3b182e1375187b11",
"text": "Climate change is predicted to increase the intensity and negative impacts of urban heat events, prompting the need to develop preparedness and adaptation strategies that reduce societal vulnerability to extreme heat. Analysis of societal vulnerability to extreme heat events requires an interdisciplinary approach that includes information about weather and climate, the natural and built environment, social processes and characteristics, interactions with stakeholders, and an assessment of community vulnerability at a local level. In this letter, we explore the relationships between people and places, in the context of urban heat stress, and present a new research framework for a multi-faceted, top-down and bottom-up analysis of local-level vulnerability to extreme heat. This framework aims to better represent societal vulnerability through the integration of quantitative and qualitative data that go beyond aggregate demographic information. We discuss how different elements of the framework help to focus attention and resources on more targeted health interventions, heat hazard mitigation and climate adaptation strategies.",
"title": ""
},
{
"docid": "35a85d6652bd333d93f8112aff83ab83",
"text": "For natural language understanding (NLU) technology to be maximally useful, both practically and as a scientific object of study, it must be general: it must be able to process language in a way that is not exclusively tailored to any one specific task or dataset. In pursuit of this objective, we introduce the General Language Understanding Evaluation benchmark (GLUE), a tool for evaluating and analyzing the performance of models across a diverse range of existing NLU tasks. GLUE is modelagnostic, but it incentivizes sharing knowledge across tasks because certain tasks have very limited training data. We further provide a hand-crafted diagnostic test suite that enables detailed linguistic analysis of NLU models. We evaluate baselines based on current methods for multi-task and transfer learning and find that they do not immediately give substantial improvements over the aggregate performance of training a separate model per task, indicating room for improvement in developing general and robust NLU systems.",
"title": ""
},
{
"docid": "2d8baa9a78e5e20fd20ace55724e2aec",
"text": "To determine the relationship between fatigue and post-activation potentiation, we examined the effects of sub-maximal continuous running on neuromuscular function tests, as well as on the squat jump and counter movement jump in endurance athletes. The height of the squat jump and counter movement jump and the estimate of the fast twitch fiber recruiting capabilities were assessed in seven male middle distance runners before and after 40 min of continuous running at an intensity corresponding to the individual lactate threshold. The same test was then repeated after three weeks of specific aerobic training. Since the three variables were strongly correlated, only the estimate of the fast twitch fiber was considered for the results. The subjects showed a significant improvement in the fast twitch fiber recruitment percentage after the 40 min run. Our data show that submaximal physical exercise determined a change in fast twitch muscle fiber recruitment patterns observed when subjects performed vertical jumps; however, this recruitment capacity was proportional to the subjects' individual fast twitch muscle fiber profiles measured before the 40 min run. The results of the jump tests did not change significantly after the three-week training period. These results suggest that pre-fatigue methods, through sub-maximal exercises, could be used to take advantage of explosive capacity in middle-distance runners.",
"title": ""
},
{
"docid": "5f9da666504ade5b661becfd0a648978",
"text": "cefe.cnrs-mop.fr Under natural selection, individuals tend to adapt to their local environmental conditions, resulting in a pattern of LOCAL ADAPTATION (see Glossary). Local adaptation can occur if the direction of selection changes for an allele among habitats (antagonistic environmental effect), but it might also occur if the intensity of selection at several loci that are maintained as polymorphic by recurrent mutations covaries negatively among habitats. These two possibilities have been clearly identified in the related context of the evolution of senescence but have not have been fully appreciated in empirical and theoretical studies of local adaptation [1,2].",
"title": ""
},
{
"docid": "74770d8f7e0ac066badb9760a6a2b925",
"text": "Memristor-based synaptic network has been widely investigated and applied to neuromorphic computing systems for the fast computation and low design cost. As memristors continue to mature and achieve higher density, bit failures within crossbar arrays can become a critical issue. These can degrade the computation accuracy significantly. In this work, we propose a defect rescuing design to restore the computation accuracy. In our proposed design, significant weights in a specified network are first identified and retraining and remapping algorithms are described. For a two layer neural network with 92.64% classification accuracy on MNIST digit recognition, our evaluation based on real device testing shows that our design can recover almost its full performance when 20% random defects are present.",
"title": ""
},
{
"docid": "ff5c993fd071b31b6f639d1f64ce28b0",
"text": "We show that explicit pragmatic inference aids in correctly generating and following natural language instructions for complex, sequential tasks. Our pragmatics-enabled models reason about why speakers produce certain instructions, and about how listeners will react upon hearing them. Like previous pragmatic models, we use learned base listener and speaker models to build a pragmatic speaker that uses the base listener to simulate the interpretation of candidate descriptions, and a pragmatic listener that reasons counterfactually about alternative descriptions. We extend these models to tasks with sequential structure. Evaluation of language generation and interpretation shows that pragmatic inference improves state-of-the-art listener models (at correctly interpreting human instructions) and speaker models (at producing instructions correctly interpreted by humans) in diverse settings.",
"title": ""
}
] |
scidocsrr
|
5b1448e6785d658ae917ff67dab4c06e
|
Detecting and Analyzing Urban Regions with High Impact of Weather Change on Transport
|
[
{
"docid": "1b53378e33f24f59eb0486f2978bebee",
"text": "The advances in location-acquisition and mobile computing techniques have generated massive spatial trajectory data, which represent the mobility of a diversity of moving objects, such as people, vehicles, and animals. Many techniques have been proposed for processing, managing, and mining trajectory data in the past decade, fostering a broad range of applications. In this article, we conduct a systematic survey on the major research into trajectory data mining, providing a panorama of the field as well as the scope of its research topics. Following a road map from the derivation of trajectory data, to trajectory data preprocessing, to trajectory data management, and to a variety of mining tasks (such as trajectory pattern mining, outlier detection, and trajectory classification), the survey explores the connections, correlations, and differences among these existing techniques. This survey also introduces the methods that transform trajectories into other data formats, such as graphs, matrices, and tensors, to which more data mining and machine learning techniques can be applied. Finally, some public trajectory datasets are presented. This survey can help shape the field of trajectory data mining, providing a quick understanding of this field to the community.",
"title": ""
}
] |
[
{
"docid": "d2a9cbd5c03ddccc55d1fc00c8d2ff0c",
"text": "Having good decision support is absolutely necessary nowadays because of the need to improve and gain value. For any organization, it is vital to obtain value from anything it can and having huge amounts of data, Big Data,_pushes them to do so. But only having Big Data is not enough. The most important thing is to use it smartly in order to gain valuable decision support. Making sense of Big Data would happen when we are able to use all the data we have and get important hints and directions. The impediment in the Big Data concept is to obtain support as fast it can be obtained, in real time if possible and the solution used needs to be very malleable because of information technology evolution. Because of this evolution and because of the ease in managing data with Elastic Stack we chose to use Elastic Stack to manage Big Data. The process of making sense of Big Data is based on three big steps: collect, process and use. In order to do so and to make sense of all of this, this paper proposes to use an Elastic Stack solution, also known as ELK (Elasticsearch, Logstash and Kibana), to easily and rapidly manage the Big Data problem. The purpose of making sense of Big Data is succeeding to extract valuable information to stand as decision support. In order to achieve that purpose and obtain valuable information from Big Data all components are used to process data and analyze the result to offer support for decision making.",
"title": ""
},
{
"docid": "558e93a861881914cc3398b447027df8",
"text": "W hile the mathematical side of Dutch graphic artist M. C. Escher (1898– 1972) is often acknowledged, few of his admirers are aware of the mathematical depth of his work. Probably not since the Renaissance has an artist engaged in mathematics to the extent that Escher did, with the sole purpose of understanding mathematical ideas in order to employ them in his art. Escher consulted mathematical publications and interacted with mathematicians. He used mathematics (especially geometry) in creating many of his drawings and prints. Several of his prints celebrate mathematical forms. Many prints provide visual metaphors for abstract mathematical concepts; in particular, Escher was obsessed with the depiction of infinity. His work has sparked investigations by scientists and mathematicians. But most surprising of all, for several years Escher carried out his own mathematical research, some of which anticipated later discoveries by mathematicians. And yet with all this, Escher steadfastly denied any ability to understand or do mathematics. His son George explains:",
"title": ""
},
{
"docid": "a01a1bb4c5f6fc027384aa40e495eced",
"text": "Sentiment classification of grammatical constituents can be explained in a quasicompositional way. The classification of a complex constituent is derived via the classification of its component constituents and operations on these that resemble the usual methods of compositional semantic analysis. This claim is illustrated with a description of sentiment propagation, polarity reversal, and polarity conflict resolution within various linguistic constituent types at various grammatical levels. We propose a theoretical composition model, evaluate a lexical dependency parsing post-process implementation, and estimate its impact on general NLP pipelines.",
"title": ""
},
{
"docid": "c7a32821699ebafadb4c59e99fb3aa9e",
"text": "According to the trend towards high-resolution CMOS image sensors, pixel sizes are continuously shrinking, towards and below 1.0μm, and sizes are now reaching a technological limit to meet required SNR performance [1-2]. SNR at low-light conditions, which is a key performance metric, is determined by the sensitivity and crosstalk in pixels. To improve sensitivity, pixel technology has migrated from frontside illumination (FSI) to backside illumiation (BSI) as pixel size shrinks down. In BSI technology, it is very difficult to further increase the sensitivity in a pixel of near-1.0μm size because there are no structural obstacles for incident light from micro-lens to photodiode. Therefore the only way to improve low-light SNR is to reduce crosstalk, which makes the non-diagonal elements of the color-correction matrix (CCM) close to zero and thus reduces color noise [3]. The best way to improve crosstalk is to introduce a complete physical isolation between neighboring pixels, e.g., using deep-trench isolation (DTI). So far, a few attempts using DTI have been made to suppress silicon crosstalk. A backside DTI in as small as 1.12μm-pixel, which is formed in the BSI process, is reported in [4], but it is just an intermediate step in the DTI-related technology because it cannot completely prevent silicon crosstalk, especially for long wavelengths of light. On the other hand, front-side DTIs for FSI pixels [5] and BSI pixels [6] are reported. In [5], however, DTI is present not only along the periphery of each pixel, but also invades into the pixel so that it is inefficient in terms of gathering incident light and providing sufficient amount of photodiode area. In [6], the pixel size is as large as 2.0μm and it is hard to scale down with this technology for near 1.0μm pitch because DTI width imposes a critical limit on the sufficient amount of photodiode area for full-well capacity. Thus, a new technological advance is necessary to realize the ideal front DTI in a small size pixel near 1.0μm.",
"title": ""
},
{
"docid": "76738e6a05b147a349d90eae1cde00e7",
"text": "In this work we introduce a new framework for performing temporal predictions in the presence of uncertainty. It is based on a simple idea of disentangling components of the future state which are predictable from those which are inherently unpredictable, and encoding the unpredictable components into a low-dimensional latent variable which is fed into a forward model. Our method uses a supervised training objective which is fast and easy to train. We evaluate it in the context of video prediction on multiple datasets and show that it is able to consistently generate diverse predictions without the need for alternating minimization over a latent space or adversarial training.",
"title": ""
},
{
"docid": "b327f4e9a9e11ade7faff4b9781d3524",
"text": "In the decade since Jeff Hawkins proposed Hierarchical Temporal Memory (HTM) as a model of neocortical computation, the theory and the algorithms have evolved dramatically. This paper presents a detailed description of HTM’s Cortical Learning Algorithm (CLA), including for the first time a rigorous mathematical formulation of all aspects of the computations. Prediction Assisted CLA (paCLA), a refinement of the CLA, is presented, which is both closer to the neuroscience and adds significantly to the computational power. Finally, we summarise the key functions of neocortex which are expressed in paCLA implementations. An Open Source project, Comportex, is the leading implementation of this evolving theory of the brain.",
"title": ""
},
{
"docid": "f4438c21802e244d4021ef3390aecf89",
"text": "Ship detection has been playing a significant role in the field of remote sensing for a long time but it is still full of challenges. The main limitations of traditional ship detection methods usually lie in the complexity of application scenarios, the difficulty of intensive object detection and the redundancy of detection region. In order to solve such problems above, we propose a framework called Rotation Dense Feature Pyramid Networks (R-DFPN) which can effectively detect ship in different scenes including ocean and port. Specifically, we put forward the Dense Feature Pyramid Network (DFPN), which is aimed at solving the problem resulted from the narrow width of the ship. Compared with previous multi-scale detectors such as Feature Pyramid Network (FPN), DFPN builds the high-level semantic feature-maps for all scales by means of dense connections, through which enhances the feature propagation and encourages the feature reuse. Additionally, in the case of ship rotation and dense arrangement, we design a rotation anchor strategy to predict the minimum circumscribed rectangle of the object so as to reduce the redundant detection region and improve the recall. Furthermore, we also propose multi-scale ROI Align for the purpose of maintaining the completeness of semantic and spatial information. Experiments based on remote sensing images from Google Earth for ship detection show that our detection method based on RDFPN representation has a state-of-the-art performance.",
"title": ""
},
{
"docid": "bb34abb4b4fe50dbd1531e87f313c982",
"text": "This paper presents an energy efficient successive-approximation-register (SAR) analog-to-digital converter (ADC) for biomedical applications. To reduce energy consumption, a bypass window technique is used to select switching sequences to skip several conversion steps when the signal is within a predefined small window. The power consumptions of the capacitive digital-to-analog converter (DAC), latch comparator, and digital control circuit of the proposed ADC are lower than those of a conventional SAR ADC. The proposed bypass window tolerates the DAC settling error and comparator voltage offset in the first four phases and suppresses the peak DNL and INL values. A proof-of-concept prototype was fabricated in 0.18-μm 1P6M CMOS technology. At a 0.6-V supply voltage and a 200-kS/s sampling rate, the ADC achieves a signal-to-noise and distortion ratio of 57.97 dB and consumes 1.04 μW, resulting in a figure of merit of 8.03 fJ/conversion-step. The ADC core occupies an active area of only 0.082 mm2.",
"title": ""
},
{
"docid": "d2fcfcf3b913f08193c2e13f17375d78",
"text": "The fate, effects, and potential environmental risks of ethylene glycol (EG) in the environment were examined. EG undergoes rapid biodegradation in aerobic and anaerobic environments (approximately 100% removal of EG within 24 h to 28 days). In air, EG reacts with photo-chemically produced hydroxyl radicals with a resulting atmospheric half-life of 2 days. Acute toxicity values (LC(50)s and EC(50)s) were generally >10,000 mg/l for fish and aquatic invertebrates. The data collectively show that EG is not persistent in air, surface water, soil, or groundwater, is practically non-toxic to aquatic organisms, and does not bioaccumulate in aquatic organisms. Potential long-term, quasi-steady state regional concentrations of EG estimated with a multi-media model for air, water, soil, and sediment were all less than predicted no effect concentrations (PNECs).",
"title": ""
},
{
"docid": "b622b927d718d8645858ecfc1809ed4d",
"text": "This paper presents our contribution to the SemEval 2016 task 5: Aspect-Based Sentiment Analysis. We have addressed Subtask 1 for the restaurant domain, in English and French, which implies opinion target expression detection, aspect category and polarity classification. We describe the different components of the system, based on composite models combining sophisticated linguistic features with Machine Learning algorithms, and report the results obtained for both languages.",
"title": ""
},
{
"docid": "9b71c5bd7314e793757776c6e54f03bb",
"text": "This paper evaluates the application of Bronfenbrenner’s bioecological theory as it is represented in empirical work on families and their relationships. We describe the ‘‘mature’’ form of bioecological theory of the mid-1990s and beyond, with its focus on proximal processes at the center of the Process-Person-Context-Time model. We then examine 25 papers published since 2001, all explicitly described as being based on Bronfenbrenner’s theory, and show that all but 4 rely on outmoded versions of the theory, resulting in conceptual confusion and inadequate testing of the theory.",
"title": ""
},
{
"docid": "c7363faf0df0b34cf2a021f66e6ce1e1",
"text": "The participation of African Americans in clinical and public health research is essential. However, for a multitude of reasons, participation is low in many research studies. This article reviews the literature that substantiates barriers to participation and the legacy of past abuses of human subjects through research. The article then reports the results of seven focus groups with 60 African Americans in Los Angeles, Chicago, Washington, DC, and Atlanta during the winter of 1997. In order to improve recruitment and retention in research, the focus group study examined knowledge of and attitudes toward medical research, knowledge of the Tuskegee Syphilis Study, and reactions to the Home Box Office production, Miss Evers' Boys, a fictionalized version of the Tuskegee Study, that premiered in February, 1997. The study found that accurate knowledge about research was limited; lack of understanding and trust of informed consent procedures was problematic; and distrust of researchers posed a substantial barrier to recruitment. Additionally, the study found that, in general, participants believed that research was important, but they clearly distinguished between types of research they would be willing to consider participating in and their motivations for doing so.",
"title": ""
},
{
"docid": "36356a91bc84888cb2dd6180983fdfc5",
"text": "We recently showed that Long Short-Term Memory (LSTM) recurrent neural networks (RNNs) outperform state-of-the-art deep neural networks (DNNs) for large scale acoustic modeling where the models were trained with the cross-entropy (CE) criterion. It has also been shown that sequence discriminative training of DNNs initially trained with the CE criterion gives significant improvements. In this paper, we investigate sequence discriminative training of LSTM RNNs in a large scale acoustic modeling task. We train the models in a distributed manner using asynchronous stochastic gradient descent optimization technique. We compare two sequence discriminative criteria – maximum mutual information and state-level minimum Bayes risk, and we investigate a number of variations of the basic training strategy to better understand issues raised by both the sequential model, and the objective function. We obtain significant gains over the CE trained LSTM RNN model using sequence discriminative training techniques.",
"title": ""
},
{
"docid": "ac2d4f4e6c73c5ab1734bfeae3a7c30a",
"text": "While neural, encoder-decoder models have had significant empirical success in text generation, there remain several unaddressed problems with this style of generation. Encoderdecoder models are largely (a) uninterpretable, and (b) difficult to control in terms of their phrasing or content. This work proposes a neural generation system using a hidden semimarkov model (HSMM) decoder, which learns latent, discrete templates jointly with learning to generate. We show that this model learns useful templates, and that these templates make generation both more interpretable and controllable. Furthermore, we show that this approach scales to real data sets and achieves strong performance nearing that of encoderdecoder text generation models.",
"title": ""
},
{
"docid": "2e6b034cbb73d91b70e3574a06140621",
"text": "ETHNOPHARMACOLOGICAL RELEVANCE\nBitter melon (Momordica charantia L.) has been widely used as an traditional medicine treatment for diabetic patients in Asia. In vitro and animal studies suggested its hypoglycemic activity, but limited human studies are available to support its use.\n\n\nAIM OF STUDY\nThis study was conducted to assess the efficacy and safety of three doses of bitter melon compared with metformin.\n\n\nMATERIALS AND METHODS\nThis is a 4-week, multicenter, randomized, double-blind, active-control trial. Patients were randomized into 4 groups to receive bitter melon 500 mg/day, 1,000 mg/day, and 2,000 mg/day or metformin 1,000 mg/day. All patients were followed for 4 weeks.\n\n\nRESULTS\nThere was a significant decline in fructosamine at week 4 of the metformin group (-16.8; 95% CI, -31.2, -2.4 μmol/L) and the bitter melon 2,000 mg/day group (-10.2; 95% CI, -19.1, -1.3 μmol/L). Bitter melon 500 and 1,000 mg/day did not significantly decrease fructosamine levels (-3.5; 95% CI -11.7, 4.6 and -10.3; 95% CI -22.7, 2.2 μmol/L, respectively).\n\n\nCONCLUSIONS\nBitter melon had a modest hypoglycemic effect and significantly reduced fructosamine levels from baseline among patients with type 2 diabetes who received 2,000 mg/day. However, the hypoglycemic effect of bitter melon was less than metformin 1,000 mg/day.",
"title": ""
},
{
"docid": "6c2317957daf4f51354114de62f660a1",
"text": "This paper proposes a framework for recognizing complex human activities in videos. Our method describes human activities in a hierarchical discriminative model that operates at three semantic levels. At the lower level, body poses are encoded in a representative but discriminative pose dictionary. At the intermediate level, encoded poses span a space where simple human actions are composed. At the highest level, our model captures temporal and spatial compositions of actions into complex human activities. Our human activity classifier simultaneously models which body parts are relevant to the action of interest as well as their appearance and composition using a discriminative approach. By formulating model learning in a max-margin framework, our approach achieves powerful multi-class discrimination while providing useful annotations at the intermediate semantic level. We show how our hierarchical compositional model provides natural handling of occlusions. To evaluate the effectiveness of our proposed framework, we introduce a new dataset of composed human activities. We provide empirical evidence that our method achieves state-of-the-art activity classification performance on several benchmark datasets.",
"title": ""
},
{
"docid": "df93966d3ef546cd4fd24e2174cf6a50",
"text": "Success of real-time bidding (RTB) and advertising campaigns depends on efficient and precise user identification, audience targeting and data exchange between SSP (supply side platform) and DSP (demand side platform). Proper implementation of these platforms allows utilizing campaign budget more efficiently and choosing target audience more accurately. From user perspective this would give a chance to see truly valuable and relevant advertising.",
"title": ""
},
{
"docid": "f4e7e0ea60d9697e8fea434990409c16",
"text": "Prognostics is very useful to predict the degradation trend of machinery and to provide an alarm before a fault reaches critical levels. This paper proposes an ARIMA approach to predict the future machine status with accuracy improvement by an improved forecasting strategy and an automatic prediction algorithm. Improved forecasting strategy increases the times of model building and creates datasets for modeling dynamically to avoid using the previous values predicted to forecast and generate the predictions only based on the true observations. Automatic prediction algorithm can satisfy the requirement of real-time prognostics by automates the whole process of ARIMA modeling and forecasting based on the Box-Jenkins's methodology and the improved forecasting strategy. The feasibility and effectiveness of the approach proposed is demonstrated through the prediction of the vibration characteristic in rotating machinery. The experimental results show that the approach can be applied successfully and effectively for prognostics of machine health condition.",
"title": ""
},
{
"docid": "c7bfb8dfcfc4c267b515e3c92afbbdd0",
"text": "Each month, many women experience an ovulatory cycle that regulates fertility. Although research has found that this cycle influences women's mating preferences, we proposed that it might also change women's political and religious views. Building on theory suggesting that political and religious orientation are linked to reproductive goals, we tested how fertility influenced women's politics, religiosity, and voting in the 2012 U.S. presidential election. In two studies with large and diverse samples, ovulation had drastically different effects on single women and women in committed relationships. Ovulation led single women to become more liberal, less religious, and more likely to vote for Barack Obama. In contrast, ovulation led women in committed relationships to become more conservative, more religious, and more likely to vote for Mitt Romney. In addition, ovulation-induced changes in political orientation mediated women's voting behavior. Overall, the ovulatory cycle not only influences women's politics but also appears to do so differently for single women than for women in relationships.",
"title": ""
},
{
"docid": "ab71df85da9c1798a88b2bb3572bf24f",
"text": "In order to develop an efficient and reliable pulsed power supply for excimer dielectric barrier discharge (DBD) ultraviolet (UV) sources, a pulse generator using Marx topology is adopted. MOSFETs are used as switches. The 12-stage pulse generator operates with a voltage amplitude in the range of 0-5.5 kV. The repetition rate and pulsewidth can be adjusted from 0.1 to 50 kHz and 2 to 20 μs, respectively. It is used to excite KrCl* excilamp, a typical DBD UV source. In order to evaluate the performance of the pulse generator, a sinusoidal voltage power supply dedicated for DBD lamp is also used to excite the KrCl* excilamp. It shows that the lamp excited by the pulse generator has better performance with regard to radiant power and system efficiency. The influence of voltage amplitude, repetition rate, pulsewidth, and rise and fall times on radiant power and system efficiency is investigated using the pulse generator. An inductor is inserted between the pulse generator and the KrCl* excilamp to reduce electromagnetic interference and enhance system reliability.",
"title": ""
}
] |
scidocsrr
|
4a54c1c15327ec7b1ac5171f75974f55
|
A Game Theoretical Model for Adversarial Learning
|
[
{
"docid": "3c1c89aeeae6bde84e338c15c44b20ce",
"text": "Using statistical machine learning for making security decisions introduces new vulnerabilities in large scale systems. This paper shows how an adversary can exploit statistical machine learning, as used in the SpamBayes spam filter, to render it useless—even if the adversary’s access is limited to only 1% of the training messages. We further demonstrate a new class of focused attacks that successfully prevent victims from receiving specific email messages. Finally, we introduce two new types of defenses against these attacks.",
"title": ""
},
{
"docid": "67e85e8b59ec7dc8b0019afa8270e861",
"text": "Machine learning’s ability to rapidly evolve to changing and complex situations has helped it become a fundamental tool for computer security. That adaptability is also a vulnerability: attackers can exploit machine learning systems. We present a taxonomy identifying and analyzing attacks against machine learning systems. We show how these classes influence the costs for the attacker and defender, and we give a formal structure defining their interaction. We use our framework to survey and analyze the literature of attacks against machine learning systems. We also illustrate our taxonomy by showing how it can guide attacks against SpamBayes, a popular statistical spam filter. Finally, we discuss how our taxonomy suggests new lines of defenses.",
"title": ""
}
] |
[
{
"docid": "8994337878d2ac35464cb4af5e32fccc",
"text": "We describe an algorithm for approximate inference in graphical models based on Hölder’s inequality that provides upper and lower bounds on common summation problems such as computing the partition function or probability of evidence in a graphical model. Our algorithm unifies and extends several existing approaches, including variable elimination techniques such as minibucket elimination and variational methods such as tree reweighted belief propagation and conditional entropy decomposition. We show that our method inherits benefits from each approach to provide significantly better bounds on sum-product tasks.",
"title": ""
},
{
"docid": "2615f2f66adeaf1718d7afa5be3b32b1",
"text": "In this paper, an advanced design of an Autonomous Underwater Vehicle (AUV) is presented. The design is driven only by four water pumps. The different power combinations of the four motors provides the force and moment for propulsion and maneuvering. No control surfaces are needed in this design, which make the manufacturing cost of such a vehicle minimal and more reliable. Based on the propulsion method of the vehicle, a nonlinear AUV dynamic model is studied. This nonlinear model is linearized at the operation point. A control strategy of the AUV is proposed including attitude control and auto-pilot design. Simulation results for the attitude control loop are presented to validate this approach.",
"title": ""
},
{
"docid": "513750b6909ae13f2ef54a361e476990",
"text": "OBJECTIVES\nFactors that influence the likelihood of readmission for chronic obstructive pulmonary disease (COPD) patients and the impact of posthospital care coordination remain uncertain. LACE index (L = length of stay, A = Acuity of admission; C = Charlson comorbidity index; E = No. of emergency department (ED) visits in last 6 months) is a validated tool for predicting 30-days readmissions for general medicine patients. We aimed to identify variables predictive of COPD readmissions including LACE index and determine the impact of a novel care management process on 30-day all-cause readmission rate.\n\n\nMETHODS\nIn a case-control design, potential readmission predictors including LACE index were analyzed using multivariable logistic regression for 461 COPD patients between January-October 2013. Patients with a high LACE index at discharge began receiving care coordination in July 2013. We tested for association between readmission and receipt of care coordination between July-October 2013. Care coordination consists of a telephone call from the care manager who: 1) reviews discharge instructions and medication reconciliation; 2) emphasizes importance of medication adherence; 3) makes a follow-up appointment with primary care physician within 1-2 weeks and; 4) makes an emergency back-up plan.\n\n\nRESULTS\nCOPD readmission rate was 16.5%. An adjusted LACE index of ≥ 13 was not associated with readmission (p = 0.186). Significant predictors included female gender (odds ratio [OR] 0.51, 95% confidence interval [CI] 0.29-0.91, p = 0.021); discharge to skilled nursing facility (OR 3.03, 95% CI 1.36-6.75, p = 0.007); 4-6 comorbid illnesses (OR 9.21, 95% CI 1.17-76.62, p = 0.035) and ≥ 4 ED visits in previous 6 months (OR 6.40, 95% CI 1.25-32.87, p = 0.026). Out of 119 patients discharged between July-October 2013, 41% received the care coordination. The readmission rate in the intervention group was 14.3% compared to 18.6% in controls (p = 0.62).\n\n\nCONCLUSIONS\nFactors influencing COPD readmissions are complex and poorly understood. LACE index did not predict 30-days all-cause COPD readmissions. Posthospital care coordination for transition of care from hospital to the community showed a 4.3% reduction in the 30-days all-cause readmission rate which did not reach statistical significance (p = 0.62).",
"title": ""
},
{
"docid": "85df53c5fc62e8e66e6b0ba6409116e2",
"text": "No aspect of adolescent development ha& received more attention from the public and from researchers than parent-child relation ships. Much of the research indicates that despite altered patterns of interaction, relation ships with parents remain important social and emotional resources well beyond the child hood years (for recent reviews, see Collins & Steinberg, 2006; Smetana, Campione-Barr, & Metzger, 2006). Yet it is a challenge to rec oncile this conclusion with the widespread per ception that parent-child relationships decline in quality and influence over the course of the adolescent years. The aim of this chapter is to specify the characteristics and processes of parent-child relationships that sustain the cen trality of the family amid the extensive changes of adolescence. We will argue that it is the con tent and the quality of these relationships, rather than the actions of either parent or adolescent alone, that determine the nature and extent of family influences on adolescent development. We will also argue that divergence between academic prescriptions and public perceptions about parent-adolescent relationships can be traced to the relative emphasis that each places on potential individual differences. The chapter reflects three premises that have emerged from the sizable literature on parent-child relationships during adolescence. First, relationships with parents undergo trans formations across the adolescent years that set the stage for less hierarchical interactions dur ing adulthood. Second, family relationships have far-reaching implications for concurrent and long-term relationships with friends. romantic partners. teacher~. and other adults, as well as for individual mental health, psy chosocial adjustment school performance. and eventual occupational choice and suc cess. Third, contextual and cultural variations significantly shape family relationships and experiences that, in turn, affect the course and outcomes of development both during and beyond adolescence. The chapter is divided into four main sec tions. The first section outlines theoretical views of parent-adolescent relationships and their developmental significance. The second section focuses on the behavior of parents and children and on interpersonal processes between them. with particular attention given to the distinctive characteristics of parent child relationships and how these relationships change during adolescence. The third sec tion considers whether and how parent-child relationships and their transformations are significant for adolescent development. The fourth section focuses on variability in parent child relationships during adolescence as' a function of structural. economic. and demo graphic distinctions among families.",
"title": ""
},
{
"docid": "5856cc15a65e153c9d719bd1bbc8c59e",
"text": "Most past studies on neural news headline generation trained the encoder-decoder model using the first sentence of a document aligned with a headline. However, it is found that the first sentence might not provide sufficient information. We propose using a topic sentence as the input instead of the first sentence for neural news headline generation task. The topic sentence is considered as the most newsworthy sentence and has been studied in the past. Experimental results show that the model trained on the topic sentence has a better generalizability than the model trained using only the first sentence. Training the model using both the first and topic sentences increases the performance even further compared to only training using the topic sentence in certain cases. We conclude that using a topic sentence, while keeping input length as short as possible at the same time, is a preferred strategy for providing more informative information to the neural network compared to using just the first sentence.",
"title": ""
},
{
"docid": "990d15bd9b79f6ec67eb04394d5791d7",
"text": "Spirituality is currently widely studied in the field of Psychology; and Filipinos are known for having a deep sense of spirituality. In terms of measuring spirituality however, researchers argue that measures or scales about it should reflect greater sensitivity to cultural characteristics and issues (Hill & Pargament, 2003). The study aimed to develop a measure of Filipino spirituality. Specifically, it intended to identify salient dimensions of spirituality among Filipinos. The study had two phases in the development of the scale, namely: focus group discussion (FGD) on the Filipino conceptions of spirituality as a basis for generating items; and test development, which included item construction based on the FGD and the literature, pilot testing, establishing reliability and validity of the scale. Qualitative results showed that spirituality has 3 main themes: connectedness with the sacred, sense of meaning and purpose, and expressions of spirituality. In the test development, the Filipino spirituality scale yielded two factors. The first factor—having a relationship or connectedness with a supreme being with a 53.13 % total variance; while the other factor of good relationship with others had a 7.196%. The reliability of the whole measure yielded cronbach alpha of 0.978, while the factors also obtained good reliability of indicators of 0.986 and 0.778 respectively. The results of the study are discussed in the broader conceptualization of spirituality in the Philippines as well as in mainstream Psychology.",
"title": ""
},
{
"docid": "0aa0c63a4617bf829753df08c5544791",
"text": "The paper discusses the application program interface (API). Most software projects reuse components exposed through APIs. In fact, current-day software development technologies are becoming inseparable from the large APIs they provide. An API is the interface to implemented functionality that developers can access to perform various tasks. APIs support code reuse, provide high-level abstractions that facilitate programming tasks, and help unify the programming experience. A study of obstacles that professional Microsoft developers faced when learning to use APIs uncovered challenges and resulting implications for API users and designers. The article focuses on the obstacles to learning an API. Although learnability is only one dimension of usability, there's a clear relationship between the two, in that difficult-to-use APIs are likely to be difficult to learn as well. Many API usability studies focus on situations where developers are learning to use an API. The author concludes that as APIs keep growing larger, developers will need to learn a proportionally smaller fraction of the whole. In such situations, the way to foster more efficient API learning experiences is to include more sophisticated means for developers to identify the information and the resources they need-even for well-designed and documented APIs.",
"title": ""
},
{
"docid": "fffe18dbc8e671e91e9f24b64fcfe825",
"text": "During the past two decades probiotic (health promoting) micro-organisms have been increasingly included in various types of food products, especially in fermented milks. Several aspects, including safety, functional and technological characteristics, have to be taken into consideration in the selection process of probiotic micro-organisms. Safety aspects include specifications such as origin (healthy human GI-tract), non-pathogenicity and antibiotic resistance characteristics. Functional aspects include viability and persistence in the GI-tract, immunomodulation, antagonistic and antimutagenic properties. Before probiotic strains, chosen on the basis of their good safety and functional characteristics, can benefit the consumer, they must first be able to be manufactured under industrial conditions. Furthermore, they have to survive and retain their functionality during storage, and also in the foods into which they are incorporated without producing off-flavours. Factors related to the technological and sensory aspects of probiotic food production are of utmost importance since only by satisfying the demands of the consumer can the food industry succeed in promoting the consumption of functional probiotic products in the future.",
"title": ""
},
{
"docid": "cd176e795fe52784e27a1c001979709b",
"text": "[Purpose] The purpose of this study was to identify the influence of relaxation exercises for the masticator muscles on the limited ROM and pain of temporomandibular joint dysfunction (TMD). [Subjects and Methods] The subjects were 10 men and 31 women in their 20s and 30s. They were randomly divided into no treatment, active exercises and relaxation exercise for the masticator muscle groups. The exercise groups performed exercises three times or more a day over a period of four weeks, performing exercise for 10 minutes each time. Before and after the four weeks, all the subjects were measured for ROM, deviation, occlusion, and pain in the temporomandibular joint. [Results] ROM, deviation and pain showed statistically significant in improvements after the intervention in the active exercise and relaxation exercise for the masticator muscle groups. Deviation also showed a statistically significant difference between the active exercise and relaxation exercise groups. [Conclusion] The results verify that as with active exercises, relaxation exercises for the masticatory muscles are an effective treatment for ROM and pain in TMD. Particularly, masticatory muscle relaxation exercises were found to be a treatment that is also effective for deviation.",
"title": ""
},
{
"docid": "074ffb251bfa5e529fceecc284834d15",
"text": "OBJECTIVE\nEffective nutrition labels are part of a supportive environment that encourages healthier food choices. The present study examined the use, understanding and preferences regarding nutrition labels among ethnically diverse shoppers in New Zealand.\n\n\nDESIGN AND SETTING\nA survey was carried out at twenty-five supermarkets in Auckland, New Zealand, between February and April 2007. Recruitment was stratified by ethnicity. Questions assessed nutrition label use, understanding of the mandatory Nutrition Information Panel (NIP), and preference for and understanding of four nutrition label formats: multiple traffic light (MTL), simple traffic light (STL), NIP and percentage of daily intake (%DI).\n\n\nSUBJECTS\nIn total 1525 shoppers completed the survey: 401 Maori, 347 Pacific, 372 Asian and 395 New Zealand European and Other ethnicities (ten did not state ethnicity).\n\n\nRESULTS\nReported use of nutrition labels (always, regularly, sometimes) ranged from 66% to 87% by ethnicity. There was little difference in ability to obtain information from the NIP according to ethnicity or income. However, there were marked ethnic differences in ability to use the NIP to determine if a food was healthy, with lesser differences by income. Of the four label formats tested, STL and MTL labels were best understood across all ethnic and income groups, and MTL labels were most frequently preferred.\n\n\nCONCLUSIONS\nThere are clear ethnic and income disparities in ability to use the current mandatory food labels in New Zealand (NIP) to determine if foods are healthy. Conversely, MTL and STL label formats demonstrated high levels of understanding and acceptance across ethnic and income groups.",
"title": ""
},
{
"docid": "e2d0a4d2c2c38722d9e9493cf506fc1c",
"text": "This paper describes two Global Positioning System (GPS) based attitude determination algorithms which contain steps of integer ambiguity resolution and attitude computation. The first algorithm extends the ambiguity function method to account for the unique requirement of attitude determination. The second algorithm explores the artificial neural network approach to find the attitude. A test platform is set up for verifying these algorithms.",
"title": ""
},
{
"docid": "11883ce54b631c7d74fa4c0c03b73874",
"text": "In real-world social networks, communities tend to be overlapped with each other because a vertex can belong to multiple communities. To identify these overlapping communities, a number of overlapping community detection methods have been proposed over the recent years. However, there have been very few studies on the characteristics and the implications of the community overlap. In this paper, we investigate the properties of the nodes and the edges placed within the overlapped regions between the communities using the ground-truth communities as well as algorithmic communities derived from the state-of-the-art overlapping community detection methods. We find that the overlapped nodes and the overlapped edges play different roles from the ones that are not in the overlapped regions. Using real-world data, we empirically show that the highly overlapped nodes are involved in structure holes of a network. Also, we show that the overlapped nodes and edges play an important role in forming new links in evolving networks and diffusing information through a network.",
"title": ""
},
{
"docid": "595b020768622866ab0941031d5590dd",
"text": "The wafer procedure is an effective treatment for ulnar impaction syndrome, which decompresses the ulnocarpal junction through a limited open or arthroscopic approach. In comparison with other common decompressive procedures, the wafer procedure does not require bone healing or internal fixation and also provides excellent exposure of the proximal surface of the triangular fibrocartilage complex. Results of the wafer procedure have been good and few complications have been reported.",
"title": ""
},
{
"docid": "df6c7f13814178d7b34703757899d6b1",
"text": "Regression testing of natural language systems is problematic for two main reasons: component input and output is complex, and system behaviour is context-dependent. We have developed a generic approach which solves both of these issues. We describe our regression tool, CONTEST, which supports context-dependent testing of dialogue system components, and discuss the regression test sets we developed, designed to effectively isolate components from changes and problems earlier in the pipeline. We believe that the same approach can be used in regression testing for other dialogue systems, as well as in testing any complex NLP system containing multiple components.",
"title": ""
},
{
"docid": "42029849d1e390fabf183bf10217a609",
"text": "Robustness and discrimination are two of the most important objectives in image hashing. We incorporate ring partition and invariant vector distance to image hashing algorithm for enhancing rotation robustness and discriminative capability. As ring partition is unrelated to image rotation, the statistical features that are extracted from image rings in perceptually uniform color space, i.e., CIE L*a*b* color space, are rotation invariant and stable. In particular, the Euclidean distance between vectors of these perceptual features is invariant to commonly used digital operations to images (e.g., JPEG compression, gamma correction, and brightness/contrast adjustment), which helps in making image hash compact and discriminative. We conduct experiments to evaluate the efficiency with 250 color images, and demonstrate that the proposed hashing algorithm is robust at commonly used digital operations to images. In addition, with the receiver operating characteristics curve, we illustrate that our hashing is much better than the existing popular hashing algorithms at robustness and discrimination.",
"title": ""
},
{
"docid": "4c82a4e51633b87f2f6b2619ca238686",
"text": "Allocentric space is mapped by a widespread brain circuit of functionally specialized cell types located in interconnected subregions of the hippocampal-parahippocampal cortices. Little is known about the neural architectures required to express this variety of firing patterns. In rats, we found that one of the cell types, the grid cell, was abundant not only in medial entorhinal cortex (MEC), where it was first reported, but also in pre- and parasubiculum. The proportion of grid cells in pre- and parasubiculum was comparable to deep layers of MEC. The symmetry of the grid pattern and its relationship to the theta rhythm were weaker, especially in presubiculum. Pre- and parasubicular grid cells intermingled with head-direction cells and border cells, as in deep MEC layers. The characterization of a common pool of space-responsive cells in architecturally diverse subdivisions of parahippocampal cortex constrains the range of mechanisms that might give rise to their unique functional discharge phenotypes.",
"title": ""
},
{
"docid": "ca62a58ac39d0c2daaa573dcb91cd2e0",
"text": "Blast-related head injuries are one of the most prevalent injuries among military personnel deployed in service of Operation Iraqi Freedom. Although several studies have evaluated symptoms after blast injury in military personnel, few studies compared them to nonblast injuries or measured symptoms within the acute stage after traumatic brain injury (TBI). Knowledge of acute symptoms will help deployed clinicians make important decisions regarding recommendations for treatment and return to duty. Furthermore, differences more apparent during the acute stage might suggest important predictors of the long-term trajectory of recovery. This study evaluated concussive, psychological, and cognitive symptoms in military personnel and civilian contractors (N = 82) diagnosed with mild TBI (mTBI) at a combat support hospital in Iraq. Participants completed a clinical interview, the Automated Neuropsychological Assessment Metric (ANAM), PTSD Checklist-Military Version (PCL-M), Behavioral Health Measure (BHM), and Insomnia Severity Index (ISI) within 72 hr of injury. Results suggest that there are few differences in concussive symptoms, psychological symptoms, and neurocognitive performance between blast and nonblast mTBIs, although clinically significant impairment in cognitive reaction time for both blast and nonblast groups is observed. Reductions in ANAM accuracy were related to duration of loss of consciousness, not injury mechanism.",
"title": ""
},
{
"docid": "d4ec89ae64a09df75e7d4fb0d9c8fdab",
"text": "Deep Convolution Neural Network (CNN) has achieved outstanding performance in image recognition over large scale dataset. However, pursuit of higher inference accuracy leads to CNN architecture with deeper layers and denser connections, which inevitably makes its hardware implementation demand more and more memory and computational resources. It can be interpreted as ‘CNN power and memory wall’. Recent research efforts have significantly reduced both model size and computational complexity by using low bit-width weights, activations and gradients, while keeping reasonably good accuracy. In this work, we present different emerging nonvolatile Magnetic Random Access Memory (MRAM) designs that could be leveraged to implement ‘bit-wise in-memory convolution engine’, which could simultaneously store network parameters and compute low bit-width convolution. Such new computing model leverages the ‘in-memory computing’ concept to accelerate CNN inference and reduce convolution energy consumption due to intrinsic logic-in-memory design and reduction of data communication.",
"title": ""
},
{
"docid": "b408788cd974438f32c1858cda9ff910",
"text": "Speaking as someone who has personally felt the influence of the “Chomskian Turn”, I believe that one of Chomsky’s most significant contributions to Psychology, or as it is now called, Cognitive Science was to bring back scientific realism. This may strike you as a very odd claim, for one does not usually think of science as needing to be talked into scientific realism. Science is, after all, the study of reality by the most precise instruments of measurement and analysis that humans have developed.",
"title": ""
},
{
"docid": "b4c25df52a0a5f6ab23743d3ca9a3af2",
"text": "Measuring similarity between texts is an important task for several applications. Available approaches to measure document similarity are inadequate for document pairs that have non-comparable lengths, such as a long document and its summary. This is because of the lexical, contextual and the abstraction gaps between a long document of rich details and its concise summary of abstract information. In this paper, we present a document matching approach to bridge this gap, by comparing the texts in a common space of hidden topics. We evaluate the matching algorithm on two matching tasks and find that it consistently and widely outperforms strong baselines. We also highlight the benefits of incorporating domain knowledge to text matching.",
"title": ""
}
] |
scidocsrr
|
41e986533505e706430352f2ed053401
|
Multiple Kernel Learning for Hyperspectral Image Classification: A Review
|
[
{
"docid": "cc8adbaf01e3ab61546fd875724ac270",
"text": "This paper presents the image information mining based on a communication channel concept. The feature extraction algorithms encode the image, while an analysis of topic discovery will decode and send its content to the user in the shape of a semantic map. We consider this approach for a real meaning based semantic annotation of very high resolution remote sensing images. The scene content is described using a multi-level hierarchical information representation. Feature hierarchies are discovered considering that higher levels are formed by combining features from lower level. Such a level to level mapping defines our methodology as a deep learning process. The whole analysis can be divided in two major learning steps. The first one regards the Bayesian inference to extract objects and assign basic semantic to the image. The second step models the spatial interactions between the scene objects based on Latent Dirichlet Allocation, performing a high level semantic annotation. We used a WorldView2 image to exemplify the processing results.",
"title": ""
},
{
"docid": "2af5e18cfb6dadd4d5145a1fa63f0536",
"text": "Hyperspectral remote sensing technology has advanced significantly in the past two decades. Current sensors onboard airborne and spaceborne platforms cover large areas of the Earth surface with unprecedented spectral, spatial, and temporal resolutions. These characteristics enable a myriad of applications requiring fine identification of materials or estimation of physical parameters. Very often, these applications rely on sophisticated and complex data analysis methods. The sources of difficulties are, namely, the high dimensionality and size of the hyperspectral data, the spectral mixing (linear and nonlinear), and the degradation mechanisms associated to the measurement process such as noise and atmospheric effects. This paper presents a tutorial/overview cross section of some relevant hyperspectral data analysis methods and algorithms, organized in six main topics: data fusion, unmixing, classification, target detection, physical parameter retrieval, and fast computing. In all topics, we describe the state-of-the-art, provide illustrative examples, and point to future challenges and research directions.",
"title": ""
},
{
"docid": "63af822cd877b95be976f990b048f90c",
"text": "We propose a method for generating classifier ensembles based on feature extraction. To create the training data for a base classifier, the feature set is randomly split into K subsets (K is a parameter of the algorithm) and principal component analysis (PCA) is applied to each subset. All principal components are retained in order to preserve the variability information in the data. Thus, K axis rotations take place to form the new features for a base classifier. The idea of the rotation approach is to encourage simultaneously individual accuracy and diversity within the ensemble. Diversity is promoted through the feature extraction for each base classifier. Decision trees were chosen here because they are sensitive to rotation of the feature axes, hence the name \"forest\". Accuracy is sought by keeping all principal components and also using the whole data set to train each base classifier. Using WEKA, we examined the rotation forest ensemble on a random selection of 33 benchmark data sets from the UCI repository and compared it with bagging, AdaBoost, and random forest. The results were favorable to rotation forest and prompted an investigation into diversity-accuracy landscape of the ensemble models. Diversity-error diagrams revealed that rotation forest ensembles construct individual classifiers which are more accurate than these in AdaBoost and random forest, and more diverse than these in bagging, sometimes more accurate as well",
"title": ""
}
] |
[
{
"docid": "19364f2394650f8c3d899a5ceb2fc493",
"text": "In this paper, we study cost-sensitive semi-supervised learning where many of the training examples are unlabeled and different misclassification errors are associated with unequal costs. This scenario occurs in many real-world applications. For example, in some disease diagnosis, the cost of erroneously diagnosing a patient as healthy is much higher than that of diagnosing a healthy person as a patient. Also, the acquisition of labeled data requires medical diagnosis which is expensive, while the collection of unlabeled data such as basic health information is much cheaper. We propose the CS4VM (Cost-Sensitive Semi-Supervised Support Vector Machine) to address this problem. We show that the CS4VM, when given the label means of the unlabeled data, closely approximates the supervised cost-sensitive SVM that has access to the ground-truth labels of all the unlabeled data. This observation leads to an efficient algorithm which first estimates the label means and then trains the CS4VM with the plug-in label means by an efficient SVM solver. Experiments on a broad range of data sets show that the proposed method is capable of reducing the total cost and is computationally efficient.",
"title": ""
},
{
"docid": "5a74a585fb58ff09c05d807094523fb9",
"text": "Deep learning techniques are famous due to Its capability to cope with large-scale data these days. They have been investigated within various of applications e.g., language, graphical modeling, speech, audio, image recognition, video, natural language and signal processing areas. In addition, extensive researches applying machine-learning methods in Intrusion Detection System (IDS) have been done in both academia and industry. However, huge data and difficulties to obtain data instances are hot challenges to machine-learning-based IDS. We show some limitations of previous IDSs which uses classic machine learners and introduce feature learning including feature construction, extraction and selection to overcome the challenges. We discuss some distinguished deep learning techniques and its application for IDS purposes. Future research directions using deep learning techniques for IDS purposes are briefly summarized.",
"title": ""
},
{
"docid": "182bb07fb7dbbaf17b6c7a084f1c4fb2",
"text": "Network Functions Virtualization (NFV) is an upcoming paradigm where network functionality is virtualized and split up into multiple building blocks that can be chained together to provide the required functionality. This approach increases network flexibility and scalability as these building blocks can be allocated and reallocated at runtime depending on demand. The success of this approach depends on the existence and performance of algorithms that determine where, and how these building blocks are instantiated. In this paper, we present and evaluate a formal model for resource allocation of virtualized network functions within NFV environments, a problem we refer to as Virtual Network Function Placement (VNF-P). We focus on a hybrid scenario where part of the services may be provided by dedicated physical hardware, and where part of the services are provided using virtualized service instances. We evaluate the VNF-P model using a small service provider scenario and two types of service chains, and evaluate its execution speed. We find that the algorithms finish in 16 seconds or less for a small service provider scenario, making it feasible to react quickly to changing demand.",
"title": ""
},
{
"docid": "873e9eb826c0ae454db3032fc63f7073",
"text": "The purpose of this study is to explore Thai online customers' repurchase intention towards clothing. This study integrated Delone and Mclean's e-commerce success model to predict customers' repurchase intention to purchase clothing on the Internet. The data was collected using convenience sampling method with a survey of the customers in Thailand who had experienced purchasing clothing online. The findings indicate that repurchase intention is mostly influenced by both online shopping satisfaction and online shopping trust. The relationships between Internet shopping value and online shopping satisfaction and online shopping trust are found to be significant as well. Components of website quality have differing effect on utilitarian and hedonic value. System quality and service quickness influences utilitarian value as well as the hedonic value. System accessibility and information timely positively influence utilitarian value while information variety and service receptiveness have a positive effect hedonic value.",
"title": ""
},
{
"docid": "85cd0262fec2586740fe4199cf56c766",
"text": "New information on infectious diseases in older adults has become available in the past 20 years. In this review, in-depth discussions on the general problem of geriatric infectious diseases (epidemiology, pathogenesis, age-related host defenses, clinical manifestations, diagnostic approach); diagnosis and management of bacterial pneumonia, urinary tract infection, and Clostridium difficile infection; and the unique challenges of diagnosing and managing infections in a long-term care setting are presented.",
"title": ""
},
{
"docid": "061ac4487fba7837f44293a2d20b8dd9",
"text": "This paper describes a model of cooperative behavior and describes how such a model can be applied in a natural language understanding system. We assume that agents attempt to recognize the plans of other agents and, then, use this plan when deciding what response to make. In particular, we show that, given a setting in which purposeful dialogues occur, this model can account for responses that provide more information that explicitly requested and for appropriate responses to both short sentence fragments and indirect speech acts.",
"title": ""
},
{
"docid": "cdd27bbcbab81a243dda6bb855fb8f72",
"text": "The Internet of Things (IoT), which can be regarded as an enhanced version of machine-to-machine communication technology, was proposed to realize intelligent thing-to-thing communications by utilizing the Internet connectivity. In the IoT, \"things\" are generally heterogeneous and resource constrained. In addition, such things are connected to each other over low-power and lossy networks. In this paper, we propose an inter-device authentication and session-key distribution system for devices with only encryption modules. In the proposed system, unlike existing sensor-network environments where the key distribution center distributes the key, each sensor node is involved with the generation of session keys. In addition, in the proposed scheme, the performance is improved so that the authenticated device can calculate the session key in advance. The proposed mutual authentication and session-key distribution system can withstand replay attacks, man-in-the-middle attacks, and wiretapped secret-key attacks.",
"title": ""
},
{
"docid": "64d53035eb919d5e27daef6b666b7298",
"text": "The 3L-NPC (Neutral-Point-Clamped) is the most popular multilevel converter used in high-power medium-voltage applications. An important disadvantage of this structure is the unequal distribution of losses among the switches. The performances of 3L-NPC structure were improved by developing the 3L-ANPC (Active-NPC) converter which has more degrees of freedom. In this paper the switching states and the loss distribution problem are studied for different PWM strategies in a STATCOM application. The PSIM simulation results are shown in order to validate the PWM strategies studied for 3L-ANPC converter.",
"title": ""
},
{
"docid": "644d262f1d2f64805392c15506764558",
"text": "In this paper, we present a comprehensive survey of Markov Random Fields (MRFs) in computer vision and image understanding, with respect to the modeling, the inference and the learning. While MRFs were introduced into the computer vision eld about two decades ago, they started to become a ubiquitous tool for solving visual perception problems around the turn of the millennium following the emergence of efficient inference methods. During the past decade, a variety of MRF models as well as inference and learning methods have been developed for addressing numerous low, mid and high-level vision problems. While most of the literature concerns pairwise MRFs, in recent years we have also witnessed signi cant progress in higher-order MRFs, which substantially enhances the expressiveness of graph-based models and expands the domain of solvable problems. This survey provides a compact and informative summary of the major literature in this research topic.",
"title": ""
},
{
"docid": "9fb5db3cdcffb968b54c7d23d8a690a2",
"text": "BACKGROUND\nPhysical activity is associated with many physical and mental health benefits, however many children do not meet the national physical activity guidelines. While schools provide an ideal setting to promote children's physical activity, adding physical activity to the school day can be difficult given time constraints often imposed by competing key learning areas. Classroom-based physical activity may provide an opportunity to increase school-based physical activity while concurrently improving academic-related outcomes. The primary aim of this systematic review and meta-analysis was to evaluate the impact of classroom-based physical activity interventions on academic-related outcomes. A secondary aim was to evaluate the impact of these lessons on physical activity levels over the study duration.\n\n\nMETHODS\nA systematic search of electronic databases (PubMed, ERIC, SPORTDiscus, PsycINFO) was performed in January 2016 and updated in January 2017. Studies that investigated the association between classroom-based physical activity interventions and academic-related outcomes in primary (elementary) school-aged children were included. Meta-analyses were conducted in Review Manager, with effect sizes calculated separately for each outcome assessed.\n\n\nRESULTS\nThirty-nine articles met the inclusion criteria for the review, and 16 provided sufficient data and appropriate design for inclusion in the meta-analyses. Studies investigated a range of academic-related outcomes including classroom behaviour (e.g. on-task behaviour), cognitive functions (e.g. executive function), and academic achievement (e.g. standardised test scores). Results of the meta-analyses showed classroom-based physical activity had a positive effect on improving on-task and reducing off-task classroom behaviour (standardised mean difference = 0.60 (95% CI: 0.20,1.00)), and led to improvements in academic achievement when a progress monitoring tool was used (standardised mean difference = 1.03 (95% CI: 0.22,1.84)). However, no effect was found for cognitive functions (standardised mean difference = 0.33 (95% CI: -0.11,0.77)) or physical activity (standardised mean difference = 0.40 (95% CI: -1.15,0.95)).\n\n\nCONCLUSIONS\nResults suggest classroom-based physical activity may have a positive impact on academic-related outcomes. However, it is not possible to draw definitive conclusions due to the level of heterogeneity in intervention components and academic-related outcomes assessed. Future studies should consider the intervention period when selecting academic-related outcome measures, and use an objective measure of physical activity to determine intervention fidelity and effects on overall physical activity levels.",
"title": ""
},
{
"docid": "22d878a735d649f5932be6cd0b3979c9",
"text": "This study investigates the potential to introduce basic programming concepts to middle school children within the context of a classroom writing-workshop. In this paper we describe how students drafted, revised, and published their own digital stories using the introductory programming language Scratch and in the process learned fundamental CS concepts as well as the wider connection between programming and writing as interrelated processes of composition.",
"title": ""
},
{
"docid": "d8ead5d749b9af092adf626245e8178a",
"text": "This paper describes a LIN (Local Interconnect Network) Transmitter designed in a BCD HV technology. The key design target is to comply with EMI (electromagnetic interference) specification limits. The two main aspects are low EME (electromagnetic emission) and sufficient immunity against RF disturbance. A gate driver is proposed which uses a certain current summation network for lowering the slew rate on the one hand and being reliable against radio frequency (RF) disturbances within the automotive environment on the other hand. Nowadays the low cost single wire LIN Bus is used for establishing communication between sensors, actuators and other components.",
"title": ""
},
{
"docid": "f9e018fff97ac8ee91b68948cab52047",
"text": "How do humans navigate to target objects in novel scenes? Do we use the semantic/functional priors we have built over years to efficiently search and navigate? For example, to search for mugs, we search cabinets near the coffee machine and for fruits we try the fridge. In this work, we focus on incorporating semantic priors in the task of semantic navigation. We propose to use Graph Convolutional Networks for incorporating the prior knowledge into a deep reinforcement learning framework. The agent uses the features from the knowledge graph to predict the actions. For evaluation, we use the AI2-THOR framework. Our experiments show how semantic knowledge improves performance significantly. More importantly, we show improvement in generalization to unseen scenes and/or objects. The supplementary video can be accessed at the following link: https://youtu.be/otKjuO805dE .",
"title": ""
},
{
"docid": "b2f7826fe74d5bb3be8361aeb6ae41c4",
"text": "Skid steering of 4-wheel-drive electric vehicles has good maneuverability and mobility as a result of the application of differential torque to wheels on opposite sides. For path following, the paper utilizes the techniques of sliding mode control based on extended state observer which not only has robustness against the system dynamics not modeled and uncertain parameter but also reduces the switch gain effectively, so as to obtain a predictable behavior for the instantaneous center of rotation thus preventing excessive skidding. The efficiency of the algorithm is validated on a vehicle model with 14 degree of freedom. The simulation results show that the control law is robust against to the evaluation error of parameter and to the variation of the friction force within the wheel-ground interaction, what's more, it is easy to be carried out in controller.",
"title": ""
},
{
"docid": "2c5eb3fb74c6379dfd38c1594ebe85f4",
"text": "Accurately recognizing speaker emotion and age/gender from speech can provide better user experience for many spoken dialogue systems. In this study, we propose to use deep neural networks (DNNs) to encode each utterance into a fixed-length vector by pooling the activations of the last hidden layer over time. The feature encoding process is designed to be jointly trained with the utterance-level classifier for better classification. A kernel extreme learning machine (ELM) is further trained on the encoded vectors for better utterance-level classification. Experiments on a Mandarin dataset demonstrate the effectiveness of our proposed methods on speech emotion and age/gender recognition tasks.",
"title": ""
},
{
"docid": "44941e8f5b703bcacb51b6357cba7633",
"text": "Convolutional neural networks provide visual features that perform remarkably well in many computer vision applications. However, training these networks requires significant amounts of supervision. This paper introduces a generic framework to train deep networks, end-to-end, with no supervision. We propose to fix a set of target representations, called Noise As Targets (NAT), and to constrain the deep features to align to them. This domain agnostic approach avoids the standard unsupervised learning issues of trivial solutions and collapsing of features. Thanks to a stochastic batch reassignment strategy and a separable square loss function, it scales to millions of images. The proposed approach produces representations that perform on par with state-of-the-art unsupervised methods on ImageNet and PASCAL VOC.",
"title": ""
},
{
"docid": "8a8f310d13eea0fdb5b9c3b6f0a2818b",
"text": "In recent years, the rapid development of Internet, Internet of Things, and Cloud Computing have led to the explosive growth of data in almost every industry and business area. Big data has rapidly developed into a hot topic that attracts extensive attention from academia, industry, and governments around the world. In this position paper, we first briefly introduce the concept of big data, including its definition, features, and value. We then identify from different perspectives the significance and opportunities that big data brings to us. Next, we present representative big data initiatives all over the world. We describe the grand challenges (namely, data complexity, computational complexity, and system complexity), as well as possible solutions to address these challenges. Finally, we conclude the paper by presenting several suggestions on carrying out big data projects.",
"title": ""
},
{
"docid": "d8127fc372994baee6fd8632d585a347",
"text": "Dynamic query interfaces (DQIs) form a recently developed method of database access that provides continuous realtime feedback to the user during the query formulation process. Previous work shows that DQIs are elegant and powerful interfaces to small databases. Unfortunately, when applied to large databases, previous DQI algorithms slow to a crawl. We present a new approach to DQI algorithms that works well with large databases.",
"title": ""
},
{
"docid": "b54a359025d863f6e2f5236eb823e740",
"text": "We present a method for fusing two acquisition modes, 2D photographs and 3D LiDAR scans, for depth-layer decomposition of urban facades. The two modes have complementary characteristics: point cloud scans are coherent and inherently 3D, but are often sparse, noisy, and incomplete; photographs, on the other hand, are of high resolution, easy to acquire, and dense, but view-dependent and inherently 2D, lacking critical depth information. In this paper we use photographs to enhance the acquired LiDAR data. Our key observation is that with an initial registration of the 2D and 3D datasets we can decompose the input photographs into rectified depth layers. We decompose the input photographs into rectangular planar fragments and diffuse depth information from the corresponding 3D scan onto the fragments by solving a multi-label assignment problem. Our layer decomposition enables accurate repetition detection in each planar layer, using which we propagate geometry, remove outliers and enhance the 3D scan. Finally, the algorithm produces an enhanced, layered, textured model. We evaluate our algorithm on complex multi-planar building facades, where direct autocorrelation methods for repetition detection fail. We demonstrate how 2D photographs help improve the 3D scans by exploiting data redundancy, and transferring high level structural information to (plausibly) complete large missing regions.",
"title": ""
}
] |
scidocsrr
|
0d81e785988bcffc2a8c232a22aaf74f
|
A kernel decomposition architecture for binary-weight Convolutional Neural Networks
|
[
{
"docid": "d716725f2a5d28667a0746b31669bbb7",
"text": "This work observes that a large fraction of the computations performed by Deep Neural Networks (DNNs) are intrinsically ineffectual as they involve a multiplication where one of the inputs is zero. This observation motivates Cnvlutin (CNV), a value-based approach to hardware acceleration that eliminates most of these ineffectual operations, improving performance and energy over a state-of-the-art accelerator with no accuracy loss. CNV uses hierarchical data-parallel units, allowing groups of lanes to proceed mostly independently enabling them to skip over the ineffectual computations. A co-designed data storage format encodes the computation elimination decisions taking them off the critical path while avoiding control divergence in the data parallel units. Combined, the units and the data storage format result in a data-parallel architecture that maintains wide, aligned accesses to its memory hierarchy and that keeps its data lanes busy. By loosening the ineffectual computation identification criterion, CNV enables further performance and energy efficiency improvements, and more so if a loss in accuracy is acceptable. Experimental measurements over a set of state-of-the-art DNNs for image classification show that CNV improves performance over a state-of-the-art accelerator from 1.24× to 1.55× and by 1.37× on average without any loss in accuracy by removing zero-valued operand multiplications alone. While CNV incurs an area overhead of 4.49%, it improves overall EDP (Energy Delay Product) and ED2P (Energy Delay Squared Product) on average by 1.47× and 2.01×, respectively. The average performance improvements increase to 1.52× without any loss in accuracy with a broader ineffectual identification policy. Further improvements are demonstrated with a loss in accuracy.",
"title": ""
},
{
"docid": "b7d13c090e6d61272f45b1e3090f0341",
"text": "Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and powerhungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.",
"title": ""
}
] |
[
{
"docid": "cc086da5b3eb84e5294a14b09cdfae63",
"text": "In high-performance microprocessor cores, the on-die supply voltage seen by the transistors is non-ideal and exhibits significant fluctuations. These supply fluctuations are caused by sudden changes in the current consumed by the microprocessor in response to variations in workloads. This non-ideal supply can cause performance degradation or functional failures. Therefore, a significant amount of margin (10-15%) needs to be added to the ideal voltage (if there were no AC voltage variations) to ensure that the processor always executes correctly at the committed voltage-frequency points. This excess voltage wastes power proportional to the square of the voltage increase.",
"title": ""
},
{
"docid": "019c27341b9811a7347467490cea6a72",
"text": "For intelligent robots to interact in meaningful ways with their environment, they must understand both the geometric and semantic properties of the scene surrounding them. The majority of research to date has addressed these mapping challenges separately, focusing on either geometric or semantic mapping. In this paper we address the problem of building environmental maps that include both semantically meaningful, object-level entities and point- or mesh-based geometrical representations. We simultaneously build geometric point cloud models of previously unseen instances of known object classes and create a map that contains these object models as central entities. Our system leverages sparse, feature-based RGB-D SLAM, image-based deep-learning object detection and 3D unsupervised segmentation.",
"title": ""
},
{
"docid": "6d149a530769b61a34bcd5b8d900dbcd",
"text": "Click here and insert your abstract text. The Web accessibility issue has been subject of study for a wide number of organizations all around the World. The current paper describes an accessibility evaluation that aimed to test the Portuguese enterprises websites. Has the presented results state, the evaluated websites accessibility levels are significantly bad, but the majority of the detected errors are not very complex from a technological point-of-view. With this is mind, our research team, in collaboration with a Portuguese enterprise named ANO and the support of its UTAD-ANOgov/PEPPOL research project, elaborated an improvement proposal, directed to the Web content developers, which aimed on helping these specialists to better understand and implement Web accessibility features. © 2013 The Authors. Published by Elsevier B.V. Selection and peer-review under responsibility of the Scientific Programme Committee of the 5th International Conference on Software Development and Technologies for Enhancing Accessibility and Fighting Info-exclusion (DSAI 2013).",
"title": ""
},
{
"docid": "83d330486c50fe2ae1d6960a4933f546",
"text": "In this paper, an upgraded version of vehicle tracking system is developed for inland vessels. In addition to the features available in traditional VTS (Vehicle Tracking System) for automobiles, it has the capability of remote monitoring of the vessel's motion and orientation. Furthermore, this device can detect capsize events and other accidents by motion tracking and instantly notify the authority and/or the owner with current coordinates of the vessel, which is obtained using the Global Positioning System (GPS). This can certainly boost up the rescue process and minimize losses. We have used GSM network for the communication between the device installed in the ship and the ground control. So, this can be implemented only in the inland vessels. But using iridium satellite communication instead of GSM will enable the device to be used in any sea-going ships. At last, a model of an integrated inland waterway control system (IIWCS) based on this device is discussed.",
"title": ""
},
{
"docid": "010a132f3726405ca3a3a4da0ff60086",
"text": "Blockchain, an emerging decentralized security system, has been applied in many applications, such as bitcoin, smart grid, and Internet-of-Things. However, running the mining process may cost too much energy consumption and computing resource usage on handheld devices, which restricts the use of blockchain in mobile environments. In this paper, we consider deploying edge computing service to support the mobile blockchain. We propose an auction-based edge computing resource allocation mechanism for the edge computing service provider. Since there is competition among miners, the allocative externalities are taken into account in the model. In our auction mechanism, we maximize the social welfare while guaranteeing the truthfulness, individual rationality and computational efficiency. Through extensive simulations, we evaluate the performance of our auction mechanism which shows that the proposed mechanism can efficiently solve the social welfare maximization problem for the edge computing service provider.",
"title": ""
},
{
"docid": "78697b1a87b2bada5bf169c075cca18b",
"text": "Recent trends show that Internet traffic is increasingly dominated by content, which is accompanied by the exponential growth of traffic. To cope with this phenomena, network caching is introduced to utilize the storage capacity of diverse network devices. In this paper, we first summarize four basic caching placement strategies, i.e., local caching, Device-to-Device (D2D) caching, Small cell Base Station (SBS) caching and Macrocell Base Station (MBS) caching. However, studies show that so far, much of the research has ignored the impact of user mobility. Therefore, taking the effect of the user mobility into consideration, we proposes a joint mobility-aware caching and SBS density placement scheme (MS caching). In addition, differences and relationships between caching and computation offloading are discussed. We present a design of a hybrid computation offloading and support it with experimental results, which demonstrate improved performance in terms of energy cost. Finally, we discuss the design of an incentive mechanism by considering network dynamics, differentiated user's quality of experience (QoE) and the heterogeneity of mobile terminals in terms of caching and computing capabilities.",
"title": ""
},
{
"docid": "ca7afb87dae38ee0cf079f91dbd91d43",
"text": "Diet is associated with the development of CHD. The incidence of CHD is lower in southern European countries than in northern European countries and it has been proposed that this difference may be a result of diet. The traditional Mediterranean diet emphasises a high intake of fruits, vegetables, bread, other forms of cereals, potatoes, beans, nuts and seeds. It includes olive oil as a major fat source and dairy products, fish and poultry are consumed in low to moderate amounts. Many observational studies have shown that the Mediterranean diet is associated with reduced risk of CHD, and this result has been confirmed by meta-analysis, while a single randomised controlled trial, the Lyon Diet Heart study, has shown a reduction in CHD risk in subjects following the Mediterranean diet in the secondary prevention setting. However, it is uncertain whether the benefits of the Mediterranean diet are transferable to other non-Mediterranean populations and whether the effects of the Mediterranean diet will still be feasible in light of the changes in pharmacological therapy seen in patients with CHD since the Lyon Diet Heart study was conducted. Further randomised controlled trials are required and if the risk-reducing effect is confirmed then the best methods to effectively deliver this public health message worldwide need to be considered.",
"title": ""
},
{
"docid": "33f53ba19c1198fc2342960c57dd22f8",
"text": "This paper reports on a facile and low cost method to fabricate highly stretchable potentiometric pH sensor arrays for biomedical and wearable applications. The technique uses laser carbonization of a thermoset polymer followed by transfer and embedment of carbonized nanomaterial onto an elastomeric matrix. The process combines selective laser pyrolization/carbonization with meander interconnect methodology to fabricate stretchable conductive composites with which pH sensors can be realized. The stretchable pH sensors display a sensitivity of -51 mV/pH over the clinically-relevant range of pH 4-10. The sensors remain stable for strains of up to 50 %.",
"title": ""
},
{
"docid": "6be44677f42b5a6aaaea352e11024cfa",
"text": "In this paper, we intend to discuss if and in what sense semiosis (meaning process, cf. C.S. Peirce) can be regarded as an “emergent” process in semiotic systems. It is not our problem here to answer when or how semiosis emerged in nature. As a prerequisite for the very formulation of these problems, we are rather interested in discussing the conditions which should be fulfilled for semiosis to be characterized as an emergent process. The first step in this work is to summarize a systematic analysis of the variety of emergence theories and concepts, elaborated by Achim Stephan. Along the summary of this analysis, we pose fundamental questions that have to be answered in order to ascribe a precise meaning to the term “emergence” in the context of an understanding of semiosis. After discussing a model for explaining emergence based on Salthe’s hierarchical structuralism, which considers three levels at a time in a semiotic system, we present some tentative answers to those questions.",
"title": ""
},
{
"docid": "2ec9ac2c283fa0458eb97d1e359ec358",
"text": "Multiple automakers have in development or in production automated driving systems (ADS) that offer freeway-pilot functions. This type of ADS is typically limited to restricted-access freeways only, that is, the transition from manual to automated modes takes place only after the ramp merging process is completed manually. One major challenge to extend the automation to ramp merging is that the automated vehicle needs to incorporate and optimize long-term objectives (e.g. successful and smooth merge) when near-term actions must be safely executed. Moreover, the merging process involves interactions with other vehicles whose behaviors are sometimes hard to predict but may influence the merging vehicle's optimal actions. To tackle such a complicated control problem, we propose to apply Deep Reinforcement Learning (DRL) techniques for finding an optimal driving policy by maximizing the long-term reward in an interactive environment. Specifically, we apply a Long Short-Term Memory (LSTM) architecture to model the interactive environment, from which an internal state containing historical driving information is conveyed to a Deep Q-Network (DQN). The DQN is used to approximate the Q-function, which takes the internal state as input and generates Q-values as output for action selection. With this DRL architecture, the historical impact of interactive environment on the long-term reward can be captured and taken into account for deciding the optimal control policy. The proposed architecture has the potential to be extended and applied to other autonomous driving scenarios such as driving through a complex intersection or changing lanes under varying traffic flow conditions.",
"title": ""
},
{
"docid": "e49515145975eadccc20b251d56f0140",
"text": "High mortality of nestling cockatiels (Nymphicus hollandicus) was observed in one breeding flock in Slovakia. The nestling mortality affected 50% of all breeding pairs. In general, all the nestlings in affected nests died. Death occurred suddenly in 4to 6-day-old birds, most of which had full crops. No feather disorders were diagnosed in this flock. Two dead nestlings were tested by nested PCR for the presence of avian polyomavirus (APV) and Chlamydophila psittaci and by single-round PCR for the presence of beak and feather disease virus (BFDV). After the breeding season ended, a breeding pair of cockatiels together with their young one and a fledgling budgerigar (Melopsittacus undulatus) were examined. No clinical alterations were observed in these birds. Haemorrhages in the proventriculus and irregular foci of yellow liver discoloration were found during necropsy in the young cockatiel and the fledgling budgerigar. Microscopy revealed liver necroses and acute haemolysis in the young cockatiel and confluent liver necroses and heart and kidney haemorrhages in the budgerigar. Two dead cockatiel nestlings, the young cockatiel and the fledgling budgerigar were tested positive for APV, while the cockatiel adults were negative. The presence of BFDV or Chlamydophila psittaci DNA was detected in none of the birds. The specificity of PCR was confirmed by the sequencing of PCR products amplified from the samples from the young cockatiel and the fledgling budgerigar. The sequences showed 99.6–100% homology with the previously reported sequences. To our knowledge, this is the first report of APV infection which caused a fatal disease in parent-raised cockatiel nestlings and merely subclinical infection in budgerigar nestlings.",
"title": ""
},
{
"docid": "7cef2fac422d9fc3c3ffbc130831b522",
"text": "Development of advanced driver assistance systems with vehicle hardware-in-the-loop simulations , \" (Received 00 Month 200x; In final form 00 Month 200x) This paper presents a new method for the design and validation of advanced driver assistance systems (ADASs). With vehicle hardware-in-the-loop (VEHIL) simulations the development process, and more specifically the validation phase, of intelligent vehicles is carried out safer, cheaper, and more manageable. In the VEHIL laboratory a full-scale ADAS-equipped vehicle is set up in a hardware-in-the-loop simulation environment, where a chassis dynamometer is used to emulate the road interaction and robot vehicles to represent other traffic. In this controlled environment the performance and dependability of an ADAS is tested to great accuracy and reliability. The working principle and the added value of VEHIL are demonstrated with test results of an adaptive cruise control and a forward collision warning system. Based on the 'V' diagram, the position of VEHIL in the development process of ADASs is illustrated.",
"title": ""
},
{
"docid": "1ae2b50f5b4faaf6c343d02b90f93250",
"text": "A binaural beat can be produced by presenting two tones of a differing frequency, one to each ear. Such auditory stimulation has been suggested to influence behaviour and cognition via the process of cortical entrainment. However, research so far has only shown the frequency following responses in the traditional EEG frequency ranges of delta, theta and gamma. Hence a primary aim of this research was to ascertain whether it would be possible to produce clear changes in the EEG in either the alpha or beta frequency ranges. Such changes, if possible, would have a number of important implications as well as potential applications. A secondary goal was to track any observable changes in the EEG throughout the entrainment epoch to gain some insight into the nature of the entrainment effects on any changes in an effort to identify more effective entrainment regimes. Twenty two healthy participants were recruited and randomly allocated to one of two groups, each of which was exposed to a distinct binaural beat frequency for ten 1-minute epochs. The first group listened to an alpha binaural beat of 10 Hz and the second to a beta binaural beat of 20 Hz. EEG was recorded from the left and right temporal regions during pre-exposure baselines, stimulus exposure epochs and post-exposure baselines. Analysis of changes in broad-band and narrow-band amplitudes, and frequency showed no effect of binaural beat frequency eliciting a frequency following effect in the EEG. Possible mediating factors are discussed and a number of recommendations are made regarding future studies, exploring entrainment effects from a binaural beat presentation.",
"title": ""
},
{
"docid": "6bdc9ba3cd272018795108fe5004c060",
"text": "Electrical characteristics of the fabricated 600V class CSTBT™ with a Light Punch Through (LPT) structure on an advanced thin wafer technology are presented for the first time. The electrical characteristics of LPT-CSTBT are superior to the conventional Punch Through type (PT) one, especially in low current density regions because of the inherent lower built-in potential. Furthermore, we also have evaluated the effects of the mechanical stress on the device characteristics after soldering, utilizing a novel evaluation method with a very small size sub-chip layout. The results validate the proposed tool is useful to examine the influence of the mechanical stress on the electrical characteristics.",
"title": ""
},
{
"docid": "e7c8abf3387ba74ca0a6a2da81a26bc4",
"text": "An experiment was conducted to test the relationships between users' perceptions of a computerized system's beauty and usability. The experiment used a computerized application as a surrogate for an Automated Teller Machine (ATM). Perceptions were elicited before and after the participants used the system. Pre-experimental measures indicate strong correlations between system's perceived aesthetics and perceived usability. Post-experimental measures indicated that the strong correlation remained intact. A multivariate analysis of covariance revealed that the degree of system's aesthetics affected the post-use perceptions of both aesthetics and usability, whereas the degree of actual usability had no such effect. The results resemble those found by social psychologists regarding the effect of physical attractiveness on the valuation of other personality attributes. The ®ndings stress the importance of studying the aesthetic aspect of human±computer interaction (HCI) design and its relationships to other design dimensions. q 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "911545273424b27832310d9869ccb55f",
"text": "Current people detectors operate either by scanning an image in a sliding window fashion or by classifying a discrete set of proposals. We propose a model that is based on decoding an image into a set of people detections. Our system takes an image as input and directly outputs a set of distinct detection hypotheses. Because we generate predictions jointly, common post-processing steps such as nonmaximum suppression are unnecessary. We use a recurrent LSTM layer for sequence generation and train our model end-to-end with a new loss function that operates on sets of detections. We demonstrate the effectiveness of our approach on the challenging task of detecting people in crowded scenes1.",
"title": ""
},
{
"docid": "ac95eeba1f0f7632485c8138ea98fb6b",
"text": "Spreadsheets are becoming increasingly popular in solving engineering related problems. Among the strong features of spreadsheets are their instinctive cell-based structure and easy to use capabilities. Excel, for example, is a powerful spreadsheet with VBA robust programming capabilities that can be a powerful tool for teaching civil engineering concepts. Spreadsheets can do basic calculations such as cost estimates, schedule and cost control, and markup estimation, as well as structural calculations of reactions, stresses, strains, deflections, and slopes. Spreadsheets can solve complex problems, create charts and graphs, and generate useful reports. This paper highlights the use of Excel spreadsheet and VBA in teaching civil engineering concepts and creating useful applications. The focus is on concepts related to construction management and structural engineering ranging from a simple cost estimating problem to advanced applications like the simulation using PERT and the analysis of structural members. Several spreadsheet were developed for time-cost tradeoff analysis, optimum markup estimation, simulating activities with uncertain durations, scheduling repetitive projects, schedule and cost control, and optimization of construction operations, and structural calculations of reactions, internal forces, stresses, strains, deflections, and slopes. Seven illustrative examples are presented to demonstrate the use of spreadsheets as a powerful tool for teaching civil engineering concepts.",
"title": ""
},
{
"docid": "a289775f693d6b37f54b13898c242a82",
"text": "The large-scale, dynamic, and heterogeneous nature of cloud computing poses numerous security challenges. But the cloud's main challenge is to provide a robust authorization mechanism that incorporates multitenancy and virtualization aspects of resources. The authors present a distributed architecture that incorporates principles from security management and software engineering and propose key requirements and a design model for the architecture.",
"title": ""
},
{
"docid": "7da83f5d7bc383e5a2b791a2d45e6422",
"text": "Generating logical form equivalents of human language is a fresh way to employ neural architectures where long shortterm memory effectively captures dependencies in both encoder and decoder units. The logical form of the sequence usually preserves information from the natural language side in the form of similar tokens, and recently a copying mechanism has been proposed which increases the probability of outputting tokens from the source input through decoding. In this paper we propose a caching mechanism as a more general form of the copying mechanism which also weighs all the words from the source vocabulary according to their relation to the current decoding context. Our results confirm that the proposed method achieves improvements in sequence/token-level accuracy on sequence to logical form tasks. Further experiments on cross-domain adversarial attacks show substantial improvements when using the most influential examples of other domains for training.",
"title": ""
},
{
"docid": "05d3029a38631e4c0e445731f655b52c",
"text": "This paper presents a non-inverting buck-boost based power-factor-correction (PFC) converter operating in the boundary-conduction-mode (BCM) for the wide input-voltage-range applications. Unlike other conventional PFC converters, the proposed non-inverting buck-boost based PFC converter has both step-up and step-down conversion functionalities to provide positive DC output-voltage. In order to reduce the turn-on switching-loss in high frequency applications, the BCM current control is employed to achieve zero current turn-on for the power switches. Besides, the relationships of the power factor versus the voltage conversion ratio between the BCM boost PFC converter and the proposed BCM non-inverting buck-boost PFC converter are also provided. Finally, the 70-watt prototype circuit of the proposed BCM buck-boost based PFC converter is built for the verification of the high frequency and wide input-voltage-range.",
"title": ""
}
] |
scidocsrr
|
9a287355a9527e38a8c78f0e88362339
|
A Tutorial on Network Embeddings
|
[
{
"docid": "da607ab67cb9c1e1d08a70b15f9470d7",
"text": "Network embedding (NE) is playing a critical role in network analysis, due to its ability to represent vertices with efficient low-dimensional embedding vectors. However, existing NE models aim to learn a fixed context-free embedding for each vertex and neglect the diverse roles when interacting with other vertices. In this paper, we assume that one vertex usually shows different aspects when interacting with different neighbor vertices, and should own different embeddings respectively. Therefore, we present ContextAware Network Embedding (CANE), a novel NE model to address this issue. CANE learns context-aware embeddings for vertices with mutual attention mechanism and is expected to model the semantic relationships between vertices more precisely. In experiments, we compare our model with existing NE models on three real-world datasets. Experimental results show that CANE achieves significant improvement than state-of-the-art methods on link prediction and comparable performance on vertex classification. The source code and datasets can be obtained from https://github.com/ thunlp/CANE.",
"title": ""
}
] |
[
{
"docid": "cf6c2d8fac95d95998431fbb31953997",
"text": "Global software development (GSD) is a phenomenon that is receiving considerable interest from companies all over the world. In GSD, stakeholders from different national and organizational cultures are involved in developing software and the many benefits include access to a large labour pool, cost advantage and round-the-clock development. However, GSD is technologically and organizationally complex and presents a variety of challenges to be managed by the software development team. In particular, temporal, geographical and socio-cultural distances impose problems not experienced in traditional systems development. In this paper, we present findings from a case study in which we explore the particular challenges associated with managing GSD. Our study also reveals some of the solutions that are used to deal with these challenges. We do so by empirical investigation at three US based GSD companies operating in Ireland. Based on qualitative interviews we present challenges related to temporal, geographical and socio-cultural distance",
"title": ""
},
{
"docid": "b1f29f32ecc6aa2404cad271427675f2",
"text": "RATIONALE\nAnti-N-methyl-D-aspartate (NMDA) receptor encephalitis is an autoimmune disorder that can be controlled and reversed by immunotherapy. The presentation of NMDA receptor encephalitis varies, but NMDA receptor encephalitis is seldom reported in patients with both bilateral teratomas and preexisting brain injury.\n\n\nPATIENT CONCERNS\nA 28-year-old female with a history of traumatic intracranial hemorrhage presented acute psychosis, seizure, involuntary movement, and conscious disturbance with a fulminant course. Anti-NMDA receptor antibody was identified in both serum and cerebrospinal fluid, confirming the diagnosis of anti-NMDA receptor encephalitis. Bilateral teratomas were also identified during tumor survey. DIAGNOSES:: anti-N-methyl-D-aspartate receptor encephalitis.\n\n\nINTERVENTIONS\nTumor resection and immunotherapy were performed early during the course.\n\n\nOUTCOMES\nThe patient responded well to tumor resection and immunotherapy. Compared with other reports in the literature, her symptoms rapidly improved without further relapse.\n\n\nLESSONS\nThis case report demonstrates that bilateral teratomas may be related to high anybody titers and that the preexisting head injury may be responsible for lowering the threshold of neurological deficits. Early diagnosis and therapy are crucial for a good prognosis in such patients.",
"title": ""
},
{
"docid": "b803d626421c7e7eaf52635c58523e8f",
"text": "Force-directed algorithms are among the most flexible methods for calculating layouts of simple undirected graphs. Also known as spring embedders, such algorithms calculate the layout of a graph using only information contained within the structure of the graph itself, rather than relying on domain-specific knowledge. Graphs drawn with these algorithms tend to be aesthetically pleasing, exhibit symmetries, and tend to produce crossing-free layouts for planar graphs. In this survey we consider several classical algorithms, starting from Tutte’s 1963 barycentric method, and including recent scalable multiscale methods for large and dynamic graphs.",
"title": ""
},
{
"docid": "eabb50988aeb711995ff35833a47770d",
"text": "Although chemistry is by far the largest scientific discipline according to any quantitative measure, it had, until recently, been virtually ignored by professional philosophers of science. They left both a vacuum and a one-sided picture of science tailored to physics. Since the early 1990s, the situation has changed drastically, such that philosophy of chemistry is now one of the most flourishing fields in the philosophy of science, like the philosophy of biology that emerged in the 1970s. This article narrates the development and provides a survey of the main topics and trends.",
"title": ""
},
{
"docid": "62f8eb0e7eafe1c0d857dadc72008684",
"text": "In the current Web 2.0 era, the popularity of Web resources fluctuates ephemerally, based on trends and social interest. As a result, content-based relevance signals are insufficient to meet users' constantly evolving information needs in searching for Web 2.0 items. Incorporating future popularity into ranking is one way to counter this. However, predicting popularity as a third party (as in the case of general search engines) is difficult in practice, due to their limited access to item view histories. To enable popularity prediction externally without excessive crawling, we propose an alternative solution by leveraging user comments, which are more accessible than view counts. Due to the sparsity of comments, traditional solutions that are solely based on view histories do not perform well. To deal with this sparsity, we mine comments to recover additional signal, such as social influence. By modeling comments as a time-aware bipartite graph, we propose a regularization-based ranking algorithm that accounts for temporal, social influence and current popularity factors to predict the future popularity of items. Experimental results on three real-world datasets --- crawled from YouTube, Flickr and Last.fm --- show that our method consistently outperforms competitive baselines in several evaluation tasks.",
"title": ""
},
{
"docid": "a7f0f573b28b1fb82c3cba2d782e7d58",
"text": "This paper presents a meta-analysis of theory and research about writing and writing pedagogy, identifying six discourses – configurations of beliefs and practices in relation to the teaching of writing. It introduces and explains a framework for the analysis of educational data about writing pedagogy inwhich the connections are drawn across viewsof language, viewsofwriting, views of learning towrite,approaches to the teaching of writing, and approaches to the assessment of writing. The framework can be used for identifying discourses of writing in data such as policy documents, teaching and learning materials, recordings of pedagogic practice, interviews and focus groups with teachers and learners, and media coverage of literacy education. The paper also proposes that, while there are tensions and contradictions among these discourses, a comprehensive writing pedagogy might integrate teaching approaches from all six.",
"title": ""
},
{
"docid": "4e4560d1434ee05c30168e49ffc3d94a",
"text": "We present a tree data structure for fast nearest neighbor operations in general <i>n</i>-point metric spaces (where the data set consists of <i>n</i> points). The data structure requires <i>O</i>(<i>n</i>) space <i>regardless</i> of the metric's structure yet maintains all performance properties of a navigating net (Krauthgamer & Lee, 2004b). If the point set has a bounded expansion constant <i>c</i>, which is a measure of the intrinsic dimensionality, as defined in (Karger & Ruhl, 2002), the cover tree data structure can be constructed in <i>O</i> (<i>c</i><sup>6</sup><i>n</i> log <i>n</i>) time. Furthermore, nearest neighbor queries require time only logarithmic in <i>n</i>, in particular <i>O</i> (<i>c</i><sup>12</sup> log <i>n</i>) time. Our experimental results show speedups over the brute force search varying between one and several orders of magnitude on natural machine learning datasets.",
"title": ""
},
{
"docid": "a86840c1c1c6bef15889fd0e62815402",
"text": "The Web offers a corpus of over 100 million tables [6], but the meaning of each table is rarely explicit from the table itself. Header rows exist in few cases and even when they do, the attribute names are typically useless. We describe a system that attempts to recover the semantics of tables by enriching the table with additional annotations. Our annotations facilitate operations such as searching for tables and finding related tables. To recover semantics of tables, we leverage a database of class labels and relationships automatically extracted from the Web. The database of classes and relationships has very wide coverage, but is also noisy. We attach a class label to a column if a sufficient number of the values in the column are identified with that label in the database of class labels, and analogously for binary relationships. We describe a formal model for reasoning about when we have seen sufficient evidence for a label, and show that it performs substantially better than a simple majority scheme. We describe a set of experiments that illustrate the utility of the recovered semantics for table search and show that it performs substantially better than previous approaches. In addition, we characterize what fraction of tables on the Web can be annotated using our approach.",
"title": ""
},
{
"docid": "bd0b0cef8ef780a44ad92258ac705395",
"text": "This chapter introduces some of the theoretical foundations of swarm intelligence. We focus on the design and implementation of the Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) algorithms for various types of function optimization problems, real world applications and data mining. Results are analyzed, discussed and their potentials are illustrated.",
"title": ""
},
{
"docid": "ddd3f4e9bf77a65c7b183d04905e1b68",
"text": "The immune system is built to defend an organism against both known and new attacks, and functions as an adaptive distributed defense system. Artificial Immune Systems abstract the structure of immune systems to incorporate memory, fault detection and adaptive learning. We propose an immune system based real time intrusion detection system using unsupervised clustering. The model consists of two layers: a probabilistic model based T-cell algorithm which identifies possible attacks, and a decision tree based B-cell model which uses the output from T-cells together with feature information to confirm true attacks. The algorithm is tested on the KDD 99 data, where it achieves a low false alarm rate while maintaining a high detection rate. This is true even in case of novel attacks,which is a significant improvement over other algorithms.",
"title": ""
},
{
"docid": "cb408e52b5e96669e08f70888b11b3e3",
"text": "Centrality is one of the most studied concepts in social network analysis. There is a huge literature regarding centrality measures, as ways to identify the most relevant users in a social network. The challenge is to find measures that can be computed efficiently, and that can be able to classify the users according to relevance criteria as close as possible to reality. We address this problem in the context of the Twitter network, an online social networking service with millions of users and an impressive flow of messages that are published and spread daily by interactions between users. Twitter has different types of users, but the greatest utility lies in finding the most influential ones. The purpose of this article is to collect and classify the different Twitter influence measures that exist so far in literature. These measures are very diverse. Some are based on simple metrics provided by the Twitter API, while others are based on complex mathematical models. Several measures are based on the PageRank algorithm, traditionally used to rank the websites on the Internet. Some others consider the timeline of publication, others the content of the messages, some are focused on specific topics, and others try to make predictions. We consider all these aspects, and some additional ones. Furthermore, we include measures of activity and popularity, the traditional mechanisms to correlate measures, and some important aspects of computational complexity for this particular context.",
"title": ""
},
{
"docid": "f0af49e37fa37cf74c79f6903ae05748",
"text": "We show that early vision can use monocular cues to rapidly complete partially-occluded objects. Visual search for easily-detected fragments becomes difficult when the completed shape is similar to others in the display; conversely, search for fragments that are difficult to detect becomes easy when the completed shape is distinctive. Results indicate that completion occurs via the occlusion-triggered removal of occlusion edges and linking of associated regions. We fail to find evidence for a visible filling-in of contours or surfaces, but do find evidence for a 'functional' filling-in that prevents the constituent fragments from being rapidly accessed. As such, it is only the completed structures--and not the fragments themselves--that serve as the basis for rapid recognition.",
"title": ""
},
{
"docid": "48568865b27e8edb88d4683e702dd4f8",
"text": "This study investigates how individuals process an online product review when an avatar is included to represent the peer reviewer. The researchers predicted that both perceived avatar and textual credibility would have a positive influence on perceptions of source trustworthiness and the data supported this prediction. Expectancy violations theory also predicted that discrepancies between the perceived avatar and textual credibility would produce violations. Violations were statistically captured using a residual analysis. The results of this research ultimately demonstrated that discrepancies in perceived avatar and textual credibility can have a significant impact on perceptions of source trustworthiness. These findings suggest that predicting perceived source trustworthiness in an online consumer review setting goes beyond the linear effects of avatar and textual credibility.",
"title": ""
},
{
"docid": "76849958320dde148b7dadcb6113d9d3",
"text": "Numerous recent approaches attempt to remove image blur due to camera shake, either with one or multiple input images, by explicitly solving an inverse and inherently ill-posed deconvolution problem. If the photographer takes a burst of images, a modality available in virtually all modern digital cameras, we show that it is possible to combine them to get a clean sharp version. This is done without explicitly solving any blur estimation and subsequent inverse problem. The proposed algorithm is strikingly simple: it performs a weighted average in the Fourier domain, with weights depending on the Fourier spectrum magnitude. The method's rationale is that camera shake has a random nature and therefore each image in the burst is generally blurred differently. Experiments with real camera data show that the proposed Fourier Burst Accumulation algorithm achieves state-of-the-art results an order of magnitude faster, with simplicity for on-board implementation on camera phones.",
"title": ""
},
{
"docid": "ac41c57bcb533ab5dabcc733dd69a705",
"text": "In this paper we propose two ways to deal with the imbalanced data classification problem using random forest. One is based on cost sensitive learning, and the other is based on a sampling technique. Performance metrics such as precision and recall, false positive rate and false negative rate, F-measure and weighted accuracy are computed. Both methods are shown to improve the prediction accuracy of the minority class, and have favorable performance compared to the existing algorithms.",
"title": ""
},
{
"docid": "f1ebd840092228e48a3ab996287e7afd",
"text": "Negative emotions are reliably associated with poorer health (e.g., Kiecolt-Glaser, McGuire, Robles, & Glaser, 2002), but only recently has research begun to acknowledge the important role of positive emotions for our physical health (Fredrickson, 2003). We examine the link between dispositional positive affect and one potential biological pathway between positive emotions and health-proinflammatory cytokines, specifically levels of interleukin-6 (IL-6). We hypothesized that greater trait positive affect would be associated with lower levels of IL-6 in a healthy sample. We found support for this hypothesis across two studies. We also explored the relationship between discrete positive emotions and IL-6 levels, finding that awe, measured in two different ways, was the strongest predictor of lower levels of proinflammatory cytokines. These effects held when controlling for relevant personality and health variables. This work suggests a potential biological pathway between positive emotions and health through proinflammatory cytokines.",
"title": ""
},
{
"docid": "27ddea786e06ffe20b4f526875cdd76b",
"text": "It , is generally unrecognized that Sigmund Freud's contribution to the scientific understanding of dreams derived from a radical reorientation to the dream experience. During the nineteenth century, before publication of The Interpretation of Dreams, the presence of dreaming was considered by the scientific community as a manifestation of mental activity during sleep. The state of sleep was given prominence as a factor accounting for the seeming lack of organization and meaning to the dream experience. Thus, the assumed relatively nonpsychological sleep state set the scientific stage for viewing the nature of the dream. Freud radically shifted the context. He recognized-as myth, folklore, and common sense had long understood-that dreams were also linked with the psychology of waking life. This shift in orientation has proved essential for our modern view of dreams and dreaming. Dreams are no longer dismissed as senseless notes hit at random on a piano keyboard by an untrained player. Dreams are now recognized as psychologically significant and meaningful expressions of the life of the dreamer, albeit expressed in disguised and concealed forms. (For a contrasting view, see AcFIIa ION_sYNTHESIS xxroTESis .) Contemporary Dream Research During the past quarter-century, there has been increasing scientific interest in the process of dreaming. A regular sleep-wakefulness cycle has been discovered, and if experimental subjects are awakened during periods of rapid eye movements (REM periods), they will frequently report dreams. In a typical night, four or five dreams occur during REM periods, accompanied by other signs of physiological activation, such as increased respiratory rate, heart rate, and penile and clitoral erection. Dreams usually last for the duration of the eye movements, from about 10 to 25 minutes. Although dreaming usually occurs in such regular cycles ;.dreaming may occur at other times during sleep, as well as during hypnagogic (falling asleep) or hypnopompic .(waking up) states, when REMs are not present. The above findings are discoveries made since the monumental work of Freud reported in The Interpretation of Dreams, and .although of great interest to the study of the mind-body problem, these .findings as yet bear only a peripheral relationship to the central concerns of the psychology of dream formation, the meaning of dream content, the dream as an approach to a deeper understanding of emotional life, and the use of the dream in psychoanalytic treatment .",
"title": ""
},
{
"docid": "444bcff9a7fdcb80041aeb01b8724eed",
"text": "The morphologic anatomy of the liver is described as 2 main and 2 accessory lobes. The more recent functional anatomy of the liver is based on the distribution of the portal pedicles and the location of the hepatic veins. The liver is divided into 4 sectors, some of them composed of 2 segments. In all, there are 8 segments. According to the anatomy, typical hepatectomies (or “réglées”) are those which are performed along anatomical scissurae. The 2 main technical conceptions of typical hepatectomies are those with preliminary vascular control (Lortat-Jacob's technique) and hepatectomies with primary parenchymatous transection (Ton That Tung's technique). A good knowledge of the anatomy of the liver is a prerequisite for anatomical surgery of this organ. L'anatomie morphologique du foie permet d'individualiser 2 lobes principaux et 2 lobes accessoires. L'anatomie fonctionnelle du foie, plus récemment décrite, est fondée sur la distribution des pédicules portaux et sur la localisation des veines sus-hépatiques. Le foie est divisé en 4 secteurs, eux-mÊmes composés en général de 2 segments. Au total, il y a 8 segments. Selon les données anatomiques, les hépatectomies typiques (ou réglées) sont celles qui sont réalisées le long des scissures anatomiques. Les deux conceptions principales des exérèses hépatiques typiques sont, du point de vue technique, les hépatectomies avec contrÔle vasculaire préalable (technique de Lortat-Jacob) et les hépatectomies avec abord transparenchymateux premier (technique de Ton That Tung). Une connaissance approfondie de l'anatomie du foie est une condition préalable à la réalisation d'une chirurgie anatomique de cet organe.",
"title": ""
},
{
"docid": "f734f6059c849c88e5b53d3584bf0a97",
"text": "In three studies (two representative nationwide surveys, N = 1,007, N = 682; and one experimental, N = 76) we explored the effects of exposure to hate speech on outgroup prejudice. Following the General Aggression Model, we suggest that frequent and repetitive exposure to hate speech leads to desensitization to this form of verbal violence and subsequently to lower evaluations of the victims and greater distancing, thus increasing outgroup prejudice. In the first survey study, we found that lower sensitivity to hate speech was a positive mediator of the relationship between frequent exposure to hate speech and outgroup prejudice. In the second study, we obtained a crucial confirmation of these effects. After desensitization training individuals were less sensitive to hate speech and more prejudiced toward hate speech victims than their counterparts in the control condition. In the final study, we replicated several previous effects and additionally found that the effects of exposure to hate speech on prejudice were mediated by a lower sensitivity to hate speech, and not by lower sensitivity to social norms. Altogether, our studies are the first to elucidate the effects of exposure to hate speech on outgroup prejudice.",
"title": ""
},
{
"docid": "d6a0dbdfda18a11e3a39d3f27e915426",
"text": "Concepts embody the knowledge to facilitate our cognitive processes of learning. Mapping short texts to a large set of open domain concepts has gained many successful applications. In this paper, we unify the existing conceptualization methods from a Bayesian perspective, and discuss the three modeling approaches: descriptive, generative, and discriminative models. Motivated by the discussion of their advantages and shortcomings, we develop a generative + descriptive modeling approach. Our model considers term relatedness in the context, and will result in disambiguated conceptualization. We show the results of short text clustering using a news title data set and a Twitter message data set, and demonstrate the effectiveness of the developed approach compared with the state-of-the-art conceptualization and topic modeling approaches.",
"title": ""
}
] |
scidocsrr
|
44e93a8c56340a2f41cbc30e6c6915ef
|
Design of a Broadband and Multiband Planar Inverted-F Antenna
|
[
{
"docid": "76e374d5a1e71822e1d72632136ad9f2",
"text": "This paper proposes two novel broadband microstrip antennas using coplanar feed-line. By feeding the patch with a suitable shape of the coplanar line in the slot of the patch, the broadband character is achieved. Compared with the antenna fed by a U-shaped feed-line, the antenna with L-shaped feed-line not only has wider bandwidth but also achieves the circular polarization character. The measured bandwidths of 25% and 34% are achieved, and both of the antennas have good radiation characteristics in the work band.",
"title": ""
}
] |
[
{
"docid": "d76d1c068f4f2f7d4af1b5bc268aaca9",
"text": "This paper proposes a secure image steganography technique to hide a secret image using the key. The secret image itself is not hidden, instead a key is generated and the key is hidden in the cover image. Using the key the secret image can be extracted. Integer Wavelet Transform (IWT) is used to hide the key. So it is very secure and robust because no one can realize the hidden information and it cannot be lost due to noise or any signal processing operations. Experimental results show very good Peak Signal to Noise Ratio (PSNR), which is a measure of security. In this technique the secret information is hidden in the middle bit-planes of the integer wavelet coefficients in high frequency sub-bands.",
"title": ""
},
{
"docid": "ed80c1ad22dbf51bfb20351b3d7a2b8b",
"text": "Three central problems in the recent literature on visual attention are reviewed. The first concerns the control of attention by top-down (or goal-directed) and bottom-up (or stimulus-driven) processes. The second concerns the representational basis for visual selection, including how much attention can be said to be location- or object-based. Finally, we consider the time course of attention as it is directed to one stimulus after another.",
"title": ""
},
{
"docid": "d5081c1f13d06b43386e2db276351abd",
"text": "We introduce an optimised pipeline for multi-atlas brain MRI segmentation. Both accuracy and speed of segmentation are considered. We study different similarity measures used in non-rigid registration. We show that intensity differences for intensity normalised images can be used instead of standard normalised mutual information in registration without compromising the accuracy but leading to threefold decrease in the computation time. We study and validate also different methods for atlas selection. Finally, we propose two new approaches for combining multi-atlas segmentation and intensity modelling based on segmentation using expectation maximisation (EM) and optimisation via graph cuts. The segmentation pipeline is evaluated with two data cohorts: IBSR data (N=18, six subcortial structures: thalamus, caudate, putamen, pallidum, hippocampus, amygdala) and ADNI data (N=60, hippocampus). The average similarity index between automatically and manually generated volumes was 0.849 (IBSR, six subcortical structures) and 0.880 (ADNI, hippocampus). The correlation coefficient for hippocampal volumes was 0.95 with the ADNI data. The computation time using a standard multicore PC computer was about 3-4 min. Our results compare favourably with other recently published results.",
"title": ""
},
{
"docid": "582738ff2d1369a7faf9480e5af9a717",
"text": "Deep learning has led to significant advances in artificial intelligence in recent years, in part by adopting architectures and functions motivated by neurophysiology. However, current deep learning algorithms are biologically infeasible, because they assume non-spiking units, discontinuous-time, and non-local synaptic weight updates. Here, we build on recent discoveries in artificial neural networks to develop a spiking, continuous-time neural network model that learns to categorize images from the MNIST data-set with local synaptic weight updates. The model achieves this via a three-compartment cellular architecture, motivated by neocortical pyramidal cell neurophysiology, wherein feedforward sensory information and feedback from higher layers are received at separate compartments in the neurons. We show that, thanks to the separation of feedforward and feedback information in different dendrites, our learning algorithm can coordinate learning across layers, taking advantage of multilayer architectures to identify abstract categories—the hallmark of deep learning. Our model demonstrates that deep learning can be achieved within a biologically feasible framework using segregated dendritic compartments, which may help to explain the anatomy of neocortical pyramidal neurons.",
"title": ""
},
{
"docid": "00206407887e9a6d5723e56de2de0d72",
"text": "The C-Leg (Otto Bock, Duderstadt, Germany) is a microprocessor-controlled prosthetic knee that may enhance amputee gait. This intrasubject randomized study compared the gait biomechanics of transfemoral amputees wearing the C-Leg with those wearing a common noncomputerized prosthesis, the Mauch SNS (Ossur, Reykjavik, Iceland). After subjects had a 3-month acclimation period with each prosthetic knee, typical gait biomechanical data were collected in a gait laboratory. At a controlled walking speed (CWS), peak swing phase knee-flexion angle decreased for the C-Leg group compared with the Mauch SNS group (55.2 degrees +/- 6.5 degrees vs 64.41 degrees +/- 5.8 degrees , respectively; p = 0.005); the C-Leg group was similar to control subjects' peak swing knee-flexion angle (56.0 degrees +/- 3.4 degrees ). Stance knee-flexion moment increased for the C-Leg group compared with the Mauch SNS group (0.142 +/- 0.05 vs 0.067 +/- 0.07 N\"m, respectively; p = 0.01), but remained significantly reduced compared with control subjects (0.477 +/- 0.1 N\"m). Prosthetic limb step length at CWS was less for the C-Leg group compared with the Mauch SNS group (0.66 +/- 0.04 vs 0.70 +/- 0.06 m, respectively; p = 0.005), which resulted in increased symmetry between limbs for the C-Leg group. Subjects also walked faster with the C-Leg versus the Mauch SNS (1.30 +/- 0.1 vs 1.21 +/- 0.1 m/s, respectively; p = 0.004). The C-Leg prosthetic limb vertical ground reaction force decreased compared with the Mauch SNS (96.3 +/- 4.7 vs 100.3 +/- 7.5 % body weight, respectively; p = 0.0092).",
"title": ""
},
{
"docid": "6b7c3381d80e88ff2b69bd8b5f90516a",
"text": "Commitments are a persistent feature of international affairs. Disagreement over the effect of international commitments and the causes of compliance with them is equally persistent. Yet in the last decade the long-standing divide between those who believed that international rules per se shaped state behavior and those who saw such rules as epiphenomena1 or insignificant has given way to a more nuanced and complex debate. Regime theory, originally focused on the creation and persistence of regimes, increasingly emphasizes variations in regimes and in their impact on behavior. The legal quality of regime rules is one important source of regime variation. At the same time the proliferation and evolution of intema-tional legal agreements, organizations and judicial bodies in the wake of the Cold War has provided the empirical predicate and a policy imperative for heightened attention to the role of international law. Across many issue-areas, the use of law to structure world politics seems to be increasing. This phenomenon of legalization raises several questions. What factors explain the choice to create and use international law? If law is a tool or method to organize interaction, how does it work? Does the use of international law make a difference to how states or domestic actors behave? These questions are increasingly of interest to IR theorists axid policy-makers alike. The core issue is the impact of law and legal-ization on state behavior, often understood in terms of compliance. While the distinction should not be overstated, legal rules and institutions presume compliance in a way that non-legal rules and institutions do not. Law and compliance are conceptually linked because law explicitly aims to produce compliance with its rules: legal rules set the standard by which compliance is gauged Explanations of why and when states comply with international law can help account for the turn to law as a positive phenomenon, but they also provide critical policy guidance for the design of new institutions and agreements. This chapter surveys the study of compliance in both the international relations (IR) and international law (IL) literature.' In many ways, the compliance literature is a microcosm of developments in both fields, and particularly of the rapproche-For IR scholars interested in reviving the study of international law in their discipline, it was a natural step to focus first on questions of whether, when and how law 'mattered' to state behavior. For international lawyers eager to use IR theory to …",
"title": ""
},
{
"docid": "c99fd51e8577a5300389c565aebebdb3",
"text": "Face Detection and Recognition is an important area in the field of substantiation. Maintenance of records of students along with monitoring of class attendance is an area of administration that requires significant amount of time and efforts for management. Automated Attendance Management System performs the daily activities of attendance analysis, for which face recognition is an important aspect. The prevalent techniques and methodologies for detecting and recognizing faces by using feature extraction tools like mean, standard deviation etc fail to overcome issues such as scaling, pose, illumination, variations. The proposed system provides features such as detection of faces, extraction of the features, detection of extracted features, and analysis of student’s attendance. The proposed system integrates techniques such as Principal Component Analysis (PCA) for feature extraction and voila-jones for face detection &Euclidian distance classifier. Faces are recognized using PCA, using the database that contains images of students and is used to recognize student using the captured image. Better accuracy is attained in results and the system takes into account the changes that occurs in the face over the period of time.",
"title": ""
},
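The PCA-plus-Euclidean-distance recognition step described in the passage above can be sketched compactly. This is a minimal illustration, not the authors' implementation: it assumes faces have already been detected, cropped to a common size, converted to grayscale, and flattened into vectors, and all names are placeholders.

```python
import numpy as np

def train_pca(faces, n_components=20):
    """faces: (n_samples, n_pixels) matrix of flattened training face crops."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Principal directions via SVD of the centered data (rows of vt)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]            # (k, n_pixels)
    projections = centered @ components.T     # training faces in eigenface space
    return mean, components, projections

def recognize(face, mean, components, projections, labels):
    """Project a probe face and return the label of the nearest training face."""
    query = (face - mean) @ components.T
    dists = np.linalg.norm(projections - query, axis=1)   # Euclidean distances
    return labels[int(np.argmin(dists))]
```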
{
"docid": "eb40a59fe4f5205c5e933c5c2de5943f",
"text": "Menthol as an important component of monoterpenes essential oil in peppermint (Mentha piperita L.) is widely applied for medical and industrial uses. In this study, the effect of exogenous applications of chitosan (200 mg/L), gibberellic acid (50 mg/L) and methyl jasmonate (300 μM) was investigated in the main genes of menthol biosynthesis pathways within a 72 h time period using qRT-PCR. Transcript levels of most genes were either unaffected or down-regulated following chitosan treatment relative to control plants. Decreasing of geranyl diphosphate synthase (GDS) and limonene synthase (LS) genes transcript in chitosan treatment could possibly be effective in reducing of limonene level. On the other hand, it seems that an increase in menthone-menthol reductase (MMR) transcription level at 72 h under these treatments had a positive role in increasing the amount of menthol in this plant. Since exogenous application of gibberellic acid (GA3) down-regulated transcript levels of several genes involved in menthol biosynthesis, there is this expectance that GA3 treatment might not have a prominent role in enhancing menthol yield via transcription regulation. Transcript level of the majority genes after methyl jasmonate treatment gradually increased and reached the highest level at 72 h, therefore, it is possible that methyl jasmonate improves medicinal properties of M. piperita.",
"title": ""
},
{
"docid": "c7c40106a804061b96b6243cff85d317",
"text": "In this paper, we describe a system for detecting duplicate images and videos in a large collection of multimedia data. Our system consists of three major elements: Local-Difference-Pattern (LDP) as the unified feature to describe both images and videos, Locality-Sensitive-Hashing (LSH) as the core indexing structure to assure the most frequent data access occurred in the main memory, and multi-steps verification for queries to best exclude false positives and to increase the precision. The experimental results, validated on two public datasets, demonstrate that the proposed method is robust against the common image-processing tricks used to produce duplicates. In addition, the memory requirement has been addressed in our system to handle large-scale database.",
"title": ""
},
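The passage does not spell out the hash family, so the sketch below uses random-hyperplane (sign) hashing over generic feature vectors to show how LSH buckets near-duplicate candidates for a later verification step; the class and parameters are illustrative assumptions, not the paper's LDP-specific design.

```python
import numpy as np
from collections import defaultdict

class RandomHyperplaneLSH:
    """Bucket feature vectors by the sign pattern of random projections."""
    def __init__(self, dim, n_bits=16, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))
        self.buckets = defaultdict(list)

    def _key(self, vec):
        return tuple((self.planes @ vec > 0).astype(int))

    def add(self, item_id, vec):
        self.buckets[self._key(vec)].append(item_id)

    def candidates(self, vec):
        # Items sharing the bucket are duplicate candidates; a multi-step
        # verification (e.g. exact distance) would then prune false positives.
        return list(self.buckets[self._key(vec)])
```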
{
"docid": "a737511620632ac8920a20d566c93974",
"text": "Hidradenitis suppurativa (HS) is an inflammatory skin disease. Several observations imply that sex hormones may play a role in its pathogenesis. HS is more common in women, and the disease severity appears to vary in intensity according to the menstrual cycle. In addition, parallels have been drawn between HS and acne vulgaris, suggesting that sex hormones may play a role in the condition. The role of androgens and estrogens in HS has therefore been explored in numerous observational and some interventional studies; however, the studies have often reported conflicting results. This systematic review includes 59 unique articles and aims to give an overview of the available research. Articles containing information on natural variation, severity changes during menstruation and pregnancy, as well as articles on serum levels of hormones in patients with HS and the therapeutic options of hormonal manipulation therapy have all been included and are presented in this systematic review. Our results show that patients with HS do not seem to have increased levels of sex hormones and that their hormone levels lie within the normal range. While decreasing levels of progesterone and estrogen seem to coincide with disease flares in premenopausal women, the association is speculative and requires experimental confirmation. Antiandrogen treatment could be a valuable approach in treating HS, however randomized control trials are lacking.",
"title": ""
},
{
"docid": "271f3780fe6c1d58a8f5dffbd182e1ac",
"text": "We are presenting the design of a high gain printed antenna array consisting of 420 identical patch antennas intended for FMCW radar at Ku band. The array exhibits 3 dB-beamwidths of 2° and 10° in H and E plane, respectively, side lobe suppression better than 20 dB, gain about 30 dBi and VSWR less than 2 in the frequency range 17.1 - 17.6 GHz. Excellent antenna efficiency that is between 60 and 70 % is achieved by proper impedance matching throughout the array and by using series feeding architecture with both resonant and traveling-wave feed. Enhanced cross polarization suppression is obtained by anti-phase feeding of the upper and the lower halves of the antenna. Overall antenna dimensions are 31 λ0 × 7.5 λ0.",
"title": ""
},
{
"docid": "4d10e865793788892ecf5ed967b6b6cf",
"text": "This article is a retrospective on the theme of knowledge harvesting: automatically constructing large highquality knowledge bases from Internet sources. We draw on our experience in the Yago-Naga project over the last decade, but consider other projects as well. The article discusses lessons learned on the architecture of a knowledge harvesting system, and points out open challenges and research opportunities. 1 Large High-Quality Knowledge Bases Turning Internet content, with its wealth of latent-value but noisy text and data sources, into crisp “machine knowledge” that can power intelligent applications is a long-standing goal of computer science. Over the last ten years, knowledge harvesting has made tremendous progress, leveraging advances in scalable information extraction and the availability of curated knowledge-sharing sources such as Wikipedia. Unlike the seminal projects on manually crafted knowledge bases and ontologies, like Cyc [27] and WordNet [14], knowledge harvesting is automated and operates at Web scale. Automatically constructed knowledge bases – KB’s for short – have become a powerful asset for search, analytics, recommendations, and data integration, with intensive use at big industrial stakeholders. Prominent examples are the Google Knowledge Graph, Facebook’s Graph Search, Microsoft Satori as well as domainspecific knowledge bases in business, finance, life sciences, and more. These achievements are rooted in academic research and community projects starting ten years ago, most notably, DBpedia [2], Freebase [5], KnowItAll [13], WikiTaxonomy [34] and Yago [41]. More recent major projects along these lines include BabelNet [31] ConceptNet [40], DeepDive [39], EntityCube (aka. Renlifang) [33], KnowledgeVault [9], Nell [6] Probase [50], Wikidata [47], XLore [48]. The largest of the KB’s from these projects contain many millions of entities (i.e., people, places, products etc.) and billions of facts about them (i.e., attribute values and relationships with other entities). Moreover, entities are organized into a taxonomy of semantic classes, sometimes with hundred thousands of fine-grained types. All this is often represented in the form of subject-predicate-object (SPO) triples, following the RDF data model, and some of the KB’s – most notably DBpedia – are central to the Web of Linked Open Data [18]. For illustration, here are some examples of SPO triples about Steve Jobs: Copyright 2016 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. Bulletin of the IEEE Computer Society Technical Committee on Data Engineering",
"title": ""
},
{
"docid": "5892af3dde2314267154a0e5a3c76985",
"text": "We describe a method for on-line handwritten signature veri!cation. The signatures are acquired using a digitizing tablet which captures both dynamic and spatial information of the writing. After preprocessing the signature, several features are extracted. The authenticity of a writer is determined by comparing an input signature to a stored reference set (template) consisting of three signatures. The similarity between an input signature and the reference set is computed using string matching and the similarity value is compared to a threshold. Several approaches for obtaining the optimal threshold value from the reference set are investigated. The best result yields a false reject rate of 2.8% and a false accept rate of 1.6%. Experiments on a database containing a total of 1232 signatures of 102 individuals show that writer-dependent thresholds yield better results than using a common threshold. ? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "47d2ebd3794647708d41c6b3d604e796",
"text": "Most stream data classification algorithms apply the supervised learning strategy which requires massive labeled data. Such approaches are impractical since labeled data are usually hard to obtain in reality. In this paper, we build a clustering feature decision tree model, CFDT, from data streams having both unlabeled and a small number of labeled examples. CFDT applies a micro-clustering algorithm that scans the data only once to provide the statistical summaries of the data for incremental decision tree induction. Micro-clusters also serve as classifiers in tree leaves to improve classification accuracy and reinforce the any-time property. Our experiments on synthetic and real-world datasets show that CFDT is highly scalable for data streams while generating high classification accuracy with high speed.",
"title": ""
},
{
"docid": "9f2b64afae5d0d3977b2459591f4e465",
"text": "This paper presents an active islanding detection method for a distributed resource (DR) unit which is coupled to a utility grid through a three-phase voltage-sourced converter (VSC). The method is based on injecting a negative-sequence current through the VSC controller and detecting and quantifying the corresponding negative-sequence voltage at the point of common coupling of the VSC by means of a unified three-phase signal processor (UTSP). UTSP is an enhanced phase-locked loop system which provides high degree of immunity to noise, and thus enable islanding detection based on injecting a small (3%) negative-sequence current. The negative-sequence current is injected by a negative-sequence controller which is adopted as the complementary of the conventional VSC current controller. Based on simulation studies in the PSCAD/EMTDC environment, performance of the islanding detection method under UL1741 anti-islanding test is evaluated, and its sensitivity to noise, grid short-circuit ratio, grid voltage imbalance, and deviations in the UL1741 test parameters are presented. The studies show that based on negative-sequence current injection of about 2% to 3%, islanding can be detected within 60 ms even for the worst case scenario.",
"title": ""
},
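The detection principle rests on extracting the negative-sequence component of the three phase voltages. Below is a minimal sketch of the symmetrical-component decomposition only (plain phasor arithmetic); the UTSP/PLL tracking, injection control, and detection thresholds from the paper are not modeled.

```python
import numpy as np

a = np.exp(2j * np.pi / 3)   # 120-degree rotation operator

def sequence_components(va, vb, vc):
    """Return (zero, positive, negative) sequence phasors from phase phasors."""
    v0 = (va + vb + vc) / 3
    v1 = (va + a * vb + a**2 * vc) / 3
    v2 = (va + a**2 * vb + a * vc) / 3
    return v0, v1, v2

# A balanced abc set has (essentially) no negative-sequence voltage.
balanced = (1.0 + 0j, a**2, a)                    # 1 pu phasors, 120 degrees apart
print(abs(sequence_components(*balanced)[2]))     # ~0.0
```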
{
"docid": "a1a8dc4d3c1c0d2d76e0f1cd0cb039d2",
"text": "73 generalized vertex median of a weighted graph, \" Operations Res., pp. 955-961, July 1967. and 1973, respectively. He spent two and a half years at Bell Laboratories , Murray Hill, NJ, developing telemetrized automatic surveillance and control systems. He is now Manager at Data Communications Systems, Vienna, VA, where he has major responsibilities in research and development of network analysis and design capabilities, and has applied these capabilities in the direction of projects ranging from feasability analysis and design of front end processors for the Navy to development of network architectures for the FAA. NY, responsible for contributing to the ongoing research in the areas of large network design, topological optimization for terminal access, the concentrator location problem, and flow and congestion control strategies for packet switching networks. At present, Absfruct-An algorithm is defined for establishing routing tables in the individual nodes of a data network. The routing fable at a node i specifies, for each other node j , what fraction of the traffic destined far node j should leave node i on each of the links emanating from node i. The algorithm is applied independently at each node and successively updates the routing table at that node based on information communicated between adjacent nodes about the marginal delay to each destination. For stationary input traffic statistics, the average delay per message through the network converges, with successive updates of the routing tables, to the minimum average delay over all routing assignments. The algorithm has the additional property that the traffic to each destination is guaranteed to be loop free at each iteration of the algorithm. In addition, a new global convergence theorem for non-continuous iteration algorithms is developed. INTRODUCTION T HE problem of routing assignments has been one of the most intensively studied areas in the field of data networks in recent years. These routing problems can be roughly classified as static routing, quasi-static routing, and dynamic routing. Static routing can be typified by the following type of problem. One wishes to establish a new data network and makes various assumptions about the node locations, the link locations, and the capacities of the links. Given the traffic between each source and destination, one can calculate the traffic on each link as a function of the routing of the traffic. If one approximates the queueing delays on each link as a function of the link traffic, one can …",
"title": ""
},
{
"docid": "32aaaa1bb43a5631cebb4dd85ef54105",
"text": "In this work sentiment analysis of annual budget for Financial year 2016–17 is done. Text mining is used to extract text data from the budget document and to compute the word association of significant words and their correlation in computed with the associated words. Word frequency and the corresponding word cloud is plotted. The analysis is done in R software. The corresponding sentiment score is computed and analyzed. This analysis is of significant importance keeping in mind the sentiment reflected about the budget in the official budget document.",
"title": ""
},
{
"docid": "43b9753d934d2e7598d6342a81f21bed",
"text": "A system has been developed which is capable of inducing brain injuries of graded severity from mild concussion to instantaneous death. A pneumatic shock tester subjects a monkey to a non-impact controlled single sagittal rotation which displaces the head 60 degrees in 10-20 msec. Results derived from 53 experiments show that a good correlation exists between acceleration delivered to the head, the resultant neurological status and the brain pathology. A simple experimental trauma severity (ETS) scale is offered based on changes in the heart rate, respiratory rate, corneal reflex and survivability. ETS grades 1 and 2 show heart rate or respiratory changes but no behavioral or pathological abnormality. ETS grades 3 and 4 have temporary corneal reflex abolition, behavioral unconsciousness, and post-traumatic behavioral abnormalities. Occasional subdural haematomas are seen. Larger forces cause death (ETS 5) from primary apnea or from large subdural haematomas. At the extreme range, instantaneous death (ETS 6) occurs because of pontomedullary lacerations. This model and the ETS scale offer the ability to study a broad spectrum of types of experimental head injury and underscore the importance of angular acceleration as a mechanism of head injury.",
"title": ""
},
{
"docid": "fdc8a54623f38ec29012d2f0f3bda8b1",
"text": "Object tracking is an important issue for research and application related to visual servoing and more generally for robot vision. In this paper, we address the problem of realizing visual servoing tasks on complex objects in real environments. We briefly present a set of tracking algorithms (2D features-based or motion-based tracking, 3D model-based tracking, . . . ) that have been used for ten years to achieve this goal.",
"title": ""
},
{
"docid": "c15369f923be7c8030cc8f2b1f858ced",
"text": "An important goal of scientific data analysis is to understand the behavior of a system or process based on a sample of the system. In many instances it is possible to observe both input parameters and system outputs, and characterize the system as a high-dimensional function. Such data sets arise, for instance, in large numerical simulations, as energy landscapes in optimization problems, or in the analysis of image data relating to biological or medical parameters. This paper proposes an approach to analyze and visualizing such data sets. The proposed method combines topological and geometric techniques to provide interactive visualizations of discretely sampled high-dimensional scalar fields. The method relies on a segmentation of the parameter space using an approximate Morse-Smale complex on the cloud of point samples. For each crystal of the Morse-Smale complex, a regression of the system parameters with respect to the output yields a curve in the parameter space. The result is a simplified geometric representation of the Morse-Smale complex in the high dimensional input domain. Finally, the geometric representation is embedded in 2D, using dimension reduction, to provide a visualization platform. The geometric properties of the regression curves enable the visualization of additional information about each crystal such as local and global shape, width, length, and sampling densities. The method is illustrated on several synthetic examples of two dimensional functions. Two use cases, using data sets from the UCI machine learning repository, demonstrate the utility of the proposed approach on real data. Finally, in collaboration with domain experts the proposed method is applied to two scientific challenges. The analysis of parameters of climate simulations and their relationship to predicted global energy flux and the concentrations of chemical species in a combustion simulation and their integration with temperature.",
"title": ""
}
] |
scidocsrr
|
d229c9839339d596488653be4137fbf6
|
Sampling and Recovery of Pulse Streams
|
[
{
"docid": "59786d8ea951639b8b9a4e60c9d43a06",
"text": "Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper) • It gives near-optimal error guarantees. • It is robust to observation noise. • It succeeds with a minimum number of observations. • It can be used with any sampling operator for which the operator and its adjoint can be computed. • The memory requirement is linear in the problem size. Preprint submitted to Elsevier 28 January 2009 • Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint. • It requires a fixed number of iterations depending only on the logarithm of a form of signal to noise ratio of the signal. • Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.",
"title": ""
}
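The iterative hard thresholding recursion the passage analyzes is short enough to state directly. The following is an illustrative sketch rather than the paper's code; scaling A so its spectral norm is at most 1 reflects the usual assumption under which the unit-step iteration is well behaved.

```python
import numpy as np

def iht(A, y, sparsity, n_iter=300):
    """Recover a `sparsity`-sparse x from y ~= A @ x by iterative hard thresholding."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + A.T @ (y - A @ x)                 # gradient step on 0.5 * ||y - A x||^2
        keep = np.argsort(np.abs(x))[-sparsity:]  # hard threshold: keep the s largest entries
        pruned = np.zeros_like(x)
        pruned[keep] = x[keep]
        x = pruned
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
A /= np.linalg.norm(A, ord=2)                     # enforce ||A||_2 <= 1
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
x_hat = iht(A, A @ x_true, sparsity=5)
print(np.linalg.norm(x_hat - x_true))             # typically close to zero
```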
] |
[
{
"docid": "bba4256906b1aee1c76d817b9926226c",
"text": "In this paper, we present an analytical framework to evaluate the latency performance of connection-based spectrum handoffs in cognitive radio (CR) networks. During the transmission period of a secondary connection, multiple interruptions from the primary users result in multiple spectrum handoffs and the need of predetermining a set of target channels for spectrum handoffs. To quantify the effects of channel obsolete issue on the target channel predetermination, we should consider the three key design features: 1) general service time distribution of the primary and secondary connections; 2) different operating channels in multiple handoffs; and 3) queuing delay due to channel contention from multiple secondary connections. To this end, we propose the preemptive resume priority (PRP) M/G/1 queuing network model to characterize the spectrum usage behaviors with all the three design features. This model aims to analyze the extended data delivery time of the secondary connections with proactively designed target channel sequences under various traffic arrival rates and service time distributions. These analytical results are applied to evaluate the latency performance of the connection-based spectrum handoff based on the target channel sequences mentioned in the IEEE 802.22 wireless regional area networks standard. Then, to reduce the extended data delivery time, a traffic-adaptive spectrum handoff is proposed, which changes the target channel sequence of spectrum handoffs based on traffic conditions. Compared to the existing target channel selection methods, this traffic-adaptive target channel selection approach can reduce the extended data transmission time by 35 percent, especially for the heavy traffic loads of the primary users.",
"title": ""
},
{
"docid": "bb7ac8c753d09383ecbf1c8cd7572d05",
"text": "Skills learned through (deep) reinforcement learning often generalizes poorly across domains and re-training is necessary when presented with a new task. We present a framework that combines techniques in formal methods with reinforcement learning (RL). The methods we provide allows for convenient specification of tasks with logical expressions, learns hierarchical policies (meta-controller and low-level controllers) with well-defined intrinsic rewards, and construct new skills from existing ones with little to no additional exploration. We evaluate the proposed methods in a simple grid world simulation as well as a more complicated kitchen environment in AI2Thor (Kolve et al. [2017]).",
"title": ""
},
{
"docid": "406fab96a8fd49f4d898a9735ee1512f",
"text": "An otolaryngology phenol applicator kit can be successfully and safely used in the performance of chemical matricectomy. The applicator kit provides a convenient way to apply phenol to the nail matrix precisely and efficiently, whereas minimizing both the risk of application to nonmatrix surrounding soft tissue and postoperative recovery time.Given the smaller size of the foam-tipped applicator, we feel that this is a more precise tool than traditional cotton-tipped applicators for chemical matricectomy. Particularly with regard to lower extremity nail ablation and matricectomy, minimizing soft tissue inflammation could in turn reduce the risk of postoperative infections, decrease recovery time, as well and make for a more positive overall patient experience.",
"title": ""
},
{
"docid": "517abd2ff0ed007c5011059d055e19e1",
"text": "Long Short-Term Memory (LSTM) is a particular type of recurrent neural network (RNN) that can model long term temporal dynamics. Recently it has been shown that LSTM-RNNs can achieve higher recognition accuracy than deep feed-forword neural networks (DNNs) in acoustic modelling. However, speaker adaption for LSTM-RNN based acoustic models has not been well investigated. In this paper, we study the LSTM-RNN speaker-aware training that incorporates the speaker information during model training to normalise the speaker variability. We first present several speaker-aware training architectures, and then empirically evaluate three types of speaker representation: I-vectors, bottleneck speaker vectors and speaking rate. Furthermore, to factorize the variability in the acoustic signals caused by speakers and phonemes respectively, we investigate the speaker-aware and phone-aware joint training under the framework of multi-task learning. In AMI meeting speech transcription task, speaker-aware training of LSTM-RNNs reduces word error rates by 6.5% relative to a very strong LSTM-RNN baseline, which uses FMLLR features.",
"title": ""
},
{
"docid": "7681a78f2d240afc6b2e48affa0612c1",
"text": "Web usage mining applies data mining procedures to analyze user access of Web sites. As with any KDD (knowledge discovery and data mining) process, WUM contains three main steps: preprocessing, knowledge extraction, and results analysis. We focus on data preprocessing, a fastidious, complex process. Analysts aim to determine the exact list of users who accessed the Web site and to reconstitute user sessions-the sequence of actions each user performed on the Web site. Intersites WUM deals with Web server logs from several Web sites, generally belonging to the same organization. Thus, analysts must reassemble the users' path through all the different Web servers that they visited. Our solution is to join all the log files and reconstitute the visit. Classical data preprocessing involves three steps: data fusion, data cleaning, and data structuration. Our solution for WUM adds what we call advanced data preprocessing. This consists of a data summarization step, which will allow the analyst to select only the information of interest. We've successfully tested our solution in an experiment with log files from INRIA Web sites.",
"title": ""
},
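To make the session-reconstitution step concrete, the sketch below groups merged log entries by user and splits sessions on an inactivity gap. The 30-minute timeout and the (user, timestamp, url) record layout are assumptions for illustration; the data fusion and cleaning stages discussed in the passage are omitted.

```python
from collections import defaultdict
from datetime import timedelta

SESSION_GAP = timedelta(minutes=30)   # common heuristic, not the paper's prescription

def reconstitute_sessions(entries):
    """entries: iterable of (user_id, timestamp, url) tuples merged from all servers.
    Returns {user_id: [session, ...]}, each session an ordered list of (timestamp, url)."""
    per_user = defaultdict(list)
    for user, ts, url in entries:
        per_user[user].append((ts, url))

    sessions = defaultdict(list)
    for user, hits in per_user.items():
        hits.sort(key=lambda h: h[0])
        current = [hits[0]]
        for prev, nxt in zip(hits, hits[1:]):
            if nxt[0] - prev[0] > SESSION_GAP:    # long silence starts a new session
                sessions[user].append(current)
                current = []
            current.append(nxt)
        sessions[user].append(current)
    return sessions
```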
{
"docid": "a9d5220445f3cac82fd38b16c26c2bbc",
"text": "Genomics is a Big Data science and is going to get much bigger, very soon, but it is not known whether the needs of genomics will exceed other Big Data domains. Projecting to the year 2025, we compared genomics with three other major generators of Big Data: astronomy, YouTube, and Twitter. Our estimates show that genomics is a \"four-headed beast\"--it is either on par with or the most demanding of the domains analyzed here in terms of data acquisition, storage, distribution, and analysis. We discuss aspects of new technologies that will need to be developed to rise up and meet the computational challenges that genomics poses for the near future. Now is the time for concerted, community-wide planning for the \"genomical\" challenges of the next decade.",
"title": ""
},
{
"docid": "c935ba16ca618659c8fcaa432425db22",
"text": "Dynamic Voltage/Frequency Scaling (DVFS) is a useful tool for improving system energy efficiency, especially in multi-core chips where energy is more of a limiting factor. Per-core DVFS, where cores can independently scale their voltages and frequencies, is particularly effective. We present a DVFS policy using machine learning, which learns the best frequency choices for a machine as a decision tree.\n Machine learning is used to predict the frequency which will minimize the expected energy per user-instruction (epui) or energy per (user-instruction)2 (epui2). While each core independently sets its frequency and voltage, a core is sensitive to other cores' frequency settings. Also, we examine the viability of using only partial training to train our policy, rather than full profiling for each program.\n We evaluate our policy on a 16-core machine running multiprogrammed, multithreaded benchmarks from the PARSEC benchmark suite against a baseline fixed frequency as well as a recently-proposed greedy policy. For 1ms DVFS intervals, our technique improves system epui2 by 14.4% over the baseline no-DVFS policy and 11.3% on average over the greedy policy.",
"title": ""
},
{
"docid": "724388aac829af9671a90793b1b31197",
"text": "We present a statistical phrase-based translation model that useshierarchical phrases — phrases that contain subphrases. The model is formally a synchronous context-free grammar but is learned from a bitext without any syntactic information. Thus it can be seen as a shift to the formal machinery of syntaxbased translation systems without any linguistic commitment. In our experiments using BLEU as a metric, the hierarchical phrasebased model achieves a relative improvement of 7.5% over Pharaoh, a state-of-the-art phrase-based system.",
"title": ""
},
{
"docid": "e733b08455a5ca2a5afa596268789993",
"text": "In this paper a new PWM inverter topology suitable for medium voltage (2300/4160 V) adjustable speed drive (ASD) systems is proposed. The modular inverter topology is derived by combining three standard 3-phase inverter modules and a 0.33 pu output transformer. The output voltage is high quality, multistep PWM with low dv/dt. Further, the approach also guarantees balanced operation and 100% utilization of each 3-phase inverter module over the entire speed range. These features enable the proposed topology to be suitable for powering constant torque as well as variable torque type loads. Clean power utility interface of the proposed inverter system can be achieved via an 18-pulse input transformer. Analysis, simulation, and experimental results are shown to validate the concepts.",
"title": ""
},
{
"docid": "866c1e87076da5a94b9adeacb9091ea3",
"text": "Training a support vector machine (SVM) is usually done by ma pping the underlying optimization problem into a quadratic progr amming (QP) problem. Unfortunately, high quality QP solvers are not rea dily available, which makes research into the area of SVMs difficult for he those without a QP solver. Recently, the Sequential Minimal Optim ization algorithm (SMO) was introduced [1, 2]. SMO reduces SVM trainin g down to a series of smaller QP subproblems that have an analytical solution and, therefore, does not require a general QP solver. SMO has been shown to be very efficient for classification problems using l ear SVMs and/or sparse data sets. This work shows how SMO can be genera lized to handle regression problems.",
"title": ""
},
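For readers who simply want an SMO-style regression solver, scikit-learn's SVR (built on libsvm, which uses an SMO-type decomposition internally) conveys the end result; this is a usage sketch with arbitrary hyperparameters, not the paper's own generalization of SMO.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)   # noisy 1-D regression target

model = SVR(kernel="rbf", C=10.0, epsilon=0.05)          # epsilon-insensitive loss
model.fit(X, y)
print(model.predict([[0.5]]), np.sin(0.5))                # prediction vs. true value
```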
{
"docid": "eb64f11d3795bd2e97eb6d440169a3f0",
"text": "Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. Emotional contagion is well established in laboratory experiments, with people transferring positive and negative emotions to others. Data from a large real-world social network, collected over a 20-y period suggests that longer-lasting moods (e.g., depression, happiness) can be transferred through networks [Fowler JH, Christakis NA (2008) BMJ 337:a2338], although the results are controversial. In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks. This work also suggests that, in contrast to prevailing assumptions, in-person interaction and nonverbal cues are not strictly necessary for emotional contagion, and that the observation of others' positive experiences constitutes a positive experience for people.",
"title": ""
},
{
"docid": "1e2767ace7b4d9f8ca2a5eee21684240",
"text": "Modern data analytics applications typically process massive amounts of data on clusters of tens, hundreds, or thousands of machines to support near-real-time decisions.The quantity of data and limitations of disk and memory bandwidth often make it infeasible to deliver answers at interactive speeds. However, it has been widely observed that many applications can tolerate some degree of inaccuracy. This is especially true for exploratory queries on data, where users are satisfied with \"close-enough\" answers if they can come quickly. A popular technique for speeding up queries at the cost of accuracy is to execute each query on a sample of data, rather than the whole dataset. To ensure that the returned result is not too inaccurate, past work on approximate query processing has used statistical techniques to estimate \"error bars\" on returned results. However, existing work in the sampling-based approximate query processing (S-AQP) community has not validated whether these techniques actually generate accurate error bars for real query workloads. In fact, we find that error bar estimation often fails on real world production workloads. Fortunately, it is possible to quickly and accurately diagnose the failure of error estimation for a query. In this paper, we show that it is possible to implement a query approximation pipeline that produces approximate answers and reliable error bars at interactive speeds.",
"title": ""
},
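One standard way such systems attach error bars to a sampled aggregate is the bootstrap; the sketch below shows that basic mechanism for a mean query. It is only illustrative, and the passage's point is precisely that estimates like this can fail on real workloads and therefore need a diagnostic check.

```python
import numpy as np

def approx_mean_with_error_bar(sample, n_boot=1000, level=0.95, seed=0):
    """Approximate answer plus a bootstrap confidence interval from a uniform sample."""
    rng = np.random.default_rng(seed)
    estimate = sample.mean()
    boot = np.array([rng.choice(sample, size=sample.size, replace=True).mean()
                     for _ in range(n_boot)])
    lo, hi = np.quantile(boot, [(1 - level) / 2, (1 + level) / 2])
    return estimate, (lo, hi)

rng = np.random.default_rng(1)
population = rng.exponential(scale=5.0, size=1_000_000)   # stand-in for a large table
sample = rng.choice(population, size=2_000, replace=False)
print(approx_mean_with_error_bar(sample))                 # estimate should be near 5
```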
{
"docid": "1b4eb25d20cd2ca431c2b73588021086",
"text": "Machine rule induction was examined on a difficult categorization problem by applying a Holland-style classifier system to a complex letter recognition task. A set of 20,000 unique letter images was generated by randomly distorting pixel images of the 26 uppercase letters from 20 different commercial fonts. The parent fonts represented a full range of character types including script, italic, serif, and Gothic. The features of each of the 20,000 characters were summarized in terms of 16 primitive numerical attributes. Our research focused on machine induction techniques for generating IF-THEN classifiers in which the IF part was a list of values for each of the 16 attributes and the THEN part was the correct category, i.e., one of the 26 letters of the alphabet. We examined the effects of different procedures for encoding attributes, deriving new rules, and apportioning credit among the rules. Binary and Gray-code attribute encodings that required exact matches for rule activation were compared with integer representations that employed fuzzy matching for rule activation. Random and genetic methods for rule creation were compared with instance-based generalization. The strength/specificity method for credit apportionment was compared with a procedure we call “accuracy/utility.”",
"title": ""
},
{
"docid": "3e280f302493b9ed1caaea6937629d09",
"text": "The increasing popularity of the framing concept in media analysis goes hand in hand with significant inconsistency in its application. This paper outlines an integrated process model of framing that includes production, content, and media use perspectives. A typology of generic and issue-specific frames is proposed based on previous studies of media frames. An example is given of how generic news frames may be identified and used to understand cross-national differences in news coverage. The paper concludes with an identification of contentious issues in current framing research.",
"title": ""
},
{
"docid": "69179341377477af8ebe9013c664828c",
"text": "1. Intensive agricultural practices drive biodiversity loss with potentially drastic consequences for ecosystem services. To advance conservation and production goals, agricultural practices should be compatible with biodiversity. Traditional or less intensive systems (i.e. with fewer agrochemicals, less mechanisation, more crop species) such as shaded coffee and cacao agroforests are highlighted for their ability to provide a refuge for biodiversity and may also enhance certain ecosystem functions (i.e. predation). 2. Ants are an important predator group in tropical agroforestry systems. Generally, ant biodiversity declines with coffee and cacao intensification yet the literature lacks a summary of the known mechanisms for ant declines and how this diversity loss may affect the role of ants as predators. 3. Here, how shaded coffee and cacao agroforestry systems protect biodiversity and may preserve related ecosystem functions is discussed in the context of ants as predators. Specifically, the relationships between biodiversity and predation, links between agriculture and conservation, patterns and mechanisms for ant diversity loss with agricultural intensification, importance of ants as control agents of pests and fungal diseases, and whether ant diversity may influence the functional role of ants as predators are addressed. Furthermore, because of the importance of homopteran-tending by ants in the ecological and agricultural literature, as well as to the success of ants as predators, the costs and benefits of promoting ants in agroforests are discussed. 4. Especially where the diversity of ants and other predators is high, as in traditional agroforestry systems, both agroecosystem function and conservation goals will be advanced by biodiversity protection.",
"title": ""
},
{
"docid": "9d615d361cb1a357ae1663d1fe581d24",
"text": "We report three patients with dissecting cellulitis of the scalp. Prolonged treatment with oral isotretinoin was highly effective in all three patients. Furthermore, long-term post-treatment follow-up in two of the patients has shown a sustained therapeutic benefit.",
"title": ""
},
{
"docid": "bb6b34c125b79b515d0cac7299ed6376",
"text": "Deep learning has been successful in various domains including image recognition, speech recognition and natural language processing. However, the research on its application in graph mining is still in an early stage. Here we present Model R, a neural network model created to provide a deep learning approach to link weight prediction problem. This model extracts knowledge of nodes from known links' weights and uses this knowledge to predict unknown links' weights. We demonstrate the power of Model R through experiments and compare it with stochastic block model and its derivatives. Model R shows that deep learning can be successfully applied to link weight prediction and it outperforms stochastic block model and its derivatives by up to 73% in terms of prediction accuracy. We anticipate this new approach to provide effective solutions to more graph mining tasks.",
"title": ""
},
{
"docid": "52be5bbccc0c4a840585dccc629e2412",
"text": "A voltage scaling technique for energy-efficient operation requires an adaptive power-supply regulator to significantly reduce dynamic power consumption in synchronous digital circuits. A digitally controlled power converter that dynamically tracks circuit performance with a ring oscillator and regulates the supply voltage to the minimum required to operate at a desired frequency is presented. This paper investigates the issues involved in designing a fully digital power converter and describes a design fabricated in a MOSIS 0.8m process. A variable-frequency digital controller design takes advantage of the power savings available through adaptive supply-voltage scaling and demonstrates converter efficiency greater than 90% over a dynamic range of regulated voltage levels.",
"title": ""
},
{
"docid": "1718c817d15b9bc1ab99d359ff8d1157",
"text": "Semantic matching, which aims to determine the matching degree between two texts, is a fundamental problem for many NLP applications. Recently, deep learning approach has been applied to this problem and significant improvements have been achieved. In this paper, we propose to view the generation of the global interaction between two texts as a recursive process: i.e. the interaction of two texts at each position is a composition of the interactions between their prefixes as well as the word level interaction at the current position. Based on this idea, we propose a novel deep architecture, namely Match-SRNN, to model the recursive matching structure. Firstly, a tensor is constructed to capture the word level interactions. Then a spatial RNN is applied to integrate the local interactions recursively, with importance determined by four types of gates. Finally, the matching score is calculated based on the global interaction. We show that, after degenerated to the exact matching scenario, Match-SRNN can approximate the dynamic programming process of longest common subsequence. Thus, there exists a clear interpretation for Match-SRNN. Our experiments on two semantic matching tasks showed the effectiveness of Match-SRNN, and its ability of visualizing the learned matching structure.",
"title": ""
}
] |
scidocsrr
|
d4df91c6165cee01aaf87994bac5c1c6
|
We are not All Equal: Personalizing Models for Facial Expression Analysis with Transductive Parameter Transfer
|
[
{
"docid": "23afac6bd3ed34fc0c040581f630c7bd",
"text": "Automatic Facial Expression Recognition and Analysis, in particular FACS Action Unit (AU) detection and discrete emotion detection, has been an active topic in computer science for over two decades. Standardisation and comparability has come some way; for instance, there exist a number of commonly used facial expression databases. However, lack of a common evaluation protocol and lack of sufficient details to reproduce the reported individual results make it difficult to compare systems to each other. This in turn hinders the progress of the field. A periodical challenge in Facial Expression Recognition and Analysis would allow this comparison in a fair manner. It would clarify how far the field has come, and would allow us to identify new goals, challenges and targets. In this paper we present the first challenge in automatic recognition of facial expressions to be held during the IEEE conference on Face and Gesture Recognition 2011, in Santa Barbara, California. Two sub-challenges are defined: one on AU detection and another on discrete emotion detection. It outlines the evaluation protocol, the data used, and the results of a baseline method for the two sub-challenges.",
"title": ""
},
{
"docid": "7eec1e737523dc3b78de135fc71b058f",
"text": "Discriminative learning is challenging when examples are sets of features, and the sets vary in cardinality and lack any sort of meaningful ordering. Kernel-based classification methods can learn complex decision boundaries, but a kernel over unordered set inputs must somehow solve for correspondences epsivnerally a computationally expensive task that becomes impractical for large set sizes. We present a new fast kernel function which maps unordered feature sets to multi-resolution histograms and computes a weighted histogram intersection in this space. This \"pyramid match\" computation is linear in the number of features, and it implicitly finds correspondences based on the finest resolution histogram cell where a matched pair first appears. Since the kernel does not penalize the presence of extra features, it is robust to clutter. We show the kernel function is positive-definite, making it valid for use in learning algorithms whose optimal solutions are guaranteed only for Mercer kernels. We demonstrate our algorithm on object recognition tasks and show it to be accurate and dramatically faster than current approaches",
"title": ""
},
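To make the weighted histogram-intersection idea concrete, here is a toy pyramid match over sets of scalar features in [0, d). The real kernel works on multi-dimensional feature sets and includes normalization, so treat this only as a sketch of the level weighting and the counting of newly matched pairs.

```python
import numpy as np

def pyramid_match(x, y, d=256, levels=8):
    """Toy pyramid match kernel between two unordered sets of scalars in [0, d)."""
    prev_intersection = 0.0
    score = 0.0
    for level in range(levels + 1):
        n_bins = max(1, d // (2 ** level))                # bins double in width per level
        edges = np.linspace(0, d, n_bins + 1)
        hx, _ = np.histogram(x, bins=edges)
        hy, _ = np.histogram(y, bins=edges)
        intersection = float(np.minimum(hx, hy).sum())    # matches found at this resolution
        new_matches = intersection - prev_intersection
        score += new_matches / (2 ** level)               # coarser matches count for less
        prev_intersection = intersection
    return score
```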
{
"docid": "51e0caf419babd61615e1537545e40e8",
"text": "Past research on automatic facial expression analysis has focused mostly on the recognition of prototypic expressions of discrete emotions rather than on the analysis of dynamic changes over time, although the importance of temporal dynamics of facial expressions for interpretation of the observed facial behavior has been acknowledged for over 20 years. For instance, it has been shown that the temporal dynamics of spontaneous and volitional smiles are fundamentally different from each other. In this work, we argue that the same holds for the temporal dynamics of brow actions and show that velocity, duration, and order of occurrence of brow actions are highly relevant parameters for distinguishing posed from spontaneous brow actions. The proposed system for discrimination between volitional and spontaneous brow actions is based on automatic detection of Action Units (AUs) and their temporal segments (onset, apex, offset) produced by movements of the eyebrows. For each temporal segment of an activated AU, we compute a number of mid-level feature parameters including the maximal intensity, duration, and order of occurrence. We use Gentle Boost to select the most important of these parameters. The selected parameters are used further to train Relevance Vector Machines to determine per temporal segment of an activated AU whether the action was displayed spontaneously or volitionally. Finally, a probabilistic decision function determines the class (spontaneous or posed) for the entire brow action. When tested on 189 samples taken from three different sets of spontaneous and volitional facial data, we attain a 90.7% correct recognition rate.",
"title": ""
},
{
"docid": "9906a8e8302f4178472113d074415f25",
"text": "The usage and applications of social media have become pervasive. This has enabled an innovative paradigm to solve multimedia problems (e.g., recommendation and popularity prediction), which are otherwise hard to address purely by traditional approaches. In this paper, we investigate how to build a mutual connection among the disparate social media on the Internet, using which cross-domain media recommendation can be realized. We accomplish this goal through SocialTransfer---a novel cross-domain real-time transfer learning framework. While existing transfer learning methods do not address how to utilize the real time social streams, our proposed SocialTransfer is able to effectively learn from social streams to help multimedia applications, assuming an intermediate topic space can be built across domains. It is characterized by two key components: 1) a topic space learned in real time from social streams via Online Streaming Latent Dirichlet Allocation (OSLDA), and 2) a real-time cross-domain graph spectra analysis based transfer learning method that seamlessly incorporates learned topic models from social streams into the transfer learning framework. We present as use cases of \\emph{SocialTransfer} two video recommendation applications that otherwise can hardly be achieved by conventional media analysis techniques: 1) socialized query suggestion for video search, and 2) socialized video recommendation that features socially trending topical videos. We conduct experiments on a real-world large-scale dataset, including 10.2 million tweets and 5.7 million YouTube videos and show that \\emph{SocialTransfer} outperforms traditional learners significantly, and plays a natural and interoperable connection across video and social domains, leading to a wide variety of cross-domain applications.",
"title": ""
},
{
"docid": "8b9524be7006ffc9cb6c1db35654d4c5",
"text": "In cognitive science and neuroscience, there have been two leading models describing how humans perceive and classify facial expressions of emotion-the continuous and the categorical model. The continuous model defines each facial expression of emotion as a feature vector in a face space. This model explains, for example, how expressions of emotion can be seen at different intensities. In contrast, the categorical model consists of C classifiers, each tuned to a specific emotion category. This model explains, among other findings, why the images in a morphing sequence between a happy and a surprise face are perceived as either happy or surprise but not something in between. While the continuous model has a more difficult time justifying this latter finding, the categorical model is not as good when it comes to explaining how expressions are recognized at different intensities or modes. Most importantly, both models have problems explaining how one can recognize combinations of emotion categories such as happily surprised versus angrily surprised versus surprise. To resolve these issues, in the past several years, we have worked on a revised model that justifies the results reported in the cognitive science and neuroscience literature. This model consists of C distinct continuous spaces. Multiple (compound) emotion categories can be recognized by linearly combining these C face spaces. The dimensions of these spaces are shown to be mostly configural. According to this model, the major task for the classification of facial expressions of emotion is precise, detailed detection of facial landmarks rather than recognition. We provide an overview of the literature justifying the model, show how the resulting model can be employed to build algorithms for the recognition of facial expression of emotion, and propose research directions in machine learning and computer vision researchers to keep pushing the state of the art in these areas. We also discuss how the model can aid in studies of human perception, social interactions and disorders.",
"title": ""
}
] |
[
{
"docid": "a3735cc40727de4016ee29f6a29d578f",
"text": "By running applications and services closer to the user, edge processing provides many advantages, such as short response time and reduced network traffic. Deep-learning based algorithms provide significantly better performances than traditional algorithms in many fields but demand more resources, such as higher computational power and more memory. Hence, designing deep learning algorithms that are more suitable for resource-constrained mobile devices is vital. In this paper, we build a lightweight neural network, termed LiteNet which uses a deep learning algorithm design to diagnose arrhythmias, as an example to show how we design deep learning schemes for resource-constrained mobile devices. Compare to other deep learning models with an equivalent accuracy, LiteNet has several advantages. It requires less memory, incurs lower computational cost, and is more feasible for deployment on resource-constrained mobile devices. It can be trained faster than other neural network algorithms and requires less communication across different processing units during distributed training. It uses filters of heterogeneous size in a convolutional layer, which contributes to the generation of various feature maps. The algorithm was tested using the MIT-BIH electrocardiogram (ECG) arrhythmia database; the results showed that LiteNet outperforms comparable schemes in diagnosing arrhythmias, and in its feasibility for use at the mobile devices.",
"title": ""
},
{
"docid": "b7c04b3a73c4373d231ee5fce3fef094",
"text": "The usual approach to optimisation, of ranking algorithms for search and in many other contexts, is to obtain some training set of labeled data and optimise the algorithm on this training set, then apply the resulting model (with the chosen optimal parameter set) to the live environment. (There may be an intermediate test stage, but this does not affect the present argument.) This approach involves the choice of a metric, in this context normally some particular IR effectiveness metric. It is commonly assumed, overtly or tacitly, that if we want to optimise a particular evaluation metric M for a live environment, we should try to optimise exactly the metric M on the training set (even though in practice we often use an approximation or other substitute measure). When the assumption is stated explicitly, it is sometimes presented as self-evident. In this paper I will explore some reasons why the assumption might not be a good general rule.",
"title": ""
},
{
"docid": "a35a564a2f0e16a21e0ef5e26601eab9",
"text": "The social media revolution has created a dynamic shift in the digital marketing landscape. The voice of influence is moving from traditional marketers towards consumers through online social interactions. In this study, we focus on two types of online social interactions, namely, electronic word of mouth (eWOM) and observational learning (OL), and explore how they influence consumer purchase decisions. We also examine how receiver characteristics, consumer expertise and consumer involvement, moderate consumer purchase decision process. Analyzing panel data collected from a popular online beauty forum, we found that consumer purchase decisions are influenced by their online social interactions with others and that action-based OL information is more influential than opinion-based eWOM. Further, our results show that both consumer expertise and consumer involvement play an important moderating role, albeit in opposite direction: Whereas consumer expertise exerts a negative moderating effect, consumer involvement is found to have a positive moderating effect. The study makes important contributions to research and practice.",
"title": ""
},
{
"docid": "fbc8d5de518adc5b9ed7b6bb14c7f526",
"text": "Collection data structures have a major impact on the performance of applications, especially in languages such as Java, C#, or C++. This requires a developer to select an appropriate collection from a large set of possibilities, including different abstractions (e.g. list, map, set, queue), and multiple implementations. In Java, the default implementation of collections is provided by the standard Java Collection Framework (JCF). However, there exist a large variety of less known third-party collection libraries which can provide substantial performance benefits with minimal code changes.\n In this paper, we first study the popularity and usage patterns of collection implementations by mining a code corpus comprised of 10,986 Java projects. We use the results to evaluate and compare the performance of the six most popular alternative collection libraries in a large variety of scenarios. We found that for almost every scenario and JCF collection type there is an alternative implementation that greatly decreases memory consumption while offering comparable or even better execution time. Memory savings range from 60% to 88% thanks to reduced overhead and some operations execute 1.5x to 50x faster.\n We present our results as a comprehensive guideline to help developers in identifying the scenarios in which an alternative implementation can provide a substantial performance improvement. Finally, we discuss how some coding patterns result in substantial performance differences of collections.",
"title": ""
},
{
"docid": "84470a2a19c09a3c5d898f37f196dddf",
"text": "Breast cancer is the leading type of malignant tumor observed in women and the effective treatment depends on its early diagnosis. Diagnosis from histopathological images remains the \"gold standard\" for breast cancer. The complexity of breast cell histopathology (BCH) images makes reliable segmentation and classification hard. In this paper, an automatic quantitative image analysis technique of BCH images is proposed. For the nuclei segmentation, top-bottom hat transform is applied to enhance image quality. Wavelet decomposition and multi-scale region-growing (WDMR) are combined to obtain regions of interest (ROIs) thereby realizing precise location. A double-strategy splitting model (DSSM) containing adaptive mathematical morphology and Curvature Scale Space (CSS) corner detection method is applied to split overlapped cells for better accuracy and robustness. For the classification of cell nuclei, 4 shape-based features and 138 textural features based on color spaces are extracted. Optimal feature set is obtained by support vector machine (SVM) with chain-like agent genetic algorithm (CAGA). The proposed method was tested on 68 BCH images containing more than 3600 cells. Experimental results show that the mean segmentation sensitivity was 91.53% (74.05%) and specificity was 91.64% (74.07%). The classification performance of normal and malignant cell images can achieve 96.19% (70.31%) for accuracy, 99.05% (70.27%) for sensitivity and 93.33% (70.81%) for specificity. & 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7652f940fd21c8d1630b5ac2910e195b",
"text": "Internet Finance promote the innovation and development of financial field, at the same time, it also has its own risks, such as system security risk, capital security risk, higher liquidity risk, market selection risk, monetary policy risk. So the internet financial supervision is very necessary, regulators should pay attention to the risk characteristics of the Internet financial regulation, establish and perfect the Internet financial laws and regulations system, effort to achieve targeted risk prevention.",
"title": ""
},
{
"docid": "5f0ac7ac64ab3f1e39193a12836485ce",
"text": "A fast and accurate white blood cell (WBC) segmentation remains a challenging task, as different WBCs vary significantly in color and shape due to cell type differences, staining technique variations and the adhesion between the WBC and red blood cells. In this paper, a self-supervised learning approach, consisting of unsupervised initial segmentation and supervised segmentation refinement, is presented. The first module extracts the overall foreground region from the cell image by K-means clustering, and then generates a coarse WBC region by touching-cell splitting based on concavity analysis. The second module further uses the coarse segmentation result of the first module as automatic labels to actively train a support vector machine (SVM) classifier. Then, the trained SVM classifier is further used to classify each pixel of the image and achieve a more accurate segmentation result. To improve its segmentation accuracy, median color features representing the topological structure and a new weak edge enhancement operator (WEEO) handling fuzzy boundary are introduced. To further reduce its time cost, an efficient cluster sampling strategy is also proposed. We tested the proposed approach with two blood cell image datasets obtained under various imaging and staining conditions. The experiment results show that our approach has a superior performance of accuracy and time cost on both datasets.",
"title": ""
},
{
"docid": "3f657657a24c03038bd402498b7abddd",
"text": "We propose a system for real-time animation of eyes that can be interactively controlled in a WebGL enabled device using a small number of animation parameters, including gaze. These animation parameters can be obtained using traditional keyframed animation curves, measured from an actor's performance using off-the-shelf eye tracking methods, or estimated from the scene observed by the character, using behavioral models of human vision. We present a model of eye movement, that includes not only movement of the globes, but also of the eyelids and other soft tissues in the eye region. The model includes formation of expression wrinkles in soft tissues. To our knowledge this is the first system for real-time animation of soft tissue movement around the eyes based on gaze input.",
"title": ""
},
{
"docid": "d362b36e0c971c43856a07b7af9055f3",
"text": "s (New York: ACM), pp. 1617 – 20. MASLOW, A.H., 1954,Motivation and personality (New York: Harper). MCDONAGH, D., HEKKERT, P., VAN ERP, J. and GYI, D. (Eds), 2003, Design and Emotion: The Experience of Everyday Things (London: Taylor & Francis). MILLARD, N., HOLE, L. and CROWLE, S., 1999, Smiling through: motivation at the user interface. In Proceedings of the HCI International’99, Volume 2 (pp. 824 – 8) (Mahwah, NJ, London: Lawrence Erlbaum Associates). NORMAN, D., 2004a, Emotional design: Why we love (or hate) everyday things (New York: Basic Books). NORMAN, D., 2004b, Introduction to this special section on beauty, goodness, and usability. Human Computer Interaction, 19, pp. 311 – 18. OVERBEEKE, C.J., DJAJADININGRAT, J.P., HUMMELS, C.C.M. and WENSVEEN, S.A.G., 2002, Beauty in Usability: Forget about ease of use! In Pleasure with products: Beyond usability, W. Green and P. Jordan (Eds), pp. 9 – 18 (London: Taylor & Francis). 96 M. Hassenzahl and N. Tractinsky D ow nl oa de d by [ M as se y U ni ve rs ity L ib ra ry ] at 2 1: 34 2 3 Ju ly 2 01 1 PICARD, R., 1997, Affective computing (Cambridge, MA: MIT Press). PICARD, R. and KLEIN, J., 2002, Computers that recognise and respond to user emotion: theoretical and practical implications. Interacting with Computers, 14, pp. 141 – 69. POSTREL, V., 2002, The substance of style (New York: Harper Collins). SELIGMAN, M.E.P. and CSIKSZENTMIHALYI, M., 2000, Positive Psychology: An Introduction. American Psychologist, 55, pp. 5 – 14. SHELDON, K.M., ELLIOT, A.J., KIM, Y. and KASSER, T., 2001, What is satisfying about satisfying events? Testing 10 candidate psychological needs. Journal of Personality and Social Psychology, 80, pp. 325 – 39. SINGH, S.N. and DALAL, N.P., 1999, Web home pages as advertisements. Communications of the ACM, 42, pp. 91 – 8. SUH, E., DIENER, E. and FUJITA, F., 1996, Events and subjective well-being: Only recent events matter. Journal of Personality and Social Psychology,",
"title": ""
},
{
"docid": "4e142571b30a66dd9ff55dc0d28282cf",
"text": "Test models are needed to evaluate and benchmark algorithms and tools in model driven development. Most model generators randomly apply graph operations on graph representations of models. This approach leads to test models of poor quality. Some approaches do not guarantee the basic syntactic correctness of the created models. Even if so, it is almost impossible to guarantee, or even control, the creation of complex structures, e.g. a subgraph which implements an association between two classes. Such a subgraph consists of an association node, two association end nodes, and several edges, and is normally created by one user command. This paper presents the SiDiff Model Generator, which can generate models, or sets of models, which are syntactically correct, contain complex structures, and exhibit defined statistical characteristics.",
"title": ""
},
{
"docid": "a926341e8b663de6c412b8e3a61ee171",
"text": "— Studies within the EHEA framework include the acquisition of skills such as the ability to learn autonomously, which requires students to devote much of their time to individual and group work to reinforce and further complement the knowledge acquired in the classroom. In order to consolidate the results obtained from classroom activities, lecturers must develop tools to encourage learning and facilitate the process of independent learning. The aim of this work is to present the use of virtual laboratories based on Easy Java Simulations to assist in the understanding and testing of electrical machines. con los usuarios integrándose fácilmente en plataformas de e-aprendizaje. Para nuestra aplicación hemos escogido el Java Ejs (Easy Java Simulations), ya que es una herramienta de software gratuita, diseñada para el desarrollo de laboratorios virtuales interactivos, dispone de elementos visuales parametrizables",
"title": ""
},
{
"docid": "e8ff86bd701792e6eb5f2fa8fcc2e028",
"text": "Memory layout transformations via data reorganization are very common operations, which occur as a part of the computation or as a performance optimization in data-intensive applications. These operations require inefficient memory access patterns and roundtrip data movement through the memory hierarchy, failing to utilize the performance and energy-efficiency potentials of the memory subsystem. This paper proposes a high-bandwidth and energy-efficient hardware accelerated memory layout transform (HAMLeT) system integrated within a 3D-stacked DRAM. HAMLeT uses a low-overhead hardware that exploits the existing infrastructure in the logic layer of 3D-stacked DRAMs, and does not require any changes to the DRAM layers, yet it can fully exploit the locality and parallelism within the stack by implementing efficient layout transform algorithms. We analyze matrix layout transform operations (such as matrix transpose, matrix blocking and 3D matrix rotation) and demonstrate that HAMLeT can achieve close to peak system utilization, offering up to an order of magnitude performance improvement compared to the CPU and GPU memory subsystems which does not employ HAMLeT.",
"title": ""
},
{
"docid": "247eb1c32cf3fd2e7a925d54cb5735da",
"text": "Several applications in machine learning and machine-to-human interactions tolerate small deviations in their computations. Digital systems can exploit this fault-tolerance to increase their energy-efficiency, which is crucial in embedded applications. Hence, this paper introduces a new means of Approximate Computing: Dynamic-Voltage-Accuracy-Frequency-Scaling (DVAFS), a circuit-level technique enabling a dynamic trade-off of energy versus computational accuracy that outperforms other Approximate Computing techniques. The usage and applicability of DVAFS is illustrated in the context of Deep Neural Networks, the current state-of-the-art in advanced recognition. These networks are typically executed on CPU's or GPU's due to their high computational complexity, making their deployment on battery-constrained platforms only possible through wireless connections with the cloud. This work shows how deep learning can be brought to IoT devices by running every layer of the network at its optimal computational accuracy. Finally, we demonstrate a DVAFS processor for Convolutional Neural Networks, achieving efficiencies of multiple TOPS/W.",
"title": ""
},
{
"docid": "2aecaa95df956d905a39a7394a4b08ad",
"text": "Superpixels provide an efficient low/mid-level representation of image data, which greatly reduces the number of image primitives for subsequent vision tasks. Existing superpixel algorithms are not differentiable, making them difficult to integrate into otherwise end-to-end trainable deep neural networks. We develop a new differentiable model for superpixel sampling that leverages deep networks for learning superpixel segmentation. The resulting Superpixel Sampling Network (SSN) is end-to-end trainable, which allows learning task-specific superpixels with flexible loss functions and has fast runtime. Extensive experimental analysis indicates that SSNs not only outperform existing superpixel algorithms on traditional segmentation benchmarks, but can also learn superpixels for other tasks. In addition, SSNs can be easily integrated into downstream deep networks resulting in performance improvements.",
"title": ""
},
{
"docid": "48d6e8658a2b8b13510426a6da9a5095",
"text": "A double-discone antenna for an ultra-wideband frequency scan is presented. An exquisite assembly of two inverse-feeding discone antennas shows a 30:1 broad bandwidth with VSWR below 2.5 and an omnidirectional radiation pattern. These features make the proposed antenna very suitable for both the UWB system antenna and the wideband scan antenna. © 2004 Wiley Periodicals, Inc. Microwave Opt Technol Lett 42: 113–115, 2004; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.20224",
"title": ""
},
{
"docid": "68e137f9c722f833a7fdbc8032fc58be",
"text": "BACKGROUND\nChronic Obstructive Pulmonary Disease (COPD) has been a leading cause of morbidity and mortality worldwide, over the years. In 1995, the implementation of a respiratory function survey seemed to be an adequate way to draw attention to neglected respiratory symptoms and increase the awareness of spirometry surveys. By 2002 there were new consensual guidelines in place and the awareness that prevalence of COPD depended on the criteria used for airway obstruction definition. The purpose of this study is to revisit the two studies and to turn public some of the data and respective methodologies.\n\n\nMETHODS\nFrom Pneumobil study database of 12,684 subjects, only the individuals with 40+ years old (n = 9.061) were selected. The 2002 study included a randomized representative sample of 1,384 individuals with 35-69 years old.\n\n\nRESULTS\nThe prevalence of COPD was 8.96% in Pneumobil and 5.34% in the 2002 study. In both studies, presence of COPD was greater in males and there was a positive association between presence of COPD and older age groups. Smokers and ex-smokers showed a higher proportion of cases of COPD.\n\n\nCONCLUSIONS\nPrevalence in Portugal is lower than in other European countries. This may be related to lower smokers' prevalence. Globally, the most important risk factors associated with COPD were age over 60 years, male gender and smoking exposure. All aspects and limitations regarding different recruitment methodologies and different criteria for defining COPD cases highlight the need of a standardized method to evaluate COPD prevalence and associated risks factors, whose results can be compared across countries, as it is the case of BOLD project.",
"title": ""
},
{
"docid": "c8a2804a0c1a32956d1d850daa57bfff",
"text": "BACKGROUND\nData for the causes of maternal deaths are needed to inform policies to improve maternal health. We developed and analysed global, regional, and subregional estimates of the causes of maternal death during 2003-09, with a novel method, updating the previous WHO systematic review.\n\n\nMETHODS\nWe searched specialised and general bibliographic databases for articles published between between Jan 1, 2003, and Dec 31, 2012, for research data, with no language restrictions, and the WHO mortality database for vital registration data. On the basis of prespecified inclusion criteria, we analysed causes of maternal death from datasets. We aggregated country level estimates to report estimates of causes of death by Millennium Development Goal regions and worldwide, for main and subcauses of death categories with a Bayesian hierarchical model.\n\n\nFINDINGS\nWe identified 23 eligible studies (published 2003-12). We included 417 datasets from 115 countries comprising 60 799 deaths in the analysis. About 73% (1 771 000 of 2 443 000) of all maternal deaths between 2003 and 2009 were due to direct obstetric causes and deaths due to indirect causes accounted for 27·5% (672 000, 95% UI 19·7-37·5) of all deaths. Haemorrhage accounted for 27·1% (661 000, 19·9-36·2), hypertensive disorders 14·0% (343 000, 11·1-17·4), and sepsis 10·7% (261 000, 5·9-18·6) of maternal deaths. The rest of deaths were due to abortion (7·9% [193 000], 4·7-13·2), embolism (3·2% [78 000], 1·8-5·5), and all other direct causes of death (9·6% [235 000], 6·5-14·3). Regional estimates varied substantially.\n\n\nINTERPRETATION\nBetween 2003 and 2009, haemorrhage, hypertensive disorders, and sepsis were responsible for more than half of maternal deaths worldwide. More than a quarter of deaths were attributable to indirect causes. These analyses should inform the prioritisation of health policies, programmes, and funding to reduce maternal deaths at regional and global levels. Further efforts are needed to improve the availability and quality of data related to maternal mortality.",
"title": ""
},
{
"docid": "48ce635355fbb5ffb7d6166948b4f135",
"text": "Computational generation of literary artifacts very often resorts to template-like schemas that can be instantiated into complex structures. With this view in mind, the present paper reviews a number of existing attempts to provide an elementary set of patterns for basic plots. An attempt is made to formulate these descriptions of possible plots in terms of character functions, an abstraction of plot-bearing elements of a story originally formulated by Vladimir Propp. These character functions act as the building blocks of the Propper system, an existing framework for computational story generation. The paper explores the set of extensions required to the original set of character functions to allow for a basic representation of the analysed schemata, and a solution for automatic generation of stories based on this formulation of the narrative schemas. This solution uncovers important insights on the relative expressive power of the representation of narrative in terms of character functions, and their impact on the generative potential of the framework is discussed. 1998 ACM Subject Classification F.4.1 Knowledge Representation Formalisms and Methods",
"title": ""
},
{
"docid": "2e27078279131bf08b3f1cb060586599",
"text": "The QTW VTOL UAV, which features tandem tilt wings with propellers mounted at the mid-span of each wing, is one of the most promising UAV configurations, having both VTOL capability and high cruise performance. A six-degree-of-freedom dynamic simulation model covering the full range of the QTW flight envelope was developed and a flight control system including a transition schedule and a stability and control augmentation system (SCAS) was designed. The flight control system was installed in a small prototype QTW and a full transition flight test including vertical takeoff, accelerating transition, cruise, decelerating transition and hover landing was successfully accomplished.",
"title": ""
},
{
"docid": "7a77d8d381ec543033626be54119358a",
"text": "The advent of continuous glucose monitoring (CGM) is a significant stride forward in our ability to better understand the glycemic status of our patients. Current clinical practice employs two forms of CGM: professional (retrospective or \"masked\") and personal (real-time) to evaluate and/or monitor glycemic control. Most studies using professional and personal CGM have been done in those with type 1 diabetes (T1D). However, this technology is agnostic to the type of diabetes and can also be used in those with type 2 diabetes (T2D). The value of professional CGM in T2D for physicians, patients, and researchers is derived from its ability to: (1) to discover previously unknown hyper- and hypoglycemia (silent and symptomatic); (2) measure glycemic control directly rather than through the surrogate metric of hemoglobin A1C (HbA1C) permitting the observation of a wide variety of metrics that include glycemic variability, the percent of time within, below and above target glucose levels, the severity of hypo- and hyperglycemia throughout the day and night; (3) provide actionable information for healthcare providers derived by the CGM report; (4) better manage patients on hemodialysis; and (5) effectively and efficiently analyze glycemic effects of new interventions whether they be pharmaceuticals (duration of action, pharmacodynamics, safety, and efficacy), devices, or psycho-educational. Personal CGM has also been successfully used in a small number of studies as a behavior modification tool in those with T2D. This comprehensive review describes the differences between professional and personal CGM and the evidence for the use of each form of CGM in T2D. Finally, the opinions of key professional societies on the use of CGM in T2D are presented.",
"title": ""
}
] |
scidocsrr
|
e27e110d335fa20ba648100bd1c1bf9a
|
Mechanisms of the Anatomically Correct Testbed Hand
|
[
{
"docid": "f3ee129af2a833f8775c5366c188d71c",
"text": "Strong motivation for developing new prosthetic hand devices is provided by the fact that low functionality and controllability—in addition to poor cosmetic appearance—are the most important reasons why amputees do not regularly use their prosthetic hands. This paper presents the design of the CyberHand, a cybernetic anthropomorphic hand intended to provide amputees with functional hand replacement. Its design was bio-inspired in terms of its modular architecture, its physical appearance, kinematics, sensorization, and actuation, and its multilevel control system. Its underactuated mechanisms allow separate control of each digit as well as thumb–finger opposition and, accordingly, can generate a multitude of grasps. Its sensory system was designed to provide proprioceptive information as well as to emulate fundamental functional properties of human tactile mechanoreceptors of specific importance for grasp-and-hold tasks. The CyberHand control system presumes just a few efferent and afferent channels and was divided in two main layers: a high-level control that interprets the user’s intention (grasp selection and required force level) and can provide pertinent sensory feedback and a low-level control responsible for actuating specific grasps and applying the desired total force by taking advantage of the intelligent mechanics. The grasps made available by the high-level controller include those fundamental for activities of daily living: cylindrical, spherical, tridigital (tripod), and lateral grasps. The modular and flexible design of the CyberHand makes it suitable for incremental development of sensorization, interfacing, and control strategies and, as such, it will be a useful tool not only for clinical research but also for addressing neuroscientific hypotheses regarding sensorimotor control.",
"title": ""
},
{
"docid": "c8ae8e62b3ede653cb9088207086b37b",
"text": "To measure the motions of the trapeziometacarpal joint of the thumb quantitatively, a roentgenographic method was developed and tested using T-shaped metal markers, a special cassette-holder, and biplane roentgenograms. Two experiments were performed. In the first one, the metal markers were fixed to the trapezium and third metacarpal in ten cadaver specimens, and a fixed spatial relationship between the trapezium and the third metacarpal was identified roentgenographically. This relationship was that the reference axes of the trapezium were aligned at median angles of 48 degrees of flexion, 38 degrees of abduction, and 80 degrees of pronation with reference to the reference axes of the third metacarpal. In the second experiment, in the dominant hand of nine male and ten female subjects (average age, twenty-six years) T-shaped markers were fixed to the skin overlying the third metacarpal and the metacarpal and phalanges of the thumb. Using the same roentgenographic technique and coordinate systems employed in the first study, the average total motions of the trapeziometacarpal joint (determined as motions of the first metacarpal with reference to the third metacarpal) were 53 degrees of flexion-extension, 42 degrees of abduction-adduction, and 17 degrees of axial rotation (pronation-supination). In addition, six functional positions of the thumb were studied: rest, flexion, extension, abduction, tip pinch, and grasp. A position of adduction and flexion of the trapeziometacarpal joint was most common during thumb function, and both the trapeziometacarpal and metacarpophalangeal joints contributed to rotation of the thumb.",
"title": ""
}
] |
[
{
"docid": "1fe8a60595463038046be38b747565e3",
"text": "Recent WiFi standards use Channel State Information (CSI) feedback for better MIMO and rate adaptation. CSI provides detailed information about current channel conditions for different subcarriers and spatial streams. In this paper, we show that CSI feedback from a client to the AP can be used to recognize different fine-grained motions of the client. We find that CSI can not only identify if the client is in motion or not, but also classify different types of motions. To this end, we propose APsense, a framework that uses CSI to estimate the sensor patterns of the client. It is observed that client's sensor (e.g. accelerometer) values are correlated to CSI values available at the AP. We show that using simple machine learning classifiers, APsense can classify different motions with accuracy as high as 90%.",
"title": ""
},
{
"docid": "e0b1e38b08b6fb098808585a5a3c8753",
"text": "The decade since the Human Genome Project ended has witnessed a remarkable sequencing technology explosion that has permitted a multitude of questions about the genome to be asked and answered, at unprecedented speed and resolution. Here I present examples of how the resulting information has both enhanced our knowledge and expanded the impact of the genome on biomedical research. New sequencing technologies have also introduced exciting new areas of biological endeavour. The continuing upward trajectory of sequencing technology development is enabling clinical applications that are aimed at improving medical diagnosis and treatment.",
"title": ""
},
{
"docid": "6d47a7579d6e833cbac403381652e140",
"text": "In response to the growing gap between memory access time and processor speed, DRAM manufacturers have created several new DRAM architectures. This paper presents a simulation-based performance study of a representative group, each evaluated in a small system organization. These small-system organizations correspond to workstation-class computers and use on the order of 10 DRAM chips. The study covers Fast Page Mode, Extended Data Out, Synchronous, Enhanced Synchronous, Synchronous Link, Rambus, and Direct Rambus designs. Our simulations reveal several things: (a) current advanced DRAM technologies are attacking the memory bandwidth problem but not the latency problem; (b) bus transmission speed will soon become a primary factor limiting memory-system performance; (c) the post-L2 address stream still contains significant locality, though it varies from application to application; and (d) as we move to wider buses, row access time becomes more prominent, making it important to investigate techniques to exploit the available locality to decrease access time.",
"title": ""
},
{
"docid": "cb47702dd86dab0fd4288db7c8836c82",
"text": "This paper presents the design and simulation results of the ultrafast LVDS I/O interface designed in 40 nm CMOS process. The LVDS transmitters and receivers were designed to support a data transfer of a multichannel Integrated Circuit dedicated for readout of hybrid pixel semiconductor detectors used for X-ray imaging applications. The transmitter is based on the current switching bridge architecture while the receiver is built of the inverting comparator with hysteresis. The LVDS I/O interface is supplied from 2.5 V and 0.9 V supply voltage. The transmitter and receiver occupy respectively 0.1 mm2 and 0.009 mm2 of chip area. The static/dynamic power consumption of the transmitter and receiver are respectively equal to 17.9/26.4 mW and 7.1/12.1 mW.",
"title": ""
},
{
"docid": "3b8e716e658176cebfbdb313c8cb22ac",
"text": "To realize the vision of Internet-of-Things (IoT), numerous IoT devices have been developed for improving daily lives, in which smart home devices are among the most popular ones. Smart locks rely on smartphones to ease the burden of physical key management and keep tracking the door opening/close status, the security of which have aroused great interests from the security community. As security is of utmost importance for the IoT environment, we try to investigate the security of IoT by examining smart lock security. Specifically, we focus on analyzing the security of August smart lock. The threat models are illustrated for attacking August smart lock. We then demonstrate several practical attacks based on the threat models toward August smart lock including handshake key leakage, owner account leakage, personal information leakage, and denial-of-service (DoS) attacks. We also propose the corresponding defense methods to counteract these attacks.",
"title": ""
},
{
"docid": "54ba46965571a60e073dfab95ede656e",
"text": "ÐThis paper presents a fair decentralized mutual exclusion algorithm for distributed systems in which processes communicate by asynchronous message passing. The algorithm requires between N ÿ 1 and 2
N ÿ 1 messages per critical section access, where N is the number of processes in the system. The exact message complexity can be expressed as a deterministic function of concurrency in the computation. The algorithm does not introduce any other overheads over Lamport's and RicartAgrawala's algorithms, which require 3
N ÿ 1 and 2
N ÿ 1 messages, respectively, per critical section access and are the only other decentralized algorithms that allow mutual exclusion access in the order of the timestamps of requests. Index TermsÐAlgorithm, concurrency, distributed system, fairness, mutual exclusion, synchronization.",
"title": ""
},
{
"docid": "715e5655651ed879f2439ed86e860bc9",
"text": "This paper presents a new permanent-magnet gear based on the cycloid gearing principle, which normally is characterized by an extreme torque density and a very high gearing ratio. An initial design of the proposed magnetic gear was designed, analyzed, and optimized with an analytical model regarding torque density. The results were promising as compared to other high-performance magnetic-gear designs. A test model was constructed to verify the analytical model.",
"title": ""
},
{
"docid": "9d3e0a8af748c9addf598a27f414e0b2",
"text": "Although insecticide resistance is a widespread problem for most insect pests, frequently the assessment of resistance occurs over a limited geographic range. Herein, we report the first widespread survey of insecticide resistance in the USA ever undertaken for the house fly, Musca domestica, a major pest in animal production facilities. The levels of resistance to six different insecticides were determined (using discriminating concentration bioassays) in 10 collections of house flies from dairies in nine different states. In addition, the frequencies of Vssc and CYP6D1 alleles that confer resistance to pyrethroid insecticides were determined for each fly population. Levels of resistance to the six insecticides varied among states and insecticides. Resistance to permethrin was highest overall and most consistent across the states. Resistance to methomyl was relatively consistent, with 65-91% survival in nine of the ten collections. In contrast, resistance to cyfluthrin and pyrethrins + piperonyl butoxide varied considerably (2.9-76% survival). Resistance to imidacloprid was overall modest and showed no signs of increasing relative to collections made in 2004, despite increasing use of this insecticide. The frequency of Vssc alleles that confer pyrethroid resistance was variable between locations. The highest frequencies of kdr, kdr-his and super-kdr were found in Minnesota, North Carolina and Kansas, respectively. In contrast, the New Mexico population had the highest frequency (0.67) of the susceptible allele. The implications of these results to resistance management and to the understanding of the evolution of insecticide resistance are discussed.",
"title": ""
},
{
"docid": "0b0cba5d582cb21d2519a6138701c99b",
"text": "Web page loads are slow due to intrinsic inefficiencies in the page load process. Our study shows that the inefficiencies are attributable not only to the contents and structure of the Web pages (e.g., three-fourths of the CSS resources are not used during the initial page load) but also the way that pages are loaded (e.g., 15% of page load times are spent waiting for parsing-blocking resources to be loaded). To address these inefficiencies, this paper presents Shandian (which means lightening in Chinese) that restructures the page load process to speed up page loads. Shandian exercises control over what portions of the page gets communicated and in what order so that the initial page load is optimized. Unlike previous techniques, Shandian works on demand without requiring a training period, is compatible with existing latency-reducing techniques (e.g., caching and CDNs), supports security features that enforce same-origin policies, and does not impose additional privacy risks. Our evaluations show that Shandian reduces page load times by more than half for both mobile phones and desktops while incurring modest overheads to data usage.",
"title": ""
},
{
"docid": "900448785a5aa402165406daff206c93",
"text": "Electrospun membranes are gaining interest for use in membrane distillation (MD) due to their high porosity and interconnected pore structure; however, they are still susceptible to wetting during MD operation because of their relatively low liquid entry pressure (LEP). In this study, post-treatment had been applied to improve the LEP, as well as its permeation and salt rejection efficiency. The post-treatment included two continuous procedures: heat-pressing and annealing. In this study, annealing was applied on the membranes that had been heat-pressed. It was found that annealing improved the MD performance as the average flux reached 35 L/m2·h or LMH (>10% improvement of the ones without annealing) while still maintaining 99.99% salt rejection. Further tests on LEP, contact angle, and pore size distribution explain the improvement due to annealing well. Fourier transform infrared spectroscopy and X-ray diffraction analyses of the membranes showed that there was an increase in the crystallinity of the polyvinylidene fluoride-co-hexafluoropropylene (PVDF-HFP) membrane; also, peaks indicating the α phase of polyvinylidene fluoride (PVDF) became noticeable after annealing, indicating some β and amorphous states of polymer were converted into the α phase. The changes were favorable for membrane distillation as the non-polar α phase of PVDF reduces the dipolar attraction force between the membrane and water molecules, and the increase in crystallinity would result in higher thermal stability. The present results indicate the positive effect of the heat-press followed by an annealing post-treatment on the membrane characteristics and MD performance.",
"title": ""
},
{
"docid": "ebb6f9ab7918edc2b0746ee8ee244f4a",
"text": "P u b l i s h e d b y t h e I E E E C o m p u t e r S o c i e t y Pervasive Computing: A Paradigm for the 21st Century In 1991, Mark Weiser, then chief technology officer for Xerox’s Palo Alto Research Center, described a vision for 21st century computing that countered the ubiquity of personal computers. “The most profound technologies are those that disappear,” he wrote. “They weave themselves into the fabric of everyday life until they are indistinguishable from it.” Computing has since mobilized itself beyond the desktop PC. Significant hardware developments—as well as advances in location sensors, wireless communications, and global networking—have advanced Weiser’s vision toward technical and economic viability. Moreover, the Web has diffused some of the psychological barriers that he also thought would have to disappear. However, the integration of information technology into our lives still falls short of Weiser’s concluding vision:",
"title": ""
},
{
"docid": "0e4c0ffb4c6f036fc872b2a5fd9eeaf4",
"text": "This paper proposes a fast and simple mapping method for lens distortion correction. Typical correction methods use a distortion model defined on distorted coordinates. They need inverse mapping for distortion correction. Inverse mapping of distortion equations is not trivial; approximation must be taken for real time applications. We propose a distortion model defined on ideal undistorted coordinates, so that we can reduce computation time and maintain the high accuracy. We verify accuracy and efficiency of the proposed method from experiments.",
"title": ""
},
{
"docid": "5745ed6c874867ad2de84b040e40d336",
"text": "The chemokine (C-X-C motif) ligand 1 (CXCL1) regulates tumor-stromal interactions and tumor invasion. However, the precise role of CXCL1 on gastric tumor growth and patient survival remains unclear. In the current study, protein expressions of CXCL1, vascular endothelial growth factor (VEGF) and phospho-signal transducer and activator of transcription 3 (p-STAT3) in primary tumor tissues from 98 gastric cancer patients were measured by immunohistochemistry (IHC). CXCL1 overexpressed cell lines were constructed using Lipofectamine 2000 reagent or lentiviral vectors. Effects of CXCL1 on VEGF expression and local tumor growth were evaluated in vitro and in vivo. CXCL1 was positively expressed in 41.4% of patients and correlated with VEGF and p-STAT3 expression. Higher CXCL1 expression was associated with advanced tumor stage and poorer prognosis. In vitro studies in AGS and SGC-7901 cells revealed that CXCL1 increased cell migration but had little effect on cell proliferation. CXCL1 activated VEGF signaling in gastric cancer (GC) cells, which was inhibited by STAT3 or chemokine (C-X-C motif) receptor 2 (CXCR2) blockade. CXCL1 also increased p-STAT3 expression in GC cells. In vivo, CXCL1 increased xenograft local tumor growth, phospho-Janus kinase 2 (p-JAK2), p-STAT3 levels, VEGF expression and microvessel density. These results suggested that CXCL1 increased local tumor growth through activation of VEGF signaling which may have mechanistic implications for the observed inferior GC survival. The CXCL1/CXCR2 pathway might be potent to improve anti-angiogenic therapy for gastric cancer.",
"title": ""
},
{
"docid": "586e5d9ec9253500b12467df3c31ad06",
"text": "The use of business intelligence (BI) is common among corporations in the private sector to improve business decision making and create insights for competitive advantage. Increasingly, emergency management agencies are using tools and processes similar to BI systems. With a more thorough understanding of the principles of BI and its supporting technologies, and a careful comparison to the business model of emergency management, this paper seeks to provide insights into how lessons from the private sector can contribute to the development of effective and efficient emergency management BI utilisation.",
"title": ""
},
{
"docid": "0f14a27b09fb02a55090b97d28ad200b",
"text": "BACKGROUND\nThe Yellow Cat Member of the Cedar Mountain Formation (Early Cretaceous, Barremian?--Aptian) of Utah has yielded a rich theropod fauna, including the coelurosaur Nedcolbertia justinhofmanni, the therizinosauroid Falcarius utahensis, the troodontid Geminiraptor suarezarum, and the dromaeosaurid Utahraptor ostrommaysorum. Recent excavation has uncovered three new dromaeosaurid specimens. One specimen, which we designate the holotype of the new genus and species Yurgovuchia doellingi, is represented by a partial axial skeleton and a partial left pubis. A second specimen consists of a right pubis and a possibly associated radius. The third specimen consists of a tail skeleton that is unique among known Cedar Mountain dromaeosaurids.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nY. doellingi resembles Utahraptor ostrommaysorum in that its caudal prezygapophyses are elongated but not to the degree present in most dromaeosaurids. The specimen represented by the right pubis exhibits a pronounced pubic tubercle, a velociraptorine trait that is absent in Y. doellingi. The specimen represented by the tail skeleton exhibits the extreme elongation of the caudal prezygapophyses that is typical of most dromaeosaurids. Here we perform a phylogenetic analysis to determine the phylogenetic position of Y. doellingi. Using the resulting phylogeny as a framework, we trace changes in character states of the tail across Coelurosauria to elucidate the evolution of the dromaeosaurid tail.\n\n\nCONCLUSIONS/SIGNIFICANCE\nThe new specimens add to the known diversity of Dromaeosauridae and to the known diversity within the Yellow Cat paleofauna. Phylogenetic analysis places Y. doellingi in a clade with Utahraptor, Achillobator, and Dromaeosaurus. Character state distribution indicates that the presence of intermediate-length caudal prezygapophyses in that clade is not an evolutionarily precursor to extreme prezygapophyseal elongation but represents a secondary shortening of caudal prezygapophyses. It appears to represent part of a trend within Dromaeosauridae that couples an increase in tail flexibility with increasing size.",
"title": ""
},
{
"docid": "4d400d084e9eb14fe44b9a9b26a0e739",
"text": "Various image editing tools make our pictures more attractive, and at the same time, evoke different emotional responses. With powerful and easy-to-use imaging applications, capturing, editing and then sharing pictures have become daily life for many. This paper investigates the influence of several image manipulations on evoked emotions for different types of images. To do so, various types of images clustered in different categories, were collected from Instagram and subjective evaluations were conducted via crowdsourcing to gather the emotional responses on different manipulations as perceived by subjects. Evaluation results show that certain image manipulations can induce different evoked emotions on transformed pictures when compared to the original ones. However, such changes in image emotions due to manipulation are highly content dependent. Then, we conducted a machine learning based experiment, in attempt to predict the emotions of a manipulated image given its original version and the desired manipulation method. Experimental results present a promising performance of such a prediction model, which could pave the road to automatic selection or recommendation of image editing tools that can efficiently transform or emphasize desired emotions in pictures. Introduction Thanks to wide spread popularity of smart mobile devices with high-resolution cameras, as well as user-friendly imaging and social networking applications, taking pictures, then editing and sharing, have become part of everyday life for many. Photo sharing has been used as a way to share not only stories but also current moods with friends, family and public at large. Modern photo sharing applications equipped with advanced and easy-touse image editing tools, such as Instagram, provide consumers with very convenient solutions to make their pictures more attractive, and more importantly, to arouse stronger emotional resonances. Different types of image content generate different emotions. Using different photographic techniques, visual filters or editing tools, pictures of the same scene can also evoke different emotions. Motivated by these facts, we attempt to change an original picture’s evoked emotion and transform it to new emotions (stronger, weaker, or completely different) by image manipulation. To achieve this goal, we first need to understand the emotional responses evoked by different image manipulations when applied to pictures. This paper investigates the influence of image manipulations on evoked emotions, and tries to find the potential pattern between image manipulation and generated emotions. To do so, we conducted subjective experiments based on online crowdsourcing. Different types of images were collected from Instagram, and manipulated by a number of typical image editing tools. Crowdsourcing subjects were then exposed to each, and questioned regarding the emotions pictures induced on them. Using the crowdsourced data as groundtruth, we trained and evaluated a model based on machine learning for predicting evoked emotions, taking an original image and desired manipulation as input. The rest of the paper is structured as follows. The next section introduces the related works by other researchers, followed by a section describing the data collection and user study. Then we analyze and interpret emotional responses obtained from subjects, and report the experiments of emotion prediction upon image manipulation in the followed two sections. 
Finally, the last section concludes the paper and discusses future work. Prior Work Image aesthetic quality estimation, emotion recognition and classification have been largely studied in the field of computer vision [1, 2, 3, 4, 5]. Most previous works use image features for affective image classification and emotion prediction [2, 3, 6, 7, 5]. Such features include color, texture, composition, edge and semantic information. A few researchers have worked on transforming image emotions by editing images. In [8], Wang et al. associate color themes with emotion keywords depending on art theory and transform the color theme of an input image to the desired one. However, in their work, only a few cartoon-like images are used. Peng et al. [9] propose a framework to change an image’s emotion by randomly sampling from a set of possible target images, but only show a few examples. Jun et al. [10] show that changing brightness and contrast of an image can affect the pleasure and excitement felt by observers. However, only a limited variation of an input image can be produced by changing the two features. Peng et al. [11] change the color tone and texture related features of an image to transfer the evoked emotion distribution, with experiments conducted on only limited types of image content. Evaluating image’s evoked emotions after image manipulation is not a trivial task. Many well-established image manipulation and editing tools have been widely used in online photo sharing and social networks, as ways for users to enhance their image content either to draw better attention or to evoke stronger emotions. Popular image editing tools include image enhancement [12], grayscale conversion, vintage processing, cartoonizing [13], and more recently addition of stickers1 [14]. However, most image manipulation methods have been studied merely from the perspective of image processing and not so much on their emotional impact. 1https://www.facebook.com/help/1597631423793468 (a) Original (b) Cartoon (c) Emoji (d) Enhance (e) Halo (f) Gray (g) Grunge (h) Old paper Figure 1. Example image manipulated by different methods. Several affective image databases have been created in previous works, including artistic photos or abstract paintings used in [2], International Affective Picture System (IAPS) [15], The Geneva affective picture database (GAPED) [16] and Emotion6 [11]. In our research, we are more interested in the emotions of everyday photographs, especially those images that are widely shared by online users. Unfortunately, most existing affective image datasets contain either extremely emotional images, or images without much natural high-level semantic features like human face. All those types of images do not fit our requirements. Therefore we decided to collect our own dataset using Instagram, one of the most popular online photo sharing services. To measure emotions, different types of models have been designed by psychologists. One of the most popular is the valence-arousal (VA) model (proposed by Russell [17]), characterizing emotions in two dimensions, where valence measures attractiveness in a scale from positive to negative, while arousal indicates the degree of excitement or stimulation. In terms of categorization of emotions, Ekman’s six basic emotions (anger, disgust, fear, joy, sadness and surprise) [18] are widely known. In our work, we used both models similar to works in [16, 11]. 
Image Dataset and User Study This section describes in detail the image dataset creation and crowdsourcing experiment. Image Collection and Processing We collected images from Instagram. According to a previous study by Hu et al. [19], images shared within Instagram can be classified into the following eight basic categories in terms of their content: Friends, Food, Gadget, Captioned photo, Pet, Activity, Selfie and Fashion. Therefore, we collected image dataset by searching for the eight category keywords or their synonyms via Instagram #tag. This was mainly motivated in order to have a wider variety of image content. At the end 13 color images were selected manually for each category resulting in 104 images in total. All selected images have the same size of 640×640 pixels. For each image, seven different manipulations were applied to create different visual effects. We will refer to these manipulations as the following names: • Cartoon: Applies a cartoon effect to an image. • Emoji: Adds an Emoji on top-right corner of an image. • Enhance: Applies brightness/contrast/colorization enhancement on an image via LAB colorspace. • Halo: Applies a circular halo effect to an image. • Gray: Converts an image to gray scale. • Grunge: Applies a classic vintage effect with a grunge background to an image. • Old paper: Applies another heritage style vintage effect with an old paper background to an image. The reason of selecting the seven particular manipulations is that the changes of an image caused by these operations cover different aspects of image information, e.g. color, texture, composition, and higher-level image semantics. The emoji sticker “Tear of Joy” was selected as it has been in the top 10 most popular emojis on Emojipedia for all of 20152, and the emotion it expresses is not that obvious. The seven manipulations were implemented by using ImageMagick software3. An example image processed by the 7 different manipulations is illustrated in Figure 1. Summing up, a grand total of 832 (104× 8) images were generated, including the original versions of each image. The image dataset is publicly accessible at http://mmspg.epfl.ch/ emotion-image-datasets. User Study We used Microworkers4 platform to collect emotional responses from subjects. A questionnaire was designed where four emotion-related questions are asked for each image. The first two questions are about the valence and arousal ratings respectively, where a 9-point scale was used, same as [11, 15]. For valence, 1, 5, and 9 mean very negative, neutral, and very positive emotions respectively, in terms of attractiveness. For arousal, 1 and 9 mean emotions with very low and very high stimulating effects respectively. In the questionnaire, instead of directly asking subjects to provide VA scores, questions were rephrased to be similar as in [11]. The third question is about the emotion distribution of the image, based on Ekman’s six basic emotions [18]. Similar to [11], 7 emotion keywords (Ekman’s six basic emotions and “Neutral”) were used and subjects wer",
"title": ""
},
{
"docid": "dfde48aa79ac10382fe4b9a312662cd9",
"text": "221 Abstract— Due to rapid advances and availabilities of powerful image processing software's, it is easy to manipulate and modify digital images. So it is very difficult for a viewer to judge the authenticity of a given image. Nowadays, it is possible to add or remove important features from an image without leaving any obvious traces of tampering. As digital cameras and video cameras replace their analog counterparts, the need for authenticating digital images, validating their content and detecting forgeries will only increase. For digital photographs to be used as evidence in law issues or to be circulated in mass media, it is necessary to check the authenticity of the image. So In this paper, describes an Image forgery detection method based on SIFT. In particular, we focus on detection of a special type of digital forgery – the copy-move attack, in a copy-move image forgery method; a part of an image is copied and then pasted on a different location within the same image. In this approach an improved algorithm based on scale invariant features transform (SIFT) is used to detect such cloning forgery, In this technique Transform is applied to the input image to yield a reduced dimensional representation, After that Apply key point detection and feature descriptor along with a matching over all the key points. Such a method allows us to both understand if a copy–move attack has occurred and, also furthermore gives output by applying clustering over matched points.",
"title": ""
},
{
"docid": "aeb9cf8565953920d6912486b8909aee",
"text": "Changes in microbial diversity and composition are increasingly associated with several disease states including obesity and behavioural disorders. Obesity-associated microbiota alter host energy harvesting, insulin resistance, inflammation, and fat deposition. Additionally, intestinal microbiota can regulate metabolism, adiposity, homoeostasis, and energy balance as well as central appetite and food reward signalling, which together have crucial roles in obesity. Moreover, some strains of bacteria and their metabolites might target the brain directly via vagal stimulation or indirectly through immune-neuroendocrine mechanisms. Therefore, the gut microbiota is becoming a target for new anti-obesity therapies. Further investigations are needed to elucidate the intricate gut-microbiota-host relationship and the potential of gut-microbiota-targeted strategies, such as dietary interventions and faecal microbiota transplantation, as promising metabolic therapies that help patients to maintain a healthy weight throughout life.",
"title": ""
},
{
"docid": "7c9ded948f76bba73cb05e009d81cc89",
"text": "This paper proposes a two-phase resource allocation framework (RAF) for a parallel cooperative joint multi-bitrate video caching and transcoding (CVCT) in heterogeneous virtualized mobileedge computing (HV-MEC) networks. In the cache placement phase, we propose delivery-aware cache placement strategies (DACPSs) based on the available video popularity distribution (VPD) and channel distribution information (CDI) to exploit the flexible delivery opportunities, i.e., video transmission and transcoding capabilities. Then, for the delivery phase, we propose a delivery policy for given caching status, instantaneous requests of users, and channel state information (CSI). The optimization problems corresponding to both phases aim to maximize the total revenue of slices subject to the quality of services contracted between slices and end-users and the system constraints based on their own assumptions. Both problems are non-convex and suffer from their high-computational complexities. For each phase, we show how these two problems can be solved efficiently. We also design a low-complexity RAF (LCRAF) in which the complexity of the delivery algorithm is significantly reduced. Extensive numerical assessments demonstrate up to 30% performance improvement of our proposed DACPSs over traditional approaches.",
"title": ""
}
] |
scidocsrr
|
cbcc696e3af1af8899b4958e67ba6741
|
Towards rain detection through use of in-vehicle multipurpose cameras
|
[
{
"docid": "2ee5e5ecd9304066b12771f3349155f8",
"text": "An intelligent wiper speed adjustment system can be found in most middle and upper class cars. A core piece of this gadget is the rain sensor on the windshield. With the upcoming number of cars being equipped with an in-vehicle camera for vision-based applications the call for integrating all sensors in the area of the rearview mirror into one device rises to reduce the number of parts and variants. In this paper, functionality of standard rain sensors and different vision-based approaches are explained and a novel rain sensing concept based on an automotive in-vehicle camera for Driver Assistance Systems (DAS) is developed to enhance applicability. Hereby, the region at the bottom of the field of view (FOV) of the imager is used to detect raindrops, while the upper part of the image is still usable for other vision-based applications. A simple algorithm is set up to keep the additional processing time low and to quantitatively gather the rain intensity. Mechanisms to avoid false activations of the wipers are introduced. First experimental experiences based on real scenarios show promising results.",
"title": ""
}
] |
[
{
"docid": "ea1a56c7bcf4871d1c6f2f9806405827",
"text": "—Prior to the successful use of non-contact photoplethysmography, several engineering issues regarding this monitoring technique must be considered. These issues include ambient light and motion artefacts, the wide dynamic signal range and the effect of direct light source coupling. The latter issue was investigated and preliminary results show that direct coupling can cause attenuation of the detected PPG signal. It is shown that a physical offset can be introduced between the light source and the detector in order to reduce this effect.",
"title": ""
},
{
"docid": "a4b57037235e306034211e07e8500399",
"text": "As wireless devices boom and bandwidth-hungry applications (e.g., video and cloud uploading) get popular, today's wireless local area networks (WLANs) become not only crowded but also stressed at throughput. Multiuser multiple-input-multiple-output (MU-MIMO), an advanced form of MIMO, has gained attention due to its huge potential in improving the performance of WLANs. This paper surveys random access-based medium access control (MAC) protocols for MU-MIMO-enabled WLANs. It first provides background information about the evolution and the fundamental MAC schemes of IEEE 802.11 Standards and Amendments, and then identifies the key requirements of designing MU-MIMO MAC protocols for WLANs. After this, the most representative MU-MIMO MAC proposals in the literature are overviewed by benchmarking their MAC procedures and examining the key components, such as the channel state information acquisition, decoding/precoding, and scheduling schemes. Classifications and discussions on important findings of the surveyed MAC protocols are provided, based on which, the research challenges for designing effective MU-MIMO MAC protocols, as well as the envisaged MAC's role in the future heterogeneous networks, are highlighted.",
"title": ""
},
{
"docid": "f5d25ff18b9a5308fe45a2fe3e8c9ff8",
"text": "Synopsis: Using B cells from patients with chronic lymphocytic leukemia (CLL), Nakahara and colleagues have produced a lamprey monoclonal antibody with CLL idiotope specificity that can be used for early detection of leukemia recurrence. Lamprey antibodies can be generated rapidly and offer a complementary approach to the use of classical Ig-based anti-idiotope antibodies in the monitoring and management of patients with CLL.",
"title": ""
},
{
"docid": "0209132c7623c540c125a222552f33ac",
"text": "This paper reviews the criticism on the 4Ps Marketing Mix framework, the most popular tool of traditional marketing management, and categorizes the main objections of using the model as the foundation of physical marketing. It argues that applying the traditional approach, based on the 4Ps paradigm, is also a poor choice in the case of virtual marketing and identifies two main limitations of the framework in online environments: the drastically diminished role of the Ps and the lack of any strategic elements in the model. Next to identifying the critical factors of the Web marketing, the paper argues that the basis for successful E-Commerce is the full integration of the virtual activities into the company’s physical strategy, marketing plan and organisational processes. The four S elements of the Web-Marketing Mix framework present a sound and functional conceptual basis for designing, developing and commercialising Business-to-Consumer online projects. The model was originally developed for educational purposes and has been tested and refined by means of field projects; two of them are presented as case studies in the paper. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "cf04d97cf2b0b8d1a8b1f4318c553856",
"text": "We analyze how network effects affect competition in the nascent cryptocurrency market. We do so by examining the changes over time in exchange rate data among cryptocurrencies. Specifically, we look at two aspects: (1) competition among different currencies, and (2) competition among exchanges where those currencies are traded. Our data suggest that the winner-take-all effect is dominant early in the market. During this period, when Bitcoin becomes more valuable against the U.S. dollar, it also becomes more valuable against other cryptocurrencies. This trend is reversed in the later period. The data in the later period are consistent with the use of cryptocurrencies as financial assets (popularized by Bitcoin), and not consistent with “winner-take-all”",
"title": ""
},
{
"docid": "6db6dccccbdcf77068ae4270a1d6b408",
"text": "In many engineering disciplines, abstract models are used to describe systems on a high level of abstraction. On this abstract level, it is often easier to gain insights about that system that is being described. When models of a system change – for example because the system itself has changed – any analyses based on these models have to be invalidated and thus have to be reevaluated again in order for the results to stay meaningful. In many cases, the time to get updated analysis results is critical. However, as most often only small parts of the model change, large parts of this reevaluation could be saved by using previous results but such an incremental execution is barely done in practice as it is non-trivial and error-prone. The approach of implicit incrementalization o ers a solution by deriving an incremental evaluation strategy implicitly from a batch speci cation of the analysis. This works by deducing a dynamic dependency graph that allows to only reevaluate those parts of an analysis that are a ected by a given model change. Thus advantages of an incremental execution can be gained without changes to the code that would potentially degrade its understandability. However, current approaches to implicit incremental computation only support narrow classes of analysis, are restricted to an incremental derivation at instruction level or require an explicit state management. In addition, changes are only propagated sequentially, meanwhile modern multi-core architectures would allow parallel change propagation. Even with such improvements, it is unclear whether incremental execution in fact brings advantages as changes may easily cause butter y e ects, making a reuse of previous analysis results pointless (i.e. ine cient). This thesis deals with the problems of implicit incremental model analyses by proposing multiple approaches that mostly can be combined. Further, the",
"title": ""
},
{
"docid": "554d0255aef7ffac9e923da5d93b97e3",
"text": "In this demo paper, we present a text simplification approach that is directed at improving the performance of state-of-the-art Open Relation Extraction (RE) systems. As syntactically complex sentences often pose a challenge for current Open RE approaches, we have developed a simplification framework that performs a pre-processing step by taking a single sentence as input and using a set of syntactic-based transformation rules to create a textual input that is easier to process for subsequently applied Open RE systems.",
"title": ""
},
{
"docid": "c393e229c735648e8469fe81014634a4",
"text": "Multivariate time series data are becoming increasingly common in numerous real world applications, e.g., power plant monitoring, health care, wearable devices, automobile, etc. As a result, multivariate time series retrieval, i.e., given the current multivariate time series segment, how to obtain its relevant time series segments in the historical data (or in the database), attracts significant amount of interest in many fields. Building such a system, however, is challenging since it requires a compact representation of the raw time series which can explicitly encode the temporal dynamics as well as the correlations (interactions) between different pairs of time series (sensors). Furthermore, it requires query efficiency and expects a returned ranking list with high precision on the top. Despite the fact that various approaches have been developed, few of them can jointly resolve these two challenges. To cope with this issue, in this paper we propose a Deep r-th root of Rank Supervised Joint Binary Embedding (Deep r-RSJBE) to perform multivariate time series retrieval. Given a raw multivariate time series segment, we employ Long Short-Term Memory (LSTM) units to encode the temporal dynamics and utilize Convolutional Neural Networks (CNNs) to encode the correlations (interactions) between different pairs of time series (sensors). Subsequently, a joint binary embedding is pursued to incorporate both the temporal dynamics and the correlations. Finally, we develop a novel r-th root ranking loss to optimize the precision at the top of a Hamming distance ranking list. Thoroughly empirical studies based upon three publicly available time series datasets demonstrate the effectiveness and the efficiency of Deep r-RSJBE.",
"title": ""
},
{
"docid": "d25a34b3208ee28f9cdcddb9adf46eb4",
"text": "1 Umeå University, Department of Computing Science, SE-901 87 Umeå, Sweden, {jubo,thomasj,marie}@cs.umu.se Abstract The transition to object-oriented programming is more than just a matter of programming language. Traditional syllabi fail to teach students the “big picture” and students have difficulties taking advantage of objectoriented concepts. In this paper we present a holistic approach to a CS1 course in Java favouring general objectoriented concepts over the syntactical details of the language. We present goals for designing such a course and a case study showing interesting results.",
"title": ""
},
{
"docid": "e5e2d26950e0a75014ffdbeabf55668e",
"text": "Agriculture is the most important sector that influences the economy of India. It contributes to 18% of India's Gross Domestic Product (GDP) and gives employment to 50% of the population of India. People of India are practicing Agriculture for years but the results are never satisfying due to various factors that affect the crop yield. To fulfill the needs of around 1.2 billion people, it is very important to have a good yield of crops. Due to factors like soil type, precipitation, seed quality, lack of technical facilities etc the crop yield is directly influenced. Hence, new technologies are necessary for satisfying the growing need and farmers must work smartly by opting new technologies rather than going for trivial methods. This paper focuses on implementing crop yield prediction system by using Data Mining techniques by doing analysis on agriculture dataset. Different classifiers are used namely J48, LWL, LAD Tree and IBK for prediction and then the performance of each is compared using WEKA tool. For evaluating performance Accuracy is used as one of the factors. The classifiers are further compared with the values of Root Mean Squared Error (RMSE), Mean Absolute Error (MAE) and Relative Absolute Error (RAE). Lesser the value of error, more accurate the algorithm will work. The result is based on comparison among the classifiers.",
"title": ""
},
{
"docid": "471eca6664d0ae8f6cdfb848bc910592",
"text": "Taxonomic relation identification aims to recognize the ‘is-a’ relation between two terms. Previous works on identifying taxonomic relations are mostly based on statistical and linguistic approaches, but the accuracy of these approaches is far from satisfactory. In this paper, we propose a novel supervised learning approach for identifying taxonomic relations using term embeddings. For this purpose, we first design a dynamic weighting neural network to learn term embeddings based on not only the hypernym and hyponym terms, but also the contextual information between them. We then apply such embeddings as features to identify taxonomic relations using a supervised method. The experimental results show that our proposed approach significantly outperforms other state-of-the-art methods by 9% to 13% in terms of accuracy for both general and specific domain datasets.",
"title": ""
},
{
"docid": "476aa14f6b71af480e8ab4747849d7e3",
"text": "The present study explored the relationship between risky cybersecurity behaviours, attitudes towards cybersecurity in a business environment, Internet addiction, and impulsivity. 538 participants in part-time or full-time employment in the UK completed an online questionnaire, with responses from 515 being used in the data analysis. The survey included an attitude towards cybercrime and cybersecurity in business scale, a measure of impulsivity, Internet addiction and a 'risky' cybersecurity behaviours scale. The results demonstrated that Internet addiction was a significant predictor for risky cybersecurity behaviours. A positive attitude towards cybersecurity in business was negatively related to risky cybersecurity behaviours. Finally, the measure of impulsivity revealed that both attentional and motor impulsivity were both significant positive predictors of risky cybersecurity behaviours, with non-planning being a significant negative predictor. The results present a further step in understanding the individual differences that may govern good cybersecurity practices, highlighting the need to focus directly on more effective training and awareness mechanisms.",
"title": ""
},
{
"docid": "b3bab7639acde03cbe12253ebc6eba31",
"text": "Autism spectrum disorder (ASD) is a wide-ranging collection of developmental diseases with varying symptoms and degrees of disability. Currently, ASD is diagnosed mainly with psychometric tools, often unable to provide an early and reliable diagnosis. Recently, biochemical methods are being explored as a means to meet the latter need. For example, an increased predisposition to ASD has been associated with abnormalities of metabolites in folate-dependent one carbon metabolism (FOCM) and transsulfuration (TS). Multiple metabolites in the FOCM/TS pathways have been measured, and statistical analysis tools employed to identify certain metabolites that are closely related to ASD. The prime difficulty in such biochemical studies comes from (i) inefficient determination of which metabolites are most important and (ii) understanding how these metabolites are collectively related to ASD. This paper presents a new method based on scores produced in Support Vector Machine (SVM) modeling combined with High Dimensional Model Representation (HDMR) sensitivity analysis. The new method effectively and efficiently identifies the key causative metabolites in FOCM/TS pathways, ranks their importance, and discovers their independent and correlative action patterns upon ASD. Such information is valuable not only for providing a foundation for a pathological interpretation but also for potentially providing an early, reliable diagnosis ideally leading to a subsequent comprehensive treatment of ASD. With only tens of SVM model runs, the new method can identify the combinations of the most important metabolites in the FOCM/TS pathways that lead to ASD. Previous efforts to find these metabolites required hundreds of thousands of model runs with the same data.",
"title": ""
},
{
"docid": "c227f76c42ae34af11193e3ecb224ecb",
"text": "Antibiotics and antibiotic resistance determinants, natural molecules closely related to bacterial physiology and consistent with an ancient origin, are not only present in antibiotic-producing bacteria. Throughput sequencing technologies have revealed an unexpected reservoir of antibiotic resistance in the environment. These data suggest that co-evolution between antibiotic and antibiotic resistance genes has occurred since the beginning of time. This evolutionary race has probably been slow because of highly regulated processes and low antibiotic concentrations. Therefore to understand this global problem, a new variable must be introduced, that the antibiotic resistance is a natural event, inherent to life. However, the industrial production of natural and synthetic antibiotics has dramatically accelerated this race, selecting some of the many resistance genes present in nature and contributing to their diversification. One of the best models available to understand the biological impact of selection and diversification are β-lactamases. They constitute the most widespread mechanism of resistance, at least among pathogenic bacteria, with more than 1000 enzymes identified in the literature. In the last years, there has been growing concern about the description, spread, and diversification of β-lactamases with carbapenemase activity and AmpC-type in plasmids. Phylogenies of these enzymes help the understanding of the evolutionary forces driving their selection. Moreover, understanding the adaptive potential of β-lactamases contribute to exploration the evolutionary antagonists trajectories through the design of more efficient synthetic molecules. In this review, we attempt to analyze the antibiotic resistance problem from intrinsic and environmental resistomes to the adaptive potential of resistance genes and the driving forces involved in their diversification, in order to provide a global perspective of the resistance problem.",
"title": ""
},
{
"docid": "6a33013c19dc59d8871e217461d479e9",
"text": "Cancer tissues in histopathology images exhibit abnormal patterns; it is of great clinical importance to label a histopathology image as having cancerous regions or not and perform the corresponding image segmentation. However, the detailed annotation of cancer cells is often an ambiguous and challenging task. In this paper, we propose a new learning method, multiple clustered instance learning (MCIL), to classify, segment and cluster cancer cells in colon histopathology images. The proposed MCIL method simultaneously performs image-level classification (cancer vs. non-cancer image), pixel-level segmentation (cancer vs. non-cancer tissue), and patch-level clustering (cancer subclasses). We embed the clustering concept into the multiple instance learning (MIL) setting and derive a principled solution to perform the above three tasks in an integrated framework. Experimental results demonstrate the efficiency and effectiveness of MCIL in analyzing colon cancers.",
"title": ""
},
{
"docid": "4faacdbc093ac8dea97403355f95e504",
"text": "What frameworks and architectures are necessary to create a vision system for AGI? In this paper, we propose a formal model that states the task of perception within AGI. We show the role of discriminative and generative models in achieving efficient and general solution of this task, thus specifying the task in more detail. We discuss some existing generative and discriminative models and demonstrate their insufficiency for our purposes. Finally, we discuss some architectural dilemmas and open questions.",
"title": ""
},
{
"docid": "72e4984c05e6b68b606775bbf4ce3b33",
"text": "This paper defines a generative probabilistic model of parse trees, which we call PCFG-LA. This model is an extension of PCFG in which non-terminal symbols are augmented with latent variables. Finegrained CFG rules are automatically induced from a parsed corpus by training a PCFG-LA model using an EM-algorithm. Because exact parsing with a PCFG-LA is NP-hard, several approximations are described and empirically compared. In experiments using the Penn WSJ corpus, our automatically trained model gave a performance of 86.6% (F , sentences 40 words), which is comparable to that of an unlexicalized PCFG parser created using extensive manual feature selection.",
"title": ""
},
{
"docid": "3a011bdec6531de3f0f9718f35591e52",
"text": "Since Markowitz (1952) formulated the portfolio selection problem, many researchers have developed models aggregating simultaneously several conflicting attributes such as: the return on investment, risk and liquidity. The portfolio manager generally seeks the best combination of stocks/assets that meets his/ her investment objectives. The Goal Programming (GP) model is widely applied to finance and portfolio management. The aim of this paper is to present the different variants of the GP model that have been applied to the financial portfolio selection problem from the 1970s to nowadays. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ec93b4c61694916dd494e9376102726b",
"text": "In 1969 Barlow introduced the phrase economy of impulses to express the tendency for successive neural systems to use lower and lower levels of cell firings to produce equivalent encodings. From this viewpoint, the ultimate economy of impulses is a neural code of minimal redundancy. The hypothesis motivating our research is that energy expenditures, e.g., the metabolic cost of recovering from an action potential relative to the cost of inactivity, should also be factored into the economy of impulses. In fact, coding schemes with the largest representational capacity are not, in general, optimal when energy expenditures are taken into account. We show that for both binary and analog neurons, increased energy expenditure per neuron implies a decrease in average firing rate if energy efficient information transmission is to be maintained.",
"title": ""
},
{
"docid": "94d2c88b11c79e2f4bf9fdc3ed8e1861",
"text": "The advent of pulsed power technology in the 1960s has enabled the development of very high peak power sources of electromagnetic radiation in the microwave and millimeter wave bands of the electromagnetic spectrum. Such sources have applications in plasma physics, particle acceleration techniques, fusion energy research, high-power radars, and communications, to name just a few. This article describes recent ongoing activity in this field in both Russia and the United States. The overview of research in Russia focuses on high-power microwave (HPM) sources that are powered using SINUS accelerators, which were developed at the Institute of High Current Electronics. The overview of research in the United States focuses more broadly on recent accomplishments of a multidisciplinary university research initiative on HPM sources, which also involved close interactions with Department of Defense laboratories and industry. HPM sources described in this article have generated peak powers exceeding several gigawatts in pulse durations typically on the order of 100 ns in frequencies ranging from about 1 GHz to many tens of gigahertz.",
"title": ""
}
] |
scidocsrr
|
c1957d49ea08b47f516dcc7f032a3a71
|
Mining evolutionary multi-branch trees from text streams
|
[
{
"docid": "2ecfc909301dcc6241bec2472b4d4135",
"text": "Previous work on text mining has almost exclusively focused on a single stream. However, we often have available multiple text streams indexed by the same set of time points (called coordinated text streams), which offer new opportunities for text mining. For example, when a major event happens, all the news articles published by different agencies in different languages tend to cover the same event for a certain period, exhibiting a correlated bursty topic pattern in all the news article streams. In general, mining correlated bursty topic patterns from coordinated text streams can reveal interesting latent associations or events behind these streams. In this paper, we define and study this novel text mining problem. We propose a general probabilistic algorithm which can effectively discover correlated bursty patterns and their bursty periods across text streams even if the streams have completely different vocabularies (e.g., English vs Chinese). Evaluation of the proposed method on a news data set and a literature data set shows that it can effectively discover quite meaningful topic patterns from both data sets: the patterns discovered from the news data set accurately reveal the major common events covered in the two streams of news articles (in English and Chinese, respectively), while the patterns discovered from two database publication streams match well with the major research paradigm shifts in database research. Since the proposed method is general and does not require the streams to share vocabulary, it can be applied to any coordinated text streams to discover correlated topic patterns that burst in multiple streams in the same period.",
"title": ""
}
] |
[
{
"docid": "5d318e2df97f539e227f0aef60d0732b",
"text": "The concept of intuition has, until recently, received scant scholarly attention within and beyond the psychological sciences, despite its potential to unify a number of lines of inquiry. Presently, the literature on intuition is conceptually underdeveloped and dispersed across a range of domains of application, from education, to management, to health. In this article, we clarify and distinguish intuition from related constructs, such as insight, and review a number of theoretical models that attempt to unify cognition and affect. Intuition's place within a broader conceptual framework that distinguishes between two fundamental types of human information processing is explored. We examine recent evidence from the field of social cognitive neuroscience that identifies the potential neural correlates of these separate systems and conclude by identifying a number of theoretical and methodological challenges associated with the valid and reliable assessment of intuition as a basis for future research in this burgeoning field of inquiry.",
"title": ""
},
{
"docid": "942be0aa4dab5904139919351d6d63d4",
"text": "Since Hinton and Salakhutdinov published their landmark science paper in 2006 ending the previous neural-network winter, research in neural networks has increased dramatically. Researchers have applied neural networks seemingly successfully to various topics in the field of computer science. However, there is a risk that we overlook other methods. Therefore, we take a recent end-to-end neural-network-based work (Dhingra et al., 2018) as a starting point and contrast this work with more classical techniques. This prior work focuses on the LAMBADA word prediction task, where broad context is used to predict the last word of a sentence. It is often assumed that neural networks are good at such tasks where feature extraction is important. We show that with simpler syntactic and semantic features (e.g. Across Sentence Boundary (ASB) N-grams) a state-ofthe-art neural network can be outperformed. Our discriminative language-model-based approach improves the word prediction accuracy from 55.6% to 58.9% on the LAMBADA task. As a next step, we plan to extend this work to other language modeling tasks.",
"title": ""
},
{
"docid": "7d0ebf939deed43253d5360e325c3e8e",
"text": "Roughly speaking, clustering evolving networks aims at detecting structurally dense subgroups in networks that evolve over time. This implies that the subgroups we seek for also evolve, which results in many additional tasks compared to clustering static networks. We discuss these additional tasks and difficulties resulting thereof and present an overview on current approaches to solve these problems. We focus on clustering approaches in online scenarios, i.e., approaches that incrementally use structural information from previous time steps in order to incorporate temporal smoothness or to achieve low running time. Moreover, we describe a collection of real world networks and generators for synthetic data that are often used for evaluation.",
"title": ""
},
{
"docid": "78e3d9bbfc9fdd9c3454c34f09e5abd4",
"text": "This paper presents the first ever reported implementation of the Gapped Basic Local Alignment Search Tool (Gapped BLAST) for biological sequence alignment, with the Two-Hit method, on CUDA (compute unified device architecture)-compatible Graphic Processing Units (GPUs). The latter have recently emerged as relatively low cost and easy to program high performance platforms for general purpose computing. Our Gapped BLAST implementation on an NVIDIA Geforce 8800 GTX GPU is up to 2.7x quicker than the most optimized CPU-based implementation, namely NCBI BLAST, running on a Pentium4 3.4 GHz desktop computer with 2GB RAM.",
"title": ""
},
{
"docid": "846f8f33181c3143bb8f54ce8eb3e5cc",
"text": "Story Point is a relative measure heavily used for agile estimation of size. The team decides how big a point is, and based on that size, determines how many points each work item is. In many organizations, the use of story points for similar features can vary from team to another, and successfully, based on the teams' sizes, skill set and relative use of this tool. But in a CMMI organization, this technique demands a degree of consistency across teams for a more streamlined approach to solution delivery. This generates a challenge for CMMI organizations to adopt Agile in software estimation and planning. In this paper, a process and methodology that guarantees relativity in software sizing while using agile story points is introduced. The proposed process and methodology are applied in a CMMI company level three on different projects. By that, the story point is used on the level of the organization, not the project. Then, the performance of sizing process is measured to show a significant improvement in sizing accuracy after adopting the agile story point in CMMI organizations. To complete the estimation cycle, an improvement in effort estimation dependent on story point is also introduced, and its performance effect is measured.",
"title": ""
},
{
"docid": "44f831d346d42fd39bab3f577e6feec4",
"text": "We propose a training framework for sequence-to-sequence voice conversion (SVC). A well-known problem regarding a conventional VC framework is that acoustic-feature sequences generated from a converter tend to be over-smoothed, resulting in buzzy-sounding speech. This is because a particular form of similarity metric or distribution for parameter training of the acoustic model is assumed so that the generated feature sequence that averagely fits the training target example is considered optimal. This over-smoothing occurs as long as a manually constructed similarity metric is used. To overcome this limitation, our proposed SVC framework uses a similarity metric implicitly derived from a generative adversarial network, enabling the measurement of the distance in the high-level abstract space. This would enable the model to mitigate the oversmoothing problem caused in the low-level data space. Furthermore, we use convolutional neural networks to model the long-range context-dependencies. This also enables the similarity metric to have a shift-invariant property; thus, making the model robust against misalignment errors involved in the parallel data. We tested our framework on a non-native-to-native VC task. The experimental results revealed that the use of the proposed framework had a certain effect in improving naturalness, clarity, and speaker individuality.",
"title": ""
},
{
"docid": "3c0e132f0738105eb7fff7f73c520ef7",
"text": "Fan-out wafer-level-packaging (FO-WLP) technology gets more and more significant attention with its advantages of small form factor, higher I/O density, cost effective and high performance for wide range application. However, wafer warpage is still one critical issue which is needed to be addressed for successful subsequent processes for FO-WLP packaging. In this study, methodology to reduce wafer warpage of 12\" wafer at different processes was proposed in terms of geometry design, material selection, and process optimization through finite element analysis (FEA) and experiment. Wafer process dependent modeling results were validated by experimental measurement data. Solutions for reducing wafer warpage were recommended. Key parameters were identified based on FEA modeling results: thickness ratio of die to total mold thickness, molding compound and support wafer materials, dielectric material and RDL design.",
"title": ""
},
{
"docid": "7fc49f042770caf691e8bf074605a7ed",
"text": "Human prostate cancer is characterized by multiple gross chromosome alterations involving several chromosome regions. However, the specific genes involved in the development of prostate tumors are still largely unknown. Here we have studied the chromosome composition of the three established prostate cancer cell lines, LNCaP, PC-3, and DU145, by spectral karyotyping (SKY). SKY analysis showed complex karyotypes for all three cell lines, with 87, 58/113, and 62 chromosomes, respectively. All cell lines were shown to carry structural alterations of chromosomes 1, 2, 4, 6, 10, 15, and 16; however, no recurrent breakpoints were detected. Compared to previously published findings on these cell lines using comparative genomic hybridization, SKY revealed several balanced translocations and pinpointed rearrangement breakpoints. The SKY analysis was validated by fluorescence in situ hybridization using chromosome-specific, as well as locus-specific, probes. Identification of chromosome alterations in these cell lines by SKY may prove to be helpful in attempts to clone the genes involved in prostate cancer tumorigenesis.",
"title": ""
},
{
"docid": "1569bcea0c166d9bf2526789514609c5",
"text": "In this paper, we present the developmert and initial validation of a new self-report instrument, the Differentiation of Self Inventory (DSI). T. DSI represents the first attempt to create a multi-dimensional measure of differentiation based on Bowen Theory, focusing specifically on adults (ages 25 +), their current significant relationships, and their relations with families of origin. Principal components factor analysis on a sample of 313 normal adults (mean age = 36.8) suggested four dimensions: Emotional Reactivity, Reactive Distancing, Fusion with Parents, and \"I\" Position. Scales constructed from these factors were found to be moderately correlated in the expected direction, internally consistent, and significantly predictive of trait anxiety. The potential contribution of the DSI is discussed -for testing Bowen Theory, as a clinical assessment tool, and as an indicator of psychotherapeutic outcome.",
"title": ""
},
{
"docid": "92b2b7fb95624a187f5304c882d31dca",
"text": "Automatically predicting human eye fixations is a useful technique that can facilitate many multimedia applications, e.g., image retrieval, action recognition, and photo retargeting. Conventional approaches are frustrated by two drawbacks. First, psychophysical experiments show that an object-level interpretation of scenes influences eye movements significantly. Most of the existing saliency models rely on object detectors, and therefore, only a few prespecified categories can be discovered. Second, the relative displacement of objects influences their saliency remarkably, but current models cannot describe them explicitly. To solve these problems, this paper proposes weakly supervised fixations prediction, which leverages image labels to improve accuracy of human fixations prediction. The proposed model hierarchically discovers objects as well as their spatial configurations. Starting from the raw image pixels, we sample superpixels in an image, thereby seamless object descriptors termed object-level graphlets (oGLs) are generated by random walking on the superpixel mosaic. Then, a manifold embedding algorithm is proposed to encode image labels into oGLs, and the response map of each prespecified object is computed accordingly. On the basis of the object-level response map, we propose spatial-level graphlets (sGLs) to model the relative positions among objects. Afterward, eye tracking data is employed to integrate these sGLs for predicting human eye fixations. Thorough experiment results demonstrate the advantage of the proposed method over the state-of-the-art.",
"title": ""
},
{
"docid": "352c61af854ffc6dab438e7a1be56fcb",
"text": "Question-answering (QA) on video contents is a significant challenge for achieving human-level intelligence as it involves both vision and language in real-world settings. Here we demonstrate the possibility of an AI agent performing video story QA by learning from a large amount of cartoon videos. We develop a video-story learning model, i.e. Deep Embedded Memory Networks (DEMN), to reconstruct stories from a joint scene-dialogue video stream using a latent embedding space of observed data. The video stories are stored in a long-term memory component. For a given question, an LSTM-based attention model uses the long-term memory to recall the best question-story-answer triplet by focusing on specific words containing key information. We trained the DEMN on a novel QA dataset of children’s cartoon video series, Pororo. The dataset contains 16,066 scene-dialogue pairs of 20.5-hour videos, 27,328 fine-grained sentences for scene description, and 8,913 story-related QA pairs. Our experimental results show that the DEMN outperforms other QA models. This is mainly due to 1) the reconstruction of video stories in a scene-dialogue combined form that utilize the latent embedding and 2) attention. DEMN also achieved state-of-the-art results on the MovieQA benchmark.",
"title": ""
},
{
"docid": "63ed24b818f83ab04160b5c690075aac",
"text": "In this paper, we discuss the impact of digital control in high-frequency switched-mode power supplies (SMPS), including point-of-load and isolated DC-DC converters, microprocessor power supplies, power-factor-correction rectifiers, electronic ballasts, etc., where switching frequencies are typically in the hundreds of kHz to MHz range, and where high efficiency, static and dynamic regulation, low size and weight, as well as low controller complexity and cost are very important. To meet these application requirements, a digital SMPS controller may include fast, small analog-to-digital converters, hardware-accelerated programmable compensators, programmable digital modulators with very fine time resolution, and a standard microcontroller core to perform programming, monitoring and other system interface tasks. Based on recent advances in circuit and control techniques, together with rapid advances in digital VLSI technology, we conclude that high-performance digital controller solutions are both feasible and practical, leading to much enhanced system integration and performance gains. Examples of experimentally demonstrated results are presented, together with pointers to areas of current and future research and development.",
"title": ""
},
{
"docid": "84c37ea2545042a2654b162491846628",
"text": "Ever since the agile manifesto was created in 2001, the research community has devoted a great deal of attention to agile software development. This article examines publications and citations to illustrate how the research on agile has progressed in the 10 years following the articulation of the manifesto. nformation systems Xtreme programming, XP",
"title": ""
},
{
"docid": "1203f22bfdfc9ecd211dbd79a2043a6a",
"text": "After a short introduction to classic cryptography we explain thoroughly how quantum cryptography works. We present then an elegant experimental realization based on a self-balanced interferometer with Faraday mirrors. This phase-coding setup needs no alignment of the interferometer nor polarization control, and therefore considerably facilitates the experiment. Moreover it features excellent fringe visibility. Next, we estimate the practical limits of quantum cryptography. The importance of the detector noise is illustrated and means of reducing it are presented. With present-day technologies maximum distances of about 70 kmwith bit rates of 100 Hzare achievable. PACS: 03.67.Dd; 85.60; 42.25; 33.55.A Cryptography is the art of hiding information in a string of bits meaningless to any unauthorized party. To achieve this goal, one uses encryption: a message is combined according to an algorithm with some additional secret information – the key – to produce a cryptogram. In the traditional terminology, Alice is the party encrypting and transmitting the message, Bob the one receiving it, and Eve the malevolent eavesdropper. For a crypto-system to be considered secure, it should be impossible to unlock the cryptogram without Bob’s key. In practice, this demand is often softened, and one requires only that the system is sufficiently difficult to crack. The idea is that the message should remain protected as long as the information it contains is valuable. There are two main classes of crypto-systems, the publickey and the secret-key crypto-systems: Public key systems are based on so-called one-way functions: given a certainx, it is easy to computef(x), but difficult to do the inverse, i.e. compute x from f(x). “Difficult” means that the task shall take a time that grows exponentially with the number of bits of the input. The RSA (Rivest, Shamir, Adleman) crypto-system for example is based on the factorizing of large integers. Anyone can compute 137 ×53 in a few seconds, but it may take a while to find the prime factors of 28 907. To transmit a message Bob chooses a private key (based on two large prime numbers) and computes from it a public key (based on the product of these numbers) which he discloses publicly. Now Alice can encrypt her message using this public key and transmit it to Bob, who decrypts it with the private key. Public key systems are very convenient and became very popular over the last 20 years, however, they suffer from two potential major flaws. To date, nobody knows for sure whether or not factorizing is indeed difficult. For known algorithms, the time for calculation increases exponentially with the number of input bits, and one can easily improve the safety of RSA by choosing a longer key. However, a fast algorithm for factorization would immediately annihilate the security of the RSA system. Although it has not been published yet, there is no guarantee that such an algorithm does not exist. Second, problems that are difficult for a classical computer could become easy for a quantum computer. With the recent developments in the theory of quantum computation, there are reasons to fear that building these machines will eventually become possible. If one of these two possibilities came true, RSA would become obsolete. One would then have no choice, but to turn to secret-key cryptosystems. Very convenient and broadly used are crypto-systems based on a public algorithm and a relatively short secret key. 
The DES (Data Encryption Standard, 1977) for example uses a 56-bit key and the same algorithm for coding and decoding. The secrecy of the cryptogram, however, depends again on the calculating power and the time of the eavesdropper. The only crypto-system providing proven, perfect secrecy is the “one-time pad” proposed by Vernam in 1935. With this scheme, a message is encrypted using a random key of equal length, by simply “adding” each bit of the message to the corresponding bit of the key. The scrambled text can then be sent to Bob, who decrypts the message by “subtracting” the same key. The bits of the ciphertext are as random as those of the key and consequently do not contain any information. Although perfectly secure, the problem with this system is that it is essential for Alice and Bob to share a common secret key, at least as long as the message they want to exchange, and use it only for a single encryption. This key must be transmitted by some trusted means or personal meeting, which turns out to be complex and expensive.",
"title": ""
},
{
"docid": "3177e9dd683fdc66cbca3bd985f694b1",
"text": "Online communities allow millions of people who would never meet in person to interact. People join web-based discussion boards, email lists, and chat rooms for friendship, social support, entertainment, and information on technical, health, and leisure activities [24]. And they do so in droves. One of the earliest networks of online communities, Usenet, had over nine million unique contributors, 250 million messages, and approximately 200,000 active groups in 2003 [27], while the newer MySpace, founded in 2003, attracts a quarter million new members every day [27].",
"title": ""
},
{
"docid": "c450ac5c84d962bb7f2262cf48e1280a",
"text": "Animal-assisted therapies have become widespread with programs targeting a variety of pathologies and populations. Despite its popularity, it is unclear if this therapy is useful. The aim of this systematic review is to establish the efficacy of Animal assisted therapies in the management of dementia, depression and other conditions in adult population. A search was conducted in MEDLINE, EMBASE, CINAHL, LILACS, ScienceDirect, and Taylor and Francis, OpenGrey, GreyLiteratureReport, ProQuest, and DIALNET. No language or study type filters were applied. Conditions studied included depression, dementia, multiple sclerosis, PTSD, stroke, spinal cord injury, and schizophrenia. Only articles published after the year 2000 using therapies with significant animal involvement were included. 23 articles and dissertations met inclusion criteria. Overall quality was low. The degree of animal interaction significantly influenced outcomes. Results are generally favorable, but more thorough and standardized research should be done to strengthen the existing evidence.",
"title": ""
},
{
"docid": "6a2c7d43cde643f295ace71f5681285f",
"text": "Quantum mechanics and information theory are among the most important scientific discoveries of the last century. Although these two areas initially developed separately, it has emerged that they are in fact intimately related. In this review the author shows how quantum information theory extends traditional information theory by exploring the limits imposed by quantum, rather than classical, mechanics on information storage and transmission. The derivation of many key results differentiates this review from the usual presentation in that they are shown to follow logically from one crucial property of relative entropy. Within the review, optimal bounds on the enhanced speed that quantum computers can achieve over their classical counterparts are outlined using information-theoretic arguments. In addition, important implications of quantum information theory for thermodynamics and quantum measurement are intermittently discussed. A number of simple examples and derivations, including quantum superdense coding, quantum teleportation, and Deutsch’s and Grover’s algorithms, are also included.",
"title": ""
},
{
"docid": "95c4a2cfd063abdac35572927c4dcfc1",
"text": "Community detection is an important task in network analysis. A community (also referred to as a cluster) is a set of cohesive vertices that have more connections inside the set than outside. In many social and information networks, these communities naturally overlap. For instance, in a social network, each vertex in a graph corresponds to an individual who usually participates in multiple communities. In this paper, we propose an efficient overlapping community detection algorithm using a seed expansion approach. The key idea of our algorithm is to find good seeds, and then greedily expand these seeds based on a community metric. Within this seed expansion method, we investigate the problem of how to determine good seed nodes in a graph. In particular, we develop new seeding strategies for a personalized PageRank clustering scheme that optimizes the conductance community score. An important step in our method is the neighborhood inflation step where seeds are modified to represent their entire vertex neighborhood. Experimental results show that our seed expansion algorithm outperforms other state-of-the-art overlapping community detection methods in terms of producing cohesive clusters and identifying ground-truth communities. We also show that our new seeding strategies are better than existing strategies, and are thus effective in finding good overlapping communities in real-world networks.",
"title": ""
},
{
"docid": "646572f76cffd3ba225105d6647a588f",
"text": "Context: Cyber-physical systems (CPSs) have emerged to be the next generation of engineered systems driving the so-called fourth industrial revolution. CPSs are becoming more complex, open and more prone to security threats, which urges security to be engineered systematically into CPSs. Model-Based Security Engineering (MBSE) could be a key means to tackle this challenge via security by design, abstraction, and",
"title": ""
}
] |
scidocsrr
|
7a0c82f620a3528e0e61ea064bc48b90
|
Mobile Application Development for Quran Verse Recognition and Interpretations
|
[
{
"docid": "505e80ac2fe0ee1a34c60279b90d0ca7",
"text": "In an effective e-learning game, the learner’s enjoyment acts as a catalyst to encourage his/her learning initiative. Therefore, the availability of a scale that effectively measures the enjoyment offered by e-learning games assist the game designer to understanding the strength and flaw of the game efficiently from the learner’s points of view. E-learning games are aimed at the achievement of learning objectives via the creation of a flow effect. Thus, this study is based on Sweetser’s & Wyeth’s framework to develop a more rigorous scale that assesses user enjoyment of e-learning games. The scale developed in the present study consists of eight dimensions: Immersion, social interaction, challenge, goal clarity, feedback, concentration, control, and knowledge improvement. Four learning games employed in a university’s online learning course ‘‘Introduction to Software Application” were used as the instruments of scale verification. Survey questionnaires were distributed to students taking the course and 166 valid samples were subsequently collected. The results showed that the validity and reliability of the scale, EGameFlow, were satisfactory. Thus, the measurement is an effective tool for evaluating the level of enjoyment provided by elearning games to their users. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "e6e65cdee48c6b9606fa14904d176982",
"text": "The use of prediction to eliminate or reduce the effects of system delays in Head-Mounted Display systems has been the subject of several recent papers. A variety of methods have been proposed but almost all the analysis has been empirical, making comparisons of results difficult and providing little direction to the designer of new systems. In this paper, we characterize the performance of two classes of head-motion predictors by analyzing them in the frequency domain. The first predictor is a polynomial extrapolation and the other is based on the Kalman filter. Our analysis shows that even with perfect, noise-free inputs, the error in predicted position grows rapidly with increasing prediction intervals and input signal frequencies. Given the spectra of the original head motion, this analysis estimates the spectra of the predicted motion, quantifying a predictor's performance on different systems and applications. Acceleration sensors are shown to be more useful to a predictor than velocity sensors. The methods described will enable designers to determine maximum acceptable system delay based on maximum tolerable error and the characteristics of user motions in the application. CR",
"title": ""
},
{
"docid": "4b2b4caa7dbf747833ff0f5f669ffa64",
"text": "This paper studies the use of everyday words to describe images. The common saying has it that 'a picture is worth a thousand words', here we ask which thousand? The proliferation of tagged social multimedia data presents a challenge to understanding collective tag-use at large scale -- one can ask if patterns from photo tags help understand tag-tag relations, and how it can be leveraged to improve visual search and recognition. We propose a new method to jointly analyze three distinct visual knowledge resources: Flickr, ImageNet/WordNet, and ConceptNet. This allows us to quantify the visual relevance of both tags learn their relationships. We propose a novel network estimation algorithm, Inverse Concept Rank, to infer incomplete tag relationships. We then design an algorithm for image annotation that takes into account both image and tag features. We analyze over 5 million photos with over 20,000 visual tags. The statistics from this collection leads to good results for image tagging, relationship estimation, and generalizing to unseen tags. This is a first step in analyzing picture tags and everyday semantic knowledge. Potential other applications include generating natural language descriptions of pictures, as well as validating and supplementing knowledge databases.",
"title": ""
},
{
"docid": "131fed1e0821b049d423c491b1c0166e",
"text": "Conventional approaches to understanding consciousness are generally concerned with the contribution of specific brain areas or groups of neurons. By contrast, it is considered here what kinds of neural processes can account for key properties of conscious experience. Applying measures of neural integration and complexity, together with an analysis of extensive neurological data, leads to a testable proposal-the dynamic core hypothesis-about the properties of the neural substrate of consciousness.",
"title": ""
},
{
"docid": "6103e5150e47baaa6447507ee66165e9",
"text": "Urban building reconstruction is an important step for urban digitization and realisticvisualization. In this paper, we propose a novel automatic method to recover urban building geometry from 3D point clouds. The proposed method is suitable for buildings composed of planar polygons and aligned with the gravity direction, which are quite common in the city. Our key observation is that the building shapes are usually piecewise constant along the gravity direction and determined by several dominant shapes. Based on this observation, we formulate building reconstruction as an energy minimization problem under the Markov Random Field (MRF) framework. Specifically, point clouds are first cutinto a sequence of slices along the gravity direction. Then, floorplans are reconstructed by extracting boundaries of these slices, among which dominant floorplans are extracted and propagated to other floors via MRF. To guarantee correct propagation, a new distance measurement for floorplans is designed, which first encodes floorplans into strings and then calculates distances between their corresponding strings. Additionally, an image based editing method is also proposed to recover detailed window structures. Experimental results on both synthetic and real data sets have validated the effectiveness of our method.",
"title": ""
},
{
"docid": "92b4d9c69969c66a1d523c38fd0495a4",
"text": "A level designer typically creates the levels of a game to cater for a certain set of objectives, or mission. But in procedural content generation, it is common to treat the creation of missions and the generation of levels as two separate concerns. This often leads to generic levels that allow for various missions. However, this also creates a generic impression for the player, because the potential for synergy between the objectives and the level is not utilised. Following up on the mission-space generation concept, as described by Dormans [5], we explore the possibilities of procedurally generating a level from a designer-made mission. We use a generative grammar to transform a mission into a level in a mixed-initiative design setting. We provide two case studies, dungeon levels for a rogue-like game, and platformer levels for a metroidvania game. The generators differ in the way they use the mission to generate the space, but are created with the same tool for content generation based on model transformations. We discuss the differences between the two generation processes and compare it with a parameterized approach.",
"title": ""
},
{
"docid": "3e7e40f82ebb83b4314c974334c8ce0c",
"text": "Three-dimensional shape reconstruction of 2D landmark points on a single image is a hallmark of human vision, but is a task that has been proven difficult for computer vision algorithms. We define a feed-forward deep neural network algorithm that can reconstruct 3D shapes from 2D landmark points almost perfectly (i.e., with extremely small reconstruction errors), even when these 2D landmarks are from a single image. Our experimental results show an improvement of up to two-fold over state-of-the-art computer vision algorithms; 3D shape reconstruction error (measured as the Procrustes distance between the reconstructed shape and the ground-truth) of human faces is <inline-formula><tex-math notation=\"LaTeX\">$<.004$</tex-math><alternatives> <inline-graphic xlink:href=\"martinez-ieq1-2772922.gif\"/></alternatives></inline-formula>, cars is .0022, human bodies is .022, and highly-deformable flags is .0004. Our algorithm was also a top performer at the 2016 3D Face Alignment in the Wild Challenge competition (done in conjunction with the European Conference on Computer Vision, ECCV) that required the reconstruction of 3D face shape from a single image. The derived algorithm can be trained in a couple hours and testing runs at more than 1,000 frames/s on an i7 desktop. We also present an innovative data augmentation approach that allows us to train the system efficiently with small number of samples. And the system is robust to noise (e.g., imprecise landmark points) and missing data (e.g., occluded or undetected landmark points).",
"title": ""
},
{
"docid": "d170d7cf20b0a848bb0d81c5d163b505",
"text": "The organizational and social issues associated with the development, implementation and use of computer-based information systems have increasingly attracted the attention of information systems researchers. Interest in qualitative research methods such as action research, case study research and ethnography, which focus on understanding social phenomena in their natural setting, has consequently grown. Case study research is the most widely used qualitative research method in information systems research, and is well suited to understanding the interactions between information technology-related innovations and organizational contexts. Although case study research is useful as ameans of studying information systems development and use in the field, there can be practical difficulties associated with attempting to undertake case studies as a rigorous and effective method of research. This paper addresses a number of these difficulties and offers some practical guidelines for successfully completing case study research. The paper focuses on the pragmatics of conducting case study research, and draws from the discussion at a panel session conducted by the authors at the 8th Australasian Conference on Information Systems, September 1997 (ACIS 97), from the authors' practical experiences, and from the case study research literature.",
"title": ""
},
{
"docid": "9bea0e85c3de06ef440c255700b041fd",
"text": "Preterm birth and infants’ admission to neonatal intensive care units (NICU) are associated with significant emotional and psychological stresses on mothers that interfere with normal mother-infant relationship. Maternal selfefficacy in parenting ability may predict long-term outcome of mother-infant relationship as well as neurodevelopmental and behavioral development of preterm infants. The Perceived Maternal Parenting Self-Efficacy (PMP S-E) tool was developed to measure self-efficacy in mothers of premature infants in the United Kingdom. The present study determined if maternal and neonatal characteristics could predict PMP S-E scores of mothers who were administered to in a mid-west community medical center NICU. Mothers whose infants were born less than 37 weeks gestational age and admitted to a level III neonatal intensive care unit participated. Participants completed the PMP S-E and demographic survey prior to discharge. A logistic regression analysis was conducted from PMP SE scores involving 103 dyads using maternal education, race, breast feeding, maternal age, infant’s gestational age, Apgar 5-minute score, birth weight, mode of delivery and time from birth to completion of PMP S-E questionnaire. Time to completion of survey and gestational age were the significant predictors of PMP S-E scores. The finding of this study concerning the utilization of the PMP S-E in a United States mid-west tertiary neonatal center suggest that interpretation of the score requires careful consideration of these two variables.",
"title": ""
},
{
"docid": "9cc23cd9bfb3e422e2b4ace1fe816855",
"text": "Evaluating surgeon skill has predominantly been a subjective task. Development of objective methods for surgical skill assessment are of increased interest. Recently, with technological advances such as robotic-assisted minimally invasive surgery (RMIS), new opportunities for objective and automated assessment frameworks have arisen. In this paper, we applied machine learning methods to automatically evaluate performance of the surgeon in RMIS. Six important movement features were used in the evaluation including completion time, path length, depth perception, speed, smoothness and curvature. Different classification methods applied to discriminate expert and novice surgeons. We test our method on real surgical data for suturing task and compare the classification result with the ground truth data (obtained by manual labeling). The experimental results show that the proposed framework can classify surgical skill level with relatively high accuracy of 85.7%. This study demonstrates the ability of machine learning methods to automatically classify expert and novice surgeons using movement features for different RMIS tasks. Due to the simplicity and generalizability of the introduced classification method, it is easy to implement in existing trainers. .",
"title": ""
},
{
"docid": "8ae28438f9fbeb9fa22188f37d7b91a3",
"text": "Supply Chain Management systems provide information sharing and analysis to companies and support their planning activities. They are not based on the real data because there is asymmetric information between companies, then leading to disturbance of the planning algorithms. On the other hand, sharing data between manufacturers, suppliers and customers becomes very important to ensure reactivity towards markets variability. Especially, double marginalization is a widespread and serious problem in supply chain management. Decentralized systems under wholesale price contracts are investigated, with double marginalization effects shown to lead to supply insufficiencies, in the cases of both deterministic and random demands. This paper proposes a blockchain based solution to address the problems of supply chain such as Double Marginalization and Information Asymmetry etc.",
"title": ""
},
{
"docid": "2f97792b65ab2a630405bb955e577ef1",
"text": "graphic designs The messages describing novel graphic designs and synthesized sounds obtained in the experiments by Krauss, et al. (in press) were coded into categories of description types, and the rate of gesturing associated with these description types was examined. For the novel designs, we used a category system developed by Fussell and Krauss (1989a) for descriptionsof these figures that partitions the descriptions into three categories: Literal descriptions, in which a design was characterized in terms of its geometric elements — as a collection of lines, arcs, angles, etc.; Figurative descriptions, in which a design was described in terms of objects or images it suggested; Symbol descriptions, in which a design was likened to a familiar symbol, typically one or more numbers or letters.21 When a message contained more than one type of description (as many did), it was coded for the type that predominated. Overall, about 60% of the descriptions were coded as figurative, about 24% as literal and the remaining 16% as symbols. For the descriptions of the graphic designs, a one-way ANOVA was performed with description type (literal, figurative or symbol) as the independent variable and gesture rate as the dependent variable to determine whether gesturing varied as a function of the kind of content. A significant effect was found F (2, 350) = 4.26, p = .015. Figurative descriptions were accompanied by slightly more gestures than literal descriptions; both were accompanied by more gestures than were the symbol descriptions (14.6 vs. 13.7 vs. 10.6 gestures per m, respectively). Both figurative and literal descriptions tended to be formulated in spatial terms. Symbol descriptions tended to be brief and static—essentially a statement of the resemblance.",
"title": ""
},
{
"docid": "74da516d4a74403ac5df760b0b656b1f",
"text": "In this paper a novel and effective approach for automated audio classification is presented that is based on the fusion of different sets of features, both visual and acoustic. A number of different acoustic and visual features of sounds are evaluated and compared. These features are then fused in an ensemble that produces better classification accuracy than other state-of-the-art approaches. The visual features of sounds are built starting from the audio file and are taken from images constructed from different spectrograms, a gammatonegram, and a rhythm image. These images are divided into subwindows from which a set of texture descriptors are extracted. For each feature descriptor a different Support Vector Machine (SVM) is trained. The SVMs outputs are summed for a final decision. The proposed ensemble is evaluated on three well-known databases of music genre classification (the Latin Music Database, the ISMIR 2004 database, and the GTZAN genre collection), a dataset of Bird vocalization aiming specie recognition, and a dataset of right whale calls aiming whale detection. The MATLAB code for the ensemble of classifiers and for the extraction of the features will be publicly available (https://www.dei.unipd.it/node/2357 +Pattern Recognition and Ensemble Classifiers).",
"title": ""
},
{
"docid": "c76d8583d805b61a8210c4e5f8854c80",
"text": "BACKGROUND AND OBJECTIVES\nThe present study proposes an intelligent system for automatic categorization of Pap smear images to detect cervical dysplasia, which has been an open problem ongoing for last five decades.\n\n\nMETHODS\nThe classification technique is based on shape, texture and color features. It classifies the cervical dysplasia into two-level (normal and abnormal) and three-level (Negative for Intraepithelial Lesion or Malignancy, Low-grade Squamous Intraepithelial Lesion and High-grade Squamous Intraepithelial Lesion) classes reflecting the established Bethesda system of classification used for diagnosis of cancerous or precancerous lesion of cervix. The system is evaluated on two generated databases obtained from two diagnostic centers, one containing 1610 single cervical cells and the other 1320 complete smear level images. The main objective of this database generation is to categorize the images according to the Bethesda system of classification both of which require lots of training and expertise. The system is also trained and tested on the benchmark Herlev University database which is publicly available. In this contribution a new segmentation technique has also been proposed for extracting shape features. Ripplet Type I transform, Histogram first order statistics and Gray Level Co-occurrence Matrix have been used for color and texture features respectively. To improve classification results, ensemble method is used, which integrates the decision of three classifiers. Assessments are performed using 5 fold cross validation.\n\n\nRESULTS\nExtended experiments reveal that the proposed system can successfully classify Pap smear images performing significantly better when compared with other existing methods.\n\n\nCONCLUSION\nThis type of automated cancer classifier will be of particular help in early detection of cancer.",
"title": ""
},
{
"docid": "ca9ac3c8e45511e4157657191d2681e9",
"text": "This paper shows how Linked Open Data can ease the challenges of information triage in disaster response efforts. Recently, disaster management has seen a revolution in data collection. Local victims as well as people all over the world collect observations and make them available on the web. Yet, this crucial and timely information source comes unstructured. This hinders a processing and integration, and often a general consideration of this information. Linked Open Data is supported by number of freely available technologies, backed up by a large community in academia and it offers the opportunity to create flexible mash-up solutions. At hand of the Ushahidi Haiti platform, this paper suggests crowdsourced Linked Open Data. We take a look at the requirements, the tools that are there to meet these requirements, and suggest an architecture to enable non-experts to contribute Linked Open Data.",
"title": ""
},
{
"docid": "0fc50684d7bb4b4eba85bbd474a6548e",
"text": "Failure of corollary discharge, a mechanism for distinguishing self-generated from externally generated percepts, has been posited to underlie certain positive symptoms of schizophrenia, including auditory hallucinations. Although originally described in the visual system, corollary discharge may exist in the auditory system, whereby signals from motor speech commands prepare auditory cortex for self-generated speech. While associated with sensorimotor systems, it might also apply to inner speech or thought, regarded as our most complex motor act. In this paper, we describe the results of a series of studies in which we have shown that: (1) event-related brain potentials (ERPs) can be used to demonstrate the corollary discharge phenomenon during talking, (2) corollary discharge is abnormal in patients with schizophrenia, (3) EEG gamma band coherence between frontal and temporal lobes is greater during talking than listening and is disrupted by distorted feedback during talking in normals, and (4) patients with schizophrenia do not show this pattern for EEG gamma coherence. While these studies have identified ERPs and EEG gamma coherence indices of the efference copy/corollary discharge system and documented abnormalities in these systems in patients with schizophrenia, we have so far had limited success in establishing a relationship between these neurobiologic indicators of corollary discharge abnormality and reports of hallucinations in patients.",
"title": ""
},
{
"docid": "1c1042473f724da2ba2400110c2d4c48",
"text": "Recent work has shown good recognition results in 3D object recognition using 3D convolutional networks. In this paper, we show that the object orientation plays an important role in 3D recognition. More specifically, we argue that objects induce different features in the network under rotation. Thus, we approach the category-level classification task as a multi-task problem, in which the network is trained to predict the pose of the object in addition to the class label as a parallel task. We show that this yields significant improvements in the classification results. We test our suggested architecture on several datasets representing various 3D data sources: LiDAR data, CAD models, and RGB-D images. We report state-of-the-art results on classification as well as significant improvements in precision and speed over the baseline on 3D detection.",
"title": ""
},
{
"docid": "b277765cf0ced8162b6f05cc8f91fb71",
"text": "Questions and their corresponding answers within a community based question answering (CQA) site are frequently presented as top search results forWeb search queries and viewed by millions of searchers daily. The number of answers for CQA questions ranges from a handful to dozens, and a searcher would be typically interested in the different suggestions presented in various answers for a question. Yet, especially when many answers are provided, the viewer may not want to sift through all answers but to read only the top ones. Prior work on answer ranking in CQA considered the qualitative notion of each answer separately, mainly whether it should be marked as best answer. We propose to promote CQA answers not only by their relevance to the question but also by the diversification and novelty qualities they hold compared to other answers. Specifically, we aim at ranking answers by the amount of new aspects they introduce with respect to higher ranked answers (novelty), on top of their relevance estimation. This approach is common in Web search and information retrieval, yet it was not addressed within the CQA settings before, which is quite different from classic document retrieval. We propose a novel answer ranking algorithm that borrows ideas from aspect ranking and multi-document summarization, but adapts them to our scenario. Answers are ranked in a greedy manner, taking into account their relevance to the question as well as their novelty compared to higher ranked answers and their coverage of important aspects. An experiment over a collection of Health questions, using a manually annotated gold-standard dataset, shows that considering novelty for answer ranking improves the quality of the ranked answer list.",
"title": ""
},
{
"docid": "a51c3a5136404754cda9eab78c4e1bab",
"text": "Malware encyclopedias now play a vital role in disseminating information about security threats. Coupled with categorization and generalization capabilities, such encyclopedias might help better defend against both isolated and clustered specimens.In this paper, we present Malware Evaluator, a classification framework that treats malware categorization as a supervised learning task, builds learning models with both support vector machines and decision trees and finally, visualizes classifications with self-organizing maps. Malware Evaluator refrains from using readily available taxonomic features to produce species classifications. Instead, we generate attributes of malware strains via a tokenization process and select the attributes used according to their projected information gain. We also deploy word stemming and stopword removal techniques to reduce dimensions of the feature space. In contrast to existing approaches, Malware Evaluator defines its taxonomic features based on the behavior of species throughout their lifecycle, allowing it to discover properties that previously might have gone unobserved. The learning and generalization capabilities of the framework also help detect and categorize zero-day attacks. Our prototype helps establish that malicious strains improve their penetration rate through multiple propagation channels as well as compact code footprints; moreover, they attempt to evade detection by resorting to code polymorphism and information encryption. Malware Evaluator also reveals that breeds in the categories of Trojan, Infector, Backdoor, and Worm significantly contribute to the malware population and impose critical risks on the Internet ecosystem.",
"title": ""
},
{
"docid": "affa48f455d5949564302b4c23324458",
"text": "MicroRNAs (miRNAs) have within the past decade emerged as key regulators of metabolic homoeostasis. Major tissues in intermediary metabolism important during development of the metabolic syndrome, such as β-cells, liver, skeletal and heart muscle as well as adipose tissue, have all been shown to be affected by miRNAs. In the pancreatic β-cell, a number of miRNAs are important in maintaining the balance between differentiation and proliferation (miR-200 and miR-29 families) and insulin exocytosis in the differentiated state is controlled by miR-7, miR-375 and miR-335. MiR-33a and MiR-33b play crucial roles in cholesterol and lipid metabolism, whereas miR-103 and miR-107 regulates hepatic insulin sensitivity. In muscle tissue, a defined number of miRNAs (miR-1, miR-133, miR-206) control myofibre type switch and induce myogenic differentiation programmes. Similarly, in adipose tissue, a defined number of miRNAs control white to brown adipocyte conversion or differentiation (miR-365, miR-133, miR-455). The discovery of circulating miRNAs in exosomes emphasizes their importance as both endocrine signalling molecules and potentially disease markers. Their dysregulation in metabolic diseases, such as obesity, type 2 diabetes and atherosclerosis stresses their potential as therapeutic targets. This review emphasizes current ideas and controversies within miRNA research in metabolism.",
"title": ""
},
{
"docid": "947fdb3233e57b5df8ce92df31f2a0be",
"text": "Recent work by Cohen et al. [1] has achieved state-of-the-art results for learning spherical images in a rotation invariant way by using ideas from group representation theory and noncommutative harmonic analysis. In this paper we propose a generalization of this work that generally exhibits improved performace, but from an implementation point of view is actually simpler. An unusual feature of the proposed architecture is that it uses the Clebsch–Gordan transform as its only source of nonlinearity, thus avoiding repeated forward and backward Fourier transforms. The underlying ideas of the paper generalize to constructing neural networks that are invariant to the action of other compact groups.",
"title": ""
}
] |
scidocsrr
|
8d0f890590d41d3e24f7463ed329ccad
|
Blockchain-Based Database to Ensure Data Integrity in Cloud Computing Environments
|
[
{
"docid": "016a07d2ddb55149708409c4c62c67e3",
"text": "Cloud computing has emerged as a computational paradigm and an alternative to the conventional computing with the aim of providing reliable, resilient infrastructure, and with high quality of services for cloud users in both academic and business environments. However, the outsourced data in the cloud and the computation results are not always trustworthy because of the lack of physical possession and control over the data for data owners as a result of using to virtualization, replication and migration techniques. Since that the security protection the threats to outsourced data have become a very challenging and potentially formidable task in cloud computing, many researchers have focused on ameliorating this problem and enabling public auditability for cloud data storage security using remote data auditing (RDA) techniques. This paper presents a comprehensive survey on the remote data storage auditing in single cloud server domain and presents taxonomy of RDA approaches. The objective of this paper is to highlight issues and challenges to current RDA protocols in the cloud and the mobile cloud computing. We discuss the thematic taxonomy of RDA based on significant parameters such as security requirements, security metrics, security level, auditing mode, and update mode. The state-of-the-art RDA approaches that have not received much coverage in the literature are also critically analyzed and classified into three groups of provable data possession, proof of retrievability, and proof of ownership to present a taxonomy. It also investigates similarities and differences in such framework and discusses open research issues as the future directions in RDA research. & 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9f6e103a331ab52b303a12779d0d5ef6",
"text": "Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin’s blockchain protocol, Bitcoin-NG is Byzantine fault tolerant, is robust to extreme churn, and shares the same trust model obviating qualitative changes to the ecosystem. In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network.",
"title": ""
},
{
"docid": "b76e466d4b446760bf3fd5d70e2edc1b",
"text": "Cloud computing has emerged as a long-dreamt vision of the utility computing paradigm that provides reliable and resilient infrastructure for users to remotely store data and use on-demand applications and services. Currently, many individuals and organizations mitigate the burden of local data storage and reduce the maintenance cost by outsourcing data to the cloud. However, the outsourced data is not always trustworthy due to the loss of physical control and possession over the data. As a result, many scholars have concentrated on relieving the security threats of the outsourced data by designing the Remote Data Auditing (RDA) technique as a new concept to enable public auditability for the stored data in the cloud. The RDA is a useful technique to check the reliability and integrity of data outsourced to a single or distributed servers. This is because all of the RDA techniques for single cloud servers are unable to support data recovery; such techniques are complemented with redundant storage mechanisms. The article also reviews techniques of remote data auditing more comprehensively in the domain of the distributed clouds in conjunction with the presentation of classifying ongoing developments within this specified area. The thematic taxonomy of the distributed storage auditing is presented based on significant parameters, such as scheme nature, security pattern, objective functions, auditing mode, update mode, cryptography model, and dynamic data structure. The more recent remote auditing approaches, which have not gained considerable attention in distributed cloud environments, are also critically analyzed and further categorized into three different classes, namely, replication based, erasure coding based, and network coding based, to present a taxonomy. This survey also aims to investigate similarities and differences of such a framework on the basis of the thematic taxonomy to diagnose significant and explore major outstanding issues.",
"title": ""
}
] |
[
{
"docid": "8fcb30825553e58ff66fd85ded10111e",
"text": "Most ecological processes now show responses to anthropogenic climate change. In terrestrial, freshwater, and marine ecosystems, species are changing genetically, physiologically, morphologically, and phenologically and are shifting their distributions, which affects food webs and results in new interactions. Disruptions scale from the gene to the ecosystem and have documented consequences for people, including unpredictable fisheries and crop yields, loss of genetic diversity in wild crop varieties, and increasing impacts of pests and diseases. In addition to the more easily observed changes, such as shifts in flowering phenology, we argue that many hidden dynamics, such as genetic changes, are also taking place. Understanding shifts in ecological processes can guide human adaptation strategies. In addition to reducing greenhouse gases, climate action and policy must therefore focus equally on strategies that safeguard biodiversity and ecosystems.",
"title": ""
},
{
"docid": "6c04e51492224fa3dc2c5bbf6608266b",
"text": "In many applications, one can obtain descriptions about the same objects or events from a variety of sources. As a result, this will inevitably lead to data or information conflicts. One important problem is to identify the true information (i.e., the truths) among conflicting sources of data. It is intuitive to trust reliable sources more when deriving the truths, but it is usually unknown which one is more reliable a priori. Moreover, each source possesses a variety of properties with different data types. An accurate estimation of source reliability has to be made by modeling multiple properties in a unified model. Existing conflict resolution work either does not conduct source reliability estimation, or models multiple properties separately. In this paper, we propose to resolve conflicts among multiple sources of heterogeneous data types. We model the problem using an optimization framework where truths and source reliability are defined as two sets of unknown variables. The objective is to minimize the overall weighted deviation between the truths and the multi-source observations where each source is weighted by its reliability. Different loss functions can be incorporated into this framework to recognize the characteristics of various data types, and efficient computation approaches are developed. Experiments on real-world weather, stock and flight data as well as simulated multi-source data demonstrate the necessity of jointly modeling different data types in the proposed framework.",
"title": ""
},
{
"docid": "644de61e0da130aafcd65691a8e1f47a",
"text": "We report on the first implementation of a single photon avalanche diode (SPAD) in 130 nm complementary metal-oxide-semiconductor (CMOS) technology. The SPAD is fabricated as p+/n-well junction with octagonal shape. A guard ring of p-well around the p+ anode is used to prevent premature discharge. To investigate the dynamics of the new device, both active and passive quenching methods have been used. Single photon detection is achieved by sensing the avalanche using a fast comparator. The SPAD exhibits a maximum photon detection probability of 41% and a typical dark count rate of 100 kHz at room temperature. Thanks to its timing resolution of 144 ps full-width at half-maximum (FWHM), the SPAD has several uses in disparate disciplines, including medical imaging, 3D vision, biophotonics, low-light illumination imaging, etc.",
"title": ""
},
{
"docid": "8f70026ff59ed1ae54ab5b6dadd2a3da",
"text": "Exoskeleton suit is a kind of human-machine robot, which combines the humans intelligence with the powerful energy of mechanism. It can help people to carry heavy load, walking on kinds of terrains and have a broadly apply area. Though many exoskeleton suits has been developed, there need many complex sensors between the pilot and the exoskeleton system, which decrease the comfort of the pilot. Sensitivity amplification control (SAC) is a method applied in exoskeleton system without any sensors between the pilot and the exoskeleton. In this paper simulation research was made to verify the feasibility of SAC include a simple 1-dof model and a swing phase model of 3-dof. A PID controller was taken to describe the human-machine interface model. Simulation results show the human only need to exert a scale-down version torque compared with the actuator and decrease the power consumes of the pilot.",
"title": ""
},
{
"docid": "5407b8e976d7e6e1d7aa1e00c278a400",
"text": "In his paper a 7T SRAM cell operating well in low voltages is presented. Suitable read operation structure is provided by controlling the drain induced barrier lowering (DIBL) effect and body-source voltage in the hold `1' state. The read-operation structure of the proposed cell utilizes the single transistor which leads to a larger write margin. The simulation results at 90nm TSMC CMOS demonstrate the outperforms of the proposed SRAM cell in terms of power dissipation, write margin, sensitivity to process variations as compared with the other most efficient low-voltage SRAM cells.",
"title": ""
},
{
"docid": "73e27f751c8027bac694f2e876d4d910",
"text": "The numerous and diverse applications of the Internet of Things (IoT) have the potential to change all areas of daily life of individuals, businesses, and society as a whole. The vision of a pervasive IoT spans a wide range of application domains and addresses the enabling technologies needed to meet the performance requirements of various IoT applications. In order to accomplish this vision, this paper aims to provide an analysis of literature in order to propose a new classification of IoT applications, specify and prioritize performance requirements of such IoT application classes, and give an insight into state-of-the-art technologies used to meet these requirements, all from telco’s perspective. A deep and comprehensive understanding of the scope and classification of IoT applications is an essential precondition for determining their performance requirements with the overall goal of defining the enabling technologies towards fifth generation (5G) networks, while avoiding over-specification and high costs. Given the fact that this paper presents an overview of current research for the given topic, it also targets the research community and other stakeholders interested in this contemporary and attractive field for the purpose of recognizing research gaps and recommending new research directions.",
"title": ""
},
{
"docid": "ca4100a8c305c064ea8716702859f11b",
"text": "It is widely believed, in the areas of optics, image analysis, and visual perception, that the Hilbert transform does not extend naturally and isotropically beyond one dimension. In some areas of image analysis, this belief has restricted the application of the analytic signal concept to multiple dimensions. We show that, contrary to this view, there is a natural, isotropic, and elegant extension. We develop a novel two-dimensional transform in terms of two multiplicative operators: a spiral phase spectral (Fourier) operator and an orientational phase spatial operator. Combining the two operators results in a meaningful two-dimensional quadrature (or Hilbert) transform. The new transform is applied to the problem of closed fringe pattern demodulation in two dimensions, resulting in a direct solution. The new transform has connections with the Riesz transform of classical harmonic analysis. We consider these connections, as well as others such as the propagation of optical phase singularities and the reconstruction of geomagnetic fields.",
"title": ""
},
{
"docid": "f250e8879618f73d5e23676a96f02e81",
"text": "Brain oscillatory activity is associated with different cognitive processes and plays a critical role in meditation. In this study, we investigated the temporal dynamics of oscillatory changes during Sahaj Samadhi meditation (a concentrative form of meditation that is part of Sudarshan Kriya yoga). EEG was recorded during Sudarshan Kriya yoga meditation for meditators and relaxation for controls. Spectral and coherence analysis was performed for the whole duration as well as specific blocks extracted from the initial, middle, and end portions of Sahaj Samadhi meditation or relaxation. The generation of distinct meditative states of consciousness was marked by distinct changes in spectral powers especially enhanced theta band activity during deep meditation in the frontal areas. Meditators also exhibited increased theta coherence compared to controls. The emergence of the slow frequency waves in the attention-related frontal regions provides strong support to the existing claims of frontal theta in producing meditative states along with trait effects in attentional processing. Interestingly, increased frontal theta activity was accompanied reduced activity (deactivation) in parietal–occipital areas signifying reduction in processing associated with self, space and, time.",
"title": ""
},
{
"docid": "a036dd162a23c5d24125d3270e22aaf7",
"text": "1 Problem Description This work is focused on the relationship between the news articles (breaking news) and stock prices. The student will design and develop methods to analyze how and when the news articles influence the stock market. News articles about Norwegian oil related companies and stock prices from \" BW Offshore Limited \" (BWO), \" DNO International \" (DNO), \" Frontline \" (FRO), \" Petroleum Geo-Services \" (PGS), \" Seadrill \" (SDRL), \" Sevan Marine \" (SEVAN), \" Siem Offshore \" (SIOFF), \" Statoil \" (STL) and \" TGS-NOPEC Geophysical Company \" (TGS) will be crawled, preprocessed and the important features in the text will be extracted to effectively represent the news in a form that allows the application of computational techniques. This data will then be used to train text sense classifiers. A prototype system that employs such classifiers will be developed to support the trader in taking sell/buy decisions. Methods will be developed for automaticall sense-labeling of news that are informed by the correlation between the changes in the stock prices and the breaking news. Performance of the prototype decision support system will be compared with a chosen baseline method for trade-related decision making. Abstract This thesis investigates the prediction of possible stock price changes immediately after news article publications. This is done by automatic analysis of these news articles. Some background information about financial trading theory and text mining is given in addition to an overview of earlier related research in the field of automatic news article analyzes with the purpose of predicting future stock prices. In this thesis a system is designed and implemented to predict stock price trends for the time immediately after the publication of news articles. This system consists mainly of four components. The first component gathers news articles and stock prices automatically from internet. The second component prepares the news articles by sending them to some document preprocessing steps and finding relevant features before they are sent to a document representation process. The third component categorizes the news articles into predefined categories, and finally the fourth component applies appropriate trading strategies depending on the category of the news article. This system requires a labeled data set to train the categorization component. This data set is labeled automatically on the basis of the price trends directly after the news article publication. An additional label refining step using clustering is added in an …",
"title": ""
},
{
"docid": "db7bc8bbfd7dd778b2900973f2cfc18d",
"text": "In this paper, the self-calibration of micromechanical acceleration sensors is considered, specifically, based solely on user-generated movement data without the support of laboratory equipment or external sources. The autocalibration algorithm itself uses the fact that under static conditions, the squared norm of the measured sensor signal should match the magnitude of the gravity vector. The resulting nonlinear optimization problem is solved using robust statistical linearization instead of the common analytical linearization for computing bias and scale factors of the accelerometer. To control the forgetting rate of the calibration algorithm, artificial process noise models are developed and compared with conventional ones. The calibration methodology is tested using arbitrarily captured acceleration profiles of the human daily routine and shows that the developed algorithm can significantly reject any misconfiguration of the acceleration sensor.",
"title": ""
},
{
"docid": "f3a838d6298c8ae127e548ba62e872eb",
"text": "Plasmodium falciparum resistance to artemisinins, the most potent and fastest acting anti-malarials, threatens malaria elimination strategies. Artemisinin resistance is due to mutation of the PfK13 propeller domain and involves an unconventional mechanism based on a quiescence state leading to parasite recrudescence as soon as drug pressure is removed. The enhanced P. falciparum quiescence capacity of artemisinin-resistant parasites results from an increased ability to manage oxidative damage and an altered cell cycle gene regulation within a complex network involving the unfolded protein response, the PI3K/PI3P/AKT pathway, the PfPK4/eIF2α cascade and yet unidentified transcription factor(s), with minimal energetic requirements and fatty acid metabolism maintained in the mitochondrion and apicoplast. The detailed study of these mechanisms offers a way forward for identifying future intervention targets to fend off established artemisinin resistance.",
"title": ""
},
{
"docid": "20563a2f75e074fe2a62a5681167bc01",
"text": "The introduction of a new generation of attractive touch screen-based devices raises many basic usability questions whose answers may influence future design and market direction. With a set of current mobile devices, we conducted three experiments focusing on one of the most basic interaction actions on touch screens: the operation of soft buttons. Issues investigated in this set of experiments include: a comparison of soft button and hard button performance; the impact of audio and vibrato-tactile feedback; the impact of different types of touch sensors on use, behavior, and performance; a quantitative comparison of finger and stylus operation; and an assessment of the impact of soft button sizes below the traditional 22 mm recommendation as well as below finger width.",
"title": ""
},
{
"docid": "cbc2b592efc227a5c6308edfbca51bd6",
"text": "The rapidly growing presence of Internet of Things (IoT) devices is becoming a continuously alluring playground for malicious actors who try to harness their vast numbers and diverse locations. One of their primary goals is to assemble botnets that can serve their nefarious purposes, ranging from Denial of Service (DoS) to spam and advertisement fraud. The most recent example that highlights the severity of the problem is the Mirai family of malware, which is accountable for a plethora of massive DDoS attacks of unprecedented volume and diversity. The aim of this paper is to offer a comprehensive state-of-the-art review of the IoT botnet landscape and the underlying reasons of its success with a particular focus on Mirai and major similar worms. To this end, we provide extensive details on the internal workings of IoT malware, examine their interrelationships, and elaborate on the possible strategies for defending against them.",
"title": ""
},
{
"docid": "5d6bd34fb5fdb44950ec5d98e77219c3",
"text": "This paper describes an experimental setup and results of user tests focusing on the perception of temporal characteristics of vibration of a mobile device. The experiment consisted of six vibration stimuli of different length. We asked the subjects to score the subjective perception level in a five point Lickert scale. The results suggest that the optimal duration of the control signal should be between 50 and 200 ms in this specific case. Longer durations were perceived as being irritating.",
"title": ""
},
{
"docid": "8d350cc11997b6a0dc96c9fef2b1919f",
"text": "Task-parameterized models of movements aims at automatically adapting movements to new situations encountered by a robot. The task parameters can for example take the form of positions of objects in the environment, or landmark points that the robot should pass through. This tutorial aims at reviewing existing approaches for task-adaptive motion encoding. It then narrows down the scope to the special case of task parameters that take the form of frames of reference, coordinate systems, or basis functions, which are most commonly encountered in service robotics. Each section of the paper is accompanied with source codes designed as simple didactic examples implemented in Matlab with a full compatibility with GNU Octave, closely following the notation and equations of the article. It also presents ongoing work and further challenges that remain to be addressed, with examples provided in simulation and on a real robot (transfer of manipulation behaviors to the Baxter bimanual robot). The repository for the accompanying source codes is available at http://www.idiap.ch/software/pbdlib/.",
"title": ""
},
{
"docid": "6442c9e4eb9034abf90fcd697c32a343",
"text": "With the increasing popularity and demand for mobile applications, there has been a significant increase in the number of mobile application development projects. Highly volatile requirements of mobile applications require adaptive software development methods. The Agile approach is seen as a natural fit for mobile application and there is a need to explore various Agile methodologies for the development of mobile applications. This paper evaluates how adopting various Agile approaches improves the development of mobile applications and if they can be used in order to provide more tailor-made process improvements within an organization. A survey related to mobile application development process improvement was developed. The use of various Agile approaches for success in mobile application development were evaluated by determining the significance of the most used Agile engineering paradigms such as XP, Scrum, and Lean. The findings of the study show that these Agile methods have the potential to help deliver enhanced speed and quality for mobile application development.",
"title": ""
},
{
"docid": "c6f173f75917ee0632a934103ca7566c",
"text": "Mersenne Twister (MT) is a widely-used fast pseudorandom number generator (PRNG) with a long period of 2 − 1, designed 10 years ago based on 32-bit operations. In this decade, CPUs for personal computers have acquired new features, such as Single Instruction Multiple Data (SIMD) operations (i.e., 128bit operations) and multi-stage pipelines. Here we propose a 128-bit based PRNG, named SIMD-oriented Fast Mersenne Twister (SFMT), which is analogous to MT but making full use of these features. Its recursion fits pipeline processing better than MT, and it is roughly twice as fast as optimised MT using SIMD operations. Moreover, the dimension of equidistribution of SFMT is better than MT. We also introduce a block-generation function, which fills an array of 32-bit integers in one call. It speeds up the generation by a factor of two. A speed comparison with other modern generators, such as multiplicative recursive generators, shows an advantage of SFMT. The implemented C-codes are downloadable from http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html.",
"title": ""
},
{
"docid": "77b4cb00c3a72fdeefa99aa504f492d8",
"text": "This article considers a short survey of basic methods of social networks analysis, which are used for detecting cyber threats. The main types of social network threats are presented. Basic methods of graph theory and data mining, that deals with social networks analysis are described. Typical security tasks of social network analysis, such as community detection in network, detection of leaders in communities, detection experts in networks, clustering text information and others are considered.",
"title": ""
},
{
"docid": "3669d58dc1bed1d83e5d0d6747771f0e",
"text": "To cite: He A, Kwatra SG, Kim N, et al. BMJ Case Rep Published online: [please include Day Month Year] doi:10.1136/bcr-2016214761 DESCRIPTION A 26-year-old woman with a reported history of tinea versicolour presented for persistent hypopigmentation on her bilateral forearms. Detailed examination revealed multiple small (5–10 mm), irregularly shaped white macules on the extensor surfaces of the bilateral forearms overlying slightly erythaematous skin. The surrounding erythaematous skin blanched with pressure and with elevation of the upper extremities the white macules were no longer visible (figures 1 and 2). A clinical diagnosis of Bier spots was made based on the patient’s characteristic clinical features. Bier spots are completely asymptomatic and are often found on the extensor surfaces of the upper and lower extremities, although they are sometimes generalised. They are a benign physiological vascular anomaly, arising either from cutaneous vessels responding to venous hypertension or from small vessel vasoconstriction leading to tissue hypoxia. 3 Our patient had neither personal nor family history of vascular disease. Bier spots are easily diagnosed by a classic sign on physical examination: the pale macules disappear with pressure applied on the surrounding skin or by elevating the affected limbs (figure 2). However, Bier spots can be easily confused with a variety of other disorders associated with hypopigmented macules. The differential diagnosis includes vitiligo, postinflammatory hypopigmentation and tinea versicolour, which was a prior diagnosis in this case. Bier spots are often idiopathic and regress spontaneously, although there are reports of Bier spots heralding systemic diseases, such as scleroderma renal crisis, mixed cryoglobulinaemia or lymphoma. Since most Bier spots are idiopathic and transient, no treatment is required.",
"title": ""
},
{
"docid": "f331337a19cff2cf29e89a87d7ab234f",
"text": "This paper presents an investigation of lexical chaining (Morris and Hirst, 1991) for measuring discourse coherence quality in test-taker essays. We hypothesize that attributes of lexical chains, as well as interactions between lexical chains and explicit discourse elements, can be harnessed for representing coherence. Our experiments reveal that performance achieved by our new lexical chain features is better than that of previous discourse features used for this task, and that the best system performance is achieved when combining lexical chaining features with complementary discourse features, such as those provided by a discourse parser based on rhetorical structure theory, and features that reflect errors in grammar, word usage, and mechanics.",
"title": ""
}
] |
scidocsrr
|
808bf55e7a9e46337e9302ca9e863081
|
Design of reconfigurable fractal antenna using pin diode switch for wireless applications
|
[
{
"docid": "dbc253488a9f5d272e75b38dc98ea101",
"text": "A new form of a hybrid design of a microstrip-fed parasitic coupled ring fractal monopole antenna with semiellipse ground plane is proposed for modern mobile devices having a wireless local area network (WLAN) module along with a Worldwide Interoperability for Microwave Access (WiMAX) function. In comparison to the previous monopole structures, the miniaturized antenna dimension is only about 25 × 25 × 1 mm3 , which is 15 times smaller than the previous proposed design. By only increasing the fractal iterations, very good impedance characteristics are obtained. Throughout this letter, the improvement process of the impedance and radiation properties is completely presented and discussed.",
"title": ""
}
] |
[
{
"docid": "0e2311c6dd24a2efe51de10c3d5e8a01",
"text": "Continents, especially their Archean cores, are underlain by thick thermal boundary layers that have been largely isolated from the convecting mantle over billion-year timescales, far exceeding the life span of oceanic thermal boundary layers. This longevity is promoted by the fact that continents are underlain by highly melt-depleted peridotites, which result in a chemically distinct boundary layer that is intrinsically buoyant and strong (owing to dehydration). This chemical boundary layer counteracts the destabilizing effect of the cold thermal state of continents. The compositions of cratonic peridotites require formation at shallower depths than they currently reside, suggesting that the building blocks of continents formed in oceanic or arc environments and became “continental” after significant thickening or underthrusting. Continents are difficult to destroy, but refertilization and rehydration of continental mantle by the passage of melts can nullify the unique stabilizing composition of continents.",
"title": ""
},
{
"docid": "f1d7e1b222e1ae313c3e751e8ba443f3",
"text": "INTRODUCTION\nLapatinib, an orally active tyrosine kinase inhibitor of epidermal growth factor receptor ErbB1 (EGFR) and ErbB2 (HER2), has activity as monotherapy and in combination with chemotherapy in HER2-overexpressing metastatic breast cancer (MBC).\n\n\nMETHODS\nThis phase II single-arm trial assessed the safety and efficacy of first-line lapatinib in combination with paclitaxel in previously untreated patients with HER2-overexpressing MBC. The primary endpoint was the overall response rate (ORR). Secondary endpoints were the duration of response (DoR), time to response, time to progression, progression-free survival (PFS), overall survival, and the incidence and severity of adverse events. All endpoints were investigator- and independent review committee (IRC)-assessed.\n\n\nRESULTS\nThe IRC-assessed ORR was 51% (29/57 patients with complete or partial response) while the investigator-assessed ORR was 77% (44/57). As per the IRC, the median DoR was 39.7 weeks, and the median PFS was 47.9 weeks. The most common toxicities were diarrhea (56%), neutropenia (44%), rash (40%), fatigue (25%), and peripheral sensory neuropathy (25%).\n\n\nCONCLUSIONS\nFirst-line lapatinib plus paclitaxel for HER2-overexpressing MBC produced an encouraging ORR with manageable toxicities. This combination may be useful in first-line treatment for patients with HER2-overexpressing MBC and supports the ongoing evaluation of this combination as first-line therapy in HER2-overexpressing MBC.",
"title": ""
},
{
"docid": "ed60a6cd11df89b5e5dac11e75179001",
"text": "Supply chain management has become more important as an academic topic due to trends in globalization leading to massive reallocation of production related advantages. Because of the massive amount of data that is generated in the global economy, new tools need to be developed in order to manage and analyze the data, as well as to monitor organizational performance worldwide. This paper proposes a framework of business analytics for supply chain analytics (SCA) as IT-enabled, analytical dynamic capabilities composed of data management capability, analytical supply chain process capability, and supply chain performance management capability. This paper also presents a dynamic-capabilities view of SCA and extensively describes a set of its three capabilities: data management capability, analytical supply chain process capability, and supply chain performance management capability. Next, using the SCM best practice, sales & operations planning (S&OP), the paper demonstrates opportunities to apply SCA in an integrated way. In discussing the implications of the proposed framework, ̄nally, the paper examines several propositions predicting the positive impact of SCA and its individual capability on SCM performance.",
"title": ""
},
{
"docid": "4d75db0597f4ca4d4a3abba398e99cb4",
"text": "Coverage path planning determines a path that guides an autonomous vehicle to pass every part of a workspace completely and efficiently. Since turns are often costly for autonomous vehicles, minimizing the number of turns usually produces more working efficiency. This paper presents an optimization approach to minimize the number of turns of autonomous vehicles in coverage path planning. For complex polygonal fields, the problem is reduced to finding the optimal decomposition of the original field into simple subfields. The optimization criterion is minimization of the sum of widths of these decomposed subfields. Here, a new algorithm is designed based on a multiple sweep line decomposition. The time complexity of the proposed algorithm is O(n2 log n). Experiments show that the proposed algorithm can provide nearly optimal solutions very efficiently when compared against recent state-of-the-art. The proposed algorithm can be applied for both convex and non-convex fields.",
"title": ""
},
{
"docid": "902aab15808014d55a9620bcc48621f5",
"text": "Software developers are always looking for ways to boost their effectiveness and productivity and perform complex jobs more quickly and easily, particularly as projects have become increasingly large and complex. Programmers want to shed unneeded complexity and outdated methodologies and move to approaches that focus on making programming simpler and faster. With this in mind, many developers are increasingly using dynamic languages such as JavaScript, Perl, Python, and Ruby. Although software experts disagree on the exact definition, a dynamic language basically enables programs that can change their code and logical structures at runtime, adding variable types, module names, classes, and functions as they are running. These languages frequently are interpreted and generally check typing at runtime",
"title": ""
},
{
"docid": "22418c06e09887d5994aee27ea05691d",
"text": "About a decade ago, psychology of the arts started to gain momentum owing to a number of drives: technological progress improved the conditions under which art could be studied in the laboratory, neuroscience discovered the arts as an area of interest, and new theories offered a more comprehensive look at aesthetic experiences. Ten years ago, Leder, Belke, Oeberst, and Augustin (2004) proposed a descriptive information-processing model of the components that integrate an aesthetic episode. This theory offered explanations for modern art's large number of individualized styles, innovativeness, and for the diverse aesthetic experiences it can stimulate. In addition, it described how information is processed over the time course of an aesthetic episode, within and over perceptual, cognitive and emotional components. Here, we review the current state of the model, and its relation to the major topics in empirical aesthetics today, including the nature of aesthetic emotions, the role of context, and the neural and evolutionary foundations of art and aesthetics.",
"title": ""
},
{
"docid": "066bf4179cc4052f381c509aeaf9e643",
"text": "We present a mathematical formulation of a theory of language change. The theory is evolutionary in nature and has close analogies with theories of population genetics. The mathematical structure we construct similarly has correspondences with the Fisher-Wright model of population genetics, but there are significant differences. The continuous time formulation of the model is expressed in terms of a Fokker-Planck equation. This equation is exactly soluble in the case of a single speaker and can be investigated analytically in the case of multiple speakers who communicate equally with all other speakers and give their utterances equal weight. Whilst the stationary properties of this system have much in common with the single-speaker case, time-dependent properties are richer. In the particular case where linguistic forms can become extinct, we find that the presence of many speakers causes a two-stage relaxation, the first being a common marginal distribution that persists for a long time as a consequence of ultimate extinction being due to rare fluctuations.",
"title": ""
},
{
"docid": "2c4ee0d42347cf75096caec62dda97f3",
"text": "A new real-time obstacle avoidance method for mobile robots has been developed and implemented. This method, named the Vector Field Hisfog\" (VFH), permits the detection of unknown obstacles and avoids collisions while simultaneously steering the mobile robot toward the target. A VFH-controlled mobile robot maneuvers quickly and without stopping among densely cluttered obstacles. The VFH method uses a two-dimensional Cartesian Histognun Gfid as a world model. This world model is updated continuously and in real-time with range data sampled by the onboard ultrasonic range sensors. Based on the accumulated environmental data, the VFH method then computes a one-dimensional Polar Histogram that is constructed around the robot's momentary location. Each sector in the Polar Histogram holds thepolar obstacle density in that direction. Finally, the algorithm selects the most suitable sector from among aU Polar Hisfogmi sectors with low obstacle density, and the steering of the robot is aligned with that direction. Experimental results from a mobile robot traversing a densely cluttered obstacle course at an average speed of 0.7 m/sec demonstrate the power of the VFH method.",
"title": ""
},
{
"docid": "4a26afba58270d7ce1a0eb50bd659eae",
"text": "Recommendation can be reduced to a sub-problem of link prediction, with specific nodes (users and items) and links (similar relations among users/items, and interactions between users and items). However, the previous link prediction algorithms need to be modified to suit the recommendation cases since they do not consider the separation of these two fundamental relations: similar or dissimilar and like or dislike. In this paper, we propose a novel and unified way to solve this problem, which models the relation duality using complex number. Under this representation, the previous works can directly reuse. In experiments with the Movie Lens dataset and the Android software website AppChina.com, the presented approach achieves significant performance improvement comparing with other popular recommendation algorithms both in accuracy and coverage. Besides, our results revealed some new findings. First, it is observed that the performance is improved when the user and item popularities are taken into account. Second, the item popularity plays a more important role than the user popularity does in final recommendation. Since its notable performance, we are working to apply it in a commercial setting, AppChina.com website, for application recommendation.",
"title": ""
},
{
"docid": "b6a68089a65d3fb183be256fd72b8720",
"text": "Headline generation is a special type of text summarization task. While the amount of available training data for this task is almost unlimited, it still remains challenging, as learning to generate headlines for news articles implies that the model has strong reasoning about natural language. To overcome this issue, we applied recent Universal Transformer architecture paired with byte-pair encoding technique and achieved new state-of-the-art results on the New York Times Annotated corpus with ROUGE-L F1-score 24.84 and ROUGE-2 F1-score 13.48. We also present the new RIA corpus and reach ROUGE-L F1-score 36.81 and ROUGE-2 F1-score 22.15 on it.",
"title": ""
},
{
"docid": "a7948753e11deab581ebcfd0d99a84df",
"text": "This paper explores a relatively less popular source of clean energy. Noise (sound) energy can be converted into viable source of electric power by using a suitable transducer. This can be done by using a transducer by converting vibrations caused by noise into electrical energy. An application is proposed for the same, in which a speaker and a transformer are used to convert noise produced by car horn into electrical energy. The vibrations created by noise can be converted into electrical energy through the principle of electromagnetic induction. The received signal was stepped up using a transformer. A similar setup was placed at distance of 1 meter from the exhaust pipe of a 350 cubic centimeter engine of a motorbike. The demonstrated ideas probe into a clean and readily available source of energy.",
"title": ""
},
{
"docid": "2c5e280525168d71d1a48fec047b5a23",
"text": "This paper presents the implementation of four channel Electromyography (EMG) signal acquisition system for acquiring the EMG signal of the lower limb muscles during ankle joint movements. Furthermore, some post processing and statistical analysis for the recorded signal were presented. Four channels were implemented using instrumentation amplifier (INA114) for pre-amplification stage then the amplified signal subjected to the band pass filter to eliminate the unwanted signals. Operational amplifier (OPA2604) was involved for the main amplification stage to get the output signal in volts. The EMG signals were detected during movement of the ankle joint of a healthy subject. Then the signal was sampled at the rate of 2 kHz using NI6009 DAQ and Labview used for displaying and storing the acquired signal. For EMG temporal representation, mean absolute value (MAV) analysis algorithm is used to investigate the level of the muscles activity. This data will be used in future as a control input signal to drive the ankle joint exoskeleton robot.",
"title": ""
},
{
"docid": "2fdf511e81080b5029f13801d5c6d783",
"text": "Content, usability, and aesthetics are core constructs in users’ perception and evaluation of websites, but little is known about their interplay in different use phases. In a first study web users (N=330) stated content as most relevant, followed by usability and aesthetics. In study 2 tests with four websites were performed (N=300), resulting data were modeled in path analyses. In this model aesthetics had the largest influence on first impressions, while all three constructs had an impact on first and overall impressions. However, only content contributed significantly to the intention to revisit or recommend a website. Using data from a third study (N=512, 42 websites), we were able to replicate this model. As before, perceived usability affected first and overall impressions, while content perception was important for all analyzed website use phases. In addition, aesthetics also had a small but significant impact on the participants’ intentions to revisit or recommend.",
"title": ""
},
{
"docid": "8fa43f63a4520e0097e94c12948121c6",
"text": "This paper describes a novel fabrication technique called hybrid deposition manufacturing (HDM), which combines additive manufacturing (AM) processes such as fused deposition manufacturing (FDM) with material deposition and embedded components to produce multimaterial parts and systems for robotics, mechatronics, and articulated mechanism applications. AM techniques are used to print both permanent components and sacrificial molds for deposited resins and inserted parts. Design strategies and practical techniques for developing these structures and molds are described, taking into account considerations such as printer resolution, build direction, and printed material strength. The strengths of interfaces between printed and deposited materials commonly used in the authors’ implementation of the process are measured to characterize the robustness of the resulting parts. The process is compared to previously documented layered manufacturing methodologies, and the authors present examples of systems produced with the process, including robot fingers, a multimaterial airless tire, and an articulated camera probe. This effort works toward simplifying fabrication and assembly complexity over comparable techniques, leveraging the benefits of AM, and expanding the range of design options for robotic mechanisms. [DOI: 10.1115/1.4029400]",
"title": ""
},
{
"docid": "cf5529a98237df9c8ed46d0a7b5c2f19",
"text": "Domain specific words and ontological information among words are important resources for general natural language applications. This paper proposes a statistical model for finding domain specific words (DSW’s) in particular domains, and thus building the association among them. When applying this model to the hierarchical structure of the web directories node-by-node, the document tree can potentially be converted into a large semantically annotated lexicon tree. Some preliminary results show that the current approach is better than a conventional TF-IDF approach for measuring domain specificity. An average precision of 65.4% and an average recall of 36.3% are observed if the top-10% candidates are extracted as domain-specific words. 1 Domain Specific Words and Lexicon Trees as Important NLP Resources Domain specific words (DSW’s) are important “anchoring words” for natural language processing applications that involve word sense disambiguation (WSD). It is appreciated that multi-sense words appearing in the same document tend to be tagged with the same word sense if they belong to the same common domain in the semantic hierarchy (Yarowsky, 1995). The existence of some DSW’s in a document will therefore be a strong evidence of a specific sense for words within the document. For instance, the existence of “basketball”in a document would strongly suggest the “sport”sense of the word “活塞”(“Pistons”), rather than its “mechanics” sense. It is also a personal belief that DSW-based sense disambiguation, document classification and many similar applications would be easier than sense-based models since sense-tagged documents are rare while domain-aware training documents are abundant on the Web. DSW identification is therefore an important issue. On the other hand, the semantics hierarchy among words (especially among sets of domain specific words) as well as the membership of domain specific words are also important resources for general natural language processing applications, since the hierarchy will provide semantic links and ontological information (such as “is-A”and “part-of”relationships) for words, and, domain specific words belonging to the same domain may have the “synonym” or “antonym”relationships. A hierarchical lexicon tree (or a network, in general) (Fellbaum, 1998; Jurafsky and Martin, 2000), indicative of sets of highly associated domain specific words and their hierarchy, is therefore invaluable for NLP applications. Manually constructing such a lexicon hierarchy and acquiring the associated words for each node in the hierarchy, however, is most likely unaffordable both in terms of time and cost. In addition, new words (or new usages of words) are dynamically produced day by day. For instance, the Chinese word “活塞”(pistons) is more frequently used as the “sport” or “basketball” sense (referring to the “Detroit Pistons”) in Chinese web pages rather than the “mechanics” or “automobile” sense. It is therefore desirable to find an automatic and inexpensive way to construct the whole hierarchy. Since the hierarchical web pages provide semantic tag information (explicitly from the HTML/XML tags or implicitly from the directory names) and useful semantic links, it is desirable that the lexicon construction process could be conducted using the web corpora. 
Actually, the directory hierarchy of the Web can be regarded as a kind of classification tree for web documents, which assigns an implicit hidden tag (represented by the directory name) to each document and hence the embedded domain specific words. Converting such a hierarchy into a lexicon tree is therefore feasible, provided that we can remove non-specific terms from the associated document sets. For instance, the domain-specific words for documents under the “sport”hierarchy are likely to be tagged with a “sport”tag. These tags, in turn, can be used in various word sense disambiguation (WSD) tasks and other hot applications like anti-spamming mail filters. Such rich annotation provides a useful knowledge source for mining various semantic links among words. We therefore will explore a non-conventional view for constructing a lexicon tree from the web hierarchy, where domain-specific word identification turns out to be a key issue and the first step toward such a construction process. An inter-domain entropy (IDE) measure will be proposed for this purpose. 2 Conventional Clustering View for Constructing Lexicon Trees One conventional way to construct the lexicon hierarchy from web corpora is to collect the terms in all web documents and measure the degree of word association between word pairs using some well-known association metrics (Church and Hanks, 1989; Smadja et al., 1996) as the distance measure. Terms of high association are then clustered bottom-up using some clustering techniques to build the hierarchy. The clustered hierarchy is then submitted to lexicographers to assign a semantic label to each sub-cluster. The cost will be reduced in this way, but could still be unaffordable. Besides, it still depends on the lexicographers to assign appropriate semantic tags to the list of highly associated words. There are several disadvantages with this approach. Firstly, the hierarchical relationship among the web documents, and hence the embedded DSW’s, is lost during the document collection process, since the words are collected without considering where they come from in the document hierarchy. The loss of such hierarchical information implies that the clustered one will not match human perception quite well. Secondly, the word association metric and the clustering criteria used by the clustering algorithm are not directly related to human perception. Therefore, the lexicographers may not be able to adjust the clustered hierarchy comfortably. Thirdly, most clustering algorithms merge terms in a binary way; this may not match human perception as well. As far as the computation cost is concerned, computation of word association based on pairwise word association metrics will be time consuming. Actually, such an approach may not be the only option today, thanks to the large number of web documents, which are natively arranged in a hierarchical manner. 3 Lexicon Tree Construction as Domain Specific Word Detection from Web Hierarchy Since the web documents virtually form an extremely huge document classification tree, we propose here a simple approach to convert it into a lexicon tree, and assign implicit semantic tags to the domain specific words in the web documents automatically. 
This simple approach is inspired by the fact that most text materials (webpages) in websites are already classified in a hierarchical manner; the hierarchical directory structures implicitly suggest that the domain specific terms in the text materials of a particular subdirectory are closely related to a common subject, which is identified by the name of the subdirectory. If we can detect domain specific words within each document, and remove words that are non-specific, and tag the DSW’s thus acquired with the directory name (or any appropriate tag), then we virtually get a hierarchical lexicon tree. In such a tree, each node is semantically linked by the original web document hierarchy, and each node has a set of domain specific words associated with it. For instance, a subdirectory entitled ‘entertainment’ is likely to have a large number of web pages containing domain specific terms like ‘singer’, ‘pop songs’, ‘rock & rol’, ‘Ah-Mei’(nickname of a pop song singer), ‘album’, and so on. Since these words are highly associated with the ‘entertainment’domain, we will be able to collect the domain specific words of the ‘entertainment’domain from such a directory. In the extraction process, the directory names can be regarded as implicit sense labels or implicit semantic tags (which may be different from linguistically motivated semantic tags), and the action to put the web pages into properly named directories can be regarded as an implicit tagging process by the webmasters. And, the hierarchical directory itself provides information on the hierarchy of the semantic tags. From a well-organized web site, we will then be able to acquire an implicitly tagged corpus from that site. Thanks to the webmasters, whose daily work include the implicit tagging of the corpora in their websites, there is almost no cost to extract DSW’s from such web corpora. This idea actually extends equally well for other Internet resources, such as news groups and BBS articles, that are associated with hierarchical group names. Extending the idea to well organized book chapters, encyclopedia and things like that would not be surprised too. The advantages of such a construction process, by removing non-specific terms, are many folds. First, the original hierarchical structure reflects human perception on document (and term) classification. Therefore, the need for adjustment may be rare, and the lexicographers may be more comfortable to adjust the hierarchy even if necessary. Second, the directory names may have higher correlation with linguistically motivated sense tags than those assigned by a clustering algorithm, since the web hierarchy was created by a human tagger (i.e., the webmaster). As far as the computation cost is concerned, pairwise word association computation is now replaced by the computation of “domain specificity” of words against domains. The reduction is significant, from O(|W|x|W|) to O(|W|x|D|), where |W| and |D| represent the vocabulary size and number of domains, respectively. 4 Domain Specific Word Extraction as the Key Technology: An Inter-Domain Entropy Approach Since the terms (words or compound words) in the documents include general terms as well as domain-specific terms, the only problem then is an effective model to exclude those domain-independent terms from the implicit tagging process. The degree of domain independency can be measured with the inter-domain entropy (IDE) a",
"title": ""
},
{
"docid": "241c020b8dfe347e362e20dfcd98f419",
"text": "The old electricity network infrastructure has proven to be inadequate, with respect to modern challenges such as alternative energy sources, electricity demand and energy saving policies. Moreover, Information and Communication Technologies (ICT) seem to have reached an adequate level of reliability and flexibility in order to support a new concept of electricity network—the smart grid. In this work, we will analyse the state-of-the-art of smart grids, in their technical, management, security, and optimization aspects. We will also provide a brief overview of the regulatory aspects involved in the development of a smart grid, mainly from the viewpoint of the European Union.",
"title": ""
},
{
"docid": "eab3dff1aecb9cec903e0bbe67b5a66d",
"text": "With a pace of about twice the observed rate of global warming, the temperature on the Qinghai-Tibetan Plateau (Earth's 'third pole') has increased by 0.2 °C per decade over the past 50 years, which results in significant permafrost thawing and glacier retreat. Our review suggested that warming enhanced net primary production and soil respiration, decreased methane (CH(4)) emissions from wetlands and increased CH(4) consumption of meadows, but might increase CH(4) emissions from lakes. Warming-induced permafrost thawing and glaciers melting would also result in substantial emission of old carbon dioxide (CO(2)) and CH(4). Nitrous oxide (N(2)O) emission was not stimulated by warming itself, but might be slightly enhanced by wetting. However, there are many uncertainties in such biogeochemical cycles under climate change. Human activities (e.g. grazing, land cover changes) further modified the biogeochemical cycles and amplified such uncertainties on the plateau. If the projected warming and wetting continues, the future biogeochemical cycles will be more complicated. So facing research in this field is an ongoing challenge of integrating field observations with process-based ecosystem models to predict the impacts of future climate change and human activities at various temporal and spatial scales. To reduce the uncertainties and to improve the precision of the predictions of the impacts of climate change and human activities on biogeochemical cycles, efforts should focus on conducting more field observation studies, integrating data within improved models, and developing new knowledge about coupling among carbon, nitrogen, and phosphorus biogeochemical cycles as well as about the role of microbes in these cycles.",
"title": ""
},
{
"docid": "7dfb3c8159e7758c414d3e8f92a0bc40",
"text": "The net primary production of the biosphere is consumed largely by microorganisms; whose metabolism creates the trophic base for detrital foodwebs, drives element cycles, and mediates atmospheric composition. Biogeochemical constraints on microbial catabolism, relative to primary production, create reserves of detrital organic carbon in soils and sediments that exceed the carbon content of the atmosphere and biomass. The production of organic matter is an intracellular process that generates thousands of compounds from a small number of precursors drawn from intermediary metabolism. Osmotrophs generate growth substrates from the products of biosynthesis and diagenesis by enzyme-catalyzed reactions that occur largely outside cells. These enzymes, which we define as ecoenzymes, enter the environment by secretion and lysis. Enzyme expression is regulated by environmental signals, but once released from the cell, ecoenzymatic activity is determined by environmental interactions, represented as a kinetic cascade, that lead to multiphasic kinetics and large spatiotemporal variation. At the ecosystem level, these interactions can be viewed as an energy landscape that directs the availability and flow of resources. Ecoenzymatic activity and microbial metabolism are integrated on the basis of resource demand relative to environmental availability. Macroecological studies show that the most widely measured ecoenzymatic activities have a similar stoichiometry for all microbial communities. Ecoenzymatic stoichiometry connects the elemental stoichiometry of microbial biomass and detrital organic matter to microbial nutrient assimilation and growth. We present a model that combines the kinetics of enzyme activity and community growth under conditions of multiple resource limitation with elements of metabolic and ecological stoichiometry theory. This biogeochemical equilibrium model provides a framework for comparative studies of microbial community metabolism, the principal driver of biogeochemical cycles.",
"title": ""
},
{
"docid": "e26d52cdc3636e3034d76bc684b9dc95",
"text": "The problem of cross-modal retrieval from multimedia repositories is considered. This problem addresses the design of retrieval systems that support queries across content modalities, for example, using an image to search for texts. A mathematical formulation is proposed, equating the design of cross-modal retrieval systems to that of isomorphic feature spaces for different content modalities. Two hypotheses are then investigated regarding the fundamental attributes of these spaces. The first is that low-level cross-modal correlations should be accounted for. The second is that the space should enable semantic abstraction. Three new solutions to the cross-modal retrieval problem are then derived from these hypotheses: correlation matching (CM), an unsupervised method which models cross-modal correlations, semantic matching (SM), a supervised technique that relies on semantic representation, and semantic correlation matching (SCM), which combines both. An extensive evaluation of retrieval performance is conducted to test the validity of the hypotheses. All approaches are shown successful for text retrieval in response to image queries and vice versa. It is concluded that both hypotheses hold, in a complementary form, although evidence in favor of the abstraction hypothesis is stronger than that for correlation.",
"title": ""
},
{
"docid": "11b88c77e646007e1c2c29c95abf3426",
"text": "Use of social media, such as Facebook, is pervasive among young women. Body dissatisfaction is also highly prevalent in this demographic. The present study examined the relationship between Facebook usage and body image concerns among female university students (N=227), and tested whether appearance comparisons on Facebook in general, or comparisons to specific female target groups (family members, close friends, distant peers [women one may know but do not regularly socialize with], celebrities) mediated this relationship. Results showed a positive relationship between Facebook usage and body image concerns, which was mediated by appearance comparisons in general, frequency of comparisons to close friends and distant peers, and by upward comparisons (judging one's own appearance to be worse) to distant peers and celebrities. Thus, young women who spend more time on Facebook may feel more concerned about their body because they compare their appearance to others (especially to peers) on Facebook.",
"title": ""
}
] |
scidocsrr
|
4f70a710f54f5b340055d06c8d703ee6
|
Influence of immediate post-extraction socket irrigation on development of alveolar osteitis after mandibular third molar removal: a prospective split-mouth study, preliminary report
|
[
{
"docid": "accbfd3c4caade25329a2a5743559320",
"text": "PURPOSE\nThe purpose of this investigation was to assess the frequency of complications of third molar surgery, both intraoperatively and postoperatively, specifically for patients 25 years of age or older.\n\n\nMATERIALS AND METHODS\nThis prospective study evaluated 3,760 patients, 25 years of age or older, who were to undergo third molar surgery by oral and maxillofacial surgeons practicing in the United States. The predictor variables were categorized as demographic (age, gender), American Society of Anesthesiologists classification, chronic conditions and medical risk factors, and preoperative description of third molars (present or absent, type of impaction, abnormalities or association with pathology). Outcome variables were intraoperative and postoperative complications, as well as quality of life issues (days of work missed or normal activity curtailed). Frequencies for data collected were tabulated.\n\n\nRESULTS\nThe sample was provided by 63 surgeons, and was composed of 3,760 patients with 9,845 third molars who were 25 years of age or older, of which 8,333 third molars were removed. Alveolar osteitis was the most frequently encountered postoperative problem (0.2% to 12.7%). Postoperative inferior alveolar nerve anesthesia/paresthesia occurred with a frequency of 1.1% to 1.7%, while lingual nerve anesthesia/paresthesia was calculated as 0.3%. All other complications also occurred with a frequency of less than 1%.\n\n\nCONCLUSION\nThe findings of this study indicate that third molar surgery in patients 25 years of age or older is associated with minimal morbidity, a low incidence of postoperative complications, and minimal impact on the patients quality of life.",
"title": ""
}
] |
[
{
"docid": "23676a52e1ed03d7b5c751a9986a7206",
"text": "Considering the increasingly complex media landscape and diversity of use, it is important to establish a common ground for identifying and describing the variety of ways in which people use new media technologies. Characterising the nature of media-user behaviour and distinctive user types is challenging and the literature offers little guidance in this regard. Hence, the present research aims to classify diverse user behaviours into meaningful categories of user types, according to the frequency of use, variety of use and content preferences. To reach a common framework, a review of the relevant research was conducted. An overview and meta-analysis of the literature (22 studies) regarding user typology was established and analysed with reference to (1) method, (2) theory, (3) media platform, (4) context and year, and (5) user types. Based on this examination, a unified Media-User Typology (MUT) is suggested. This initial MUT goes beyond the current research literature, by unifying all the existing and various user type models. A common MUT model can help the Human–Computer Interaction community to better understand both the typical users and the diversification of media-usage patterns more qualitatively. Developers of media systems can match the users’ preferences more precisely based on an MUT, in addition to identifying the target groups in the developing process. Finally, an MUT will allow a more nuanced approach when investigating the association between media usage and social implications such as the digital divide. 2010 Elsevier Ltd. All rights reserved. 1 Difficulties in understanding media-usage behaviour have also arisen because of",
"title": ""
},
{
"docid": "c02f98ba21ed80995e810c77a6def394",
"text": "Forensic and Security Laboratory School of Computer Engineering, Nanyang Technological University, Block N4, Nanyang Avenue, Singapore 639798 Biometrics Research Centre Department of Computing, The Hong Kong Polytechnic University Kowloon, Hong Kong Pattern Analysis and Machine Intelligence Research Group Department of Electrical and Computer Engineering University of Waterloo, 200 University Avenue West, Ontario, Canada",
"title": ""
},
{
"docid": "85b1fe5c3d6d68791345d32eda99055b",
"text": "Surgery and other invasive therapies are complex interventions, the assessment of which is challenged by factors that depend on operator, team, and setting, such as learning curves, quality variations, and perception of equipoise. We propose recommendations for the assessment of surgery based on a five-stage description of the surgical development process. We also encourage the widespread use of prospective databases and registries. Reports of new techniques should be registered as a professional duty, anonymously if necessary when outcomes are adverse. Case series studies should be replaced by prospective development studies for early technical modifications and by prospective research databases for later pre-trial evaluation. Protocols for these studies should be registered publicly. Statistical process control techniques can be useful in both early and late assessment. Randomised trials should be used whenever possible to investigate efficacy, but adequate pre-trial data are essential to allow power calculations, clarify the definition and indications of the intervention, and develop quality measures. Difficulties in doing randomised clinical trials should be addressed by measures to evaluate learning curves and alleviate equipoise problems. Alternative prospective designs, such as interrupted time series studies, should be used when randomised trials are not feasible. Established procedures should be monitored with prospective databases to analyse outcome variations and to identify late and rare events. Achievement of improved design, conduct, and reporting of surgical research will need concerted action by editors, funders of health care and research, regulatory bodies, and professional societies.",
"title": ""
},
{
"docid": "d961378b22aae8d793b38c40b66318de",
"text": "Socio-economic hardships put children in an underprivileged position. This systematic review was conducted to identify factors linked to underachievement of disadvantaged pupils in school science and maths. What could be done as evidence-based practice to make the lives of these young people better? The protocol from preferred reporting items for systematic reviews and meta-analyses (PRISMA) was followed. Major electronic educational databases were searched. Papers meeting pre-defined selection criteria were identified. Studies included were mainly large-scale evaluations with a clearly defined comparator group and robust research design. All studies used a measure of disadvantage such as lower SES, language barrier, ethnic minority or temporary immigrant status and an outcome measure like attainment in standardised national tests. A majority of papers capable of answering the research question were correlational studies. The review reports findings from 771 studies published from 2005 to 2014 in English language. Thirtyfour studies were synthesised. Results suggest major factors linking deprivation to underachievement can be thematically categorised into a lack of positive environment and support. Recommendations from the research reports are discussed. Subjects: Behavioral Sciences; Education; International & Comparative Education; Social Sciences",
"title": ""
},
{
"docid": "9e4da48d0fa4c7ff9566f30b73da3dc3",
"text": "Yang Song; Robert van Boeschoten University of Amsterdam Plantage Muidergracht 12, 1018 TV Amsterdam, the Netherlands y.song@uva.nl; r.m.van.boeschoten@hva.nl Abstract: Crowdfunding has been used as one of the effective ways for entrepreneurs to raise funding especially in creative industries. Individuals as well as organizations are paying more attentions to the emergence of new crowdfunding platforms. In the Netherlands, the government is also trying to help artists access financial resources through crowdfunding platforms. This research aims at discovering the success factors for crowdfunding projects from both founders’ and funders’ perspective. We designed our own website for founders and funders to observe crowdfunding behaviors. We linked our self-designed website to Google analytics in order to collect our data. Our research will contribute to crowdfunding success factors and provide practical recommendations for practitioners and researchers.",
"title": ""
},
{
"docid": "9779c9f4f15d9977a20592cabb777059",
"text": "Expert search or recommendation involves the retrieval of people (experts) in response to a query and on occasion, a given set of constraints. In this paper, we address expert recommendation in academic domains that are different from web and intranet environments studied in TREC. We propose and study graph-based models for expertise retrieval with the objective of enabling search using either a topic (e.g. \"Information Extraction\") or a name (e.g. \"Bruce Croft\"). We show that graph-based ranking schemes despite being \"generic\" perform on par with expert ranking models specific to topic-based and name-based querying.",
"title": ""
},
{
"docid": "df6c7f13814178d7b34703757899d6b1",
"text": "Regression testing of natural language systems is problematic for two main reasons: component input and output is complex, and system behaviour is context-dependent. We have developed a generic approach which solves both of these issues. We describe our regression tool, CONTEST, which supports context-dependent testing of dialogue system components, and discuss the regression test sets we developed, designed to effectively isolate components from changes and problems earlier in the pipeline. We believe that the same approach can be used in regression testing for other dialogue systems, as well as in testing any complex NLP system containing multiple components.",
"title": ""
},
{
"docid": "7b5331b0e6ad693fc97f5f3b543bf00c",
"text": "Relational learning deals with data that are characterized by relational structures. An important task is collective classification, which is to jointly classify networked objects. While it holds a great promise to produce a better accuracy than noncollective classifiers, collective classification is computational challenging and has not leveraged on the recent breakthroughs of deep learning. We present Column Network (CLN), a novel deep learning model for collective classification in multirelational domains. CLN has many desirable theoretical properties: (i) it encodes multi-relations between any two instances; (ii) it is deep and compact, allowing complex functions to be approximated at the network level with a small set of free parameters; (iii) local and relational features are learned simultaneously; (iv) longrange, higher-order dependencies between instances are supported naturally; and (v) crucially, learning and inference are efficient, linear in the size of the network and the number of relations. We evaluate CLN on multiple real-world applications: (a) delay prediction in software projects, (b) PubMed Diabetes publication classification and (c) film genre classification. In all applications, CLN demonstrates a higher accuracy than state-of-the-art rivals.",
"title": ""
},
{
"docid": "ea6eecdaed8e76c28071ad1d9c1c39f9",
"text": "When it comes to taking the public transportation, time and patience are of essence. In other words, many people using public transport buses have experienced time loss because of waiting at the bus stops. In this paper, we proposed smart bus tracking system that any passenger with a smart phone or mobile device with the QR (Quick Response) code reader can scan QR codes placed at bus stops to view estimated bus arrival times, buses' current locations, and bus routes on a map. Anyone can access these maps and have the option to sign up to receive free alerts about expected bus arrival times for the interested buses and related routes via SMS and e-mails. We used C4.5 (a statistical classifier) algorithm for the estimation of bus arrival times to minimize the passengers waiting time. GPS (Global Positioning System) and Google Maps are used for navigation and display services, respectively.",
"title": ""
},
{
"docid": "35a85d6652bd333d93f8112aff83ab83",
"text": "For natural language understanding (NLU) technology to be maximally useful, both practically and as a scientific object of study, it must be general: it must be able to process language in a way that is not exclusively tailored to any one specific task or dataset. In pursuit of this objective, we introduce the General Language Understanding Evaluation benchmark (GLUE), a tool for evaluating and analyzing the performance of models across a diverse range of existing NLU tasks. GLUE is modelagnostic, but it incentivizes sharing knowledge across tasks because certain tasks have very limited training data. We further provide a hand-crafted diagnostic test suite that enables detailed linguistic analysis of NLU models. We evaluate baselines based on current methods for multi-task and transfer learning and find that they do not immediately give substantial improvements over the aggregate performance of training a separate model per task, indicating room for improvement in developing general and robust NLU systems.",
"title": ""
},
{
"docid": "587f6e73ca6653860cda66238d2ba146",
"text": "Cable-suspended robots are structurally similar to parallel actuated robots but with the fundamental difference that cables can only pull the end-effector but not push it. From a scientific point of view, this feature makes feedback control of cable-suspended robots more challenging than their counterpart parallel-actuated robots. In the case with redundant cables, feedback control laws can be designed to make all tensions positive while attaining desired control performance. This paper presents approaches to design positive tension controllers for cable suspended robots with redundant cables. Their effectiveness is demonstrated through simulations and experiments on a three degree-of-freedom cable suspended robots.",
"title": ""
},
{
"docid": "0829cf1fb1654525627fdc61d1814196",
"text": "The selection of indexing terms for representing documents is a key decision that limits how effective subsequent retrieval can be. Often stemming algorithms are used to normalize surface forms, and thereby address the problem of not finding documents that contain words related to query terms through infectional or derivational morphology. However, rule-based stemmers are not available for every language and it is unclear which methods for coping with morphology are most effective. In this paper we investigate an assortment of techniques for representing text and compare these approaches using data sets in eighteen languages and five different writing systems.\n We find character n-gram tokenization to be highly effective. In half of the languages examined n-grams outperform unnormalized words by more than 25%; in highly infective languages relative improvements over 50% are obtained. In languages with less morphological richness the choice of tokenization is not as critical and rule-based stemming can be an attractive option, if available. We also conducted an experiment to uncover the source of n-gram power and a causal relationship between the morphological complexity of a language and n-gram effectiveness was demonstrated.",
"title": ""
},
{
"docid": "bc7c5ab8ec28e9a5917fc94b776b468a",
"text": "Reasonable house price prediction is a meaningful task, and the house clustering is an important process in the prediction. In this paper, we propose the method of Multi-Scale Affinity Propagation(MSAP) aggregating the house appropriately by the landmark and the facility. Then in each cluster, using Linear Regression model with Normal Noise(LRNN) predicts the reasonable price, which is verified by the increasing number of the renting reviews. Experiments show that the precision of the reasonable price prediction improved greatly via the method of MSAP.",
"title": ""
},
{
"docid": "592ceee67b3f8b3e8333cb104f56bd2f",
"text": "The goal of this paper is to study the team formation of multiple UAVs and UGVs for collaborative surveillance and crowd control under uncertain scenarios (e.g. crowd splitting). A comprehensive and coherent dynamic data driven adaptive multi-scale simulation (DDDAMS) framework is adopted, with the focus on simulation-based planning and control strategies related to the surveillance problem considered in this paper. To enable the team formation of multiple UAVs and UGVs, a two stage approach involving 1) crowd clustering and 2) UAV/UGV team assignment is proposed during the system operations by considering the geometry of the crowd clusters and solving a multi-objective optimization problem. For the experiment, an integrated testbed has been developed based on agent-based hardware-in-the-loop simulation involving seamless communications among simulated and real vehicles. Preliminary results indicate the effectiveness and efficiency of the proposed approach for the team formation of multiple UAVs and UGVs.",
"title": ""
},
{
"docid": "aa69409c1bddc7693ba2ed36206ac767",
"text": "Popularity of data-driven software engineering has led to an increasing demand on the infrastructures to support efficient execution of tasks that require deeper source code analysis. While task optimization and parallelization are the adopted solutions, other research directions are less explored. We present collective program analysis (CPA), a technique for scaling large scale source code analyses, especially those that make use of control and data flow analysis, by leveraging analysis specific similarity. Analysis specific similarity is about, whether two or more programs can be considered similar for a given analysis. The key idea of collective program analysis is to cluster programs based on analysis specific similarity, such that running the analysis on one candidate in each cluster is sufficient to produce the result for others. For determining analysis specific similarity and clustering analysis-equivalent programs, we use a sparse representation and a canonical labeling scheme. Our evaluation shows that for a variety of source code analyses on a large dataset of programs, substantial reduction in the analysis time can be achieved; on average a 69% reduction when compared to a baseline and on average a 36% reduction when compared to a prior technique. We also found that a large amount of analysis-equivalent programs exists in large datasets.",
"title": ""
},
{
"docid": "8f570416ceecf87310b7780ec935d814",
"text": "BACKGROUND\nInguinal lymph node involvement is an important prognostic factor in penile cancer. Inguinal lymph node dissection allows staging and treatment of inguinal nodal disease. However, it causes morbidity and is associated with complications, such as lymphocele, skin loss and infection. Video Endoscopic Inguinal Lymphadenectomy (VEIL) is an endoscopic procedure, and it seems to be a new and attractive approach duplicating the standard open procedure with less morbidity. We present here a critical perioperative assessment with points of technique.\n\n\nMETHODS\nTen patients with moderate to high grade penile carcinoma with clinically negative inguinal lymph nodes were subjected to elective VEIL. VEIL was done in standard surgical steps. Perioperative parameters were assessed that is - duration of the surgery, lymph-related complications, time until drain removal, lymph node yield, surgical emphysema and histopathological positivity of lymph nodes.\n\n\nRESULTS\nOperative time for VEIL was 120 to 180 minutes. Lymph node yield was 7 to 12 lymph nodes. No skin related complications were seen with VEIL. Lymph related complications, that is, lymphocele, were seen in only two patients. The suction drain was removed after four to eight days (mean 5.1). Overall morbidity was 20% with VEIL.\n\n\nCONCLUSION\nIn our early experience, VEIL was a safe and feasible technique in patients with penile carcinoma with non palpable inguinal lymph nodes. It allows the removal of inguinal lymph nodes within the same limits as in conventional surgical dissection and potentially reduces surgical morbidity.",
"title": ""
},
{
"docid": "f1ae820d7e067dabfda5efc1229762d8",
"text": "Data from 574 participants were used to assess perceptions of message, site, and sponsor credibility across four genres of websites; to explore the extent and effects of verifying web-based information; and to measure the relative influence of sponsor familiarity and site attributes on perceived credibility.The results show that perceptions of credibility differed, such that news organization websites were rated highest and personal websites lowest, in terms of message, sponsor, and overall site credibility, with e-commerce and special interest sites rated between these, for the most part.The results also indicated that credibility assessments appear to be primarily due to website attributes (e.g. design features, depth of content, site complexity) rather than to familiarity with website sponsors. Finally, there was a negative relationship between self-reported and observed information verification behavior and a positive relationship between self-reported verification and internet/web experience. The findings are used to inform the theoretical development of perceived web credibility. 319 new media & society Copyright © 2007 SAGE Publications Los Angeles, London, New Delhi and Singapore Vol9(2):319–342 [DOI: 10.1177/1461444807075015] ARTICLE 319-342 NMS-075015.qxd 9/3/07 11:54 AM Page 319 © 2007 SAGE Publications. All rights reserved. Not for commercial use or unauthorized distribution. at Universiteit van Amsterdam SAGE on April 25, 2007 http://nms.sagepub.com Downloaded from",
"title": ""
},
{
"docid": "18ce27c1840596779805efaeec18f3ed",
"text": "Accurate inversion of land surface geo/biophysical variables from remote sensing data for earth observation applications is an essential and challenging topic for the global change research. Land surface temperature (LST) is one of the key parameters in the physics of earth surface processes from local to global scales. The importance of LST is being increasingly recognized and there is a strong interest in developing methodologies to measure LST from the space. Landsat 8 Thermal Infrared Sensor (TIRS) is the newest thermal infrared sensor for the Landsat project, providing two adjacent thermal bands, which has a great benefit for the LST inversion. In this paper, we compared three different approaches for LST inversion from TIRS, including the radiative transfer equation-based method, the split-window algorithm and the single channel method. Four selected energy balance monitoring sites from the Surface Radiation Budget Network (SURFRAD) were used for validation, combining with the MODIS 8 day emissivity product. For the investigated sites and scenes, results show that the LST inverted from the radiative transfer equation-based method using band 10 has the highest accuracy with RMSE lower than 1 K, while the SW algorithm has moderate accuracy and the SC method has the lowest accuracy. OPEN ACCESS Remote Sens. 2014, 6 9830",
"title": ""
},
{
"docid": "ba66e377db4ef2b3c626a0a2f19da8c3",
"text": "A challenging aspect of scene text recognition is to handle text with distortions or irregular layout. In particular, perspective text and curved text are common in natural scenes and are difficult to recognize. In this work, we introduce ASTER, an end-to-end neural network model that comprises a rectification network and a recognition network. The rectification network adaptively transforms an input image into a new one, rectifying the text in it. It is powered by a flexible Thin-Plate Spline transformation which handles a variety of text irregularities and is trained without human annotations. The recognition network is an attentional sequence-to-sequence model that predicts a character sequence directly from the rectified image. The whole model is trained end to end, requiring only images and their groundtruth text. Through extensive experiments, we verify the effectiveness of the rectification and demonstrate the state-of-the-art recognition performance of ASTER. Furthermore, we demonstrate that ASTER is a powerful component in end-to-end recognition systems, for its ability to enhance the detector.",
"title": ""
}
] |
scidocsrr
|
cd29f37bb07b52331b86fb689077b87f
|
What is the right way to represent document images?
|
[
{
"docid": "dc424d2dc407e504d962c557325f035e",
"text": "Document image classification is an important step in Office Automation, Digital Libraries, and other document image analysis applications. There is great diversity in document image classifiers: they differ in the problems they solve, in the use of training data to construct class models, and in the choice of document features and classification algorithms. We survey this diverse literature using three components: the problem statement, the classifier architecture, and performance evaluation. This brings to light important issues in designing a document classifier, including the definition of document classes, the choice of document features and feature representation, and the choice of classification algorithm and learning mechanism. We emphasize techniques that classify single-page typeset document images without using OCR results. Developing a general, adaptable, high-performance classifier is challenging due to the great variety of documents, the diverse criteria used to define document classes, and the ambiguity that arises due to ill-defined or fuzzy document classes.",
"title": ""
},
{
"docid": "8ccb6c767704bc8aee424d17cf13d1e3",
"text": "In this paper, we present a page classification application in a banking workflow. The proposed architecture represents administrative document images by merging visual and textual descriptions. The visual description is based on a hierarchical representation of the pixel intensity distribution. The textual description uses latent semantic analysis to represent document content as a mixture of topics. Several off-the-shelf classifiers and different strategies for combining visual and textual cues have been evaluated. A final step uses an $$n$$ n -gram model of the page stream allowing a finer-grained classification of pages. The proposed method has been tested in a real large-scale environment and we report results on a dataset of 70,000 pages.",
"title": ""
}
] |
[
{
"docid": "bc49930fa967b93ed1e39b3a45237652",
"text": "In gene expression data, a bicluster is a subset of the genes exhibiting consistent patterns over a subset of the conditions. We propose a new method to detect significant biclusters in large expression datasets. Our approach is graph theoretic coupled with statistical modelling of the data. Under plausible assumptions, our algorithm is polynomial and is guaranteed to find the most significant biclusters. We tested our method on a collection of yeast expression profiles and on a human cancer dataset. Cross validation results show high specificity in assigning function to genes based on their biclusters, and we are able to annotate in this way 196 uncharacterized yeast genes. We also demonstrate how the biclusters lead to detecting new concrete biological associations. In cancer data we are able to detect and relate finer tissue types than was previously possible. We also show that the method outperforms the biclustering algorithm of Cheng and Church (2000).",
"title": ""
},
{
"docid": "721a64c9a5523ba836318edcdb8de021",
"text": "Highly-produced audio stories often include musical scores that reflect the emotions of the speech. Yet, creating effective musical scores requires deep expertise in sound production and is time-consuming even for experts. We present a system and algorithm for re-sequencing music tracks to generate emotionally relevant music scores for audio stories. The user provides a speech track and music tracks and our system gathers emotion labels on the speech through hand-labeling, crowdsourcing, and automatic methods. We develop a constraint-based dynamic programming algorithm that uses these emotion labels to generate emotionally relevant musical scores. We demonstrate the effectiveness of our algorithm by generating 20 musical scores for audio stories and showing that crowd workers rank their overall quality significantly higher than stories without music.",
"title": ""
},
{
"docid": "39c2c3e7f955425cd9aaad1951d13483",
"text": "This paper proposes a novel nature-inspired algorithm called Multi-Verse Optimizer (MVO). The main inspirations of this algorithm are based on three concepts in cosmology: white hole, black hole, and wormhole. The mathematical models of these three concepts are developed to perform exploration, exploitation, and local search, respectively. The MVO algorithm is first benchmarked on 19 challenging test problems. It is then applied to five real engineering problems to further confirm its performance. To validate the results, MVO is compared with four well-known algorithms: Grey Wolf Optimizer, Particle Swarm Optimization, Genetic Algorithm, and Gravitational Search Algorithm. The results prove that the proposed algorithm is able to provide very competitive results and outperforms the best algorithms in the literature on the majority of the test beds. The results of the real case studies also demonstrate the potential of MVO in solving real problems with unknown search spaces. Note that the source codes of the proposed MVO algorithm are publicly available at http://www.alimirjalili.com/MVO.html .",
"title": ""
},
{
"docid": "f1dd866b1cdd79716f2bbc969c77132a",
"text": "Fiber optic sensor technology offers the possibility of sensing different parameters like strain, temperature, pressure in harsh environment and remote locations. these kinds of sensors modulates some features of the light wave in an optical fiber such an intensity and phase or use optical fiber as a medium for transmitting the measurement information. The advantages of fiber optic sensors in contrast to conventional electrical ones make them popular in different applications and now a day they consider as a key component in improving industrial processes, quality control systems, medical diagnostics, and preventing and controlling general process abnormalities. This paper is an introduction to fiber optic sensor technology and some of the applications that make this branch of optic technology, which is still in its early infancy, an interesting field. Keywords—Fiber optic sensors, distributed sensors, sensor application, crack sensor.",
"title": ""
},
{
"docid": "6fb006066fa1a25ae348037aa1ee7be3",
"text": "Reducing redundancy in data representation leads to decreased data storage requirements and lower costs for data communication.",
"title": ""
},
{
"docid": "755f2d11ad9653806f26e5ae7beaf49b",
"text": "Deep Neural Networks (DNNs) have shown remarkable success in pattern recognition tasks. However, parallelizing DNN training across computers has been difficult. We present the Deep Stacking Network (DSN), which overcomes the problem of parallelizing learning algorithms for deep architectures. The DSN provides a method of stacking simple processing modules in buiding deep architectures, with a convex learning problem in each module. Additional fine tuning further improves the DSN, while introducing minor non-convexity. Full learning in the DSN is batch-mode, making it amenable to parallel training over many machines and thus be scalable over the potentially huge size of the training data. Experimental results on both the MNIST (image) and TIMIT (speech) classification tasks demonstrate that the DSN learning algorithm developed in this work is not only parallelizable in implementation but it also attains higher classification accuracy than the DNN.",
"title": ""
},
{
"docid": "332acb4b9ad2b278ff2af20399cf85e7",
"text": "The Character recognition is one of the most important areas in the field of pattern recognition. Recently Indian Handwritten character recognition is getting much more attention and researchers are contributing a lot in this field. But Malayalam, a South Indian language has very less works in this area and needs further attention. Malayalam OCR is a complex task owing to the various character scripts available and more importantly the difference in ways in which the characters are written. The dimensions are never the same and may be never mapped on to a square grid unlike English characters. Selection of a feature extraction method is the most important factor in achieving high recognition performance in character recognition systems. Different feature extraction methods are designed for different representation of characters. As an important component of pattern recognition, feature extraction has been paid close attention by many scholars, and currently has become one of the research hot spots in the field of pattern recognition. This article gives a general discussion of feature extraction techniques used in handwritten character recognition of other Indian languages and some of them are implemented for Malayalam handwritten characters.",
"title": ""
},
{
"docid": "a38cf37fc60e1322e391680037ff6d4e",
"text": "Robot-aided gait training is an emerging clinical tool for gait rehabilitation of neurological patients. This paper deals with a novel method of offering gait assistance, using an impedance controlled exoskeleton (LOPES). The provided assistance is based on a recent finding that, in the control of walking, different modules can be discerned that are associated with different subtasks. In this study, a Virtual Model Controller (VMC) for supporting one of these subtasks, namely the foot clearance, is presented and evaluated. The developed VMC provides virtual support at the ankle, to increase foot clearance. Therefore, we first developed a new method to derive reference trajectories of the ankle position. These trajectories consist of splines between key events, which are dependent on walking speed and body height. Subsequently, the VMC was evaluated in twelve healthy subjects and six chronic stroke survivors. The impedance levels, of the support, were altered between trials to investigate whether the controller allowed gradual and selective support. Additionally, an adaptive algorithm was tested, that automatically shaped the amount of support to the subjects’ needs. Catch trials were introduced to determine whether the subjects tended to rely on the support. We also assessed the additional value of providing visual feedback. With the VMC, the step height could be selectively and gradually influenced. The adaptive algorithm clearly shaped the support level to the specific needs of every stroke survivor. The provided support did not result in reliance on the support for both groups. All healthy subjects and most patients were able to utilize the visual feedback to increase their active participation. The presented approach can provide selective control on one of the essential subtasks of walking. This module is the first in a set of modules to control all subtasks. This enables the therapist to focus the support on the subtasks that are impaired, and leave the other subtasks up to the patient, encouraging him to participate more actively in the training. Additionally, the speed-dependent reference patterns provide the therapist with the tools to easily adapt the treadmill speed to the capabilities and progress of the patient.",
"title": ""
},
{
"docid": "594110683d0d38ba7fd7345a8c24fa81",
"text": "Wayfinding in the public transportation infrastructure takes place on traffic networks. These consist of lines that are interconnected at nodes. The network is the basis for routing decisions; it is usually presented in maps and through digital interfaces. But to the traveller, the stops and stations that make up the nodes are at least as important as the network, for it is there that the complexity of the system is experienced. These observations suggest that there are two cognitively different environments involved, which we will refer to as network space and scene space. Network space consists of the public transport network. Scene space consists of the environment at the nodes of the public transport system, through which travellers enter and leave the system and in which they change means of transport. We explore properties of the two types of spaces and how they interact to assist wayfinding. We also show how they can be modelled: for network space, graphs can be used; for scene space we propose a novel model based on cognitive schemata and",
"title": ""
},
{
"docid": "7bd539fecbfec5db45b0f2b52cec23a7",
"text": "In this paper, we consider the restoration of images with signal-dependent noise. The filter is noise smoothing and adapts to local changes in image statistics based on a nonstationary mean, nonstationary variance (NMNV) image model. For images degraded by a class of uncorrelated, signal-dependent noise without blur, the adaptive noise smoothing filter becomes a point processor and is similar to Lee's local statistics algorithm [16]. The filter is able to adapt itself to the nonstationary local image statistics in the presence of different types of signal-dependent noise. For multiplicative noise, the adaptive noise smoothing filter is a systematic derivation of Lee's algorithm with some extensions that allow different estimators for the local image variance. The advantage of the derivation is its easy extension to deal with various types of signal-dependent noise. Film-grain and Poisson signal-dependent restoration problems are also considered as examples. All the nonstationary image statistical parameters needed for the filter can be estimated from the noisy image and no a priori information about the original image is required.",
"title": ""
},
{
"docid": "0ce57a66924192a50728fb67023e0ed2",
"text": "Most studies on TCP over multi-hop wireless ad hoc networks have only addressed the issue of performance degradation due to temporarily broken routes, which results in TCP inability to distinguish between losses due to link failures or congestion. This problem tends to become more serious as network mobility increases. In this work, we tackle the equally important capture problem to which there has been little or no solution, and is present mostly in static and low mobility multihop wireless networks. This is a result of the interplay between the MAC layer and TCP backoff policies, which causes nodes to unfairly capture the wireless shared medium, hence preventing neighboring nodes to access the channel. This has been shown to have major negative effects on TCP performance comparable to the impact of mobility. We propose a novel algorithm, called COPAS (COntention-based PAth Selection), which incorporates two mechanisms to enhance TCP performance by avoiding capture conditions. First, it uses disjoint forward (sender to receiver for TCP data) and reverse (receiver to sender for TCP ACKs) paths in order to minimize the conflicts of TCP data and ACK packets. Second, COPAS employs a dynamic contentionbalancing scheme where it continuously monitors and changes forward and reverse paths according to the level of MAC layer contention, hence minimizing the likelihood of capture. Through extensive simulation, COPAS is shown to improve TCP throughput by up to 90% while keeping routing overhead low.",
"title": ""
},
{
"docid": "cbb6c80bc986b8b1e1ed3e70abb86a79",
"text": "CD44 is a cell surface adhesion receptor that is highly expressed in many cancers and regulates metastasis via recruitment of CD44 to the cell surface. Its interaction with appropriate extracellular matrix ligands promotes the migration and invasion processes involved in metastases. It was originally identified as a receptor for hyaluronan or hyaluronic acid and later to several other ligands including, osteopontin (OPN), collagens, and matrix metalloproteinases. CD44 has also been identified as a marker for stem cells of several types. Beside standard CD44 (sCD44), variant (vCD44) isoforms of CD44 have been shown to be created by alternate splicing of the mRNA in several cancer. Addition of new exons into the extracellular domain near the transmembrane of sCD44 increases the tendency for expressing larger size vCD44 isoforms. Expression of certain vCD44 isoforms was linked with progression and metastasis of cancer cells as well as patient prognosis. The expression of CD44 isoforms can be correlated with tumor subtypes and be a marker of cancer stem cells. CD44 cleavage, shedding, and elevated levels of soluble CD44 in the serum of patients is a marker of tumor burden and metastasis in several cancers including colon and gastric cancer. Recent observations have shown that CD44 intracellular domain (CD44-ICD) is related to the metastatic potential of breast cancer cells. However, the underlying mechanisms need further elucidation.",
"title": ""
},
{
"docid": "b27f43bf472e44cf393d21781c3341cd",
"text": "A massive hybrid array consists of multiple analog subarrays, with each subarray having its digital processing chain. It offers the potential advantage of balancing cost and performance for massive arrays and therefore serves as an attractive solution for future millimeter-wave (mm- Wave) cellular communications. On one hand, using beamforming analog subarrays such as phased arrays, the hybrid configuration can effectively collect or distribute signal energy in sparse mm-Wave channels. On the other hand, multiple digital chains in the configuration provide multiplexing capability and more beamforming flexibility to the system. In this article, we discuss several important issues and the state-of-the-art development for mm-Wave hybrid arrays, such as channel modeling, capacity characterization, applications of various smart antenna techniques for single-user and multiuser communications, and practical hardware design. We investigate how the hybrid array architecture and special mm-Wave channel property can be exploited to design suboptimal but practical massive antenna array schemes. We also compare two main types of hybrid arrays, interleaved and localized arrays, and recommend that the localized array is a better option in terms of overall performance and hardware feasibility.",
"title": ""
},
{
"docid": "2613ec5a77cfe296f7d16340ce133c27",
"text": "Learned feature representations and sub-phoneme posterio r from Deep Neural Networks (DNNs) have been used separately to produce significant performance gains for speaker and language recognition tasks. In this work we show how these gains are possible using a single DNN for both speaker and language recognition. The unified DNN approach is shown to yield substantial performance improvements on the the 2013 Domain Adaptation Challenge speaker recognition task (55% reduction in EER for the out-of-domain condition) and on the NIST 2011 Language Recognition Evaluation (48% reduction in EER for the 30s test condition).",
"title": ""
},
{
"docid": "11644dafde30ee5608167c04cb1f511c",
"text": "Dynamic Adaptive Streaming over HTTP (DASH) enables the video player to adapt the bitrate of the video while streaming to ensure playback without interruptions even with varying throughput. A DASH server hosts multiple representations of the same video, each of which is broken down into small segments of fixed playback duration. The video bitrate adaptation is purely driven by the player at the endhost. Typically, the player employs an Adaptive Bitrate (ABR) algorithm, that determines the most appropriate representation for the next segment to be downloaded, based on the current network conditions and user preferences. The aim of an ABR algorithm is to dynamically manage the Quality of Experience (QoE) of the user during the playback. ABR algorithms manage the QoE by maximizing the bitrate while at the same time trying to minimize the other QoE metrics: playback start time, duration and number of buffering events, and the number of bitrate switching events. Typically, the ABR algorithms manage the QoE by using the measured network throughput and buffer occupancy to adapt the playback bitrate. However, due to the video encoding schemes employed, the sizes of the individual segments may vary significantly. For low bandwidth networks, fluctuation in the segment sizes results in inaccurate estimation the expected segment fetch times, thereby resulting in inaccurate estimation of the optimum bitrate. In this paper we demonstrate how the Segment-Aware Rate Adaptation (SARA) algorithm, that considers the measured throughput, buffer occupancy, and the variation in segment sizes helps in better management of the users' QoE in a DASH system. By comparing with a typical throughput-based and buffer-based adaptation algorithm under varying network conditions, we demonstrate that SARA manages the QoE better, especially in a low bandwidth network. We also developed AStream, an open-source Python-based emulated DASH-video player that was used to evaluate three different ABR algorithms and measure the QoE metrics with each of them.",
"title": ""
},
{
"docid": "06525bcc03586c8d319f5d6f1d95b852",
"text": "Many different automatic color correction approaches have been proposed by different research communities in the past decade. However, these approaches are seldom compared, so their relative performance and applicability are unclear. For multi-view image and video stitching applications, an ideal color correction approach should be effective at transferring the color palette of the source image to the target image, and meanwhile be able to extend the transferred color from the overlapped area to the full target image without creating visual artifacts. In this paper we evaluate the performance of color correction approaches for automatic multi-view image and video stitching. We consider nine color correction algorithms from the literature applied to 40 synthetic image pairs and 30 real mosaic image pairs selected from different applications. Experimental results show that both parametric and non-parametric approaches have members that are effective at transferring colors, while parametric approaches are generally better than non-parametric approaches in extendability.",
"title": ""
},
{
"docid": "924146534d348e7a44970b1d78c97e9c",
"text": "Little is known of the extent to which heterosexual couples are satisfied with their current frequency of sex and the degree to which this predicts overall sexual and relationship satisfaction. A population-based survey of 4,290 men and 4,366 women was conducted among Australians aged 16 to 64 years from a range of sociodemographic backgrounds, of whom 3,240 men and 3,304 women were in regular heterosexual relationships. Only 46% of men and 58% of women were satisfied with their current frequency of sex. Dissatisfied men were overwhelmingly likely to desire sex more frequently; among dissatisfied women, only two thirds wanted sex more frequently. Age was a significant factor but only for men, with those aged 35-44 years tending to be least satisfied. Men and women who were dissatisfied with their frequency of sex were also more likely to express overall lower sexual and relationship satisfaction. The authors' findings not only highlight desired frequency of sex as a major factor in satisfaction, but also reveal important gender and other sociodemographic differences that need to be taken into account by researchers and therapists seeking to understand and improve sexual and relationship satisfaction among heterosexual couples. Other issues such as length of time spent having sex and practices engaged in may also be relevant, particularly for women.",
"title": ""
},
{
"docid": "a0358cfc6166fbd45d35cbb346c56b7a",
"text": "a Pontificia Universidad Católica de Valparaíso, Av. Brasil 2950, Valparaíso, Chile b Universidad Autónoma de Chile, Av. Pedro de Valdivia 641, Santiago, Chile c Universidad Finis Terrae, Av. Pedro de Valdivia 1509, Santiago, Chile d CNRS, LINA, University of Nantes, 2 rue de la Houssinière, Nantes, France e Escuela de Ingeniería Industrial, Universidad Diego Portales, Manuel Rodríguez Sur 415, Santiago, Chile",
"title": ""
},
{
"docid": "815098e9ed06dfa5335f0c2c595f4059",
"text": "Effectively managing risk is an essential element of successful project management. It is imperative that project management team consider all possible risks to establish corrective actions in the right time. So far, several techniques have been proposed for project risk analysis. Failure Mode and Effect Analysis (FMEA) is recognized as one of the most useful techniques in this field. The main goal is identifying all failure modes within a system, assessing their impact, and planning for corrective actions. In traditional FMEA, the risk priorities of failure modes are determined by using Risk Priority Numbers (RPN), which can be obtained by multiplying the scores of risk factors like occurrence (O), severity (S), and detection (D). This technique has some limitations, though in this paper, Fuzzy logic and Analytical Hierarchy Process (AHP) are used to address the limitations of traditional FMEA. Linguistic variables, expressed in fuzzy numbers, are used to assess the ratings of risk factors O, S, and D. Each factor consists of seven membership functions and on the whole there are 343 rules for fuzzy system. The analytic hierarchy process (AHP) is applied to determine the relative weightings of risk impacts on time, cost, quality and safety. A case study is presented to validate the concept. The feedbacks are showing the advantages of the proposed approach in project risk management.",
"title": ""
},
{
"docid": "a769b8f56d699b3f6eca54aeeb314f84",
"text": "Assistive mobile robots that autonomously manipulate objects within everyday settings have the potential to improve the lives of the elderly, injured, and disabled. Within this paper, we present the most recent version of the assistive mobile manipulator EL-E with a focus on the subsystem that enables the robot to retrieve objects from and deliver objects to flat surfaces. Once provided with a 3D location via brief illumination with a laser pointer, the robot autonomously approaches the location and then either grasps the nearest object or places an object. We describe our implementation in detail, while highlighting design principles and themes, including the use of specialized behaviors, task-relevant features, and low-dimensional representations. We also present evaluations of EL-E’s performance relative to common forms of variation. We tested EL-E’s ability to approach and grasp objects from the 25 object categories that were ranked most important for robotic retrieval by motor-impaired patients from the Emory ALS Center. Although reliability varied, EL-E succeeded at least once with objects from 21 out of 25 of these categories. EL-E also approached and grasped a cordless telephone on 12 different surfaces including floors, tables, and counter tops with 100% success. The same test using a vitamin pill (ca. 15mm ×5mm ×5mm) resulted in 58% success.",
"title": ""
}
] |
scidocsrr
|
ef40484cb8399d22d793fb4cb714570b
|
Competition in the Cryptocurrency Market
|
[
{
"docid": "f6fc0992624fd3b3e0ce7cc7fc411154",
"text": "Digital currencies are a globally spreading phenomenon that is frequently and also prominently addressed by media, venture capitalists, financial and governmental institutions alike. As exchange prices for Bitcoin have reached multiple peaks within 2013, we pose a prevailing and yet academically unaddressed question: What are users' intentions when changing their domestic into a digital currency? In particular, this paper aims at giving empirical insights on whether users’ interest regarding digital currencies is driven by its appeal as an asset or as a currency. Based on our evaluation, we find strong indications that especially uninformed users approaching digital currencies are not primarily interested in an alternative transaction system but seek to participate in an alternative investment vehicle.",
"title": ""
},
{
"docid": "165aa4bad30a95866be4aff878fbd2cf",
"text": "This paper reviews some recent developments in digital currency, focusing on platform-sponsored currencies such as Facebook Credits. In a model of platform management, we find that it will not likely be profitable for such currencies to expand to become fully convertible competitors to state-sponsored currencies. JEL Classification: D42, E4, L51 Bank Classification: bank notes, economic models, payment clearing and settlement systems * Rotman School of Management, University of Toronto and NBER (Gans) and Bank of Canada (Halaburda). The views here are those of the authors and no responsibility for them should be attributed to the Bank of Canada. We thank participants at the NBER Economics of Digitization Conference, Warren Weber and Glen Weyl for helpful comments on an earlier draft of this paper. Please send any comments to joshua.gans@gmail.com.",
"title": ""
}
] |
[
{
"docid": "bbb91ddd9df0d5f38b8c1317a8e84f60",
"text": "Poisson regression model is widely used in software quality modeling. W h e n the response variable of a data set includes a large number of zeros, Poisson regression model will underestimate the probability of zeros. A zero-inflated model changes the mean structure of the pure Poisson model. The predictive quality is therefore improved. I n this paper, we examine a full-scale industrial software system and develop two models, Poisson regression and zero-inflated Poisson regression. To our knowledge, this is the first study that introduces the zero-inflated Poisson regression model in software reliability. Comparing the predictive qualities of the two competing models, we conclude that for this system, the zero-inflated Poisson regression model is more appropriate in theory and practice.",
"title": ""
},
{
"docid": "7d197033396c7a55593da79a5a70fa96",
"text": "1. Introduction Fundamental questions about weighting (Fig 1) seem to be ~ most common during the analysis of survey data and I encounter them almost every week. Yet we \"lack a single, reasonably comprehensive, introductory explanation of the process of weighting\" [Sharot 1986], readily available to and usable by survey practitioners, who are looking for simple guidance, and this paper aims to meet some of that need. Some partial treatments have appeared in the survey literature [e.g., Kish 1965], but the topic seldom appears even in the indexes. However, we can expect growing interest, as witnessed by six publications since 1987 listed in the references.",
"title": ""
},
{
"docid": "4690d2b1dbde438329644b3e76b6427f",
"text": "In this work, we investigate how illuminant estimation can be performed exploiting the color statistics extracted from the faces automatically detected in the image. The proposed method is based on two observations: first, skin colors tend to form a cluster in the color space, making it a cue to estimate the illuminant in the scene; second, many photographic images are portraits or contain people. The proposed method has been tested on a public dataset of images in RAW format, using both a manual and a real face detector. Experimental results demonstrate the effectiveness of our approach. The proposed method can be directly used in many digital still camera processing pipelines with an embedded face detector working on gray level images.",
"title": ""
},
{
"docid": "0c9a76222f885b95f965211e555e16cd",
"text": "In this paper we address the following question: “Can we approximately sample from a Bayesian posterior distribution if we are only allowed to touch a small mini-batch of data-items for every sample we generate?”. An algorithm based on the Langevin equation with stochastic gradients (SGLD) was previously proposed to solve this, but its mixing rate was slow. By leveraging the Bayesian Central Limit Theorem, we extend the SGLD algorithm so that at high mixing rates it will sample from a normal approximation of the posterior, while for slow mixing rates it will mimic the behavior of SGLD with a pre-conditioner matrix. As a bonus, the proposed algorithm is reminiscent of Fisher scoring (with stochastic gradients) and as such an efficient optimizer during burn-in.",
"title": ""
},
{
"docid": "6eda7075de9d47851b2b5be026af7d84",
"text": "Maintaining consistent styles across glyphs is an arduous task in typeface design. In this work we introduce FlexyFont, a flexible tool for synthesizing a complete typeface that has a consistent style with a given small set of glyphs. Motivated by a key fact that typeface designers often maintain a library of glyph parts to achieve a consistent typeface, we intend to learn part consistency between glyphs of different characters across typefaces. We take a part assembling approach by firstly decomposing the given glyphs into semantic parts and then assembling them according to learned sets of transferring rules to reconstruct the missing glyphs. To maintain style consistency, we represent the style of a font as a vector of pairwise part similarities. By learning a distribution over these feature vectors, we are able to predict the style of a novel typeface given only a few examples. We utilize a popular machine learning method as well as retrieval-based methods to quantitatively assess the performance of our feature vector, resulting in favorable results. We also present an intuitive interface that allows users to interactively create novel typefaces with ease. The synthesized fonts can be directly used in real-world design.",
"title": ""
},
{
"docid": "2f471c24ccb38e70627eba6383c003e0",
"text": "We present an algorithm that enables casual 3D photography. Given a set of input photos captured with a hand-held cell phone or DSLR camera, our algorithm reconstructs a 3D photo, a central panoramic, textured, normal mapped, multi-layered geometric mesh representation. 3D photos can be stored compactly and are optimized for being rendered from viewpoints that are near the capture viewpoints. They can be rendered using a standard rasterization pipeline to produce perspective views with motion parallax. When viewed in VR, 3D photos provide geometrically consistent views for both eyes. Our geometric representation also allows interacting with the scene using 3D geometry-aware effects, such as adding new objects to the scene and artistic lighting effects.\n Our 3D photo reconstruction algorithm starts with a standard structure from motion and multi-view stereo reconstruction of the scene. The dense stereo reconstruction is made robust to the imperfect capture conditions using a novel near envelope cost volume prior that discards erroneous near depth hypotheses. We propose a novel parallax-tolerant stitching algorithm that warps the depth maps into the central panorama and stitches two color-and-depth panoramas for the front and back scene surfaces. The two panoramas are fused into a single non-redundant, well-connected geometric mesh. We provide videos demonstrating users interactively viewing and manipulating our 3D photos.",
"title": ""
},
{
"docid": "21a2347f9bb5b5638d63239b37c9d0e6",
"text": "This paper presents new circuits for realizing both current-mode and voltage-mode proportional-integralderivative (PID), proportional-derivative (PD) and proportional-integral (PI) controllers employing secondgeneration current conveyors (CCIIs) as active elements. All of the proposed PID, PI and PD controllers have grounded passive elements and adjustable parameters. The controllers employ reduced number of active and passive components with respect to the traditional op-amp-based PID, PI and PD controllers. A closed loop control system using the proposed PID controller is designed and simulated with SPICE.",
"title": ""
},
{
"docid": "5297929e65e662360d8ff262e877b08a",
"text": "Frontal electroencephalographic (EEG) alpha asymmetry is widely researched in studies of emotion, motivation, and psychopathology, yet it is a metric that has been quantified and analyzed using diverse procedures, and diversity in procedures muddles cross-study interpretation. The aim of this article is to provide an updated tutorial for EEG alpha asymmetry recording, processing, analysis, and interpretation, with an eye towards improving consistency of results across studies. First, a brief background in alpha asymmetry findings is provided. Then, some guidelines for recording, processing, and analyzing alpha asymmetry are presented with an emphasis on the creation of asymmetry scores, referencing choices, and artifact removal. Processing steps are explained in detail, and references to MATLAB-based toolboxes that are helpful for creating and investigating alpha asymmetry are noted. Then, conceptual challenges and interpretative issues are reviewed, including a discussion of alpha asymmetry as a mediator/moderator of emotion and psychopathology. Finally, the effects of two automated component-based artifact correction algorithms-MARA and ADJUST-on frontal alpha asymmetry are evaluated.",
"title": ""
},
{
"docid": "dea3bce3f636c87fad95f255aceec858",
"text": "In recent work, conditional Markov chain models (CMM) have been used to extract information from semi-structured text (one example is the Conditional Random Field [10]). Applications range from finding the author and title in research papers to finding the phone number and street address in a web page. The CMM framework combines a priori knowledge encoded as features with a set of labeled training data to learn an efficient extraction process. We will show that similar problems can be solved more effectively by learning a discriminative context free grammar from training data. The grammar has several distinct advantages: long range, even global, constraints can be used to disambiguate entity labels; training data is used more efficiently; and a set of new more powerful features can be introduced. The grammar based approach also results in semantic information (encoded in the form of a parse tree) which could be used for IR applications like question answering. The specific problem we consider is of extracting personal contact, or address, information from unstructured sources such as documents and emails. While linear-chain CMMs perform reasonably well on this task, we show that a statistical parsing approach results in a 50% reduction in error rate. This system also has the advantage of being interactive, similar to the system described in [9]. In cases where there are multiple errors, a single user correction can be propagated to correct multiple errors automatically. Using a discriminatively trained grammar, 93.71% of all tokens are labeled correctly (compared to 88.43% for a CMM) and 72.87% of records have all tokens labeled correctly (compared to 45.29% for the CMM).",
"title": ""
},
{
"docid": "046ae00fa67181dff54e170e48a9bacf",
"text": "For the evaluation of grasp quality, different measures have been proposed that are based on wrench spaces. Almost all of them have drawbacks that derive from the non-uniformity of the wrench space, composed of force and torque dimensions. Moreover, many of these approaches are computationally expensive. We address the problem of choosing a proper task wrench space to overcome the problems of the non-uniform wrench space and show how to integrate it in a well-known, high precision and extremely fast computable grasp quality measure.",
"title": ""
},
{
"docid": "00bf4f81944c1e98e58b891ace95797e",
"text": "Sparse methods for supervised learning aim at finding good linear predictors from as few variables as possible, i.e., with small cardinality of their supports. This combinatorial selection problem is often turned into a convex optimization problem by replacing the cardinality function by its convex envelope (tightest convex lower bound), in this case the l1-norm. In this paper, we investigate more general set-functions than the cardinality, that may incorporate prior knowledge or structural constraints which are common in many applications: namely, we show that for nondecreasing submodular set-functions, the corresponding convex envelope can be obtained from its Lovász extension, a common tool in submodular analysis. This defines a family of polyhedral norms, for which we provide generic algorithmic tools (subgradients and proximal operators) and theoretical results (conditions for support recovery or high-dimensional inference). By selecting specific submodular functions, we can give a new interpretation to known norms, such as those based on rank-statistics or grouped norms with potentially overlapping groups; we also define new norms, in particular ones that can be used as non-factorial priors for supervised learning.",
"title": ""
},
{
"docid": "5e9d63bfc3b4a66e0ead79a2d883adfe",
"text": "Cloud computing is becoming a major trend for delivering and accessing infrastructure on demand via the network. Meanwhile, the usage of FPGAs (Field Programmable Gate Arrays) for computation acceleration has made significant inroads into multiple application domains due to their ability to achieve high throughput and predictable latency, while providing programmability, low power consumption and time-to-value. Many types of workloads, e.g. databases, big data analytics, and high performance computing, can be and have been accelerated by FPGAs. As more and more workloads are being deployed in the cloud, it is appropriate to consider how to make FPGAs and their capabilities available in the cloud. However, such integration is non-trivial due to issues related to FPGA resource abstraction and sharing, compatibility with applications and accelerator logics, and security, among others. In this paper, a general framework for integrating FPGAs into the cloud is proposed and a prototype of the framework is implemented based on OpenStack, Linux-KVM and Xilinx FPGAs. The prototype enables isolation between multiple processes in multiple VMs, precise quantitative acceleration resource allocation, and priority-based workload scheduling. Experimental results demonstrate the effectiveness of this prototype, an acceptable overhead, and good scalability when hosting multiple VMs and processes.",
"title": ""
},
{
"docid": "a95f77c59a06b2d101584babc74896fb",
"text": "Magnetic wall and ceiling climbing robots have been proposed in many industrial applications where robots must move over ferromagnetic material surfaces. The magnetic circuit design with magnetic attractive force calculation of permanent magnetic wheel plays an important role which significantly affects the system reliability, payload ability and power consumption of the robot. In this paper, a flexible wall and ceiling climbing robot with six permanent magnetic wheels is proposed to climb along the vertical wall and overhead ceiling of steel cargo containers as part of an illegal contraband inspection system. The permanent magnetic wheels are designed to apply to the wall and ceiling climbing robot, whilst finite element method is employed to estimate the permanent magnetic wheels with various wheel rims. The distributions of magnetic flux lines and magnetic attractive forces are compared on both plane and corner scenarios so that the robot can adaptively travel through the convex and concave surfaces of the cargo container. Optimisation of wheel rims is presented to achieve the equivalent magnetic adhesive forces along with the estimation of magnetic ring dimensions in the axial and radial directions. Finally, the practical issues correlated with the applications of the techniques are discussed and the conclusions are drawn with further improvement and prototyping.",
"title": ""
},
{
"docid": "45cee79008d25916e8f605cd85dd7f3a",
"text": "In exploring the emotional climate of long-term marriages, this study used an observational coding system to identify specific emotional behaviors expressed by middle-aged and older spouses during discussions of a marital problem. One hundred and fifty-six couples differing in age and marital satisfaction were studied. Emotional behaviors expressed by couples differed as a function of age, gender, and marital satisfaction. In older couples, the resolution of conflict was less emotionally negative and more affectionate than in middle-aged marriages. Differences between husbands and wives and between happy and unhappy marriages were also found. Wives were more affectively negative than husbands, whereas husbands were more defensive than wives, and unhappy marriages involved greater exchange of negative affect than happy marriages.",
"title": ""
},
{
"docid": "fbe1e6b899b1a2e9d53d25e3fa70bd86",
"text": "Previous empirical studies examining the relationship between IT capability and accountingbased measures of firm performance report mixed results. We argue that extant research (1) has relied on aggregate overall measures of the firm’s IT capability, ignoring the specific type and nature of IT capability; and (2) has not fully considered important contextual (environmental) conditions that influence the IT capability-firm performance relationship. Drawing on the resource-based view (RBV), we advance a contingency perspective and propose that IT capabilities’ impact on firm resources is contingent on the “fit” between the type of IT capability/resource a firm possesses and the demands of the environment (industry) in which it competes. Specifically, using publicly available rankings as proxies for two types of IT capabilities (internally-focused and externally-focused capabilities), we empirically examines the degree to which three industry characteristics (dynamism, munificence, and complexity) influence the impact of each type of IT capability on measures of financial performance. After controlling for prior performance, the findings provide general support for the posited contingency model of IT impact. The implications of these findings on practice and research are discussed.",
"title": ""
},
{
"docid": "3ced47ece49eeec3edc5d720df9bb864",
"text": "Complex space systems typically provide the operator a means to understand the current state of system components. The operator often has to manually determine whether the system is able to perform a given set of high level objectives based on this information. The operations team needs a way for the system to quantify its capability to successfully complete a mission objective and convey that information in a clear, concise way. A mission-level space cyber situational awareness tool suite integrates the data into a complete picture to display the current state of the mission. The Johns Hopkins University Applied Physics Laboratory developed the Spyder tool suite for such a purpose. The Spyder space cyber situation awareness tool suite allows operators to understand the current state of their systems, allows them to determine whether their mission objectives can be completed given the current state, and provides insight into any anomalies in the system. Spacecraft telemetry, spacecraft position, ground system data, ground computer hardware, ground computer software processes, network connections, and network data flows are all combined into a system model service that serves the data to various display tools. Spyder monitors network connections, port scanning, and data exfiltration to determine if there is a cyber attack. The Spyder Tool Suite provides multiple ways of understanding what is going on in a system. Operators can see the logical and physical relationships between system components to better understand interdependencies and drill down to see exactly where problems are occurring. They can quickly determine the state of mission-level capabilities. The space system network can be analyzed to find unexpected traffic. Spyder bridges the gap between infrastructure and mission and provides situational awareness at the mission level.",
"title": ""
},
{
"docid": "b952967acb2eaa9c780bffe211d11fa0",
"text": "Cryptographic message authentication is a growing need for FPGA-based embedded systems. In this paper a customized FPGA implementation of a GHASH function that is used in AES-GCM, a widely-used message authentication protocol, is described. The implementation limits GHASH logic utilization by specializing the hardware implementation on a per-key basis. The implemented module can generate a 128bit message authentication code in both pipelined and unpipelined versions. The pipelined GHASH version achieves an authentication throughput of more than 14 Gbit/s on a Spartan-3 FPGA and 292 Gbit/s on a Virtex-6 device. To promote adoption in the field, the complete source code for this work has been made publically-available.",
"title": ""
},
{
"docid": "5cc666e8390b0d3cefaee2d55ad7ee38",
"text": "The thermal environment surrounding preterm neonates in closed incubators is regulated via air temperature control mode. At present, these control modes do not take account of all the thermal parameters involved in a pattern of incubator such as the thermal parameters of preterm neonates (birth weight < 1000 grams). The objective of this work is to design and validate a generalized predictive control (GPC) that takes into account the closed incubator model as well as the newborn premature model. Then, we implemented this control law on a DRAGER neonatal incubator with and without newborn using microcontroller card. Methods: The design of the predictive control law is based on a prediction model. The developed model allows us to take into account all the thermal exchanges (radioactive, conductive, convective and evaporative) and the various interactions between the environment of the incubator and the premature newborn. Results: The predictive control law and the simulation model developed in Matlab/Simulink environment make it possible to evaluate the quality of the mode of control of the air temperature to which newborn must be raised. The results of the simulation and implementation of the air temperature inside the incubator (with newborn and without newborn) prove the feasibility and effectiveness of the proposed GPC controller compared with a proportional–integral–derivative controller (PID controller). Keywords—Incubator; neonatal; model; temperature; Arduino; GPC",
"title": ""
},
{
"docid": "7b36abede1967f89b79975883074a34d",
"text": "In this paper, we introduce a generalized value iteration network (GVIN), which is an end-to-end neural network planning module. GVIN emulates the value iteration algorithm by using a novel graph convolution operator, which enables GVIN to learn and plan on irregular spatial graphs. We propose three novel differentiable kernels as graph convolution operators and show that the embedding-based kernel achieves the best performance. Furthermore, we present episodic Q-learning, an improvement upon traditional n-step Q-learning that stabilizes training for VIN and GVIN. Lastly, we evaluate GVIN on planning problems in 2D mazes, irregular graphs, and realworld street networks, showing that GVIN generalizes well for both arbitrary graphs and unseen graphs of larger scale and outperforms a naive generalization of VIN (discretizing a spatial graph into a 2D image).",
"title": ""
},
{
"docid": "8014e07969adad7e6db3bb222afaf7d2",
"text": "Scratch is a visual programming environment that is widely used by young people. We investigated if Scratch can be used to teach concepts of computer science. We developed new learning materials for middle-school students that were designed according to the constructionist philosophy of Scratch and evaluated them in two schools. The classes were normal classes, not extracurricular activities whose participants are self-selected. Questionnaires and a test were constructed based upon a novel combination of the Revised Bloom Taxonomy and the SOLO taxonomy. These quantitative instruments were augmented with a qualitative analysis of observations within the classes. The results showed that in general students could successfully learn important concepts of computer science, although there were some problems with initialization, variables and concurrency; these problems can be overcome by modifications to the teaching process.",
"title": ""
}
] |
scidocsrr
|
710db1b01100dcd3e8b6b7aa3bc9ecf1
|
Occurrence and distribution of microplastics in marine sediments along the Belgian coast.
|
[
{
"docid": "3df9bacf95281fc609ee7fd2d4724e91",
"text": "The deleterious effects of plastic debris on the marine environment were reviewed by bringing together most of the literature published so far on the topic. A large number of marine species is known to be harmed and/or killed by plastic debris, which could jeopardize their survival, especially since many are already endangered by other forms of anthropogenic activities. Marine animals are mostly affected through entanglement in and ingestion of plastic litter. Other less known threats include the use of plastic debris by \"invader\" species and the absorption of polychlorinated biphenyls from ingested plastics. Less conspicuous forms, such as plastic pellets and \"scrubbers\" are also hazardous. To address the problem of plastic debris in the oceans is a difficult task, and a variety of approaches are urgently required. Some of the ways to mitigate the problem are discussed.",
"title": ""
}
] |
[
{
"docid": "488110f56eee525ae4f06f21da795f78",
"text": "Recently, a technique called Layer-wise Relevance Propagation (LRP) was shown to deliver insightful explanations in the form of input space relevances for understanding feed-forward neural network classification decisions. In the present work, we extend the usage of LRP to recurrent neural networks. We propose a specific propagation rule applicable to multiplicative connections as they arise in recurrent network architectures such as LSTMs and GRUs. We apply our technique to a word-based bi-directional LSTM model on a five-class sentiment prediction task, and evaluate the resulting LRP relevances both qualitatively and quantitatively, obtaining better results than a gradient-based related method which was used in previous work.",
"title": ""
},
{
"docid": "1d2485f8a4e2a5a9f983bfee3e036b92",
"text": "Partial differential equations (PDEs) are commonly derived based on empirical observations. However, recent advances of technology enable us to collect and store massive amount of data, which offers new opportunities for data-driven discovery of PDEs. In this paper, we propose a new deep neural network, called PDE-Net 2.0, to discover (time-dependent) PDEs from observed dynamic data with minor prior knowledge on the underlying mechanism that drives the dynamics. The design of PDE-Net 2.0 is based on our earlier work [1] where the original version of PDE-Net was proposed. PDE-Net 2.0 is a combination of numerical approximation of differential operators by convolutions and a symbolic multi-layer neural network for model recovery. Comparing with existing approaches, PDE-Net 2.0 has the most flexibility and expressive power by learning both differential operators and the nonlinear response function of the underlying PDE model. Numerical experiments show that the PDE-Net 2.0 has the potential to uncover the hidden PDE of the observed dynamics, and predict the dynamical behavior for a relatively long time, even in a noisy environment.",
"title": ""
},
{
"docid": "ec190792463289a900c1e659012de098",
"text": "Trophic interactions between bacteria, viruses, and protozoan predators play crucial roles in structuring aquatic microbial communities and regulating microbe-mediated ecosystem functions (biogeochemical processes). In this microbial food web, protozoan predators and viruses share bacteria as a common resource, and protozoan predators can kill viruses [intraguild predation (IGP)] and vice versa, even though these latter processes are probably of less importance. However, protozoan predators (IG predator) and viruses (IG prey) generally occur together in various environments, and this cannot be fully explained by the classic IGP models. In addition, controlled experiments have often demonstrated that protozoan predators have apparently positive effects on viral activity. These surprising patterns can be explained by indirect interactions between them via induced trait changes in bacterial assemblages, which can be compared with trait-mediated indirect interactions (TMIIs) in terrestrial plant–insect systems. Here, we review some trait changes in bacterial assemblages that may positively affect the activities and abundance of viruses. It has been suggested that in bacterial assemblages, protozoan predation may enhance growth conditions for individual bacteria and induce both phenotypic trait changes at the individual (e.g., filament-forming bacteria) and group level as a result of changes in bacterial community composition (e.g., species dominance). We discuss the specificities of aquatic microbial systems and attempt find functional similarities between aquatic microbial systems and terrestrial plant–insect systems with regard to TMII function.",
"title": ""
},
{
"docid": "59db435e906db2c198afdc5cc7c7de2c",
"text": "Although the recent advances in the sparse representations of images have achieved outstanding denosing results, removing real, structured noise in digital videos remains a challenging problem. We show the utility of reliable motion estimation to establish temporal correspondence across frames in order to achieve high-quality video denoising. In this paper, we propose an adaptive video denosing framework that integrates robust optical flow into a non-local means (NLM) framework with noise level estimation. The spatial regularization in optical flow is the key to ensure temporal coherence in removing structured noise. Furthermore, we introduce approximate K-nearest neighbor matching to significantly reduce the complexity of classical NLM methods. Experimental results show that our system is comparable with the state of the art in removing AWGN, and significantly outperforms the state of the art in removing real, structured noise.",
"title": ""
},
{
"docid": "b8963bbc58acc4699e5778cf50583208",
"text": "Conceptual Metaphor Theory is a promising model that despite its deficiencies can be used to account for a number of phenomena in figurative language use. The paper reviews the arguments in favour of and against Conceptual Metaphor Theory in terms of the data, methodology and content. Since the model focuses on regularities, it is less useful in the study of idioms, where irregularities are also found. It has, however, enormous potential as it integrates corpusand discourse-driven findings.",
"title": ""
},
{
"docid": "170f14fbf337186c8bd9f36390916d2e",
"text": "In this paper, we draw upon two sets of theoretical resources to develop a comprehensive theory of sexual offender rehabilitation named the Good Lives Model-Comprehensive (GLM-C). The original Good Lives Model (GLM-O) forms the overarching values and principles guiding clinical practice in the GLM-C. In addition, the latest sexual offender theory (i.e., the Integrated Theory of Sexual Offending; ITSO) provides a clear etiological grounding for these principles. The result is a more substantial and improved rehabilitation model that is able to conceptually link latest etiological theory with clinical practice. Analysis of the GLM-C reveals that it also has the theoretical resources to secure currently used self-regulatory treatment practice within a meaningful structure. D 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "867d6a1aa9699ba7178695c45a10d23e",
"text": "A study of different on-line adaptive classifiers, using various feature types is presented. Motor imagery brain computer interface (BCI) experiments were carried out with 18 naive able-bodied subjects. Experiments were done with three two-class, cue-based, electroencephalogram (EEG)-based systems. Two continuously adaptive classifiers were tested: adaptive quadratic and linear discriminant analysis. Three feature types were analyzed, adaptive autoregressive parameters, logarithmic band power estimates and the concatenation of both. Results show that all systems are stable and that the concatenation of features with continuously adaptive linear discriminant analysis classifier is the best choice of all. Also, a comparison of the latter with a discontinuously updated linear discriminant analysis, carried out in on-line experiments with six subjects, showed that on-line adaptation performed significantly better than a discontinuous update. Finally a static subject-specific baseline was also provided and used to compare performance measurements of both types of adaptation",
"title": ""
},
{
"docid": "e6ca4f592446163124bcf00f87ccb8df",
"text": "A full-vector beam propagation method based on a finite-element scheme for a helicoidal system is developed. The permittivity and permeability tensors of a straight waveguide are replaced with equivalent ones for a helicoidal system, obtained by transformation optics. A cylindrical, perfectly matched layer is implemented for the absorbing boundary condition. To treat wide-angle beam propagation, a second-order differentiation term with respect to the propagation direction is directly discretized without using a conventional Padé approximation. The transmission spectra of twisted photonic crystal fibers are thoroughly investigated, and it is found that the diameters of the air holes greatly affect the spectra. The calculated results are in good agreement with the recently reported measured results, showing the validity and usefulness of the method developed here.",
"title": ""
},
{
"docid": "ec915a68ef9ee615807412bc7f096460",
"text": "Distributed representations of sentences have been developed recently to represent their meaning as real-valued vectors. However, it is not clear how much information such representations retain about the polarity of sentences. To study this question, we decode sentiment from sentence representations learned with different architectures (sensitive to the order of words, the order of sentences, or none) in 9 typologically diverse languages. Sentiment results from the (recursive) composition of lexical items and grammatical strategies such as negation and concession. The results are manifold: we show that there is no ’one-sizefits-all’ representation architecture outperforming the others across the board. Rather, the top-ranking architectures depend on the language at hand. Moreover, we find that in several cases the additive composition model based on skip-gram word vectors may surpass state-of-art architectures such as bi-directional LSTMs. Finally, we provide a possible explanation of the observed variation based on the type of negative constructions in each language.",
"title": ""
},
{
"docid": "41b8c1b04f11f5ac86d1d6e696007036",
"text": "The neural systems involved in hearing and repeating single words were investigated in a series of experiments using PET. Neuropsychological and psycholinguistic studies implicate the involvement of posterior and anterior left perisylvian regions (Wernicke's and Broca's areas). Although previous functional neuroimaging studies have consistently shown activation of Wernicke's area, there has been only variable implication of Broca's area. This study demonstrates that Broca's area is involved in both auditory word perception and repetition but activation is dependent on task (greater during repetition than hearing) and stimulus presentation (greater when hearing words at a slow rate). The peak of frontal activation in response to hearing words is anterior to that associated with repeating words; the former is probably located in Brodmann's area 45, the latter in Brodmann's area 44 and the adjacent precentral sulcus. As Broca's area activation is more subtle and complex than that in Wernicke's area during these tasks, the likelihood of observing it is influenced by both the study design and the image analysis technique employed. As a secondary outcome from the study, the response of bilateral auditory association cortex to 'own voice' during repetition was shown to be the same as when listening to \"other voice' from a prerecorded tape.",
"title": ""
},
{
"docid": "37063598a4902435c1cb2142879b4094",
"text": "Thermal/residual deformations and stresses in plastic integrated circuit (IC) packages caused by epoxy molding compound (EMC) during the manufacturing process are investigated experimentally (only for deformations), theoretically, and numerically. A real-time Twyman-Green interferometry is used for measuring the out-of-plane thermal and residual deformations of die/EMC bi-material specimens. Dynamic mechanical analysis (DMA) and thermomechanical analysis (TMA) are for characterizing thermomechanical properties of the EMC materials. A finite element model (FEM) and theory associated with experimental observations are employed for understanding the thermal/residual deformations and stresses of IC packages due to EMC encapsulation. It is shown that EMC materials must be fully cured so that the material properties are stable enough for applications. Experimental results show that the EMC material experiences stress relaxation due to its viscoelastic behavior during the post mold curing (PMC) process. As a result, the strains (stresses) resulted from the chemical shrinkage of the EMC curing could be relaxed during the PMC process, so that the chemical shrinkage has no effect on the residual strains (stresses) for the plastic packages being post cured. Compared with numerical and theoretical analyses, the experimental results have demonstrated that die/EMC bi-material structure at high temperature (above Tg) warps less than expected, as a result of viscoelastic stress relaxation of EMC at high temperature (during solder reflow process). Meanwhile, this stress relaxation can also cause shifting this zero-stress temperature to the higher one, so that the residual deformations (stresses) of die/EMC bi-material specimens were found to increase by about 40% after the solder reflow process. The residual and thermal stresses have been resolved by FEM and theoretical analyses. The results suggest that the pure bending stresses (without shear and peel stresses) of the bi-material specimens are only limited in the region from x= 0 (the center) to x= 0.75 L due to the free edge effects, but this region is shrunk down to x= 0.4L at 200degC. And the maximum warpage and bending stress per unit temperature change is occurred around 165degC (Tg of the EMC). This study has demonstrated that the Twyman-Green experiment with associated bi-material plate theory and FEM can provide a useful tool for studying the EMC-induce residual/thermal deformations and stresses during the IC packaging fabrication",
"title": ""
},
{
"docid": "fdbcf90ffeebf9aab41833df0fff23e6",
"text": "(Under the direction of Anselmo Lastra) For image synthesis in computer graphics, two major approaches for representing a surface's appearance are texture mapping, which provides spatial detail, such as wallpaper, or wood grain; and the 4D bi-directional reflectance distribution function (BRDF) which provides angular detail, telling how light reflects off surfaces. I combine these two modes of variation to form the 6D spatial bi-directional reflectance distribution function (SBRDF). My compact SBRDF representation simply stores BRDF coefficients at each pixel of a map. I propose SBRDFs as a surface appearance representation for computer graphics and present a complete system for their use. I acquire SBRDFs of real surfaces using a device that simultaneously measures the BRDF of every point on a material. The system has the novel ability to measure anisotropy (direction of threads, scratches, or grain) uniquely at each surface point. I fit BRDF parameters using an efficient nonlinear optimization approach specific to BRDFs. SBRDFs can be rendered using graphics hardware. My approach yields significantly more detailed, general surface appearance than existing techniques for a competitive rendering cost. I also propose an SBRDF rendering method for global illumination using prefiltered environment maps. This improves on existing prefiltered environment map techniques by decoupling the BRDF from the environment maps, so a single set of maps may be used to illuminate the unique BRDFs at each surface point. I demonstrate my results using measured surfaces including gilded wallpaper, plant leaves, upholstery fabrics, wrinkled gift-wrapping paper and glossy book covers. iv To Tiffany, who has worked harder and sacrificed more for this than have I. ACKNOWLEDGMENTS I appreciate the time, guidance and example of Anselmo Lastra, my advisor. I'm grateful to Steve Molnar for being my mentor throughout graduate school. I'm grateful to the other members of my committee, Henry Fuchs, Gary Bishop, and Lars Nyland for helping and teaching me and creating an environment that allows research to be done successfully and pleasantly. I am grateful for the effort and collaboration of Ben Cloward, who masterfully modeled the Carolina Inn lobby, patiently worked with my software, and taught me much of how artists use computer graphics. I appreciate the collaboration of Wolfgang Heidrich, who worked hard on this project and helped me get up to speed on shading with graphics hardware. I'm thankful to Steve Westin, for patiently teaching me a great deal about surface appearance and light measurement. I'm grateful for …",
"title": ""
},
{
"docid": "d066670bbf58a2c96fa3ef2c037166b1",
"text": "Artificial neural networks are applied in many situations. neuralnet is built to train multi-layer perceptrons in the context of regression analyses, i.e. to approximate functional relationships between covariates and response variables. Thus, neural networks are used as extensions of generalized linear models. neuralnet is a very flexible package. The backpropagation algorithm and three versions of resilient backpropagation are implemented and it provides a custom-choice of activation and error function. An arbitrary number of covariates and response variables as well as of hidden layers can theoretically be included. The paper gives a brief introduction to multilayer perceptrons and resilient backpropagation and demonstrates the application of neuralnet using the data set infert, which is contained in the R distribution.",
"title": ""
},
{
"docid": "8ed595c86f82f8801031298bfb70f334",
"text": "This paper describes a fundamental limitation of the implicit anti-aliasing filter in continuous-time feedback compensated sigma-delta modulators and a method to overcome it. This anti-aliasing filter can be used to relax an additionally required pre filtering stage. If a strong alias rejection is required, commonly used continuous-time feedback compensated sigma-delta modulators would need some pre-filtering or a large oversampling ratio. The described technique generates a notch at the sampling frequency to improve the performance of the antialiasing filter. This method is possible without additional active components or large oversampling ratios. It is also very robust against component mismatch.",
"title": ""
},
{
"docid": "785cb08c500aea1ead360138430ba018",
"text": "A recent “third wave” of neural network (NN) approaches now delivers state-of-the-art performance in many machine learning tasks, spanning speech recognition, computer vision, and natural language processing. Because these modern NNs often comprise multiple interconnected layers, work in this area is often referred to as deep learning. Recent years have witnessed an explosive growth of research into NN-based approaches to information retrieval (IR). A significant body of work has now been created. In this paper, we survey the current landscape of Neural IR research, paying special attention to the use of learned distributed representations of textual units. We highlight the successes of neural IR thus far, catalog obstacles to its wider adoption, and suggest potentially promising directions for future research.",
"title": ""
},
{
"docid": "2a39202664217724ea0a49ceb83a82af",
"text": "This article proposes a competitive divide-and-conquer algorithm for solving large-scale black-box optimization problems for which there are thousands of decision variables and the algebraic models of the problems are unavailable. We focus on problems that are partially additively separable, since this type of problem can be further decomposed into a number of smaller independent subproblems. The proposed algorithm addresses two important issues in solving large-scale black-box optimization: (1) the identification of the independent subproblems without explicitly knowing the formula of the objective function and (2) the optimization of the identified black-box subproblems. First, a Global Differential Grouping (GDG) method is proposed to identify the independent subproblems. Then, a variant of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is adopted to solve the subproblems resulting from its rotation invariance property. GDG and CMA-ES work together under the cooperative co-evolution framework. The resultant algorithm, named CC-GDG-CMAES, is then evaluated on the CEC’2010 large-scale global optimization (LSGO) benchmark functions, which have a thousand decision variables and black-box objective functions. The experimental results show that, on most test functions evaluated in this study, GDG manages to obtain an ideal partition of the index set of the decision variables, and CC-GDG-CMAES outperforms the state-of-the-art results. Moreover, the competitive performance of the well-known CMA-ES is extended from low-dimensional to high-dimensional black-box problems.",
"title": ""
},
{
"docid": "829eafadf393a66308db452eeef617d5",
"text": "The goal of creating non-biological intelligence has been with us for a long time, predating the nominal 1956 establishment of the field of artificial intelligence by centuries or, under some definitions, even by millennia. For much of this history it was reasonable to recast the goal of “creating” intelligence as that of “designing” intelligence. For example, it would have been reasonable in the 17th century, as Leibnitz was writing about reasoning as a form of calculation, to think that the process of creating artificial intelligence would have to be something like the process of creating a waterwheel or a pocket watch: first understand the principles, then use human intelligence to devise a design based on the principles, and finally build a system in accordance with the design. At the dawn of the 19th century William Paley made such assumptions explicit, arguing that intelligent designers are necessary for the production of complex adaptive systems. And then, of course, Paley was soundly refuted by Charles Darwin in 1859. Darwin showed how complex and adaptive systems can arise naturally from a process of selection acting on random variation. That is, he showed that complex and adaptive design could be created without an intelligent designer. On the basis of evidence from paleontology, molecular biology, and evolutionary theory we now understand that nearly all of the interesting features of biological agents, including intelligence, have arisen through roughly Darwinian evolutionary processes (with a few important refinements, some of which are mentioned below). But there are still some holdouts for the pre-Darwinian view. A recent survey in the United States found that 42% of respondents expressed a belief that “Life on Earth has existed in its present form since the beginning of time” [7], and these views are supported by powerful political forces including a stridently anti-science President. These shocking political realities are, however, beyond the scope of the present essay. This essay addresses a more subtle form of pre-Darwinian thinking that occurs even among the scientifically literate, and indeed even among highly trained scientists conducting advanced AI research. Those who engage in this form of pre-Darwinian thinking accept the evidence for the evolution of terrestrial life but ignore or even explicitly deny the power of evolutionary processes to produce adaptive complexity in other contexts. Within the artificial intelligence research community those who engage in this form of thinking ignore or deny the power of evolutionary processes to create machine intelligence. Before exploring this complaint further it is worth asking whether an evolved artificial intelligence would even serve the broader goals of AI as a field. Every AI text opens by defining the field, and some of the proffered definitions are explicitly oriented toward design—presumably design by intelligent humans. For example Dean et al. define AI as “the design and study of computer programs that behave intelligently” [2, p. 1]. Would the field, so defined, be served by the demonstration of an evolved artificial intelligence? It would insofar as we could study the evolved system and particularly if we could use our resulting understanding as the basis for future designs. So even the most design-oriented AI researchers should be interested in evolved artificial intelligence if it can in fact be created.",
"title": ""
},
{
"docid": "7edfde7d7875d88702db2aabc4ac2883",
"text": "This paper proposes a novel approach to build integer multiplication circuits based on speculation, a technique which performs a faster-but occasionally wrong-operation resorting to a multi-cycle error correction circuit only in the rare case of error. The proposed speculative multiplier uses a novel speculative carry-save reduction tree using three steps: partial products recoding, partial products partitioning, speculative compression. The speculative tree uses speculative (m:2) counters, with m > 3, that are faster than a conventional tree using full-adders and half-adders. A technique to automatically choose the suitable speculative counters, taking into accounts both error probability and delay, is also presented in the paper. The speculative tree is completed with a fast speculative carry-propagate adder and an error correction circuit. We have synthesized speculative multipliers for several operand lengths using the UMC 65 nm library. Comparisons with conventional multipliers show that speculation is effective when high speed is required. Speculative multipliers allow reaching a higher speed compared with conventional counterparts and are also quite effective in terms of power dissipation, when a high speed operation is required.",
"title": ""
},
{
"docid": "6aed3ffa374139fa9c4e0b7c1afb7841",
"text": "Recent longitudinal and cross-sectional aging research has shown that personality traits continue to change in adulthood. In this article, we review the evidence for mean-level change in personality traits, as well as for individual differences in change across the life span. In terms of mean-level change, people show increased selfconfidence, warmth, self-control, and emotional stability with age. These changes predominate in young adulthood (age 20-40). Moreover, mean-level change in personality traits occurs in middle and old age, showing that personality traits can change at any age. In terms of individual differences in personality change, people demonstrate unique patterns of development at all stages of the life course, and these patterns appear to be the result of specific life experiences that pertain to a person's stage of life.",
"title": ""
},
{
"docid": "c034cb6e72bc023a60b54d0f8316045a",
"text": "This thesis presents the design, implementation, and valid ation of a system that enables a micro air vehicle to autonomously explore and map unstruct u ed and unknown indoor environments. Such a vehicle would be of considerable use in many real-world applications such as search and rescue, civil engineering inspection, an d a host of military tasks where it is dangerous or difficult to send people. While mapping and exploration capabilities are common for ground vehicles today, air vehicles seeking t o achieve these capabilities face unique challenges. While there has been recent progres s toward sensing, control, and navigation suites for GPS-denied flight, there have been few demonstrations of stable, goal-directed flight in real environments. The main focus of this research is the development of real-ti me state estimation techniques that allow our quadrotor helicopter to fly autonomous ly in indoor, GPS-denied environments. Accomplishing this feat required the developm ent of a large integrated system that brought together many components into a cohesive packa ge. As such, the primary contribution is the development of the complete working sys tem. I show experimental results that illustrate the MAV’s ability to navigate accurat ely in unknown environments, and demonstrate that our algorithms enable the MAV to operate au tonomously in a variety of indoor environments. Thesis Supervisor: Nicholas Roy Title: Associate Professor of Aeronautics and Astronautic s",
"title": ""
}
] |
scidocsrr
|
124e4bf43f120613c8532b111157ea96
|
Encrypted accelerated least squares regression
|
[
{
"docid": "4e0e6ca2f4e145c17743c42944da4cc8",
"text": "We demonstrate that, by using a recently proposed leveled homomorphic encryption scheme, it is possible to delegate the execution of a machine learning algorithm to a computing service while retaining confidentiality of the training and test data. Since the computational complexity of the homomorphic encryption scheme depends primarily on the number of levels of multiplications to be carried out on the encrypted data, we define a new class of machine learning algorithms in which the algorithm’s predictions, viewed as functions of the input data, can be expressed as polynomials of bounded degree. We propose confidential algorithms for binary classification based on polynomial approximations to least-squares solutions obtained by a small number of gradient descent steps. We present experimental validation of the confidential machine learning pipeline and discuss the trade-offs regarding computational complexity, prediction accuracy and cryptographic security.",
"title": ""
},
{
"docid": "ef444570c043be67453317e26600972f",
"text": "In multiple regression it is shown that parameter estimates based on minimum residual sum of squares have a high probability of being unsatisfactory, if not incorrect, if the prediction vectors are not orthogonal. Proposed is an estimation procedure based on adding small positive quantities to the diagonal of X’X. Introduced is the ridge trace, a method for showing in two dimensions the effects of nonorthogonality. It is then shown how to augment X’X to obtain biased estimates with smaller mean square error.",
"title": ""
}
] |
[
{
"docid": "7432009332e13ebc473c9157505cb59c",
"text": "The use of future contextual information is typically shown to be helpful for acoustic modeling. However, for the recurrent neural network (RNN), it’s not so easy to model the future temporal context effectively, meanwhile keep lower model latency. In this paper, we attempt to design a RNN acoustic model that being capable of utilizing the future context effectively and directly, with the model latency and computation cost as low as possible. The proposed model is based on the minimal gated recurrent unit (mGRU) with an input projection layer inserted in it. Two context modules, temporal encoding and temporal convolution, are specifically designed for this architecture to model the future context. Experimental results on the Switchboard task and an internal Mandarin ASR task show that, the proposed model performs much better than long short-term memory (LSTM) and mGRU models, whereas enables online decoding with a maximum latency of 170 ms. This model even outperforms a very strong baseline, TDNN-LSTM, with smaller model latency and almost half less parameters.",
"title": ""
},
{
"docid": "4eca3018852fd3107cb76d1d95f76a0a",
"text": "Within the past decade, empirical evidence has emerged supporting the use of Acceptance and Commitment Therapy (ACT) targeting shame and self-stigma. Little is known about the role of self-compassion in ACT, but evidence from other approaches indicates that self-compassion is a promising means of reducing shame and self-criticism. The ACT processes of defusion, acceptance, present moment, values, committed action, and self-as-context are to some degree inherently self-compassionate. However, it is not yet known whether the self-compassion inherent in the ACT approach explains ACT’s effectiveness in reducing shame and stigma, and/or whether focused self-compassion work may improve ACT outcomes for highly self-critical, shame-prone people. We discuss how ACT for shame and stigma may be enhanced by existing approaches specifically targeting self-compassion.",
"title": ""
},
{
"docid": "8ef1592544071c485d82c0848d02a2d0",
"text": "Auditory beat stimulation may be a promising new tool for the manipulation of cognitive processes and the modulation of mood states. Here, we aim to review the literature examining the most current applications of auditory beat stimulation and its targets. We give a brief overview of research on auditory steady-state responses and its relationship to auditory beat stimulation (ABS). We have summarized relevant studies investigating the neurophysiological changes related to ABS and how they impact upon the design of appropriate stimulation protocols. Focusing on binaural-beat stimulation, we then discuss the role of monaural- and binaural-beat frequencies in cognition and mood states, in addition to their efficacy in targeting disease symptoms. We aim to highlight important points concerning stimulation parameters and try to address why there are often contradictory findings with regard to the outcomes of ABS.",
"title": ""
},
{
"docid": "9f9302cf8560b65bed7688f5339a865c",
"text": "Understanding short texts is crucial to many applications, but challenges abound. First, short texts do not always observe the syntax of a written language. As a result, traditional natural language processing tools, ranging from part-of-speech tagging to dependency parsing, cannot be easily applied. Second, short texts usually do not contain sufficient statistical signals to support many state-of-the-art approaches for text mining such as topic modeling. Third, short texts are more ambiguous and noisy, and are generated in an enormous volume, which further increases the difficulty to handle them. We argue that semantic knowledge is required in order to better understand short texts. In this work, we build a prototype system for short text understanding which exploits semantic knowledge provided by a well-known knowledgebase and automatically harvested from a web corpus. Our knowledge-intensive approaches disrupt traditional methods for tasks such as text segmentation, part-of-speech tagging, and concept labeling, in the sense that we focus on semantics in all these tasks. We conduct a comprehensive performance evaluation on real-life data. The results show that semantic knowledge is indispensable for short text understanding, and our knowledge-intensive approaches are both effective and efficient in discovering semantics of short texts.",
"title": ""
},
{
"docid": "aace50c8446403a9f72b24bce1e88c30",
"text": "This paper presents a model-driven approach to the development of web applications based on the Ubiquitous Web Application (UWA) design framework, the Model-View-Controller (MVC) architectural pattern and the JavaServer Faces technology. The approach combines a complete and robust methodology for the user-centered conceptual design of web applications with the MVC metaphor, which improves separation of business logic and data presentation. The proposed approach, by carrying the advantages of ModelDriven Development (MDD) and user-centered design, produces Web applications which are of high quality from the user's point of view and easier to maintain and evolve.",
"title": ""
},
{
"docid": "e5b8368f13bf0f5e1969910d1ef81ac4",
"text": "BACKGROUND\nIn girls who present with vaginal trauma, sexual abuse is often the primary diagnosis. The differential diagnosis must include patterns and the mechanism of injury that differentiate accidental injuries from inflicted trauma.\n\n\nCASE\nA 7-year-old prepubertal girl presented to the emergency department with genital bleeding after a serious accidental impaling injury from inline skating. After rapid abduction of the legs and a fall onto the blade of an inline skate this child incurred an impaling genital injury consistent with an accidental mechanism. The dramatic genital injuries when repaired healed with almost imperceptible residual evidence of previous trauma.\n\n\nSUMMARY AND CONCLUSION\nTo our knowledge, this case report represents the first in the medical literature of an impaling vaginal trauma from an inline skate and describes its clinical and surgical management.",
"title": ""
},
{
"docid": "d55aae728991060ed4ba1f9a6b59e2fe",
"text": "Evolutionary algorithms have become robust tool in data processing and modeling of dynamic, complex and non-linear processes due to their flexible mathematical structure to yield optimal results even with imprecise, ambiguity and noise at its input. The study investigates evolutionary algorithms for solving Sudoku task. Various hybrids are presented here as veritable algorithm for computing dynamic and discrete states in multipoint search in CSPs optimization with application areas to include image and video analysis, communication and network design/reconstruction, control, OS resource allocation and scheduling, multiprocessor load balancing, parallel processing, medicine, finance, security and military, fault diagnosis/recovery, cloud and clustering computing to mention a few. Solution space representation and fitness functions (as common to all algorithms) were discussed. For support and confidence model adopted π1=0.2 and π2=0.8 respectively yields better convergence rates – as other suggested value combinations led to either a slower or non-convergence. CGA found an optimal solution in 32 seconds after 188 iterations in 25runs; while GSAGA found its optimal solution in 18seconds after 402 iterations with a fitness progression achieved in 25runs and consequently, GASA found an optimal solution 2.112seconds after 391 iterations with fitness progression after 25runs respectively.",
"title": ""
},
{
"docid": "063287a98a5a45bc8e38f8f8c193990e",
"text": "This paper investigates the relationship between the contextual factors related to the firm’s decision-maker and the process of international strategic decision-making. The analysis has been conducted focusing on small and medium-sized enterprises (SME). Data for the research came from 111 usable responses to a survey on a sample of SME decision-makers in international field. The results of regression analysis indicate that the context variables, both internal and external, exerted more influence on international strategic decision making process than the decision-maker personality characteristics. DOI: 10.4018/ijabe.2013040101 2 International Journal of Applied Behavioral Economics, 2(2), 1-22, April-June 2013 Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. The purpose of this paper is to reverse this trend and to explore the different dimensions of SMEs’ strategic decision-making process in international decisions and, within these dimensions, we want to understand if are related to the decision-maker characteristics and also to broader contextual factors characteristics. The paper is organized as follows. In the second section the concepts of strategic decision-making process and factors influencing international SDMP are approached. Next, the research methodology, findings analysis and discussion will be presented. Finally, conclusions, limitations of the study and suggestions for future research are explored. THEORETICAL BACKGROUND Strategic Decision-Making Process The process of making strategic decisions has emerged as one of the most important themes of strategy research over the last two decades (Papadakis, 2006; Papadakis & Barwise, 2002). According to Harrison (1996), the SMDP can be defined as a combination of the concepts of strategic gap and management decision making process, with the former “determined by comparing the organization’s inherent capabilities with the opportunities and threats in its external environment”, while the latter is composed by a set of decision-making functions logically connected, that begins with the setting of managerial objective, followed by the search for information to develop a set of alternatives, that are consecutively compared and evaluated, and selected. Afterward, the selected alternative is implemented and, finally, it is subjected to follow-up and control. Other authors (Fredrickson, 1984; Mintzberg, Raisinghani, & Theoret, 1976) developed several models of strategic decision-making process since 1970, mainly based on the number of stages (Nooraie, 2008; Nutt, 2008). Although different researches investigated SDMP with specific reference to either small firms (Brouthers, et al., 1998; Gibcus, Vermeulen, & Jong, 2009; Huang, 2009; Jocumsen, 2004), or internationalization process (Aharoni, Tihanyi, & Connelly, 2011; Dimitratos, et al., 2011; Nielsen & Nielsen, 2011), there is a lack of studies that examine the SDMP in both perspectives. In this study we decided to mainly follow the SDMP defined by Harrison (1996) adapted to the international arena and particularly referred to market development decisions. Thus, for the definition of objectives (first phase) we refer to those in international field, for search for information, development and comparison of alternatives related to foreign markets (second phase) we refer to the systematic International Market Selection (IMS), and to the Entry Mode Selection (EMS) methodologies. 
For the implementation of the selected alternative (third phase) we mainly mean the entering in a particular foreign market with a specific entry mode, and finally, for follow-up and control (fourth phase) we refer to the control and evaluation of international activities. Dimensions of the Strategic Decision-Making Process Several authors attempted to implement a set of dimensions in approaching strategic process characteristics, and the most adopted are: • Rationality; • Formalization; • Hierarchical Decentralization and lateral communication; • Political Behavior.",
"title": ""
},
{
"docid": "ceb725186e5312601091157769c07b5f",
"text": "Much of the focus in the design of deep neural networks has been on improving accuracy, leading to more powerful yet highly complex network architectures that are difficult to deploy in practical scenarios, particularly on edge devices such as mobile and other consumer devices, given their high computational and memory requirements. As a result, there has been a recent interest in the design of quantitative metrics for evaluating deep neural networks that accounts for more than just model accuracy as the sole indicator of network performance. In this study, we continue the conversation towards universal metrics for evaluating the performance of deep neural networks for practical usage. In particular, we propose a new balanced metric called NetScore, which is designed specifically to provide a quantitative assessment of the balance between accuracy, computational complexity, and network architecture complexity of a deep neural network. In what is one of the largest comparative analysis between deep neural networks in literature, the NetScore metric, the top-1 accuracy metric, and the popular information density metric were compared across a diverse set of 50 different deep convolutional neural networks for image classification on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC 2012) dataset. The evaluation results across these three metrics for this diverse set of networks are presented in this study to act as a reference guide for practitioners in the field. The proposed NetScore metric, along with the other tested metrics, are by no means perfect, but the hope is to push the conversation towards better universal metrics for evaluating deep neural networks for use in practical scenarios to help guide practitioners in model design.",
"title": ""
},
{
"docid": "a91a57326a2d961e24d13b844a3556cf",
"text": "This paper describes an interactive and adaptive streaming architecture that exploits temporal concatenation of H.264/AVC video bit-streams to dynamically adapt to both user commands and network conditions. The architecture has been designed to improve the viewing experience when accessing video content through individual and potentially bandwidth constrained connections. On the one hand, the user commands typically gives the client the opportunity to select interactively a preferred version among the multiple video clips that are made available to render the scene, e.g. using different view angles, or zoomed-in and slowmotion factors. On the other hand, the adaptation to the network bandwidth ensures effective management of the client buffer, which appears to be fundamental to reduce the client-server interaction latency, while maximizing video quality and preventing buffer underflow. In addition to user interaction and network adaptation, the deployment of fully autonomous infrastructures for interactive content distribution also requires the development of automatic versioning methods. Hence, the paper also surveys a number of approaches proposed for this purpose in surveillance and sport event contexts. Both objective metrics and subjective experiments are exploited to assess our system.",
"title": ""
},
{
"docid": "1d1eeb2f5a16fd8e1deed16a5839505b",
"text": "Searchable symmetric encryption (SSE) is a widely popular cryptographic technique that supports the search functionality over encrypted data on the cloud. Despite the usefulness, however, most of existing SSE schemes leak the search pattern, from which an adversary is able to tell whether two queries are for the same keyword. In recent years, it has been shown that the search pattern leakage can be exploited to launch attacks to compromise the confidentiality of the client’s queried keywords. In this paper, we present a new SSE scheme which enables the client to search encrypted cloud data without disclosing the search pattern. Our scheme uniquely bridges together the advanced cryptographic techniques of chameleon hashing and indistinguishability obfuscation. In our scheme, the secure search tokens for plaintext keywords are generated in a randomized manner, so it is infeasible to tell whether the underlying plaintext keywords are the same given two secure search tokens. In this way, our scheme well avoids using deterministic secure search tokens, which is the root cause of the search pattern leakage. We provide rigorous security proofs to justify the security strengths of our scheme. In addition, we also conduct extensive experiments to demonstrate the performance. Although our scheme for the time being is not immediately applicable due to the current inefficiency of indistinguishability obfuscation, we are aware that research endeavors on making indistinguishability obfuscation practical is actively ongoing and the practical efficiency improvement of indistinguishability obfuscation will directly lead to the applicability of our scheme. Our paper is a new attempt that pushes forward the research on SSE with concealed search pattern.",
"title": ""
},
{
"docid": "53c0564d82737d51ca9b7ea96a624be4",
"text": "In part 1 of this article, an occupational therapy model of practice for children with attention deficit hyperactivity disorder (ADHD) was described (Chu and Reynolds 2007). It addressed some specific areas of human functioning related to children with ADHD in order to guide the practice of occupational therapy. The model provides an approach to identifying and communicating occupational performance difficulties in relation to the interaction between the child, the environment and the demands of the task. A family-centred occupational therapy assessment and treatment package based on the model was outlined. The delivery of the package was underpinned by the principles of the family-centred care approach. Part 2 of this two-part article reports on a multicentre study, which was designed to evaluate the effectiveness and acceptability of the proposed assessment and treatment package and thereby to offer some validation of the delineation model. It is important to note that no treatment has yet been proved to ‘cure’ the condition of ADHD or to produce any enduring effects in affected children once the treatment is withdrawn. So far, the only empirically validated treatments for children with ADHD with substantial research evidence are psychostimulant medication, behavioural and educational management, and combined medication and behavioural management (DuPaul and Barkley 1993, A family-centred occupational therapy assessment and treatment package for children with attention deficit hyperactivity disorder (ADHD) was evaluated. The package involves a multidimensional evaluation and a multifaceted intervention, which are aimed at achieving a goodness-of-fit between the child, the task demands and the environment in which the child carries out the task. The package lasts for 3 months, with 12 weekly contacts with the child, parents and teacher. A multicentre study was carried out, with 20 occupational therapists participating. Following a 3-day training course, they implemented the package and supplied the data that they had collected from 20 children. The outcomes were assessed using the ADHD Rating Scales, pre-intervention and post-intervention. The results showed behavioural improvement in the majority of the children. The Measure of Processes of Care – 20-item version (MPOC-20) provided data on the parents’ perceptions of the family-centredness of the package and also showed positive ratings. The results offer some support for the package and the guiding model of practice, but caution should be exercised in generalising the results because of the small sample size, lack of randomisation, absence of a control group and potential experimenter effects from the research therapists. A larger-scale randomised controlled trial should be carried out to evaluate the efficacy of an improved package.",
"title": ""
},
{
"docid": "176386fd6f456d818d7ebf81f65d5030",
"text": "Event-driven architecture is gaining momentum in research and application areas as it promises enhanced responsiveness and asynchronous communication. The combination of event-driven and service-oriented architectural paradigms and web service technologies provide a viable possibility to achieve these promises. This paper outlines an architectural design and accompanying implementation technologies for its realization as a web services-based event-driven SOA.",
"title": ""
},
{
"docid": "ad2d21232d8a9af42ea7339574739eb3",
"text": "Majority of CNN architecture design is aimed at achieving high accuracy in public benchmarks by increasing the complexity. Typically, they are over-specified by a large margin and can be optimized by a factor of 10-100x with only a small reduction in accuracy. In spite of the increase in computational power of embedded systems, these networks are still not suitable for embedded deployment. There is a large need to optimize for hardware and reduce the size of the network by orders of magnitude for computer vision applications. This has led to a growing community which is focused on designing efficient networks. However, CNN architectures are evolving rapidly and efficient architectures seem to lag behind. There is also a gap in understanding the hardware architecture details and incorporating it into the network design. The motivation of this paper is to systematically summarize efficient design techniques and provide guidelines for an application developer. We also perform a case study by benchmarking various semantic segmentation algorithms for autonomous driving.",
"title": ""
},
{
"docid": "cae2b62afbecedc995612ed3a710e9d9",
"text": "Computational Grids, emerging as an infrastructure for next generation computing, enable the sharing, selection, and aggregation of geographically distributed resources for solving large-scale problems in science, engineering, and commerce. As the resources in the Grid are heterogeneous and geographically distributed with varying availability and a variety of usage and cost policies for diverse users at different times and, priorities as well as goals that vary with time. The management of resources and application scheduling in such a large and distributed environment is a complex task. This thesis proposes a distributed computational economy as an effective metaphor for the management of resources and application scheduling. It proposes an architectural framework that supports resource trading and quality of services based scheduling. It enables the regulation of supply and demand for resources and provides an incentive for resource owners for participating in the Grid and motives the users to trade-off between the deadline, budget, and the required level of quality of service. The thesis demonstrates the capability of economicbased systems for peer-to-peer distributed computing by developing users’ quality-of-service requirements driven scheduling strategies and algorithms. It demonstrates their effectiveness by performing scheduling experiments on the World-Wide Grid for solving parameter sweep applications.",
"title": ""
},
{
"docid": "fe842f2857bf3a60166c8f52e769585a",
"text": "We study the problem of explaining a rich class of behavioral properties of deep neural networks. Distinctively, our influence-directed explanations approach this problem by peering inside the network to identify neurons with high influence on a quantity and distribution of interest, using an axiomatically-justified influence measure, and then providing an interpretation for the concepts these neurons represent. We evaluate our approach by demonstrating a number of its unique capabilities on convolutional neural networks trained on ImageNet. Our evaluation demonstrates that influence-directed explanations (1) identify influential concepts that generalize across instances, (2) can be used to extract the “essence” of what the network learned about a class, and (3) isolate individual features the network uses to make decisions and distinguish related classes.",
"title": ""
},
{
"docid": "43bfbebda8dcb788057e1c98b7fccea6",
"text": "Der Beitrag stellt mit Quasar Enterprise einen durchgängigen, serviceorientierten Ansatz zur Gestaltung großer Anwendungslandschaften vor. Er verwendet ein Architektur-Framework zur Strukturierung der methodischen Schritte und führt ein Domänenmodell zur Präzisierung der Begrifflichkeiten und Entwicklungsartefakte ein. Die dargestellten methodischen Bausteine und Richtlinien beruhen auf langjährigen Erfahrungen in der industriellen Softwareentwicklung. 1 Motivation und Hintergrund sd&m beschäftigt sich seit seiner Gründung vor 25 Jahren mit dem Bau von individuellen Anwendungssystemen. Als konsolidierte Grundlage der Arbeit in diesem Bereich wurde Quasar (Quality Software Architecture) entwickelt – die sd&m StandardArchitektur für betriebliche Informationssysteme [Si04]. Quasar dient sd&m als Referenz für seine Disziplin des Baus einzelner Anwendungen. Seit einigen Jahren beschäftigt sich sd&m im Auftrag seiner Kunden mehr und mehr mit Fragestellungen auf der Ebene ganzer Anwendungslandschaften. Das Spektrum reicht von IT-Beratung zur Unternehmensarchitektur, über die Systemintegration querschnittlicher technischer, aber auch dedizierter fachlicher COTS-Produkte bis hin zum Bau einzelner großer Anwendungssysteme auf eine Art und Weise, dass eine perfekte Passung in eine moderne Anwendungslandschaft gegeben ist. Zur Abdeckung dieses breiten Spektrums an Aufgaben wurde eine neue Disziplin zur Gestaltung von Anwendungslandschaften benötigt. sd&m entwickelte hierzu eine neue Referenz – Quasar Enterprise – ein Quasar auf Unternehmensebene.",
"title": ""
},
{
"docid": "d35c176cfe5c8296862513c26f0fdffa",
"text": "Vertical scar mammaplasty, first described by Lötsch in 1923 and Dartigues in 1924 for mastopexy, was extended later to breast reduction by Arié in 1957. It was otherwise lost to surgical history until Lassus began experimenting with it in 1964. It then was extended by Marchac and de Olarte, finally to be popularized by Lejour. Despite initial skepticism, vertical reduction mammaplasty is becoming increasingly popular in recent years because it best incorporates the two concepts of minimal scarring and a satisfactory breast shape. At the moment, vertical scar techniques seem to be more popular in Europe than in the United States. A recent survey, however, has demonstrated that even in the United States, it has surpassed the rate of inverted T-scar breast reductions. The technique, however, is not without major drawbacks, such as long vertical scars extending below the inframammary crease and excessive skin gathering and “dog-ear” at the lower end of the scar that may require long periods for resolution, causing extreme distress to patients and surgeons alike. Efforts are being made to minimize these complications and make the procedure more user-friendly either by modifying it or by replacing it with an alternative that retains the same advantages. Although conceptually opposed to the standard vertical design, the circumvertical modification probably is the most important maneuver for shortening vertical scars. Residual dog-ears often are excised, resulting in a short transverse scar (inverted T- or L-scar). The authors describe limited subdermal undermining of the skin at the inferior edge of the vertical incisions with liposculpture of the inframammary crease, avoiding scar extension altogether. Simplified circumvertical drawing that uses the familiar Wise pattern also is described.",
"title": ""
},
{
"docid": "19863150313643b977f72452bb5a8a69",
"text": "Important research effort has been devoted to the topic of optimal planning of distribution systems. However, in general it has been mostly referred to the design of the primary network, with very modest considerations to the effect of the secondary network in the planning and future operation of the complete grid. Relatively little attention has been paid to the optimization of the secondary grid and to its effect on the optimality of the design of the complete electrical system, although the investment and operation costs of the secondary grid represent an important portion of the total costs. Appropriate design procedures have been proposed separately for both the primary and the secondary grid; however, in general, both planning problems have been presented and treated as different-almost isolated-problems, setting aside with this approximation some important factors that couple both problems, such as the fact that they may share the right of way, use the same poles, etc., among other factors that strongly affect the calculation of the investment costs. The main purpose of this work is the development and initial testing of a model for the optimal planning of a distribution system that includes both the primary and the secondary grids, so that a single optimization problem is stated for the design of the integral primary-secondary distribution system that overcomes these simplifications. The mathematical model incorporates the variables that define both the primary as well as the secondary planning problems and consists of a mixed integer-linear programming problem that may be solved by means of any suitable algorithm. Results are presented of the application of the proposed integral design procedure using conventional mixed integer-linear programming techniques to a real case of a residential primary-secondary distribution system consisting of 75 electrical nodes.",
"title": ""
},
{
"docid": "96029f6daa55fff7a76ab9bd48ebe7b9",
"text": "According to the principle of compositionality, the meaning of a sentence is computed from the meaning of its parts and the way they are syntactically combined. In practice, however, the syntactic structure is computed by automatic parsers which are far-from-perfect and not tuned to the specifics of the task. Current recursive neural network (RNN) approaches for computing sentence meaning therefore run into a number of practical difficulties, including the need to carefully select a parser appropriate for the task, deciding how and to what extent syntactic context modifies the semantic composition function, as well as on how to transform parse trees to conform to the branching settings (typically, binary branching) of the RNN. This paper introduces a new model, the Forest Convolutional Network, that avoids all of these challenges, by taking a parse forest as input, rather than a single tree, and by allowing arbitrary branching factors. We report improvements over the state-of-the-art in sentiment analysis and question classification.",
"title": ""
}
] |
scidocsrr
|
936f998587b76ff8b57021398cccb750
|
How Software Project Risk Affects Project Performance: An Investigation of the Dimensions of Risk and an Exploratory Model
|
[
{
"docid": "4506bc1be6e7b42abc34d79dc426688a",
"text": "The growing interest in Structured Equation Modeling (SEM) techniques and recognition of their importance in IS research suggests the need to compare and contrast different types of SEM techniques so that research designs can be selected appropriately. After assessing the extent to which these techniques are currently being used in IS research, the article presents a running example which analyzes the same dataset via three very different statistical techniques. It then compares two classes of SEM: covariance-based SEM and partial-least-squaresbased SEM. Finally, the article discusses linear regression models and offers guidelines as to when SEM techniques and when regression techniques should be used. The article concludes with heuristics and rule of thumb thresholds to guide practice, and a discussion of the extent to which practice is in accord with these guidelines.",
"title": ""
},
{
"docid": "02b6bcef39a21b14ce327f3dc9671fef",
"text": "We've all heard tales of multimillion dollar mistakes that somehow ran off course. Are software projects that risky or do managers need to take a fresh approach when preparing for such critical expeditions? Software projects are notoriously difficult to manage and too many of them end in failure. In 1995, annual U.S. spending on software projects reached approximately $250 billion and encompassed an estimated 175,000 projects [6]. Despite the costs involved, press reports suggest that project failures are occurring with alarming frequency. In 1995, U.S companies alone spent an estimated $59 billion in cost overruns on IS projects and another $81 billion on canceled software projects [6]. One explanation for the high failure rate is that managers are not taking prudent measures to assess and manage the risks involved in these projects. is Advocates of software project risk management claim that by countering these threats to success, the incidence of failure can be reduced [4, 5]. Before we can develop meaningful risk management strategies, however, we must identify these risks. Furthermore, the relative importance of these risks needs to be established, along with some understanding as to why certain risks are perceived to be more important than others. This is necessary so that managerial attention can be focused on the areas that constitute the greatest threats. Finally, identified risks must be classified in a way that suggests meaningful risk mitigation strategies. Here, we report the results of a Delphi study in which experienced software project managers identified and ranked the most important risks. The study led not only to the identification of risk factors and their relative importance, but also to novel insights into why project managers might view certain risks as being more important than others. Based on these insights, we introduce a framework for classifying software project risks and discuss appropriate strategies for managing each type of risk. Since the 1970s, both academics and practitioners have written about risks associated with managing software projects [1, 2, 4, 5, 7, 8]. Unfortunately , much of what has been written on risk is based either on anecdotal evidence or on studies limited to a narrow portion of the development process. Moreover, no systematic attempts have been made to identify software project risks by tapping the opinions of those who actually have experience in managing such projects. With a few exceptions [3, 8], there has been little attempt to understand the …",
"title": ""
}
] |
[
{
"docid": "24167db00908c65558e8034d94dfb8da",
"text": "Due to the wide variety of devices used in computer network systems, cybersecurity plays a major role in securing and improving the performance of the network or system. Although cybersecurity has received a large amount of global interest in recent years, it remains an open research space. Current security solutions in network-based cyberspace provide an open door to attackers by communicating first before authentication, thereby leaving a black hole for an attacker to enter the system before authentication. This article provides an overview of cyberthreats, traditional security solutions, and the advanced security model to overcome current security drawbacks.",
"title": ""
},
{
"docid": "fedbeb9d39ce91c96d93e05b5856f09e",
"text": "Devices for continuous glucose monitoring (CGM) are currently a major focus of research in the area of diabetes management. It is envisioned that such devices will have the ability to alert a diabetes patient (or the parent or medical care giver of a diabetes patient) of impending hypoglycemic/hyperglycemic events and thereby enable the patient to avoid extreme hypoglycemic/hyperglycemic excursions as well as minimize deviations outside the normal glucose range, thus preventing both life-threatening events and the debilitating complications associated with diabetes. It is anticipated that CGM devices will utilize constant feedback of analytical information from a glucose sensor to activate an insulin delivery pump, thereby ultimately realizing the concept of an artificial pancreas. Depending on whether the CGM device penetrates/breaks the skin and/or the sample is measured extracorporeally, these devices can be categorized as totally invasive, minimally invasive, and noninvasive. In addition, CGM devices are further classified according to the transduction mechanisms used for glucose sensing (i.e., electrochemical, optical, and piezoelectric). However, at present, most of these technologies are plagued by a variety of issues that affect their accuracy and long-term performance. This article presents a critical comparison of existing CGM technologies, highlighting critical issues of device accuracy, foreign body response, calibration, and miniaturization. An outlook on future developments with an emphasis on long-term reliability and performance is also presented.",
"title": ""
},
{
"docid": "0d43f72f92a73b648edd2dc3d1f0d141",
"text": "While egocentric video is becoming increasingly popular, browsing it is very difficult. In this paper we present a compact 3D Convolutional Neural Network (CNN) architecture for long-term activity recognition in egocentric videos. Recognizing long-term activities enables us to temporally segment (index) long and unstructured egocentric videos. Existing methods for this task are based on hand tuned features derived from visible objects, location of hands, as well as optical flow. Given a sparse optical flow volume as input, our CNN classifies the camera wearer's activity. We obtain classification accuracy of 89%, which outperforms the current state-of-the-art by 19%. Additional evaluation is performed on an extended egocentric video dataset, classifying twice the amount of categories than current state-of-the-art. Furthermore, our CNN is able to recognize whether a video is egocentric or not with 99.2% accuracy, up by 24% from current state-of-the-art. To better understand what the network actually learns, we propose a novel visualization of CNN kernels as flow fields.",
"title": ""
},
{
"docid": "d40a1b72029bdc8e00737ef84fdf5681",
"text": "— Ability of deep networks to extract high level features and of recurrent networks to perform time-series inference have been studied. In view of universality of one hidden layer network at approximating functions under weak constraints, the benefit of multiple layers is to enlarge the space of dynamical systems approximated or, given the space, reduce the number of units required for a certain error. Traditionally shallow networks with manually engineered features are used, back-propagation extent is limited to one and attempt to choose a large number of hidden units to satisfy the Markov condition is made. In case of Markov models, it has been shown that many systems need to be modeled as higher order. In the present work, we present deep recurrent networks with longer back-propagation through time extent as a solution to modeling systems that are high order and to predicting ahead. We study epileptic seizure suppression electro-stimulator. Extraction of manually engineered complex features and prediction employing them has not allowed small low-power implementations as, to avoid possibility of surgery, extraction of any features that may be required has to be included. In this solution, a recurrent neural network performs both feature extraction and prediction. We prove analytically that adding hidden layers or increasing backpropagation extent increases the rate of decrease of approximation error. A Dynamic Programming (DP) training procedure employing matrix operations is derived. DP and use of matrix operations makes the procedure efficient particularly when using data-parallel computing. The simulation studies show the geometry of the parameter space, that the network learns the temporal structure, that parameters converge while model output displays same dynamic behavior as the system and greater than .99 Average Detection Rate on all real seizure data tried.",
"title": ""
},
{
"docid": "196ddcefb2c3fcb6edd5e8d108f7e219",
"text": "This paper may be considered as a practical reference for those who wish to add (now sufficiently matured) Agent Based modeling to their analysis toolkit and may or may not have some System Dynamics or Discrete Event modeling background. We focus on systems that contain large numbers of active objects (people, business units, animals, vehicles, or even things like projects, stocks, products, etc. that have timing, event ordering or other kind of individual behavior associated with them). We compare the three major paradigms in simulation modeling: System Dynamics, Discrete Event and Agent Based Modeling with respect to how they approach such systems. We show in detail how an Agent Based model can be built from an existing System Dynamics or a Discrete Event model and then show how easily it can be further enhanced to capture much more complicated behavior, dependencies and interactions thus providing for deeper insight in the system being modeled. Commonly understood examples are used throughout the paper; all models are specified in the visual language supported by AnyLogic tool. We view and present Agent Based modeling not as a substitution to older modeling paradigms but as a useful add-on that can be efficiently combined with System Dynamics and Discrete Event modeling. Several multi-paradigm model architectures are suggested.",
"title": ""
},
{
"docid": "b0cc7d5313acaa47eb9cba9e830fa9af",
"text": "Data-driven intelligent transportation systems utilize data resources generated within intelligent systems to improve the performance of transportation systems and provide convenient and reliable services. Traffic data refer to datasets generated and collected on moving vehicles and objects. Data visualization is an efficient means to represent distributions and structures of datasets and reveal hidden patterns in the data. This paper introduces the basic concept and pipeline of traffic data visualization, provides an overview of related data processing techniques, and summarizes existing methods for depicting the temporal, spatial, numerical, and categorical properties of traffic data.",
"title": ""
},
{
"docid": "f4b270b09649ba05dd22d681a2e3e3b7",
"text": "Advanced analytical techniques are gaining popularity in addressing complex classification type decision problems in many fields including healthcare and medicine. In this exemplary study, using digitized signal data, we developed predictive models employing three machine learning methods to diagnose an asthma patient based solely on the sounds acquired from the chest of the patient in a clinical laboratory. Although, the performances varied slightly, ensemble models (i.e., Random Forest and AdaBoost combined with Random Forest) achieved about 90% accuracy on predicting asthma patients, compared to artificial neural networks models that achieved about 80% predictive accuracy. Our results show that noninvasive, computerized lung sound analysis that rely on low-cost microphones and an embedded real-time microprocessor system would help physicians to make faster and better diagnostic decisions, especially in situations where x-ray and CT-scans are not reachable or not available. This study is a testament to the improving capabilities of analytic techniques in support of better decision making, especially in situations constraint by limited resources.",
"title": ""
},
{
"docid": "b75f793f4feac0b658437026d98a1e8b",
"text": "From a certain (admittedly narrow) perspective, one of the annoying features of natural language is the ubiquitous syntactic ambiguity. For a computational model intended to assign syntactic descriptions to natural language text, this seem like a design defect. In general, when context and lexical content are taken into account, such syntactic ambiguity can be resolved: sentences used in context show, for the most part, little ambiguity. But the grammar provides many alternative analyses, and gives little guidance about resolving the ambiguity. Prepositional phrase attachment is the canonical case of structural ambiguity, as in the time worn example,",
"title": ""
},
{
"docid": "d63543712b2bebfbd0ded148225bb289",
"text": "This paper surveys recent literature in the area of Neural Network, Data Mining, Hidden Markov Model and Neuro-Fuzzy system used to predict the stock market fluctuation. Neural Networks and Neuro-Fuzzy systems are identified to be the leading machine learning techniques in stock market index prediction area. The Traditional techniques are not cover all the possible relation of the stock price fluctuations. There are new approaches to known in-depth of an analysis of stock price variations. NN and Markov Model can be used exclusively in the finance markets and forecasting of stock price. In this paper, we propose a forecasting method to provide better an accuracy rather traditional method. Forecasting stock return is an important financial subject that has attracted researchers’ attention for many years. It involves an assumption that fundamental information publicly available in the past has some predictive relationships to the future stock returns.",
"title": ""
},
{
"docid": "41261cf72d8ee3bca4b05978b07c1c4f",
"text": "The association of Sturge-Weber syndrome with naevus of Ota is an infrequently reported phenomenon and there are only four previously described cases in the literature. In this paper we briefly review the literature regarding the coexistence of vascular and pigmentary naevi and present an additional patient with the association of the Sturge-Weber syndrome and naevus of Ota.",
"title": ""
},
{
"docid": "f741eb8ca9fb9798fb89674a0e045de9",
"text": "We investigate the issue of model uncertainty in cross-country growth regressions using Bayesian Model Averaging (BMA). We find that the posterior probability is very spread among many models suggesting the superiority of BMA over choosing any single model. Out-of-sample predictive results support this claim. In contrast with Levine and Renelt (1992), our results broadly support the more “optimistic” conclusion of Sala-i-Martin (1997b), namely that some variables are important regressors for explaining cross-country growth patterns. However, care should be taken in the methodology employed. The approach proposed here is firmly grounded in statistical theory and immediately leads to posterior and predictive inference.",
"title": ""
},
{
"docid": "516f4b7bea87fad16b774a7f037efaec",
"text": "BACKGROUND\nOperating rooms (ORs) are resource-intense and costly hospital units. Maximizing OR efficiency is essential to maintaining an economically viable institution. OR efficiency projects often focus on a limited number of ORs or cases. Efforts across an entire OR suite have not been reported. Lean and Six Sigma methodologies were developed in the manufacturing industry to increase efficiency by eliminating non-value-added steps. We applied Lean and Six Sigma methodologies across an entire surgical suite to improve efficiency.\n\n\nSTUDY DESIGN\nA multidisciplinary surgical process improvement team constructed a value stream map of the entire surgical process from the decision for surgery to discharge. Each process step was analyzed in 3 domains, ie, personnel, information processed, and time. Multidisciplinary teams addressed 5 work streams to increase value at each step: minimizing volume variation; streamlining the preoperative process; reducing nonoperative time; eliminating redundant information; and promoting employee engagement. Process improvements were implemented sequentially in surgical specialties. Key performance metrics were collected before and after implementation.\n\n\nRESULTS\nAcross 3 surgical specialties, process redesign resulted in substantial improvements in on-time starts and reduction in number of cases past 5 pm. Substantial gains were achieved in nonoperative time, staff overtime, and ORs saved. These changes resulted in substantial increases in margin/OR/day.\n\n\nCONCLUSIONS\nUse of Lean and Six Sigma methodologies increased OR efficiency and financial performance across an entire operating suite. Process mapping, leadership support, staff engagement, and sharing performance metrics are keys to enhancing OR efficiency. The performance gains were substantial, sustainable, positive financially, and transferrable to other specialties.",
"title": ""
},
{
"docid": "9098d40a9e16a1bd1ed0a9edd96f3258",
"text": "The filter bank multicarrier with offset quadrature amplitude modulation (FBMC/OQAM) is being studied by many researchers as a key enabler for the fifth-generation air interface. In this paper, a hybrid peak-to-average power ratio (PAPR) reduction scheme is proposed for FBMC/OQAM signals by utilizing multi data block partial transmit sequence (PTS) and tone reservation (TR). In the hybrid PTS-TR scheme, the data blocks signal is divided into several segments, and the number of data blocks in each segment is determined by the overlapping factor. In each segment, we select the optimal data block to transmit and jointly consider the adjacent overlapped data block to achieve minimum signal power. Then, the peak reduction tones are utilized to cancel the peaks of the segment FBMC/OQAM signals. Simulation results and analysis show that the proposed hybrid PTS-TR scheme could provide better PAPR reduction than conventional PTS and TR schemes in FBMC/OQAM systems. Furthermore, we propose another multi data block hybrid PTS-TR scheme by exploiting the adjacent multi overlapped data blocks, called as the multi hybrid (M-hybrid) scheme. Simulation results show that the M-hybrid scheme can achieve about 0.2-dB PAPR performance better than the hybrid PTS-TR scheme.",
"title": ""
},
{
"docid": "be3e02812e35000b39e4608afc61f229",
"text": "The growing use of control access systems based on face recognition shed light over the need for even more accurate systems to detect face spoofing attacks. In this paper, an extensive analysis on face spoofing detection works published in the last decade is presented. The analyzed works are categorized by their fundamental parts, i.e., descriptors and classifiers. This structured survey also brings a comparative performance analysis of the works considering the most important public data sets in the field. The methodology followed in this work is particularly relevant to observe temporal evolution of the field, trends in the existing approaches, Corresponding author: Luciano Oliveira, tel. +55 71 3283-9472 Email addresses: luiz.otavio@ufba.br (Luiz Souza), lrebouca@ufba.br (Luciano Oliveira), mauricio@dcc.ufba.br (Mauricio Pamplona), papa@fc.unesp.br (Joao Papa) to discuss still opened issues, and to propose new perspectives for the future of face spoofing detection.",
"title": ""
},
{
"docid": "91c937ddfcf7aa0957e1c9a997149f87",
"text": "Generative adversarial training can be generally understood as minimizing certain moment matching loss defined by a set of discriminator functions, typically neural networks. The discriminator set should be large enough to be able to uniquely identify the true distribution (discriminative), and also be small enough to go beyond memorizing samples (generalizable). In this paper, we show that a discriminator set is guaranteed to be discriminative whenever its linear span is dense in the set of bounded continuous functions. This is a very mild condition satisfied even by neural networks with a single neuron. Further, we develop generalization bounds between the learned distribution and true distribution under different evaluation metrics. When evaluated with neural distance, our bounds show that generalization is guaranteed as long as the discriminator set is small enough, regardless of the size of the generator or hypothesis set. When evaluated with KL divergence, our bound provides an explanation on the counter-intuitive behaviors of testing likelihood in GAN training. Our analysis sheds lights on understanding the practical performance of GANs.",
"title": ""
},
{
"docid": "976064ba00f4eb2020199f264d29dae2",
"text": "Social network analysis is a large and growing body of research on the measurement and analysis of relational structure. Here, we review the fundamental concepts of network analysis, as well as a range of methods currently used in the field. Issues pertaining to data collection, analysis of single networks, network comparison, and analysis of individual-level covariates are discussed, and a number of suggestions are made for avoiding common pitfalls in the application of network methods to substantive questions.",
"title": ""
},
{
"docid": "6b5bde39af1260effa0587d8c6afa418",
"text": "This survey highlights the major issues concerning privacy and security in online social networks. Firstly, we discuss research that aims to protect user data from the various attack vantage points including other users, advertisers, third party application developers, and the online social network provider itself. Next we cover social network inference of user attributes, locating hubs, and link prediction. Because online social networks are so saturated with sensitive information, network inference plays a major privacy role. As a response to the issues brought forth by client-server architectures, distributed social networks are discussed. We then cover the challenges that providers face in maintaining the proper operation of an online social network including minimizing spam messages, and reducing the number of sybil accounts. Finally, we present research in anonymizing social network data. This area is of particular interest in order to continue research in this field both in academia and in industry.",
"title": ""
},
{
"docid": "01b147cb417ceedf40dadcb3ee31a1b2",
"text": "BACKGROUND\nPurposeful and timely rounding is a best practice intervention to routinely meet patient care needs, ensure patient safety, decrease the occurrence of patient preventable events, and proactively address problems before they occur. The Institute for Healthcare Improvement (IHI) endorsed hourly rounding as the best way to reduce call lights and fall injuries, and increase both quality of care and patient satisfaction. Nurse knowledge regarding purposeful rounding and infrastructure supporting timeliness are essential components for consistency with this patient centred practice.\n\n\nOBJECTIVES\nThe project aimed to improve patient satisfaction and safety through implementation of purposeful and timely nursing rounds. Goals for patient satisfaction scores and fall volume were set. Specific objectives were to determine current compliance with evidence-based criteria related to rounding times and protocols, improve best practice knowledge among staff nurses, and increase compliance with these criteria.\n\n\nMETHODS\nFor the objectives of this project the Joanna Briggs Institute's Practical Application of Clinical Evidence System and Getting Research into Practice audit tool were used. Direct observation of staff nurses on a medical surgical unit in the United States was employed to assess timeliness and utilization of a protocol when rounding. Interventions were developed in response to baseline audit results. A follow-up audit was conducted to determine compliance with the same criteria. For the project aims, pre- and post-intervention unit-level data related to nursing-sensitive elements of patient satisfaction and safety were compared.\n\n\nRESULTS\nRounding frequency at specified intervals during awake and sleeping hours nearly doubled. Use of a rounding protocol increased substantially to 64% compliance from zero. Three elements of patient satisfaction had substantive rate increases but the hospital's goals were not reached. Nurse communication and pain management scores increased modestly (5% and 11%, respectively). Responsiveness of hospital staff increased moderately (15%) with a significant sub-element increase in toileting (41%). Patient falls decreased by 50%.\n\n\nCONCLUSIONS\nNurses have the ability to improve patient satisfaction and patient safety outcomes by utilizing nursing round interventions which serve to improve patient communication and staff responsiveness. Having a supportive infrastructure and an organized approach, encompassing all levels of staff, to meet patient needs during their hospital stay was a key factor for success. Hard-wiring of new practices related to workflow takes time as staff embrace change and understand how best practice interventions significantly improve patient outcomes.",
"title": ""
},
{
"docid": "ec681bc427c66adfad79008840ea9b60",
"text": "With the rapid development of the Computer Science and Technology, It has become a major problem for the users that how to quickly find useful or needed information. Text categorization can help people to solve this question. The feature selection method has become one of the most critical techniques in the field of the text automatic categorization. A new method of the text feature selection based on Information Gain and Genetic Algorithm is proposed in this paper. This method chooses the feature based on information gain with the frequency of items. Meanwhile, for the information filtering systems, this method has been improved fitness function to fully consider the characteristics of weight, text and vector similarity dimension, etc. The experiment has proved that the method can reduce the dimension of text vector and improve the precision of text classification.",
"title": ""
},
{
"docid": "1733a6f167e7e13bc816b7fc546e19e3",
"text": "As many other machine learning driven medical image analysis tasks, skin image analysis suffers from a chronic lack of labeled data and skewed class distributions, which poses problems for the training of robust and well-generalizing models. The ability to synthesize realistic looking images of skin lesions could act as a reliever for the aforementioned problems. Generative Adversarial Networks (GANs) have been successfully used to synthesize realistically looking medical images, however limited to low resolution, whereas machine learning models for challenging tasks such as skin lesion segmentation or classification benefit from much higher resolution data. In this work, we successfully synthesize realistically looking images of skin lesions with GANs at such high resolution. Therefore, we utilize the concept of progressive growing, which we both quantitatively and qualitatively compare to other GAN architectures such as the DCGAN and the LAPGAN. Our results show that with the help of progressive growing, we can synthesize highly realistic dermoscopic images of skin lesions that even expert dermatologists find hard to distinguish from real ones.",
"title": ""
}
] |
scidocsrr
|
f9dc589582b65f643fabb368c5f0d0c6
|
Low-power FPGA using pre-defined dual-Vdd/dual-Vt fabrics
|
[
{
"docid": "37a7cd907529af8e5b384a6d73ea5be2",
"text": "This paper presents a flexible FPGA architecture evaluation framework, named fpgaEVA-LP, for power efficiency analysis of LUT-based FPGA architectures. Our work has several contributions: (i) We develop a mixed-level FPGA power model that combines switch-level models for interconnects and macromodels for LUTs; (ii) We develop a tool that automatically generates a back-annotated gate-level netlist with post-layout extracted capacitances and delays; (iii) We develop a cycle-accurate power simulator based on our power model. It carries out gate-level simulation under real delay model and is able to capture glitch power; (iv) Using the framework fpgaEVA-LP, we study the power efficiency of FPGAs, in 0.10um technology, under various settings of architecture parameters such as LUT sizes, cluster sizes and wire segmentation schemes and reach several important conclusions. We also present the detailed power consumption distribution among different FPGA components and shed light on the potential opportunities of power optimization for future FPGA designs (e.g., ≤: 0.10um technology).",
"title": ""
}
] |
[
{
"docid": "6eb8e1a391398788d9b4be294b8a70d1",
"text": "To improve software quality, researchers and practitioners have proposed static analysis tools for various purposes (e.g., detecting bugs, anomalies, and vulnerabilities). Although many such tools are powerful, they typically need complete programs where all the code names (e.g., class names, method names) are resolved. In many scenarios, researchers have to analyze partial programs in bug fixes (the revised source files can be viewed as a partial program), tutorials, and code search results. As a partial program is a subset of a complete program, many code names in partial programs are unknown. As a result, despite their syntactical correctness, existing complete-code tools cannot analyze partial programs, and existing partial-code tools are limited in both their number and analysis capability. Instead of proposing another tool for analyzing partial programs, we propose a general approach, called GRAPA, that boosts existing tools for complete programs to analyze partial programs. Our major insight is that after unknown code names are resolved, tools for complete programs can analyze partial programs with minor modifications. In particular, GRAPA locates Java archive files to resolve unknown code names, and resolves the remaining unknown code names from resolved code names. To illustrate GRAPA, we implement a tool that leverages the state-of-the-art tool, WALA, to analyze Java partial programs. We thus implemented the first tool that is able to build system dependency graphs for partial programs, complementing existing tools. We conduct an evaluation on 8,198 partial-code commits from four popular open source projects. Our results show that GRAPA fully resolved unknown code names for 98.5% bug fixes, with an accuracy of 96.1% in total. Furthermore, our results show the significance of GRAPA's internal techniques, which provides insights on how to integrate with more complete-code tools to analyze partial programs.",
"title": ""
},
{
"docid": "79a22c3ad6845d469fc09f2b3ac52027",
"text": "Locking devices are widely used in robotics, for instance to lock springs, joints or to reconfigure robots. This review paper classifies the locking devices currently described in literature and preforms a comparative study. Designers can therefore better determine which locking device best matches the needs of their application. The locking devices are divided into three main categories based on different locking principles: mechanical locking, friction-based locking and singularity locking. Different locking devices in each category can be passive or active. Based on an extensive literature survey, the paper summarizes the findings by comparing different locking devices on a set of properties of an ideal locking device.",
"title": ""
},
{
"docid": "6a1605d41154f2bea869f2f89600c886",
"text": "Compositionality of semantic concepts in image synthesis and analysis is appealing as it can help in decomposing known and generatively recomposing unknown data. For instance, we may learn concepts of changing illumination, geometry or albedo of a scene, and try to recombine them to generate physically meaningful, but unseen data for training and testing. In practice however we often do not have samples from the joint concept space available: We may have data on illumination change in one data set and on geometric change in another one without complete overlap. We pose the following question: How can we learn two or more concepts jointly from different data sets with mutual consistency where we do not have samples from the full joint space? We present a novel answer in this paper based on cyclic consistency over multiple concepts, represented individually by generative adversarial networks (GANs). Our method, ConceptGAN, can be understood as a drop in for data augmentation to improve resilience for real world applications. Qualitative and quantitative evaluations demonstrate its efficacy in generating semantically meaningful images, as well as one shot face verification as an example application.",
"title": ""
},
{
"docid": "f636dece7889f998fa10c19736d90a9a",
"text": "Our use of language depends upon two capacities: a mental lexicon of memorized words and a mental grammar of rules that underlie the sequential and hierarchical composition of lexical forms into predictably structured larger words, phrases, and sentences. The declarative/procedural model posits that the lexicon/grammar distinction in language is tied to the distinction between two well-studied brain memory systems. On this view, the memorization and use of at least simple words (those with noncompositional, that is, arbitrary form-meaning pairings) depends upon an associative memory of distributed representations that is subserved by temporal-lobe circuits previously implicated in the learning and use of fact and event knowledge. This \"declarative memory\" system appears to be specialized for learning arbitrarily related information (i.e., for associative binding). In contrast, the acquisition and use of grammatical rules that underlie symbol manipulation is subserved by frontal/basal-ganglia circuits previously implicated in the implicit (nonconscious) learning and expression of motor and cognitive \"skills\" and \"habits\" (e.g., from simple motor acts to skilled game playing). This \"procedural\" system may be specialized for computing sequences. This novel view of lexicon and grammar offers an alternative to the two main competing theoretical frameworks. It shares the perspective of traditional dual-mechanism theories in positing that the mental lexicon and a symbol-manipulating mental grammar are subserved by distinct computational components that may be linked to distinct brain structures. However, it diverges from these theories where they assume components dedicated to each of the two language capacities (that is, domain-specific) and in their common assumption that lexical memory is a rote list of items. Conversely, while it shares with single-mechanism theories the perspective that the two capacities are subserved by domain-independent computational mechanisms, it diverges from them where they link both capacities to a single associative memory system with broad anatomic distribution. The declarative/procedural model, but neither traditional dual- nor single-mechanism models, predicts double dissociations between lexicon and grammar, with associations among associative memory properties, memorized words and facts, and temporal-lobe structures, and among symbol-manipulation properties, grammatical rule products, motor skills, and frontal/basal-ganglia structures. In order to contrast lexicon and grammar while holding other factors constant, we have focused our investigations of the declarative/procedural model on morphologically complex word forms. Morphological transformations that are (largely) unproductive (e.g., in go-went, solemn-solemnity) are hypothesized to depend upon declarative memory. These have been contrasted with morphological transformations that are fully productive (e.g., in walk-walked, happy-happiness), whose computation is posited to be solely dependent upon grammatical rules subserved by the procedural system. Here evidence is presented from studies that use a range of psycholinguistic and neurolinguistic approaches with children and adults. It is argued that converging evidence from these studies supports the declarative/procedural model of lexicon and grammar.",
"title": ""
},
{
"docid": "6fb56c21bab6cf8facdf8e286c3739ed",
"text": "This paper presents an application for counting people through a single fixed camera. This system performs the count distinction between input and output of people moving through the supervised area. The counter requires two steps: detection and tracking. The detection is based on finding people's heads through preprocessed image correlation with several circular patterns. Tracking is made through the application of a Kalman filter to determine the trajectory of the candidates. Finally, the system updates the counters based on the direction of the trajectories. Different tests using a set of real video sequences taken from different indoor areas give results ranging between 87% and 98% accuracies depending on the volume of flow of people crossing the counting zone. Problematic situations, such as occlusions, people grouped in different ways, scene luminance changes, etc., were used to validate the performance of the system.",
"title": ""
},
{
"docid": "42df26884900e1dcba492e2538b66197",
"text": "This paper presents a survey of applications of design optimization to helicopter problems carried out in the last decade. Helicopter optimization has not yet reached the same maturity as structural optimization, and some potential reasons are discussed first. Next, published optimization studies are reviewed, divided into sensitivity analysis studies, applications to rotor dynamics and aeroelasticity, applications to other helicopter problems, emerging technologies such as genetic algorithms and simulated annealing, and studies with experimental verification. The usefulness of design optimization for helicopter applications will increase with the availability of improved analyses, more efficient algorithms especially for sensitivity calculations, and further experimental verifications. The formulation of representative optimization test problem, including experimental verification, is also recommended.",
"title": ""
},
{
"docid": "2742db8262616f2b69d92e0066e6930c",
"text": "Most of previous work in knowledge base (KB) completion has focused on the problem of relation extraction. In this work, we focus on the task of inferring missing entity type instances in a KB, a fundamental task for KB competition yet receives little attention. Due to the novelty of this task, we construct a large-scale dataset and design an automatic evaluation methodology. Our knowledge base completion method uses information within the existing KB and external information from Wikipedia. We show that individual methods trained with a global objective that considers unobserved cells from both the entity and the type side gives consistently higher quality predictions compared to baseline methods. We also perform manual evaluation on a small subset of the data to verify the effectiveness of our knowledge base completion methods and the correctness of our proposed automatic evaluation method.",
"title": ""
},
{
"docid": "baa0bf8fe429c4fe8bfb7ebf78a1ed94",
"text": "The weakly supervised object localization (WSOL) is to locate the objects in an image while only image-level labels are available during the training procedure. In this work, the Selective Feature Category Mapping (SFCM) method is proposed, which introduces the Feature Category Mapping (FCM) and the widely-used selective search method to solve the WSOL task. Our FCM replaces layers after the specific layer in the state-of-the-art CNNs with a set of kernels and learns the weighted pooling for previous feature maps. It is trained with only image-level labels and then map the feature maps to their corresponding categories in the test phase. Together with selective search method, the location of each object is finally obtained. Extensive experimental evaluation on ILSVRC2012 and PASCAL VOC2007 benchmarks shows that SFCM is simple but very effective, and it is able to achieve outstanding classification performance and outperform the state-of-the-art methods in the WSOL task.",
"title": ""
},
{
"docid": "6841f2fb1dbe8246f184affed49fe6c3",
"text": "Instructional designers and educators recognize the potential of mobile technologies as a learning tool for students and have incorporated them into the distance learning environment. However, little research has been done to categorize the numerous examples of mobile learning in the context of distance education, and few instructional design guidelines based on a solid theoretical framework for mobile learning exist. In this paper I compare mobile learning (m-learning) with electronic learning (e-learning) and ubiquitous learning (u-learning) and describe the technological attributes and pedagogical affordances of mobile learning presented in previous studies. I modify transactional distance (TD) theory and adopt it as a relevant theoretical framework for mobile learning in distance education. Furthermore, I attempt to position previous studies into four types of mobile learning: 1) high transactional distance socialized m-learning, 2) high transactional distance individualized m-learning, 3) low transactional distance socialized mlearning and 4) low transactional distance individualized m-learning. As a result, this paper can be used by instructional designers of open and distance learning to learn about the concepts of mobile learning and how mobile technologies can be incorporated into their teaching and learning more effectively.",
"title": ""
},
{
"docid": "6b19893324e4012a622c0250436e1ab3",
"text": "Nowadays, email is one of the fastest ways to conduct communications through sending out information and attachments from one to another. Individuals and organizations are all benefit the convenience from email usage, but at the same time they may also suffer the unexpected user experience of receiving spam email all the time. Spammers flood the email servers and send out mass quantity of unsolicited email to the end users. From a business perspective, email users have to spend time on deleting received spam email which definitely leads to the productivity decrease and cause potential loss for organizations. Thus, how to detect the email spam effectively and efficiently with high accuracy becomes a significant study. In this study, data mining will be utilized to process machine learning by using different classifiers for training and testing and filters for data preprocessing and feature selection. It aims to seek out the optimal hybrid model with higher accuracy or base on other metric’s evaluation. The experiment results show accuracy improvement in email spam detection by using hybrid techniques compared to the single classifiers used in this research. The optimal hybrid model provides 93.00% of accuracy and 7.80% false positive rate for email spam detection.",
"title": ""
},
{
"docid": "cfa8e5af1a37c96617164ea319dba4a5",
"text": "In 2011, the FIGO classification system (PALM-COEIN) was published to standardize terminology, diagnostic and investigations of causes of abnormal uterine bleeding (AUB). According to FIGO new classification, in the absence of structural etiology, the formerly called \"dysfunctional uterine bleeding\" should be avoided and clinicians should state if AUB are caused by coagulation disorders (AUB-C), ovulation disorder (AUB-O), or endometrial primary dysfunction (AUB-E). Since this publication, some societies have released or revised their guidelines for the diagnosis and the management of the formerly called \"dysfunctional uterine bleeding\" according new FIGO classification. In this review, we summarize the most relevant new guidelines for the diagnosis and the management of AUB-C, AUB-O, and AUB-E.",
"title": ""
},
{
"docid": "94a646f32c4cd392f748887d1163bf51",
"text": "Article history: Received 10 February 2009 Received in revised form 7 March 2010 Accepted 10 March 2010 Available online 20 March 2010",
"title": ""
},
{
"docid": "621980ace49ca03f0a230b170c005208",
"text": "The context-free language (CFL) reachability problem is a well-known fundamental formulation in program analysis. In practice, many program analyses, especially pointer analyses, adopt a restricted version of CFL-reachability, Dyck-CFL-reachability, and compute on edge-labeled bidirected graphs. Solving the all-pairs Dyck-CFL-reachability on such bidirected graphs is expensive. For a bidirected graph with n nodes and m edges, the traditional dynamic programming style algorithm exhibits a subcubic time complexity for the Dyck language with k kinds of parentheses. When the underlying graphs are restricted to bidirected trees, an algorithm with O(n log n log k) time complexity was proposed recently. This paper studies the Dyck-CFL-reachability problems on bidirected trees and graphs. In particular, it presents two fast algorithms with O(n) and O(n + m log m) time complexities on trees and graphs respectively. We have implemented and evaluated our algorithms on a state-of-the-art alias analysis for Java. Results on standard benchmarks show that our algorithms achieve orders of magnitude speedup and consume less memory.",
"title": ""
},
{
"docid": "f48f55963cf3beb43170df96a463feba",
"text": "This article proposes and implements a class of chaotic motors for electric compaction. The key is to develop a design approach for the permanent magnets PMs of doubly salient PM DSPM motors in such a way that chaotic motion can be naturally produced. The bifurcation diagram is employed to derive the threshold of chaoization in terms of PM flux, while the corresponding phase-plane trajectories are used to characterize the chaotic motion. A practical three-phase 12/8-pole DSPM motor is used for exemplification. The proposed chaotic motor is critically assessed for application to a vibratory soil compactor, which is proven to offer better compaction performance than its counterparts. Both computer simulation and experimental results are given to illustrate the proposed chaotic motor. © 2006 American Institute of Physics. DOI: 10.1063/1.2165783",
"title": ""
},
{
"docid": "06b9f83845f3125272115894676b5e5d",
"text": "For aligning DNA sequences that differ only by sequencing errors, or by equivalent errors from other sources, a greedy algorithm can be much faster than traditional dynamic programming approaches and yet produce an alignment that is guaranteed to be theoretically optimal. We introduce a new greedy alignment algorithm with particularly good performance and show that it computes the same alignment as does a certain dynamic programming algorithm, while executing over 10 times faster on appropriate data. An implementation of this algorithm is currently used in a program that assembles the UniGene database at the National Center for Biotechnology Information.",
"title": ""
},
{
"docid": "04b7d1197e9e5d78e948e0c30cbdfcfe",
"text": "Context: Software development depends significantly on team performance, as does any process that involves human interaction. Objective: Most current development methods argue that teams should self-manage. Our objective is thus to provide a better understanding of the nature of self-managing agile teams, and the teamwork challenges that arise when introducing such teams. Method: We conducted extensive fieldwork for 9 months in a software development company that introduced Scrum. We focused on the human sensemaking, on how mechanisms of teamwork were understood by the people involved. Results: We describe a project through Dickinson and McIntyre’s teamwork model, focusing on the interrelations between essential teamwork components. Problems with team orientation, team leadership and coordination in addition to highly specialized skills and corresponding division of work were important barriers for achieving team effectiveness. Conclusion: Transitioning from individual work to self-managing teams requires a reorientation not only by developers but also by management. This transition takes time and resources, but should not be neglected. In addition to Dickinson and McIntyre’s teamwork components, we found trust and shared mental models to be of fundamental importance. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "178619e57d1f828448ad663045edd666",
"text": "Diabetes mellitus is a global disease that is increasing in an alarming rate. The present study was undertaken to study the antidiabetic effect of the ethanol extracts of Carica papaya and Pandanus amaryfollius on streptozotocin-induced diabetic mice. The results of the present study indicated that there was no significant difference in the body weight of the treated groups when compared to diabetic control. Whereas, there was significant (P < 0.05) decrease in the blood glucose level of the plant-treated groups compared to the diabetic control. Histologically the pancreas of the treated groups indicated significant regeneration of the β-cells when compared to the diabetic control. The liver tissues of the treated group indicated a reduction in fatty changes and pyknotic nucleus. The kidney tissues of the treated groups indicated significant recovery in the cuboidal tissue. The results from the phytochemical screening indicated the presence of flavonoids, alkaloids, saponin and tannin in C. papaya and P. amaryfollius. The antidiabetic effect of C. papaya and P. amaryfollius observed in the present study may be due to the presence of these phytochemicals.",
"title": ""
},
{
"docid": "581e180cd470a806eb17168f83c1db86",
"text": "There has been recent interest on designing double error correction (DEC) codes for 32-bit data words that support fast decoding as they can be useful to protect memories. To that end, solutions based on orthogonal Latin square codes have been recently presented that achieve fast decoding but require a large number of parity check bits. In this letter, a DEC code derived from difference set codes is presented. The proposed code is able to reduce the number of parity check bits needed at the cost of a slightly more complex decoding. Therefore, it provides memory designers with an additional option that can be useful when making trade-offs between memory size and speed.",
"title": ""
},
{
"docid": "c9011e071aa5d50985e50f13883c1960",
"text": "ANALYSIS OF THE SALES CHECKOUT OPERATION IN ICA SUPERMARKET USING QUEUING SIMULATION",
"title": ""
},
{
"docid": "4e8ab63a4b7fe9f78c89046628237d4d",
"text": "Modeling the structure of coherent texts is a key NLP problem. The task of coherently organizing a given set of sentences has been commonly used to build and evaluate models that understand such structure. We propose an end-to-end unsupervised deep learning approach based on the set-to-sequence framework to address this problem. Our model strongly outperforms prior methods in the order discrimination task and a novel task of ordering abstracts from scientific articles. Furthermore, our work shows that useful text representations can be obtained by learning to order sentences. Visualizing the learned sentence representations shows that the model captures high-level logical structure in paragraphs. Our representations perform comparably to state-of-the-art pre-training methods on sentence similarity and paraphrase detection tasks.",
"title": ""
}
] |
scidocsrr
|
91cefa0057de61a06d353ffeb8921304
|
Compression By Induction of Hierarchical Grammars
|
[
{
"docid": "bbf581230ec60c2402651d51e3a37211",
"text": "The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.",
"title": ""
},
{
"docid": "105951b58d594fdb3a07e1adbb76dc5f",
"text": "The “Prediction by Partial Matching” (PPM) data compression algorithm developed by Cleary and Witten is capable of very high compression rates, encoding English text in as little as 2.2 bits/character. Here it is shown that the estimates made by Cleary and Witten of the resources required to implement the scheme can be revised to allow for a tractable and useful implementation. In particular, a variant is described that encodes and decodes at over 4 kbytes/s on a small workstation, and operates within a few hundred kilobytes of data space, but still obtains compression of about 2.4 bits/character on",
"title": ""
}
] |
[
{
"docid": "a94d8b425aed0ade657aa1091015e529",
"text": "Generative models for source code are an interesting structured prediction problem, requiring to reason about both hard syntactic and semantic constraints as well as about natural, likely programs. We present a novel model for this problem that uses a graph to represent the intermediate state of the generated output. Our model generates code by interleaving grammar-driven expansion steps with graph augmentation and neural message passing steps. An experimental evaluation shows that our new model can generate semantically meaningful expressions, outperforming a range of strong baselines.",
"title": ""
},
{
"docid": "149d76dfaa019b965965062645e4845d",
"text": "In this paper we provide a detailed and comprehensive survey of proposed approaches for network design, charting the evolution of models and techniques for the automatic planning of cellular wireless services. These problems present themselves as a trade-off between commitment to infrastructure and quality of service, and have become increasingly complex with the advent of more sophisticated protocols and wireless architectures. Consequently these problems are receiving increased attention from researchers in a variety of fields who adopt a wide range of models, assumptions and methodologies for problem solution. We seek to unify this dispersed and fragmented literature by charting the evolution of centralised planning for cellular systems.",
"title": ""
},
{
"docid": "a5f17126a90b45921f70439ff96a0091",
"text": "Successful methods for visual object recognition typically rely on training datasets containing lots of richly annotated images. Detailed image annotation, e.g. by object bounding boxes, however, is both expensive and often subjective. We describe a weakly supervised convolutional neural network (CNN) for object classification that relies only on image-level labels, yet can learn from cluttered scenes containing multiple objects. We quantify its object classification and object location prediction performance on the Pascal VOC 2012 (20 object classes) and the much larger Microsoft COCO (80 object classes) datasets. We find that the network (i) outputs accurate image-level labels, (ii) predicts approximate locations (but not extents) of objects, and (iii) performs comparably to its fully-supervised counterparts using object bounding box annotation for training.",
"title": ""
},
{
"docid": "e1ada58b1ae0e92f12d4fb049de5a4bb",
"text": "We propose a perspective on knowledge compilation which calls for analyzing different compilation approaches according to two key dimensions: the succinctness of the target compilation language, and the class of queries and transformations that the language supports in polytime. We then provide a knowledge compilation map, which analyzes a large number of existing target compilation languages according to their succinctness and their polytime transformations and queries. We argue that such analysis is necessary for placing new compilation approaches within the context of existing ones. We also go beyond classical, flat target compilation languages based on CNF and DNF, and consider a richer, nested class based on directed acyclic graphs (such as OBDDs), which we show to include a relatively large number of target compilation languages.",
"title": ""
},
{
"docid": "2e32d668383eaaed096aa2e34a10d8e9",
"text": "Splicing and copy-move are two well known methods of passive image forgery. In this paper, splicing and copy-move forgery detection are performed simultaneously on the same database CASIA v1.0 and CASIA v2.0. Initially, a suspicious image is taken and features are extracted through BDCT and enhanced threshold method. The proposed technique decides whether the given image is manipulated or not. If it is manipulated then support vector machine (SVM) classify that the given image is gone through splicing forgery or copy-move forgery. For copy-move detection, ZM-polar (Zernike Moment) is used to locate the duplicated regions in image. Experimental results depict the performance of the proposed method.",
"title": ""
},
{
"docid": "614cc9968370bffb32cf70f44c8f8688",
"text": "The abundance of event data in today’s information systems makes it possible to “confront” process models with the actual observed behavior. Process mining techniques use event logs to discover process models that describe the observed behavior, and to check conformance of process models by diagnosing deviations between models and reality. In many situations, it is desirable to mediate between a preexisting model and observed behavior. Hence, we would like to repair the model while improving the correspondence between model and log as much as possible. The approach presented in this article assigns predefined costs to repair actions (allowing inserting or skipping of activities). Given a maximum degree of change, we search for models that are optimal in terms of fitness—that is, the fraction of behavior in the log not possible according to the model is minimized. To compute fitness, we need to align the model and log, which can be time consuming. Hence, finding an optimal repair may be intractable. We propose different alternative approaches to speed up repair. The number of alignment computations can be reduced dramatically while still returning near-optimal repairs. The different approaches have been implemented using the process mining framework ProM and evaluated using real-life logs.",
"title": ""
},
{
"docid": "948b157586c75674e75bd50b96162861",
"text": "We propose a database design methodology for NoSQL systems. The approach is based on NoAM (NoSQL Abstract Model), a novel abstrac t d ta model for NoSQL databases, which exploits the commonalities of various N SQL systems and is used to specify a system-independent representatio n of the application data. This intermediate representation can be then implemented in target NoSQL databases, taking into account their specific features. Ov rall, the methodology aims at supporting scalability, performance, and consisten cy, as needed by next-generation web applications.",
"title": ""
},
{
"docid": "adf6ac64c2c1af405e9500ce1ea35cf2",
"text": "Mining detailed opinions buried in the vast amount of review text data is an important, yet quite challenging task with widespread applications in multiple domains. Latent Aspect Rating Analysis (LARA) refers to the task of inferring both opinion ratings on topical aspects (e.g., location, service of a hotel) and the relative weights reviewers have placed on each aspect based on review content and the associated overall ratings. A major limitation of previous work on LARA is the assumption of pre-specified aspects by keywords. However, the aspect information is not always available, and it may be difficult to pre-define appropriate aspects without a good knowledge about what aspects are actually commented on in the reviews.\n In this paper, we propose a unified generative model for LARA, which does not need pre-specified aspect keywords and simultaneously mines 1) latent topical aspects, 2) ratings on each identified aspect, and 3) weights placed on different aspects by a reviewer. Experiment results on two different review data sets demonstrate that the proposed model can effectively perform the Latent Aspect Rating Analysis task without the supervision of aspect keywords. Because of its generality, the proposed model can be applied to explore all kinds of opinionated text data containing overall sentiment judgments and support a wide range of interesting application tasks, such as aspect-based opinion summarization, personalized entity ranking and recommendation, and reviewer behavior analysis.",
"title": ""
},
{
"docid": "5e82e67ebb99cac1b3874bf08e03b550",
"text": "Nonsmooth nonnegative matrix factorization (nsNMF) is capable of producing more localized, less overlapped feature representations than other variants of NMF while keeping satisfactory fit to data. However, nsNMF as well as other existing NMF methods are incompetent to learn hierarchical features of complex data due to its shallow structure. To fill this gap, we propose a deep nsNMF method coined by the fact that it possesses a deeper architecture compared with standard nsNMF. The deep nsNMF not only gives part-based features due to the nonnegativity constraints but also creates higher level, more abstract features by combing lower level ones. The in-depth description of how deep architecture can help to efficiently discover abstract features in dnsNMF is presented, suggesting that the proposed model inherits the major advantages from both deep learning and NMF. Extensive experiments demonstrate the standout performance of the proposed method in clustering analysis.",
"title": ""
},
{
"docid": "fb80a9ad20947bee7ba23d585896b6e8",
"text": "This paper presents an intelligent streetlight management system based on LED lamps, designed to facilitate its deployment in existing facilities. The proposed approach, which is based on wireless communication technologies, will minimize the cost of investment of traditional wired systems, which always need civil engineering for burying of cable underground and consequently are more expensive than if the connection of the different nodes is made over the air. The deployed solution will be aware of their surrounding's environmental conditions, a fact that will be approached for the system intelligence in order to learn, and later, apply dynamic rules. The knowledge of real time illumination needs, in terms of instant use of the street in which it is installed, will also feed our system, with the objective of providing tangible solutions to reduce energy consumption according to the contextual needs, an exact calculation of energy consumption and reliable mechanisms for preventive maintenance of facilities.",
"title": ""
},
{
"docid": "2895400382c5c8358d83a3c16b89f83c",
"text": "The problem of dimensionality reduction arises in many fields of information processing, including machine learning, data compression, scientific visualization, pattern recognition, and neural computation. Here we describe locally linear embedding (LLE), an unsupervised learning algorithm that computes low dimensional, neighborhood preserving embeddings of high dimensional data. The data, assumed to be sampled from an underlying manifold, are mapped into a single global coordinate system of lower dimensionality. The mapping is derived from the symmetries of locally linear reconstructions, and the actual computation of the embedding reduces to a sparse eigenvalue problem. Notably, the optimizations in LLE—though capable of generating highly nonlinear embeddings—are simple to implement, and they do not involve local minima. In this paper, we describe the implementation of the algorithm in detail and discuss several extensions that enhance its performance. We present results of the algorithm applied to data sampled from known manifolds, as well as to collections of images of faces, lips, and handwritten digits. These examples are used to provide extensive illustrations of the algorithm’s performance—both successes and failures—and to relate the algorithm to previous and ongoing work in nonlinear dimensionality reduction.",
"title": ""
},
{
"docid": "2c37ee67205320d54149a71be104c0e1",
"text": "This talk will review the mission, activities, and recommendations of the Blue Ribbon Panel on Cyberinfrastructure recently appointed by the leadership on the U.S. National Science Foundation (NSF). The NSF invests in people, ideas, and tools and in particular is a major investor in basic research to produce communication and information technology (ICT) as well as its use in supporting basic research and education in most all areas of science and engineering. The NSF through its Directorate for Computer and Information Science and Engineering (CISE) has provided substantial funding for high-end computing resources, initially by awards to five supercomputer centers and later through $70 M per year investments in two partnership alliances for advanced computation infrastructures centered at the University of Illinois and the University of California, San Diego. It has also invested in an array of complementary R&D initiatives in networking, middleware, digital libraries, collaboratories, computational and visualization science, and distributed terascale grid environments.",
"title": ""
},
{
"docid": "7e127a6f25e932a67f333679b0d99567",
"text": "This paper presents a novel manipulator for human-robot interaction that has low mass and inertia without losing stiffness and payload performance. A lightweight tension amplifying mechanism that increases the joint stiffness in quadratic order is proposed. High stiffness is essential for precise and rapid manipulation, and low mass and inertia are important factors for safety due to low stored kinetic energy. The proposed tension amplifying mechanism was applied to a 1-DOF elbow joint and then extended to a 3-DOF wrist joint. The developed manipulator was analyzed in terms of inertia, stiffness, and strength properties. Its moving part weighs 3.37 kg, and its inertia is 0.57 kg·m2, which is similar to that of a human arm. The stiffness of the developed elbow joint is 1440Nm/rad, which is comparable to that of the joints with rigid components in industrial manipulators. A detailed description of the design is provided, and thorough analysis verifies the performance of the proposed mechanism.",
"title": ""
},
{
"docid": "46fa91ce587d094441466a7cbe5c5f07",
"text": "Automatic facial expression analysis is an interesting and challenging problem which impacts important applications in many areas such as human-computer interaction and data-driven animation. Deriving effective facial representative features from face images is a vital step towards successful expression recognition. In this paper, we evaluate facial representation based on statistical local features called Local Binary Patterns (LBP) for facial expression recognition. Simulation results illustrate that LBP features are effective and efficient for facial expression recognition. A real-time implementation of the proposed approach is also demonstrated which can recognize expressions accurately at the rate of 4.8 frames per second.",
"title": ""
},
{
"docid": "07f0996fe2dcd3b52931b0aa09ac6f45",
"text": "We are interested in the situation where we have two or more re presentations of an underlying phenomenon. In particular we ar e interested in the scenario where the representation are complementary. This implies that a single individual representation is not sufficient to fully dis criminate a specific instance of the underlying phenomenon, it also means that each r presentation is an ambiguous representation of the other complementary spa ce . In this paper we present a latent variable model capable of consolidating multiple complementary representations. Our method extends canonical cor relation analysis by introducing additional latent spaces that are specific to th e different representations, thereby explaining the full variance of the observat ions. These additional spaces, explaining representation specific variance, sepa rat ly model the variance in a representation ambiguous to the other. We develop a spec tral algorithm for fast computation of the embeddings and a probabilistic mode l (based on Gaussian processes) for validation and inference. The proposed mode l has several potential application areas, we demonstrate its use for multi-modal r egression on a benchmark human pose estimation data set.",
"title": ""
},
{
"docid": "5d8f33b7f28e6a8d25d7a02c1f081af1",
"text": "Background The life sciences, biomedicine and health care are increasingly turning into a data intensive science [2-4]. Particularly in bioinformatics and computational biology we face not only increased volume and a diversity of highly complex, multi-dimensional and often weaklystructured and noisy data [5-8], but also the growing need for integrative analysis and modeling [9-14]. Due to the increasing trend towards personalized and precision medicine (P4 medicine: Predictive, Preventive, Participatory, Personalized [15]), biomedical data today results from various sources in different structural dimensions, ranging from the microscopic world, and in particular from the omics world (e.g., from genomics, proteomics, metabolomics, lipidomics, transcriptomics, epigenetics, microbiomics, fluxomics, phenomics, etc.) to the macroscopic world (e.g., disease spreading data of populations in public health informatics), see Figure 1[16]. Just for rapid orientation in terms of size: the Glucose molecule has a size of 900 pm = 900× 10−12m and the Carbon atom approx. 300 pm . A hepatitis virus is relatively large with 45nm = 45× 10−9m and the X-Chromosome much bigger with 7μm = 7× 10−6m . We produce most of the “Big Data” in the omics world, we estimate many Terabytes ( 1TB = 1× 10 Byte = 1000 GByte) of genomics data in each individual, consequently, the fusion of these with Petabytes of proteomics data for personalized medicine results in Exabytes of data (1 EB = 1× 1018 Byte ). Last but not least, this “natural” data is then fused together with “produced” data, e.g., the unstructured information (text) in the patient records, wellness data, the data from physiological sensors, laboratory data etc. these data are also rapidly increasing in size and complexity. Besides the problem of heterogeneous and distributed data, we are confronted with noisy, missing and inconsistent data. This leaves a large gap between the available “dirty” data [17] and the machinery to effectively process the data for the application purposes; moreover, the procedures of data integration and information extraction may themselves introduce errors and artifacts in the data [18]. Although, one may argue that “Big Data” is a buzz word, systematic and comprehensive exploration of all these data is often seen as the fourth paradigm in the investigation of nature after empiricism, theory and computation [19], and provides a mechanism for data driven hypotheses generation, optimized experiment planning, precision medicine and evidence-based medicine. The challenge is not only to extract meaningful information from this data, but to gain knowledge, to discover previously unknown insight, look for patterns, and to make sense of the data [20], [21]. Many different approaches, including statistical and graph theoretical methods, data mining, and machine learning methods, have been applied in the past however with partly unsatisfactory success [22,23] especially in terms of performance [24]. The grand challenge is to make data useful to and useable by the end user [25]. Maybe, the key challenge is interaction, due to the fact that it is the human end user who possesses the problem solving intelligence [26], hence the ability to ask intelligent questions about the data. The problem in the life sciences is that (biomedical) data models are characterized by significant complexity [27], [28], making manual analysis by the end users difficult and often impossible [29]. 
At the same time, human * Correspondence: a.holzinger@tugraz.at Research Unit Human-Computer Interaction, Austrian IBM Watson Think Group, Institute for Medical Informatics, Statistics & Documentation, Medical University Graz, Austria Full list of author information is available at the end of the article Holzinger et al. BMC Bioinformatics 2014, 15(Suppl 6):I1 http://www.biomedcentral.com/1471-2105/15/S6/I1",
"title": ""
},
{
"docid": "37af5d5ee2e4f6b94aa5c93d12f98017",
"text": "This paper reviews prior research in management accounting innovations covering the period 1926-2008. Management accounting innovations refer to the adoption of “newer” or modern forms of management accounting systems such as activity-based costing, activity-based management, time-driven activity-based costing, target costing, and balanced scorecards. Although some prior reviews, covering the period until 2000, place emphasis on modern management accounting techniques, however, we believe that the time gap between 2000 and 2008 could entail many new or innovative accounting issues. We find that research in management accounting innovations has intensified during the period 2000-2008, with the main focus has been on explaining various factors associated with the implementation and the outcome of an innovation. In addition, research in management accounting innovations indicates the dominant use of sociological-based theories and increasing use of field studies. We suggest some directions for future research pertaining to management accounting innovations.",
"title": ""
},
{
"docid": "7c86594614a6bd434ee4e749eb661cee",
"text": "The ACT-R system is a general system for modeling a wide range of higher level cognitive processes. Recently, it has been embellished with a theory of how its higher level processes interact with a visual interface. This includes a theory of how visual attention can move across the screen, encoding information into a form that can be processed by ACT-R. This system is applied to modeling several classic phenomena in the literature that depend on the speed and selectivity with which visual attention can move across a visual display. ACT-R is capable of interacting with the same computer screens that subjects do and, as such, is well suited to provide a model for tasks involving human-computer interaction. In this article, we discuss a demonstration of ACT-R's application to menu selection and show that the ACT-R theory makes unique predictions, without estimating any parameters, about the time to search a menu. These predictions are confirmed. John R. Anderson is a cognitive scientist with an interest in cognitive architectures and intelligent tutoring systems; he is a Professor of Psychology and Computer Science at Carnegie Mellon University. Michael Matessa is a graduate student studying cognitive psychology at Carnegie Mellon University; his interests include cognitive architectures and modeling the acquisition of information from the environment. Christian Lebiere is a computer scientist with an interest in intelligent architectures; he is a Research Programmer in the Department of Psycholo and a graduate student in the School of Computer Science at Carnegie Me1 By on University. 440 ANDERSON, MATESSA, LEBIERE",
"title": ""
},
{
"docid": "09c5af6e117376657f44afc3a2125293",
"text": "One of the main disturbances in a frequency-modulated continuous wave radar system for range measurement is nonlinearity in the frequency ramp. The intermediate frequency (IF) signal and consequently the target range accuracy are dependent on the type of the nonlinearity present in the frequency ramp. Moreover, the type of frequency ramp nonlinearity cannot be directly specified, which makes the problem even more challenging. In this paper, the frequency ramp nonlinearity is investigated with the modified short-time Fourier transform method by using the short-time Chirp-Z transform method with high accuracy. The random and periodic nonlinearities are characterized and their sources are identified as phase noise and spurious. These types of frequency deviations are intentionally increased, and their influence on the linearity and the IF-signal is investigated. The dependence of target range estimation accuracy on the frequency ramp nonlinearity, phase noise, spurious, and signal-to-noise ratio in the IF-signal are described analytically and are verified on the basis of measurements.",
"title": ""
},
{
"docid": "b09c438933e0c9300e19f035eb0e9305",
"text": "A Reverse Conducting IGBT (RC-IGBT) is a promising device to reduce a size and cost of the power module thanks to the integration of IGBT and FWD into a single chip. However, it is difficult to achieve well-balanced performance between IGBT and FWD. Indeed, the total inverter loss of the conventional RC-IGBT was not so small as the individual IGBT and FWD pair. To minimize the loss, the most important key is the improvement of reverse recovery characteristics of FWD. We carefully extracted five effective parameters to improve the FWD characteristics, and investigated the impact of these parameters by using simulation and experiments. Finally, optimizing these parameters, we succeeded in fabricating the second-generation 600V class RC-IGBT with a smaller FWD loss than the first-generation RC-IGBT.",
"title": ""
}
] |
scidocsrr
|
26783be6c02049e3b4df3b373534313e
|
Value Chain Creation in Business Analytics
|
[
{
"docid": "e964a46706179a92b775307166a64c8a",
"text": "I general, perceptions of information systems (IS) success have been investigated within two primary research streams—the user satisfaction literature and the technology acceptance literature. These two approaches have been developed in parallel and have not been reconciled or integrated. This paper develops an integrated research model that distinguishes beliefs and attitudes about the system (i.e., object-based beliefs and attitudes) from beliefs and attitudes about using the system (i.e., behavioral beliefs and attitudes) to build the theoretical logic that links the user satisfaction and technology acceptance literature. The model is then tested using a sample of 465 users from seven different organizations who completed a survey regarding their use of data warehousing software. The proposed model was supported, providing preliminary evidence that the two perspectives can and should be integrated. The integrated model helps build the bridge from design and implementation decisions to system characteristics (a core strength of the user satisfaction literature) to the prediction of usage (a core strength of the technology acceptance literature).",
"title": ""
},
{
"docid": "77bc1c8c80f756845b87428382e8fd91",
"text": "Previous research has proposed different types for and contingency factors affecting information technology governance. Yet, in spite of this valuable work, it is still unclear through what mechanisms IT governance affects organizational performance. We make a detailed argument for the mediation of strategic alignment in this process. Strategic alignment remains a top priority for business and IT executives, but theory-based empirical research on the relative importance of the factors affecting strategic alignment is still lagging. By consolidating strategic alignment and IT governance models, this research proposes a nomological model showing how organizational value is created through IT governance mechanisms. Our research model draws upon the resource-based view of the firm and provides guidance on how strategic alignment can mediate the effectiveness of IT governance on organizational performance. As such, it contributes to the knowledge bases of both alignment and IT governance literatures. Using dyadic data collected from 131 Taiwanese companies (cross-validated with archival data from 72 firms), we uncover a positive, significant, and impactful linkage between IT governance mechanisms and strategic alignment and, further, between strategic alignment and organizational performance. We also show that the effect of IT governance mechanisms on organizational performance is fully mediated by strategic alignment. Besides making contributions to construct and measure items in this domain, this research contributes to the theory base by integrating and extending the literature on IT governance and strategic alignment, both of which have long been recognized as critical for achieving organizational goals.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] |
[
{
"docid": "0c2e489edeac2c8ad5703eda644edfac",
"text": "Nowadays, more and more decision procedures are supported or even guided by automated processes. An important technique in this automation is data mining. In this chapter we study how such automatically generated decision support models may exhibit discriminatory behavior towards certain groups based upon, e.g., gender or ethnicity. Surprisingly, such behavior may even be observed when sensitive information is removed or suppressed and the whole procedure is guided by neutral arguments such as predictive accuracy only. The reason for this phenomenon is that most data mining methods are based upon assumptions that are not always satisfied in reality, namely, that the data is correct and represents the population well. In this chapter we discuss the implicit modeling assumptions made by most data mining algorithms and show situations in which they are not satisfied. Then we outline three realistic scenarios in which an unbiased process can lead to discriminatory models. The effects of the implicit assumptions not being fulfilled are illustrated by examples. The chapter concludes with an outline of the main challenges and problems to be solved.",
"title": ""
},
{
"docid": "6dd810d8a5180b49ded351f0acf135b8",
"text": "In classification problem, we assume that the samples around the class boundary are more likely to be incorrectly annotated than others, and propose boundaryconditional class noise (BCN). Based on the BCN assumption, we use unnormalized Gaussian and Laplace distributions to directly model how class noise is generated, in symmetric and asymmetric cases. In addition, we demonstrate that Logistic regression and Probit regression can also be reinterpreted from this class noise perspective, and compare them with the proposed models. The empirical study shows that, the proposed asymmetric models overall outperform the benchmark linear models, and the asymmetric Laplace-noise model achieves the best performance among all.",
"title": ""
},
{
"docid": "61160371b2a85f1b937105cc43d3c70d",
"text": "Regular expressions are extremely useful, because they allow us to work with text in terms of patterns. They are considered the most sophisticated means of performing operations such as string searching, manipulation, validation, and formatting in all applications that deal with text data. Character recognition problem scenarios in sequence analysis that are ideally suited for the application of regular expression algorithms. This paper describes a use of regular expressions in this problem domain, and demonstrates how the effective use of regular expressions that can serve to facilitate more efficient and more effective character recognition.",
"title": ""
},
{
"docid": "b7a4b6f6f3028923853649077c18dfa5",
"text": "The increasing ageing population around the world and the increased risk of falling among this demographic, challenges society and technology to find better ways to mitigate the occurrence of such costly and detrimental events as falls. The most common activity associated with falls is bed transfers; therefore, the most significant high risk activity. Several technological solutions exist for bed exiting detection using a variety of sensors which are attached to the body, bed or floor. However, lack of real life performance studies, technical limitations and acceptability are still key issues. In this research, we present and evaluate a novel method for mitigating the high falls risk associated with bed exits based on using an inexpensive, privacy preserving and passive sensor enabled RFID device. Our approach is based on a classification system built upon conditional random fields that requires no preprocessing of sensorial and RF metrics data extracted from an RFID platform. We evaluated our classification algorithm and the wearability of our sensor using elderly volunteers (66-86 y.o.). The results demonstrate the validity of our approach and the performance is an improvement on previous bed exit classification studies. The participants of the study also overwhelmingly agreed that the sensor was indeed wearable and presented no problems.",
"title": ""
},
{
"docid": "cbcf4ca356682ee9c09b87fa1cd26ba2",
"text": "The field of data analytics is currently going through a renaissance as a result of ever-increasing dataset sizes, the value of the models that can be trained from those datasets, and a surge in flexible, distributed programming models. In particular, the Apache Hadoop and Spark programming systems, as well as their supporting projects (e.g. HDFS, SparkSQL), have greatly simplified the analysis and transformation of datasets whose size exceeds the capacity of a single machine. While these programming models facilitate the use of distributed systems to analyze large datasets, they have been plagued by performance issues. The I/O performance bottlenecks of Hadoop are partially responsible for the creation of Spark. Performance bottlenecks in Spark due to the JVM object model, garbage collection, interpreted/managed execution, and other abstraction layers are responsible for the creation of additional optimization layers, such as Project Tungsten. Indeed, the Project Tungsten issue tracker states that the \"majority of Spark workloads are not bottlenecked by I/O or network, but rather CPU and memory\".\n In this work, we address the CPU and memory performance bottlenecks that exist in Apache Spark by accelerating user-written computational kernels using accelerators. We refer to our approach as Spark With Accelerated Tasks (SWAT). SWAT is an accelerated data analytics (ADA) framework that enables programmers to natively execute Spark applications on high performance hardware platforms with co-processors, while continuing to write their applications in a JVM-based language like Java or Scala. Runtime code generation creates OpenCL kernels from JVM bytecode, which are then executed on OpenCL accelerators. In our work we emphasize 1) full compatibility with a modern, existing, and accepted data analytics platform, 2) an asynchronous, event-driven, and resource-aware runtime, 3) multi-GPU memory management and caching, and 4) ease-of-use and programmability. Our performance evaluation demonstrates up to 3.24x overall application speedup relative to Spark across six machine learning benchmarks, with a detailed investigation of these performance improvements.",
"title": ""
},
{
"docid": "aa23ee34f7117f6d5f83374b8623f4dc",
"text": "PURPOSE OF REVIEW\nThe notion that play may facilitate learning has long been touted. Here, we review how video game play may be leveraged for enhancing attentional control, allowing greater cognitive flexibility and learning and in turn new routes to better address developmental disorders.\n\n\nRECENT FINDINGS\nVideo games, initially developed for entertainment, appear to enhance the behavior in domains as varied as perception, attention, task switching, or mental rotation. This surprisingly wide transfer may be mediated by enhanced attentional control, allowing increased signal-to-noise ratio and thus more informed decisions.\n\n\nSUMMARY\nThe possibility of enhancing attentional control through targeted interventions, be it computerized training or self-regulation techniques, is now well established. Embedding such training in video game play is appealing, given the astounding amount of time spent by children and adults worldwide with this media. It holds the promise of increasing compliance in patients and motivation in school children, and of enhancing the use of positive impact games. Yet for all the promises, existing research indicates that not all games are created equal: a better understanding of the game play elements that foster attention and learning as well as of the strategies developed by the players is needed. Computational models from machine learning or developmental robotics provide a rich theoretical framework to develop this work further and address its impact on developmental disorders.",
"title": ""
},
{
"docid": "461d47e03c5740d744dd3e3cbb1e2216",
"text": "The Multidimensional Personality Questionnaire (MPQ; A. Tellegen, 1982, in press) provides for a comprehensive analysis of personality at both the lower order trait and broader structural levels. Its higher order dimensions of Positive Emotionality, Negative Emotionality, and Constraint embody affect and temperament constructs, which have been conceptualized in psychobiological terms. The MPQ thus holds considerable potential as a structural framework for investigating personality across varying levels of analysis, and this potential would be enhanced by the availability of an abbreviated version. This article describes efforts to develop and validate a brief (155-item) form, the MPQ-BF. Success was evidenced by uniformly high correlations between the brief- and full-form trait scales and consistency of higher order structures. The MPQ-BF is recommended as a tool for investigating the genetic, neurobiological, and psychological substrates of personality.",
"title": ""
},
{
"docid": "f25f7ae3fc614a236f3948d68f488c5b",
"text": "Internet of Things (IoT) has gained substantial attention recently and play a significant role in smart city application deployments. A number of such smart city applications depend on sensor fusion capabilities in the cloud from diverse data sources. We introduce the concept of IoT and present in detail ten different parameters that govern our sensor data fusion evaluation framework. We then evaluate the current state-of-the art in sensor data fusion against our sensor data fusion framework. Our main goal is to examine and survey different sensor data fusion research efforts based on our evaluation framework. The major open research issues related to sensor data fusion are also presented.",
"title": ""
},
{
"docid": "5896289f0a9b788ef722756953a580ce",
"text": "Biodiesel, defined as the mono-alkyl esters of vegetable oils or animal fats, is an balternativeQ diesel fuel that is becoming accepted in a steadily growing number of countries around the world. Since the source of biodiesel varies with the location and other sources such as recycled oils are continuously gaining interest, it is important to possess data on how the various fatty acid profiles of the different sources can influence biodiesel fuel properties. The properties of the various individual fatty esters that comprise biodiesel determine the overall fuel properties of the biodiesel fuel. In turn, the properties of the various fatty esters are determined by the structural features of the fatty acid and the alcohol moieties that comprise a fatty ester. Structural features that influence the physical and fuel properties of a fatty ester molecule are chain length, degree of unsaturation, and branching of the chain. Important fuel properties of biodiesel that are influenced by the fatty acid profile and, in turn, by the structural features of the various fatty esters are cetane number and ultimately exhaust emissions, heat of combustion, cold flow, oxidative stability, viscosity, and lubricity. Published by Elsevier B.V.",
"title": ""
},
{
"docid": "e6d7399b88c57aebca0a43662d7fd855",
"text": "UNLABELLED\nAlthough the brain relies on auditory information to calibrate vocal behavior, the neural substrates of vocal learning remain unclear. Here we demonstrate that lesions of the dopaminergic inputs to a basal ganglia nucleus in a songbird species (Bengalese finches, Lonchura striata var. domestica) greatly reduced the magnitude of vocal learning driven by disruptive auditory feedback in a negative reinforcement task. These lesions produced no measureable effects on the quality of vocal performance or the amount of song produced. Our results suggest that dopaminergic inputs to the basal ganglia selectively mediate reinforcement-driven vocal plasticity. In contrast, dopaminergic lesions produced no measurable effects on the birds' ability to restore song acoustics to baseline following the cessation of reinforcement training, suggesting that different forms of vocal plasticity may use different neural mechanisms.\n\n\nSIGNIFICANCE STATEMENT\nDuring skill learning, the brain relies on sensory feedback to improve motor performance. However, the neural basis of sensorimotor learning is poorly understood. Here, we investigate the role of the neurotransmitter dopamine in regulating vocal learning in the Bengalese finch, a songbird with an extremely precise singing behavior that can nevertheless be reshaped dramatically by auditory feedback. Our findings show that reduction of dopamine inputs to a region of the songbird basal ganglia greatly impairs vocal learning but has no detectable effect on vocal performance. These results suggest a specific role for dopamine in regulating vocal plasticity.",
"title": ""
},
{
"docid": "30c5f12ecaec4f385c2be3bb8ef8eb1e",
"text": "Human has the ability to roughly estimate the distance and size of an object because of the stereo vision of human's eyes. In this project, we proposed to utilize stereo vision system to accurately measure the distance and size (height and width) of object in view. Object size identification is very useful in building systems or applications especially in autonomous system navigation. Many recent works have started to use multiple vision sensors or cameras for different type of application such as 3D image constructions, occlusion detection and etc. Multiple cameras system has becoming more popular since cameras are now very cheap and easy to deploy and utilize. The proposed measurement system consists of object detection on the stereo images and blob extraction and distance and size calculation and object identification. The system also employs a fast algorithm so that the measurement can be done in real-time. The object measurement using stereo camera is better than object detection using a single camera that was proposed in many previous research works. It is much easier to calibrate and can produce a more accurate results.",
"title": ""
},
{
"docid": "71757d1cee002bb235a591cf0d5aafd5",
"text": "There is an old Wall Street adage goes, ‘‘It takes volume to make price move”. The contemporaneous relation between trading volume and stock returns has been studied since stock markets were first opened. Recent researchers such as Wang and Chin [Wang, C. Y., & Chin S. T. (2004). Profitability of return and volume-based investment strategies in China’s stock market. Pacific-Basin Finace Journal, 12, 541–564], Hodgson et al. [Hodgson, A., Masih, A. M. M., & Masih, R. (2006). Futures trading volume as a determinant of prices in different momentum phases. International Review of Financial Analysis, 15, 68–85], and Ting [Ting, J. J. L. (2003). Causalities of the Taiwan stock market. Physica A, 324, 285–295] have found the correlation between stock volume and price in stock markets. To verify this saying, in this paper, we propose a dual-factor modified fuzzy time-series model, which take stock index and trading volume as forecasting factors to predict stock index. In empirical analysis, we employ the TAIEX (Taiwan stock exchange capitalization weighted stock index) and NASDAQ (National Association of Securities Dealers Automated Quotations) as experimental datasets and two multiplefactor models, Chen’s [Chen, S. M. (2000). Temperature prediction using fuzzy time-series. IEEE Transactions on Cybernetics, 30 (2), 263–275] and Huarng and Yu’s [Huarng, K. H., & Yu, H. K. (2005). A type 2 fuzzy time-series model for stock index forecasting. Physica A, 353, 445–462], as comparison models. The experimental results indicate that the proposed model outperforms the listing models and the employed factors, stock index and the volume technical indicator, VR(t), are effective in stock index forecasting. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "815feed9cce2344872c50da6ffb77093",
"text": "Over the last decade blogs became an important part of the Web, where people can announce anything that is on their mind. Due to their high popularity blogs have great potential to mine public opinions regarding products. Such knowledge is very valuable as it could be used to adjust marketing campaigns or advertisement of products accordingly. In this paper we investigate how the blogosphere can be used to predict the success of products in the domain of music and movies. We analyze and characterize the blogging behavior in both domains particularly around product releases, propose different methods for extracting characteristic features from the blogosphere, and show that our predictions correspond to the real world measures Sales Rank and box office revenue respectively.",
"title": ""
},
{
"docid": "4494d5b42c8daf6a45608159a748fd7d",
"text": "A number of recent papers have provided evidence that practical design questions about neural networks may be tackled theoretically by studying the behavior of random networks. However, until now the tools available for analyzing random neural networks have been relatively ad hoc. In this work, we show that the distribution of pre-activations in random neural networks can be exactly mapped onto lattice models in statistical physics. We argue that several previous investigations of stochastic networks actually studied a particular factorial approximation to the full lattice model. For random linear networks and random rectified linear networks we show that the corresponding lattice models in the wide network limit may be systematically approximated by a Gaussian distribution with covariance between the layers of the network. In each case, the approximate distribution can be diagonalized by Fourier transformation. We show that this approximation accurately describes the results of numerical simulations of wide random neural networks. Finally, we demonstrate that in each case the large scale behavior of the random networks can be approximated by an effective field theory.",
"title": ""
},
{
"docid": "ed35d80dd3af3acbe75e5122b2378756",
"text": "We present a system whereby the human voice may specify continuous control signals to manipulate a simulated 2D robotic arm and a real 3D robotic arm. Our goal is to move towards making accessible the manipulation of everyday objects to individuals with motor impairments. Using our system, we performed several studies using control style variants for both the 2D and 3D arms. Results show that it is indeed possible for a user to learn to effectively manipulate real-world objects with a robotic arm using only non-verbal voice as a control mechanism. Our results provide strong evidence that the further development of non-verbal voice controlled robotics and prosthetic limbs will be successful.",
"title": ""
},
{
"docid": "b99944ad31c5ad81d0e235c200a332b4",
"text": "This paper introduces speech-based visual question answering (VQA), the task of generating an answer given an image and a spoken question. Two methods are studied: an end-to-end, deep neural network that directly uses audio waveforms as input versus a pipelined approach that performs ASR (Automatic Speech Recognition) on the question, followed by text-based visual question answering. Furthermore, we investigate the robustness of both methods by injecting various levels of noise into the spoken question and find both methods to be tolerate noise at similar levels.",
"title": ""
},
{
"docid": "ec7c9fa71dcf32a3258ee8712ccb95c1",
"text": "Fuzzy graph is now a very important research area due to its wide application. Fuzzy multigraph and fuzzy planar graphs are two subclasses of fuzzy graph theory. In this paper, we define both of these graphs and studied a lot of properties. A very close association of fuzzy planar graph is fuzzy dual graph. This is also defined and studied several properties. The relation between fuzzy planar graph and fuzzy dual graph is also established.",
"title": ""
},
{
"docid": "17162eac4f1292e4c2ad7ef83af803f1",
"text": "Recent years have witnessed significant progresses in deep Reinforcement Learning (RL). Empowered with large scale neural networks, carefully designed architectures, novel training algorithms and massively parallel computing devices, researchers are able to attack many challenging RL problems. However, in machine learning, more training power comes with a potential risk of more overfitting. As deep RL techniques are being applied to critical problems such as healthcare and finance, it is important to understand the generalization behaviors of the trained agents. In this paper, we conduct a systematic study of standard RL agents and find that they could overfit in various ways. Moreover, overfitting could happen “robustly”: commonly used techniques in RL that add stochasticity do not necessarily prevent or detect overfitting. In particular, the same agents and learning algorithms could have drastically different test performance, even when all of them achieve optimal rewards during training. The observations call for more principled and careful evaluation protocols in RL. We conclude with a general discussion on overfitting in RL and a study of the generalization behaviors from the perspective of inductive bias.",
"title": ""
},
{
"docid": "20e19999be17bce4ba3ae6d94400ba3c",
"text": "Due to the coarse granularity of data accesses and the heavy use of latches, indices in the B-tree family are not efficient for in-memory databases, especially in the context of today's multi-core architecture. In this paper, we study the parallelizability of skip lists for the parallel and concurrent environment, and present PSL, a Parallel in-memory Skip List that lends itself naturally to the multi-core environment, particularly with non-uniform memory access. For each query, PSL traverses the index in a Breadth-First-Search (BFS) to find the list node with the matching key, and exploits SIMD processing to speed up this process. Furthermore, PSL distributes incoming queries among multiple execution threads disjointly and uniformly to eliminate the use of latches and achieve a high parallelizability. The experimental results show that PSL is comparable to a readonly index, FAST, in terms of read performance, and outperforms ART and Masstree respectively by up to 30% and 5x for a variety of workloads.",
"title": ""
},
{
"docid": "0b9ae0bf6f6201249756d87a56f0005e",
"text": "To reduce energy consumption and wastage, effective energy management at home is key and an integral part of the future Smart Grid. In this paper, we present the design and implementation of Green Home Service (GHS) for home energy management. Our approach addresses the key issues of home energy management in Smart Grid: a holistic management solution, improved device manageability, and an enabler of Demand-Response. We also present the scheduling algorithms in GHS for smart energy management and show the results in simulation studies.",
"title": ""
}
] |
scidocsrr
|
7096fe493a51cd3c4c428cb55b83ffbf
|
Oligarchic Control of Business-to-Business Blockchains
|
[
{
"docid": "668953b5f6fbfc440bb6f3a91ee7d06b",
"text": "Proof of Work (PoW) powered blockchains currently account for more than 90% of the total market capitalization of existing digital cryptocurrencies. Although the security provisions of Bitcoin have been thoroughly analysed, the security guarantees of variant (forked) PoW blockchains (which were instantiated with different parameters) have not received much attention in the literature. This opens the question whether existing security analysis of Bitcoin's PoW applies to other implementations which have been instantiated with different consensus and/or network parameters.\n In this paper, we introduce a novel quantitative framework to analyse the security and performance implications of various consensus and network parameters of PoW blockchains. Based on our framework, we devise optimal adversarial strategies for double-spending and selfish mining while taking into account real world constraints such as network propagation, different block sizes, block generation intervals, information propagation mechanism, and the impact of eclipse attacks. Our framework therefore allows us to capture existing PoW-based deployments as well as PoW blockchain variants that are instantiated with different parameters, and to objectively compare the tradeoffs between their performance and security provisions.",
"title": ""
},
{
"docid": "4fc67f5a4616db0906b943d7f13c856d",
"text": "Overview. A blockchain is best understood in the model of state-machine replication [8], where a service maintains some state and clients invoke operations that transform the state and generate outputs. A blockchain emulates a “trusted” computing service through a distributed protocol, run by nodes connected over the Internet. The service represents or creates an asset, in which all nodes have some stake. The nodes share the common goal of running the service but do not necessarily trust each other for more. In a “permissionless” blockchain such as the one underlying the Bitcoin cryptocurrency, anyone can operate a node and participate through spending CPU cycles and demonstrating a “proof-of-work.” On the other hand, blockchains in the “permissioned” model control who participates in validation and in the protocol; these nodes typically have established identities and form a consortium. A report of Swanson compares the two models [9].",
"title": ""
}
] |
[
{
"docid": "7747ea744400418a9003f8bd0990fe71",
"text": "0747-5632/$ see front matter 2009 Elsevier Ltd. A doi:10.1016/j.chb.2009.06.001 * Tel.: +82 02 74",
"title": ""
},
{
"docid": "0c7891f543b79f8b196504fbf81493ba",
"text": "Twenty-five years of consumer socialization research have yielded an impressive set of findings. The purpose of our article is to review these findings and assess what we know about children’s development as consumers. Our focus is on the developmental sequence characterizing the growth of consumer knowledge, skills, and values as children mature throughout childhood and adolescence. In doing so, we present a conceptual framework for understanding consumer socialization as a series of stages, with transitions between stages occurring as children grow older and mature in cognitive and social terms. We then review empirical findings illustrating these stages, including children’s knowledge of products, brands, advertising, shopping, pricing, decision-making strategies, parental influence strategies, and consumption motives and values. Based on the evidence reviewed, implications are drawn for future theoretical and empirical development in the field of consumer socialization.",
"title": ""
},
{
"docid": "3bf9e696755c939308efbcca363d4f49",
"text": "Robotic navigation requires that the robotic platform have an idea of its location and orientation within the environment. This localization is known as pose estimation, and has been a much researched topic. There are currently two main categories of pose estimation techniques: pose from hardware, and pose from video (PfV). Hardware pose estimation utilizes specialized hardware such as Global Positioning Systems (GPS) and Inertial Navigation Systems (INS) to estimate the position and orientation of the platform at the specified times. PfV systems use video cameras to estimate the pose of the system by calculating the inter-frame motion of the camera from features present in the images. These pose estimation systems are readily integrated, and can be used to augment and/or supplant each other according to the needs of the application. Both pose from video and hardware pose estimation have their uses, but each also has its degenerate cases in which they fail to provide reliable data. Hardware solutions can provide extremely accurate data, but are usually quite pricey and can be restrictive in their environments of operation. Pose from video solutions can be implemented with low-cost off-the-shelf components, but the accuracy of the PfV results can be degraded by noisy imagery, ambiguity in the feature matching process, and moving objects. This paper attempts to evaluate the cost/benefit comparison between pose from video and hardware pose estimation experimentally, and to provide a guide as to which systems should be used under certain scenarios.",
"title": ""
},
{
"docid": "6b3e0cd49c05c43abd7c8d0b6db093b0",
"text": "We present a new image reconstruction method that replaces the projector in a projected gradient descent (PGD) with a convolutional neural network (CNN). Recently, CNNs trained as image-to-image regressors have been successfully used to solve inverse problems in imaging. However, unlike existing iterative image reconstruction algorithms, these CNN-based approaches usually lack a feedback mechanism to enforce that the reconstructed image is consistent with the measurements. We propose a relaxed version of PGD wherein gradient descent enforces measurement consistency, while a CNN recursively projects the solution closer to the space of desired reconstruction images. We show that this algorithm is guaranteed to converge and, under certain conditions, converges to a local minimum of a non-convex inverse problem. Finally, we propose a simple scheme to train the CNN to act like a projector. Our experiments on sparse-view computed-tomography reconstruction show an improvement over total variation-based regularization, dictionary learning, and a state-of-the-art deep learning-based direct reconstruction technique.",
"title": ""
},
{
"docid": "a999bf3da879dde7fc2acb8794861daf",
"text": "Most OECD Member countries have sought to renew their systems and structures of public management in the last 10-15 years. Some started earlier than others and the emphasis will vary among Member countries according to their historic traditions and institutions. There is no single best model of public management, but what stands out most clearly is the extent to which countries have pursued and are pursuing broadly common approaches to public management reform. This is most probably because countries have been responding to essentially similar pressures to reform.",
"title": ""
},
{
"docid": "88b89521775ba2d8570944a54e516d0f",
"text": "The idea that the purely phenomenological knowledge that we can extract by analyzing large amounts of data can be useful in healthcare seems to contradict the desire of VPH researchers to build detailed mechanistic models for individual patients. But in practice no model is ever entirely phenomenological or entirely mechanistic. We propose in this position paper that big data analytics can be successfully combined with VPH technologies to produce robust and effective in silico medicine solutions. In order to do this, big data technologies must be further developed to cope with some specific requirements that emerge from this application. Such requirements are: working with sensitive data; analytics of complex and heterogeneous data spaces, including nontextual information; distributed data management under security and performance constraints; specialized analytics to integrate bioinformatics and systems biology information with clinical observations at tissue, organ and organisms scales; and specialized analytics to define the “physiological envelope” during the daily life of each patient. These domain-specific requirements suggest a need for targeted funding, in which big data technologies for in silico medicine becomes the research priority.",
"title": ""
},
{
"docid": "8a9cf6b4d7d6d2be1d407ef41ceb23e5",
"text": "A highly discriminative and computationally efficient descriptor is needed in many computer vision applications involving human action recognition. This paper proposes a hand-crafted skeleton-based descriptor for human action recognition. It is constructed from five fixed size covariance matrices calculated using strongly related joints coordinates over five body parts (spine, left/ right arms, and left/ right legs). Since covariance matrices are symmetric, the lower/ upper triangular parts of these matrices are concatenated to generate an efficient descriptor. It achieves a saving from 78.26 % to 80.35 % in storage space and from 75 % to 90 % in processing time (depending on the dataset) relative to techniques adopting a covariance descriptor based on all the skeleton joints. To show the effectiveness of the proposed method, its performance is evaluated on five public datasets: MSR-Action3D, MSRC-12 Kinect Gesture, UTKinect-Action, Florence3D-Action, and NTU RGB+D. The obtained recognition rates on all datasets outperform many existing methods and compete with the current state of the art techniques.",
"title": ""
},
{
"docid": "40d46bc75d11b6d4139cb7a1267ac234",
"text": "10 Abstract This paper introduces the third generation of Pleated Pneumatic Artificial Muscles (PPAM), which has been developed to simplify the production over the first and second prototype. This type of artificial muscle was developed to overcome dry friction and material deformation, which is present in the widely used McKibben muscle. The essence of the PPAM is its pleated membrane structure which enables the 15 muscle to work at low pressures and at large contractions. In order to validate the new PPAM generation, it has been compared with the mathematical model and the previous generation. The new production process and the use of new materials introduce improvements such as 55% reduction in the actuator’s weight, a higher reliability, a 75% reduction in the production time and PPAMs can now be produced in all sizes from 4 to 50 cm. This opens the possibility to commercialize this type of muscles 20 so others can implement it. Furthermore, a comparison with experiments between PPAM and Festo McKibben muscles is discussed. Small PPAMs present similar force ranges and larger contractions than commercially available McKibben-like muscles. The use of series arrangements of PPAMs allows for large strokes and relatively small diameters at the same time and, since PPAM 3.0 is much more lightweight than the commong McKibben models made by Festo, it presents better force-to-mass and energy 25 to mass ratios than Festo models. 2012 Taylor & Francis and The Robotics Society of Japan",
"title": ""
},
{
"docid": "e995ed011dedd9e543f07a4af78e27bb",
"text": "Over the last years, computer networks have evolved into highly dynamic and interconnected environments, involving multiple heterogeneous devices and providing a myriad of services on top of them. This complex landscape has made it extremely difficult for security administrators to keep accurate and be effective in protecting their systems against cyber threats. In this paper, we describe our vision and scientific posture on how artificial intelligence techniques and a smart use of security knowledge may assist system administrators in better defending their networks. To that end, we put forward a research roadmap involving three complimentary axes, namely, (I) the use of FCA-based mechanisms for managing configuration vulnerabilities, (II) the exploitation of knowledge representation techniques for automated security reasoning, and (III) the design of a cyber threat intelligence mechanism as a CKDD process. Then, we describe a machine-assisted process for cyber threat analysis which provides a holistic perspective of how these three research axes are integrated together.",
"title": ""
},
{
"docid": "631d2c75377517fed1864e3a47ae873e",
"text": "Choi, Wiemer-Hastings, and Moore (2001) proposed to use Latent Semantic Analysis (LSA) to extract semantic knowledge from corpora in order to improve the accuracy of a text segmentation algorithm. By comparing the accuracy of the very same algorithm, depending on whether or not it takes into account complementary semantic knowledge, they were able to show the benefit derived from such knowledge. In their experiments, semantic knowledge was, however, acquired from a corpus containing the texts to be segmented in the test phase. If this hyper-specificity of the LSA corpus explains the largest part of the benefit, one may wonder if it is possible to use LSA to acquire generic semantic knowledge that can be used to segment new texts. The two experiments reported here show that the presence of the test materials in the LSA corpus has an important effect, but also that the generic semantic knowledge derived from large corpora clearly improves the segmentation accuracy.",
"title": ""
},
{
"docid": "77c8dc928492524cbf665422bbcce60d",
"text": "Full terms and conditions of use: http://pubsonline.informs.org/page/terms-and-conditions This article may be used only for the purposes of research, teaching, and/or private study. Commercial use or systematic downloading (by robots or other automatic processes) is prohibited without explicit Publisher approval, unless otherwise noted. For more information, contact permissions@informs.org. The Publisher does not warrant or guarantee the article’s accuracy, completeness, merchantability, fitness for a particular purpose, or non-infringement. Descriptions of, or references to, products or publications, or inclusion of an advertisement in this article, neither constitutes nor implies a guarantee, endorsement, or support of claims made of that product, publication, or service. Copyright © 2016, INFORMS",
"title": ""
},
{
"docid": "2cebd9275e30da41a97f6d77207cc793",
"text": "Cyber-physical systems, such as mobile robots, must respond adaptively to dynamic operating conditions. Effective operation of these systems requires that sensing and actuation tasks are performed in a timely manner. Additionally, execution of mission specific tasks such as imaging a room must be balanced against the need to perform more general tasks such as obstacle avoidance. This problem has been addressed by maintaining relative utilization of shared resources among tasks near a user-specified target level. Producing optimal scheduling strategies requires complete prior knowledge of task behavior, which is unlikely to be available in practice. Instead, suitable scheduling strategies must be learned online through interaction with the system. We consider the sample complexity of reinforcement learning in this domain, and demonstrate that while the problem state space is countably infinite, we may leverage the problem’s structure to guarantee efficient learning.",
"title": ""
},
{
"docid": "259339e228c4b569f3813d3f3c7c832f",
"text": "BACKGROUND\nPrevention and management of work-related stress and related mental problems is a great challenge. Mobile applications are a promising way to integrate prevention strategies into the everyday lives of citizens.\n\n\nOBJECTIVE\nThe objectives of this study was to study the usage, acceptance, and usefulness of a mobile mental wellness training application among working-age individuals, and to derive preliminary design implications for mobile apps for stress management.\n\n\nMETHODS\nOiva, a mobile app based on acceptance and commitment therapy (ACT), was designed to support active learning of skills related to mental wellness through brief ACT-based exercises in the daily life. A one-month field study with 15 working-age participants was organized to study the usage, acceptance, and usefulness of Oiva. The usage of Oiva was studied based on the usage log files of the application. Changes in wellness were measured by three validated questionnaires on stress, satisfaction with life (SWLS), and psychological flexibility (AAQ-II) at the beginning and at end of the study and by user experience questionnaires after one week's and one month's use. In-depth user experience interviews were conducted after one month's use to study the acceptance and user experiences of Oiva.\n\n\nRESULTS\nOiva was used actively throughout the study. The average number of usage sessions was 16.8 (SD 2.4) and the total usage time per participant was 3 hours 12 minutes (SD 99 minutes). Significant pre-post improvements were obtained in stress ratings (mean 3.1 SD 0.2 vs mean 2.5 SD 0.1, P=.003) and satisfaction with life scores (mean 23.1 SD 1.3 vs mean 25.9 SD 0.8, P=.02), but not in psychological flexibility. Oiva was perceived easy to use, acceptable, and useful by the participants. A randomized controlled trial is ongoing to evaluate the effectiveness of Oiva on working-age individuals with stress problems.\n\n\nCONCLUSIONS\nA feasibility study of Oiva mobile mental wellness training app showed good acceptability, usefulness, and engagement among the working-age participants, and provided increased understanding on the essential features of mobile apps for stress management. Five design implications were derived based on the qualitative findings: (1) provide exercises for everyday life, (2) find proper place and time for challenging content, (3) focus on self-improvement and learning instead of external rewards, (4) guide gently but do not restrict choice, and (5) provide an easy and flexible tool for self-reflection.",
"title": ""
},
{
"docid": "b65ca87f617d8ddf451a4d9dab470d17",
"text": "Artificial neural network is one of the intelligent methods in Artificial Intelligence. There are many decisions of different tasks using neural network approach. The forecasting problems are high challenge and researchers use different methods to solve them. The financial tasks related to forecasting, classification and management using artificial neural network are considered. The technology and methods for prediction of financial data as well as the developed system for forecasting of financial markets via neural network are described in the paper. The designed architecture of a neural network using four different technical indicators is presented. The developed neural network is used for forecasting movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is a training algorithm with back propagation of the error. The main advantage of the developed system is self-determination of the optimal topology of neural network, due to which it becomes flexible and more precise. The proposed system with neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data. Key-Words: neural networks, forecasting, training algorithm, financial indicators, backpropagation",
"title": ""
},
{
"docid": "fa82b75a3244ef2407c2d14c8a3a5918",
"text": "Popular sites like Houzz, Pinterest, and LikeThatDecor, have communities of users helping each other answer questions about products in images. In this paper we learn an embedding for visual search in interior design. Our embedding contains two different domains of product images: products cropped from internet scenes, and products in their iconic form. With such a multi-domain embedding, we demonstrate several applications of visual search including identifying products in scenes and finding stylistically similar products. To obtain the embedding, we train a convolutional neural network on pairs of images. We explore several training architectures including re-purposing object classifiers, using siamese networks, and using multitask learning. We evaluate our search quantitatively and qualitatively and demonstrate high quality results for search across multiple visual domains, enabling new applications in interior design.",
"title": ""
},
{
"docid": "69a6cfb649c3ccb22f7a4467f24520f3",
"text": "We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-tosequence question-generation model with a copy mechanism. Empirically, our keyphrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This twostage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.",
"title": ""
},
{
"docid": "0a05cfa04d520fcf1db6c4aafb9b65b6",
"text": "Motor learning can be defined as changing performance so as to optimize some function of the task, such as accuracy. The measure of accuracy that is optimized is called a loss function and specifies how the CNS rates the relative success or cost of a particular movement outcome. Models of pointing in sensorimotor control and learning usually assume a quadratic loss function in which the mean squared error is minimized. Here we develop a technique for measuring the loss associated with errors. Subjects were required to perform a task while we experimentally controlled the skewness of the distribution of errors they experienced. Based on the change in the subjects' average performance, we infer the loss function. We show that people use a loss function in which the cost increases approximately quadratically with error for small errors and significantly less than quadratically for large errors. The system is thus robust to outliers. This suggests that models of sensorimotor control and learning that have assumed minimizing squared error are a good approximation but tend to penalize large errors excessively.",
"title": ""
},
{
"docid": "c83db87d7ac59e1faf75b408953e1324",
"text": "PURPOSE\nThis project was conducted to obtain information about reading problems of adults with traumatic brain injury (TBI) with mild-to-moderate cognitive impairments and to investigate how these readers respond to reading comprehension strategy prompts integrated into digital versions of text.\n\n\nMETHOD\nParticipants from 2 groups, adults with TBI (n = 15) and matched controls (n = 15), read 4 different 500-word expository science passages linked to either a strategy prompt condition or a no-strategy prompt condition. The participants' reading comprehension was evaluated using sentence verification and free recall tasks.\n\n\nRESULTS\nThe TBI and control groups exhibited significant differences on 2 of the 5 reading comprehension measures: paraphrase statements on a sentence verification task and communication units on a free recall task. Unexpected group differences were noted on the participants' prerequisite reading skills. For the within-group comparison, participants showed significantly higher reading comprehension scores on 2 free recall measures: words per communication unit and type-token ratio. There were no significant interactions.\n\n\nCONCLUSION\nThe results help to elucidate the nature of reading comprehension in adults with TBI with mild-to-moderate cognitive impairments and endorse further evaluation of reading comprehension strategies as a potential intervention option for these individuals. Future research is needed to better understand how individual differences influence a person's reading and response to intervention.",
"title": ""
},
{
"docid": "1ffc6db796b8e8a03165676c1bc48145",
"text": "UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for dimension reduction. UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology. e result is a practical scalable algorithm that applies to real world data. e UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance. Furthermore, UMAP has no computational restrictions on embedding dimension, making it viable as a general purpose dimension reduction technique for machine learning.",
"title": ""
},
{
"docid": "3ce021aa52dac518e1437d397c63bf68",
"text": "Malaria is a common and sometimes fatal disease caused by infection with Plasmodium parasites. Cerebral malaria (CM) is a most severe complication of infection with Plasmodium falciparum parasites which features a complex immunopathology that includes a prominent neuroinflammation. The experimental mouse model of cerebral malaria (ECM) induced by infection with Plasmodium berghei ANKA has been used abundantly to study the role of single genes, proteins and pathways in the pathogenesis of CM, including a possible contribution to neuroinflammation. In this review, we discuss the Plasmodium berghei ANKA infection model to study human CM, and we provide a summary of all host genetic effects (mapped loci, single genes) whose role in CM pathogenesis has been assessed in this model. Taken together, the reviewed studies document the many aspects of the immune system that are required for pathological inflammation in ECM, but also identify novel avenues for potential therapeutic intervention in CM and in diseases which feature neuroinflammation.",
"title": ""
}
] |
scidocsrr
|
4e9f71e76d2fbe40aa197e611385d8f4
|
Aggregating and Predicting Sequence Labels from Crowd Annotations
|
[
{
"docid": "db43034e91dbc74fc7db7f1fc02ccd7e",
"text": "We describe our experience using both Amazon Mechanical Turk (MTurk) and CrowdFlower to collect simple named entity annotations for Twitter status updates. Unlike most genres that have traditionally been the focus of named entity experiments, Twitter is far more informal and abbreviated. The collected annotations and annotation techniques will provide a first step towards the full study of named entity recognition in domains like Facebook and Twitter. We also briefly describe how to use MTurk to collect judgements on the quality of “word clouds.”",
"title": ""
}
] |
[
{
"docid": "9323c74e39a677c28d1c082b12e1f587",
"text": "Atmospheric conditions induced by suspended particles, such as fog and haze, severely degrade image quality. Restoring the true scene colors (clear day image) from a single image of a weather-degraded scene remains a challenging task due to the inherent ambiguity between scene albedo and depth. In this paper, we introduce a novel probabilistic method that fully leverages natural statistics of both the albedo and depth of the scene to resolve this ambiguity. Our key idea is to model the image with a factorial Markov random field in which the. scene albedo and depth are. two statistically independent latent layers. We. show that we may exploit natural image and depth statistics as priors on these hidden layers and factorize a single foggy image via a canonical Expectation Maximization algorithm with alternating minimization. Experimental results show that the proposed method achieves more accurate restoration compared to state-of-the-art methods that focus on only recovering scene albedo or depth individually.",
"title": ""
},
{
"docid": "7c0e4fc967e4a1a3aae97161fae29907",
"text": "A crucial step in adding structure to unstructured data is to identify references to entities and disambiguate them. Such disambiguated references can help enhance readability and draw similarities across different pieces of running text in an automated fashion. Previous research has tackled this problem by first forming a catalog of entities from a knowledge base, such as Wikipedia, and then using this catalog to disambiguate references in unseen text. However, most of the previously proposed models either do not use all text in the knowledge base, potentially missing out on discriminative features, or do not exploit word-entity proximity to learn high-quality catalogs. In this work, we propose topic models that keep track of the context of every word in the knowledge base; so that words appearing within the same context as an entity are more likely to be associated with that entity. Thus, our topic models utilize all text present in the knowledge base and help learn high-quality catalogs. Our models also learn groups of co-occurring entities thus enabling collective disambiguation. Unlike most previous topic models, our models are non-parametric and do not require the user to specify the exact number of groups present in the knowledge base. In experiments performed on an extract of Wikipedia containing almost 60,000 references, our models outperform SVM-based baselines by as much as 18% in terms of disambiguation accuracy translating to an increment of almost 11,000 correctly disambiguated references.",
"title": ""
},
{
"docid": "21daaa29b6ff00af028f3f794b0f04b7",
"text": "During the last years, we are experiencing the mushrooming and increased use of web tools enabling Internet users to both create and distribute content (multimedia information). These tools referred to as Web 2.0 technologies-applications can be considered as the tools of mass collaboration, since they empower Internet users to actively participate and simultaneously collaborate with other Internet users for producing, consuming and diffusing the information and knowledge being distributed through the Internet. In other words, Web 2.0 tools do nothing more than realising and exploiting the full potential of the genuine concept and role of the Internet (i.e. the network of the networks that is created and exists for its users). The content and information generated by users of Web 2.0 technologies are having a tremendous impact not only on the profile, expectations and decision making behaviour of Internet users, but also on e-business model that businesses need to develop and/or adapt. The tourism industry is not an exception from such developments. On the contrary, as information is the lifeblood of the tourism industry the use and diffusion of Web 2.0 technologies have a substantial impact of both tourism demand and supply. Indeed, many new types of tourism cyber-intermediaries have been created that are nowadays challenging the e-business model of existing cyberintermediaries that only few years ago have been threatening the existence of intermediaries!. In this vein, the purpose of this article is to analyse the major applications of Web 2.0 technologies in the tourism and hospitality industry by presenting their impact on both demand and supply.",
"title": ""
},
{
"docid": "364f9c36bef260cc938d04ff3b4f4c67",
"text": "We propose a scalable, efficient and accurate approach to retrieve 3D models for objects in the wild. Our contribution is twofold. We first present a 3D pose estimation approach for object categories which significantly outperforms the state-of-the-art on Pascal3D+. Second, we use the estimated pose as a prior to retrieve 3D models which accurately represent the geometry of objects in RGB images. For this purpose, we render depth images from 3D models under our predicted pose and match learned image descriptors of RGB images against those of rendered depth images using a CNN-based multi-view metric learning approach. In this way, we are the first to report quantitative results for 3D model retrieval on Pascal3D+, where our method chooses the same models as human annotators for 50% of the validation images on average. In addition, we show that our method, which was trained purely on Pascal3D+, retrieves rich and accurate 3D models from ShapeNet given RGB images of objects in the wild.",
"title": ""
},
{
"docid": "3898b7f3d55e96781c4c1dd3d72f1045",
"text": "In addition to trait EI, Cherniss identifies three other EI models whose main limitations must be succinctly mentioned, not least because they provided the impetus for the development of the trait EI model. Bar-On’s (1997) model is predicated on the problematic assumption that emotional intelligence (or ‘‘ability’’ or ‘‘competence’’ or ‘‘skill’’ or ‘‘potential’’—terms that appear to be used interchangeably in his writings) can be validly assessed through self-report questions of the type ‘‘It is easy for me to understand my emotions.’’ Psychometrically, as pointed out in Petrides and Furnham (2001), this is not a viable position because such self-report questions can only be tapping into self-perceptions rather than into abilities or competencies. This poses a fundamental threat to the validity of this model, far more serious than the pervasive faking problem noted by several authors (e.g., Grubb & McDaniel, 2008). Goleman’s (1995) model is difficult to evaluate scientifically because of its reliance on",
"title": ""
},
{
"docid": "10a6ba088f09fc02679532e8155d571c",
"text": "Now a day's internet is the most valuable source of learning, getting ideas, reviews for a product or a service. Everyday millions of reviews are generated in the internet about a product, person or a place. Because of their huge number and size it is very difficult to handle and understand such reviews. Sentiment analysis is such a research area which understands and extracts the opinion from the given review and the analysis process includes natural language processing (NLP), computational linguistics, text analytics and classifying the polarity of the opinion. In the field of sentiment analysis there are many algorithms exist to tackle NLP problems. Each algorithm is used by several applications. In this paper we have shown the taxonomy of various sentiment analysis methods. This paper also shows that Support vector machine (SVM) gives high accuracy compared to Naïve bayes and maximum entropy methods.",
"title": ""
},
{
"docid": "adf0a2cad66a7e48c16f02ef1bc4e9da",
"text": "Recently, several techniques have been explored to detect unusual behaviour in surveillance videos. Nevertheless, few studies leverage features from pre-trained CNNs and none of then present a comparison of features generate by different models. Motivated by this gap, we compare features extracted by four state-of-the-art image classification networks as a way of describing patches from security video frames. We carry out experiments on the Ped1 and Ped2 datasets and analyze the usage of different feature normalization techniques. Our results indicate that choosing the appropriate normalization is crucial to improve the anomaly detection performance when working with CNN features. Also, in the Ped2 dataset our approach was able to obtain results comparable to the ones of several state-of-the-art methods. Lastly, as our method only considers the appearance of each frame, we believe that it can be combined with approaches that focus on motion patterns to further improve performance.",
"title": ""
},
{
"docid": "881da6fd2d6c77d9f31ba6237c3d2526",
"text": "Pakistan is a developing country with more than half of its population located in rural areas. These areas neither have sufficient health care facilities nor a strong infrastructure that can address the health needs of the people. The expansion of Information and Communication Technology (ICT) around the globe has set up an unprecedented opportunity for delivery of healthcare facilities and infrastructure in these rural areas of Pakistan as well as in other developing countries. Mobile Health (mHealth)—the provision of health care services through mobile telephony—will revolutionize the way health care is delivered. From messaging campaigns to remote monitoring, mobile technology will impact every aspect of health systems. This paper highlights the growth of ICT sector and status of health care facilities in the developing countries, and explores prospects of mHealth as a transformer for health systems and service delivery especially in the remote rural areas.",
"title": ""
},
{
"docid": "2f649ca20a652ab96db6be136e2e90cc",
"text": "iii TABLE OF CONTENTS iv",
"title": ""
},
{
"docid": "52b5fa0494733f2f6b72df0cdfad01f4",
"text": "Requirements engineering encompasses many difficult, overarching problems inherent to its subareas of process, elicitation, specification, analysis, and validation. Requirements engineering researchers seek innovative, effective means of addressing these problems. One powerful tool that can be added to the researcher toolkit is that of machine learning. Some researchers have been experimenting with their own implementations of machine learning algorithms or with those available as part of the Weka machine learning software suite. There are some shortcomings to using “one off” solutions. It is the position of the authors that many problems exist in requirements engineering that can be supported by Weka's machine learning algorithms, specifically by classification trees. Further, the authors posit that adoption will be boosted if machine learning is easy to use and is integrated into requirements research tools, such as TraceLab. Toward that end, an initial concept validation of a component in TraceLab is presented that applies the Weka classification trees. The component is demonstrated on two different requirements engineering problems. Finally, insights gained on using the TraceLab Weka component on these two problems are offered.",
"title": ""
},
{
"docid": "d4c7efe10b1444d0f9cb6032856ba4e1",
"text": "This article provides a brief overview of several classes of fiber reinforced cement based composites and suggests future directions in FRC development. Special focus is placed on micromechanics based design methodology of strain-hardening cement based composites. As example, a particular engineered cementitious composite newly developed at the ACE-MRL at the University of Michigan is described in detail with regard to its design, material composition, processing, and mechanical properties. Three potential applications which utilize the unique properties of such composites are cited in this paper, and future research needs are identified. * To appear in Fiber Reinforced Concrete: Present and the Future, Eds: N. Banthia, A. Bentur, and A. Mufti, Canadian Society of Civil Engineers, 1997.",
"title": ""
},
{
"docid": "dfcc931d9cd7d084bbbcf400f44756a5",
"text": "In this paper we address the problem of aligning very long (often more than one hour) audio files to their corresponding textual transcripts in an effective manner. We present an efficient recursive technique to solve this problem that works well even on noisy speech signals. The key idea of this algorithm is to turn the forced alignment problem into a recursive speech recognition problem with a gradually restricting dictionary and language model. The algorithm is tolerant to acoustic noise and errors or gaps in the text transcript or audio tracks. We report experimental results on a 3 hour audio file containing TV and radio broadcasts. We will show accurate alignments on speech under a variety of real acoustic conditions such as speech over music and speech over telephone lines. We also report results when the same audio stream has been corrupted with white additive noise or compressed using a popular web encoding format such as RealAudio. This algorithm has been used in our internal multimedia indexing project. It has processed more than 200 hours of audio from varied sources, such as WGBH NOVA documentaries and NPR web audio files. The system aligns speech media content in about one to five times realtime, depending on the acoustic conditions of the audio signal.",
"title": ""
},
{
"docid": "68295a432f68900911ba29e5a6ca5e42",
"text": "In many forecasting applications, it is valuable to predict not only the value of a signal at a certain time point in the future, but also the values leading up to that point. This is especially true in clinical applications, where the future state of the patient can be less important than the patient's overall trajectory. This requires multi-step forecasting, a forecasting variant where one aims to predict multiple values in the future simultaneously. Standard methods to accomplish this can propagate error from prediction to prediction, reducing quality over the long term. In light of these challenges, we propose multi-output deep architectures for multi-step forecasting in which we explicitly model the distribution of future values of the signal over a prediction horizon. We apply these techniques to the challenging and clinically relevant task of blood glucose forecasting. Through a series of experiments on a real-world dataset consisting of 550K blood glucose measurements, we demonstrate the effectiveness of our proposed approaches in capturing the underlying signal dynamics. Compared to existing shallow and deep methods, we find that our proposed approaches improve performance individually and capture complementary information, leading to a large improvement over the baseline when combined (4.87 vs. 5.31 absolute percentage error (APE)). Overall, the results suggest the efficacy of our proposed approach in predicting blood glucose level and multi-step forecasting more generally.",
"title": ""
},
{
"docid": "9915a09a87126626633088cf4d6b9633",
"text": "This paper introduces ICET, a new algorithm for cost-sensitive classification. ICET uses a genetic algorithm to evolve a population of biases for a decision tree induction algorithm. The fitness function of the genetic algorithm is the average cost of classification when using the decision tree, including both the costs of tests (features, measurements) and the costs of classification errors. ICET is compared here with three other algorithms for cost-sensitive classification — EG2, CS-ID3, and IDX — and also with C4.5, which classifies without regard to cost. The five algorithms are evaluated empirically on five realworld medical datasets. Three sets of experiments are performed. The first set examines the baseline performance of the five algorithms on the five datasets and establishes that ICET performs significantly better than its competitors. The second set tests the robustness of ICET under a variety of conditions and shows that ICET maintains its advantage. The third set looks at ICET’s search in bias space and discovers a way to improve the search.",
"title": ""
},
{
"docid": "405acd07ad0d1b3b82ada19e85e23ce6",
"text": "Self-driving technology is advancing rapidly — albeit with significant challenges and limitations. This progress is largely due to recent developments in deep learning algorithms. To date, however, there has been no systematic comparison of how different deep learning architectures perform at such tasks, or an attempt to determine a correlation between classification performance and performance in an actual vehicle, a potentially critical factor in developing self-driving systems. Here, we introduce the first controlled comparison of multiple deep-learning architectures in an end-to-end autonomous driving task across multiple testing conditions. We used a simple and affordable platform consisting of an off-the-shelf, remotely operated vehicle, a GPU-equipped computer, and an indoor foamrubber racetrack. We compared performance, under identical driving conditions, across seven architectures including a fully-connected network, a simple 2 layer CNN, AlexNet, VGG-16, Inception-V3, ResNet, and an LSTM by assessing the number of laps each model was able to successfully complete without crashing while traversing an indoor racetrack. We compared performance across models when the conditions exactly matched those in training as well as when the local environment and track were configured differently and objects that were not included in the training dataset were placed on the track in various positions. In addition, we considered performance using several different data types for training and testing including single grayscale and color frames, and multiple grayscale frames stacked together in sequence. With the exception of a fully-connected network, all models performed reasonably well (around or above 80%) and most very well (∼95%) on at least one input type but with considerable variation across models and inputs. Overall, AlexNet, operating on single color frames as input, achieved the best level of performance (100% success rate in phase one and 55% in phase two) while VGG-16 performed well most consistently across image types. Performance with obstacles on the track and conditions that were different than those in training was much more variable than without objects and under conditions similar to those in the training set. Analysis of the model’s driving paths found greater consistency within vs. between models. Path similarity between models did not correlate strongly with success similarity. Our novel pixelflipping method allowed us to create a heatmap for each given image to observe what features of the image were weighted most heavily by the network when making its decision. Finally, we found that the variability across models in the driving task was not fully predicted by validation performance, indicating the presence of a ‘deployment gap’ between model training and performance in a simple, real-world task. Overall, these results demonstrate the need for increased field research in self-driving. 1Center for Complex Systems and Brain Sciences, Florida Atlantic University, 777 Glades Road, Boca Raton, FL 33431, USA 2College of Computer and Information Science, Northeastern University, 360 Huntington Ave, Boston, MA 02115, USA 3Department of Ocean and Mechanical Engineering, Florida Atlantic University, 777 Glades Road, Boca Raton, FL 33431, USA † mteti@fau.edu",
"title": ""
},
{
"docid": "681eb6ee0e4b31772612da151afbcd29",
"text": "Due to high directionality and small wavelengths, 60 GHz links are highly vulnerable to human blockage. To overcome blockage, 60 GHz radios can use a phased-array antenna to search for and switch to unblocked beam directions. However, these techniques are reactive, and only trigger after the blockage has occurred, and hence, they take time to recover the link. In this paper, we propose BeamSpy, that can instantaneously predict the quality of 60 GHz beams, even under blockage, without the costly beam searching. BeamSpy captures unique spatial and blockage-invariant correlation among beams through a novel prediction model, exploiting which we can immediately select the best alternative beam direction whenever the current beam’s quality degrades. We apply BeamSpy to a run-time fast beam adaptation protocol, and a blockage-risk assessment scheme that can guide blockage-resilient link deployment. Our experiments on a reconfigurable 60 GHz platform demonstrate the effectiveness of BeamSpy’s prediction framework, and its usefulness in enabling robust 60 GHz links.",
"title": ""
},
{
"docid": "0846274e111ccd0867466bbda93f06e6",
"text": "Encrypting Internet communications has been the subject of renewed focus in recent years. In order to add end-to-end encryption to legacy applications without losing the convenience of full-text search, ShadowCrypt and Mimesis Aegis use a new cryptographic technique called \"efficiently deployable efficiently searchable encryption\" (EDESE) that allows a standard full-text search system to perform searches on encrypted data. Compared to other recent techniques for searching on encrypted data, EDESE schemes leak a great deal of statistical information about the encrypted messages and the keywords they contain. Until now, the practical impact of this leakage has been difficult to quantify.\n In this paper, we show that the adversary's task of matching plaintext keywords to the opaque cryptographic identifiers used in EDESE can be reduced to the well-known combinatorial optimization problem of weighted graph matching (WGM). Using real email and chat data, we show how off-the-shelf WGM solvers can be used to accurately and efficiently recover hundreds of the most common plaintext keywords from a set of EDESE-encrypted messages. We show how to recover the tags from Bloom filters so that the WGM solver can be used with the set of encrypted messages that utilizes a Bloom filter to encode its search tags. We also show that the attack can be mitigated by carefully configuring Bloom filter parameters.",
"title": ""
},
{
"docid": "4c1c72fde3bbe25f6ff3c873a87b86ba",
"text": "The purpose of this study was to translate the Foot Function Index (FFI) into Italian, to perform a cross-cultural adaptation and to evaluate the psychometric properties of the Italian version of FFI. The Italian FFI was developed according to the recommended forward/backward translation protocol and evaluated in patients with foot and ankle diseases. Feasibility, reliability [intraclass correlation coefficient (ICC)], internal consistency [Cronbach’s alpha (CA)], construct validity (correlation with the SF-36 and a visual analogue scale (VAS) assessing for pain), responsiveness to surgery were assessed. The standardized effect size and standardized response mean were also evaluated. A total of 89 patients were recruited (mean age 51.8 ± 13.9 years, range 21–83). The Italian version of the FFI consisted in 18 items separated into a pain and disability subscales. CA value was 0.95 for both the subscales. The reproducibility was good with an ICC of 0.94 and 0.91 for pain and disability subscales, respectively. A strong correlation was found between the FFI and the scales of the SF-36 and the VAS with related content, particularly in the areas of physical function and pain was observed indicating good construct validity. After surgery, the mean FFI improved from 55.9 ± 24.8 to 32.4 ± 26.3 for the pain subscale and from 48.8 ± 28.8 to 24.9 ± 23.7 for the disability subscale (P < 0.01). The Italian version of the FFI showed satisfactory psychometric properties in Italian patients with foot and ankle diseases. Further testing in different and larger samples is required in order to ensure the validity and reliability of this score.",
"title": ""
},
{
"docid": "1b458ae11aa9a5e1a32454d771ce4d2e",
"text": "Inferring Structural Models of Travel Behavior: An Inverse Reinforcement Learning Approach",
"title": ""
},
{
"docid": "f38044818c401755809ba30d87374e3e",
"text": "AIM\nNeuroprotection trials for neonatal encephalopathy use moderate or severe disability as an outcome, with the Bayley Scales of Infant Development, Second Edition (Bayley-2) Index scores of <70 as part of the criteria. The Bayley Scales of Infant and Toddler, 3rd Development, Third Edition (Bayley-3) have superseded Bayley-2 and yield higher than expected scores in typically developing and high-risk infants. The aim of this study, therefore, was to compare Bayley-2 scores and Bayley-3 scores in term-born infants surviving neonatal encephalopathy treated with hypothermia.\n\n\nMETHOD\nSixty-one term-born infants (37 males, 24 females; median gestational age at birth 40 wks, range 36-42 wks; median birthweight 3280 g, range 2295-5050) following neonatal encephalopathy and hypothermia had contemporaneous assessment at 18 months using the Bayley-2 and Bayley-3.\n\n\nRESULTS\nThe median Bayley-3 Cognitive Composite score was 7 points higher than the median Bayley-2 Mental Developmental Index (MDI) score and the median Bayley-3 Motor Composite score was 18 points higher than the median Bayley-2 Psychomotor Developmental Index (PDI) score. Ten children had a Bayley-2 MDI of <70; only three children had Bayley-3 combined Cognitive/Language scores of <70. Eleven children had Bayley-2 PDI scores of <70 and four had modified Bayley-3 Motor Composite scores of <70. Applying regression equations to Bayley-3 scores adjusted rates of severe delay to similar proportions found using Bayley-2 scores.\n\n\nINTERPRETATION\nFewer children were classified with severe delay using the Bayley-3 than the Bayley-2, which prohibits direct comparison of scores. Increased Bayley-3 cut-off thresholds for classifying severe disability are recommended when comparing studies in this clinical group using Bayley-2 scores.",
"title": ""
}
] |
scidocsrr
|
8c58ca781d2b58f59f1cc48311396108
|
Scalable Kernel TCP Design and Implementation for Short-Lived Connections
|
[
{
"docid": "baa59c53346e16f4c55b6fef20f19a89",
"text": "Incoming and outgoing processing for a given TCP connection often execute on different cores: an incoming packet is typically processed on the core that receives the interrupt, while outgoing data processing occurs on the core running the relevant user code. As a result, accesses to read/write connection state (such as TCP control blocks) often involve cache invalidations and data movement between cores' caches. These can take hundreds of processor cycles, enough to significantly reduce performance.\n We present a new design, called Affinity-Accept, that causes all processing for a given TCP connection to occur on the same core. Affinity-Accept arranges for the network interface to determine the core on which application processing for each new connection occurs, in a lightweight way; it adjusts the card's choices only in response to imbalances in CPU scheduling. Measurements show that for the Apache web server serving static files on a 48-core AMD system, Affinity-Accept reduces time spent in the TCP stack by 30% and improves overall throughput by 24%.",
"title": ""
},
{
"docid": "b7222f86da6f1e44bd1dca88eb59dc4b",
"text": "A virtualized system includes a new layer of software, the virtual machine monitor. The VMM's principal role is to arbitrate accesses to the underlying physical host platform's resources so that multiple operating systems (which are guests of the VMM) can share them. The VMM presents to each guest OS a set of virtual platform interfaces that constitute a virtual machine (VM). Once confined to specialized, proprietary, high-end server and mainframe systems, virtualization is now becoming more broadly available and is supported in off-the-shelf systems based on Intel architecture (IA) hardware. This development is due in part to the steady performance improvements of IA-based systems, which mitigates traditional virtualization performance overheads. Intel virtualization technology provides hardware support for processor virtualization, enabling simplifications of virtual machine monitor software. Resulting VMMs can support a wider range of legacy and future operating systems while maintaining high performance.",
"title": ""
},
{
"docid": "bb8604e0446fd1d3b01f426a8aa8c7e5",
"text": "Commodity computer systems contain more and more processor cores and exhibit increasingly diverse architectural tradeoffs, including memory hierarchies, interconnects, instruction sets and variants, and IO configurations. Previous high-performance computing systems have scaled in specific cases, but the dynamic nature of modern client and server workloads, coupled with the impossibility of statically optimizing an OS for all workloads and hardware variants pose serious challenges for operating system structures.\n We argue that the challenge of future multicore hardware is best met by embracing the networked nature of the machine, rethinking OS architecture using ideas from distributed systems. We investigate a new OS structure, the multikernel, that treats the machine as a network of independent cores, assumes no inter-core sharing at the lowest level, and moves traditional OS functionality to a distributed system of processes that communicate via message-passing.\n We have implemented a multikernel OS to show that the approach is promising, and we describe how traditional scalability problems for operating systems (such as memory management) can be effectively recast using messages and can exploit insights from distributed systems and networking. An evaluation of our prototype on multicore systems shows that, even on present-day machines, the performance of a multikernel is comparable with a conventional OS, and can scale better to support future hardware.",
"title": ""
}
] |
[
{
"docid": "faf25bfda6d078195b15f5a36a32673a",
"text": "In high performance VLSI circuits, the power consumption is mainly related to signal transition, charging and discharging of parasitic capacitance in transistor during switching activity. Adiabatic switching is a reversible logic to conserve energy instead of dissipating power reuses it. In this paper, low power multipliers and compressor are designed using adiabatic logic. Compressors are the basic components in many applications like partial product summation in multipliers. The Vedic multiplier is designed using the compressor and the power result is analysed. The designs are implemented and the power results are obtained using TANNER EDA 12.0 tool. This paper presents a novel scheme for analysis of low power multipliers using adiabatic logic in inverter and in the compressor. The scheme is optimized for low power as well as high speed implementation over reported scheme. K e y w o r d s : A d i a b a t i c l o g i c , C o m p r e s s o r , M u l t i p l i e r s .",
"title": ""
},
{
"docid": "4a518f4cdb34f7cff1d75975b207afe4",
"text": "In this paper, the design and measurement results of a highly efficient 1-Watt broadband class J SiGe power amplifier (PA) at 700 MHz are reported. Comparisons between a class J PA and a traditional class AB/B PA have been made, first through theoretical analysis in terms of load network, efficiency and bandwidth behavior, and secondly by bench measurement data. A single-ended power cell is designed and fabricated in the 0.35 μm IBM 5PAe SiGe BiCMOS technology with through-wafer-vias (TWVs). Watt-level output power with greater than 50% efficiency is achieved on bench across a wide bandwidth of 500 MHz to 900 MHz for the class J PA (i.e., >;57% bandwidth at the center frequency of 700 MHz). Psat of 30.9 dBm with 62% collector efficiency (CE) at 700 MHz is measured while the highest efficiency of 68.9% occurs at 650 MHz using a 4.2 V supply. Load network of this class J PA is realized with lumped passive components on a FR4 printed circuit board (PCB). A narrow-band class AB PA counterpart is also designed and fabricated for comparison. The data suggests that the broadband class J SiGe PA can be promising for future multi-band wireless applications.",
"title": ""
},
{
"docid": "e1ba35e1558540c1b99abf1e05e927fc",
"text": "Device-to-device (D2D) communication underlaying cellular networks brings significant benefits to resource utilization, improving user's throughput and extending battery life of user equipments. However, the allocation of radio resources and power to D2D communication needs elaborate coordination, as D2D communication causes interference to cellular networks. In this paper, we propose a novel joint radio resource and power allocation scheme to improve the performance of the system in the uplink period. Energy efficiency is considered as our optimization objective since devices are handheld equipments with limited battery life. We formulate the the allocation problem as a reverse iterative combinatorial auction game. In the auction, radio resources occupied by cellular users are considered as bidders competing for D2D packages and their corresponding transmit power. We propose an algorithm to solve the allocation problem as an auction game. We also perform numerical simulations to prove the efficacy of the proposed algorithm.",
"title": ""
},
{
"docid": "ed97b6815085d2664c6548abcf68a767",
"text": "Good mental health literacy in young people and their key helpers may lead to better outcomes for those with mental disorders, either by facilitating early help-seeking by young people themselves, or by helping adults to identify early signs of mental disorders and seek help on their behalf. Few interventions to improve mental health literacy of young people and their helpers have been evaluated, and even fewer have been well evaluated. There are four categories of interventions to improve mental health literacy: whole-of-community campaigns; community campaigns aimed at a youth audience; school-based interventions teaching help-seeking skills, mental health literacy, or resilience; and programs training individuals to better intervene in a mental health crisis. The effectiveness of future interventions could be enhanced by using specific health promotion models to guide their development.",
"title": ""
},
{
"docid": "12fa7a50132468598cf20ac79f51b540",
"text": "As medical organizations modernize their operations, they are increasingly adopting electronic health records (EHRs) and deploying new health information technology systems that create, gather, and manage their information. As a result, the amount of data available to clinicians, administrators, and researchers in the healthcare system continues to grow at an unprecedented rate. However, despite the substantial evidence showing the benefits of EHR adoption, e-prescriptions, and other components of health information exchanges, healthcare providers often report only modest improvements in their ability to make better decisions by using more comprehensive clinical information. The large volume of clinical data now being captured for each patient poses many challenges to (a) clinicians trying to combine data from different disparate systems and make sense of the patient’s condition within the context of the patient’s medical history, (b) administrators trying to make decisions grounded in data, (c) researchers trying to understand differences in population outcomes, and (d) patients trying to make use of their own medical data. In fact, despite the many hopes that access to more information would lead to more informed decisions, access to comprehensive and large-scale clinical data resources has instead made some analytical processes even more difficult. Visual analytics is an emerging discipline that has shown significant promise in addressing many of these information overload challenges. Visual analytics is the science of analytical reasoning facilitated by advanced interactive visual interfaces. In order to facilitate reasoning over, and interpretation of, complex data, visual analytics techniques combine concepts from data mining, machine learning, human computing interaction, and human cognition. As the volume of healthrelated data continues to grow at unprecedented rates and new information systems are deployed to those already overrun with too much data, there is a need for exploring how visual analytics methods can be used to avoid information overload. Information overload is the problem that arises when individuals try to analyze a number of variables that surpass the limits of human cognition. Information overload often leads to users ignoring, overlooking, or misinterpreting crucial information. The information overload problem is widespread in the healthcare domain and can result in incorrect interpretations of data, wrong diagnoses, and missed warning signs of impending changes to patient conditions. The multi-modal and heterogeneous properties of EHR data together with the frequency of redundant, irrelevant, and subjective measures pose significant challenges to users trying to synthesize the information and obtain actionable insights. Yet despite these challenges, the promise of big data in healthcare remains. There is a critical need to support research and pilot projects to study effective ways of using visual analytics to support the analysis of large amounts of medical data. Currently new interactive interfaces are being developed to unlock the value of large-scale clinical databases for a wide variety of different tasks. For instance, visual analytics could help provide clinicians with more effective ways to combine the longitudinal clinical data with the patient-generated health data to better understand patient progression. Patients could be supported in understanding personalized wellness plans and comparing their health measurements against similar patients. 
Researchers could use visual analytics tools to help perform population-based analysis and obtain insights from large amounts of clinical data. Hospital administrators could use visual analytics to better understand the productivity of an organization, gaps in care, outcomes measurements, and patient satisfaction. Visual analytics systems—by combining advanced interactive visualization methods with statistical inference and correlation models—have the potential to support intuitive analysis for all of these user populations while masking the underlying complexity of the data. This special focus issue of JAMIA is dedicated to new research, applications, case studies, and approaches that use visual analytics to support the analysis of complex clinical data.",
"title": ""
},
{
"docid": "dc8143e1aee228db14347dc1094a7df6",
"text": "In this paper, we propose a novel large-scale, context-aware recommender system that provides accurate recommendations, scalability to a large number of diverse users and items, differential services, and does not suffer from “cold start” problems. Our proposed recommendation system relies on a novel algorithm which learns online the item preferences of users based on their click behavior, and constructs online item-cluster trees. The recommendations are then made by choosing an item-cluster level and then selecting an item within that cluster as a recommendation for the user. This approach is able to significantly improve the learning speed when the number of users and items is large, while still providing high recommendation accuracy. Each time a user arrives at the website, the system makes a recommendation based on the estimations of item payoffs by exploiting past context arrivals in a neighborhood of the current user's context. It exploits the similarity of contexts to learn how to make better recommendations even when the number and diversity of users and items is large. This also addresses the cold start problem by using the information gained from similar users and items to make recommendations for new users and items. We theoretically prove that the proposed algorithm for item recommendations converges to the optimal item recommendations in the long-run. We also bound the probability of making a suboptimal item recommendation for each user arriving to the system while the system is learning. Experimental results show that our approach outperforms the state-of-the-art algorithms by over 20 percent in terms of click through rates.",
"title": ""
},
{
"docid": "339bfb7f54ce8202de1a4079097a6f8d",
"text": "This article reviews research from published studies on the association between nutrition among school-aged children and their performance in school and on tests of cognitive functioning. Each reviewed article is accompanied by a brief description of its research methodology and outcomes. Articles are separated into 4 categories: food insufficiency, iron deficiency and supplementation, deficiency and supplementation of micronutrients, and the importance of breakfast. Research shows that children with iron deficiencies sufficient to cause anemia are at a disadvantage academically. Their cognitive performance seems to improve with iron therapy. A similar association and improvement with therapy is not found with either zinc or iodine deficiency, according to the reviewed articles. There is no evidence that population-wide vitamin and mineral supplementation will lead to improved academic performance. Food insufficiency is a serious problem affecting children’s ability to learn, but its relevance to US populations needs to be better understood. Research indicates that school breakfast programs seem to improve attendance rates and decrease tardiness. Among severely undernourished populations, school breakfast programs seem to improve academic performance and cognitive functioning. (J Sch Health. 2005;75(6):199-213) Parents, educators, and health professionals have long touted the association between what our children eat and their school performance. Evidence for this correlation is not always apparent, and biases on both sides of the argument sometimes override data when this topic is discussed. Understanding existing evidence linking students’ dietary intake and their ability to learn is a logical first step in developing school food service programs, policies, and curricula on nutrition and in guiding parents of school-aged children. The National Coordinating Committee on School Health and Safety (NCCSHS) comprises representatives of several federal departments and nongovernmental organizations working to develop and enhance coordinated school health programs. The NCCSHS has undertaken a project to enhance awareness of evidence linking child health and school performance and identifying gaps in our knowledge. NCCSHS has conducted a search of peerreviewed, published research reporting on the relationship between students’ health and academic performance. In addition to nutrition, NCCSHS has sponsored research reviews of the association between academic performance and asthma, diabetes, sickle cell anemia, sleep, obesity, and physical activity. SELECTION OF ARTICLES Articles meeting the following specific characteristics were selected. (1) Subjects were school-aged children (5 to 18 years), (2) article was published after 1980 in a peerreviewed journal, and (3) findings included at least 1 of the following outcome measures: school attendance, academic achievement, a measure of cognitive ability (such as general intelligence, memory), and attention. Students’ level of attention was only acceptable as an outcome measure for purposes of inclusion in this review, if attention was measured objectively in the school environment. Studies of the impact of nutritional intake in children prior to school age were not included. Studies were identified using MedLine and similar Internet-based searches. If a full article could not be retrieved, but a detailed abstract was available, the research was included. 
Outcomes other than academic achievement, attendance, and cognitive ability, although considered major by the authors, may not be described at all or are only briefly alluded to in the tables of research descriptions.",
"title": ""
},
{
"docid": "fc6382579f90ffbc2e54498ad2034d3b",
"text": "Features extracted by deep networks have been popular in many visual search tasks. This article studies deep network structures and training schemes for mobile visual search. The goal is to learn an effective yet portable feature representation that is suitable for bridging the domain gap between mobile user photos and (mostly) professionally taken product images while keeping the computational cost acceptable for mobile-based applications. The technical contributions are twofold. First, we propose an alternative of the contrastive loss popularly used for training deep Siamese networks, namely robust contrastive loss, where we relax the penalty on some positive and negative pairs to alleviate overfitting. Second, a simple multitask fine-tuning scheme is leveraged to train the network, which not only utilizes knowledge from the provided training photo pairs but also harnesses additional information from the large ImageNet dataset to regularize the fine-tuning process. Extensive experiments on challenging real-world datasets demonstrate that both the robust contrastive loss and the multitask fine-tuning scheme are effective, leading to very promising results with a time cost suitable for mobile product search scenarios.",
"title": ""
},
{
"docid": "b49925f5380f695ccc3f9a150030051c",
"text": "Understanding the behaviour of algorithms is a key element of computer science. However, this learning objective is not always easy to achieve, as the behaviour of some algorithms is complicated or not readily observable, or affected by the values of their input parameters. To assist students in learning the multilevel feedback queue scheduling algorithm (MLFQ), we designed and developed an interactive visualization tool, Marble MLFQ, that illustrates how the algorithm works under various conditions. The tool is intended to supplement course material and instructions in an undergraduate operating systems course. The main features of Marble MLFQ are threefold: (1) It animates the steps of the scheduling algorithm graphically to allow users to observe its behaviour; (2) It provides a series of lessons to help users understand various aspects of the algorithm; and (3) It enables users to customize input values to the algorithm to support exploratory learning.",
"title": ""
},
{
"docid": "4f1070b988605290c1588918a716cef2",
"text": "The aim of this paper was to predict the static bending modulus of elasticity (MOES) and modulus of rupture (MOR) of Scots pine (Pinus sylvestris L.) wood using three nondestructive techniques. The mean values of the dynamic modulus of elasticity based on flexural vibration (MOEF), longitudinal vibration (MOELV), and indirect ultrasonic (MOEUS) were 13.8, 22.3, and 30.9 % higher than the static modulus of elasticity (MOES), respectively. The reduction of this difference, taking into account the shear deflection effect in the output values for static bending modulus of elasticity, was also discussed in this study. The three dynamic moduli of elasticity correlated well with the static MOES and MOR; correlation coefficients ranged between 0.68 and 0.96. The correlation coefficients between the dynamic moduli and MOES were higher than those between the dynamic moduli and MOR. The highest correlation between the dynamic moduli and static bending properties was obtained by the flexural vibration technique in comparison with longitudinal vibration and indirect ultrasonic techniques. Results showed that there was no obvious relationship between the density and the acoustic wave velocity that was obtained from the longitudinal vibration and ultrasonic techniques.",
"title": ""
},
{
"docid": "d19a77b3835b7b43acf57da377b11cb4",
"text": "Given the importance of relation or event extraction from biomedical research publications to support knowledge capture and synthesis, and the strong dependency of approaches to this information extraction task on syntactic information, it is valuable to understand which approaches to syntactic processing of biomedical text have the highest performance. We perform an empirical study comparing state-of-the-art traditional feature-based and neural network-based models for two core natural language processing tasks of part-of-speech (POS) tagging and dependency parsing on two benchmark biomedical corpora, GENIA and CRAFT. To the best of our knowledge, there is no recent work making such comparisons in the biomedical context; specifically no detailed analysis of neural models on this data is available. Experimental results show that in general, the neural models outperform the feature-based models on two benchmark biomedical corpora GENIA and CRAFT. We also perform a task-oriented evaluation to investigate the influences of these models in a downstream application on biomedical event extraction, and show that better intrinsic parsing performance does not always imply better extrinsic event extraction performance. We have presented a detailed empirical study comparing traditional feature-based and neural network-based models for POS tagging and dependency parsing in the biomedical context, and also investigated the influence of parser selection for a biomedical event extraction downstream task. We make the retrained models available at https://github.com/datquocnguyen/BioPosDep.",
"title": ""
},
{
"docid": "69f95ac2ca7b32677151de88b9d95d4c",
"text": "Gunaratna, Kalpa. PhD, Department of Computer Science and Engineering, Wright State University, 2017. Semantics-based Summarization of Entities in Knowledge Graphs. The processing of structured and semi-structured content on the Web has been gaining attention with the rapid progress in the Linking Open Data project and the development of commercial knowledge graphs. Knowledge graphs capture domain-specific or encyclopedic knowledge in the form of a data layer and add rich and explicit semantics on top of the data layer to infer additional knowledge. The data layer of a knowledge graph represents entities and their descriptions. The semantic layer on top of the data layer is called the schema (ontology), where relationships of the entity descriptions, their classes, and the hierarchy of the relationships and classes are defined. Today, there exist large knowledge graphs in the research community (e.g., encyclopedic datasets like DBpedia and Yago) and corporate world (e.g., Google knowledge graph) that encapsulate a large amount of knowledge for human and machine consumption. Typically, they consist of millions of entities and billions of facts describing these entities. While it is good to have this much knowledge available on the Web for consumption, it leads to information overload, and hence proper summarization (and presentation) techniques need to be explored. In this dissertation, we focus on creating both comprehensive and concise entity summaries at: (i) the single entity level and (ii) the multiple entity level. To summarize a single entity, we propose a novel approach called FACeted Entity Summarization (FACES) that considers importance, which is computed by combining popularity and uniqueness, and diversity of facts getting selected for the summary. We first conceptually group facts using semantic expansion and hierarchical incremental clustering techniques and form facets (i.e., groupings) that go beyond syntactic similarity. Then we rank both the facts and facets using Information Retrieval (IR) ranking techniques to pick the",
"title": ""
},
{
"docid": "7f110e4769b996de13afe63962bcf2d2",
"text": "Versu is a text-based simulationist interactive drama. Because it uses autonomous agents, the drama is highly replayable: you can play the same story from multiple perspectives, or assign different characters to the various roles. The architecture relies on the notion of a social practice to achieve coordination between the independent autonomous agents. A social practice describes a recurring social situation, and is a successor to the Schankian script. Social practices are implemented as reactive joint plans, providing affordances to the agents who participate in them. The practices never control the agents directly; they merely provide suggestions. It is always the individual agent who decides what to do, using utility-based reactive action selection.",
"title": ""
},
{
"docid": "ecdeb5b8665661c55d91b782dd8fb3a7",
"text": "We present a classifier-based parser that produces constituent trees in linear time. The parser uses a basic bottom-up shiftreduce algorithm, but employs a classifier to determine parser actions instead of a grammar. This can be seen as an extension of the deterministic dependency parser of Nivre and Scholz (2004) to full constituent parsing. We show that, with an appropriate feature set used in classification, a very simple one-path greedy parser can perform at the same level of accuracy as more complex parsers. We evaluate our parser on section 23 of the WSJ section of the Penn Treebank, and obtain precision and recall of 87.54% and 87.61%, respectively.",
"title": ""
},
{
"docid": "0c842ef34f1924e899e408309f306640",
"text": "A single-tube 5' nuclease multiplex PCR assay was developed on the ABI 7700 Sequence Detection System (TaqMan) for the detection of Neisseria meningitidis, Haemophilus influenzae, and Streptococcus pneumoniae from clinical samples of cerebrospinal fluid (CSF), plasma, serum, and whole blood. Capsular transport (ctrA), capsulation (bexA), and pneumolysin (ply) gene targets specific for N. meningitidis, H. influenzae, and S. pneumoniae, respectively, were selected. Using sequence-specific fluorescent-dye-labeled probes and continuous real-time monitoring, accumulation of amplified product was measured. Sensitivity was assessed using clinical samples (CSF, serum, plasma, and whole blood) from culture-confirmed cases for the three organisms. The respective sensitivities (as percentages) for N. meningitidis, H. influenzae, and S. pneumoniae were 88.4, 100, and 91.8. The primer sets were 100% specific for the selected culture isolates. The ctrA primers amplified meningococcal serogroups A, B, C, 29E, W135, X, Y, and Z; the ply primers amplified pneumococcal serotypes 1, 2, 3, 4, 5, 6, 7, 8, 9, 10A, 11A, 12, 14, 15B, 17F, 18C, 19, 20, 22, 23, 24, 31, and 33; and the bexA primers amplified H. influenzae types b and c. Coamplification of two target genes without a loss of sensitivity was demonstrated. The multiplex assay was then used to test a large number (n = 4,113) of culture-negative samples for the three pathogens. Cases of meningococcal, H. influenzae, and pneumococcal disease that had not previously been confirmed by culture were identified with this assay. The ctrA primer set used in the multiplex PCR was found to be more sensitive (P < 0.0001) than the ctrA primers that had been used for meningococcal PCR testing at that time.",
"title": ""
},
{
"docid": "d87eeaac97b868b83e52f0154ff56071",
"text": "This paper presents a new algorithm, termed <italic>truncated amplitude flow</italic> (TAF), to recover an unknown vector <inline-formula> <tex-math notation=\"LaTeX\">$ {x}$ </tex-math></inline-formula> from a system of quadratic equations of the form <inline-formula> <tex-math notation=\"LaTeX\">$y_{i}=|\\langle {a}_{i}, {x}\\rangle |^{2}$ </tex-math></inline-formula>, where <inline-formula> <tex-math notation=\"LaTeX\">$ {a}_{i}$ </tex-math></inline-formula>’s are given random measurement vectors. This problem is known to be <italic>NP-hard</italic> in general. We prove that as soon as the number of equations is on the order of the number of unknowns, TAF recovers the solution exactly (up to a global unimodular constant) with high probability and complexity growing linearly with both the number of unknowns and the number of equations. Our TAF approach adapts the <italic>amplitude-based</italic> empirical loss function and proceeds in two stages. In the first stage, we introduce an <italic>orthogonality-promoting</italic> initialization that can be obtained with a few power iterations. Stage two refines the initial estimate by successive updates of scalable <italic>truncated generalized gradient iterations</italic>, which are able to handle the rather challenging nonconvex and nonsmooth amplitude-based objective function. In particular, when vectors <inline-formula> <tex-math notation=\"LaTeX\">$ {x}$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">${a}_{i}$ </tex-math></inline-formula>’s are real valued, our gradient truncation rule provably eliminates erroneously estimated signs with high probability to markedly improve upon its untruncated version. Numerical tests using synthetic data and real images demonstrate that our initialization returns more accurate and robust estimates relative to spectral initializations. Furthermore, even under the same initialization, the proposed amplitude-based refinement outperforms existing Wirtinger flow variants, corroborating the superior performance of TAF over state-of-the-art algorithms.",
"title": ""
},
{
"docid": "74497cbcf698a821e755b93ba5d8bb7a",
"text": "The integration of different learning and adaptation techniques to overcome individual limitations and to achieve synergetic effects through the hybridization or fusion of these techniques has, in recent years, contributed to a large number of new intelligent system designs. Computational intelligence is an innovative framework for constructing intelligent hybrid architectures involving Neural Networks (NN), Fuzzy Inference Systems (FIS), Probabilistic Reasoning (PR) and derivative free optimization techniques such as Evolutionary Computation (EC). Most of these hybridization approaches, however, follow an ad hoc design methodology, justified by success in certain application domains. Due to the lack of a common framework it often remains difficult to compare the various hybrid systems conceptually and to evaluate their performance comparatively. This chapter introduces the different generic architectures for integrating intelligent systems. The designing aspects and perspectives of different hybrid archirectures like NN-FIS, EC-FIS, EC-NN, FIS-PR and NN-FIS-EC systems are presented. Some conclusions are also provided towards the end.",
"title": ""
},
{
"docid": "4c8ac629f8a7faaa315e4e4441eb630c",
"text": "This article reviews the cognitive therapy of depression. The psychotherapy based on this theory consists of behavioral and verbal techniques to change cognitions, beliefs, and errors in logic in the patient's thinking. A few of the various techniques are described and a case example is provided. Finally, the outcome studies testing the efficacy of this approach are reviewed.",
"title": ""
},
{
"docid": "da56b994c91051847a05a5ffb69c78f0",
"text": "We define CWS, a non-preemptive scheduling policy for workloads with correlated job sizes. CWS tackles the scheduling problem by inferring the expected sizes of upcoming jobs based on the structure of correlations and on the outcome of past scheduling decisions. Size prediction is achieved using a class of Hidden Markov Models (HMM) with continuous observation densities that describe job sizes. We show how the forward-backward algorithm of HMMs applies effectively in scheduling applications and how it can be used to derive closed-form expressions for size prediction. This is particularly simple to implement in the case of observation densities that are phase-type (PH-type) distributed, where existing fitting methods for Markovian point processes may also simplify the parameterization of the HMM workload model.\n Based on the job size predictions, CWS emulates size-based policies which favor short jobs, with accuracy depending mainly on the HMM used to parametrize the scheduling algorithm. Extensive simulation and analysis illustrate that CWS is competitive with policies that assume exact information about the workload.",
"title": ""
},
{
"docid": "638265455e769ee474106f26fceb6c19",
"text": "This paper considers a novel implementation scheme for fixed priority (FP) uniprocessor scheduling of mixed criticality systems. The scheme requires that jobs have their execution times monitored. If system behavior inconsistent with lower criticality levels is detected during run-time via such monitoring, (i) tasks of lower criticalities are discarded (this is already done by current FP mixed-criticality scheduling algorithms); and (ii) the priorities of the remaining tasks may be re-ordered. Evaluations illustrate the benefits of this scheme.",
"title": ""
}
] |
scidocsrr
|
f71fbccca7f7cca0a0e87fce5e1e9f92
|
Generative Adversarial Privacy
|
[
{
"docid": "e083b5fdf76bab5cdc8fcafc77db23f7",
"text": "Working under a model of privacy in which data remains private even from the statistician, we study the tradeoff between privacy guarantees and the risk of the resulting statistical estimators. We develop private versions of classical information-theoretic bounds, in particular those due to Le Cam, Fano, and Assouad. These inequalities allow for a precise characterization of statistical rates under local privacy constraints and the development of provably (minimax) optimal estimation procedures. We provide a treatment of several canonical families of problems: mean estimation and median estimation, multinomial probability estimation, and nonparametric density estimation. For all of these families, we provide lower and upper bounds that match up to constant factors, and exhibit new (optimal) privacy-preserving mechanisms and computationally efficient estimators that achieve the bounds. Additionally, we present a variety of experimental results for estimation problems involving sensitive data, including salaries, censored blog posts and articles, and drug abuse; these experiments demonstrate the importance of deriving optimal procedures.",
"title": ""
},
{
"docid": "5c716fbdc209d5d9f703af1e88f0d088",
"text": "Protecting visual secrets is an important problem due to the prevalence of cameras that continuously monitor our surroundings. Any viable solution to this problem should also minimize the impact on the utility of applications that use images. In this work, we build on the existing work of adversarial learning to design a perturbation mechanism that jointly optimizes privacy and utility objectives. We provide a feasibility study of the proposed mechanism and present ideas on developing a privacy framework based on the adversarial perturbation mechanism.",
"title": ""
}
] |
[
{
"docid": "6b8281957b0fd7e9ff88f64b8b6462aa",
"text": "As Critical National Infrastructures are becoming more vulnerable to cyber attacks, their protection becomes a significant issue for any organization as well as a nation. Moreover, the ability to attribute is a vital element of avoiding impunity in cyberspace. In this article, we present main threats to critical infrastructures along with protective measures that one nation can take, and which are classified according to legal, technical, organizational, capacity building, and cooperation aspects. Finally we provide an overview of current methods and practices regarding cyber attribution and cyber peace keeping.",
"title": ""
},
{
"docid": "791f440add573b1c35daca1d6eb7bcf4",
"text": "PURPOSE\nNivolumab, a programmed death-1 (PD-1) immune checkpoint inhibitor antibody, has demonstrated improved survival over docetaxel in previously treated advanced non-small-cell lung cancer (NSCLC). First-line monotherapy with nivolumab for advanced NSCLC was evaluated in the phase I, multicohort, Checkmate 012 trial.\n\n\nMETHODS\nFifty-two patients received nivolumab 3 mg/kg intravenously every 2 weeks until progression or unacceptable toxicity; postprogression treatment was permitted per protocol. The primary objective was to assess safety; secondary objectives included objective response rate (ORR) and 24-week progression-free survival (PFS) rate; overall survival (OS) was an exploratory end point.\n\n\nRESULTS\nAny-grade treatment-related adverse events (AEs) occurred in 71% of patients, most commonly: fatigue (29%), rash (19%), nausea (14%), diarrhea (12%), pruritus (12%), and arthralgia (10%). Ten patients (19%) reported grade 3 to 4 treatment-related AEs; grade 3 rash was the only grade 3 to 4 event occurring in more than one patient (n = 2; 4%). Six patients (12%) discontinued because of a treatment-related AE. The confirmed ORR was 23% (12 of 52), including four ongoing complete responses. Nine of 12 responses (75%) occurred by first tumor assessment (week 11); eight (67%) were ongoing (range, 5.3+ to 25.8+ months) at the time of data lock. ORR was 28% (nine of 32) in patients with any degree of tumor PD-ligand 1 expression and 14% (two of 14) in patients with no PD-ligand 1 expression. Median PFS was 3.6 months, and the 24-week PFS rate was 41% (95% CI, 27 to 54). Median OS was 19.4 months, and the 1-year and 18-month OS rates were 73% (95% CI, 59 to 83) and 57% (95% CI, 42 to 70), respectively.\n\n\nCONCLUSION\nFirst-line nivolumab monotherapy demonstrated a tolerable safety profile and durable responses in first-line advanced NSCLC.",
"title": ""
},
{
"docid": "0ae0e78ac068d8bc27d575d90293c27b",
"text": "Deep web refers to the hidden part of the Web that remains unavailable for standard Web crawlers. To obtain content of Deep Web is challenging and has been acknowledged as a significant gap in the coverage of search engines. To this end, the paper proposes a novel deep web crawling framework based on reinforcement learning, in which the crawler is regarded as an agent and deep web database as the environment. The agent perceives its current state and selects an action (query) to submit to the environment according to Q-value. The framework not only enables crawlers to learn a promising crawling strategy from its own experience, but also allows for utilizing diverse features of query keywords. Experimental results show that the method outperforms the state of art methods in terms of crawling capability and breaks through the assumption of full-text search implied by existing methods.",
"title": ""
},
{
"docid": "d8acda345bbcb1ef25e3ee9934dd12a2",
"text": "This chapter looks into the key infrastructure factors affecting the success of small companies in developing economies that are establishing B2B ecommerce ventures by aggregating critical success factors from general ecommerce studies and studies from e-commerce in developing countries. The factors were identified through a literature review and case studies of two organizations. The results of the pilot study and literature review reveal five groups of success factors that contribute to the success of B2B e-commerce. These factors were later assessed for importance using a survey. The outcome of our analysis reveals a reduced list of key critical success factors that SMEs should emphasize as well as a couple of key policy implications for governments in developing countries. This chapter appears in the book, e-Business, e-Government & Small and Medium-Sized Enterprises: Opportunities and Challenges, edited by Brian J. Corbitt and Nabeel Al-Qirim. Copyright © 2004, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited. 701 E. Chocolate Avenue, Suite 200, Hershey PA 17033-1240, USA Tel: 717/533-8845; Fax 717/533-8661; URL-http://www.idea-group.com IDEA GROUP PUBLISHING 186 Jennex, Amoroso and Adelakun Copyright © 2004, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited. INTRODUCTION Information and Communication Technology (ICT) can provide a small enterprise an opportunity to conduct business anywhere. Use of the Internet allows small businesses to project virtual storefronts to the world as well as conduct business with other organizations. Heeks and Duncombe (2001) discuss how IT can be used in developing countries to build businesses. Domaracki (2001) discusses how the technology gap between small and large businesses is closing and evening the playing field, making B2B and B2C e-commerce available to any business with access to computers, web browsers, and telecommunication links. This chapter discusses how small start-up companies can use ICT to establish e-commerce applications within developing economies where the infrastructure is not classified as “high-technology”. E-commerce is the process of buying, selling, or exchanging products, services, and information using computer networks including the Internet (Turban et al., 2002). Kalakota and Whinston (1997) define e-commerce using the perspectives of network communications, automated business processes, automated services, and online buying and selling. Turban et al. (2002) add perspectives on collaboration and community. Deise et al. (2000) describe the E-selling process as enabling customers through E-Browsing (catalogues, what we have), E-Buying (ordering, processing, invoicing, cost determination, etc.), and E-Customer Service (contact, etc.). Partial e-commerce occurs when the process is not totally using networks. B2C e-commerce is the electronic sale of goods, services, and content to individuals, Noyce (2002), Turban et al. (2002). B2B e-commerce is a transaction conducted electronically between businesses over the Internet, extranets, intranets, or private networks. Such transactions may be conducted between a business and its supply chain members, as well as between a business and any other business. A business refers to any organization, public or private, for profit or nonprofit (Turban et al., 2002, p. 217; Noyce, 2002; Palvia and Vemuri, 2002). 
Initially, B2B was used almost exclusively by large organizations to buy and sell industrial outputs and/or inputs. More recently B2B has expanded to small and medium sized enterprises, SMEs, who can buy and/or sell products/services directly, Mayer-Guell (2001). B2B transactions tend to be larger in value, more complex, and longer term when compared to B2C transactions with the average B2B transaction being worth $75,000.00 while the average B2C transaction is worth $75.00 (Freeman, 2001). Typical B2B transactions involve order management, credit management and the establishment of trade terms, product delivery and billing, invoice approval, payment, and the management of information for the entire process, Domaracki (2001). Noyce (2002) discusses collaboration as the underlying principle for B2B. The companies chosen as mini-cases for this study meet the basic definition of B2B with their e-commerce ventures as both are selling services over the Internet to other business organizations. Additionally, both provide quotes and the ability to",
"title": ""
},
{
"docid": "5896289f0a9b788ef722756953a580ce",
"text": "Biodiesel, defined as the mono-alkyl esters of vegetable oils or animal fats, is an balternativeQ diesel fuel that is becoming accepted in a steadily growing number of countries around the world. Since the source of biodiesel varies with the location and other sources such as recycled oils are continuously gaining interest, it is important to possess data on how the various fatty acid profiles of the different sources can influence biodiesel fuel properties. The properties of the various individual fatty esters that comprise biodiesel determine the overall fuel properties of the biodiesel fuel. In turn, the properties of the various fatty esters are determined by the structural features of the fatty acid and the alcohol moieties that comprise a fatty ester. Structural features that influence the physical and fuel properties of a fatty ester molecule are chain length, degree of unsaturation, and branching of the chain. Important fuel properties of biodiesel that are influenced by the fatty acid profile and, in turn, by the structural features of the various fatty esters are cetane number and ultimately exhaust emissions, heat of combustion, cold flow, oxidative stability, viscosity, and lubricity. Published by Elsevier B.V.",
"title": ""
},
{
"docid": "5179662c841302180848dc566a114f10",
"text": "Hyperspectral image (HSI) unmixing has attracted increasing research interests in recent decades. The major difficulty of it lies in that the endmembers and the associated abundances need to be separated from highly mixed observation data with few a priori information. Recently, sparsity-constrained nonnegative matrix factorization (NMF) algorithms have been proved effective for hyperspectral unmixing (HU) since they can sufficiently utilize the sparsity property of HSIs. In order to improve the performance of NMF-based unmixing approaches, spectral and spatial constrains have been added into the unmixing model, but spectral-spatial joint structure is required to be more accurately estimated. To exploit the property that similar pixels within a small spatial neighborhood have higher possibility to share similar abundances, hypergraph structure is employed to capture the similarity relationship among the spatial nearby pixels. In the construction of a hypergraph, each pixel is taken as a vertex of the hypergraph, and each vertex with its k nearest spatial neighboring pixels form a hyperedge. Using the hypergraph, the pixels with similar abundances can be accurately found, which enables the unmixing algorithm to obtain promising results. Experiments on synthetic data and real HSIs are conducted to investigate the performance of the proposed algorithm. The superiority of the proposed algorithm is demonstrated by comparing it with some state-of-the-art methods.",
"title": ""
},
{
"docid": "587eea887a3fcb6561833c250ae9c6e3",
"text": "We present a new interactive and online approach to 3D scene understanding. Our system, SemanticPaint, allows users to simultaneously scan their environment whilst interactively segmenting the scene simply by reaching out and touching any desired object or surface. Our system continuously learns from these segmentations, and labels new unseen parts of the environment. Unlike offline systems where capture, labeling, and batch learning often take hours or even days to perform, our approach is fully online. This provides users with continuous live feedback of the recognition during capture, allowing to immediately correct errors in the segmentation and/or learning—a feature that has so far been unavailable to batch and offline methods. This leads to models that are tailored or personalized specifically to the user's environments and object classes of interest, opening up the potential for new applications in augmented reality, interior design, and human/robot navigation. It also provides the ability to capture substantial labeled 3D datasets for training large-scale visual recognition systems.",
"title": ""
},
{
"docid": "d509601659e2192fb4ea8f112c9d75fe",
"text": "Computer vision has advanced significantly that many discriminative approaches such as object recognition are now widely used in real applications. We present another exciting development that utilizes generative models for the mass customization of medical products such as dental crowns. In the dental industry, it takes a technician years of training to design synthetic crowns that restore the function and integrity of missing teeth. Each crown must be customized to individual patients, and it requires human expertise in a time-consuming and laborintensive process, even with computer assisted design software. We develop a fully automatic approach that learns not only from human designs of dental crowns, but also from natural spatial profiles between opposing teeth. The latter is hard to account for by technicians but important for proper biting and chewing functions. Built upon a Generative Adversarial Network architecture (GAN), our deep learning model predicts the customized crown-filled depth scan from the crown-missing depth scan and opposing depth scan. We propose to incorporate additional space constraints and statistical compatibility into learning. Our automatic designs exceed human technicians’ standards for good morphology and functionality, and our algorithm is being tested for production use.",
"title": ""
},
{
"docid": "ee79f55fe096b195984ecdc1fc570179",
"text": "In bibliographies like DBLP and Citeseer, there are three kinds of entity-name problems that need to be solved. First, multiple entities share one name, which is called the name sharing problem. Second, one entity has different names, which is called the name variant problem. Third, multiple entities share multiple names, which is called the name mixing problem. We aim to solve these problems based on one model in this paper. We call this task complete entity resolution. Different from previous work, our work use global information based on data with two types of information, words and author names. We propose a generative latent topic model that involves both author names and words — the LDA-dual model, by extending the LDA (Latent Dirichlet Allocation) model. We also propose a method to obtain model parameters that is global information. Based on obtained model parameters, we propose two algorithms to solve the three problems mentioned above. Experimental results demonstrate the effectiveness and great potential of the proposed model and algorithms.",
"title": ""
},
{
"docid": "a1292045684debec0e6e56f7f5e85fad",
"text": "BACKGROUND\nLncRNA and microRNA play an important role in the development of human cancers; they can act as a tumor suppressor gene or an oncogene. LncRNA GAS5, originating from the separation from tumor suppressor gene cDNA subtractive library, is considered as an oncogene in several kinds of cancers. The expression of miR-221 affects tumorigenesis, invasion and metastasis in multiple types of human cancers. However, there's very little information on the role LncRNA GAS5 and miR-221 play in CRC. Therefore, we conducted this study in order to analyze the association of GAS5 and miR-221 with the prognosis of CRC and preliminary study was done on proliferation, metastasis and invasion of CRC cells. In the present study, we demonstrate the predictive value of long non-coding RNA GAS5 (lncRNA GAS5) and mircoRNA-221 (miR-221) in the prognosis of colorectal cancer (CRC) and their effects on CRC cell proliferation, migration and invasion.\n\n\nMETHODS\nOne hundred and fifty-eight cases with CRC patients and 173 cases of healthy subjects that with no abnormalities, who've been diagnosed through colonoscopy between January 2012 and January 2014 were selected for the study. After the clinicopathological data of the subjects, tissue, plasma and exosomes were collected, lncRNA GAS5 and miR-221 expressions in tissues, plasma and exosomes were measured by reverse transcription quantitative polymerase chain reaction (RT-qPCR). The diagnostic values of lncRNA GAS5 and miR-221 expression in tissues, plasma and exosomes in patients with CRC were analyzed using receiver operating characteristic curve (ROC). Lentiviral vector was constructed for the overexpression of lncRNA GAS5, and SW480 cell line was used for the transfection of the experiment and assigned into an empty vector and GAS5 groups. The cell proliferation, migration and invasion were tested using a cell counting kit-8 assay and Transwell assay respectively.\n\n\nRESULTS\nThe results revealed that LncRNA GAS5 was upregulated while the miR-221 was downregulated in the tissues, plasma and exosomes of patients with CRC. The results of ROC showed that the expressions of both lncRNA GAS5 and miR-221 in the tissues, plasma and exosomes had diagnostic value in CRC. While the LncRNA GAS5 expression in tissues, plasma and exosomes were associated with the tumor node metastasis (TNM) stage, Dukes stage, lymph node metastasis (LNM), local recurrence rate and distant metastasis rate, the MiR-221 expression in tissues, plasma and exosomes were associated with tumor size, TNM stage, Dukes stage, LNM, local recurrence rate and distant metastasis rate. LncRNA GAS5 and miR-221 expression in tissues, plasma and exosomes were found to be independent prognostic factors for CRC. Following the overexpression of GAS5, the GAS5 expressions was up-regulated and miR-221 expression was down-regulated; the rate of cell proliferation, migration and invasion were decreased.",
"title": ""
},
{
"docid": "553a86035f5013595ef61c4c19997d7c",
"text": "This paper proposes a novel self-oscillating, boost-derived (SOBD) dc-dc converter with load regulation. This proposed topology utilizes saturable cores (SCs) to offer self-oscillating and output regulation capabilities. Conventionally, the self-oscillating dc transformer (SODT) type of scheme can be implemented in a very cost-effective manner. The ideal dc transformer provides both input and output currents as pure, ripple-free dc quantities. However, the structure of an SODT-type converter will not provide regulation, and its oscillating frequency will change in accordance with the load. The proposed converter with SCs will allow output-voltage regulation to be accomplished by varying only the control current between the transformers, as occurs in a pulse-width modulation (PWM) converter. A control network that combines PWM schemes with a regenerative function is used for this converter. The optimum duty cycle is implemented to achieve low levels of input- and output-current ripples, which are characteristic of an ideal dc transformer. The oscillating frequency will spontaneously be kept near-constant, regardless of the load, without adding any auxiliary or compensation circuits. The typical voltage waveforms of the transistors are found to be close to quasisquare. The switching surges are well suppressed, and the voltage stress of the component is well clamped. The turn-on/turn-off of the switch is zero-voltage switching (ZVS), and its resonant transition can occur over a wide range of load current levels. A prototype circuit of an SOBD converter shows 86% efficiency at 48-V input, with 12-V, 100-W output, and presents an operating frequency of 100 kHz.",
"title": ""
},
{
"docid": "85d4675562eb87550c3aebf0017e7243",
"text": "Online social media are complementing and in some cases replacing person-to-person social interaction and redefining the diffusion of information. In particular, microblogs have become crucial grounds on which public relations, marketing, and political battles are fought. We introduce an extensible framework that will enable the real-time analysis of meme diffusion in social media by mining, visualizing, mapping, classifying, and modeling massive streams of public microblogging events. We describe a Web service that leverages this framework to track political memes in Twitter and help detect astroturfing, smear campaigns, and other misinformation in the context of U.S. political elections. We present some cases of abusive behaviors uncovered by our service. Finally, we discuss promising preliminary results on the detection of suspicious memes via supervised learning based on features extracted from the topology of the diffusion networks, sentiment analysis, and crowdsourced annotations.",
"title": ""
},
{
"docid": "58042f8c83e5cc4aa41e136bb4e0dc1f",
"text": "In this paper, we propose wire-free integrated sensors that monitor pulse wave velocity (PWV) and respiration, both non-electrical vital signs, by using an all-electrical method. The key techniques that we employ to obtain all-electrical and wire-free measurement are bio-impedance (BI) and analog-modulated body-channel communication (BCC), respectively. For PWV, time difference between ECG signal from the heart and BI signal from the wrist is measured. To remove wires and avoid sampling rate mismatch between ECG and BI sensors, ECG signal is sent to the BI sensor via analog BCC without any sampling. For respiration measurement, BI sensor is located at the abdomen to detect volume change during inhalation and exhalation. A prototype chip fabricated in 0.11 μm CMOS process consists of ECG, BI sensor and BCC transceiver. Measurement results show that heart rate and PWV are both within their normal physiological range. The chip consumes 1.28 mW at 1.2 V supply while occupying 5 mm×2.5 mm of area.",
"title": ""
},
{
"docid": "268a9b3a1a567c25c5ba93708b0a167b",
"text": "Many types of relations in physical, biological, social and information systems can be modeled as homogeneous or heterogeneous concept graphs. Hence, learning from and with graph embeddings has drawn a great deal of research interest recently, but developing an embedding learning method that is flexible enough to accommodate variations in physical networks is still a challenging problem. In this paper, we conjecture that the one-shot supervised learning mechanism is a bottleneck in improving the performance of the graph embedding learning, and propose to extend this by introducing a multi-shot \"unsupervised\" learning framework where a 2-layer MLP network for every shot .The framework can be extended to accommodate a variety of homogeneous and heterogeneous networks. Empirical results on several real-world data set show that the proposed model consistently and significantly outperforms existing state-of-the-art approaches on knowledge base completion and graph based multi-label classification tasks.",
"title": ""
},
{
"docid": "98cd53e6bf758a382653cb7252169d22",
"text": "We introduce a novel malware detection algorithm based on the analysis of graphs constructed from dynamically collected instruction traces of the target executable. These graphs represent Markov chains, where the vertices are the instructions and the transition probabilities are estimated by the data contained in the trace. We use a combination of graph kernels to create a similarity matrix between the instruction trace graphs. The resulting graph kernel measures similarity between graphs on both local and global levels. Finally, the similarity matrix is sent to a support vector machine to perform classification. Our method is particularly appealing because we do not base our classifications on the raw n-gram data, but rather use our data representation to perform classification in graph space. We demonstrate the performance of our algorithm on two classification problems: benign software versus malware, and the Netbull virus with different packers versus other classes of viruses. Our results show a statistically significant improvement over signature-based and other machine learning-based detection methods.",
"title": ""
},
{
"docid": "b1d9e27972b2ea9af105bc6c026fddc9",
"text": "Data diversity is critical to success when training deep learning models. Medical imaging data sets are often imbalanced as pathologic findings are generally rare, which introduces significant challenges when training deep learning models. In this work, we propose a method to generate synthetic abnormal MRI images with brain tumors by training a generative adversarial network using two publicly available data sets of brain MRI. We demonstrate two unique benefits that the synthetic images provide. First, we illustrate improved performance on tumor segmentation by leveraging the synthetic images as a form of data augmentation. Second, we demonstrate the value of generative models as an anonymization tool, achieving comparable tumor segmentation results when trained on the synthetic data versus when trained on real subject data. Together, these results offer a potential solution to two of the largest challenges facing machine learning in medical imaging, namely the small incidence of pathological findings, and the restrictions around sharing of patient data.",
"title": ""
},
{
"docid": "9b656d1ae57b43bb2ccf2d971e46eae3",
"text": "On the one hand, enterprises manufacturing any kinds of goods require agile production technology to be able to fully accommodate their customers’ demand for flexibility. On the other hand, Smart Objects, such as networked intelligent machines or tagged raw materials, exhibit ever increasing capabilities, up to the point where they offer their smart behaviour as web services. The two trends towards higher flexibility and more capable objects will lead to a service-oriented infrastructure where complex processes will span over all types of systems — from the backend enterprise system down to the Smart Objects. To fully support this, we present SOCRADES, an integration architecture that can serve the requirements of future manufacturing. SOCRADES provides generic components upon which sophisticated production processes can be modelled. In this paper we in particular give a list of requirements, the design, and the reference implementation of that integration architecture.",
"title": ""
},
{
"docid": "7519e3a8326e2ef2ebd28c22e80c4e34",
"text": "This paper presents a synthetic framework identifying the central drivers of start-up commercialization strategy and the implications of these drivers for industrial dynamics. We link strategy to the commercialization environment – the microeconomic and strategic conditions facing a firm that is translating an \" idea \" into a value proposition for customers. The framework addresses why technology entrepreneurs in some environments undermine established firms, while others cooperate with incumbents and reinforce existing market power. Our analysis suggests that competitive interaction between start-up innovators and established firms depends on the presence or absence of a \" market for ideas. \" By focusing on the operating requirements, efficiency, and institutions associated with markets for ideas, this framework holds several implications for the management of high-technology entrepreneurial firms. (Stern). We would like to thank the firms who participate in the MIT Commercialization Strategies survey for their time and effort. The past two decades have witnessed a dramatic increase in investment in technology entrepreneurship – the founding of small, start-up firms developing inventions and technology with significant potential commercial application. Because of their youth and small size, start-up innovators usually have little experience in the markets for which their innovations are most appropriate, and they have at most two or three technologies at the stage of potential market introduction. For these firms, a key management challenge is how to translate promising",
"title": ""
},
{
"docid": "bd2fcdd0b7139bf719f1ec7ffb4fe5d5",
"text": "Much is known on how facial expressions of emotion are produced, including which individual muscles are most active in each expression. Yet, little is known on how this information is interpreted by the human visual system. This paper presents a systematic study of the image dimensionality of facial expressions of emotion. In particular, we investigate how recognition degrades when the resolution of the image (i.e., number of pixels when seen as a 5.3 by 8 degree stimulus) is reduced. We show that recognition is only impaired in practice when the image resolution goes below 20 × 30 pixels. A study of the confusion tables demonstrates that each expression of emotion is consistently confused by a small set of alternatives and that the confusion is not symmetric, i.e., misclassifying emotion a as b does not imply we will mistake b for a. This asymmetric pattern is consistent over the different image resolutions and cannot be explained by the similarity of muscle activation. Furthermore, although women are generally better at recognizing expressions of emotion at all resolutions, the asymmetry patterns are the same. We discuss the implications of these results for current models of face perception.",
"title": ""
},
{
"docid": "1d507afcd430b70944bd7f460ee90277",
"text": "Moringa oleifera, or the horseradish tree, is a pan-tropical species that is known by such regional names as benzolive, drumstick tree, kelor, marango, mlonge, mulangay, nébéday, saijhan, and sajna. Over the past two decades, many reports have appeared in mainstream scientific journals describing its nutritional and medicinal properties. Its utility as a non-food product has also been extensively described, but will not be discussed herein, (e.g. lumber, charcoal, fencing, water clarification, lubricating oil). As with many reports of the nutritional or medicinal value of a natural product, there are an alarming number of purveyors of “healthful” food who are now promoting M. oleifera as a panacea. While much of this recent enthusiasm indeed appears to be justified, it is critical to separate rigorous scientific evidence from anecdote. Those who charge a premium for products containing Moringa spp. must be held to a high standard. Those who promote the cultivation and use of Moringa spp. in regions where hope is in short supply must be provided with the best available evidence, so as not to raise false hopes and to encourage the most fruitful use of scarce research capital. It is the purpose of this series of brief reviews to: (a) critically evaluate the published scientific evidence on M. oleifera, (b) highlight claims from the traditional and tribal medicinal lore and from non-peer reviewed sources that would benefit from further, rigorous scientific evaluation, and (c) suggest directions for future clinical research that could be carried out by local investigators in developing regions. This is the first of four planned papers on the nutritional, therapeutic, and prophylactic properties of Moringa oleifera. In this introductory paper, the scientific evidence for health effects are summarized in tabular format, and the strength of evidence is discussed in very general terms. A second paper will address a select few uses of Moringa in greater detail than they can be dealt with in the context of this paper. A third paper will probe the phytochemical components of Moringa in more depth. A fourth paper will lay out a number of suggested research projects that can be initiated at a very small scale and with very limited resources, in geographic regions which are suitable for Moringa cultivation and utilization. In advance of this fourth paper in the series, the author solicits suggestions and will gladly acknowledge contributions that are incorporated into the final manuscript. It is the intent and hope of the journal’s editors that such a network of small-scale, locally executed investigations might be successfully woven into a greater fabric which will have enhanced scientific power over similar small studies conducted and reported in isolation. Such an approach will have the added benefit that statistically sound planning, peer review, and multi-center coordination brings to a scientific investigation. Copyright: ©2005 Jed W. Fahey This is an Open Access article distributed under the terms of the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Contact: Jed W. Fahey Email: jfahey@jhmi.edu Received: September 15, 2005 Accepted: November 20, 2005 Published: December 1, 2005 The electronic version of this article is the complete one and can be found online at: http://www.TFLJournal.org/article.php/200512011",
"title": ""
}
] |
scidocsrr
|
ce5892bac1f82a7c44bb77996aeee46a
|
Keyword spotting for Google assistant using contextual speech recognition
|
[
{
"docid": "811080d1bf24f041792d6895791242bb",
"text": "We survey the use of weighted nite state transducers WFSTs in speech recognition We show that WFSTs provide a common and natural rep resentation for HMM models context dependency pronunciation dictio naries grammars and alternative recognition outputs Furthermore gen eral transducer operations combine these representations exibly and e ciently Weighted determinization and minimization algorithms optimize their time and space requirements and a weight pushing algorithm dis tributes the weights along the paths of a weighted transducer optimally for speech recognition As an example we describe a North American Business News NAB recognition system built using these techniques that combines the HMMs full cross word triphones a lexicon of forty thousand words and a large trigram grammar into a single weighted transducer that is only somewhat larger than the trigram word grammar and that runs NAB in real time on a very simple decoder In another example we show that the same techniques can be used to optimize lattices for second pass recognition In a third example we show how general automata operations can be used to assemble lattices from di erent recognizers to improve recognition performance Introduction Much of current large vocabulary speech recognition is based on models such as HMMs tree lexicons or n gram language models that can be represented by weighted nite state transducers Even when richer models are used for instance context free grammars for spoken dialog applications they are often restricted for e ciency reasons to regular subsets either by design or by approximation Pereira and Wright Nederhof Mohri and Nederhof M Mohri Weighted FSTs in Speech Recognition A nite state transducer is a nite automaton whose state transitions are labeled with both input and output symbols Therefore a path through the transducer encodes a mapping from an input symbol sequence to an output symbol sequence A weighted transducer puts weights on transitions in addition to the input and output symbols Weights may encode probabilities durations penalties or any other quantity that accumulates along paths to compute the overall weight of mapping an input sequence to an output sequence Weighted transducers are thus a natural choice to represent the probabilistic nite state models prevalent in speech processing We present a survey of the recent work done on the use of weighted nite state transducers WFSTs in speech recognition Mohri et al Pereira and Riley Mohri Mohri et al Mohri and Riley Mohri et al Mohri and Riley We show that common methods for combin ing and optimizing probabilistic models in speech processing can be generalized and e ciently implemented by translation to mathematically well de ned op erations on weighted transducers Furthermore new optimization opportunities arise from viewing all symbolic levels of ASR modeling as weighted transducers Thus weighted nite state transducers de ne a common framework with shared algorithms for the representation and use of the models in speech recognition that has important algorithmic and software engineering bene ts We start by introducing the main de nitions and notation for weighted nite state acceptors and transducers used in this work We then present introductory speech related examples and describe the most important weighted transducer operations relevant to speech applications Finally we give examples of the ap plication of transducer representations and operations on transducers to large vocabulary speech recognition with results that 
meet certain optimality criteria Weighted Finite State Transducer De nitions and Al gorithms The de nitions that follow are based on the general algebraic notion of semiring Kuich and Salomaa The semiring abstraction permits the de nition of automata representations and algorithms over a broad class of weight sets and algebraic operations A semiring K consists of a set K equipped with an associative and com mutative operation and an associative operation with identities and respectively such that distributes over and a a In other words a semiring is similar to the more familiar ring algebraic structure such as the ring of polynomials over the reals except that the additive operation may not have an inverse For example N is a semiring The weights used in speech recognition often represent probabilities the cor responding semiring is then the probability semiring R For numerical stability implementations may replace probabilities with log probabilities The appropriate semiring is then the image by log of the semiring R M Mohri Weighted FSTs in Speech Recognition and is called the log semiring When using log probabilities with a Viterbi best path approximation the appropriate semiring is the tropical semiring R f g min In the following de nitions we assume an arbitrary semiring K We will give examples with di erent semirings to illustrate the variety of useful computations that can be carried out in this framework by a judicious choice of semiring",
"title": ""
},
{
"docid": "36356a91bc84888cb2dd6180983fdfc5",
"text": "We recently showed that Long Short-Term Memory (LSTM) recurrent neural networks (RNNs) outperform state-of-the-art deep neural networks (DNNs) for large scale acoustic modeling where the models were trained with the cross-entropy (CE) criterion. It has also been shown that sequence discriminative training of DNNs initially trained with the CE criterion gives significant improvements. In this paper, we investigate sequence discriminative training of LSTM RNNs in a large scale acoustic modeling task. We train the models in a distributed manner using asynchronous stochastic gradient descent optimization technique. We compare two sequence discriminative criteria – maximum mutual information and state-level minimum Bayes risk, and we investigate a number of variations of the basic training strategy to better understand issues raised by both the sequential model, and the objective function. We obtain significant gains over the CE trained LSTM RNN model using sequence discriminative training techniques.",
"title": ""
}
] |
[
{
"docid": "5b9488755fb3146adf5b6d8d767b7c8f",
"text": "This paper presents an overview of our activities for spoken and written language resources for Vietnamese implemented at CLIPSIMAG Laboratory and International Research Center MICA. A new methodology for fast text corpora acquisition for minority languages which has been applied to Vietnamese is proposed. The first results of a process of building a large Vietnamese speech database (VNSpeechCorpus) and a phonetic dictionary, which is used for automatic alignment process, are also presented.",
"title": ""
},
{
"docid": "1fa7dd4842e7505af529961b50b0c3cc",
"text": "Recently, a new kind of vulnerability has surfaced: application layer Denial-of-Service (DoS) attacks targeting web services. These attacks aim at consuming resources by sending Simple Object Access Protocol (SOAP) requests that contain malicious XML content. These requests cannot be detected on the network or transportation (TCP/IP) layer, as they appear as legitimate packets. Until now, there is no web service security specification that addresses this problem. Moreover, the current WS-Security standard induces crucial additional vulnerabilities threatening the availability of certain web service implementations. First, this paper introduces an attack-generating tool to test and confirm previously reported vulnerabilities. The results indicate that the attacks have a devastating impact on theweb service availability, even whilst utilizing an absolute minimum of attack resources. Since these highly effective attacks can be mounted with relative ease, it is clear that defending against them is essential, looking at the growth of cloud andweb services. Second, this paper proposes an intelligent, fast and adaptive system for detecting against XML and HTTP application layer attacks. The intelligent system works by extracting several features and using them to construct a model for typical requests. Finally, outlier detection can be used to detect malicious requests. Furthermore, the intelligent defense system is capable of detecting spoofing and regular flooding attacks. The system is designed to be inserted in a cloud environmentwhere it can transparently protect the cloud broker and even cloud providers. For testing its effectiveness, the defense systemwas deployed to protect web services running onWSO2 with Axis2: the defacto standard for open source web service deployment. The proposed defense system demonstrates its capability to effectively filter out the malicious requests, whilst generating a minimal amount of overhead for the total response time. © 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a4fbb63fa62ec2985b395521d51191dd",
"text": "Deep Neural Networks expose a high degree of parallelism, making them amenable to highly data parallel architectures. However, data-parallel architectures often accept inefficiency in individual computations for the sake of overall efficiency. We show that on average, activation values of convolutional layers during inference in modern Deep Convolutional Neural Networks (CNNs) contain 92% zero bits. Processing these zero bits entails ineffectual computations that could be skipped. We propose Pragmatic (PRA), a massively data-parallel architecture that eliminates most of the ineffectual computations on-the-fly, improving performance and energy efficiency compared to state-of-the-art high-performance accelerators [5]. The idea behind PRA is deceptively simple: use serial-parallel shift-and-add multiplication while skipping the zero bits of the serial input. However, a straightforward implementation based on shift-and-add multiplication yields unacceptable area, power and memory access overheads compared to a conventional bit-parallel design. PRA incorporates a set of design decisions to yield a practical, area and energy efficient design.\n Measurements demonstrate that for convolutional layers, PRA is 4.31X faster than DaDianNao [5] (DaDN) using a 16-bit fixed-point representation. While PRA requires 1.68X more area than DaDN, the performance gains yield a 1.70X increase in energy efficiency in a 65nm technology. With 8-bit quantized activations, PRA is 2.25X faster and 1.31X more energy efficient than an 8-bit version of DaDN.",
"title": ""
},
{
"docid": "015326feea60387bc2a8cdc9ea6a7f81",
"text": "Phosphorylation of the transcription factor CREB is thought to be important in processes underlying long-term memory. It is unclear whether CREB phosphorylation can carry information about the sign of changes in synaptic strength, whether CREB pathways are equally activated in neurons receiving or providing synaptic input, or how synapse-to-nucleus communication is mediated. We found that Ca(2+)-dependent nuclear CREB phosphorylation was rapidly evoked by synaptic stimuli including, but not limited to, those that induced potentiation and depression of synaptic strength. In striking contrast, high frequency action potential firing alone failed to trigger CREB phosphorylation. Activation of a submembranous Ca2+ sensor, just beneath sites of Ca2+ entry, appears critical for triggering nuclear CREB phosphorylation via calmodulin and a Ca2+/calmodulin-dependent protein kinase.",
"title": ""
},
{
"docid": "d5d85cddf50e64d602223308f448da37",
"text": "Congenital adrenal hyperplasia (CAH) is the commonest cause of ambiguous genitalia for female newborns and is one of the conditions under the umbrella term of \"Disorders of Sex Development\" (DSD). Management of these patients require multidisciplinary collaboration and is challenging because there are many aspects of care, such as the most appropriate timing and extent of feminizing surgery required and attention to psychosexual, psychological, and reproductive issues, which still require attention and reconsideration, even in developed nations. In developing nations, however, additional challenges prevail: poverty, lack of education, lack of easily accessible and affordable medical care, traditional beliefs on intersex, religious, and cultural issues, as well as poor community support. There is a paucity of long-term outcome studies on DSD and CAH to inform on best management to achieve optimal outcome. In a survey conducted on 16 patients with CAH and their parents in a Malaysian tertiary center, 31.3% of patients stated poor knowledge of their condition, and 37.5% did not realize that their medications were required for life. This review on the research done on quality of life (QOL) of female patients with CAH aims: to discuss factors affecting QOL of female patients with CAH, especially in the developing population; to summarize the extant literature on the quality of life outcomes of female patients with CAH; and to offer recommendations to improve QOL outcomes in clinical practice and research.",
"title": ""
},
{
"docid": "0efe3ccc1c45121c5167d3792a7fcd25",
"text": "This paper addresses the motion planning problem while considering Human-Robot Interaction (HRI) constraints. The proposed planner generates collision-free paths that are acceptable and legible to the human. The method extends our previous work on human-aware path planning to cluttered environments. A randomized cost-based exploration method provides an initial path that is relevant with respect to HRI and workspace constraints. The quality of the path is further improved with a local path-optimization method. Simulation results on mobile manipulators in the presence of humans demonstrate the overall efficacy of the approach.",
"title": ""
},
{
"docid": "4f4817fd70f62b15c0b52311fa677a64",
"text": "Active plasmonics is a burgeoning and challenging subfield of plasmonics. It exploits the active control of surface plasmon resonance. In this review, a first-ever in-depth description of the theoretical relationship between surface plasmon resonance and its affecting factors, which forms the basis for active plasmon control, will be presented. Three categories of active plasmonic structures, consisting of plasmonic structures in tunable dielectric surroundings, plasmonic structures with tunable gap distances, and self-tunable plasmonic structures, will be proposed in terms of the modulation mechanism. The recent advances and current challenges for these three categories of active plasmonic structures will be discussed in detail. The flourishing development of active plasmonic structures opens access to new application fields. A significant part of this review will be devoted to the applications of active plasmonic structures in plasmonic sensing, tunable surface-enhanced Raman scattering, active plasmonic components, and electrochromic smart windows. This review will be concluded with a section on the future challenges and prospects for active plasmonics.",
"title": ""
},
{
"docid": "4791a6a151070fd4c3e39ca5115b6ed0",
"text": "Furniture style describes the discriminative appearance characteristics of furniture. It plays an important role in real-world indoor decoration. In this article, we explore the furniture style features and study the problem of furniture style classification. Differing from traditional object classification, furniture style classification aims at classifying different furniture in terms of the “style” that describes its appearance (e.g., American style, Gothic style, Rococo style, etc.) rather than the “kind” that is more related to its functional structure (e.g., bed, desk, etc.). To pursue efficient furniture style features, we construct a novel dataset of furniture styles that contains 16 common style categories and implement three strategies with respect to two categories of classification, that is, handcrafted classification and learning-based classification. First, we follow the typical image classification pipeline to extract the handcrafted features and train the classifier by support vector machine. Then we use the convolutional neural network to extract learning-based features from training images. To obtain comprehensive furniture style features, we finally combine the handcrafted image classification pipeline and the learning-based network. We experimentally evaluate the performances of handcrafted features and learning-based features of each strategy, and the results show the superiority of learning-based features and also the comprehensiveness of handcrafted features.",
"title": ""
},
{
"docid": "006515574bf1f690818465200d43c4ba",
"text": "Although the concept of school engagement figures prominently in most school dropout theories, there has been little empirical research conducted on its nature and course and, more importantly, the association with dropout. Information on the natural development of school engagement would greatly benefit those interested in preventing student alienation during adolescence. Using a longitudinal sample of 11,827 French-Canadian high school students, we tested behavioral, affective, cognitive indices of engagement both separately and as a global construct. We then assessed their contribution as prospective predictors of school dropout using factor analysis and structural equation modeling. Global engagement reliably predicted school dropout. Among its three specific dimensions, only behavioral engagement made a significant contribution in the prediction equation. Our findings confirm the robustness of the overall multidimensional construct of school engagement, which reflects both cognitive and psychosocial characteristics, and underscore the importance attributed to basic participation and compliance issues in reliably estimating risk of not completing basic schooling during adolescence.",
"title": ""
},
{
"docid": "4cda615a3b0046abd7a6347d722023ca",
"text": "This paper is a survey on logical aspects of nite automata. Central points are the connection between nite automata and monadic second-order logic, the Ehrenfeucht-Fra ss e technique in the context of formal language theory, nite automata on !-words and their determinization, and a self-contained proof of the \\Rabin Tree Theorem\". Sections 5 and 6 contain material presented in a lecture series to the \\Final Winter School of AMICS\" (Palermo, February 1996). A modiied version of the paper will be a chapter of the \\Handbook of Formal Language Theory\", edited by G. Rozenberg and A. Salomaa, to appear in Springer-Verlag.",
"title": ""
},
{
"docid": "2c27fc786dadb6c0d048fcf66b22ed59",
"text": "Changes in DNA copy number contribute to cancer pathogenesis. We now show that high-density single nucleotide polymorphism (SNP) arrays can detect copy number alterations. By hybridizing genomic representations of breast and lung carcinoma cell line and lung tumor DNA to SNP arrays, and measuring locus-specific hybridization intensity, we detected both known and novel genomic amplifications and homozygous deletions in these cancer samples. Moreover, by combining genotyping with SNP quantitation, we could distinguish loss of heterozygosity events caused by hemizygous deletion from those that occur by copy-neutral events. The simultaneous measurement of DNA copy number changes and loss of heterozygosity events by SNP arrays should strengthen our ability to discover cancer-causing genes and to refine cancer diagnosis.",
"title": ""
},
{
"docid": "9c8f6dddcb9bb099eea4433534cb40da",
"text": "There has been an increasing interest in the applications of polarimctric n~icrowavc radiometers for ocean wind remote sensing. Aircraft and spaceborne radiometers have found significant wind direction signals in sea surface brightness temperatures, in addition to their sensitivities on wind speeds. However, it is not yet understood what physical scattering mechanisms produce the observed wind direction dependence. To this encl, polari]nctric microwave emissions from wind-generated sea surfaces are investigated with a polarimctric two-scale scattering model of sea surfaces, which relates the directional wind-wave spectrum to passive microwave signatures of sea surfaces. T)leoretical azimuthal modulations are found to agree well with experimental observations foI all Stokes paranletcrs from nearnadir to 65° incidence angles. The up/downwind asymmetries of brightness temperatures are interpreted usiIlg the hydrodynamic modulation. The contributions of Bragg scattering by short waves, geometric optics scattering by long waves and sea foam are examined. The geometric optics scattering mechanism underestimates the directicmal signals in the first three Stokes paranletcrs, and most importantly it predicts no signals in the fourth Stokes parameter (V), in disagreement with experimental datfi. In contrast, the Bragg scattering and and contributes to most of the wind direction signals from the two-scale model correctly predicts the phase changes of tl}e up/crosswind asymmetries in 7j U from middle to high incidence angles. The accuracy of the Bragg scattering theory for radiometric emission from water ripples is corroborated by the numerical Monte Carlo simulation of rough surface scattering. ‘I’his theoretical interpretation indicates the potential use of ]Jolarimctric brightness temperatures for retrieving the directional wave spectrum of capillary waves.",
"title": ""
},
{
"docid": "19e64f99e6dc539de2db68808495a47a",
"text": "This paper presents an overview of motor drive technologies used for safety-critical aerospace applications, with a particular focus placed on the choice of candidate machines and their drive topologies. Aircraft applications demand high reliability, high availability, and high power density while aiming to reduce weight, complexity, fuel consumption, operational costs, and environmental impact. New electric driven systems can meet these requirements and also provide significant technical and economic improvements over conventional mechanical, hydraulic, or pneumatic systems. Fault-tolerant motor drives can be achieved by partitioning and redundancy through the use of multichannel three-phase systems or multiple single-phase modules. Analytical methods are adopted to compare caged induction, reluctance, and PM motor technologies and their relative merits. The analysis suggests that the dual (or triple) three-phase PMAC motor drive may be a favored choice for general aerospace applications, striking a balance between necessary redundancy and undue complexity, while maintaining a balanced operation following a failure. The modular single-phase approach offers a good compromise between size and complexity but suffers from high total harmonic distortion of the supply and high torque ripple when faulted. For each specific aircraft application, a parametrical optimization of the suitable motor configuration is needed through a coupled electromagnetic and thermal analysis, and should be verified by finite-element analysis.",
"title": ""
},
{
"docid": "641b18d9173f4badc570662fd38859f7",
"text": "With the MPI-Sintel Flow dataset, we introduce a naturalistic dataset for optical flow evaluation derived from the open source CGI movie Sintel. In contrast to the well-known Middlebury dataset, the MPI-Sintel Flow dataset contains longer and more varied sequences with image degradations such as motion blur, defocus blur, and atmospheric effects. Animators use a variety of techniques that produce pleasing images but make the raw animation data inappropriate for computer vision applications if used “out of the box”. Several changes to the rendering software and animation files were necessary in order to produce data for flow evaluation and similar changes are likely for future efforts to construct a scientific dataset from an animated film. Here we distill our experience with Sintel into a set of best practices for using computer animation to generate scientific data for vision research.",
"title": ""
},
{
"docid": "7389a67b68243b2eae04adf525499280",
"text": "The paper presents the results of an exercise that considers the principles of the Agile Manifesto and of the Declaration of Interdependence, and evaluate how much their implementation is compatible with a high target profile in a Spice Assessment.",
"title": ""
},
{
"docid": "1e320f6c5ce9240f580aeb32a47619a1",
"text": "The human gut is populated with as many as 100 trillion cells, whose collective genome, the microbiome, is a reflection of evolutionary selection pressures acting at the level of the host and at the level of the microbial cell. The ecological rules that govern the shape of microbial diversity in the gut apply to mutualists and pathogens alike.",
"title": ""
},
{
"docid": "ee73847c9dd27672c9860219c293b8dd",
"text": "Sensing cost and data quality are two primary concerns in mobile crowd sensing. In this article, we propose a new crowd sensing paradigm, sparse mobile crowd sensing, which leverages the spatial and temporal correlation among the data sensed in different sub-areas to significantly reduce the required number of sensing tasks allocated, thus lowering overall sensing cost (e.g., smartphone energy consumption and incentives) while ensuring data quality. Sparse mobile crowdsensing applications intelligently select only a small portion of the target area for sensing while inferring the data of the remaining unsensed area with high accuracy. We discuss the fundamental research challenges in sparse mobile crowdsensing, and design a general framework with potential solutions to the challenges. To verify the effectiveness of the proposed framework, a sparse mobile crowdsensing prototype for temperature and traffic monitoring is implemented and evaluated. With several future research directions identified in sparse mobile crowdsensing, we expect that more research interests will be stimulated in this novel crowdsensing paradigm.",
"title": ""
},
{
"docid": "1a259f28221e8045568e5053ddc4ede1",
"text": "The decision tree-based classification is a popular approach for pattern recognition and data mining. Most decision tree induction methods assume training data being present at one central location. Given the growth in distributed databases at geographically dispersed locations, the methods for decision tree induction in distributed settings are gaining importance. This paper describes one distributed learning algorithm which extends the original(centralized) CHAID algorithm to its distributed version. This distributed algorithm generates exactly the same results as its centralized counterpart. For completeness, a distributed quantization method is proposed so that continuous data can be processed by our algorithm. Experimental results for several well known data sets are presented and compared with decision trees generated using CHAID with centrally stored data.",
"title": ""
},
{
"docid": "30731e817fb1c04f853caf1dd7a30418",
"text": "This paper focuses on morphological analysis of Bangla words to incorporate them into Bangla to universal networking language (UNL) processors. Researchers have been working on morphological structure of Bangla for machine translation and a considerable volume of work is available. So far, no attempt has been made to integrate the works for a concrete computational output. In this paper we particularly emphasize on bringing previous works on morphological analysis in the framework of UNL, with the goal to produce a Bangla-UNL dictionary, as UNL structures can provide, for any morphological analysis, a unified base to fit into already developed universal conversion systems of UNL. We explain the morphological rules of Bangla words for UNL structures. These rules tend to expose the modifications of parts of speech with regards to tense, person, subject etc. of the words of a sentence. Here we outline the morphology of nouns, verbs and adjective phrases only.",
"title": ""
},
{
"docid": "36e42f2e4fd2f848eaf82440c2bcbf62",
"text": "Indoor positioning systems (IPSs) locate objects in closed structures such as office buildings, hospitals, stores, factories, and warehouses, where Global Positioning System devices generally do not work. Most available systems apply wireless concepts, optical tracking, and/or ultrasound. This paper presents a standalone IPS using radio frequency identification (RFID) technology. The concept is based on an object carrying an RFID reader module, which reads low-cost passive tags installed next to the object path. A positioning system using a Kalman filter is proposed. The inputs of the proposed algorithm are the measurements of the backscattered signal power propagated from nearby RFID tags and a tag-path position database. The proposed algorithm first estimates the location of the reader, neglecting tag-reader angle-path loss. Based on the location estimate, an iterative procedure is implemented, targeting the estimation of the tag-reader angle-path loss, where the latter is iteratively compensated from the received signal strength information measurement. Experimental results are presented, illustrating the high performance of the proposed positioning system.",
"title": ""
}
] |
scidocsrr
|
65beaaa72aadb30d96cefc6d19e4b84c
|
The Truth and Nothing But the Truth: Multimodal Analysis for Deception Detection
|
[
{
"docid": "ff56bae298b25accf6cd8c2710160bad",
"text": "An important difference between traditional AI systems and human intelligence is the human ability to harness commonsense knowledge gleaned from a lifetime of learning and experience to make informed decisions. This allows humans to adapt easily to novel situations where AI fails catastrophically due to a lack of situation-specific rules and generalization capabilities. Commonsense knowledge also provides background information that enables humans to successfully operate in social situations where such knowledge is typically assumed. Since commonsense consists of information that humans take for granted, gathering it is an extremely difficult task. Previous versions of SenticNet were focused on collecting this kind of knowledge for sentiment analysis but they were heavily limited by their inability to generalize. SenticNet 4 overcomes such limitations by leveraging on conceptual primitives automatically generated by means of hierarchical clustering and dimensionality reduction.",
"title": ""
}
] |
[
{
"docid": "0837ca7bd6e28bb732cfdd300ccecbca",
"text": "In our previous research we have made literature analysis and discovered possible mind map application areas. We have pointed out why currently developed software and methods are not adequate and why we are developing a new one. We have defined system architecture and functionality that our software would have. After that, we proceeded with text-mining algorithm development and testing after which we have concluded with our plans for further research. In this paper we will give basic notions about previously published article and present our custom developed software for automatic mind map generation. This software will be tested. Generated mind maps will be critically analyzed. The paper will be concluded with research summary and possible further research and software improvement.",
"title": ""
},
{
"docid": "2fbfe1fa8cda571a931b700cbb18f46e",
"text": "A low-noise front-end and its controller are proposed for capacitive touch screen panels. The proposed front-end circuit based on a ΔΣ ADC uses differential sensing and integration scheme to maximize the input dynamic range. In addition, supply and internal reference voltage noise are effectively removed in the sensed touch signal. Furthermore, the demodulation process in front of the ΔΣ ADC provides the maximized oversampling ratio (OSR) so that the scan rate can be increased at the targeted resolution. The proposed IC is implemented in a mixed-mode 0.18-μm CMOS process. The measurement is performed on a bar-patterned 4.3-inch touch screen panel with 12 driving lines and 8 sensing channels. The report rate is 100 Hz, and SNR and spatial jitter are 54 dB and 0.11 mm, respectively. The chip area is 3 × 3 mm2 and total power consumption is 2.9 mW with 1.8-V and 3.3-V supply.",
"title": ""
},
{
"docid": "e82c0826863ccd9cd647725fc00a2137",
"text": "Particle Markov chain Monte Carlo (PMCMC) is a systematic way of combining the two main tools used for Monte Carlo statistical inference: sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC). We present a new PMCMC algorithm that we refer to as particle Gibbs with ancestor sampling (PGAS). PGAS provides the data analyst with an off-the-shelf class of Markov kernels that can be used to simulate, for instance, the typically high-dimensional and highly autocorrelated state trajectory in a state-space model. The ancestor sampling procedure enables fast mixing of the PGAS kernel even when using seemingly few particles in the underlying SMC sampler. This is important as it can significantly reduce the computational burden that is typically associated with using SMC. PGAS is conceptually similar to the existing PG with backward simulation (PGBS) procedure. Instead of using separate forward and backward sweeps as in PGBS, however, we achieve the same effect in a single forward sweep. This makes PGAS well suited for addressing inference problems not only in state-space models, but also in models with more complex dependencies, such as non-Markovian, Bayesian nonparametric, and general probabilistic graphical models.",
"title": ""
},
{
"docid": "629648968e2b378f46fa19ae6a343e70",
"text": "BACKGROUND\nAustralia was one of the first countries to introduce a publicly funded national human papillomavirus (HPV) vaccination program that commenced in April 2007, using the quadrivalent HPV vaccine targeting 12- to 13-year-old girls on an ongoing basis. Two-year catch-up programs were offered to 14- to 17- year-old girls in schools and 18- to 26-year-old women in community-based settings. We present data from the school-based program on population-level vaccine effectiveness against cervical abnormalities in Victoria, Australia.\n\n\nMETHODS\nData for women age-eligible for the HPV vaccination program were linked between the Victorian Cervical Cytology Registry and the National HPV Vaccination Program Register to create a cohort of screening women who were either vaccinated or unvaccinated. Entry into the cohort was 1 April 2007 or at first Pap test for women not already screening. Vaccine effectiveness (VE) and hazard ratios (HR) for cervical abnormalities by vaccination status between 1 April 2007 and 31 December 2011 were calculated using proportional hazards regression.\n\n\nRESULTS\nThe study included 14,085 unvaccinated and 24,871 vaccinated women attending screening who were eligible for vaccination at school, 85.0% of whom had received three doses. Detection rates of histologically confirmed high-grade (HG) cervical abnormalities and high-grade cytology (HGC) were significantly lower for vaccinated women (any dose) (HG 4.8 per 1,000 person-years, HGC 11.9 per 1,000 person-years) compared with unvaccinated women (HG 6.4 per 1,000 person-years, HGC 15.3 per 1,000 person-years) HR 0.72 (95% CI 0.58 to 0.91) and HR 0.75 (95% CI 0.65 to 0.87), respectively. The HR for low-grade (LG) cytological abnormalities was 0.76 (95% CI 0.72 to 0.80). VE adjusted a priori for age at first screening, socioeconomic status and remoteness index, for women who were completely vaccinated, was greatest for CIN3+/AIS at 47.5% (95% CI 22.7 to 64.4) and 36.4% (95% CI 9.8 to 55.1) for women who received any dose of vaccine, and was negatively associated with age. For women who received only one or two doses of vaccine, HRs for HG histology were not significantly different from 1.0, although the number of outcomes was small.\n\n\nCONCLUSION\nA population-based HPV vaccination program in schools significantly reduced cervical abnormalities for vaccinated women within five years of implementation, with the greatest vaccine effectiveness observed for the youngest women.",
"title": ""
},
{
"docid": "cfea41d4bc6580c91ee27201360f8e17",
"text": "It is common sense that cloud-native applications (CNA) are intentionally designed for the cloud. Although this understanding can be broadly used it does not guide and explain what a cloud-native application exactly is. The term ”cloud-native” was used quite frequently in birthday times of cloud computing (2006) which seems somehow obvious nowadays. But the term disappeared almost completely. Suddenly and in the last years the term is used again more and more frequently and shows increasing momentum. This paper summarizes the outcomes of a systematic mapping study analyzing research papers covering ”cloud-native” topics, research questions and engineering methodologies. We summarize research focuses and trends dealing with cloud-native application engineering approaches. Furthermore, we provide a definition for the term ”cloud-native application” which takes all findings, insights of analyzed publications and already existing and well-defined terminology into account.",
"title": ""
},
{
"docid": "5300e9938a545895c8b97fe6c9d06aa5",
"text": "Background subtraction is a common computer vision task. We analyze the usual pixel-level approach. We develop an efficient adaptive algorithm using Gaussian mixture probability density. Recursive equations are used to constantly update the parameters and but also to simultaneously select the appropriate number of components for each pixel.",
"title": ""
},
{
"docid": "77906a8aebb33860423077ac66dc6552",
"text": "If a physical object has a smooth or piecewise smooth boundary, its images obtained by cameras in varying positions undergo smooth apparent deformations. These deformations are locally well approximated by affine transforms of the image plane. In consequence the solid object recognition problem has often been led back to the computation of affine invariant image local features. Such invariant features could be obtained by normalization methods, but no fully affine normalization method exists for the time being. Even scale invariance is dealt with rigorously only by the scaleinvariant feature transform (SIFT) method. By simulating zooms out and normalizing translation and rotation, SIFT is invariant to four out of the six parameters of an affine transform. The method proposed in this paper, affine-SIFT (ASIFT), simulates all image views obtainable by varying the two camera axis orientation parameters, namely, the latitude and the longitude angles, left over by the SIFT method. Then it covers the other four parameters by using the SIFT method itself. The resulting method will be mathematically proved to be fully affine invariant. Against any prognosis, simulating all views depending on the two camera orientation parameters is feasible with no dramatic computational load. A two-resolution scheme further reduces the ASIFT complexity to about twice that of SIFT. A new notion, the transition tilt, measuring the amount of distortion from one view to another, is introduced. While an absolute tilt from a frontal to a slanted view exceeding 6 is rare, much higher transition tilts are common when two slanted views of an object are compared (see Figure 1). The attainable transition tilt is measured for each affine image comparison method. The new method permits one to reliably identify features that have undergone transition tilts of large magnitude, up to 36 and higher. This fact is substantiated by many experiments which show that ASIFT significantly outperforms the state-of-the-art methods SIFT, maximally stable extremal region (MSER), Harris-affine, and Hessian-affine.",
"title": ""
},
{
"docid": "94b482fefc9e8e61fe4614245ff03287",
"text": "In this paper, a general-purpose fuzzy controller for dc–dc converters is investigated. Based on a qualitative description of the system to be controlled, fuzzy controllers are capable of good performances, even for those systems where linear control techniques fail, e.g., when a mathematical description is not available or is in the presence of wide parameter variations. The presented approach is general and can be applied to any dc–dc converter topologies. Controller implementation is relatively simple and can guarantee a small-signal response as fast and stable as other standard regulators and an improved large-signal response. Simulation results of Buck-Boost and Sepic converters show control potentialities.",
"title": ""
},
{
"docid": "e64320b71675f2a059a50fd9479d2056",
"text": "Extreme sports (ES) are usually pursued in remote locations with little or no access to medical care with the athlete competing against oneself or the forces of nature. They involve high speed, height, real or perceived danger, a high level of physical exertion, spectacular stunts, and heightened risk element or death.Popularity for such sports has increased exponentially over the past two decades with dedicated TV channels, Internet sites, high-rating competitions, and high-profile sponsors drawing more participants.Recent data suggest that the risk and severity of injury in some ES is unexpectedly high. Medical personnel treating the ES athlete need to be aware there are numerous differences which must be appreciated between the common traditional sports and this newly developing area. These relate to the temperament of the athletes themselves, the particular epidemiology of injury, the initial management following injury, treatment decisions, and rehabilitation.The management of the injured extreme sports athlete is a challenge to surgeons and sports physicians. Appropriate safety gear is essential for protection from severe or fatal injuries as the margins for error in these sports are small.The purpose of this review is to provide an epidemiologic overview of common injuries affecting the extreme athletes through a focus on a few of the most popular and exciting extreme sports.",
"title": ""
},
{
"docid": "ffef3f247f0821eee02b8d8795ddb21c",
"text": "A broadband polarization reconfigurable rectenna is proposed, which can operate in three polarization modes. The receiving antenna of the rectenna is a polarization reconfigurable planar monopole antenna. By installing switches on the feeding network, the antenna can switch to receive electromagnetic (EM) waves with different polarizations, including linear polarization (LP), right-hand and left-hand circular polarizations (RHCP/LHCP). To achieve stable conversion efficiency of the rectenna (nr) in all the modes within a wide frequency band, a tunable matching network is inserted between the rectifying circuit and the antenna. The measured nr changes from 23.8% to 31.9% in the LP mode within 5.1-5.8 GHz and from 22.7% to 24.5% in the CP modes over 5.8-6 GHz. Compared to rectennas with conventional broadband matching network, the proposed rectenna exhibits more stable conversion efficiency.",
"title": ""
},
{
"docid": "6fc86c662db76c22e708c5091af6a0da",
"text": "Liver hemangiomas are the most common benign liver tumors and are usually incidental findings. Liver hemangiomas are readily demonstrated by abdominal ultrasonography, computed tomography or magnetic resonance imaging. Giant liver hemangiomas are defined by a diameter larger than 5 cm. In patients with a giant liver hemangioma, observation is justified in the absence of symptoms. Surgical resection is indicated in patients with abdominal (mechanical) complaints or complications, or when diagnosis remains inconclusive. Enucleation is the preferred surgical method, according to existing literature and our own experience. Spontaneous or traumatic rupture of a giant hepatic hemangioma is rare, however, the mortality rate is high (36-39%). An uncommon complication of a giant hemangioma is disseminated intravascular coagulation (Kasabach-Merritt syndrome); intervention is then required. Herein, the authors provide a literature update of the current evidence concerning the management of giant hepatic hemangiomas. In addition, the authors assessed treatment strategies and outcomes in a series of patients with giant liver hemangiomas managed in our department.",
"title": ""
},
{
"docid": "1e3729164ecb6b74dbe5c9019bff7ae4",
"text": "Serverless or functions as a service runtimes have shown significant benefits to efficiency and cost for event-driven cloud applications. Although serverless runtimes are limited to applications requiring lightweight computation and memory, such as machine learning prediction and inference, they have shown improvements on these applications beyond other cloud runtimes. Training deep learning can be both compute and memory intensive. We investigate the use of serverless runtimes while leveraging data parallelism for large models, show the challenges and limitations due to the tightly coupled nature of such models, and propose modifications to the underlying runtime implementations that would mitigate them. For hyperparameter optimization of smaller deep learning models, we show that serverless runtimes can provide significant benefit.",
"title": ""
},
{
"docid": "57167d5bf02e9c76057daa83d3f803c5",
"text": "When alcohol is consumed, the alcoholic beverages first pass through the various segments of the gastrointestinal (GI) tract. Accordingly, alcohol may interfere with the structure as well as the function of GI-tract segments. For example, alcohol can impair the function of the muscles separating the esophagus from the stomach, thereby favoring the occurrence of heartburn. Alcohol-induced damage to the mucosal lining of the esophagus also increases the risk of esophageal cancer. In the stomach, alcohol interferes with gastric acid secretion and with the activity of the muscles surrounding the stomach. Similarly, alcohol may impair the muscle movement in the small and large intestines, contributing to the diarrhea frequently observed in alcoholics. Moreover, alcohol inhibits the absorption of nutrients in the small intestine and increases the transport of toxins across the intestinal walls, effects that may contribute to the development of alcohol-related damage to the liver and other organs.",
"title": ""
},
{
"docid": "e4236031c7d165a48a37171c47de1c38",
"text": "We present a discrete event simulation model reproducing the adoption of Radio Frequency Identification (RFID) technology for the optimal management of common logistics processes of a Fast Moving Consumer Goods (FMCG) warehouse. In this study, simulation is exploited as a powerful tool to replicate both the reengineered RFID logistics processes and the flows of Electronic Product Code (EPC) data generated by such processes. Moreover, a complex tool has been developed to analyze data resulting from the simulation runs, thus addressing the issue of how the flows of EPC data generated by RFID technology can be exploited to provide value-added information for optimally managing the logistics processes. Specifically, an EPCIS compliant Data Warehouse has been designed to act as EPCIS Repository and store EPC data resulting from simulation. Starting from EPC data, properly designed tools, referred to as Business Intelligence Modules, provide value-added information for processes optimization. Due to the newness of RFID adoption in the logistics context and to the lack of real case examples that can be examined, we believe that both the model and the data management system developed can be very useful to understand the practical implications of the technology and related information flow, as well as to show how to leverage EPC data for process management. Results of the study can provide a proof-of-concept to substantiate the adoption of RFID technology in the FMCG industry.",
"title": ""
},
{
"docid": "1a1467aa70bbcc97e01a6ec25899bb17",
"text": "Despite numerous studies to reduce the power consumption of the display-related components of mobile devices, previous works have led to a deterioration in user experience due to compromised graphic quality. In this paper, we propose an effective scheme to reduce the energy consumption of the display subsystems of mobile devices without compromising user experience. In preliminary experiments, we noticed that mobile devices typically perform redundant display updates even if the display content does not change. Based on this observation, we first propose a metric called the content rate, which is defined as the number of meaningful frame changes in a second. Our scheme then estimates an optimal refresh rate based on the content rate in order to eliminate redundant display updates. Also proposed is the flicker compensation technique, which prevents the flickering problem caused by the reduced refresh rate. Extensive experiments conducted on the latest smartphones demonstrated that our system effectively reduces the overall power consumption of mobile devices by 35 percent while simultaneously maintaining satisfactory display quality.",
"title": ""
},
{
"docid": "ff9b5d96b762b2baacf4bf19348c614b",
"text": "Drought stress is a major factor in reduce growth, development and production of plants. Stress was applied with polyethylene glycol (PEG) 6000 and water potentials were: zero (control), -0.15 (PEG 10%), -0.49 (PEG 20%), -1.03 (PEG 30%) and -1.76 (PEG40%) MPa. The solutes accumulation of two maize (Zea mays L.) cultivars -704 and 301were determined after drought stress. In our experiments, a higher amount of soluble sugars and a lower amount of starch were found under stress. Soluble sugars concentration increased (from 1.18 to 1.90 times) in roots and shoots of both varieties when the studied varieties were subjected to drought stress, but starch content were significantly (p<0.05) decreased (from 16 to 84%) in both varieties. This suggests that sugars play an important role in Osmotic Adjustment (OA) in maize. The free proline level also increased (from 1.56 to 3.13 times) in response to drought stress and the increase in 704 var. was higher than 301 var. It seems to proline may play a role in minimizing the damage caused by dehydration. Increase of proline content in shoots was higher than roots, but increase of soluble sugar content and decrease of starch content in roots was higher than shoots.",
"title": ""
},
{
"docid": "7539af35786fba888fa3a7cafa5db0b0",
"text": "Multi-view stereo algorithms typically rely on same-exposure images as inputs due to the brightness constancy assumption. While state-of-the-art depth results are excellent, they do not produce high-dynamic range textures required for high-quality view reconstruction. In this paper, we propose a technique that adapts multi-view stereo for different exposure inputs to simultaneously recover reliable dense depth and high dynamic range textures. In our technique, we use an exposure-invariant similarity statistic to establish correspondences, through which we robustly extract the camera radiometric response function and the image exposures. This enables us to then convert all images to radiance space and selectively use the radiance data for dense depth and high dynamic range texture recovery. We show results for synthetic and real scenes.",
"title": ""
},
{
"docid": "45f2599c6a256b55ee466c258ba93f48",
"text": "Functional turnover of transcription factor binding sites (TFBSs), such as whole-motif loss or gain, are common events during genome evolution. Conventional probabilistic phylogenetic shadowing methods model the evolution of genomes only at nucleotide level, and lack the ability to capture the evolutionary dynamics of functional turnover of aligned sequence entities. As a result, comparative genomic search of non-conserved motifs across evolutionarily related taxa remains a difficult challenge, especially in higher eukaryotes, where the cis-regulatory regions containing motifs can be long and divergent; existing methods rely heavily on specialized pattern-driven heuristic search or sampling algorithms, which can be difficult to generalize and hard to interpret based on phylogenetic principles. We propose a new method: Conditional Shadowing via Multi-resolution Evolutionary Trees, or CSMET, which uses a context-dependent probabilistic graphical model that allows aligned sites from different taxa in a multiple alignment to be modeled by either a background or an appropriate motif phylogeny conditioning on the functional specifications of each taxon. The functional specifications themselves are the output of a phylogeny which models the evolution not of individual nucleotides, but of the overall functionality (e.g., functional retention or loss) of the aligned sequence segments over lineages. Combining this method with a hidden Markov model that autocorrelates evolutionary rates on successive sites in the genome, CSMET offers a principled way to take into consideration lineage-specific evolution of TFBSs during motif detection, and a readily computable analytical form of the posterior distribution of motifs under TFBS turnover. On both simulated and real Drosophila cis-regulatory modules, CSMET outperforms other state-of-the-art comparative genomic motif finders.",
"title": ""
},
{
"docid": "2b53b125dc8c79322aabb083a9c991e4",
"text": "Geographical location is vital to geospatial applications like local search and event detection. In this paper, we investigate and improve on the task of text-based geolocation prediction of Twitter users. Previous studies on this topic have typically assumed that geographical references (e.g., gazetteer terms, dialectal words) in a text are indicative of its author’s location. However, these references are often buried in informal, ungrammatical, and multilingual data, and are therefore non-trivial to identify and exploit. We present an integrated geolocation prediction framework and investigate what factors impact on prediction accuracy. First, we evaluate a range of feature selection methods to obtain “location indicative words”. We then evaluate the impact of nongeotagged tweets, language, and user-declared metadata on geolocation prediction. In addition, we evaluate the impact of temporal variance on model generalisation, and discuss how users differ in terms of their geolocatability. We achieve state-of-the-art results for the text-based Twitter user geolocation task, and also provide the most extensive exploration of the task to date. Our findings provide valuable insights into the design of robust, practical text-based geolocation prediction systems.",
"title": ""
},
{
"docid": "14b6ff85d404302af45cf608137879c7",
"text": "In this paper, an automatic multi-organ segmentation based on multi-boost learning and statistical shape model search was proposed. First, simple but robust Multi-Boost Classifier was trained to hierarchically locate and pre-segment multiple organs. To ensure the generalization ability of the classifier relative location information between organs, organ and whole body is exploited. Left lung and right lung are first localized and pre-segmented, then liver and spleen are detected upon its location in whole body and its relative location to lungs, kidney is finally detected upon the features of relative location to liver and left lung. Second, shape and appearance models are constructed for model fitting. The final refinement delineation is performed by best point searching guided by appearance profile classifier and is constrained with multi-boost classified probabilities, intensity and gradient features. The method was tested on 30 unseen CT and 30 unseen enhanced CT (CTce) datasets from ISBI 2015 VISCERAL challenge. The results demonstrated that the multi-boost learning can be used to locate multi-organ robustly and segment lung and kidney accurately. The liver and spleen segmentation based on statistical shape searching has shown good performance too. Copyright c © by the paper’s authors. Copying permitted only for private and academic purposes. In: O. Goksel (ed.): Proceedings of the VISCERAL Anatomy Grand Challenge at the 2015 IEEE International Symposium on Biomedical Imaging (ISBI), New York, NY, Apr 16, 2015 published at http://ceur-ws.org",
"title": ""
}
] |
scidocsrr
|
9823b70dbc7d21380da5b898638a45b4
|
Music Style Transfer Issues: A Position Paper
|
[
{
"docid": "2a10978fdd01c7c19d957fb4224016bf",
"text": "To my parents and my girlfriend. Abstract Techniques of Artificial Intelligence and Human-Computer Interaction have empowered computer music systems with the ability to perform with humans via a wide spectrum of applications. However, musical interaction between humans and machines is still far less musical than the interaction between humans since most systems lack any representation or capability of musical expression. This thesis contributes various techniques, especially machine-learning algorithms, to create artificial musicians that perform expressively and collaboratively with humans. The current system focuses on three aspects of expression in human-computer collaborative performance: 1) expressive timing and dynamics, 2) basic improvisation techniques, and 3) facial and body gestures. Timing and dynamics are the two most fundamental aspects of musical expression and also the main focus of this thesis. We model the expression of different musicians as co-evolving time series. Based on this representation, we develop a set of algorithms, including a sophisticated spectral learning method, to discover regularities of expressive musical interaction from rehearsals. Given a learned model, an artificial performer generates its own musical expression by interacting with a human performer given a pre-defined score. The results show that, with a small number of rehearsals, we can successfully apply machine learning to generate more expressive and human-like collaborative performance than the baseline automatic accompaniment algorithm. This is the first application of spectral learning in the field of music. Besides expressive timing and dynamics, we consider some basic improvisation techniques where musicians have the freedom to interpret pitches and rhythms. We developed a model that trains a different set of parameters for each individual measure and focus on the prediction of the number of chords and the number of notes per chord. Given the model prediction, an improvised score is decoded using nearest-neighbor search, which selects the training example whose parameters are closest to the estimation. Our result shows that our model generates more musical, interactive, and natural collaborative improvisation than a reasonable baseline based on mean estimation. Although not conventionally considered to be \" music, \" body and facial movements are also important aspects of musical expression. We study body and facial expressions using a humanoid saxophonist robot. We contribute the first algorithm to enable a robot to perform an accompaniment for a musician and react to human performance with gestural and facial expression. The current system uses rule-based performance-motion mapping and separates robot motions into three groups: finger motions, …",
"title": ""
},
{
"docid": "80a34e1544f9a20d6e1698278e0479b5",
"text": "We introduce a method for imposing higher-level structure on generated, polyphonic music. A Convolutional Restricted Boltzmann Machine (C-RBM) as a generative model is combined with gradient descent constraint optimisation to provide further control over the generation process. Among other things, this allows for the use of a “template” piece, from which some structural properties can be extracted, and transferred as constraints to the newly generated material. The sampling process is guided with Simulated Annealing to avoid local optima, and to find solutions that both satisfy the constraints, and are relatively stable with respect to the C-RBM. Results show that with this approach it is possible to control the higher-level self-similarity structure, the meter, and the tonal properties of the resulting musical piece, while preserving its local musical coherence.",
"title": ""
}
] |
[
{
"docid": "771a28efa936f7e6cc84cdd23815b165",
"text": "Job scheduling is one of the most important research problems in distributed systems, particularly cloud environments/computing. The dynamic and heterogeneous nature of resources in such distributed systems makes optimum job scheduling a non-trivial task. Maximal resource utilization in cloud computing demands/necessitates an algorithm that allocates resources to jobs with optimal execution time and cost. The critical issue for job scheduling is assigning jobs to the most suitable resources, considering user preferences and requirements. In this paper, we present a hybrid approach called FUGE that is based on fuzzy theory and a genetic algorithm (GA) that aims to perform optimal load balancing considering execution time and cost. We modify the standard genetic algorithm (SGA) and use fuzzy theory to devise a fuzzy-based steady-state GA in order to improve SGA performance in term of makespan. In details, the FUGE algorithm assigns jobs to resources by considering virtual machine (VM) processing speed, VM memory, VM bandwidth, and the job lengths. We mathematically prove our M. Shojafar (B) · N. Cordeschi Department of Information Engineering Electronics and Telecommunications (DIET), University Sapienza of Rome, via Eudossiana 18, 00184 Rome, Italy e-mail: m.shojafar@yahoo.com; shojafar@diet.uniroma1.it URL: http://www.mshojafar.com N. Cordeschi e-mail: cordeschi@diet.uniroma1.it S. Javanmardi Research and Education center, Nikan network Company, Shiraz, Fars, Iran e-mail: info@nikannetwork.com; saeedjavanmardi@gmail.com S. Abolfazli Center for Mobile Cloud Computing, University of Malaya, Kuala Lumpur, Malaysia e-mail: Abolfazli@ieee.org optimization problem which is convex with well-known analytical conditions (specifically, Karush–Kuhn–Tucker conditions). We compare the performance of our approach to several other cloud scheduling models. The results of the experiments show the efficiency of the FUGE approach in terms of execution time, execution cost, and average degree of imbalance.",
"title": ""
},
{
"docid": "ebb70af20b550c911a63757b754c6619",
"text": "This paper presents a vehicle price prediction system by using the supervised machine learning technique. The research uses multiple linear regression as the machine learning prediction method which offered 98% prediction precision. Using multiple linear regression, there are multiple independent variables but one and only one dependent variable whose actual and predicted values are compared to find precision of results. This paper proposes a system where price is dependent variable which is predicted, and this price is derived from factors like vehicle’s model, make, city, version, color, mileage, alloy rims and power steering.",
"title": ""
},
{
"docid": "9ca90172c5beff5922b4f5274ef61480",
"text": "In the past decade, Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance in various Artificial Intelligence tasks. To accelerate the experimentation and development of CNNs, several software frameworks have been released, primarily targeting power-hungry CPUs and GPUs. In this context, reconfigurable hardware in the form of FPGAs constitutes a potential alternative platform that can be integrated in the existing deep-learning ecosystem to provide a tunable balance between performance, power consumption, and programmability. In this article, a survey of the existing CNN-to-FPGA toolflows is presented, comprising a comparative study of their key characteristics, which include the supported applications, architectural choices, design space exploration methods, and achieved performance. Moreover, major challenges and objectives introduced by the latest trends in CNN algorithmic research are identified and presented. Finally, a uniform evaluation methodology is proposed, aiming at the comprehensive, complete, and in-depth evaluation of CNN-to-FPGA toolflows.",
"title": ""
},
{
"docid": "4a1de61e9e74aa43a4e0bf195250ef72",
"text": "We present in this paper a system for converting PDF legacy documents into structured XML format. This conversion system first extracts the different streams contained in PDF files (text, bitmap and vectorial images) and then applies different components in order to express in XML the logically structured documents. Some of these components are traditional in Document Analysis, other more specific to PDF. We also present a graphical user interface in order to check, correct and validate the analysis of the components. We eventually report on two real user cases where this system was applied on.",
"title": ""
},
{
"docid": "b57859a76aea1fb5d4219068bde83283",
"text": "Software vulnerabilities are the root cause of a wide range of attacks. Existing vulnerability scanning tools are able to produce a set of suspects. However, they often suffer from a high false positive rate. Convicting a suspect and vindicating false positives are mostly a highly demanding manual process, requiring a certain level of understanding of the software. This limitation significantly thwarts the application of these tools by system administrators or regular users who are concerned about security but lack of understanding of, or even access to, the source code. It is often the case that even developers are reluctant to inspect/fix these numerous suspects unless they are convicted by evidence. In this paper, we propose a lightweight dynamic approach which generates evidence for various security vulnerabilities in software, with the goal of relieving the manual procedure. It is based on data lineage tracing, a technique that associates each execution point precisely with a set of relevant input values. These input values can be mutated by an offline analysis to generate exploits. We overcome the efficiency challenge by using Binary Decision Diagrams (BDD). Our tool successfully generates exploits for all the known vulnerabilities we studied. We also use it to uncover a number of new vulnerabilities, proved by evidence.",
"title": ""
},
{
"docid": "2314e101f501a328e3600a73dd4ee898",
"text": "Sarcasm transforms the polarity of an apparently positive or negative utterance into its opposite. We report on a method for constructing a corpus of sarcastic Twitter messages in which determination of the sarcasm of each message has been made by its author. We use this reliable corpus to compare sarcastic utterances in Twitter to utterances that express positive or negative attitudes without sarcasm. We investigate the impact of lexical and pragmatic factors on machine learning effectiveness for identifying sarcastic utterances and we compare the performance of machine learning techniques and human judges on this task. Perhaps unsurprisingly, neither the human judges nor the machine learning techniques perform very well.",
"title": ""
},
{
"docid": "51048699044d547df7ffd3a0755c76d9",
"text": "Many sequential processing tasks require complex nonlinear transition functions from one step to the next. However, recurrent neural networks with “deep\" transition functions remain difficult to train, even when using Long Short-Term Memory (LSTM) networks. We introduce a novel theoretical analysis of recurrent networks based on Geršgorin’s circle theorem that illuminates several modeling and optimization issues and improves our understanding of the LSTM cell. Based on this analysis we propose Recurrent Highway Networks, which are deep not only in time but also in space, extending the LSTM architecture to larger step-to-step transition depths. Experiments demonstrate that the proposed architecture results in powerful and efficient models benefiting from up to 10 layers in the recurrent transition. On the Penn Treebank language modeling corpus, a single network outperforms all previous ensemble results with a perplexity of 66.0 on the test set. On the larger Hutter Prize Wikipedia dataset, a single network again significantly outperforms all previous results with an entropy of 1.32 bits per character on the test set.",
"title": ""
},
{
"docid": "b0cc069fe3f2f89436137cc3bf5e624d",
"text": "Business Process Management (BPM) has become one of the most important management disciplines in recent years. In reality, however, many technical process improvement projects failed in the past and the expected benefits could not be established. In the meantime, the agile software development movement made massive progress in contrast to classic waterfall approaches which are still the foundational methodologies for many BPM projects. This paper investigates the combination of a traditional BPM methodology and agile software development to overcome the limitations of existing BPM methodologies. The main focus is on projects that cover the technical realization of processes based on modern Business Process Management Systems (BPMS).",
"title": ""
},
{
"docid": "8e228c584769a3349a8727b6a52c0650",
"text": "Networks are commonly used to model the traffic patterns, social interactions, or web pages. The nodes in a network do not possess the same characteristics: some nodes are naturally more connected and some nodes can be more important. Closeness centrality (CC) is a global metric that quantifies how important is a given node in the network. When the network is dynamic and keeps changing, the relative importance of the nodes also changes. The best known algorithm to compute the CC scores makes it impractical to recompute them from scratch after each modification. In this paper, we propose Streamer, a distributed memory framework for incrementally maintaining the closeness centrality scores of a network upon changes. It leverages pipelined and replicated parallelism and takes NUMA effects into account. It speeds up the maintenance of the CC of a real graph with 916K vertices and 4.3M edges by a factor of 497 using a 64 nodes cluster.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "0a4392285df7ddb92458ffa390f36867",
"text": "A good model of object shape is essential in applications such as segmentation, detection, inpainting and graphics. For example, when performing segmentation, local constraints on the shapes can help where object boundaries are noisy or unclear, and global constraints can resolve ambiguities where background clutter looks similar to parts of the objects. In general, the stronger the model of shape, the more performance is improved. In this paper, we use a type of deep Boltzmann machine (Salakhutdinov and Hinton, International Conference on Artificial Intelligence and Statistics, 2009) that we call a Shape Boltzmann Machine (SBM) for the task of modeling foreground/background (binary) and parts-based (categorical) shape images. We show that the SBM characterizes a strong model of shape, in that samples from the model look realistic and it can generalize to generate samples that differ from training examples. We find that the SBM learns distributions that are qualitatively and quantitatively better than existing models for this task.",
"title": ""
},
{
"docid": "8b94a3040ee23fa3d4403b14b0f550e2",
"text": "Reactive programming has recently gained popularity as a paradigm that is well-suited for developing event-driven and interactive applications. It facilitates the development of such applications by providing abstractions to express time-varying values and automatically managing dependencies between such values. A number of approaches have been recently proposed embedded in various languages such as Haskell, Scheme, JavaScript, Java, .NET, etc. This survey describes and provides a taxonomy of existing reactive programming approaches along six axes: representation of time-varying values, evaluation model, lifting operations, multidirectionality, glitch avoidance, and support for distribution. From this taxonomy, we observe that there are still open challenges in the field of reactive programming. For instance, multidirectionality is supported only by a small number of languages, which do not automatically track dependencies between time-varying values. Similarly, glitch avoidance, which is subtle in reactive programs, cannot be ensured in distributed reactive programs using the current techniques.",
"title": ""
},
{
"docid": "7cf90874c70202653a47fa165a1a87f7",
"text": "This work proposes a new trust management system (TMS) for the Internet of Things (IoT). The wide majority of these systems are today bound to the assessment of trustworthiness with respect to a single function. As such, they cannot use past experiences related to other functions. Even those that support multiple functions hide this heterogeneity by regrouping all past experiences into a single metric. These restrictions are detrimental to the adaptation of TMSs to today’s emerging M2M and IoT architectures, which are characterized with heterogeneity in nodes, capabilities and services. To overcome these limitations, we design a context-aware and multi-service trust management system fitting the new requirements of the IoT. Simulation results show the good performance of the proposed system and especially highlight its ability to deter a class of common attacks designed to target trust management systems. a 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ffcd59b9cf48f61ad0278effa6c167dd",
"text": "The first of this two-part series on critical illness in pregnancy dealt with obstetric disorders. In Part II, medical conditions that commonly affect pregnant women or worsen during pregnancy are discussed. ARDS occurs more frequently in pregnancy. Strategies commonly used in nonpregnant patients, including permissive hypercapnia, limits for plateau pressure, and prone positioning, may not be acceptable, especially in late pregnancy. Genital tract infections unique to pregnancy include chorioamnionitis, group A streptococcal infection causing toxic shock syndrome, and polymicrobial infection with streptococci, staphylococci, and Clostridium perfringens causing necrotizing vulvitis or fasciitis. Pregnancy predisposes to VTE; D-dimer levels have low specificity in pregnancy. A ventilation-perfusion scan is preferred over CT pulmonary angiography in some situations to reduce radiation to the mother's breasts. Low-molecular-weight or unfractionated heparins form the mainstay of treatment; vitamin K antagonists, oral factor Xa inhibitors, and direct thrombin inhibitors are not recommended in pregnancy. The physiologic hyperdynamic circulation in pregnancy worsens many cardiovascular disorders. It increases risk of pulmonary edema or arrhythmias in mitral stenosis, heart failure in pulmonary hypertension or aortic stenosis, aortic dissection in Marfan syndrome, or valve thrombosis in mechanical heart valves. Common neurologic problems in pregnancy include seizures, altered mental status, visual symptoms, and strokes. Other common conditions discussed are aspiration of gastric contents, OSA, thyroid disorders, diabetic ketoacidosis, and cardiopulmonary arrest in pregnancy. Studies confined to pregnant women are available for only a few of these conditions. We have, therefore, reviewed pregnancy-specific adjustments in the management of these disorders.",
"title": ""
},
{
"docid": "a61e8a8e20862177ff3f4633c4035560",
"text": "Human annotation for syntactic parsing is expensive, and large resources are available only for a fraction of languages. A question we ask is whether one can leverage abundant unlabeled texts to improve syntactic parsers, beyond just using the texts to obtain more generalisable lexical features (i.e. beyond word embeddings). To this end, we propose a novel latent-variable generative model for semisupervised syntactic dependency parsing. As exact inference is intractable, we introduce a differentiable relaxation to obtain approximate samples and compute gradients with respect to the parser parameters. Our method (Differentiable Perturb-andParse) relies on differentiable dynamic programming over stochastically perturbed edge scores. We demonstrate effectiveness of our approach with experiments on English, French and Swedish.",
"title": ""
},
{
"docid": "0dc0b31c4f174a69b5917cdf93a5dd22",
"text": "Webpage is becoming a more and more important visual input to us. While there are few studies on saliency in webpage, we in this work make a focused study on how humans deploy their attention when viewing webpages and for the first time propose a computational model that is designed to predict webpage saliency. A dataset is built with 149 webpages and eye tracking data from 11 subjects who free-view the webpages. Inspired by the viewing patterns on webpages, multi-scale feature maps that contain object blob representation and text representation are integrated with explicit face maps and positional bias. We propose to use multiple kernel learning (MKL) to achieve a robust integration of various feature maps. Experimental results show that the proposed model outperforms its counterparts in predicting webpage saliency.",
"title": ""
},
{
"docid": "c68b94c11170fae3caf7dc211ab83f91",
"text": "Data mining is the extraction of useful, prognostic, interesting, and unknown information from massive transaction databases and other repositories. Data mining tools predict potential trends and actions, allowing various fields to make proactive, knowledge-driven decisions. Recently, with the rapid growth of information technology, the amount of data has exponentially increased in various fields. Big data mostly comes from people’s day-to-day activities and Internet-based companies. Mining frequent itemsets and association rule mining (ARM) are well-analysed techniques for revealing attractive correlations among variables in huge datasets. The Apriori algorithm is one of the most broadly used algorithms in ARM, and it collects the itemsets that frequently occur in order to discover association rules in massive datasets. The original Apriori algorithm is for sequential (single node or computer) environments. This Apriori algorithm has many drawbacks for processing huge datasets, such as that a single machine’s memory, CPU and storage capacity are insufficient. Parallel and distributed computing is the better solution to overcome the above problems. Many researchers have parallelized the Apriori algorithm. This study performs a survey on several well-enhanced and revised techniques for the parallel Apriori algorithm in the HadoopMapReduce environment. The Hadoop-MapReduce framework is a programming model that efficiently and effectively processes enormous databases in parallel. It can handle large clusters of commodity hardware in a reliable and fault-tolerant manner. This survey will provide an overall view of the parallel Apriori algorithm implementation in the Hadoop-MapReduce environment and briefly discuss the challenges and open issues of big data in the cloud and Hadoop-MapReduce. Moreover, this survey will not only give overall existing improved Apriori algorithm methods on Hadoop-MapReduce but also provide future research direction for upcoming researchers.",
"title": ""
},
{
"docid": "b83fc3d06ff877a7851549bcd23aaed2",
"text": "Finding what is and what is not a salient object can be helpful in developing better features and models in salient object detection (SOD). In this paper, we investigate the images that are selected and discarded in constructing a new SOD dataset and find that many similar candidates, complex shape and low objectness are three main attributes of many non-salient objects. Moreover, objects may have diversified attributes that make them salient. As a result, we propose a novel salient object detector by ensembling linear exemplar regressors. We first select reliable foreground and background seeds using the boundary prior and then adopt locally linear embedding (LLE) to conduct manifold-preserving foregroundness propagation. In this manner, a foregroundness map can be generated to roughly pop-out salient objects and suppress non-salient ones with many similar candidates. Moreover, we extract the shape, foregroundness and attention descriptors to characterize the extracted object proposals, and a linear exemplar regressor is trained to encode how to detect salient proposals in a specific image. Finally, various linear exemplar regressors are ensembled to form a single detector that adapts to various scenarios. Extensive experimental results on 5 dataset and the new SOD dataset show that our approach outperforms 9 state-of-art methods.",
"title": ""
},
{
"docid": "73bbec41d27db7b660bd49f3a1046905",
"text": "U sing robots in industrial welding operations is common but far from being a streamlined technological process. The problems are with the robots, still in their early design stages and difficult to use and program by regular operators; the welding process, which is complex and not really well known; and the human-machine interfaces, which are nonnatural and not really working. In this article, these problems are discussed, and a system designed with the double objective of serving R&D efforts on welding applications and to assist industrial partners working with welding setups is presented. The system is explained in some detail and demonstrated using two test cases that reproduce two situations common in industry: multilayer butt welding, used on big structures requiring very strong welds, and multipoint fillet welding, used, for example, on structural pieces in the construction industry.",
"title": ""
},
{
"docid": "560ff157bcedf4e59d4993229ef42d80",
"text": "Hash tables are important data structures that lie at the heart of important applications such as key-value stores and relational databases. Typically bucketized cuckoo hash tables (BCHTs) are used because they provide highthroughput lookups and load factors that exceed 95%. Unfortunately, this performance comes at the cost of reduced memory access efficiency. Positive lookups (key is in the table) and negative lookups (where it is not) on average access 1.5 and 2.0 buckets, respectively, which results in 50 to 100% more table-containing cache lines to be accessed than should be minimally necessary. To reduce these surplus accesses, this paper presents the Horton table, a revamped BCHT that reduces the expected cost of positive and negative lookups to fewer than 1.18 and 1.06 buckets, respectively, while still achieving load factors of 95%. The key innovation is remap entries, small in-bucket records that allow (1) more elements to be hashed using a single, primary hash function, (2) items that overflow buckets to be tracked and rehashed with one of many alternate functions while maintaining a worst-case lookup cost of 2 buckets, and (3) shortening the vast majority of negative searches to 1 bucket access. With these advancements, Horton tables outperform BCHTs by 17% to 89%.",
"title": ""
}
] |
scidocsrr
|
c3b92cc072df52a5a003bdbc3719bc14
|
Going Further with Point Pair Features
|
[
{
"docid": "e19b6cd095129b42be0bf0fe3f3d4a96",
"text": "This work addresses the problem of estimating the 6D Pose of specific objects from a single RGB-D image. We present a flexible approach that can deal with generic objects, both textured and texture-less. The key new concept is a learned, intermediate representation in form of a dense 3D object coordinate labelling paired with a dense class labelling. We are able to show that for a common dataset with texture-less objects, where template-based techniques are suitable and state-of-the art, our approach is slightly superior in terms of accuracy. We also demonstrate the benefits of our approach, compared to template-based techniques, in terms of robustness with respect to varying lighting conditions. Towards this end, we contribute a new ground truth dataset with 10k images of 20 objects captured each under three different lighting conditions. We demonstrate that our approach scales well with the number of objects and has capabilities to run fast.",
"title": ""
}
] |
[
{
"docid": "ca927f5557a6a5713e9313848fbbc5b1",
"text": "A wide band CMOS LC-tank voltage controlled oscillator (VCO) with small VCO gain (KVCO) variation was developed. For small KVCO variation, serial capacitor bank was added to the LC-tank with parallel capacitor array. Implemented in a 0.18 mum CMOS RF technology, the proposed VCO can be tuned from 4.39 GHz to 5.26 GHz with the VCO gain variation less than 9.56%. While consuming 3.5 mA from a 1.8 V supply, the VCO has -113.65 dBc/Hz phase noise at 1 MHz offset from the carrier.",
"title": ""
},
{
"docid": "34858704b21e0665b4774802f4e66958",
"text": "Though based on abstractions of nature, current evolutionary algorithms and artificial life models lack the drive to complexity characteristic of natural evolution. Thus this paper argues that the prevalent fitness-pressure-based abstraction does not capture how natural evolution discovers complexity. Alternatively, this paper proposes that natural evolution can be abstracted as a process that discovers many ways to express the same functionality. That is, all successful organisms must meet the same minimal criteria of survival and reproduction. This abstraction leads to the key idea in this paper: Searching for novel ways of meeting the same minimal criteria, which is an accelerated model of this new abstraction, may be an effective search algorithm. Thus the existing novelty search method, which rewards any new behavior, is extended to enforce minimal criteria. Such minimal criteria novelty search prunes the space of viable behaviors and may often be more efficient than the search for novelty alone. In fact, when compared to the raw search for novelty and traditional fitness-based search in the two maze navigation experiments in this paper, minimal criteria novelty search evolves solutions more consistently. It is possible that refining the evolutionary computation abstraction in this way may lead to solving more ambitious problems and evolving more complex artificial organisms.",
"title": ""
},
{
"docid": "8bbf3135920759de0228f5ff9f5e23f3",
"text": "Campuses and cities of the near future will be equipped with vast numbers of IoT devices. Operators of such environments may not even be fully aware of their IoT assets, let alone whether each IoT device is functioning properly safe from cyber-attacks. This paper proposes the use of network traffic analytics to characterize IoT devices, including their typical behaviour mode. We first collect and synthesize traffic traces from a smart-campus environment instrumented with a diversity of IoT devices including cameras, lights, appliances, and health-monitors; our traces, collected over a period of 3 weeks, are released as open data to the public. We then analyze the traffic traces to characterize statistical attributes such as data rates and burstiness, activity cycles, and signalling patterns, for over 20 IoT devices deployed in our environment. Finally, using these attributes, we develop a classification method that can not only distinguish IoT from non-IoT traffic, but also identify specific IoT devices with over 95% accuracy. Our study empowers operators of smart cities and campuses to discover and monitor their IoT assets based on their network behaviour.",
"title": ""
},
{
"docid": "9567f5c8f637570ed47f15adbddc8ab2",
"text": "Please cite this article in press as: A. Daud et al., j.knosys.2010.04.008 This paper addresses the problem of semantics-based temporal expert finding, which means identifying a person with given expertise for different time periods. For example, many real world applications like reviewer matching for papers and finding hot topics in newswire articles need to consider time dynamics. Intuitively there will be different reviewers and reporters for different topics during different time periods. Traditional approaches used graph-based link structure by using keywords based matching and ignored semantic information, while topic modeling considered semantics-based information without conferences influence (richer text semantics and relationships between authors) and time information simultaneously. Consequently they result in not finding appropriate experts for different time periods. We propose a novel Temporal-Expert-Topic (TET) approach based on Semantics and Temporal Information based Expert Search (STMS) for temporal expert finding, which simultaneously models conferences influence and time information. Consequently, topics (semantically related probabilistic clusters of words) occurrence and correlations change over time, while the meaning of a particular topic almost remains unchanged. By using Bayes Theorem we can obtain topically related experts for different time periods and show how experts’ interests and relationships change over time. Experimental results on scientific literature dataset show that the proposed generalized time topic modeling approach significantly outperformed the non-generalized time topic modeling approaches, due to simultaneously capturing conferences influence with time information. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "6330bfa6be0361e2c0d2985372db9f0a",
"text": "The increasing pervasiveness of the internet, broadband connections and the emergence of digital compression technologies have dramatically changed the face of digital music piracy. Digitally compressed music files are essentially a perfect public economic good, and illegal copying of these files has increasingly become rampant. This paper presents a study on the behavioral dynamics which impact the piracy of digital audio files, and provides a contrast with software piracy. Our results indicate that the general ethical model of software piracy is also broadly applicable to audio piracy. However, significant enough differences with software underscore the unique dynamics of audio piracy. Practical implications that can help the recording industry to effectively combat piracy, and future research directions are highlighted.",
"title": ""
},
{
"docid": "7fed6f57ba2e17db5986d47742dc1a9c",
"text": "Partial Least Squares Regression (PLSR) is a linear regression technique developed to deal with high-dimensional regressors and one or several response variables. In this paper we introduce robustified versions of the SIMPLS algorithm being the leading PLSR algorithm because of its speed and efficiency. Because SIMPLS is based on the empirical cross-covariance matrix between the response variables and the regressors and on linear least squares regression, the results are affected by abnormal observations in the data set. Two robust methods, RSIMCD and RSIMPLS, are constructed from a robust covariance matrix for high-dimensional data and robust linear regression. We introduce robust RMSECV and RMSEP values for model calibration and model validation. Diagnostic plots are constructed to visualize and classify the outliers. Several simulation results and the analysis of real data sets show the effectiveness and the robustness of the new approaches. Because RSIMPLS is roughly twice as fast as RSIMCD, it stands out as the overall best method.",
"title": ""
},
{
"docid": "ceb02ddf8b2085d67ccf27c3c5b57dfd",
"text": "We present a novel latent embedding model for learning a compatibility function between image and class embeddings, in the context of zero-shot classification. The proposed method augments the state-of-the-art bilinear compatibility model by incorporating latent variables. Instead of learning a single bilinear map, it learns a collection of maps with the selection, of which map to use, being a latent variable for the current image-class pair. We train the model with a ranking based objective function which penalizes incorrect rankings of the true class for a given image. We empirically demonstrate that our model improves the state-of-the-art for various class embeddings consistently on three challenging publicly available datasets for the zero-shot setting. Moreover, our method leads to visually highly interpretable results with clear clusters of different fine-grained object properties that correspond to different latent variable maps.",
"title": ""
},
{
"docid": "36ed994422af57284e1c98b41b46a9fc",
"text": "The atypical face scanning patterns in individuals with Autism Spectrum Disorder (ASD) has been repeatedly discovered by previous research. The present study examined whether their face scanning patterns could be potentially useful to identify children with ASD by adopting the machine learning algorithm for the classification purpose. Particularly, we applied the machine learning method to analyze an eye movement dataset from a face recognition task [Yi et al., 2016], to classify children with and without ASD. We evaluated the performance of our model in terms of its accuracy, sensitivity, and specificity of classifying ASD. Results indicated promising evidence for applying the machine learning algorithm based on the face scanning patterns to identify children with ASD, with a maximum classification accuracy of 88.51%. Nevertheless, our study is still preliminary with some constraints that may apply in the clinical practice. Future research should shed light on further valuation of our method and contribute to the development of a multitask and multimodel approach to aid the process of early detection and diagnosis of ASD. Autism Res 2016, 9: 888-898. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "b8c455777e428aa07abe1cecc34f2494",
"text": "Successive point-of-interest (POI) recommendation in location-based social networks (LBSNs) becomes a significant task since it helps users to navigate a number of candidate POIs and provides the best POI recommendations based on users’ most recent check-in knowledge. However, all existing methods for successive POI recommendation only focus on modeling the correlation between POIs based on users’ check-in sequences, but ignore an important fact that successive POI recommendation is a time-subtle recommendation task. In fact, even with the same previous check-in information, users would prefer different successive POIs at different time. To capture the impact of time on successive POI recommendation, in this paper, we propose a spatial-temporal latent ranking (STELLAR) method to explicitly model the interactions among user, POI, and time. In particular, the proposed STELLAR model is built upon a ranking-based pairwise tensor factorization framework with a fine-grained modeling of user-POI, POI-time, and POI-POI interactions for successive POI recommendation. Moreover, we propose a new interval-aware weight utility function to differentiate successive check-ins’ correlations, which breaks the time interval constraint in prior work. Evaluations on two real-world datasets demonstrate that the STELLAR model outperforms state-of-the-art successive POI recommendation model about 20% in Precision@5 and Recall@5.",
"title": ""
},
{
"docid": "3ee4c2bad88020c945227039b01ddd4c",
"text": "This paper proposes a new cellphone application algorithm which has been implemented for the prediction of energy consumption at electric vehicle (EV) charging stations at the University of California, Los Angeles (UCLA). For this interactive user application, the total time for accessing the database, processing the data, and making the prediction needs to be within a few seconds. We first analyze three relatively fast machine learning-based time series prediction algorithms and find that the nearest neighbor (NN) algorithm (k NN with k = 1) shows better accuracy. Considering the sparseness of the time series of the charging records, we then discuss the new algorithm based on the new proposed time-weighted dot product (TWDP) dissimilarity measure to improve the accuracy and processing time. Two applications have been designed on top of the proposed prediction algorithm: one predicts the expected available energy at the outlet and the other one predicts the expected charging finishing time. The total time, including accessing the database, data processing, and prediction is approximately 1 s for both applications. The granularity of the prediction is 1 h and the horizon is 24 h; data have been collected from 20 EV charging outlets.",
"title": ""
},
{
"docid": "b7965cf7a1e4746cfd0e93993ea72bf2",
"text": "The accuracy of the positions of a pedestrian is very important and useful information for the statistics, advertisement, and safety of different applications. Although the GPS chip in a smartphone is currently the most convenient device to obtain the positions, it still suffers from the effect of multipath and nonline-of-sight propagation in urban canyons. These reflections could greatly degrade the performance of a GPS receiver. This paper describes an approach to estimate a pedestrian position by the aid of a 3-D map and a ray-tracing method. The proposed approach first distributes the numbers of position candidates around a reference position. The weighting of the position candidates is evaluated based on the similarity between the simulated pseudorange and the observed pseudorange. Simulated pseudoranges are calculated using a ray-tracing simulation and a 3-D map. Finally, the proposed method was verified through field experiments in an urban canyon in Tokyo. According to the results, the proposed approach successfully estimates the reflection and direct paths so that the estimate appears very close to the ground truth, whereas the result of a commercial GPS receiver is far from the ground truth. The results show that the proposed method has a smaller error distance than the conventional method.",
"title": ""
},
{
"docid": "64efd590a51fc3cab97c9b4b17ba9b40",
"text": "The problem of detecting bots, automated social media accounts governed by software but disguising as human users, has strong implications. For example, bots have been used to sway political elections by distorting online discourse, to manipulate the stock market, or to push anti-vaccine conspiracy theories that caused health epidemics. Most techniques proposed to date detect bots at the account level, by processing large amount of social media posts, and leveraging information from network structure, temporal dynamics, sentiment analysis, etc. In this paper, we propose a deep neural network based on contextual long short-term memory (LSTM) architecture that exploits both content and metadata to detect bots at the tweet level: contextual features are extracted from user metadata and fed as auxiliary input to LSTM deep nets processing the tweet text. Another contribution that we make is proposing a technique based on synthetic minority oversampling to generate a large labeled dataset, suitable for deep nets training, from a minimal amount of labeled data (roughly 3,000 examples of sophisticated Twitter bots). We demonstrate that, from just one single tweet, our architecture can achieve high classification accuracy (AUC > 96%) in separating bots from humans. We apply the same architecture to account-level bot detection, achieving nearly perfect classification accuracy (AUC > 99%). Our system outperforms previous state of the art while leveraging a small and interpretable set of features yet requiring minimal training data.",
"title": ""
},
{
"docid": "b0375e481218c48fe775416d75b5e85c",
"text": "BACKGROUND\nEtanercept, a soluble tumor necrosis factor receptor, has been shown to lessen disease severity in adult patients with psoriasis. We assessed the efficacy and safety of etanercept in children and adolescents with moderate-to-severe plaque psoriasis.\n\n\nMETHODS\nIn this 48-week study, 211 patients with psoriasis (4 to 17 years of age) were initially randomly assigned to a double-blind trial of 12 once-weekly subcutaneous injections of placebo or 0.8 mg of etanercept per kilogram of body weight (to a maximum of 50 mg), followed by 24 weeks of once-weekly open-label etanercept. At week 36, 138 patients underwent a second randomization to placebo or etanercept to investigate the effects of withdrawal and retreatment. The primary end point was 75% or greater improvement from baseline in the psoriasis area-and-severity index (PASI 75) at week 12. Secondary end points included PASI 50, PASI 90, physician's global assessment of clear or almost clear of disease, and safety assessments.\n\n\nRESULTS\nAt week 12, 57% of patients receiving etanercept achieved PASI 75, as compared with 11% of those receiving placebo (P<0.001). A significantly higher proportion of patients in the etanercept group than in the placebo group had PASI 50 (75% vs. 23%), PASI 90 (27% vs. 7%), and a physician's global assessment of clear or almost clear (53% vs. 13%) at week 12 (P<0.001). At week 36, after 24 weeks of open-label etanercept, rates of PASI 75 were 68% and 65% for patients initially assigned to etanercept and placebo, respectively. During the withdrawal period from week 36 to week 48, response was lost by 29 of 69 patients (42%) assigned to placebo at the second randomization. Four serious adverse events (including three infections) occurred in three patients during treatment with open-label etanercept; all resolved without sequelae.\n\n\nCONCLUSIONS\nEtanercept significantly reduced disease severity in children and adolescents with moderate-to-severe plaque psoriasis. (ClinicalTrials.gov number, NCT00078819 [ClinicalTrials.gov].).",
"title": ""
},
{
"docid": "aa8ae1fc471c46b5803bfa1303cb7001",
"text": "It is widely recognized that steganography with sideinformation in the form of a precover at the sender enjoys significantly higher empirical security than other embedding schemes. Despite the success of side-informed steganography, current designs are purely heuristic and little has been done to develop the embedding rule from first principles. Building upon the recently proposed MiPOD steganography, in this paper we impose multivariate Gaussian model on acquisition noise and estimate its parameters from the available precover. The embedding is then designed to minimize the KL divergence between cover and stego distributions. In contrast to existing heuristic algorithms that modulate the embedding costs by 1–2|e|, where e is the rounding error, in our model-based approach the sender should modulate the steganographic Fisher information, which is a loose equivalent of embedding costs, by (1–2|e|)^2. Experiments with uncompressed and JPEG images show promise of this theoretically well-founded approach. Introduction Steganography is a privacy tool in which messages are embedded in inconspicuous cover objects to hide the very presence of the communicated secret. Digital media, such as images, video, and audio are particularly suitable cover sources because of their ubiquity and the fact that they contain random components, the acquisition noise. On the other hand, digital media files are extremely complex objects that are notoriously hard to describe with sufficiently accurate and estimable statistical models. This is the main reason for why current steganography in such empirical sources [3] lacks perfect security and heavily relies on heuristics, such as embedding “costs” and intuitive modulation factors. Similarly, practical steganalysis resorts to increasingly more complex high-dimensional descriptors (rich models) and advanced machine learning paradigms, including ensemble classifiers and deep learning. Often, a digital media object is subjected to processing and/or format conversion prior to embedding the secret. The last step in the processing pipeline is typically quantization. In side-informed steganography with precover [21], the sender makes use of the unquantized cover values during embedding to hide data in a more secure manner. The first embedding scheme of this type described in the literature is the embedding-while-dithering [14] in which the secret message was embedded by perturbing the process of color quantization and dithering when converting a true-color image to a palette format. Perturbed quantization [15] started another direction in which rounding errors of DCT coefficients during JPEG compression were used to modify the embedding algorithm. This method has been advanced through a series of papers [23, 24, 29, 20], culminating with approaches based on advanced coding techniques with a high level of empirical security [19, 18, 6]. Side-information can have many other forms. Instead of one precover, the sender may have access to the acquisition oracle (a camera) and take multiple images of the same scene. These multiple exposures can be used to estimate the acquisition noise and also incorporated during embedding. This research direction has been developed to a lesser degree compared to steganography with precover most likely due to the difficulty of acquiring the required imagery and modeling the differences between acquisitions. In a series of papers [10, 12, 11], Franz et al. 
proposed a method in which multiple scans of the same printed image on a flat-bed scanner were used to estimate the model of the acquisition noise at every pixel. This requires acquiring a potentially large number of scans, which makes this approach rather labor intensive. Moreover, differences in the movement of the scanner head between individual scans lead to slight spatial misalignment that complicates using this type of side-information properly. Recently, the authors of [7] showed how multiple JPEG images of the same scene can be used to infer the preferred direction of embedding changes. By working with quantized DCT coefficients instead of pixels, the embedding is less sensitive to small differences between multiple acquisitions. Despite the success of side-informed schemes, there appears to be an alarming lack of theoretical analysis that would either justify the heuristics or suggest a well-founded (and hopefully more powerful) approach. In [13], the author has shown that the precover compensates for the lack of the cover model. In particular, for a Gaussian model of acquisition noise, precover-informed rounding is more secure than embedding designed to preserve the cover model estimated from the precover image assuming the cover is “sufficiently non-stationary.” Another direction worth mentioning in this context is the bottom-up model-based approach recently proposed by Bas [2]. The author showed that a high-capacity steganographic scheme with a rather low empirical detectability can be constructed when the process of digitally developing a RAW sensor capture is sufficiently simplified. The impact of embedding is masked as an increased level of photonic noise, e.g., due to a higher ISO setting. It will likely be rather difficult, however, to extend this approach to realistic processing pipelines. Inspired by the success of the multivariate Gaussian model in steganography for digital images [25, 17, 26], in this paper we adopt the same model for the precover and then derive the embedding rule to minimize the KL divergence between cover and stego distributions. The sideinformation is used to estimate the parameters of the acquisition noise and the noise-free scene. In the next section, we review current state of the art in heuristic side-informed steganography with precover. In the following section, we introduce a formal model of image acquisition. In Section “Side-informed steganography with MVG acquisition noise”, we describe the proposed model-based embedding method, which is related to heuristic approaches in Section “Connection to heuristic schemes.” The main bulk of results from experiments on images represented in the spatial and JPEG domain appear in Section “Experiments.” In the subsequent section, we investigate whether the public part of the selection channel, the content adaptivity, can be incorporated in selection-channel-aware variants of steganalysis features to improve detection of side-informed schemes. The paper is then closed with Conclusions. The following notation is adopted for technical arguments. Matrices and vectors will be typeset in boldface, while capital letters are reserved for random variables with the corresponding lower case symbols used for their realizations. In this paper, we only work with grayscale cover images. Precover values will be denoted with xij ∈ R, while cover and stego values will be integer arrays cij and sij , 1 ≤ i ≤ n1, 1 ≤ j ≤ n2, respectively. 
The symbols [x], dxe, and bxc are used for rounding and rounding up and down the value of x. By N (μ,σ2), we understand Gaussian distribution with mean μ and variance σ2. The complementary cumulative distribution function of a standard normal variable (the tail probability) will be denoted Q(x) = ∫∞ x (2π)−1/2 exp ( −z2/2 ) dz. Finally, we say that f(x)≈ g(x) when limx→∞ f(x)/g(x) = 1. Prior art in side-informed steganography with precover All modern steganographic schemes, including those that use side-information, are implemented within the paradigm of distortion minimization. First, each cover element cij is assigned a “cost” ρij that measures the impact on detectability should that element be modified during embedding. The payload is then embedded while minimizing the sum of costs of all changed cover elements, ∑ cij 6=sij ρij . A steganographic scheme that embeds with the minimal expected cost changes each cover element with probability βij = exp(−λρij) 1 +exp(−λρij) , (1) if the embedding operation is constrained to be binary, and βij = exp(−λρij) 1 +2exp(−λρij) , (2) for a ternary scheme with equal costs of changing cij to cij ± 1. Syndrome-trellis codes [8] can be used to build practical embedding schemes that operate near the rate–distortion bound. For steganography designed to minimize costs (embedding distortion), a popular heuristic to incorporate a precover value xij during embedding is to modulate the costs based on the rounding error eij = cij − xij , −1/2≤ eij ≤ 1/2 [23, 29, 20, 18, 19, 6, 24]. A binary embedding scheme modulates the cost of changing cij = [xij ] to [xij ] + sign(eij) by 1−2|eij |, while prohibiting the change to [xij ]− sign(eij): ρij(sign(eij)) = (1−2|eij |)ρij (3) ρij(−sign(eij)) = Ω, (4) where ρij(u) is the cost of modifying the cover value by u∈ {−1,1}, ρij are costs of some additive embedding scheme, and Ω is a large constant. This modulation can be justified heuristically because when |eij | ≈ 1/2, a small perturbation of xij could cause cij to be rounded to the other side. Such coefficients are thus assigned a proportionally smaller cost because 1− 2|eij | ≈ 0. On the other hand, the costs are unchanged when eij ≈ 0, as it takes a larger perturbation of the precover to change the rounded value. A ternary version of this embedding strategy [6] allows modifications both ways with costs: ρij(sign(eij)) = (1−2|eij |)ρij (5) ρij(−sign(eij)) = ρij . (6) Some embedding schemes do not use costs and, instead, minimize statistical detectability. In MiPOD [25], the embedding probabilities βij are derived from their impact on the cover multivariate Gaussian model by solving the following equation for each pixel ij: βijIij = λ ln 1−2βij βij , (7) where Iij = 2/σ̂4 ij is the Fisher information with σ̂ 2 ij an estimated variance of the acquisition noise at pixel ij, and λ is a Lagrange multiplier determined by the payload size. To incorporate the side-information, the sender first converts the embedding probabilities into costs and then modulates them as in (3) or (5). This can be done b",
"title": ""
},
{
"docid": "6381c10a963b709c4af88047f38cc08c",
"text": "A great deal of research has been focused on solving the job-shop problem (ΠJ), over the last forty years, resulting in a wide variety of approaches. Recently, much effort has been concentrated on hybrid methods to solve ΠJ as a single technique cannot solve this stubborn problem. As a result much effort has recently been concentrated on techniques that combine myopic problem specific methods and a meta-strategy which guides the search out of local optima. These approaches currently provide the best results. Such hybrid techniques are known as iterated local search algorithms or meta-heuristics. In this paper we seek to assess the work done in the job-shop domain by providing a review of many of the techniques used. The impact of the major contributions is indicated by applying these techniques to a set of standard benchmark problems. It is established that methods such as Tabu Search, Genetic Algorithms, Simulated Annealing should be considered complementary rather than competitive. In addition this work suggests guide-lines on features that should be incorporated to create a good ΠJ system. Finally the possible direction for future work is highlighted so that current barriers within ΠJ maybe surmounted as we approach the 21st Century.",
"title": ""
},
{
"docid": "ba4fb2947987c87a5103616d4bc138de",
"text": "In intelligent tutoring systems with natural language dialogue, speech act classification, the task of detecting learners’ intentions, informs the system’s response mechanism. In this paper, we propose supervised machine learning models for speech act classification in the context of an online collaborative learning game environment. We explore the role of context (i.e. speech acts of previous utterances) for speech act classification. We compare speech act classification models trained and tested with contextual and non-contextual features (contents of the current utterance). The accuracy of the proposed models is high. A surprising finding is the modest role of context in automatically predicting the speech acts.",
"title": ""
},
{
"docid": "cea0f4b7409729fd310024d2e9a31b71",
"text": "Relative ranging between Wireless Sensor Network (WSN) nod es is considered to be an important requirement for a number of dis tributed applications. This paper focuses on a two-way, time of flight (ToF) te chnique which achieves good accuracy in estimating the point-to-point di s ance between two wireless nodes. The underlying idea is to utilize a two-way t ime transfer approach in order to avoid the need for clock synchronization b etween the participating wireless nodes. Moreover, by employing multipl e ToF measurements, sub-clock resolution is achieved. A calibration stage is us ed to estimate the various delays that occur during a message exchange and require subtraction from the initial timed value. The calculation of the range betwee n the nodes takes place on-node making the proposed scheme suitable for distribute d systems. Care has been taken to exclude the erroneous readings from the set of m easurements that are used in the estimation of the desired range. The two-way T oF technique has been implemented on commercial off-the-self (COTS) device s without the need for additional hardware. The system has been deployed in var ous experimental locations both indoors and outdoors and the obtained result s reveal that accuracy between 1m RMS and 2.5m RMS in line-of-sight conditions over a 42m range can be achieved.",
"title": ""
},
{
"docid": "36a616fb73473edecb1df2db0f3d1870",
"text": "We study online learnability of a wide class of problems, extending the results of Rakhlin et al. (2010a) to general notions of performance measure well beyond external regret. Our framework simultaneously captures such well-known notions as internal and general Φ-regret, learning with non-additive global cost functions, Blackwell's approachability, calibration of forecasters, and more. We show that learnability in all these situations is due to control of the same three quantities: a martingale convergence term, a term describing the ability to perform well if future is known, and a generalization of sequential Rademacher complexity, studied in Rakhlin et al. (2010a). Since we directly study complexity of the problem instead of focusing on efficient algorithms, we are able to improve and extend many known results which have been previously derived via an algorithmic construction. Disciplines Computer Sciences | Statistics and Probability This conference paper is available at ScholarlyCommons: http://repository.upenn.edu/statistics_papers/133 JMLR: Workshop and Conference Proceedings 19 (2011) 559–594 24th Annual Conference on Learning Theory Online Learning: Beyond Regret Alexander Rakhlin Department of Statistics University of Pennsylvania Karthik Sridharan TTI-Chicago Ambuj Tewari Department of Computer Science University of Texas at Austin Editor: Sham Kakade, Ulrike von Luxburg Abstract We study online learnability of a wide class of problems, extending the results of Rakhlin et al. (2010a) to general notions of performance measure well beyond external regret. Our framework simultaneously captures such well-known notions as internal and general Φregret, learning with non-additive global cost functions, Blackwell’s approachability, calibration of forecasters, and more. We show that learnability in all these situations is due to control of the same three quantities: a martingale convergence term, a term describing the ability to perform well if future is known, and a generalization of sequential Rademacher complexity, studied in Rakhlin et al. (2010a). Since we directly study complexity of the problem instead of focusing on efficient algorithms, we are able to improve and extend many known results which have been previously derived via an algorithmic construction.We study online learnability of a wide class of problems, extending the results of Rakhlin et al. (2010a) to general notions of performance measure well beyond external regret. Our framework simultaneously captures such well-known notions as internal and general Φregret, learning with non-additive global cost functions, Blackwell’s approachability, calibration of forecasters, and more. We show that learnability in all these situations is due to control of the same three quantities: a martingale convergence term, a term describing the ability to perform well if future is known, and a generalization of sequential Rademacher complexity, studied in Rakhlin et al. (2010a). Since we directly study complexity of the problem instead of focusing on efficient algorithms, we are able to improve and extend many known results which have been previously derived via an algorithmic construction.",
"title": ""
},
{
"docid": "63a14ae93563bc66d9880c4c04c0c686",
"text": "This brief analyzes the jitter as well as the power dissipation of phase-locked loops (PLLs). It aims at defining a benchmark figure-of-merit (FOM) that is compatible with the well-known FOM for oscillators but now extended to an entire PLL. The phase noise that is generated by the thermal noise in the oscillator and loop components is calculated. The power dissipation is estimated, focusing on the required dynamic power. The absolute PLL output jitter is calculated, and the optimum PLL bandwidth that gives minimum jitter is derived. It is shown that, with a steep enough input reference clock, this minimum jitter is independent of the reference frequency and output frequency for a given PLL power budget. Based on these insights, a benchmark FOM for PLL designs is proposed.",
"title": ""
},
{
"docid": "ca544972e6fe3c051f72d04608ff36c1",
"text": "The prefrontal cortex (PFC) plays a key role in controlling goal-directed behavior. Although a variety of task-related signals have been observed in the PFC, whether they are differentially encoded by various cell types remains unclear. Here we performed cellular-resolution microendoscopic Ca(2+) imaging from genetically defined cell types in the dorsomedial PFC of mice performing a PFC-dependent sensory discrimination task. We found that inhibitory interneurons of the same subtype were similar to each other, but different subtypes preferentially signaled different task-related events: somatostatin-positive neurons primarily signaled motor action (licking), vasoactive intestinal peptide-positive neurons responded strongly to action outcomes, whereas parvalbumin-positive neurons were less selective, responding to sensory cues, motor action, and trial outcomes. Compared to each interneuron subtype, pyramidal neurons showed much greater functional heterogeneity, and their responses varied across cortical layers. Such cell-type and laminar differences in neuronal functional properties may be crucial for local computation within the PFC microcircuit.",
"title": ""
}
] |
scidocsrr
|
8f754bda1b9615ba479f386b86764ae7
|
MFCC and its applications in speaker recognition
|
[
{
"docid": "ea8716e339cdc51210f64436a5c91c44",
"text": "Feature selection has been the focus of interest for quite some time and much work has been done. With the creation of huge databases and the consequent requirements for good machine learning techniques, new problems arise and novel approaches to feature selection are in demand. This survey is a comprehensive overview of many existing methods from the 1970’s to the present. It identifies four steps of a typical feature selection method, and categorizes the different existing methods in terms of generation procedures and evaluation functions, and reveals hitherto unattempted combinations of generation procedures and evaluation functions. Representative methods are chosen from each category for detailed explanation and discussion via example. Benchmark datasets with different characteristics are used for comparative study. The strengths and weaknesses of different methods are explained. Guidelines for applying feature selection methods are given based on data types and domain characteristics. This survey identifies the future research areas in feature selection, introduces newcomers to this field, and paves the way for practitioners who search for suitable methods for solving domain-specific real-world applications. (Intelligent Data Analysis, Vol. I, no. 3, http:llwwwelsevier.co&ocate/ida)",
"title": ""
}
] |
[
{
"docid": "ebc77c29a8f761edb5e4ca588b2e6fb5",
"text": "Gigantomastia by definition means bilateral benign progressive breast enlargement to a degree that requires breast reduction surgery to remove more than 1800 g of tissue on each side. It is seen at puberty or during pregnancy. The etiology for this condition is still not clear, but surgery remains the mainstay of treatment. We present a unique case of Gigantomastia, which was neither related to puberty nor pregnancy and has undergone three operations so far for recurrence.",
"title": ""
},
{
"docid": "e6c1747e859f64517e7dddb6c1fd900e",
"text": "More and more mobile objects are now equipped with sensors allowing real time monitoring of their movements. Nowadays, the data produced by these sensors can be stored in spatio-temporal databases. The main goal of this article is to perform a data mining on a huge quantity of mobile object’s positions moving in an open space in order to deduce its behaviour. New tools must be defined to ease the detection of outliers. First of all, a zone graph is set up in order to define itineraries. Then, trajectories of mobile objects following the same itinerary are extracted from the spatio-temporal database and clustered. A statistical analysis on this set of trajectories lead to spatio-temporal patterns such as the main route and spatio-temporal channel followed by most of trajectories of the set. Using these patterns, unusual situations can be detected. Furthermore, a mobile object’s behaviour can be defined by comparing its positions with these spatio-temporal patterns. In this article, this technique is applied to ships’ movements in an open maritime area. Unusual behaviours such as being ahead of schedule or delayed or veering to the left or to the right of the main route are detected. A case study illustrates these processes based on ships’ positions recorded during two years around the Brest area. This method can be extended to almost all kinds of mobile objects (pedestrians, aircrafts, hurricanes, ...) moving in an open area.",
"title": ""
},
{
"docid": "4c12c08d72960b3b75662e9459e23079",
"text": "Graph structures play a critical role in computer vision, but they are inconvenient to use in pattern recognition tasks because of their combinatorial nature and the consequent difficulty in constructing feature vectors. Spectral representations have been used for this task which are based on the eigensystem of the graph Laplacian matrix. However, graphs of different sizes produce eigensystems of different sizes where not all eigenmodes are present in both graphs. We use the Levenshtein distance to compare spectral representations under graph edit operations which add or delete vertices. The spectral representations are therefore of different sizes. We use the concept of the string-edit distance to allow for the missing eigenmodes and compare the correct modes to each other. We evaluate the method by first using generated graphs to compare the effect of vertex deletion operations. We then examine the performance of the method on graphs from a shape database.",
"title": ""
},
{
"docid": "f3abf5a6c20b6fff4970e1e63c0e836b",
"text": "We demonstrate a physically-based technique for predicting the drape of a wide variety of woven fabrics. The approach exploits a theoretical model that explicitly represents the microstructure of woven cloth with interacting particles, rather than utilizing a continuum approximation. By testing a cloth sample in a Kawabata fabric testing device, we obtain data that is used to tune the model's energy functions, so that it reproduces the draping behavior of the original material. Photographs, comparing the drape of actual cloth with visualizations of simulation results, show that we are able to reliably model the unique large-scale draping characteristics of distinctly different fabric types.",
"title": ""
},
{
"docid": "209b304009db4a04400da178d19fe63e",
"text": "Mecanum wheels give vehicles and robots autonomous omni-directional capabilities, while regular wheels don’t. The omni-directionality that such wheels provide makes the vehicle extremely maneuverable, which could be very helpful in different indoor and outdoor applications. However, current Mecanum wheel designs can only operate on flat hard surfaces, and perform very poorly on rough terrains. This paper presents two modified Mecanum wheel designs targeted for complex rough terrains and discusses their advantages and disadvantages in comparison to regular Mecanum wheels. The wheels proposed here are particularly advantageous for overcoming obstacles up to 75% of the overall wheel diameter in lateral motion which significantly facilitates the lateral motion of vehicles on hard rough surfaces and soft soils such as sand which cannot be achieved using other types of wheels. The paper also presents control aspects that need to be considered when controlling autonomous vehicles/robots using the proposed wheels.",
"title": ""
},
{
"docid": "71140208f527bc0b1f193550a587d9ed",
"text": "Data sets are often modeled as point clouds in R, for D large. It is often assumed that the data has some interesting low-dimensional structure, for example that of a d-dimensional manifold M, with d much smaller than D. When M is simply a linear subspace, one may exploit this assumption for encoding efficiently the data by projecting onto a dictionary of d vectors in R (for example found by SVD), at a cost (n + D)d for n data points. When M is nonlinear, there are no “explicit” constructions of dictionaries that achieve a similar efficiency: typically one uses either random dictionaries, or dictionaries obtained by black-box optimization. In this paper we construct data-dependent multi-scale dictionaries that aim at efficient encoding and manipulating of the data. Their construction is fast, and so are the algorithms that map data points to dictionary coefficients and vice versa. In addition, data points are guaranteed to have a sparse representation in terms of the dictionary. We think of dictionaries as the analogue of wavelets, but for approximating point clouds rather than functions.",
"title": ""
},
{
"docid": "1c9dd9b98b141e87ca7b74e995630456",
"text": "Transportation systems in mega-cities are often affected by various kinds of events such as natural disasters, accidents, and public gatherings. Highly dense and complicated networks in the transportation systems propagate confusion in the network because they offer various possible transfer routes to passengers. Visualization is one of the most important techniques for examining such cascades of unusual situations in the huge networks. This paper proposes visual integration of traffic analysis and social media analysis using two forms of big data: smart card data on the Tokyo Metro and social media data on Twitter. Our system provides multiple coordinated views to visually, intuitively, and simultaneously explore changes in passengers' behavior and abnormal situations extracted from smart card data and situational explanations from real voices of passengers such as complaints about services extracted from social media data. We demonstrate the possibilities and usefulness of our novel visualization environment using a series of real data case studies and domain experts' feedbacks about various kinds of events.",
"title": ""
},
{
"docid": "a8c1224f291df5aeb655a2883b16bcfb",
"text": "We present a scalable approach to automatically suggest relevant clothing products, given a single image without metadata. We formulate the problem as cross-scenario retrieval: the query is a real-world image, while the products from online shopping catalogs are usually presented in a clean environment. We divide our approach into two main stages: a) Starting from articulated pose estimation, we segment the person area and cluster promising image regions in order to detect the clothing classes present in the query image. b) We use image retrieval techniques to retrieve visually similar products from each of the detected classes. We achieve clothing detection performance comparable to the state-of-the-art on a very recent annotated dataset, while being more than 50 times faster. Finally, we present a large scale clothing suggestion scenario, where the product database contains over one million products.",
"title": ""
},
{
"docid": "4f7fcd45fa7b27dd3cf25ed441b7d527",
"text": "Forecasting financial time-series has long been among the most challenging problems in financial market analysis. In order to recognize the correct circumstances to enter or exit the markets investors usually employ statistical models (or even simple qualitative methods). However, the inherently noisy and stochastic nature of markets severely limits the forecasting accuracy of the used models. The introduction of electronic trading and the availability of large amounts of data allow for developing novel machine learning techniques that address some of the difficulties faced by the aforementioned methods. In this work we propose a deep learning methodology, based on recurrent neural networks, that can be used for predicting future price movements from large-scale high-frequency time-series data on Limit Order Books. The proposed method is evaluated using a large-scale dataset of limit order book events.",
"title": ""
},
{
"docid": "c26f27dd49598b7f9120f9a31dccb012",
"text": "The effects of music training in relation to brain plasticity have caused excitement, evident from the popularity of books on this topic among scientists and the general public. Neuroscience research has shown that music training leads to changes throughout the auditory system that prime musicians for listening challenges beyond music processing. This effect of music training suggests that, akin to physical exercise and its impact on body fitness, music is a resource that tones the brain for auditory fitness. Therefore, the role of music in shaping individual development deserves consideration.",
"title": ""
},
{
"docid": "f0284358cf418353b5b46d73bd887c77",
"text": "BACKGROUND\nSevere acute malnutrition has continued to be growing problem in Sub Saharan Africa. We investigated the factors associated with morbidity and mortality of under-five children admitted and managed in hospital for severe acute malnutrition.\n\n\nMETHODS\nIt was a retrospective quantitative review of hospital based records using patient files, ward death and discharge registers. It was conducted focussing on demographic, clinical and mortality data which was extracted on all children aged 0-60 months admitted to the University Teaching Hospital in Zambia from 2009 to 2013. Cox proportional Hazards regression was used to identify predictors of mortality and Kaplan Meier curves where used to predict the length of stay on the ward.\n\n\nRESULTS\nOverall (n = 9540) under-five children with severe acute malnutrition were admitted during the period under review, comprising 5148 (54%) males and 4386 (46%) females. Kwashiorkor was the most common type of severe acute malnutrition (62%) while diarrhoea and pneumonia were the most common co-morbidities. Overall mortality was at 46% with children with marasmus having the lowest survival rates on Kaplan Meier graphs. HIV infected children were 80% more likely to die compared to HIV uninfected children (HR = 1.8; 95%CI: 1.6-1.2). However, over time (2009-2013), admissions and mortality rates declined significantly (mortality 51% vs. 35%, P < 0.0001).\n\n\nCONCLUSIONS\nWe find evidence of declining mortality among the core morbid nutritional conditions, namely kwashiorkor, marasmus and marasmic-kwashiorkor among under-five children admitted at this hospital. The reasons for this are unclear or could be beyond the scope of this study. This decline in numbers could be either be associated with declining admissions or due to the interventions that have been implemented at community level to combat malnutrition such as provision of \"Ready to Use therapeutic food\" and prevention of mother to child transmission of HIV at health centre level. Strategies that enhance and expand growth monitoring interventions at community level to detect malnutrition early to reduce incidence of severe cases and mortality need to be strengthened.",
"title": ""
},
{
"docid": "dc34a320af0e7a104686a36f7a6101c3",
"text": "In this paper, the proposed SIMO (Single input multiple outputs) DC-DC converter based on coupled inductor. The required controllable high DC voltage and intermediate DC voltage with high voltage gain from low input voltage sources, like renewable energy, can be achieved easily from the proposed converter. The high voltage DC bus can be used as the leading power for a DC load and intermediate voltage DC output terminals can charge supplementary power sources like battery modules. This converter operates simply with one power switch. It incorporates the techniques of voltage clamping (VC) and zero current switching (ZCS). The simulation result in PSIM software shows that the aims of high efficiency, high voltage gain, several output voltages with unlike levels, are achieved.",
"title": ""
},
{
"docid": "3c0edb8ae2cf8ef616a500ec9f3ceb52",
"text": "In his book Outliers, Malcom Gladwell describes the 10,000-Hour Rule, a key to success in any field, as simply a matter of practicing a specific task that can be accomplished with 20 hours of work a week for 10 years [10]. Ongoing changes in technology and national security needs require aspiring excellent cybersecurity professionals to set a goal of 10,000 hours of relevant, hands-on skill development. The education system today is ill prepared to meet the challenge of producing an adequate number of cybersecurity professionals, but programs that use competitions and learning environments that teach depth are filling this void.",
"title": ""
},
{
"docid": "e793b233039c9cb105fa311fa08312cd",
"text": "A generalized single-phase multilevel current source inverter (MCSI) topology with self-balancing current is proposed, which uses the duality transformation from the generalized multilevel voltage source inverter (MVSI) topology. The existing single-phase 8- and 6-switch 5-level current source inverters (CSIs) can be derived from this generalized MCSI topology. In the proposed topology, each intermediate DC-link current level can be balanced automatically without adding any external circuits; thus, a true multilevel structure is provided. Moreover, owing to the dual relationship, many research results relating to the operation, modulation, and control strategies of MVSIs can be applied directly to the MCSIs. Some simulation results are presented to verify the proposed MCSI topology.",
"title": ""
},
{
"docid": "08d59866cf8496573707d46a6cb520d4",
"text": "Healthcare is an integral component in people's lives, especially for the rising elderly population. Medicare is one such healthcare program that provides for the needs of the elderly. It is imperative that these healthcare programs are affordable, but this is not always the case. Out of the many possible factors for the rising cost of healthcare, claims fraud is a major contributor, but its impact can be lessened through effective fraud detection. We propose a general outlier detection model, based on Bayesian inference, using probabilistic programming. Our model provides probability distributions rather than just point values, as with most common outlier detection methods. Credible intervals are also generated to further enhance confidence that the detected outliers should in fact be considered outliers. Two case studies are presented demonstrating our model's effectiveness in detecting outliers. The first case study uses temperature data in order to provide a clear comparison of several outlier detection techniques. The second case study uses a Medicare dataset to showcase our proposed outlier detection model. Our results show that the successful detection of outliers, which indicate possible fraudulent activities, can provide effective and meaningful results for further investigation within medical specialties or by using real-world, medical provider fraud investigation cases.",
"title": ""
},
{
"docid": "ecf2b2d6a951d84aad15321f029fd014",
"text": "This paper reports the design principles and evaluation results of a new experimental hybrid intrusion detection system (HIDS). This hybrid system combines the advantages of low false-positive rate of signature-based intrusion detection system (IDS) and the ability of anomaly detection system (ADS) to detect novel unknown attacks. By mining anomalous traffic episodes from Internet connections, we build an ADS that detects anomalies beyond the capabilities of signature-based SNORT or Bro systems. A weighted signature generation scheme is developed to integrate ADS with SNORT by extracting signatures from anomalies detected. HIDS extracts signatures from the output of ADS and adds them into the SNORT signature database for fast and accurate intrusion detection. By testing our HIDS scheme over real-life Internet trace data mixed with 10 days of Massachusetts Institute of Technology/Lincoln Laboratory (MIT/LL) attack data set, our experimental results show a 60 percent detection rate of the HIDS, compared with 30 percent and 22 percent in using the SNORT and Bro systems, respectively. This sharp increase in detection rate is obtained with less than 3 percent false alarms. The signatures generated by ADS upgrade the SNORT performance by 33 percent. The HIDS approach proves the vitality of detecting intrusions and anomalies, simultaneously, by automated data mining and signature generation over Internet connection episodes",
"title": ""
},
{
"docid": "5ee5f4450ecc89b684e90e7b846f8365",
"text": "This study scrutinizes the predictive relationship between three referral channels, search engine, social medial, and third-party advertising, and online consumer search and purchase. The results derived from vector autoregressive models suggest that the three channels have differential predictive relationship with sale measures. The predictive power of the three channels is also considerably different in referring customers among competing online shopping websites. In the short run, referrals from all three channels have a significantly positive predictive relationship with the focal online store’s sales amount and volume, but having no significant relationship with conversion. Only referrals from search engines to the rival website have a significantly negative predictive relationship with the focal website’s sales and volume. In the long run, referrals from all three channels have a significant positive predictive relationship with the focal website’s sales, conversion and sales volume. In contrast, referrals from all three channels to the competing online stores have a significant negative predictive relationship with the focal website’s sales, conversion and sales volume. Our results also show that search engine referrals explains the most of the variance in sales, while social media referrals explains the most of the variance in conversion and third party ads referrals explains the most of the variance in sales volume. This study offers new insights for IT and marketing practitioners in respect to better and deeper understanding on marketing attribution and how different channels perform in order to optimize the media mix and overall performance.",
"title": ""
},
{
"docid": "bd3792071a2c7b13bf479aa138f67544",
"text": "Aging is considered the major risk factor for cancer, one of the most important mortality causes in the western world. Inflammaging, a state of chronic, low-level systemic inflammation, is a pervasive feature of human aging. Chronic inflammation increases cancer risk and affects all cancer stages, triggering the initial genetic mutation or epigenetic mechanism, promoting cancer initiation, progression and metastatic diffusion. Thus, inflammaging is a strong candidate to connect age and cancer. A corollary of this hypothesis is that interventions aiming to decrease inflammaging should protect against cancer, as well as most/all age-related diseases. Epidemiological data are concordant in suggesting that the Mediterranean Diet (MD) decreases the risk of a variety of cancers but the underpinning mechanism(s) is (are) still unclear. Here we review data indicating that the MD (as a whole diet or single bioactive nutrients typical of the MD) modulates multiple interconnected processes involved in carcinogenesis and inflammatory response such as free radical production, NF-κB activation and expression of inflammatory mediators, and the eicosanoids pathway. Particular attention is devoted to the capability of MD to affect the balance between pro- and anti-inflammaging as well as to emerging topics such as maintenance of gut microbiota (GM) homeostasis and epigenetic modulation of oncogenesis through specific microRNAs.",
"title": ""
},
{
"docid": "f856effa28ba7b60a5a2a4bba06ba2c4",
"text": "Entity synonyms are critical for many applications like information retrieval and named entity recognition in documents. The current trend is to automatically discover entity synonyms using statistical techniques on web data. Prior techniques suffer from several limitations like click log sparsity and inability to distinguish between entities of different concept classes. In this paper, we propose a general framework for robustly discovering entity synonym with two novel similarity functions that overcome the limitations of prior techniques. We develop efficient and scalable techniques leveraging the MapReduce framework to discover synonyms at large scale. To handle long entity names with extraneous tokens, we propose techniques to effectively map long entity names to short queries in query log. Our experiments on real data from different entity domains demonstrate the superior quality of our synonyms as well as the efficiency of our algorithms. The entity synonyms produced by our system is in production in Bing Shopping and Video search, with experiments showing the significance it brings in improving search experience.",
"title": ""
}
] |
scidocsrr
|
ae15929c2b4f225f097efb90c0c3721f
|
CATalyst: Defeating last-level cache side channel attacks in cloud computing
|
[
{
"docid": "de8415d1674a0e5e84cfc067fd3940cc",
"text": "We apply the FLUSH+RELOAD side-channel attack based on cache hits/misses to extract a small amount of data from OpenSSL ECDSA signature requests. We then apply a “standard” lattice technique to extract the private key, but unlike previous attacks we are able to make use of the side-channel information from almost all of the observed executions. This means we obtain private key recovery by observing a relatively small number of executions, and by expending a relatively small amount of post-processing via lattice reduction. We demonstrate our analysis via experiments using the curve secp256k1 used in the Bitcoin protocol. In particular we show that with as little as 200 signatures we are able to achieve a reasonable level of success in recovering the secret key for a 256-bit curve. This is significantly better than prior methods of applying lattice reduction techniques to similar side channel information.",
"title": ""
}
] |
[
{
"docid": "8dfa68e87eee41dbef8e137b860e19cc",
"text": "We investigate regrets associated with users' posts on a popular social networking site. Our findings are based on a series of interviews, user diaries, and online surveys involving 569 American Facebook users. Their regrets revolved around sensitive topics, content with strong sentiment, lies, and secrets. Our research reveals several possible causes of why users make posts that they later regret: (1) they want to be perceived in favorable ways, (2) they do not think about their reason for posting or the consequences of their posts, (3) they misjudge the culture and norms within their social circles, (4) they are in a \"hot\" state of high emotion when posting, or under the influence of drugs or alcohol, (5) their postings are seen by an unintended audience, (6) they do not foresee how their posts could be perceived by people within their intended audience, and (7) they misunderstand or misuse the Facebook platform. Some reported incidents had serious repercussions, such as breaking up relationships or job losses. We discuss methodological considerations in studying negative experiences associated with social networking posts, as well as ways of helping users of social networking sites avoid such regrets.",
"title": ""
},
{
"docid": "e02207c42eda7ec15db5dcd26ee55460",
"text": "This paper focuses on a new task, i.e. transplanting a category-and-task-specific neural network to a generic, modular network without strong supervision. We design an functionally interpretable structure for the generic network. Like building LEGO blocks, we teach the generic network a new category by directly transplanting the module corresponding to the category from a pre-trained network with a few or even without sample annotations. Our method incrementally adds new categories to the generic network but does not affect representations of existing categories. In this way, our method breaks the typical bottleneck of learning a net for massive tasks and categories, i.e. the requirement of collecting samples for all tasks and categories at the same time before the learning begins. Thus, we use a new distillation algorithm, namely back-distillation, to overcome specific challenges of network transplanting. Our method without training samples even outperformed the baseline with 100 training samples.",
"title": ""
},
{
"docid": "ee6925a80a6c49fb37181377d7287bb6",
"text": "In two articles Timothy Noakes proposes a new physiological model in which skeletal muscle recruitment is regulated by a central \"govenor,\" specifically to prevent the development of a progressive myocardial ischemia that would precede the development of skeletal muscle anaerobiosis during maximal exercise. In this rebuttal to the Noakes' papers, we argue that Noakes has ignored data supporting the existing hypothesis that under normal conditions cardiac output is limiting maximal aerobic power during dynamic exercise engaging large muscle groups.",
"title": ""
},
{
"docid": "44543067012ee060c00aa21af9c1320d",
"text": "We present, visualize and analyse the similarities and differences between the controversial topics related to “edit wars” identified in 10 different language versions of Wikipedia. After a brief review of the related work we describe the methods developed to locate, measure, and categorize the controversial topics in the different languages. Visualizations of the degree of overlap between the top 100 list of most controversial articles in different languages and the content related geographical locations will be presented. We discuss what the presented analysis and visualizations can tell us about the multicultural aspects of Wikipedia, and, in general, about cultures of peer-production with focus on universal and specifically, local features. We demonstrate that Wikipedia is more than just an encyclopaedia; it is also a window into divergent social-spatial priorities, interests and preferences.",
"title": ""
},
{
"docid": "f2ab1e48647b20265b9ce8a1c4de9988",
"text": "Urinary tract infections (UTIs) are one of the most common bacterial infections with global expansion. These infections are predominantly caused by uropathogenic Escherichia coli (UPEC). Totally, 123 strains of Escherichia coli isolated from UTIs patients, using bacterial culture method were subjected to polymerase chain reactions for detection of various O- serogroups, some urovirulence factors, antibiotic resistance genes and resistance to 13 different antibiotics. According to data, the distribution of O1, O2, O6, O7 and O16 serogroups were 2.43%, besides O22, O75 and O83 serogroups were 1.62%. Furthermore, the distribution of O4, O8, O15, O21 and O25 serogroups were 5.69%, 3.25%, 21.13%, 4.06% and 26.01%, respectively. Overall, the fim virulence gene had the highest (86.17%) while the usp virulence gene had the lowest distributions of virulence genes in UPEC strains isolated from UTIs patients. The vat and sen virulence genes were not detected in any UPEC strains. Totally, aadA1 (52.84%), and qnr (46.34%) were the most prevalent antibiotic resistance genes while the distribution of cat1 (15.44%), cmlA (15.44%) and dfrA1 (21.95%) were the least. Resistance to penicillin (100%) and tetracycline (73.98%) had the highest while resistance to nitrofurantoin (5.69%) and trimethoprim (16.26%) had the lowest frequencies. This study indicated that the UPEC strains which harbored the high numbers of virulence and antibiotic resistance genes had the high ability to cause diseases that are resistant to most antibiotics. In the current situation, it seems that the administration of penicillin and tetracycline for the treatment of UTIs is vain.",
"title": ""
},
{
"docid": "3f30c821132e07838de325c4f2183f84",
"text": "This paper argues for the recognition of important experiential aspects of consumption. Specifically, a general framework is constructed to represent typical consumer behavior variables. Based on this paradigm, the prevailing information processing model is contrasted with an experiential view that focuses on the symbolic, hedonic, and esthetic nature of consumption. This view regards the consumption experience as a phenomenon directed toward the pursuit of fantasies, feelings, and fun.",
"title": ""
},
{
"docid": "8f5028ec9b8e691a21449eef56dc267e",
"text": "It can be shown that by replacing the sigmoid activation function often used in neural networks with an exponential function, a neural network can be formed which computes nonlinear decision boundaries. This technique yields decision surfaces which approach the Bayes optimal under certain conditions. There is a continuous control of the linearity of the decision boundaries, from linear for small training sets to any degree of nonlinearity justified by larger training sets. A four-layer neural network of the type proposed can map any input pattern to any number of classifications. The input variables can be either continuous or binary. Modification of the decision boundaries based on new data can be accomplished in real time simply by defining a set of weights equal to the new training vector. The decision boundaries can be implemented using analog 'neurons', which operate entirely in parallel. The organization proposed takes into account the projected pin limitations of neural-net chips of the near future. By a change in architecture, these same components could be used as associative memories, to compute nonlinear multivariate regression surfaces, or to compute a posteriori probabilities of an event.<<ETX>>",
"title": ""
},
{
"docid": "46658067ffc4fd2ecdc32fbaaa606170",
"text": "Adolescent resilience research differs from risk research by focusing on the assets and resources that enable some adolescents to overcome the negative effects of risk exposure. We discuss three models of resilience-the compensatory, protective, and challenge models-and describe how resilience differs from related concepts. We describe issues and limitations related to resilience and provide an overview of recent resilience research related to adolescent substance use, violent behavior, and sexual risk behavior. We then discuss implications that resilience research has for intervention and describe some resilience-based interventions.",
"title": ""
},
{
"docid": "c86c10428bfca028611a5e989ca31d3f",
"text": "In the study, we discussed the ARCH/GARCH family models and enhanced them with artificial neural networks to evaluate the volatility of daily returns for 23.10.1987–22.02.2008 period in Istanbul Stock Exchange. We proposed ANN-APGARCH model to increase the forecasting performance of APGARCH model. The ANN-extended versions of the obtained GARCH models improved forecast results. It is noteworthy that daily returns in the ISE show strong volatility clustering, asymmetry and nonlinearity characteristics. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fcc94c9c9f388386b7eadc42c432f273",
"text": "Thanks to the growing availability of spoofing databases and rapid advances in using them, systems for detecting voice spoofing attacks are becoming more and more capable, and error rates close to zero are being reached for the ASVspoof2015 database. However, speech synthesis and voice conversion paradigms that are not considered in the ASVspoof2015 database are appearing. Such examples include direct waveform modelling and generative adversarial networks. We also need to investigate the feasibility of training spoofing systems using only low-quality found data. For that purpose, we developed a generative adversarial networkbased speech enhancement system that improves the quality of speech data found in publicly available sources. Using the enhanced data, we trained state-of-the-art text-to-speech and voice conversion models and evaluated them in terms of perceptual speech quality and speaker similarity. The results show that the enhancement models significantly improved the SNR of low-quality degraded data found in publicly available sources and that they significantly improved the perceptual cleanliness of the source speech without significantly degrading the naturalness of the voice. However, the results also show limitations when generating speech with the low-quality found data.",
"title": ""
},
{
"docid": "5a38a2d349838b32bc5c41d362a220ac",
"text": "This article considers the challenges associated with completing risk assessments in countering violent extremism. In particular, it is concerned with risk assessment of those who come to the attention of government and nongovernment organizations as being potentially on a trajectory toward terrorism and where there is an obligation to consider the potential future risk that they may pose. Risk assessment in this context is fraught with difficulty, primarily due to the variable nature of terrorism, the low base-rate problem, and the dearth of strong evidence on relevant risk and resilience factors. Statistically, this will lead to poor predictive value. Ethically, it can lead to the labeling of an individual who is not on a trajectory toward violence as being \"at risk\" of engaging in terrorism and the imposing of unnecessary risk management actions. The article argues that actuarial approaches to risk assessment in this context cannot work. However, it further argues that approaches that help assessors to process and synthesize information in a structured way are of value and are in line with good practice in the broader field of violence risk assessment. (PsycINFO Database Record",
"title": ""
},
{
"docid": "d0ebee0648beecbd00faaf67f76f256c",
"text": "Text mining is the use of automated methods for exploiting the enormous amount of knowledge available in the biomedical literature. There are at least as many motivations for doing text mining work as there are types of bioscientists. Model organism database curators have been heavy participants in the development of the field due to their need to process large numbers of publications in order to populate the many data fields for every gene in their species of interest. Bench scientists have built biomedical text mining applications to aid in the development of tools for interpreting the output of high-throughput assays and to improve searches of sequence databases (see [1] for a review). Bioscientists of every stripe have built applications to deal with the dual issues of the doubleexponential growth in the scientific literature over the past few years and of the unique issues in searching PubMed/ MEDLINE for genomics-related publications. A surprising phenomenon can be noted in the recent history of biomedical text mining: although several systems have been built and deployed in the past few years—Chilibot, Textpresso, and PreBIND (see Text S1 for these and most other citations), for example—the ones that are seeing high usage rates and are making productive contributions to the working lives of bioscientists have been built not by text mining specialists, but by bioscientists. We speculate on why this might be so below. Three basic types of approaches to text mining have been prevalent in the biomedical domain. Co-occurrence– based methods do no more than look for concepts that occur in the same unit of text—typically a sentence, but sometimes as large as an abstract—and posit a relationship between them. (See [2] for an early co-occurrence–based system.) For example, if such a system saw that BRCA1 and breast cancer occurred in the same sentence, it might assume a relationship between breast cancer and the BRCA1 gene. Some early biomedical text mining systems were co-occurrence–based, but such systems are highly error prone, and are not commonly built today. In fact, many text mining practitioners would not consider them to be text mining systems at all. Co-occurrence of concepts in a text is sometimes used as a simple baseline when evaluating more sophisticated systems; as such, they are nontrivial, since even a co-occurrence– based system must deal with variability in the ways that concepts are expressed in human-produced texts. For example, BRCA1 could be referred to by any of its alternate symbols—IRIS, PSCP, BRCAI, BRCC1, or RNF53 (or by any of their many spelling variants, which include BRCA1, BRCA-1, and BRCA 1)— or by any of the variants of its full name, viz. breast cancer 1, early onset (its official name per Entrez Gene and the Human Gene Nomenclature Committee), as breast cancer susceptibility gene 1, or as the latter’s variant breast cancer susceptibility gene-1. Similarly, breast cancer could be referred to as breast cancer, carcinoma of the breast, or mammary neoplasm. These variability issues challenge more sophisticated systems, as well; we discuss ways of coping with them in Text S1. Two more common (and more sophisticated) approaches to text mining exist: rule-based or knowledgebased approaches, and statistical or machine-learning-based approaches. The variety of types of rule-based systems is quite wide. In general, rulebased systems make use of some sort of knowledge. 
This might take the form of general knowledge about how language is structured, specific knowledge about how biologically relevant facts are stated in the biomedical literature, knowledge about the sets of things that bioscientists talk about and the kinds of relationships that they can have with one another, and the variant forms by which they might be mentioned in the literature, or any subset or combination of these. (See [3] for an early rule-based system, and [4] for a discussion of rule-based approaches to various biomedical text mining tasks.) At one end of the spectrum, a simple rule-based system might use hardcoded patterns—for example, ,gene. plays a role in ,disease. or ,disease. is associated with ,gene.—to find explicit statements about the classes of things in which the researcher is interested. At the other end of the spectrum, a rulebased system might use sophisticated linguistic and semantic analyses to recognize a wide range of possible ways of making assertions about those classes of things. It is worth noting that useful systems have been built using technologies at both ends of the spectrum, and at many points in between. In contrast, statistical or machine-learning–based systems operate by building classifiers that may operate on any level, from labelling part of speech to choosing syntactic parse trees to classifying full sentences or documents. (See [5] for an early learning-based system, and [4] for a discussion of learning-based approaches to various biomedical text mining tasks.) Rule-based and statistical systems each have their advantages and",
"title": ""
},
{
"docid": "9fdb52d61c5f6d278c656f75d22aa10d",
"text": "BACKGROUND\nIncreasing demand for memory assessment in clinical settings in Iran, as well as the absence of a comprehensive and standardized task based upon the Persian culture and language, requires an appropriate culture- and language-specific version of the commonly used neuropsychological measure of verbal learning and memory, the Rey Auditory Verbal Learning Test (RAVLT).\n\n\nMETHODS\nThe Persian adapted version of the original RAVLT and two other alternate word lists were generated based upon criteria previously set for developing new word lists. A total of 90 subjects (three groups of 30 persons), aged 29.75±7.10 years, volunteered to participate in our study and were tested using the original word list. The practice effect was assessed by retesting the first and second groups using the same word list after 30 and 60 days, respectively. The test-retest reliability was evaluated by retesting the third group of participants twice using two new alternate word lists with an interval of 30 days.\n\n\nRESULTS\nThe re-administration of the same list after one or even two months led to significant practice effects. However, the use of alternate forms after a one-month delay yielded no significant difference across the forms. The first and second trials, as well as the total, immediate, and delayed recall scores showed the best reliability in retesting by the alternate list.\n\n\nCONCLUSION\nThe difference between the generated forms was minor, and it seems that the Persian version of the RAVLT is a reliable instrument for repeated neuropsychological testing as long as alternate forms are used and scores are carefully chosen. ",
"title": ""
},
{
"docid": "184da4d4589a3a9dc1f339042e6bc674",
"text": "Ocular dominance plasticity has long served as a successful model for examining how cortical circuits are shaped by experience. In this paradigm, altered retinal activity caused by unilateral eye-lid closure leads to dramatic shifts in the binocular response properties of neurons in the visual cortex. Much of the recent progress in identifying the cellular and molecular mechanisms underlying ocular dominance plasticity has been achieved by using the mouse as a model system. In this species, monocular deprivation initiated in adulthood also causes robust ocular dominance shifts. Research on ocular dominance plasticity in the mouse is starting to provide insight into which factors mediate and influence cortical plasticity in juvenile and adult animals.",
"title": ""
},
{
"docid": "4ce67aeca9e6b31c5021712f148108e2",
"text": "Self-endorsing—the portrayal of potential consumers using products—is a novel advertising strategy made possible by the development of virtual environments. Three experiments compared self-endorsing to endorsing by an unfamiliar other. In Experiment 1, self-endorsing in online advertisements led to higher brand attitude and purchase intention than other-endorsing. Moreover, photographs were a more effective persuasion channel than text. In Experiment 2, participants wore a brand of clothing in a high-immersive virtual environment and preferred the brand worn by their virtual self to the brand worn by others. Experiment 3 demonstrated that an additional mechanism behind self-endorsing was the interactivity of the virtual representation. Evidence for self-referencing as a mediator is presented. 94 The Journal of Advertising context, consumers can experience presence while interacting with three-dimensional products on Web sites (Biocca et al. 2001; Edwards and Gangadharbatla 2001; Li, Daugherty, and Biocca 2001). When users feel a heightened sense of presence and perceive the virtual experience to be real, they are more easily persuaded by the advertisement (Kim and Biocca 1997). The differing degree, or the objectively measurable property of presence, is called immersion. Immersion is the extent to which media are capable of delivering a vivid illusion of reality using rich layers of sensory input (Slater and Wilbur 1997). Therefore, different levels of immersion (objective unit) lead to different experiences of presence (subjective unit), and both concepts are closely related to interactivity. Web sites are considered to be low-immersive virtual environments because of limited interactive capacity and lack of richness in sensory input, which decreases the sense of presence, whereas virtual reality is considered a high-immersive virtual environment because of its ability to reproduce perceptual richness, which heightens the sense of feeling that the virtual experience is real. Another differentiating aspect of virtual environments is that they offer plasticity of the appearance and behavior of virtual self-representations. It is well known that virtual selves may or may not be true replications of physical appearances (Farid 2009; Yee and Bailenson 2006), but users can also be faced with situations in which they are not controlling the behaviors of their own virtual representations (Fox and Bailenson 2009). In other words, a user can see himor herself using (and perhaps enjoying) a product he or she has never physically used. Based on these unique features of virtual platforms, the current study aims to explore the effect of viewing a virtual representation that may or may not look like the self, endorsing a brand by use. We also manipulate the interactivity of endorsers within virtual environments to provide evidence for the mechanism behind self-endorsing. THE SELF-ENDORSED ADVERTISEMENT Recent studies have confirmed that positive connections between the self and brands can be created by subtle manipulations, such as mimicry of the self ’s nonverbal behaviors (Tanner et al. 2008). The slightest affiliation between the self and the other can lead to positive brand evaluations. In a study by Ferraro, Bettman, and Chartrand (2009), an unfamiliar ingroup or out-group member was portrayed in a photograph with a water bottle bearing a brand name. 
The simple detail of the person wearing a baseball cap with the same school logo (i.e., in-group affiliation) triggered participants to choose the brand associated with the in-group member. Thus, the self–brand relationship significantly influences brand attitude, but self-endorsing has not received scientific attention to date, arguably because it was not easy to implement before the onset of virtual environments. Prior research has studied the effectiveness of different types of endorsers and their influence on the persuasiveness of advertisements (Friedman and Friedman 1979; Stafford, Stafford, and Day 2002), but the self was not considered in these investigations as a possible source of endorsement. However, there is the possibility that the currently sporadic use of self-endorsing (e.g., www.myvirtualmodel.com) will increase dramatically. For instance, personalized recommendations are being sent to consumers based on online “footsteps” of prior purchases (Tam and Ho 2006). Furthermore, Google has spearheaded keyword search advertising, which displays text advertisements in real-time based on search words ( Jansen, Hudson, and Hunter 2008), and Yahoo has begun to display video and image advertisements based on search words (Clifford 2009). Considering the availability of personal images on the Web due to the widespread employment of social networking sites, the idea of self-endorsing may spread quickly. An advertiser could replace the endorser shown in the image advertisement called by search words with the user to create a self-endorsed advertisement. Thus, the timely investigation of the influence of self-endorsing on users, as well as its mechanism, is imperative. Based on positivity biases related to the self (Baumeister 1998; Chambers and Windschitl 2004), self-endorsing may be a powerful persuasion tool. However, there may be instances when using the self in an advertisement may not be effective, such as when the virtual representation does not look like the consumer and the consumer fails to identify with the representation. Self-endorsed advertisements may also lose persuasiveness when movements of the representation are not synched with the actions of the consumer. Another type of endorser that researchers are increasingly focusing on is the typical user endorser. Typical endorsers have an advantage in that they appeal to the similarity of product usage with the average user. For instance, highly attractive models are not always effective compared with normally attractive models, even for beauty-enhancing products (i.e., acne treatment), when users perceive that the highly attractive models do not need those products (Bower and Landreth 2001). Moreover, with the advancement of the Internet, typical endorsers are becoming more influential via online testimonials (Lee, Park, and Han 2006; Wang 2005). In the current studies, we compared the influence of typical endorsers (i.e., other-endorsing) and self-endorsers on brand attitude and purchase intentions. In addition to investigating the effects of self-endorsing, this work extends results of earlier studies on the effectiveness of different types of endorsers and makes important theoretical contributions by studying self-referencing as an underlying mechanism of self-endorsing.",
"title": ""
},
{
"docid": "d47c543f396059cc0ab6c5d98f8db35c",
"text": "Visual object tracking is a challenging computer vision problem with numerous real-world applications. This paper investigates the impact of convolutional features for the visual tracking problem. We propose to use activations from the convolutional layer of a CNN in discriminative correlation filter based tracking frameworks. These activations have several advantages compared to the standard deep features (fully connected layers). Firstly, they miti-gate the need of task specific fine-tuning. Secondly, they contain structural information crucial for the tracking problem. Lastly, these activations have low dimensionality. We perform comprehensive experiments on three benchmark datasets: OTB, ALOV300++ and the recently introduced VOT2015. Surprisingly, different to image classification, our results suggest that activations from the first layer provide superior tracking performance compared to the deeper layers. Our results further show that the convolutional features provide improved results compared to standard hand-crafted features. Finally, results comparable to state-of-the-art trackers are obtained on all three benchmark datasets.",
"title": ""
},
{
"docid": "a53a81b0775992ea95db85b045463ddf",
"text": "We start by asking an interesting yet challenging question, “If a large proportion (e.g., more than 90% as shown in Fig. 1) of the face/sketch is missing, can a realistic whole face sketch/image still be estimated?” Existing face completion and generation methods either do not conduct domain transfer learning or can not handle large missing area. For example, the inpainting approach tends to blur the generated region when the missing area is large (i.e., more than 50%). In this paper, we exploit the potential of deep learning networks in filling large missing region (e.g., as high as 95% missing) and generating realistic faces with high-fidelity in cross domains. We propose the recursive generation by bidirectional transformation networks (rBTN) that recursively generates a whole face/sketch from a small sketch/face patch. The large missing area and the cross domain challenge make it difficult to generate satisfactory results using a unidirectional cross-domain learning structure. On the other hand, a forward and backward bidirectional learning between the face and sketch domains would enable recursive estimation of the missing region in an incremental manner (Fig. 1) and yield appealing results. r-BTN also adopts an adversarial constraint to encourage the generation of realistic faces/sketches. Extensive experiments have been conducted to demonstrate the superior performance from r-BTN as compared to existing potential solutions.",
"title": ""
},
{
"docid": "b92484f67bf2d3f71d51aee9fb7abc86",
"text": "This research addresses the kinds of matching elements that determine analogical relatedness and literal similarity. Despite theoretical agreement on the importance of relational match, the empirical evidence is neither systematic nor definitive. In 3 studies, participants performed online evaluations of relatedness of sentence pairs that varied in either the object or relational match. Results show a consistent focus on relational matches as the main determinant of analogical acceptance. In addition, analogy does not require strict overall identity of relational concepts. Semantically overlapping but nonsynonymous relations were commonly accepted, but required more processing time. Finally, performance in a similarity rating task partly paralleled analogical acceptance; however, relatively more weight was given to object matches. Implications for psychological theories of analogy and similarity are addressed.",
"title": ""
},
{
"docid": "e0301bf133296361b4547730169d2672",
"text": "Radar warning receivers (RWRs) classify the intercepted pulses into clusters utilizing multiple parameter deinterleaving. In order to make classification more elaborate time-of-arrival (TOA) deinterleaving should be performed for each cluster. In addition, identification of the classified pulse sequences has been exercised at last. It is essential to identify the classified sequences with a minimum number of pulses. This paper presents a method for deinterleaving of intercepted signals having small number of pulses that belong to stable or jitter pulse repetition interval (PRI) types in the presence of missed pulses. It is necessary for both stable and jitter PRI TOA deinterleaving algorithms to utilize predefined PRI range. However, jitter PRI TOA deinterleaving also requires variation about mean PRI value of emitter of interest as a priori.",
"title": ""
}
] |
scidocsrr
|
3f27cea3c4c04e9eff2e0996d73c577d
|
Employees' Compliance with BYOD Security Policy: Insights from Reactance, Organizational Justice, and Protection Motivation Theory
|
[
{
"docid": "ff14cc28a72827c14aba42f3a036a088",
"text": "Employees’ failure to comply with IS security procedures is a key concern for organizations today. A number of socio-cognitive theories have been used to explain this. However, prior studies have not examined the influence of past and automatic behavior on employee decisions to comply. This is an important omission because past behavior has been assumed to strongly affect decision-making. To address this gap, we integrated habit (a routinized form of past behavior) with Protection Motivation Theory (PMT), to explain compliance. An empirical test showed that habitual IS security compliance strongly reinforced the cognitive processes theorized by PMT, as well as employee intention for future compliance. We also found that nearly all components of PMT significantly impacted employee intention to comply with IS security policies. Together, these results highlighted the importance of addressing employees’ past and automatic behavior in order to improve compliance. 2012 Elsevier B.V. All rights reserved. * Corresponding author. Tel.: +1 801 361 2531; fax: +1 509 275 0886. E-mail addresses: anthony@vance.name (A. Vance), mikko.siponen@oulu.fi (M. Siponen), seppo.pahnila@oulu.fi (S. Pahnila). URL: http://www.anthonyvance.com 1 http://www.issrc.oulu.fi/.",
"title": ""
}
] |
[
{
"docid": "4a6c133bd060160537640180dfbb3d38",
"text": "OBJECTIVES\nThis study examined the relationship between child sexual abuse (CSA) and subsequent onset of psychiatric disorders, accounting for other childhood adversities, CSA type, and chronicity of the abuse.\n\n\nMETHODS\nRetrospective reports of CSA, other adversities, and psychiatric disorders were obtained by the National Comorbidity Survey, a nationally representative survey of the United States (n = 5877). Reports were analyzed by multivariate methods.\n\n\nRESULTS\nCSA was reported by 13.5% of women and 2.5% of men. When other childhood adversities were controlled for, significant associations were found between CSA and subsequent onset of 14 mood, anxiety, and substance use disorders among women and 5 among men. In a subsample of respondents reporting no other adversities, odds of depression and substance problems associated with CSA were higher. Among women, rape (vs molestation), knowing the perpetrator (vs strangers), and chronicity of CSA (vs isolated incidents) were associated with higher odds of some disorders.\n\n\nCONCLUSIONS\nCSA usually occurs as part of a larger syndrome of childhood adversities. Nonetheless, CSA, whether alone or in a larger adversity cluster, is associated with substantial increased risk of subsequent psychopathology.",
"title": ""
},
{
"docid": "3cc9f615445f3692aa258300d73f57ff",
"text": "In good old-fashioned artificial intelligence (GOFAI), humans specified systems that solved problems. Much of the recent progress in AI has come from replacing human insights by learning. However, learning itself is still usually built by humans – specifically the choice that parameter updates should follow the gradient of a cost function. Yet, in analogy with GOFAI, there is no reason to believe that humans are particularly good at defining such learning systems: we may expect learning itself to be better if we learn it. Recent research in machine learning has started to realize the benefits of that strategy. We should thus expect this to be relevant for neuroscience: how could the correct learning rules be acquired? Indeed, behavioral science has long shown that humans learn-to-learn, which is potentially responsible for their impressive learning abilities. Here we discuss ideas across machine learning, neuroscience, and behavioral science that matter for the principle of learning-to-learn.",
"title": ""
},
{
"docid": "4c48c79cc941aafe09ce6f843ebfbfd7",
"text": "Chromatin immunoprecipitation followed by sequencing (ChIP-seq) is an increasingly common experimental approach to generate genome-wide maps of histone modifications and to dissect the complexity of the epigenome. Here, we propose EpiCSeg: a novel algorithm that combines several histone modification maps for the segmentation and characterization of cell-type specific epigenomic landscapes. By using an accurate probabilistic model for the read counts, EpiCSeg provides a useful annotation for a considerably larger portion of the genome, shows a stronger association with validation data, and yields more consistent predictions across replicate experiments when compared to existing methods. The software is available at http://github.com/lamortenera/epicseg",
"title": ""
},
{
"docid": "f5311de600d7e50d5c9ecff5c49f7167",
"text": "Most work in machine reading focuses on question answering problems where the answer is directly expressed in the text to read. However, many real-world question answering problems require the reading of text not because it contains the literal answer, but because it contains a recipe to derive an answer together with the reader’s background knowledge. One example is the task of interpreting regulations to answer “Can I...?” or “Do I have to...?” questions such as “I am working in Canada. Do I have to carry on paying UK National Insurance?” after reading a UK government website about this topic. This task requires both the interpretation of rules and the application of background knowledge. It is further complicated due to the fact that, in practice, most questions are underspecified, and a human assistant will regularly have to ask clarification questions such as “How long have you been working abroad?” when the answer cannot be directly derived from the question and text. In this paper, we formalise this task and develop a crowd-sourcing strategy to collect 32k task instances based on real-world rules and crowd-generated questions and scenarios. We analyse the challenges of this task and assess its difficulty by evaluating the performance of rule-based and machine-learning baselines. We observe promising results when no background knowledge is necessary, and substantial room for improvement whenever background knowledge is needed.",
"title": ""
},
{
"docid": "8e7d3462f93178f6c2901a429df22948",
"text": "This article analyzes China's pension arrangement and notes that China has recently established a universal non-contributory pension plan covering urban non-employed workers and all rural residents, combined with the pension plan covering urban employees already in place. Further, in the latest reform, China has discontinued the special pension plan for civil servants and integrated this privileged welfare class into the urban old-age pension insurance program. With these steps, China has achieved a degree of universalism and integration of its pension arrangement unprecedented in the non-Western world. Despite this radical pension transformation strategy, we argue that the current Chinese pension arrangement represents a case of \"incomplete\" universalism. First, its benefit level is low. Moreover, the benefit level varies from region to region. Finally, universalism in rural China has been undermined due to the existence of the \"policy bundle.\" Additionally, we argue that the 2015 pension reform has created a situation in which the stratification of Chinese pension arrangements has been \"flattened,\" even though it remains stratified to some extent.",
"title": ""
},
{
"docid": "3886d46c2420216f5950cfc22597c82e",
"text": "In this article, we describe a new approach to enhance driving safety via multi-media technologies by recognizing and adapting to drivers’ emotions with multi-modal intelligent car interfaces. The primary objective of this research was to build an affectively intelligent and adaptive car interface that could facilitate a natural communication with its user (i.e., the driver). This objective was achieved by recognizing drivers’ affective states (i.e., emotions experienced by the drivers) and by responding to those emotions by adapting to the current situation via an affective user model created for each individual driver. A controlled experiment was designed and conducted in a virtual reality environment to collect physiological data signals (galvanic skin response, heart rate, and temperature) from participants who experienced driving-related emotions and states (neutrality, panic/fear, frustration/anger, and boredom/sleepiness). k-Nearest Neighbor (KNN), Marquardt-Backpropagation (MBP), and Resilient Backpropagation (RBP) Algorithms were implemented to analyze the collected data signals and to find unique physiological patterns of emotions. RBP was the best classifier of these three emotions with 82.6% accuracy, followed by MBP with 73.26% and by KNN with 65.33%. Adaptation of the interface was designed to provide multi-modal feedback to the users about their current affective state and to respond to users’ negative emotional states in order to decrease the possible negative impacts of those emotions. Bayesian Belief Networks formalization was employed to develop the user model to enable the intelligent system to appropriately adapt to the current context and situation by considering user-dependent factors, such as personality traits and preferences. 2010 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "66d5fb34585e60918aca1a5aaf658ac0",
"text": "We present a framework for extracting image contours based on geometric and structural consistency among edge element locations and orientations. The paper presents two contributions. First, we observe that while the traditional edge orientation operators are based on first-order derivatives, orientation as tangent of a localized curve requires third-order derivatives. We derive a numerically stable third-order edge operator and show that it outperforms current techniques. Second, we consider all discrete n-tuples of edges in a local neighborhood (7times7) and retain those that are geometrically consistent with a third-order local curve model. This results in a number of ordered discrete combinations of edges, each represented by a bundle of curves. The resulting curve bundle map is a representation of all possible local groupings from which longer contour fragments are constructed. We validate our results and show that our framework outperforms traditional approaches to contour extraction.",
"title": ""
},
{
"docid": "22bfe6518994bac7d009ca98990f42b0",
"text": "BACKGROUND\nThe free nipple breast reduction method has certain disadvantages, such as nipple hyposensitivity, loss of lactation, and loss of projection. To eliminate these risks, the authors describe a patient-based breast reduction technique in which the major supplier vessels of the nipple-areola complex were determined by color Doppler ultrasonography. Pedicles containing these vessels were designed for reductions.\n\n\nMETHODS\nSixteen severe gigantomastia patients with a mean age of 41 years (range, 23 to 60 years) were included in the study. Major nipple-areola complex perforators were determined with 13- to 5-MHz linear probe Doppler ultrasonography before surgery. Pedicles were designed according to the vessel locations, and reductions were performed with superomedial-, superolateral-, or mediolateral-based designs.\n\n\nRESULTS\nDifferent combinations of internal mammary and lateral thoracic artery perforator-based reductions were achieved. None of the patients had areola necrosis. Mean reduction weight was 1795 g (range, 1320 to 2280) per breast.\n\n\nCONCLUSIONS\nInstead of using standard markings for severe gigantomastia patients, custom-made and sonographically determined pedicles were used. This technique can be considered as a \"guide\" for the surgeon during very large breast reductions.",
"title": ""
},
{
"docid": "c451d86c6986fab1a1c4cd81e87e6952",
"text": "Large-scale is a trend in person re-identi- fication (re-id). It is important that real-time search be performed in a large gallery. While previous methods mostly focus on discriminative learning, this paper makes the attempt in integrating deep learning and hashing into one framework to evaluate the efficiency and accuracy for large-scale person re-id. We integrate spatial information for discriminative visual representation by partitioning the pedestrian image into horizontal parts. Specifically, Part-based Deep Hashing (PDH) is proposed, in which batches of triplet samples are employed as the input of the deep hashing architecture. Each triplet sample contains two pedestrian images (or parts) with the same identity and one pedestrian image (or part) of the different identity. A triplet loss function is employed with a constraint that the Hamming distance of pedestrian images (or parts) with the same identity is smaller than ones with the different identity. In the experiment, we show that the proposed PDH method yields very competitive re-id accuracy on the large-scale Market-1501 and Market-1501+500K datasets.",
"title": ""
},
{
"docid": "5bee27378a98ff5872f7ae5e899f81e2",
"text": "An algorithmic framework is proposed to process acceleration and surface electromyographic (SEMG) signals for gesture recognition. It includes a novel segmentation scheme, a score-based sensor fusion scheme, and two new features. A Bayes linear classifier and an improved dynamic time-warping algorithm are utilized in the framework. In addition, a prototype system, including a wearable gesture sensing device (embedded with a three-axis accelerometer and four SEMG sensors) and an application program with the proposed algorithmic framework for a mobile phone, is developed to realize gesture-based real-time interaction. With the device worn on the forearm, the user is able to manipulate a mobile phone using 19 predefined gestures or even personalized ones. Results suggest that the developed prototype responded to each gesture instruction within 300 ms on the mobile phone, with the average accuracy of 95.0% in user-dependent testing and 89.6% in user-independent testing. Such performance during the interaction testing, along with positive user experience questionnaire feedback, demonstrates the utility of the framework.",
"title": ""
},
{
"docid": "3309e09d16e74f87a507181bd82cd7f0",
"text": "The goal of this work is to overview and summarize the grasping taxonomies reported in the literature. Our long term goal is to understand how to reduce mechanical complexity of anthropomorphic hands and still preserve their dexterity. On the basis of a literature survey, 33 different grasp types are taken into account. They were then arranged in a hierarchical manner, resulting in 17 grasp types.",
"title": ""
},
{
"docid": "03be8a60e1285d62c34b982ddf1bcf58",
"text": "A cognitive map has long been the dominant metaphor for hippocampal function, embracing the idea that place cells encode a geometric representation of space. However, evidence for predictive coding, reward sensitivity and policy dependence in place cells suggests that the representation is not purely spatial. We approach this puzzle from a reinforcement learning perspective: what kind of spatial representation is most useful for maximizing future reward? We show that the answer takes the form of a predictive representation. This representation captures many aspects of place cell responses that fall outside the traditional view of a cognitive map. Furthermore, we argue that entorhinal grid cells encode a low-dimensionality basis set for the predictive representation, useful for suppressing noise in predictions and extracting multiscale structure for hierarchical planning.",
"title": ""
},
{
"docid": "ead196a54f4ea7b5a1fe4b5b85f0b2c6",
"text": "Supervised machine learning and opinion lexicon are the most frequent approaches for opinion mining, but they require considerable effort to prepare the training data and to build the opinion lexicon, respectively. In this paper, a novel unsupervised clustering approach is proposed for opinion mining. Three swarm algorithms based on Particle Swarm Optimization are evaluated using three corpora with different levels of complexity with respect to size, number of opinions, domains, languages, and class balancing. K-means and Agglomerative clustering algorithms, as well as, the Artificial Bee Colony and Cuckoo Search swarm-based algorithms were selected for comparison. The proposed swarm-based algorithms achieved better accuracy using the word bigram feature model as the pre-processing technique, the Global Silhouette as optimization function, and on datasets with two classes: positive and negative. Although the swarm-based algorithms obtained lower result for datasets with three classes, they are still competitive considering that neither labeled data, nor opinion lexicons are required for the opinion clustering approach.",
"title": ""
},
{
"docid": "9cedc3f1a04fa51fb8ce1cf0cf01fbc3",
"text": "OBJECTIVES:The objective of this study was to provide updated explicit and relevant consensus statements for clinicians to refer to when managing hospitalized adult patients with acute severe ulcerative colitis (UC).METHODS:The Canadian Association of Gastroenterology consensus group of 23 voting participants developed a series of recommendation statements that addressed pertinent clinical questions. An iterative voting and feedback process was used to do this in conjunction with systematic literature reviews. These statements were brought to a formal consensus meeting held in Toronto, Ontario (March 2010), when each statement was discussed, reformulated, voted upon, and subsequently revised until group consensus (at least 80% agreement) was obtained. The modified GRADE (Grading of Recommendations Assessment, Development, and Evaluation) criteria were used to rate the strength of recommendations and the quality of evidence.RESULTS:As a result of the iterative process, consensus was reached on 21 statements addressing four themes (General considerations and nutritional issues, Steroid use and predictors of steroid failure, Cyclosporine and infliximab, and Surgical issues).CONCLUSIONS:Key recommendations for the treatment of hospitalized patients with severe UC include early escalation to second-line medical therapy with either infliximab or cyclosporine in individuals in whom parenteral steroids have failed after 72 h. These agents should be used in experienced centers where appropriate support is available. Sequential therapy with cyclosporine and infliximab is not recommended. Surgery is an option when first-line steroid therapy fails, and is indicated when second-line medical therapy fails and/or when complications arise during the hospitalization.",
"title": ""
},
{
"docid": "d504185046e8f51c65a25e448598a2b9",
"text": "The improved version of a broadband planar magic-T using microstrip-slotline transitions is presented. The design implements a small microstrip-slotline tee junction with minimum size slotline terminations to reduce radiation loss. A multisection impedance transformation network is used to increase the operating bandwidth and minimize the parasitic coupling around the microstrip-slotline tee junction. As a result, the improved magic-T has greater bandwidth and lower phase imbalance at the sum and difference ports than the earlier magic-T design. The experimental results show that the 10-GHz magic-T provides more than 70% of 1-dB operating bandwidth with the average in-band insertion loss of less than 0.6 dB. It also has phase and amplitude imbalance of less than plusmn1deg and plusmn0.25 dB, respectively.",
"title": ""
},
{
"docid": "ec18c088e0068c58410bf427528aa8e4",
"text": "Abnormal accounting accruals are unusually high around stock offers, especially high for firms whose offers subsequently attract lawsuits. Accruals tend to reverse after stock offers and are negatively related to post-offer stock returns. Reversals are more pronounced and stock returns are lower for sued firms than for those that are not sued. The incidence of lawsuits involving stock offers and settlement amounts are significantly positively related to abnormal accruals around the offer and significantly negatively related to post-offer stock returns. Our results support the view that some firms opportunistically manipulate earnings upward before stock issues rendering themselves vulnerable to litigation. r 2003 Elsevier B.V. All rights reserved. JEL classification: G14; G24; G32; K22; M41",
"title": ""
},
{
"docid": "872ef59b5bec5f6cbb9fcb206b6fe49e",
"text": "In this paper, the analysis and design of a three-level LLC series resonant converter (TL LLC SRC) for high- and wide-input-voltage applications is presented. The TL LLC SRC discussed in this paper consists of two half-bridge LLC SRCs in series, sharing a resonant inductor and a transformer. Its main advantages are that the voltage across each switch is clamped at half of the input voltage and that voltage balance is achieved. Thus, it is suitable for high-input-voltage applications. Moreover, due to its simple driving signals, the additional circulating current of the conventional TL LLC SRCs does not appear in the converter, and a simpler driving circuitry is allowed to be designed. With this converter, the operation principles, the gain of the LLC resonant tank, and the zero-voltage-switching condition under wide input voltage variation are analyzed. Both the current and voltage stresses over different design factors of the resonant tank are discussed as well. Based on the results of these analyses, a design example is provided and its validity is confirmed by an experiment involving a prototype converter with an input of 400-600 V and an output of 48 V/20 A. In addition, a family of TL LLC SRCs with double-resonant tanks for high-input-voltage applications is introduced. While this paper deals with a TL LLC SRC, the analysis results can be applied to other TL LLC SRCs for wide-input-voltage applications.",
"title": ""
},
{
"docid": "bb7ba369cd3baf1f5ba26aef7b5574fb",
"text": "Static computer vision techniques enable non-intrusive observation and analysis of biometrics such as eye blinks. However, ambiguous eye behaviors such as partial blinks and asymmetric eyelid movements present problems that computer vision techniques relying on static appearance alone cannot solve reliably. Image flow analysis enables reliable and efficient interpretation of these ambiguous eye blink behaviors. In this paper we present a method for using image flow analysis to compute problematic eye blink parameters. The flow analysis produces the magnitude and direction of the eyelid movement. A deterministic finite state machine uses the eyelid movement data to compute blink parameters (e.g., blink count, blink rate, and other transitional statistics) for use in human computer interaction applications across a wide range of disciplines. We conducted extensive experiments employing this method on approximately 750K color video frames of five subjects",
"title": ""
},
{
"docid": "153d33d03e34e85986227e05f2b80e34",
"text": "I,c~arriilig in complex, changing erivironrrlents requires rrltlt hods that are able to tolerate noise (less than perIixc t I:edback) a11d &if1 ( concepts that change over time). ‘I‘tic5e two aspecfs of complex environments iriteract with cm.h oltier: w heri some particular learned predictor fails to corrc~cl,ly predict the expected outcome (or when the out(‘011ic’ oc’c urs without havitig been preceded by the learIled prtlclic.Lor), a learner must, be able to dcterrriine whethet 1 t\\ts siLuaL.ion is an iIistan<e of noise or an irldication ttlat t ht> c.of~c.t~pL is beginning to drift. We j)rescnt, a learning trlt~lhoci that, is able to learrl complex Boolean characteriz.aliollb while tolerating noise and drift. An analysis of I tit, aIgorit,hrri illust.rates why it teas these desirable bellavions, a.r~ti tmpirical results from an irrij)lerrlcntatiorl (called S’I‘A( ;( ;EI~) are prc:seI~!mi to show its ability to t,rack changirrg coiiccpts over time.",
"title": ""
}
] |
scidocsrr
|
4e1af4fe8608454cfeadc2805bc52569
|
A Neural Probabilistic Model for Context Based Citation Recommendation
|
[
{
"docid": "908baa7a1004a372f1e8e42f037e0501",
"text": "Scientists depend on literature search to find prior work that is relevant to their research ideas. We introduce a retrieval model for literature search that incorporates a wide variety of factors important to researchers, and learns the weights of each of these factors by observing citation patterns. We introduce features like topical similarity and author behavioral patterns, and combine these with features from related work like citation count and recency of publication. We present an iterative process for learning weights for these features that alternates between retrieving articles with the current retrieval model, and updating model weights by training a supervised classifier on these articles. We propose a new task for evaluating the resulting retrieval models, where the retrieval system takes only an abstract as its input and must produce as output the list of references at the end of the abstract's article. We evaluate our model on a collection of journal, conference and workshop articles from the ACL Anthology Reference Corpus. Our model achieves a mean average precision of 28.7, a 12.8 point improvement over a term similarity baseline, and a significant improvement both over models using only features from related work and over models without our iterative learning.",
"title": ""
},
{
"docid": "062c970a14ac0715ccf96cee464a4fec",
"text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"title": ""
}
] |
[
{
"docid": "d36a69538293e384d64c905c678f4944",
"text": "Many studies have investigated factors that affect susceptibility to false memories. However, few have investigated the role of sleep deprivation in the formation of false memories, despite overwhelming evidence that sleep deprivation impairs cognitive function. We examined the relationship between self-reported sleep duration and false memories and the effect of 24 hr of total sleep deprivation on susceptibility to false memories. We found that under certain conditions, sleep deprivation can increase the risk of developing false memories. Specifically, sleep deprivation increased false memories in a misinformation task when participants were sleep deprived during event encoding, but did not have a significant effect when the deprivation occurred after event encoding. These experiments are the first to investigate the effect of sleep deprivation on susceptibility to false memories, which can have dire consequences.",
"title": ""
},
{
"docid": "e0c83197770752c9fdfe5e51edcd3d46",
"text": "In the last decade, it has become obvious that Alzheimer's disease (AD) is closely linked to changes in lipids or lipid metabolism. One of the main pathological hallmarks of AD is amyloid-β (Aβ) deposition. Aβ is derived from sequential proteolytic processing of the amyloid precursor protein (APP). Interestingly, both, the APP and all APP secretases are transmembrane proteins that cleave APP close to and in the lipid bilayer. Moreover, apoE4 has been identified as the most prevalent genetic risk factor for AD. ApoE is the main lipoprotein in the brain, which has an abundant role in the transport of lipids and brain lipid metabolism. Several lipidomic approaches revealed changes in the lipid levels of cerebrospinal fluid or in post mortem AD brains. Here, we review the impact of apoE and lipids in AD, focusing on the major brain lipid classes, sphingomyelin, plasmalogens, gangliosides, sulfatides, DHA, and EPA, as well as on lipid signaling molecules, like ceramide and sphingosine-1-phosphate. As nutritional approaches showed limited beneficial effects in clinical studies, the opportunities of combining different supplements in multi-nutritional approaches are discussed and summarized.",
"title": ""
},
{
"docid": "3d89b509ab12e41eb54b7b6800e5c785",
"text": "We have constructed a new “Who-did-What” dataset of over 200,000 fill-in-the-gap (cloze) multiple choice reading comprehension problems constructed from the LDC English Gigaword newswire corpus. The WDW dataset has a variety of novel features. First, in contrast with the CNN and Daily Mail datasets (Hermann et al., 2015) we avoid using article summaries for question formation. Instead, each problem is formed from two independent articles — an article given as the passage to be read and a separate article on the same events used to form the question. Second, we avoid anonymization — each choice is a person named entity. Third, the problems have been filtered to remove a fraction that are easily solved by simple baselines, while remaining 84% solvable by humans. We report performance benchmarks of standard systems and propose the WDW dataset as a challenge task for the community.1",
"title": ""
},
{
"docid": "d7bb22eefbff0a472d3e394c61788be2",
"text": "Crowd evacuation of a building has been studied over the last decades. In this paper, seven methodological approaches for crowd evacuation have been identified. These approaches include cellular automata models, lattice gas models, social force models, fluid-dynamic models, agent-based models, game theoretic models, and approaches based on experiments with animals. According to available literatures, we discuss the advantages and disadvantages of these approaches, and conclude that a variety of different kinds of approaches should be combined to study crowd evacuation. Psychological and physiological elements affecting individual and collective behaviors should be also incorporated into the evacuation models. & 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "982d7d2d65cddba4fa7dac3c2c920790",
"text": "In this paper, we present our multichannel neural architecture for recognizing emerging named entity in social media messages, which we applied in the Novel and Emerging Named Entity Recognition shared task at the EMNLP 2017 Workshop on Noisy User-generated Text (W-NUT). We propose a novel approach, which incorporates comprehensive word representations with multichannel information and Conditional Random Fields (CRF) into a traditional Bidirectional Long Short-Term Memory (BiLSTM) neural network without using any additional hand-crafted features such as gazetteers. In comparison with other systems participating in the shared task, our system won the 3rd place in terms of the average of two evaluation metrics.",
"title": ""
},
{
"docid": "fbddd20271cf134e15b33e7d6201c374",
"text": "Authors and publishers who wish their publications to be considered for review in Computational Linguistics should send a copy to the book review editor, Graeme Hirst, Department of Computer Science, University of Toronto, Toronto, Canada M5S 3G4. All relevant books received will be listed, but not all can be reviewed. Technical reports (other than dissertations) will not be listed or reviewed. Authors should be aware that some publishers will not send books for review (even when instructed to do so); authors wishing to enquire as to whether their book has been received for review may contact the book review editor.",
"title": ""
},
{
"docid": "e7eae4ab0859d66acbf435f2430a63a1",
"text": "Voice recognition technology-enabled devices possess extraordinary growth potential, yet some research indicates that organizations and consumers are resisting their adoption. This study investigates the implementation of a voice recognition device in the United States Navy. Grounded in the social psychology and information systems literature, the researchers adapted instruments and developed a tool to explain technology adoption in this environment. Using factor analysis and structural equation modeling, analysis of data from the 270 participants explained almost 90% of the variance in the model. This research adapts the technology acceptance model by adding elements of the theory of planned behavior, providing researchers and practitioners with a valuable instrument to predict technology adoption.",
"title": ""
},
{
"docid": "5481f319296c007412e62129d2ec5943",
"text": "We propose a new family of optimization criteria for variational auto-encoding models, generalizing the standard evidence lower bound. We provide conditions under which they recover the data distribution and learn latent features, and formally show that common issues such as blurry samples and uninformative latent features arise when these conditions are not met. Based on these new insights, we propose a new sequential VAE model that can generate sharp samples on the LSUN image dataset based on pixel-wise reconstruction loss, and propose an optimization criterion that encourages unsupervised learning of informative latent features.",
"title": ""
},
{
"docid": "db0c7a200d76230740e027c2966b066c",
"text": "BACKGROUND\nPromotion and provision of low-cost technologies that enable improved water, sanitation, and hygiene (WASH) practices are seen as viable solutions for reducing high rates of morbidity and mortality due to enteric illnesses in low-income countries. A number of theoretical models, explanatory frameworks, and decision-making models have emerged which attempt to guide behaviour change interventions related to WASH. The design and evaluation of such interventions would benefit from a synthesis of this body of theory informing WASH behaviour change and maintenance.\n\n\nMETHODS\nWe completed a systematic review of existing models and frameworks through a search of related articles available in PubMed and in the grey literature. Information on the organization of behavioural determinants was extracted from the references that fulfilled the selection criteria and synthesized. Results from this synthesis were combined with other relevant literature, and from feedback through concurrent formative and pilot research conducted in the context of two cluster-randomized trials on the efficacy of WASH behaviour change interventions to inform the development of a framework to guide the development and evaluation of WASH interventions: the Integrated Behavioural Model for Water, Sanitation, and Hygiene (IBM-WASH).\n\n\nRESULTS\nWe identified 15 WASH-specific theoretical models, behaviour change frameworks, or programmatic models, of which 9 addressed our review questions. Existing models under-represented the potential role of technology in influencing behavioural outcomes, focused on individual-level behavioural determinants, and had largely ignored the role of the physical and natural environment. IBM-WASH attempts to correct this by acknowledging three dimensions (Contextual Factors, Psychosocial Factors, and Technology Factors) that operate on five-levels (structural, community, household, individual, and habitual).\n\n\nCONCLUSIONS\nA number of WASH-specific models and frameworks exist, yet with some limitations. The IBM-WASH model aims to provide both a conceptual and practical tool for improving our understanding and evaluation of the multi-level multi-dimensional factors that influence water, sanitation, and hygiene practices in infrastructure-constrained settings. We outline future applications of our proposed model as well as future research priorities needed to advance our understanding of the sustained adoption of water, sanitation, and hygiene technologies and practices.",
"title": ""
},
{
"docid": "d90407926b8dc5454902875d66b2404b",
"text": "In many machine learning tasks it is desirable that a model's prediction transforms in an equivariant way under transformations of its input. Convolutional neural networks (CNNs) implement translational equivariance by construction; for other transformations, however, they are compelled to learn the proper mapping. In this work, we develop Steerable Filter CNNs (SFCNNs) which achieve joint equivariance under translations and rotations by design. The proposed architecture employs steerable filters to efficiently compute orientation dependent responses for many orientations without suffering interpolation artifacts from filter rotation. We utilize group convolutions which guarantee an equivariant mapping. In addition, we generalize He's weight initialization scheme to filters which are defined as a linear combination of a system of atomic filters. Numerical experiments show a substantial enhancement of the sample complexity with a growing number of sampled filter orientations and confirm that the network generalizes learned patterns over orientations. The proposed approach achieves state-of-the-art on the rotated MNIST benchmark and on the ISBI 2012 2D EM segmentation challenge.",
"title": ""
},
{
"docid": "95903410bc39b26e44f6ea80ad85e182",
"text": "We propose distributed deep neural networks (DDNNs) over distributed computing hierarchies, consisting of the cloud, the edge (fog) and end devices. While being able to accommodate inference of a deep neural network (DNN) in the cloud, a DDNN also allows fast and localized inference using shallow portions of the neural network at the edge and end devices. When supported by a scalable distributed computing hierarchy, a DDNN can scale up in neural network size and scale out in geographical span. Due to its distributed nature, DDNNs enhance sensor fusion, system fault tolerance and data privacy for DNN applications. In implementing a DDNN, we map sections of a DNN onto a distributed computing hierarchy. By jointly training these sections, we minimize communication and resource usage for devices and maximize usefulness of extracted features which are utilized in the cloud. The resulting system has built-in support for automatic sensor fusion and fault tolerance. As a proof of concept, we show a DDNN can exploit geographical diversity of sensors to improve object recognition accuracy and reduce communication cost. In our experiment, compared with the traditional method of offloading raw sensor data to be processed in the cloud, DDNN locally processes most sensor data on end devices while achieving high accuracy and is able to reduce the communication cost by a factor of over 20x.",
"title": ""
},
{
"docid": "38693524e69d494b95c311840d599c93",
"text": "To avoid a sarcastic message being understood in its unintended literal meaning, in microtexts such as messages on Twitter.com sarcasm is often explicitly marked with the hashtag ‘#sarcasm’. We collected a training corpus of about 78 thousand Dutch tweets with this hashtag. Assuming that the human labeling is correct (annotation of a sample indicates that about 85% of these tweets are indeed sarcastic), we train a machine learning classifier on the harvested examples, and apply it to a test set of a day’s stream of 3.3 million Dutch tweets. Of the 135 explicitly marked tweets on this day, we detect 101 (75%) when we remove the hashtag. We annotate the top of the ranked list of tweets most likely to be sarcastic that do not have the explicit hashtag. 30% of the top-250 ranked tweets are indeed sarcastic. Analysis shows that sarcasm is often signalled by hyperbole, using intensifiers and exclamations; in contrast, non-hyperbolic sarcastic messages often receive an explicit marker. We hypothesize that explicit markers such as hashtags are the digital extralinguistic equivalent of nonverbal expressions that people employ in live interaction when conveying sarcasm.",
"title": ""
},
{
"docid": "14839c18d1029270174e9f94d122edd5",
"text": "Nested event structures are a common occurrence in both open domain and domain specific extraction tasks, e.g., a “crime” event can cause a “investigation” event, which can lead to an “arrest” event. However, most current approaches address event extraction with highly local models that extract each event and argument independently. We propose a simple approach for the extraction of such structures by taking the tree of event-argument relations and using it directly as the representation in a reranking dependency parser. This provides a simple framework that captures global properties of both nested and flat event structures. We explore a rich feature space that models both the events to be parsed and context from the original supporting text. Our approach obtains competitive results in the extraction of biomedical events from the BioNLP’09 shared task with a F1 score of 53.5% in development and 48.6% in testing.",
"title": ""
},
{
"docid": "d087b127025074c48477b964c9c2483a",
"text": "In this letter, a 77-GHz transmitter (TX) with a 12.8-GHz phase-locked-loop (PLL) and a $\\times$ 6 frequency multiplier is presented for a FMCW radar sensor in a 65-nm CMOS process. To realize the low-phase-noise TX, a voltage controlled oscillator (VCO) with an excellent phase noise performance at a lower fundamental frequency (12.8 GHz) is designed and scaled up ( $\\times$ 6) for the desired target frequency (77 GHz). The measured FMCW modulation range with an external triangular chirp signal (1-ms sweep time) is 601 MHz. The output power and the total DC power consumption of the TX are 8.9 dBm and 116.7 mW, respectively. Here, a good phase noise level of -91.16 dBc/Hz at a 1-MHz offset frequency from a 76.81-GHz carrier is achieved.",
"title": ""
},
{
"docid": "97f8b8ee60e3f03e64833a16aaf5e743",
"text": "OBJECTIVE\nA pilot randomized controlled trial (RCT) of the effectiveness of occupational therapy using a sensory integration approach (OT-SI) was conducted with children who had sensory modulation disorders (SMDs). This study evaluated the effectiveness of three treatment groups. In addition, sample size estimates for a large scale, multisite RCT were calculated.\n\n\nMETHOD\nTwenty-four children with SMD were randomly assigned to one of three treatment conditions; OT-SI, Activity Protocol, and No Treatment. Pretest and posttest measures of behavior, sensory and adaptive functioning, and physiology were administered.\n\n\nRESULTS\nThe OT-SI group, compared to the other two groups, made significant gains on goal attainment scaling and on the Attention subtest and the Cognitive/Social composite of the Leiter International Performance Scale-Revised. Compared to the control groups, OT-SI improvement trends on the Short Sensory Profile, Child Behavior Checklist, and electrodermal reactivity were in the hypothesized direction.\n\n\nCONCLUSION\nFindings suggest that OT-SI may be effective in ameliorating difficulties of children with SMD.",
"title": ""
},
{
"docid": "ca468aa680c29fb00f55e9d851676200",
"text": "The class of problems involving the random generation of combinatorial structures from a uniform distribution is considered. Uniform generation problems are, in computational difficulty, intermediate between classical existence and counting problems. It is shown that exactly uniform generation of 'efficiently verifiable' combinatorial structures is reducible to approximate counting (and hence, is within the third level of the polynomial hierarchy). Natural combinatorial problems are presented which exhibit complexity gaps between their existence and generation, and between their generation and counting versions. It is further shown that for self-reducible problems, almost uniform generation and randomized approximate counting are inter-reducible, and hence, of similar complexity. CR Categories. F.I.1, F.1.3, G.2.1, G.3",
"title": ""
},
{
"docid": "37a8fe29046ec94d54e62f202a961129",
"text": "Detection of salient image regions is useful for applications like image segmentation, adaptive compression, and region-based image retrieval. In this paper we present a novel method to determine salient regions in images using low-level features of luminance and color. The method is fast, easy to implement and generates high quality saliency maps of the same size and resolution as the input image. We demonstrate the use of the algorithm in the segmentation of semantically meaningful whole objects from digital images.",
"title": ""
},
{
"docid": "542d17b1f1437420a003895f9ca16406",
"text": "This paper discusses the Correntropy Induced Metric (CIM) based Growing Neural Gas (GNG) architecture. CIM is a kernel method based similarity measurement from the information theoretic learning perspective, which quantifies the similarity between probability distributions of input and reference vectors. We apply CIM to find a maximum error region and node insert criterion, instead of euclidean distance based function in original GNG. Furthermore, we introduce the two types of Gaussian kernel bandwidth adaptation methods for CIM. The simulation experiments in terms of the affect of kernel bandwidth σ in CIM, the self-organizing ability, and the quantitative comparison show that proposed model has the superior abilities than original GNG.",
"title": ""
},
{
"docid": "0d750d31bcd0a998bd944910e707830c",
"text": "In this paper we focus on estimating the post-click engagement on native ads by predicting the dwell time on the corresponding ad landing pages. To infer relationships between features of the ads and dwell time we resort to the application of survival analysis techniques, which allow us to estimate the distribution of the length of time that the user will spend on the ad. This information is then integrated into the ad ranking function with the goal of promoting the rank of ads that are likely to be clicked and consumed by users (dwell time greater than a given threshold). The online evaluation over live tra c shows that considering post-click engagement has a consistent positive e↵ect on both CTR, decreases the number of bounces and increases the average dwell time, hence leading to a better user post-click experience.",
"title": ""
},
{
"docid": "3bcb57af56157f974f1acac7a5c09d95",
"text": "During the past 70+ years of research and development in the domain of Artificial Intelligence (AI) we observe three principal, historical waves: embryonic, embedded and embodied AI. As the first two waves have demonstrated huge potential to seed new technologies and provide tangible business results, we describe likely developments of embodied AI in the next 25-35 years. We postulate that the famous Turing Test was a noble goal for AI scientists, making key, historical inroads - while we believe that Biological Systems Intelligence and the Insect/Swarm Intelligence analogy/mimicry, though largely disregarded, represents the key to further developments. We describe briefly the key lines of past and ongoing research, and outline likely future developments in this remarkable field.",
"title": ""
}
] |
scidocsrr
|
4e96acf21ce5f9c02e1664d7ee6b5eb5
|
THE EFFECT OF BRAND IMAGE ON OVERALL SATISFACTION AND LOYALTY INTENTION IN THE CONTEXT OF COLOR COSMETIC
|
[
{
"docid": "3da6fadaf2363545dfd0cea87fe2b5da",
"text": "It is a marketplace reality that marketing managers sometimes inflict switching costs on their customers, to inhibit them from defecting to new suppliers. In a competitive setting, such as the Internet market, where competition may be only one click away, has the potential of switching costs as an exit barrier and a binding ingredient of customer loyalty become altered? To address that issue, this article examines the moderating effects of switching costs on customer loyalty through both satisfaction and perceived-value measures. The results, evoked from a Web-based survey of online service users, indicate that companies that strive for customer loyalty should focus primarily on satisfaction and perceived value. The moderating effects of switching costs on the association of customer loyalty and customer satisfaction and perceived value are significant only when the level of customer satisfaction or perceived value is above average. In light of the major findings, the article sets forth strategic implications for customer loyalty in the setting of electronic commerce. © 2004 Wiley Periodicals, Inc. In the consumer marketing community, customer loyalty has long been regarded as an important goal (Reichheld & Schefter, 2000). Both marketing academics and professionals have attempted to uncover the most prominent antecedents of customer loyalty. Numerous studies have Psychology & Marketing, Vol. 21(10):799–822 (October 2004) Published online in Wiley InterScience (www.interscience.wiley.com) © 2004 Wiley Periodicals, Inc. DOI: 10.1002/mar.20030",
"title": ""
},
{
"docid": "80ce6c8c9fc4bf0382c5f01d1dace337",
"text": "Customer loyalty is viewed as the strength of the relationship between an individual's relative attitude and repeat patronage. The relationship is seen as mediated by social norms and situational factors. Cognitive, affective, and conative antecedents of relative attitude are identified as contributing to loyalty, along with motivational, perceptual, and behavioral consequences. Implications for research and for the management of loyalty are derived.",
"title": ""
}
] |
[
{
"docid": "13774d2655f2f0ac575e11991eae0972",
"text": "This paper considers regularized block multiconvex optimization, where the feasible set and objective function are generally nonconvex but convex in each block of variables. It also accepts nonconvex blocks and requires these blocks to be updated by proximal minimization. We review some interesting applications and propose a generalized block coordinate descent method. Under certain conditions, we show that any limit point satisfies the Nash equilibrium conditions. Furthermore, we establish global convergence and estimate the asymptotic convergence rate of the method by assuming a property based on the Kurdyka– Lojasiewicz inequality. The proposed algorithms are tested on nonnegative matrix and tensor factorization, as well as matrix and tensor recovery from incomplete observations. The tests include synthetic data and hyperspectral data, as well as image sets from the CBCL and ORL databases. Compared to the existing state-of-the-art algorithms, the proposed algorithms demonstrate superior performance in both speed and solution quality. The MATLAB code of nonnegative matrix/tensor decomposition and completion, along with a few demos, are accessible from the authors’ homepages.",
"title": ""
},
{
"docid": "513224bb1034217b058179f3805dd37f",
"text": "Existing work on subgraph isomorphism search mainly focuses on a-query-at-a-time approaches: optimizing and answering each query separately. When multiple queries arrive at the same time, sequential processing is not always the most efficient. In this paper, we study multi-query optimization for subgraph isomorphism search. We first propose a novel method for efficiently detecting useful common subgraphs and a data structure to organize them. Then we propose a heuristic algorithm based on the data structure to compute a query execution order so that cached intermediate results can be effectively utilized. To balance memory usage and the time for cached results retrieval, we present a novel structure for caching the intermediate results. We provide strategies to revise existing single-query subgraph isomorphism algorithms to seamlessly utilize the cached results, which leads to significant performance improvement. Extensive experiments verified the effectiveness of our solution.",
"title": ""
},
{
"docid": "fbfb6b7cb2dc3e774197c470c55a928b",
"text": "The integrated modular avionics (IMA) architectures have ushered in a new wave of thought regarding avionics integration. IMA architectures utilize shared, configurable computing, communication, and I/O resources. These architectures allow avionics system integrators to benefit from increased system scalability, as well as from a form of platform management that reduces the workload for aircraft-level avionics integration activities. In order to realize these architectural benefits, the avionics suppliers must engage in new philosophies for sharing a set of system-level resources that are managed a level higher than each individual avionics system. The mechanisms for configuring and managing these shared intersystem resources are integral to managing the increased level of avionics integration that is inherent to the IMA architectures. This paper provides guidance for developing the methodology and tools to efficiently manage the set of shared intersystem resources. This guidance is based upon the author's experience in developing the Genesis IMA architecture at Smiths Aerospace. The Genesis IMA architecture was implemented on the Boeing 787 Dreamliner as the common core system (CCS)",
"title": ""
},
{
"docid": "401cb3ebbc226ae117303f6a6bb6714c",
"text": "Brain-related disorders such as epilepsy can be diagnosed by analyzing electroencephalograms (EEG). However, manual analysis of EEG data requires highly trained clinicians, and is a procedure that is known to have relatively low inter-rater agreement (IRA). Moreover, the volume of the data and the rate at which new data becomes available make manual interpretation a time-consuming, resource-hungry, and expensive process. In contrast, automated analysis of EEG data offers the potential to improve the quality of patient care by shortening the time to diagnosis and reducing manual error. In this paper, we focus on one of the first steps in interpreting an EEG session identifying whether the brain activity is abnormal or normal. To address this specific task, we propose a novel recurrent neural network (RNN) architecture termed ChronoNet which is inspired by recent developments from the field of image classification and designed to work efficiently with EEG data. ChronoNet is formed by stacking multiple 1D convolution layers followed by deep gated recurrent unit (GRU) layers where each 1D convolution layer uses multiple filters of exponentially varying lengths and the stacked GRU layers are densely connected in a feed-forward manner. We used the recently released TUH Abnormal EEG Corpus dataset for evaluating the performance of ChronoNet. Unlike previous studies using this dataset, ChronoNet directly takes time-series EEG as input and learns meaningful representations of brain activity patterns. ChronoNet outperforms previously reported results on this dataset thereby setting a new benchmark. Furthermore, we demonstrate the domain-independent nature of ChronoNet by successfully applying it to classify speech commands.",
"title": ""
},
{
"docid": "357ff730c3d0f8faabe1fa14d4b04463",
"text": "In this paper, we propose a novel two-stage video captioning framework composed of 1) a multi-channel video encoder and 2) a sentence-generating language decoder. Both of the encoder and decoder are based on recurrent neural networks with long-short-term-memory cells. Our system can take videos of arbitrary lengths as input. Compared with the previous sequence-to-sequence video captioning frameworks, the proposed model is able to handle multiple channels of video representations and jointly learn how to combine them. The proposed model is evaluated on two large-scale movie datasets (MPII Corpus and Montreal Video Description) and one YouTube dataset (Microsoft Video Description Corpus) and achieves the state-of-the-art performances. Furthermore, we extend the proposed model towards automatic American Sign Language recognition. To evaluate the performance of our model on this novel application, a new dataset for ASL video description is collected based on YouTube videos. Results on this dataset indicate that the proposed framework on ASL recognition is promising and will significantly benefit the independent communication between ASL users and",
"title": ""
},
{
"docid": "3bcf0e33007feb67b482247ef6702901",
"text": "Bitcoin is a popular cryptocurrency that records all transactions in a distributed append-only public ledger called blockchain. The security of Bitcoin heavily relies on the incentive-compatible proof-of-work (PoW) based distributed consensus protocol, which is run by the network nodes called miners. In exchange for the incentive, the miners are expected to maintain the blockchain honestly. Since its launch in 2009, Bitcoin economy has grown at an enormous rate, and it is now worth about 150 billions of dollars. This exponential growth in the market value of bitcoins motivate adversaries to exploit weaknesses for profit, and researchers to discover new vulnerabilities in the system, propose countermeasures, and predict upcoming trends. In this paper, we present a systematic survey that covers the security and privacy aspects of Bitcoin. We start by giving an overview of the Bitcoin system and its major components along with their functionality and interactions within the system. We review the existing vulnerabilities in Bitcoin and its major underlying technologies such as blockchain and PoW-based consensus protocol. These vulnerabilities lead to the execution of various security threats to the standard functionality of Bitcoin. We then investigate the feasibility and robustness of the state-of-the-art security solutions. Additionally, we discuss the current anonymity considerations in Bitcoin and the privacy-related threats to Bitcoin users along with the analysis of the existing privacy-preserving solutions. Finally, we summarize the critical open challenges, and we suggest directions for future research towards provisioning stringent security and privacy solutions for Bitcoin.",
"title": ""
},
{
"docid": "3ae9da3a27b00fb60f9e8771de7355fe",
"text": "In the past decade, graph-based structures have penetrated nearly every aspect of our lives. The detection of anomalies in these networks has become increasingly important, such as in exposing infected endpoints in computer networks or identifying socialbots. In this study, we present a novel unsupervised two-layered meta-classifier that can detect irregular vertices in complex networks solely by utilizing topology-based features. Following the reasoning that a vertex with many improbable links has a higher likelihood of being anomalous, we applied our method on 10 networks of various scales, from a network of several dozen students to online networks with millions of vertices. In every scenario, we succeeded in identifying anomalous vertices with lower false positive rates and higher AUCs compared to other prevalent methods. Moreover, we demonstrated that the presented algorithm is generic, and efficient both in revealing fake users and in disclosing the influential people in social networks.",
"title": ""
},
{
"docid": "301cf9a13184f2e7587f16b3de16222d",
"text": "Recently, highly accurate positioning devices enable us to provide various types of location-based services. On the other hand, because position data obtained by such devices include deeply personal information, protection of location privacy is one of the most significant issues of location-based services. Therefore, we propose a technique to anonymize position data. In our proposed technique, the psrsonal user of a location-based service generates several false position data (dummies) sent to the service provider with the true position data of the user. Because the service provider cannot distinguish the true position data, the user’s location privacy is protected. We conducted performance study experiments on our proposed technique using practical trajectory data. As a result of the experiments, we observed that our proposed technique protects the location privacy of users.",
"title": ""
},
{
"docid": "58b5c0628b2b964aa75d65a241f028d7",
"text": "This paper reports on the development and formal certification (proof of semantic preservation) of a compiler from Cminor (a C-like imperative language) to PowerPC assembly code, using the Coq proof assistant both for programming the compiler and for proving its correctness. Such a certified compiler is useful in the context of formal methods applied to the certification of critical software: the certification of the compiler guarantees that the safety properties proved on the source code hold for the executable compiled code as well.",
"title": ""
},
{
"docid": "4ee6894fade929db82af9cb62fecc0f9",
"text": "Federated learning is a recent advance in privacy protection. In this context, a trusted curator aggregates parameters optimized in decentralized fashion by multiple clients. The resulting model is then distributed back to all clients, ultimately converging to a joint representative model without explicitly having to share the data. However, the protocol is vulnerable to differential attacks, which could originate from any party contributing during federated optimization. In such an attack, a client’s contribution during training and information about their data set is revealed through analyzing the distributed model. We tackle this problem and propose an algorithm for client sided differential privacy preserving federated optimization. The aim is to hide clients’ contributions during training, balancing the trade-off between privacy loss and model performance. Empirical studies suggest that given a sufficiently large number of participating clients, our proposed procedure can maintain client-level differential privacy at only a minor cost in model performance.",
"title": ""
},
{
"docid": "a7e6a2145b9ae7ca2801a3df01f42f5e",
"text": "The aim of this systematic review was to compare the clinical performance and failure modes of teeth restored with intra-radicular retainers. A search was performed on PubMed/Medline, Central and ClinicalTrials databases for randomized clinical trials comparing clinical behavior and failures of at least two types of retainers. From 341 detected papers, 16 were selected for full-text analysis, of which 9 met the eligibility criteria. A manual search added 2 more studies, totalizing 11 studies that were included in this review. Evaluated retainers were fiber (prefabricated and customized) and metal (prefabricated and cast) posts, and follow-up ranged from 6 months to 10 years. Most studies showed good clinical behavior for evaluated intra-radicular retainers. Reported survival rates varied from 71 to 100% for fiber posts and 50 to 97.1% for metal posts. Studies found no difference in the survival among different metal posts and most studies found no difference between fiber and metal posts. Two studies also showed that remaining dentine height, number of walls and ferrule increased the longevity of the restored teeth. Failures of fiber posts were mainly due to post loss of retention, while metal post failures were mostly related to root fracture, post fracture and crown and/or post loss of retention. In conclusion, metal and fiber posts present similar clinical behavior at short to medium term follow-up. Remaining dental structure and ferrule increase the survival of restored pulpless teeth. Studies with longer follow-up are needed.",
"title": ""
},
{
"docid": "7dd3c935b6a5a38284b36ddc1dc1d368",
"text": "(2012): Mindfulness and self-compassion as predictors of psychological wellbeing in long-term meditators and matched nonmeditators, The Journal of Positive Psychology: This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
},
{
"docid": "da9b9a32db674e5f6366f6b9e2c4ee10",
"text": "We introduce a data-driven approach to aid the repairing and conservation of archaeological objects: ORGAN, an object reconstruction generative adversarial network (GAN). By using an encoder-decoder 3D deep neural network on a GAN architecture, and combining two loss objectives: a completion loss and an Improved Wasserstein GAN loss, we can train a network to effectively predict the missing geometry of damaged objects. As archaeological objects can greatly differ between them, the network is conditioned on a variable, which can be a culture, a region or any metadata of the object. In our results, we show that our method can recover most of the information from damaged objects, even in cases where more than half of the voxels are missing, without producing many errors.",
"title": ""
},
{
"docid": "28f106c6d6458f619cdc89967d5648cd",
"text": "Term graphs constructed from document collections as well as external resources, such as encyclopedias (DBpedia) and knowledge bases (Freebase and ConceptNet), have been individually shown to be effective sources of semantically related terms for query expansion, particularly in case of difficult queries. However, it is not known how they compare with each other in terms of retrieval effectiveness. In this work, we use standard TREC collections to empirically compare the retrieval effectiveness of these types of term graphs for regular and difficult queries. Our results indicate that the term association graphs constructed from document collections using information theoretic measures are nearly as effective as knowledge graphs for Web collections, while the term graphs derived from DBpedia, Freebase and ConceptNet are more effective than term association graphs for newswire collections. We also found out that the term graphs derived from ConceptNet generally outperformed the term graphs derived from DBpedia and Freebase.",
"title": ""
},
{
"docid": "d7b479be278251dab459411628ca1744",
"text": "0950-7051/$ see front matter 2013 Elsevier B.V. A http://dx.doi.org/10.1016/j.knosys.2013.01.018 ⇑ Corresponding author. Tel.: +34 953 213016; fax: E-mail addresses: alberto.fernandez@ujaen.es (A. ugr.es (V. López), mikel.galar@unavarra.es (M. Galar Jesus), herrera@decsai.ugr.es (F. Herrera). The imbalanced class problem is related to the real-world application of classification in engineering. It is characterised by a very different distribution of examples among the classes. The condition of multiple imbalanced classes is more restrictive when the aim of the final system is to obtain the most accurate precision for each of the concepts of the problem. The goal of this work is to provide a thorough experimental analysis that will allow us to determine the behaviour of the different approaches proposed in the specialised literature. First, we will make use of binarization schemes, i.e., one versus one and one versus all, in order to apply the standard approaches to solving binary class imbalanced problems. Second, we will apply several ad hoc procedures which have been designed for the scenario of imbalanced data-sets with multiple classes. This experimental study will include several well-known algorithms from the literature such as decision trees, support vector machines and instance-based learning, with the intention of obtaining global conclusions from different classification paradigms. The extracted findings will be supported by a statistical comparative analysis using more than 20 data-sets from the KEEL repository. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "0a335ec3a17c202e92341b51a90d9f61",
"text": "Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new stateof-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4 on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date.",
"title": ""
},
{
"docid": "e81cffe3f2f716520ede92d482ddab34",
"text": "An active research trend is to exploit the consensus mechanism of cryptocurrencies to secure the execution of distributed applications. In particular, some recent works have proposed fair lotteries which work on Bitcoin. These protocols, however, require a deposit from each player which grows quadratically with the number of players. We propose a fair lottery on Bitcoin which only requires a constant deposit.",
"title": ""
},
{
"docid": "37e904ddffdb9f7eee75b6415efde722",
"text": "Different actors like teachers, course designers and content providers need to gain more information about the way the resources provided with Moodle are used by the students so they can adjust and adapt their offer better. In this contribution we show that Excel Pivot Tables can be used to conduct a flexible analytical processing of usage data and gain valuable information. An advantage of Excel Pivot Tables is that they can be mastered by persons with good IT-skills but not necessarily computer scientists.",
"title": ""
},
{
"docid": "e5f2101e7937c61a4d6b11d4525a7ed8",
"text": "This article reviews an emerging field that aims for autonomous reinforcement learning (RL) directly on sensor-observations. Straightforward end-to-end RL has recently shown remarkable success, but relies on large amounts of samples. As this is not feasible in robotics, we review two approaches to learn intermediate state representations from previous experiences: deep auto-encoders and slow-feature analysis. We analyze theoretical properties of the representations and point to potential improvements.",
"title": ""
},
{
"docid": "1dbaaa804573e9a834616cce38547d8d",
"text": "This paper combines traditional fundamentals, such as earnings and cash flows, with measures tailored for growth firms, such as earnings stability, growth stability and intensity of R&D, capital expenditure and advertising, to create an index – GSCORE. A long–short strategy based on GSCORE earns significant excess returns, though most of the returns come from the short side. Results are robust in partitions of size, analyst following and liquidity and persist after controlling for momentum, book-tomarket, accruals and size. High GSCORE firms have greater market reaction and analyst forecast surprises with respect to future earnings announcements. Further, the results are inconsistent with a riskbased explanation as returns are positive in most years, and firms with lower risk earn higher returns. Finally, a contextual approach towards fundamental analysis works best, with traditional analysis appropriate for high BM stocks and growth oriented fundamental analysis appropriate for low BM stocks.",
"title": ""
}
] |
scidocsrr
|
3484f4181d878358a50d88cd8b4c00fb
|
Efficient and extensible security enforcement using dynamic data flow analysis
|
[
{
"docid": "1a0d0b0b38e6d6434448cee8959c58a8",
"text": "This paper reports the first results of an investigation into solutions to problems of security in computer systems; it establishes the basis for rigorous investigation by providing a general descriptive model of a computer system. Borrowing basic concepts and constructs from general systems theory, we present a basic result concerning security in computer systems, using precise notions of \"security\" and \"compromise\". We also demonstrate how a change in requirements can be reflected in the resulting mathematical model. A lengthy introductory section is included in order to bridge the gap between general systems theory and practical problem solving. ii PREFACE General systems theory is a relatively new and rapidly growing mathematical discipline which shows great promise for application in the computer sciences. The discipline includes both \"general systems-theory\" and \"general-systems-theory\": that is, one may properly read the phrase \"general systems theory\" in both ways. In this paper, we have borrowed from the works of general systems theorists, principally from the basic work of Mesarovic´, to formulate a mathematical framework within which to deal with the problems of secure computer systems. At the present time we feel that the mathematical representation developed herein is adequate to deal with most if not all of the security problems one may wish to pose. In Section III we have given a result which deals with the most trivial of the secure computer systems one might find viable in actual use. In the concluding section we review the application of our mathematical methodology and suggest major areas of concern in the design of a secure system. The results reported in this paper lay the groundwork for further, more specific investigation into secure computer systems. The investigation will proceed by specializing the elements of the model to represent particular aspects of system design and operation. Such an investigation will be reported in the second volume of this series where we assume a system with centralized access control. A preliminary investigation of distributed access is just beginning; the results of that investigation would be reported in a third volume of the series.",
"title": ""
}
] |
[
{
"docid": "26d235dbaa2bfd6bdf81cbd78610b68c",
"text": "In the information systems (IS) domain, technology adoption has been one of the most extensively researched areas. Although in the last decade various models had been introduced to address the acceptance or rejection of information systems, there is still a lack of existing studies regarding a comprehensive review and classification of researches in this area. The main objective of this study is steered toward gaining a comprehensive understanding of the progresses made in the domain of IT adoption research, by highlighting the achievements, setbacks, and prospects recorded in this field so as to be able to identify existing research gaps and prospective areas for future research. This paper aims at providing a comprehensive review on the current state of IT adoption research. A total of 330 articles published in IS ranked journals between the years 2006 and 2015 in the domain of IT adoption were reviewed. The research scope was narrowed to six perspectives, namely year of publication, theories underlining the technology adoption, level of research, dependent variables, context of the technology adoption, and independent variables. In this research, information on trends in IT adoption is provided by examining related research works to provide insights and future direction on technology adoption for practitioners and researchers. This paper highlights future research paths that can be taken by researchers who wish to endeavor in technology adoption research. It also summarizes the key findings of previous research works including statistical findings of factors that had been introduced in IT adoption studies.",
"title": ""
},
{
"docid": "0705cadb5baa97c4995c9b829389810c",
"text": "The production and culture of new species of mushrooms is increasing. The breeding of new strains has significantly improved, allowing the use of strains with high yield and resistance to diseases, increasing productivity and diminishing the use of chemicals for pest control. The improvement and development of modern technologies, such as computerized control, automated mushroom harvesting, preparation of compost, production of mushrooms in a non-composted substrate, and new methods of substrate sterilization and spawn preparation, will increase the productivity of mushroom culture. All these aspects are crucial for the production of mushrooms with better flavor, appearance, texture, nutritional qualities, and medicinal properties at low cost. Mushroom culture is a biotechnological process that recycles ligninocellulosic wastes, since mushrooms are food for human consumption and the spent substrate can be used in different ways.",
"title": ""
},
{
"docid": "69ad93c7b6224321d69456c23a4185ce",
"text": "Modeling fashion compatibility is challenging due to its complexity and subjectivity. Existing work focuses on predicting compatibility between product images (e.g. an image containing a t-shirt and an image containing a pair of jeans). However, these approaches ignore real-world ‘scene’ images (e.g. selfies); such images are hard to deal with due to their complexity, clutter, variations in lighting and pose (etc.) but on the other hand could potentially provide key context (e.g. the user’s body type, or the season) for making more accurate recommendations. In this work, we propose a new task called ‘Complete the Look’, which seeks to recommend visually compatible products based on scene images. We design an approach to extract training data for this task, and propose a novel way to learn the scene-product compatibility from fashion or interior design images. Our approach measures compatibility both globally and locally via CNNs and attention mechanisms. Extensive experiments show that our method achieves significant performance gains over alternative systems. Human evaluation and qualitative analysis are also conducted to further understand model behavior. We hope this work could lead to useful applications which link large corpora of real-world scenes with shoppable products.",
"title": ""
},
{
"docid": "a6834bf39e84e4aa9964a7b01e79095f",
"text": "As in many neural network architectures, the use of Batch Normalization (BN) has become a common practice for Generative Adversarial Networks (GAN). In this paper, we propose using Euclidean reconstruction error on a test set for evaluating the quality of GANs. Under this measure, together with a careful visual analysis of generated samples, we found that while being able to speed training during early stages, BN may have negative effects on the quality of the trained model and the stability of the training process. Furthermore, Weight Normalization, a more recently proposed technique, is found to improve the reconstruction, training speed and especially the stability of GANs, and thus should be used in place of BN in GAN training.",
"title": ""
},
{
"docid": "f6f6f322118f5240aec5315f183a76ab",
"text": "Learning from data sets that contain very few instances of the minority class usually produces biased classifiers that have a higher predictive accuracy over the majority class, but poorer predictive accuracy over the minority class. SMOTE (Synthetic Minority Over-sampling Technique) is specifically designed for learning from imbalanced data sets. This paper presents a modified approach (MSMOTE) for learning from imbalanced data sets, based on the SMOTE algorithm. MSMOTE not only considers the distribution of minority class samples, but also eliminates noise samples by adaptive mediation. The combination of MSMOTE and AdaBoost are applied to several highly and moderately imbalanced data sets. The experimental results show that the prediction performance of MSMOTE is better than SMOTEBoost in the minority class and F-values are also improved.",
"title": ""
},
{
"docid": "64c2b9f59a77f03e6633e5804356e9fc",
"text": "AbstructWe present a novel method, that we call EVENODD, for tolerating up to two disk failures in RAID architectures. EVENODD employs the addition of only two redundant disks and consists of simple exclusive-OR computations. This redundant storage is optimal, in the sense that two failed disks cannot be retrieved with less than two redundant disks. A major advantage of EVENODD is that it only requires parity hardware, which is typically present in standard RAID-5 controllers. Hence, EVENODD can be implemented on standard RAID-5 controllers without any hardware changes. The most commonly used scheme that employes optimal redundant storage (Le., two extra disks) is based on ReedSolomon (RS) error-correcting codes. This scheme requires computation over finite fields and results in a more complex implementation. For example, we show that the complexity of implementing EVENODD in a disk array with 15 disks is about 50% of the one required when using the RS scheme. The new scheme is not limited to RAID architectures: it can be used in any system requiring large symbols and relatively short codes, for instance, in multitrack magnetic recording. To this end, we also present a decoding algorithm for one column (track) in error.",
"title": ""
},
{
"docid": "f70c07e15c4070edf75e8846b4dff0b3",
"text": "Polyphenols, including flavonoids, phenolic acids, proanthocyanidins and resveratrol, are a large and heterogeneous group of phytochemicals in plant-based foods, such as tea, coffee, wine, cocoa, cereal grains, soy, fruits and berries. Growing evidence indicates that various dietary polyphenols may influence carbohydrate metabolism at many levels. In animal models and a limited number of human studies carried out so far, polyphenols and foods or beverages rich in polyphenols have attenuated postprandial glycemic responses and fasting hyperglycemia, and improved acute insulin secretion and insulin sensitivity. The possible mechanisms include inhibition of carbohydrate digestion and glucose absorption in the intestine, stimulation of insulin secretion from the pancreatic beta-cells, modulation of glucose release from the liver, activation of insulin receptors and glucose uptake in the insulin-sensitive tissues, and modulation of intracellular signalling pathways and gene expression. The positive effects of polyphenols on glucose homeostasis observed in a large number of in vitro and animal models are supported by epidemiological evidence on polyphenol-rich diets. To confirm the implications of polyphenol consumption for prevention of insulin resistance, metabolic syndrome and eventually type 2 diabetes, human trials with well-defined diets, controlled study designs and clinically relevant end-points together with holistic approaches e.g., systems biology profiling technologies are needed.",
"title": ""
},
{
"docid": "3cbd22082a7bf570520e8175dff30bf7",
"text": "Gender dysphoria is suggested to be a consequence of sex atypical cerebral differentiation. We tested this hypothesis in a magnetic resonance study of voxel-based morphometry and structural volumetry in 48 heterosexual men (HeM) and women (HeW) and 24 gynephillic male to female transsexuals (MtF-TR). Specific interest was paid to gray matter (GM) and white matter (WM) fraction, hemispheric asymmetry, and volumes of the hippocampus, thalamus, caudate, and putamen. Like HeM, MtF-TR displayed larger GM volumes than HeW in the cerebellum and lingual gyrus and smaller GM and WM volumes in the precentral gyrus. Both male groups had smaller hippocampal volumes than HeW. As in HeM, but not HeW, the right cerebral hemisphere and thalamus volume was in MtF-TR lager than the left. None of these measures differed between HeM and MtF-TR. MtF-TR displayed also singular features and differed from both control groups by having reduced thalamus and putamen volumes and elevated GM volumes in the right insular and inferior frontal cortex and an area covering the right angular gyrus.The present data do not support the notion that brains of MtF-TR are feminized. The observed changes in MtF-TR bring attention to the networks inferred in processing of body perception.",
"title": ""
},
{
"docid": "2509b427f650c7fc54cdb5c38cdb2bba",
"text": "Inbreeding depression on female fertility and calving ease in Spanish dairy cattle was studied by the traditional inbreeding coefficient (F) and an alternative measurement indicating the inbreeding rate (DeltaF) for each animal. Data included records from 49,497 and 62,134 cows for fertility and calving ease, respectively. Both inbreeding measurements were included separately in the routine genetic evaluation models for number of insemination to conception (sequential threshold animal model) and calving ease (sire-maternal grandsire threshold model). The F was included in the model as a categorical effect, whereas DeltaF was included as a linear covariate. Inbred cows showed impaired fertility and tended to have more difficult calvings than low or noninbred cows. Pregnancy rate decreased by 1.68% on average for cows with F from 6.25 to 12.5%. This amount of inbreeding, however, did not seem to increase dystocia incidence. Inbreeding depression was larger for F greater than 12.5%. Cows with F greater than 25% had lower pregnancy rate and higher dystocia rate (-6.37 and 1.67%, respectively) than low or noninbred cows. The DeltaF had a significant effect on female fertility. A DeltaF = 0.01, corresponding to an inbreeding coefficient of 5.62% for the average equivalent generations in the data used (5.68), lowered pregnancy rate by 1.5%. However, the posterior estimate for the effect of DeltaF on calving ease was not significantly different from zero. Although similar patterns were found with both F and DeltaF, the latter detected a lowered pregnancy rate at an equivalent F, probably because it may consider the known depth of the pedigree. The inbreeding rate might be an alternative choice to measure inbreeding depression.",
"title": ""
},
{
"docid": "6c3d5a7f92d68863ef484d5367267eaf",
"text": "This paper complements a series of works on implicative verbs such as manage to and fail to. It extends the description of simple implicative verbs to phrasal implicatives as take the time to and waste the chance to. It shows that the implicative signatures of over 300 verb-noun collocations depend both on the semantic type of the verb and the semantic type of the noun in a systematic way.",
"title": ""
},
{
"docid": "69c8584255b16e6bc05fdfc6510d0dc4",
"text": "OBJECTIVE\nThis study assesses the psychometric properties of Ward's seven-subtest short form (SF) for WAIS-IV in a sample of adults with schizophrenia (SZ) and schizoaffective disorder.\n\n\nMETHOD\nSeventy patients diagnosed with schizophrenia or schizoaffective disorder were administered the full version of the WAIS-IV. Four different versions of the Ward's SF were then calculated. The subtests used were: Similarities, Digit Span, Arithmetic, Information, Coding, Picture Completion, and Block Design (BD version) or Matrix Reasoning (MR version). Prorated and regression-based formulae were assessed for each version.\n\n\nRESULTS\nThe actual and estimated factorial indexes reflected the typical pattern observed in schizophrenia. The four SFs correlated significantly with their full-version counterparts, but the Perceptual Reasoning Index (PRI) correlated below the acceptance threshold for all four versions. The regression-derived estimates showed larger differences compared to the full form. The four forms revealed comparable but generally low clinical category agreement rates for factor indexes. All SFs showed an acceptable reliability, but they were not correlated with clinical outcomes.\n\n\nCONCLUSIONS\nThe WAIS-IV SF offers a good estimate of WAIS-IV intelligence quotient, which is consistent with previous results. Although the overall scores are comparable between the four versions, the prorated forms provided a better estimation of almost all indexes. MR can be used as an alternative for BD without substantially changing the psychometric properties of the SF. However, we recommend a cautious use of these abbreviated forms when it is necessary to estimate the factor index scores, especially PRI, and Processing Speed Index.",
"title": ""
},
{
"docid": "3848dd7667a25e8e7f69ecc318324224",
"text": "This paper describes the CloudProtect middleware that empowers users to encrypt sensitive data stored within various cloud applications. However, most web applications require data in plaintext for implementing the various functionalities and in general, do not support encrypted data management. Therefore, CloudProtect strives to carry out the data transformations (encryption/decryption) in a manner that is transparent to the application, i.e., preserves all functionalities of the application, including those that require data to be in plaintext. Additionally, CloudProtect allows users flexibility in trading off performance for security in order to let them optimally balance their privacy needs and usage-experience.",
"title": ""
},
{
"docid": "0f659ff5414e75aefe23bb85127d93dd",
"text": "Important information is captured in medical documents. To make use of this information and intepret the semantics, technologies are required for extracting, analysing and interpreting it. As a result, rich semantics including relations among events, subjectivity or polarity of events, become available. The First Workshop on Extraction and Processing of Rich Semantics from Medical Texts, is devoted to the technologies for dealing with clinical documents for medical information gathering and application in knowledge based systems. New approaches for identifying and analysing rich semantics are presented. In this paper, we introduce the topic and summarize the workshop contributions.",
"title": ""
},
{
"docid": "104d16c298c8790ca8da0df4d7e34a4b",
"text": "musical structure of a culture or genre” (Bharucha 1984, p. 421). So, unlike tonal hierarchies that refer to cognitive representations of the structure of music across different pieces of music in the style, event hierarchies refer to a particular piece of music and the place of each event in that piece. The two hierarchies occupy complementary roles. In listening to music or music-like experimental materials (melodies and harmonic progressions), the listener responds both to the structure provided by the tonal hierarchy and the structure provided by the event hierarchy. Musical activity involves dynamic patterns of stability and instability to which both the tonal and event hierarchies contribute. Understanding the relations between them and their interaction in processing musical structure is a central issue, not yet extensively studied empirically. 3.3 Empirical Research: The Basic Studies This section outlines the classic findings that illustrate tonal relationships and the methodologies used to establish these findings. 3.3.1 The Probe Tone Method Quantification is the first step in empirical studies because it makes possible the kinds of analytic techniques needed to understand complex human behaviors. An experimental method that has been used to quantify the tonal hierarchy is called the probe-tone method (Krumhansl and Shepard 1979). It was based on the observation that if you hear the incomplete ascending C major scale, C-D-E-F-G-A-B, you strongly expect that the next tone will be the high C. It is the next logical tone in the series, proximal to the last tone of the context, B, and it is the tonic of the key. When, in the experiment, incomplete ascending and descending scale contexts were followed by the tone C (the probe tone), listeners rated it highly as to how well it completed the scale (1 = very badly, 7 = very well). Other probe tones, however, also received fairly high ratings, and they were not necessarily those that are close in pitch to the last tone of the context. For example, the more musically trained listeners also gave high ratings to the dominant, G, and the mediant, E, which together with the C form the tonic triad. The tones of the scale received higher ratings than the nonscale tones, C# D# F# G# and A#. Less musically trained listeners were more influenced by how close the probe tone was to the tone sounded most recently at the end of the context, although their ratings also contained some of the tonal hierarchy pattern. A subsequent study used this method with a variety of contexts at the beginning of the trials (Krumhansl and Kessler 1982). Contexts were chosen because they are clear indicators of a key. They included the scale, the tonic triad chord, and chord 56 C.L. Krumhansl and L.L. Cuddy sequences strongly defining major and minor keys. These contexts were followed by all possible probe tones in the 12-tone chromatic scale, which musically trained listeners were instructed to judge in terms of how well they fit with the preceding context in a musical sense. The results for contexts of the same mode (major or minor) were similar when transposed to a common tonic. Also, the results were largely independent of which particular type of context was used (e.g., chord versus chord cadence). Consequently, the rating data were transposed to a common tonic and averaged over the context types. The resulting values are termed standardized key profiles. 
The values for the major key profile are 6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88, where the first number corresponds to the mean rating for the tonic of the key, the second to the next of the 12 tones in the chromatic scale, and so on. The values for the minor key context are 6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17. These are plotted in Fig. 3.1, in which C is assumed to be the tonic. Both major and minor contexts produce clear and musically interpretable hierarchies in the sense that tones are ordered or ranked according to music-theoretic descriptions. The results of these initial studies suggested that it is possible to obtain quantitative judgments of the degree to which different tones are perceived as stable reference tones in musical contexts. The task appeared to be accessible to listeners who differed considerably in their music training. This was important for further investigations of the responses of listeners without knowledge of specialized vocabularies for describing music, or who were unfamiliar with the musical style. Finally, the results in these and many subsequent studies were quite consistent over a variety of task instructions and musical contexts used to induce a sense of key. [Fig. 3.1: (a) Probe tone ratings for a C major context. (b) Probe tone ratings for a C minor context. Values from Krumhansl and Kessler (1982).] Quantification of the tonal hierarchies is an important first step in empirical research but, as seen later, a great deal of research has studied it from a variety of different perspectives. 3.3.2 Converging Evidence To substantiate any theoretical construct, such as the tonal hierarchy, it is important to have evidence from experiments using different methods. This strategy is known as “converging operations” (Garner et al. 1956). This section describes a number of other experimental measures that show influences of the tonal hierarchy. It has an effect on the degree to which tones are perceived as similar to one another (Krumhansl 1979), such that tones high in the hierarchy are perceived as relatively similar to one another. For example, in the key of C major, C and G are perceived as highly related, whereas C# and G# are perceived as distantly related, even though they are just as far apart objectively (in semitones). In addition, a pair of tones is heard as more related when the second is more stable in the tonal hierarchy than the first (compared to the reverse order). For example, the tones F#-G are perceived as more related to one another than are G-F# because G is higher in the tonal hierarchy than F#. Similar temporal-order asymmetries also appear in memory studies. For example, F# is more often confused with G than G is confused with F# (Krumhansl 1979). These data reflect the proposition that each tone is drawn toward, or expected to resolve to, a tone of greater stability in the tonal hierarchy. Janata and Reisberg (1988) showed that the tonal hierarchy also influenced reaction time measures in tasks requiring a categorical judgment about a tone’s key membership. For both scale and chord contexts, faster reaction times (in-key/out-of-key) were obtained for tones higher in the hierarchy. In addition, a recency effect was found for the scale context as for the nonmusicians in the original probe tone study (Krumhansl and Shepard 1979).
Miyazaki (1989) found that listeners with absolute pitch named tones highest in tonal hierarchy of C major faster and more accurately than other tones. This is remarkable because it suggests that musical training has a very specific effect on the acquisition of absolute pitch. Most of the early piano repertoire is written in the key of C major and closely related keys. All of these listeners began piano lessons as young as 3–5 years of age, and were believed to have acquired absolute pitch through exposure to piano tones. The tonal hierarchy also appears in judgments of what tone constitutes a good phrase ending (Palmer and Krumhansl 1987a, b; Boltz 1989a, b). A number of studies show that the tonal hierarchy is one of the factors that influences expectations for melodic continuations (Schmuckler 1989; Krumhansl 1991, 1995b; Cuddy and Lunney 1995; Krumhansl et al. 1999, 2000). Other factors include pitch proximity, interval size, and melodic direction. The influence of the tonal hierarchy has also been demonstrated in a study of expressive piano performance (Thompson and Cuddy 1997). Expression refers to the changes in duration and dynamics (loudness) that performers add beyond the notated music. For the harmonized sequences used in their study, the performance was influenced by the tonal hierarchy. Tones that were tonally stable within a key (higher in the tonal hierarchy) tended to be played for longer duration in the melody than those less stable (lower in the tonal hierarchy). A method used more recently (Aarden 2003, described in Huron 2006) is a reaction-time task in which listeners had to judge whether unfamiliar melodies went up, down, or stayed the same (a tone was repeated). The underlying idea is that reaction times should be faster when the tone conforms to listeners’ expectations. His results confirmed this hypothesis, namely, that reaction times were faster for tones higher in the hierarchy. As described later, his data conformed to a very large statistical analysis he did of melodies in major and minor keys. Finally, tonal expectations result in event-related potentials (ERPs), changes in electrical potentials measured on the surface of the head (Besson and Faïta 1995; Besson et al. 1998). A larger P300 component, a positive change approximately 300 ms after the final tone, was found when a melody ended with a tone out of the scale of its key than a tone in the scale. This finding was especially true for musicians and familiar melodies, suggesting that learning plays some role in producing the effect; however, the effect was also present in nonmusicians, only to a lesser degree. This section has cited only a small proportion of the studies that have been conducted on tonal hierarchies. A closely related issue that has also been studied extensively is the existence of, and the effects of, a hierarchy of chords. The choice of the experiments reviewed here was to illustrate the variety of approaches that have been taken. Across the studies, consistent effects were found with many different kinds of experimental",
"title": ""
},
{
"docid": "5dbd994583805d41fb34837ca52fc712",
"text": "This editorial is part of a For-Discussion-Section of Methods of Information in Medicine about the paper \"Evidence-based Health informatics: How do we know what we know?\", written by Elske Ammenwerth [1]. Health informatics uses and applications have crept up on health systems over half a century, starting as simple automation of large-scale calculations, but now manifesting in many cases as rule- and algorithm-based creation of composite clinical analyses and 'black box' computation of clinical aspects, as well as enablement of increasingly complex care delivery modes and consumer health access. In this process health informatics has very largely bypassed the rules of precaution, proof of effectiveness, and assessment of safety applicable to all other health sciences and clinical support systems. Evaluation of informatics applications, compilation and recognition of the importance of evidence, and normalisation of Evidence Based Health Informatics, are now long overdue on grounds of efficiency and safety. Ammenwerth has now produced a rigorous analysis of the current position on evidence, and evaluation as its lifeblood, which demands careful study then active promulgation. Decisions based on political aspirations, 'modernisation' hopes, and unsupported commercial claims must cease - poor decisions are wasteful and bad systems can kill. Evidence Based Health Informatics should be promoted, and expected by users, as rigorously as Cochrane promoted Effectiveness and Efficiency, and Sackett promoted Evidence Based Medicine - both of which also were introduced retrospectively to challenge the less robust and partially unsafe traditional 'wisdom' in vogue. Ammenwerth's analysis gives the necessary material to promote that mission.",
"title": ""
},
{
"docid": "eab81b9df11e38384f1e49d56cc4e3dc",
"text": "BACKGROUND\nIntraoperative tumour perforation, positive tumour margins, wound complications and local recurrence are frequent difficulties with conventional abdominoperineal resection (APR) for rectal cancer. An alternative technique is the extended posterior perineal approach with gluteus maximus flap reconstruction of the pelvic floor. The aim of this study was to report the technique and early experience of extended APR in a select cohort of patients.\n\n\nMETHODS\nThe principles of operation are that the mesorectum is not dissected off the levator muscles, the perineal dissection is done in the prone position and the levator muscles are resected en bloc with the anus and lower rectum. The perineal defect is reconstructed with a gluteus maximus flap. Between 2001 and 2005, 28 patients with low rectal cancer were treated accordingly at the Karolinska Hospital.\n\n\nRESULTS\nTwo patients had ypT0 tumours, 20 ypT3 and six ypT4 tumours. Bowel perforation occurred in one, the circumferential resection margin (CRM) was positive in two, and four patients had local perineal wound complications. Two patients developed local recurrence after a median follow-up of 16 months.\n\n\nCONCLUSION\nThe extended posterior perineal approach with gluteus maximus flap reconstruction in APR has a low risk of bowel perforation, CRM involvement and local perineal wound complications. The rate of local recurrence may be lower than with conventional APR.",
"title": ""
},
{
"docid": "f5b9cde4b7848f803b3e742298c92824",
"text": "For many years, analysis of short chain fatty acids (volatile fatty acids, VFAs) has been routinely used in identification of anaerobic bacteria. In numerous scientific papers, the fatty acids between 9 and 20 carbons in length have also been used to characterize genera and species of bacteria, especially nonfermentative Gram negative organisms. With the advent of fused silica capillary columns (which allows recovery of hydroxy acids and resolution of many isomers), it has become practical to use gas chromatography of whole cell fatty acid methyl esters to identify a wide range of organisms.",
"title": ""
},
{
"docid": "c410b6cd3f343fc8b8c21e23e58013cd",
"text": "Virtualization is increasingly being used to address server management and administration issues like flexible resource allocation, service isolation and workload migration. In a virtualized environment, the virtual machine monitor (VMM) is the primary resource manager and is an attractive target for implementing system features like scheduling, caching, and monitoring. However, the lackof runtime information within the VMM about guest operating systems, sometimes called the semantic gap, is a significant obstacle to efficiently implementing some kinds of services.In this paper we explore techniques that can be used by a VMM to passively infer useful information about a guest operating system's unified buffer cache and virtual memory system. We have created a prototype implementation of these techniques inside the Xen VMM called Geiger and show that it can accurately infer when pages are inserted into and evicted from a system's buffer cache. We explore several nuances involved in passively implementing eviction detection that have not previously been addressed, such as the importance of tracking disk block liveness, the effect of file system journaling, and the importance of accounting for the unified caches found in modern operating systems.Using case studies we show that the information provided by Geiger enables a VMM to implement useful VMM-level services. We implement a novel working set size estimator which allows the VMM to make more informed memory allocation decisions. We also show that a VMM can be used to drastically improve the hit rate in remote storage caches by using eviction-based cache placement without modifying the application or operating system storage interface. Both case studies hint at a future where inference techniques enable a broad new class of VMM-level functionality.",
"title": ""
},
{
"docid": "91c792fac981d027ac1f2a2773674b10",
"text": "Cancer is a molecular disease associated with alterations in the genome, which, thanks to the highly improved sensitivity of mutation detection techniques, can be identified in cell-free DNA (cfDNA) circulating in blood, a method also called liquid biopsy. This is a non-invasive alternative to surgical biopsy and has the potential of revealing the molecular signature of tumors to aid in the individualization of treatments. In this review, we focus on cfDNA analysis, its advantages, and clinical applications employing genomic tools (NGS and dPCR) particularly in the field of oncology, and highlight its valuable contributions to early detection, prognosis, and prediction of treatment response.",
"title": ""
},
{
"docid": "38e7a36e4417bff60f9ae0dbb7aaf136",
"text": "Asynchronous implementation techniques, which measure logic delays at runtime and activate registers accordingly, are inherently more robust than their synchronous counterparts, which estimate worst case delays at design time and constrain the clock cycle accordingly. Desynchronization is a new paradigm to automate the design of asynchronous circuits from synchronous specifications, thus, permitting widespread adoption of asynchronicity without requiring special design skills or tools. In this paper, different protocols for desynchronization are first studied, and their correctness is formally proven using techniques originally developed for distributed deployment of synchronous language specifications. A taxonomy of existing protocols for asynchronous latch controllers, covering, in particular, the four-phase handshake protocols devised in the literature for micropipelines, is also provided. A new controller that exhibits provably maximal concurrency is then proposed, and the performance of desynchronized circuits is analyzed with respect to the original synchronous optimized implementation. Finally, this paper proves the feasibility and effectiveness of the proposed approach by showing its application to a set of real designs, including a complete implementation of the DLX microprocessor architecture",
"title": ""
}
] |
scidocsrr
|
a708f82df7dec84cb5c43455ca508afb
|
N-BaIoT—Network-Based Detection of IoT Botnet Attacks Using Deep Autoencoders
|
[
{
"docid": "2093c7b23da9d4260efb3cd80414255f",
"text": "In the Internet of Things (IoT), resources' constrained tiny sensors and devices could be connected to unreliable and untrusted networks. Nevertheless, securing IoT technology is mandatory, due to the relevant data handled by these devices. Intrusion Detection System (IDS) is the most efficient technique to detect the attackers with a high accuracy when cryptography is broken. This is achieved by combining the advantages of anomaly and signature detection, which are high detection and low false positive rates, respectively. To achieve a high detection rate, the anomaly detection technique relies on a learning algorithm to model the normal behavior of a node and when a new attack pattern (often known as signature) is detected, it will be modeled with a set of rules. This latter is used by the signature detection technique for attack confirmation. However, the activation of anomaly detection for low-resource IoT devices could generate a high-energy consumption, specifically when this technique is activated all the time. Using game theory and with the help of Nash equilibrium, anomaly detection is activated only when a new attack's signature is expected to occur. This will make a balance between accuracy detection and energy consumption. Simulation results show that the proposed anomaly detection approach requires a low energy consumption to detect the attacks with high accuracy (i.e. high detection and low false positive rates).",
"title": ""
},
{
"docid": "1ede796449f610b186638aa2ac9ceedf",
"text": "We introduce a framework for exploring and learning representations of log data generated by enterprise-grade security devices with the goal of detecting advanced persistent threats (APTs) spanning over several weeks. The presented framework uses a divide-and-conquer strategy combining behavioral analytics, time series modeling and representation learning algorithms to model large volumes of data. In addition, given that we have access to human-engineered features, we analyze the capability of a series of representation learning algorithms to complement human-engineered features in a variety of classification approaches. We demonstrate the approach with a novel dataset extracted from 3 billion log lines generated at an enterprise network boundaries with reported command and control communications. The presented results validate our approach, achieving an area under the ROC curve of 0.943 and 95 true positives out of the Top 100 ranked instances on the test data set.",
"title": ""
}
] |
[
{
"docid": "20c76c212eed35aaae4e238ab4055684",
"text": "Philosophers, psychologists, and economists have long asserted that deception harms trust. We challenge this claim. Across four studies, we demonstrate that deception can increase trust. Specifically, prosocial lies increase the willingness to pass money in the trust game, a behavioral measure of benevolence-based trust. In Studies 1a and 1b, we find that altruistic lies increase trust when deception is directly experienced and when it is merely observed. In Study 2, we demonstrate that mutually beneficial lies also increase trust. In Study 3, we disentangle the effects of intentions and deception; intentions are far more important than deception for building benevolence-based trust. In Study 4, we examine how prosocial lies influence integrity-based trust. We introduce a new economic game, the Rely-or-Verify game, to measure integrity-based trust. Prosocial lies increase benevolence-based trust, but harm integrity-based trust. Our findings expand our understanding of deception and deepen our insight into the mechanics",
"title": ""
},
{
"docid": "ceaab471634611c7d98776a7f33662e3",
"text": "Visible Light Communication (VLC) has many advantages such as high-speed data transmission and non-frequency authorization, which provided a good solution for indoor access environment with an effective and energy-saving way. This paper proposes a combination VLC + WiFi based indoor wireless access network structure. Different network architectures are analyzed and optimal scheme is given. Based on the numerical calculation, the handover model is introduced. Finally, a demo system is designed and implemented.",
"title": ""
},
{
"docid": "3509f0bb534fbb5da5b232b91d81c8e9",
"text": "BACKGROUND\nBlighia sapida is a woody perennial multipurpose fruit tree species native to the Guinean forests of West Africa. The fleshy arils of the ripened fruits are edible. Seeds and capsules of the fruits are used for soap-making and all parts of the tree have medicinal properties. Although so far overlooked by researchers in the region, the tree is highly valued by farmers and is an important component of traditional agroforestry systems in Benin. Fresh arils, dried arils and soap are traded in local and regional markets in Benin providing substantial revenues for farmers, especially women. Recently, ackee has emerged as high-priority species for domestication in Benin but information necessary to elaborate a clear domestication strategy is still very sketchy. This study addresses farmers' indigenous knowledge on uses, management and perception of variation of the species among different ethnic groups taking into account also gender differences.\n\n\nMETHODS\n240 randomly selected persons (50% women) belonging to five different ethnic groups, 5 women active in the processing of ackee fruits and 6 traditional healers were surveyed with semi-structured interviews. Information collected refer mainly to the motivation of the respondents to conserve ackee trees in their land, the local uses, the perception of variation, the preference in fruits traits, the management practices to improve the production and regenerate ackee.\n\n\nRESULTS\nPeople have different interests on using ackee, variable knowledge on uses and management practices, and have reported nine differentiation criteria mainly related to the fruits. Ackee phenotypes with preferred fruit traits are perceived by local people to be more abundant in managed in-situ and cultivated stands than in unmanaged wild stands, suggesting that traditional management has initiated a domestication process. As many as 22 diseases have been reported to be healed with ackee. In general, indigenous knowledge about ackee varies among ethnic and gender groups.\n\n\nCONCLUSIONS\nWith the variation observed among ethnic groups and gender groups for indigenous knowledge and preference in fruits traits, a multiple breeding sampling strategy is recommended during germplasm collection and multiplication. This approach will promote sustainable use and conservation of ackee genetic resources.",
"title": ""
},
{
"docid": "a4da82c9c98203810cdfcf5c1a2c7f0a",
"text": "Software producing organizations are frequently judged by others for being ‘open’ or ‘closed’, where a more ‘closed’ organization is seen as being detrimental to its software ecosystem. These qualifications can harm the reputation of these companies, for they are deemed to promote vendor lock-in, use closed data formats, and are seen as using intellectual property laws to harm others. These judgements, however, are frequently based on speculation and the need arises for a method to establish openness of an organization, such that decisions are no longer based on prejudices, but on an objective assessment of the practices of a software producing organization. In this article the open software enterprise model is presented that roduct software vendors",
"title": ""
},
{
"docid": "1d8cd32e2a2748b9abd53cf32169d798",
"text": "Optimizing the weights of Artificial Neural Networks (ANNs) is a great important of a complex task in the research of machine learning due to dependence of its performance to the success of learning process and the training method. This paper reviews the implementation of meta-heuristic algorithms in ANNs’ weight optimization by studying their advantages and disadvantages giving consideration to some meta-heuristic members such as Genetic algorithim, Particle Swarm Optimization and recently introduced meta-heuristic algorithm called Harmony Search Algorithm (HSA). Also, the application of local search based algorithms to optimize the ANNs weights and their benefits as well as their limitations are briefly elaborated. Finally, a comparison between local search methods and global optimization methods is carried out to speculate the trends in the progresses of ANNs’ weight optimization in the current resrearch.",
"title": ""
},
{
"docid": "bceaded3710f8d6501aa1118d191aaaa",
"text": "The human gut harbors a large and complex community of beneficial microbes that remain stable over long periods. This stability is considered critical for good health but is poorly understood. Here we develop a body of ecological theory to help us understand microbiome stability. Although cooperating networks of microbes can be efficient, we find that they are often unstable. Counterintuitively, this finding indicates that hosts can benefit from microbial competition when this competition dampens cooperative networks and increases stability. More generally, stability is promoted by limiting positive feedbacks and weakening ecological interactions. We have analyzed host mechanisms for maintaining stability—including immune suppression, spatial structuring, and feeding of community members—and support our key predictions with recent data.",
"title": ""
},
{
"docid": "b990e62cb73c0f6c9dd9d945f72bb047",
"text": "Admissible heuristics are an important class of heuristics worth discovering: they guarantee shortest path solutions in search algorithms such asA* and they guarantee less expensively produced, but boundedly longer solutions in search algorithms such as dynamic weighting. Unfortunately, effective (accurate and cheap to compute) admissible heuristics can take years for people to discover. Several researchers have suggested that certain transformations of a problem can be used to generate admissible heuristics. This article defines a more general class of transformations, calledabstractions, that are guaranteed to generate only admissible heuristics. It also describes and evaluates an implemented program (Absolver II) that uses a means-ends analysis search control strategy to discover abstracted problems that result in effective admissible heuristics. Absolver II discovered several well-known and a few novel admissible heuristics, including the first known effective one for Rubik's Cube, thus concretely demonstrating that effective admissible heuristics can be tractably discovered by a machine.",
"title": ""
},
{
"docid": "3caa8fc1ea07fcf8442705c3b0f775c5",
"text": "Recent research in the field of computational social science have shown how data resulting from the widespread adoption and use of social media channels such as twitter can be used to predict outcomes such as movie revenues, election winners, localized moods, and epidemic outbreaks. Underlying assumptions for this research stream on predictive analytics are that social media actions such as tweeting, liking, commenting and rating are proxies for user/consumer's attention to a particular object/product and that the shared digital artefact that is persistent can create social influence. In this paper, we demonstrate how social media data from twitter can be used to predict the sales of iPhones. Based on a conceptual model of social data consisting of social graph (actors, actions, activities, and artefacts) and social text (topics, keywords, pronouns, and sentiments), we develop and evaluate a linear regression model that transforms iPhone tweets into a prediction of the quarterly iPhone sales with an average error close to the established prediction models from investment banks. This strong correlation between iPhone tweets and iPhone sales becomes marginally stronger after incorporating sentiments of tweets. We discuss the findings and conclude with implications for predictive analytics with big social data.",
"title": ""
},
{
"docid": "50c0f3cdccc1fe63f3fcb4cb3c983617",
"text": "Junho Yang Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 e-mail: yang125@illinois.edu Ashwin Dani Department of Aerospace Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 e-mail: adani@illinois.edu Soon-Jo Chung Department of Aerospace Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 e-mail: sjchung@illinois.edu Seth Hutchinson Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 e-mail: seth@illinois.edu",
"title": ""
},
{
"docid": "865d7b8fae1cab739570229889177d58",
"text": "This paper presents design and implementation of scalar control of induction motor. This method leads to be able to adjust the speed of the motor by control the frequency and amplitude of the stator voltage of induction motor, the ratio of stator voltage to frequency should be kept constant, which is called as V/F or scalar control of induction motor drive. This paper presents a comparative study of open loop and close loop V/F control induction motor. The V/F",
"title": ""
},
{
"docid": "41b7e610e0aa638052f71af1902e92d5",
"text": "This work investigates how social bots can phish employees of organizations, and thus endanger corporate network security. Current literature mostly focuses on traditional phishing methods (through e-mail, phone calls, and USB sticks). We address the serious organizational threats and security risks caused by phishing through online social media, specifically through Twitter. This paper first provides a review of current work. It then describes our experimental development, in which we created and deployed eight social bots on Twitter, each associated with one specific subject. For a period of four weeks, each bot published tweets about its subject and followed people with similar interests. In the final two weeks, our experiment showed that 437 unique users could have been phished, 33 of which visited our website through the network of an organization. Without revealing any sensitive or real data, the paper analyses some findings of this experiment and addresses further plans for research in this area.",
"title": ""
},
{
"docid": "986a57617f3e14cd1c87d2ae44ffee32",
"text": "Although switched-mode Electronic Loads (E-Loads) are commonly used in different applications, they are facing particular limitations especially for higher frequency purposes. While increasing the switching frequency in switched-mode E-Loads enables them to operate at high frequencies, simply rising the switching frequency might not be an efficient approach in terms of design issues, device limitations, electromagnetic noise problems and also complicated gate drive designs. To deal with these obstacles, a novel electronic load based on analog techniques is proposed in this paper. First of all, different parts of the proposed design are introduced in detail. Then, LTspice as a powerful software in analog circuits modeling is employed to simulate the suggested circuit. Analysis and assessment of simulation results present the effectiveness of the proposed E-Load. In the last step, to validate this concept, an experimental set-up is implemented and the results are shown.",
"title": ""
},
{
"docid": "7107c5a8e81e3cedf750bcd9937581cd",
"text": "AVR XMEGA is the recent general-purpose 8-bit microcontroller from Atmel featuring symmetric crypto engines. We analyze the resistance of XMEGA crypto engines to side channel attacks. We reveal the relatively strong side channel leakage of the AES engine that enables full 128-bit AES secret key recovery in a matter of several minutes with a measurement setup cost about 1000 USD. 3000 power consumption traces are sufficient for the successful attack. Our analysis was performed without knowing the details of the crypto engine internals; quite the contrary, it reveals some details about the implementation. We sketch other feasible side channel attacks on XMEGA and suggest the counter-measures that can raise the complexity of the attacks but not fully prevent them.",
"title": ""
},
{
"docid": "2c0b13b5a1a4c207d52e674a518bf868",
"text": "We have developed a new mutual information-based registration method for matching unlabeled point features. In contrast to earlier mutual information-based registration methods, which estimate the mutual information using image intensity information, our approach uses the point feature location information. A novel aspect of our approach is the emergence of correspondence (between the two sets of features) as a natural by-product of joint density estimation. We have applied this algorithm to the problem of geometric alignment of primate autoradiographs. We also present preliminary results on three-dimensional robust matching of sulci derived from anatomical magnetic resonance images. Finally, we present an experimental comparison between the mutual information approach and other recent approaches which explicitly parameterize feature correspondence.",
"title": ""
},
{
"docid": "7ba37f2dcf95f36727e1cd0f06e31cc0",
"text": "The neonate receiving parenteral nutrition (PN) therapy requires a physiologically appropriate solution in quantity and quality given according to a timely, cost-effective strategy. Maintaining tissue integrity, metabolism, and growth in a neonate is challenging. To support infant growth and influence subsequent development requires critical timing for nutrition assessment and intervention. Providing amino acids to neonates has been shown to improve nitrogen balance, glucose metabolism, and amino acid profiles. In contrast, supplying the lipid emulsions (currently available in the United States) to provide essential fatty acids is not the optimal composition to help attenuate inflammation. Recent investigations with an omega-3 fish oil IV emulsion are promising, but there is need for further research and development. Complications from PN, however, remain problematic and include infection, hepatic dysfunction, and cholestasis. These complications in the neonate can affect morbidity and mortality, thus emphasizing the preference to provide early enteral feedings, as well as medication therapy to improve liver health and outcome. Potential strategies aimed at enhancing PN therapy in the neonate are highlighted in this review, and a summary of guidelines for practical management is included.",
"title": ""
},
{
"docid": "a53904f277c06e32bd6ad148399443c6",
"text": "Big data is flowing into every area of our life, professional and personal. Big data is defined as datasets whose size is beyond the ability of typical software tools to capture, store, manage and analyze, due to the time and memory complexity. Velocity is one of the main properties of big data. In this demo, we present SAMOA (Scalable Advanced Massive Online Analysis), an open-source platform for mining big data streams. It provides a collection of distributed streaming algorithms for the most common data mining and machine learning tasks such as classification, clustering, and regression, as well as programming abstractions to develop new algorithms. It features a pluggable architecture that allows it to run on several distributed stream processing engines such as Storm, S4, and Samza. SAMOA is written in Java and is available at http://samoa-project.net under the Apache Software License version 2.0.",
"title": ""
},
{
"docid": "3b6de41443a56f619178427f80474c17",
"text": "Most multi-view 3D reconstruction algorithms, especially when shapefrom-shading cues are used, assume that object appearance is predominantly diffuse. To alleviate this restriction, we introduce S2Dnet, a generative adversarial network for transferring multiple views of objects with specular reflection into diffuse ones, so that multi-view reconstruction methods can be applied more effectively. Our network extends unsupervised image-to-image translation to multiview “specular to diffuse” translation. To preserve object appearance across multiple views, we introduce a Multi-View Coherence loss (MVC) that evaluates the similarity and faithfulness of local patches after the view-transformation. Our MVC loss ensures that the similarity of local correspondences among multi-view images is preserved under the image-to-image translation. As a result, our network yields significantly better results than several single-view baseline techniques. In addition, we carefully design and generate a large synthetic training data set using physically-based rendering. During testing, our network takes only the raw glossy images as input, without extra information such as segmentation masks or lighting estimation. Results demonstrate that multi-view reconstruction can be significantly improved using the images filtered by our network. We also show promising performance on real world training and testing data.",
"title": ""
},
{
"docid": "dfdb7ab4a1ce74695757442b83f246fe",
"text": "In this paper, we propose an O(N) time distributed algorithm for computing betweenness centralities of all nodes in the network where N is the number of nodes. Our distributed algorithm is designed under the widely employed CONGEST model in the distributed computing community which limits each message only contains O(log N) bits. To our best knowledge, this is the first linear time deterministic distributed algorithm for computing the betweenness centralities in the published literature. We also give a lower bound for distributively computing the betweenness centrality under the CONGEST model as Ω(D+N/ log N) where D is the diameter of the network. This implies that our distributed algorithm is nearly optimal.",
"title": ""
},
{
"docid": "a66fd6798950172c461f1f318f37520f",
"text": "Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from “exposure bias”: during training tokens are predicted given ground-truth sequences, while at test time prediction is conditioned on generated output sequences. To overcome these limitations we build upon the recent reward augmented maximum likelihood approach i.e. sequence-level smoothing that encourages the model to predict sentences close to the ground truth according to a given performance metric. We extend this approach to token-level loss smoothing, and propose improvements to the sequence-level smoothing approach. Our experiments on two different tasks, image captioning and machine translation, show that token-level and sequence-level loss smoothing are complementary, and significantly improve results.",
"title": ""
}
] |
scidocsrr
|
bf600914fd5e039942734dd724c518f0
|
Hypnosis and Mindfulness: The Twain Finally Meet.
|
[
{
"docid": "0b096c5cf5bac921c0e81a30c6a482a4",
"text": "OBJECTIVE\nTo provide a comprehensive review and evaluation of the psychological and neurophysiological literature pertaining to mindfulness meditation.\n\n\nMETHODS\nA search for papers in English was undertaken using PsycINFO (from 1804 onward), MedLine (from 1966 onward) and the Cochrane Library with the following search terms: Vipassana, Mindfulness, Meditation, Zen, Insight, EEG, ERP, fMRI, neuroimaging and intervention. In addition, retrieved papers and reports known to the authors were also reviewed for additional relevant literature.\n\n\nRESULTS\nMindfulness-based therapeutic interventions appear to be effective in the treatment of depression, anxiety, psychosis, borderline personality disorder and suicidal/self-harm behaviour. Mindfulness meditation per se is effective in reducing substance use and recidivism rates in incarcerated populations but has not been specifically investigated in populations with psychiatric disorders. Electroencephalography research suggests increased alpha, theta and beta activity in frontal and posterior regions, some gamma band effects, with theta activity strongly related to level of experience of meditation; however, these findings have not been consistent. The few neuroimaging studies that have been conducted suggest volumetric and functional change in key brain regions.\n\n\nCONCLUSIONS\nPreliminary findings from treatment outcome studies provide support for the application of mindfulness-based interventions in the treatment of affective, anxiety and personality disorders. However, direct evidence for the effectiveness of mindfulness meditation per se in the treatment of psychiatric disorders is needed. Current neurophysiological and imaging research findings have identified neural changes in association with meditation and provide a potentially promising avenue for future research.",
"title": ""
}
] |
[
{
"docid": "f8a1ba148f564f9dcc0c57873bb5ce60",
"text": "Advances in online technologies have raised new concerns about privacy. A sample of expert household end users was surveyed concerning privacy, risk perceptions, and online behavior intentions. A new e-privacy typology consisting of privacyaware, privacy-suspicious, and privacy-active types was developed from a principal component factor analysis. Results suggest the presence of a privacy hierarchy of effects where awareness leads to suspicion, which subsequently leads to active behavior. An important finding was that privacy-active behavior that was hypothesized to increase the likelihood of online subscription and purchasing was not found to be significant. A further finding was that perceived risk had a strong negative influence on the extent to which respondents participated in online subscription and purchasing. Based on these results, a number of implications for managers and directions for future research are discussed.",
"title": ""
},
{
"docid": "fc26f9bcbd28125607c90e15c3069cab",
"text": "Topological data analysis (TDA) is an emerging mathematical concept for characterizing shapes in complex data. In TDA, persistence diagrams are widely recognized as a useful descriptor of data, and can distinguish robust and noisy topological properties. This paper proposes a kernel method on persistence diagrams to develop a statistical framework in TDA. The proposed kernel satisfies the stability property and provides explicit control on the effect of persistence. Furthermore, the method allows a fast approximation technique. The method is applied into practical data on proteins and oxide glasses, and the results show the advantage of our method compared to other relevant methods on persistence diagrams.",
"title": ""
},
{
"docid": "41ec184d686b2ff1ffdabb8e4c24a6e9",
"text": "In this paper, we present a three-stage method for the estimation of the color of the illuminant in RAW images. The first stage uses a convolutional neural network that has been specially designed to produce multiple local estimates of the illuminant. The second stage, given the local estimates, determines the number of illuminants in the scene. Finally, local illuminant estimates are refined by non-linear local aggregation, resulting in a global estimate in case of single illuminant. An extensive comparison with both local and global illuminant estimation methods in the state of the art, on standard data sets with single and multiple illuminants, proves the effectiveness of our method.",
"title": ""
},
{
"docid": "4337803c5834dc98da0af2141293bb1b",
"text": "This paper addresses the joint design of transmit and receive beamforming or linear processing (commonly termed linear precoding at the transmitter and equalization at the receiver) for multicarrier multi-input multi-output (MIMO) channels under a variety of design criteria. Instead of considering each design criterion in a separate way, we generalize the existing results by developing a unified framework based on considering two families of objective functions that embrace most reasonable criteria to design a communication system: Schur-concave and Schur-convex functions. Once the optimal structure of the transmit-receive processing is known, the design problem simplifies and can be formulated within the powerful framework of convex optimization theory, in which a great number of interesting design criteria can be easily accommodated and efficiently solved even though closed-form expressions may not exist. From this perspective, we analyze a variety of design criteria and, in particular, we derive optimal beamvectors in the sense of having minimum average bit error rate (BER). Additional constraints on the Peak-to-Average Ratio (PAR) or on the signal dynamic range are easily included in the design. We propose two multi-level water-filling practical solutions that perform very close to the optimal in terms of average BER with a low implementation complexity. If cooperation among the processing operating at different carriers is allowed, the performance improves significantly. Interestingly, with carrier cooperation, it turns out that the exact optimal solution in terms of average BER can be obtained in closed-form. Manuscript received February 25, 2002; revised December 20, 2002. Part of the work was presented at the 40th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, Oct. 2002 [37] . This work was partially supported by the European Comission under Project IST-2000-30148 I-METRA; Samsung Advanced Institute of Technology; the Spanish Government (CICYT) TIC2000-1025, TIC2001-2356, TIC2002-04594, FIT-070000-2000-649 (MEDEA + A105 UniLAN); and the Catalan Government (DURSI) 1999FI 00588, 2001SGR 00268.",
"title": ""
},
{
"docid": "843aa1e751391fb740571c08de46d2ca",
"text": "Antineutrophil cytoplasm antibody (ANCA)-associated vasculitides are small-vessel vasculitides that include granulomatosis with polyangiitis (formerly Wegener's granulomatosis), microscopic polyangiitis, and eosinophilic granulomatosis with polyangiitis (Churg-Strauss syndrome). Renal-limited ANCA-associated vasculitides can be considered the fourth entity. Despite their rarity and still unknown cause(s), research pertaining to ANCA-associated vasculitides has been very active over the past decades. The pathogenic role of antimyeloperoxidase ANCA (MPO-ANCA) has been supported using several animal models, but that of antiproteinase 3 ANCA (PR3-ANCA) has not been as strongly demonstrated. Moreover, some MPO-ANCA subsets, which are directed against a few specific MPO epitopes, have recently been found to be better associated with disease activity, but a different method than the one presently used in routine detection is required to detect them. B cells possibly play a major role in the pathogenesis because they produce ANCAs, as well as neutrophil abnormalities and imbalances in different T-cell subtypes [T helper (Th)1, Th2, Th17, regulatory cluster of differentiation (CD)4+ CD25+ forkhead box P3 (FoxP3)+ T cells] and/or cytokine-chemokine networks. The alternative complement pathway is also involved, and its blockade has been shown to prevent renal disease in an MPO-ANCA murine model. Other recent studies suggested strongest genetic associations by ANCA type rather than by clinical diagnosis. The induction treatment for severe granulomatosis with polyangiitis and microscopic polyangiitis is relatively well codified but does not (yet) really differ by precise diagnosis or ANCA type. It comprises glucocorticoids combined with another immunosuppressant, cyclophosphamide or rituximab. The choice between the two immunosuppressants must consider the comorbidities, past exposure to cyclophosphamide for relapsers, plans for pregnancy, and also the cost of rituximab. Once remission is achieved, maintenance strategy following cyclophosphamide-based induction relies on less toxic agents such as azathioprine or methotrexate. The optimal maintenance strategy following rituximab-based induction therapy remains to be determined. Preliminary results on rituximab for maintenance therapy appear promising. Efforts are still under way to determine the optimal duration of maintenance therapy, ideally tailored according to the characteristics of each patient and the previous treatment received.",
"title": ""
},
{
"docid": "359d76f0b4f758c3a58e886e840c5361",
"text": "Cover crops are important components of sustainable agricultural systems. They increase surface residue and aid in the reduction of soil erosion. They improve the structure and water-holding capacity of the soil and thus increase the effectiveness of applied N fertilizer. Legume cover crops such as hairy vetch and crimson clover fix nitrogen and contribute to the nitrogen requirements of subsequent crops. Cover crops can also suppress weeds, provide suitable habitat for beneficial predator insects, and act as non-host crops for nematodes and other pests in crop rotations. This paper reviews the agronomic and economic literature on using cover crops in sustainable food production and reports on past and present research on cover crops and sustainable agriculture at the Beltsville Agricultural Research Center, Maryland. Previous studies suggested that the profitability of cover crops is primarily the result of enhanced crop yields rather than reduced input costs. The experiments at the Beltsville Agricultural Research Center on fresh-market tomato production showed that tomatoes grown with hairy vetch mulch were higher yielding and more profitable than those grown with black polyethylene and no mulch system. Previous studies of cover crops in grain production indicated that legume cover crops such as hairy vetch and crimson clover are more profitable than grass cover crops such as rye or wheat because of the ability of legumes to contribute N to the following crop. A com-",
"title": ""
},
{
"docid": "49c7b5cab51301d8b921fa87d6c0b1ff",
"text": "We introduce the input output automa ton a simple but powerful model of computation in asynchronous distributed networks With this model we are able to construct modular hierarchical correct ness proofs for distributed algorithms We de ne this model and give an interesting example of how it can be used to construct such proofs",
"title": ""
},
{
"docid": "b36058bcfcb5f5f4084fe131c42b13d9",
"text": "We present regular linear temporal logic (RLTL), a logic that generalizes linear temporal logic with the ability to use regular expressions arbitrarily as sub-expressions. Every LTL operator can be defined as a context in regular linear temporal logic. This implies that there is a (linear) translation from LTL to RLTL. Unlike LTL, regular linear temporal logic can define all ω-regular languages, while still keeping the satisfiability problem in PSPACE. Unlike the extended temporal logics ETL∗, RLTL is defined with an algebraic signature. In contrast to the linear time μ-calculus, RLTL does not depend on fix-points in its syntax.",
"title": ""
},
{
"docid": "814e593fac017e5605c4992ef7b25d6d",
"text": "This paper discusses the design of high power density transformer and inductor for the high frequency dual active bridge (DAB) GaN charger. Because the charger operates at 500 kHz, the inductance needed to achieve ZVS for the DAB converter is reduced to as low as 3μH. As a result, it is possible to utilize the leakage inductor as the series inductor of DAB converter. To create such amount of leakage inductance, certain space between primary and secondary winding is allocated to store the leakage flux energy. The designed transformer is above 99.2% efficiency while delivering 3.3kW. The power density of the designed transformer is 6.3 times of the lumped transformer and inductor in 50 kHz Si Charger. The detailed design procedure and loss analysis are discussed.",
"title": ""
},
{
"docid": "f16676f00cd50173d75bd61936ec200c",
"text": "Training of the neural autoregressive density estimator (NADE) can be viewed as doing one step of probabilistic inference on missing values in data. We propose a new model that extends this inference scheme to multiple steps, arguing that it is easier to learn to improve a reconstruction in k steps rather than to learn to reconstruct in a single inference step. The proposed model is an unsupervised building block for deep learning that combines the desirable properties of NADE and multi-prediction training: (1) Its test likelihood can be computed analytically, (2) it is easy to generate independent samples from it, and (3) it uses an inference engine that is a superset of variational inference for Boltzmann machines. The proposed NADE-k is competitive with the state-of-the-art in density estimation on the two datasets tested.",
"title": ""
},
{
"docid": "24e54cbc2c419de1d2d56e64eb428004",
"text": "Internet of Things has become a predominant phenomenon in every sphere of smart life. Connected Cars and Vehicular Internet of Things, which involves communication and data exchange between vehicles, traffic infrastructure or other entities are pivotal to realize the vision of smart city and intelligent transportation. Vehicular Cloud offers a promising architecture wherein storage and processing capabilities of smart objects are utilized to provide on-the-fly fog platform. Researchers have demonstrated vulnerabilities in this emerging vehicular IoT ecosystem, where data has been stolen from critical sensors and smart vehicles controlled remotely. Security and privacy is important in Internet of Vehicles (IoV) where access to electronic control units, applications and data in connected cars should only be authorized to legitimate users, sensors or vehicles. In this paper, we propose an authorization framework to secure this dynamic system where interactions among entities is not pre-defined. We provide an extended access control oriented (E-ACO) architecture relevant to IoV and discuss the need of vehicular clouds in this time and location sensitive environment. We outline approaches to different access control models which can be enforced at various layers of E-ACO architecture and in the authorization framework. Finally, we discuss use cases to illustrate access control requirements in our vision of cloud assisted connected cars and vehicular IoT, and discuss possible research directions.",
"title": ""
},
{
"docid": "dc71b53847d33e82c53f0b288da89bfa",
"text": "We explore the use of convolutional neural networks for the semantic classification of remote sensing scenes. Two recently proposed architectures, CaffeNet and GoogLeNet, are adopted, with three different learning modalities. Besides conventional training from scratch, we resort to pre-trained networks that are only fine-tuned on the target data, so as to avoid overfitting problems and reduce design time. Experiments on two remote sensing datasets, with markedly different characteristics, testify on the effectiveness and wide applicability of the proposed solution, which guarantees a significant performance improvement over all state-of-the-art references.",
"title": ""
},
{
"docid": "0fbf7e46689102a4dd031eb54e6c083c",
"text": "The analyzing and extracting important information from a text document is crucial and has produced interest in the area of text mining and information retrieval. This process is used in order to notice particularly in the text. Furthermore, on view of the readers that people tend to read almost everything in text documents to find some specific information. However, reading a text document consumes time to complete and additional time to extract information. Thus, classifying text to a subject can guide a person to find relevant information. In this paper, a subject identification method which is based on term frequency to categorize groups of text into a particular subject is proposed. Since term frequency tends to ignore the semantics of a document, the term extraction algorithm is introduced for improving the result of the extracted relevant terms from the text. The evaluation of the extracted terms has shown that the proposed method is exceeded other extraction techniques.",
"title": ""
},
{
"docid": "bb19e6b00fca27c455316f09a626407c",
"text": "On the basis of the most recent epidemiologic research, Autism Spectrum Disorder (ASD) affects approximately 1% to 2% of all children. (1)(2) On the basis of some research evidence and consensus, the Modified Checklist for Autism in Toddlers isa helpful tool to screen for autism in children between ages 16 and 30 months. (11) The Diagnostic Statistical Manual of Mental Disorders, Fourth Edition, changes to a 2-symptom category from a 3-symptom category in the Diagnostic Statistical Manual of Mental Disorders, Fifth Edition(DSM-5): deficits in social communication and social interaction are combined with repetitive and restrictive behaviors, and more criteria are required per category. The DSM-5 subsumes all the previous diagnoses of autism (classic autism, Asperger syndrome, and pervasive developmental disorder not otherwise specified) into just ASDs. On the basis of moderate to strong evidence, the use of applied behavioral analysis and intensive behavioral programs has a beneficial effect on language and the core deficits of children with autism. (16) Currently, minimal or no evidence is available to endorse most complementary and alternative medicine therapies used by parents, such as dietary changes (gluten free), vitamins, chelation, and hyperbaric oxygen. (16) On the basis of consensus and some studies, pediatric clinicians should improve their capacity to provide children with ASD a medical home that is accessible and provides family-centered, continuous, comprehensive and coordinated, compassionate, and culturally sensitive care. (20)",
"title": ""
},
{
"docid": "daf997a64778e0e2d5fc1a07ad69b0e4",
"text": "A soft-switching single-ended primary inductor converter (SEPIC) is presented in this paper. An auxiliary switch and a clamp capacitor are connected. A coupled inductor and an auxiliary inductor are utilized to obtain ripple-free input current and achieve zero-voltage-switching (ZVS) operation of the main and auxiliary switches. The voltage multiplier technique and active clamp technique are applied to the conventional SEPIC converter to increase the voltage gain, reduce the voltage stresses of the power switches and diode. Moreover, by utilizing the resonance between the resonant inductor and the capacitor in the voltage multiplier circuit, the zero-current-switching operation of the output diode is achieved and its reverse-recovery loss is significantly reduced. The proposed converter achieves high efficiency due to soft-switching commutations of the power semiconductor devices. The presented theoretical analysis is verified by a prototype of 100 kHz and 80 W converter. Also, the measured efficiency of the proposed converter has reached a value of 94.8% at the maximum output power.",
"title": ""
},
{
"docid": "76375aa50ebe8388d653241ba481ecd2",
"text": "Sequential learning of tasks using gradient descent leads to an unremitting decline in the accuracy of tasks for which training data is no longer available, termed catastrophic forgetting. Generative models have been explored as a means to approximate the distribution of old tasks and bypass storage of real data. Here we propose a cumulative closed-loop generator and embedded classifier using an AC-GAN architecture provided with external regularization by a small buffer. We evaluate incremental learning using a notoriously hard paradigm, “single headed learning,” in which each task is a disjoint subset of classes in the overall dataset, and performance is evaluated on all previous classes. First, we show that the variability contained in a small percentage of a dataset (memory buffer) accounts for a significant portion of the reported accuracy, both in multi-task and continual learning settings. Second, we show that using a generator to continuously output new images while training provides an up-sampling of the buffer, which prevents catastrophic forgetting and yields superior performance when compared to a fixed buffer. We achieve an average accuracy for all classes of 92.26% in MNIST and 76.15% in FASHION-MNIST after 5 tasks using GAN sampling with a buffer of only 0.17% of the entire dataset size. We compare to a network with regularization (EWC) which shows a deteriorated average performance of 29.19% (MNIST) and 26.5% (FASHION). The baseline of no regularization (plain gradient descent) performs at 99.84% (MNIST) and 99.79% (FASHION) for the last task, but below 3% for all previous tasks. Our method has very low long-term memory cost, the buffer, as well as negligible intermediate memory storage.",
"title": ""
},
{
"docid": "1bdf406fd827af2dddcecef934e291d4",
"text": "This study was conducted to collect data on specific volatile fatty acids (produced from soft tissue decomposition) and various anions and cations (liberated from soft tissue and bone), deposited in soil solution underneath decomposing human cadavers as an aid in determining the \"time since death.\" Seven nude subjects (two black males, a white female and four white males) were placed within a decay research facility at various times of the year and allowed to decompose naturally. Data were amassed every three days in the spring and summer, and weekly in the fall and winter. Analyses of the data reveal distinct patterns in the soil solution for volatile fatty acids during soft tissue decomposition and for specific anions and cations once skeletonized, when based on accumulated degree days. Decompositional rates were also obtained, providing valuable information for estimating the \"maximum time since death.\" Melanin concentrations observed in soil solution during this study also yields information directed at discerning racial affinities. Application of these data can significantly enhance \"time since death\" determinations currently in use.",
"title": ""
},
{
"docid": "90558e7b7d2a5fbc76fe3d2c824289b0",
"text": "This paper deals with a 3 dB Ku-band coupler designed in substrate integrated waveguide (SIW) technology. A microstrip-SIW-transition is designed with a return loss (RL) greater than 20 dB. Rogers 4003 substrate is used for the SIW with a gold plated copper metallisation. The coupler achieves a relative bandwidth of 26.1% with an insertion loss (IL) lower than 2 dB, coupling balance smaller than 0.5 dB and RL and isolation greater than 15 dB.",
"title": ""
},
{
"docid": "120e36cc162f4ce602da810c80c18c7d",
"text": "We describe a new model for learning meaningful representations of text documents from an unlabeled collection of documents. This model is inspired by the recently proposed Replicated Softmax, an undirected graphical model of word counts that was shown to learn a better generative model and more meaningful document representations. Specifically, we take inspiration from the conditional mean-field recursive equations of the Replicated Softmax in order to define a neural network architecture that estimates the probability of observing a new word in a given document given the previously observed words. This paradigm also allows us to replace the expensive softmax distribution over words with a hierarchical distribution over paths in a binary tree of words. The end result is a model whose training complexity scales logarithmically with the vocabulary size instead of linearly as in the Replicated Softmax. Our experiments show that our model is competitive both as a generative model of documents and as a document representation learning algorithm.",
"title": ""
},
{
"docid": "7aaee14383fc247165017345ab2927a8",
"text": "The goals of the paper are as follows: i) review some qualitative properties of oil and gas prices in the last 15 years; ii) propose some mathematical elements towards a definition of mean reversion that would not be reduced to the form of the drift in a stochastic differential equation; iii) conduct econometric tests in order to conclude whether mean reversion still exists in the energy commodity price behavior. Regarding the third point, a clear “break” in the properties of oil and natural gas prices and volatility can be exhibited in the period 2000-2001.",
"title": ""
}
] |
scidocsrr
|
bd0bfbe872b60586bee33d0dd9ecf75c
|
CASENG: ARABIC SEMANTIC SEARCH ENGINE
|
[
{
"docid": "ce7175f868e2805e9e08e96a1c9738f4",
"text": "The development of the Semantic Web, with machine-readable content, has the potential to revolutionize the World Wide Web and its use. In A Semantic Web Primer Grigoris Antoniou and Frank van Harmelen provide an introduction and guide to this emerging field, describing its key ideas, languages, and technologies. Suitable for use as a textbook or for self-study by professionals, the book concentrates on undergraduate-level fundamental concepts and techniques that will enable readers to proceed with building applications on their own and includes exercises, project descriptions, and annotated references to relevant online materials. A Semantic Web Primer is the only available book on the Semantic Web to include a systematic treatment of the different languages (XML, RDF, OWL, and rules) and technologies (explicit metadata, ontologies, and logic and inference) that are central to Semantic Web development. The book also examines such crucial related topics as ontology engineering and application scenarios. After an introductory chapter, topics covered in succeeding chapters include XML and related technologies that support semantic interoperability; RDF and RDF Schema, the standard data model for machine-processible semantics; and OWL, the W3C-approved standard for a Web ontology language that is more extensive than RDF Schema; rules, both monotonic and nonmonotonic, in the framework of the Semantic Web; selected application domains and how the Semantic Web would benefit them; the development of ontology-based systems; and current debates on key issues and predictions for the future.",
"title": ""
},
{
"docid": "bb786e3cacb6512a4c309e37cde75a03",
"text": "Logs of user queries to an internet search engine p rovide a large amount of implicit and explicit inform ation about language. In this paper, we investigate their use in spelling correction of search queries, a task which poses many additional challenges beyond the traditional spelling correction problem. We pre sent an approach that uses an iterative transformat ion of the input query strings into other strings that correspond to more and more likely queries according to statistics extracted from internet search query log s.",
"title": ""
}
] |
[
{
"docid": "e9a66ce7077baf347d325bca7b008d6b",
"text": "Recent research have shown that the Wavelet Transform (WT) can potentially be used to extract Partial Discharge (PD) signals from severe noise like White noise, Random noise and Discrete Spectral Interferences (DSI). It is important to define that noise is a significant problem in PD detection. Accordingly, the paper mainly deals with denoising of PD signals, based on improved WT techniques namely Translation Invariant Wavelet Transform (TIWT). The improved WT method is distinct from other traditional method called as Fast Fourier Transform (FFT). The TIWT not only remain the edge of the original signal efficiently but also reduce impulsive noise to some extent. Additionally Translation Invariant (TI) Wavelet Transform denoising is used to suppress Pseudo Gibbs phenomenon. In this paper an attempt has been made to review the methodology of denoising the partial discharge signals and shows that the proposed denoising method results are better when compared to other wavelet-based approaches like FFT, wavelet hard thresholding, wavelet soft thresholding, by evaluating five different parameters like, Signal to noise ratio, Cross correlation coefficient, Pulse amplitude distortion, Mean square error, Reduction in noise level.",
"title": ""
},
{
"docid": "29c8c8abf86b2d7358a1cd70751f3f93",
"text": "Data domain description concerns the characterization of a data set. A good description covers all target data but includes no superfluous space. The boundary of a dataset can be used to detect novel data or outliers. We will present the Support Vector Data Description (SVDD) which is inspired by the Support Vector Classifier. It obtains a spherically shaped boundary around a dataset and analogous to the Support Vector Classifier it can be made flexible by using other kernel functions. The method is made robust against outliers in the training set and is capable of tightening the description by using negative examples. We show characteristics of the Support Vector Data Descriptions using artificial and real data.",
"title": ""
},
{
"docid": "43f200b97e2b6cb9c62bbbe71bed72e3",
"text": "We compare nonreturn-to-zero (NRZ) with return-to-zero (RZ) modulation format for wavelength-division-multiplexed systems operating at data rates up to 40 Gb/s. We find that in 10-40-Gb/s dispersion-managed systems (single-mode fiber alternating with dispersion compensating fiber), NRZ is more adversely affected by nonlinearities, whereas RZ is more affected by dispersion. In this dispersion map, 10- and 20-Gb/s systems operate better using RZ modulation format because nonlinearity dominates. However, 40-Gb/s systems favor the usage of NRZ because dispersion becomes the key limiting factor at 40 Gb/s.",
"title": ""
},
{
"docid": "f284c6e32679d8413e366d2daf1d4613",
"text": "Summary form only given. Existing studies on ensemble classifiers typically take a static approach in assembling individual classifiers, in which all the important features are specified in advance. In this paper, we propose a new concept, dynamic ensemble, as an advanced classifier that could have dynamic component classifiers and have dynamic configurations. Toward this goal, we have substantially expanded the existing \"overproduce and choose\" paradigm for ensemble construction. A new algorithm called BAGA is proposed to explore this approach. Taking a set of decision tree component classifiers as input, BAGA generates a set of candidate ensembles using combined bagging and genetic algorithm techniques so that component classifiers are determined at execution time. Empirical studies have been carried out on variations of the BAGA algorithm, where the sizes of chosen classifiers, effects of bag size, voting function and evaluation functions on the dynamic ensemble construction, are investigated.",
"title": ""
},
{
"docid": "80759a5c2e60b444ed96c9efd515cbdf",
"text": "The Web of Things is an active research field which aims at promoting the easy access and handling of smart things' digital representations through the adoption of Web standards and technologies. While huge research and development efforts have been spent on lower level networks and software technologies, it has been recognized that little experience exists instead in modeling and building applications for the Web of Things. Although several works have proposed Representational State Transfer (REST) inspired approaches for the Web of Things, a main limitation is that poor support is provided to web developers for speeding up the development of Web of Things applications while taking full advantage of REST benefits. In this paper, we propose a framework which supports developers in modeling smart things as web resources, exposing them through RESTful Application Programming Interfaces (APIs) and developing applications on top of them. The framework consists of a Web Resource information model, a middleware, and tools for developing and publishing smart things' digital representations on the Web. We discuss the framework compliance with REST guidelines and its major implementation choices. Finally, we report on our test activities carried out within the SmartSantander European Project to evaluate the use and proficiency of our framework in a smart city scenario.",
"title": ""
},
{
"docid": "7b347abe744b19215cf7a50ebd1b7f89",
"text": "The thickness of the cerebral cortex was measured in 106 non-demented participants ranging in age from 18 to 93 years. For each participant, multiple acquisitions of structural T1-weighted magnetic resonance imaging (MRI) scans were averaged to yield high-resolution, high-contrast data sets. Cortical thickness was estimated as the distance between the gray/white boundary and the outer cortical surface, resulting in a continuous estimate across the cortical mantle. Global thinning was apparent by middle age. Men and women showed a similar degree of global thinning, and did not differ in mean thickness in the younger or older groups. Age-associated differences were widespread but demonstrated a patchwork of regional atrophy and sparing. Examination of subsets of the data from independent samples produced highly similar age-associated patterns of atrophy, suggesting that the specific anatomic patterns within the maps were reliable. Certain results, including prominent atrophy of prefrontal cortex and relative sparing of temporal and parahippocampal cortex, converged with previous findings. Other results were unexpected, such as the finding of prominent atrophy in frontal cortex near primary motor cortex and calcarine cortex near primary visual cortex. These findings demonstrate that cortical thinning occurs by middle age and spans widespread cortical regions that include primary as well as association cortex.",
"title": ""
},
{
"docid": "7956e5fd3372716cb5ae16c6f9e846fb",
"text": "Understanding query intent helps modern search engines to improve search results as well as to display instant answers to the user. In this work, we introduce an accurate query classification method to detect the intent of a user search query. We propose using convolutional neural networks (CNN) to extract query vector representations as the features for the query classification. In this model, queries are represented as vectors so that semantically similar queries can be captured by embedding them into a vector space. Experimental results show that the proposed method can effectively detect intents of queries with higher precision and recall compared to current methods.",
"title": ""
},
{
"docid": "7e03d09882c7c8fcab5df7a6bd12764f",
"text": "This paper describes a background digital calibration technique based on bitwise correlation (BWC) to correct the capacitive digital-to-analog converter (DAC) mismatch error in successive-approximation-register (SAR) analog-to-digital converters (ADC's). Aided by a single-bit pseudorandom noise (PN) injected to the ADC input, the calibration engine extracts all bit weights simultaneously to facilitate a digital-domain correction. The analog overhead associated with this technique is negligible and the conversion speed is fully retained (in contrast to [1] in which the ADC throughput is halved). A prototype 12bit 50-MS/s SAR ADC fabricated in 90-nm CMOS measured a 66.5-dB peak SNDR and an 86.0-dB peak SFDR with calibration, while occupying 0.046 mm2 and dissipating 3.3 mW from a 1.2-V supply. The calibration logic is estimated to occupy 0.072 mm2 with a power consumption of 1.4 mW in the same process.",
"title": ""
},
{
"docid": "c8e64b40f971b430505a2c86a3f94c84",
"text": "V. Rastogi, R. Shao, Y. Chen, X. Pan, S. Zou, and R. Riley, “Are These Ads Safe: Detecting Hidden Attacks through the Mobile App-Web Interfaces,” in Proceedings of the Network and Distributed System Security Symposium (NDSS), 2016. V. Rastogi, Z. Qu, J. McClurg, Y. Cao, and Y. Chen, “Uranine: Real-time Privacy Leakage Monitoring without System Modification for Android,” in Proceedings of the 11th International Conference on Security and Privacy in Communication Networks (SecureComm), 2015. B. He, V. Rastogi, Y. Cao, Y. Chen, V. N. Venkatakrishnan, R. Yang, and Z. Zhang, “Vetting SSL Usage in Applications with SSLint,” in Proceedings of the 36th IEEE Symposium on Security and Privacy (Oakland), 2015. Z. Qu, V. Rastogi, X. Zhang, Y. Chen, T. Zhu, and Z. Chen, “AutoCog: Measuring the Description-to-permission Fidelity in Android Applications,” in Proceedings of the 21st Vaibhav Rastogi",
"title": ""
},
{
"docid": "2d340d004f81a9ed16ead41044103c5d",
"text": "Bio-medical image segmentation is one of the promising sectors where nuclei segmentation from high-resolution histopathological images enables extraction of very high-quality features for nuclear morphometrics and other analysis metrics in the field of digital pathology. The traditional methods including Otsu thresholding and watershed methods do not work properly in different challenging cases. However, Deep Learning (DL) based approaches are showing tremendous success in different modalities of bio-medical imaging including computation pathology. Recently, the Recurrent Residual U-Net (R2U-Net) has been proposed, which has shown state-of-the-art (SOTA) performance in different modalities (retinal blood vessel, skin cancer, and lung segmentation) in medical image segmentation. However, in this implementation, the R2U-Net is applied to nuclei segmentation for the first time on a publicly available dataset that was collected from the Data Science Bowl Grand Challenge in 2018. The R2U-Net shows around 92.15% segmentation accuracy in terms of the Dice Coefficient (DC) during the testing phase. In addition, the qualitative results show accurate segmentation, which clearly demonstrates the robustness of the R2U-Net model for the nuclei segmentation task.",
"title": ""
},
{
"docid": "ebc46b56d477c6a3543f69d57e891064",
"text": "Breast cancer places fifth as a cause of death from cancer, especially among women. However, screening techniques have increased the success rate of medical therapies. Based on the well-known limitations of the most commonly used screening methods, microwave imaging has been proposed as an alternative solution. By detecting the dielectric discontinuities in the tissues, microwave imaging systems are able to detect malignant lesions, without exposing the patient to ionizing radiations. With respect to the actual microwave prototypes, the novel system proposed in this paper works at an higher central frequency, i.e. 33.25 GHz, and with a bandwidth of about 13.5 GHz. These two values determine a higher achievable resolution, which would make it possible an early-stage cancer detection. However, at these frequencies the propagation medium is particularly challenging in terms of loss and penetration depth. Consequently, the beam-forming algorithms used to form the image play a crucial role to obtain optimum results even in a lossy medium. In this paper, the feasibility study of a system based on a mm-waves multi-static radar architecture is presented, focusing the attention on the comparison between two different beamforming algorithms.",
"title": ""
},
{
"docid": "4463a242a313f82527c4bdfff3d3c13c",
"text": "This paper examines the impact of capital structure on financial performance of Nigerian firms using a sample of thirty non-financial firms listed on the Nigerian Stock Exchange during the seven year period, 2004 – 2010. Panel data for the selected firms were generated and analyzed using ordinary least squares (OLS) as a method of estimation. The result shows that a firm’s capita structure surrogated by Debt Ratio, Dr has a significantly negative impact on the firm’s financial measures (Return on Asset, ROA, and Return on Equity, ROE). The study of these findings, indicate consistency with prior empirical studies and provide evidence in support of Agency cost theory.",
"title": ""
},
{
"docid": "d63946a096b9e8a99be6d5ddfe4097da",
"text": "While the first open comparative challenges in the field of paralinguistics targeted more ‘conventional’ phenomena such as emotion, age, and gender, there still exists a multiplicity of not yet covered, but highly relevant speaker states and traits. The INTERSPEECH 2011 Speaker State Challenge thus addresses two new sub-challenges to overcome the usually low compatibility of results: In the Intoxication Sub-Challenge, alcoholisation of speakers has to be determined in two classes; in the Sleepiness Sub-Challenge, another two-class classification task has to be solved. This paper introduces the conditions, the Challenge corpora “Alcohol Language Corpus” and “Sleepy Language Corpus”, and a standard feature set that may be used. Further, baseline results are given.",
"title": ""
},
{
"docid": "9a397ca2a072d9b1f861f8a6770aa792",
"text": "Computational photography systems are becoming increasingly diverse, while computational resources---for example on mobile platforms---are rapidly increasing. As diverse as these camera systems may be, slightly different variants of the underlying image processing tasks, such as demosaicking, deconvolution, denoising, inpainting, image fusion, and alignment, are shared between all of these systems. Formal optimization methods have recently been demonstrated to achieve state-of-the-art quality for many of these applications. Unfortunately, different combinations of natural image priors and optimization algorithms may be optimal for different problems, and implementing and testing each combination is currently a time-consuming and error-prone process. ProxImaL is a domain-specific language and compiler for image optimization problems that makes it easy to experiment with different problem formulations and algorithm choices. The language uses proximal operators as the fundamental building blocks of a variety of linear and nonlinear image formation models and cost functions, advanced image priors, and noise models. The compiler intelligently chooses the best way to translate a problem formulation and choice of optimization algorithm into an efficient solver implementation. In applications to the image processing pipeline, deconvolution in the presence of Poisson-distributed shot noise, and burst denoising, we show that a few lines of ProxImaL code can generate highly efficient solvers that achieve state-of-the-art results. We also show applications to the nonlinear and nonconvex problem of phase retrieval.",
"title": ""
},
{
"docid": "adc0de2a4c4baf4fdd35ff5a585550ef",
"text": "Sequence generation models such as recurrent networks can be trained with a diverse set of learning algorithms. For example, maximum likelihood learning is simple and efficient, yet suffers from the exposure bias problem. Reinforcement learning like policy gradient addresses the problem but can have prohibitively poor exploration efficiency. A variety of other algorithms such as RAML, SPG, and data noising, have also been developed from different perspectives. This paper establishes a formal connection between these algorithms. We present a generalized entropy regularized policy optimization formulation, and show that the apparently divergent algorithms can all be reformulated as special instances of the framework, with the only difference being the configurations of reward function and a couple of hyperparameters. The unified interpretation offers a systematic view of the varying properties of exploration and learning efficiency. Besides, based on the framework, we present a new algorithm that dynamically interpolates among the existing algorithms for improved learning. Experiments on machine translation and text summarization demonstrate the superiority of the proposed algorithm.",
"title": ""
},
{
"docid": "77731bed6cf76970e851f3b2ce467c1b",
"text": "We introduce SparkGalaxy, a big data processing toolkit that is able to encode complex data science experiments as a set of high-level workflows. SparkGalaxy combines the Spark big data processing platform and the Galaxy workflow management system to o↵er a set of tools for graph processing and machine learning using a novel interaction model for creating and using complex workflows. SparkGalaxy contributes an easy-to-use interface and scalable algorithms for data science. We demonstrate SparkGalaxy use in large social network analysis and other case stud-",
"title": ""
},
{
"docid": "5d35e34a5db727917e5105f857c174be",
"text": "Human face feature extraction using digital images is a vital element for several applications such as: identification and facial recognition, medical application, video games, cosmetology, etc. The skin pores are very important element of the structure of the skin. A novelty method is proposed allowing decomposing an photography of human face from digital image (RGB) in two layers, melanin and hemoglobin. From melanin layer, the main pores from the face can be obtained, as well as the centroids of each of them. It has been found that the pore configuration of the skin is invariant and unique for each individual. Therefore, from the localization of the pores of a human face, it is a possibility to use them for diverse application in the fields of pattern",
"title": ""
},
{
"docid": "45eea1373ec204261d98d99e33214225",
"text": "Current wireless network design is built on the ethos of avoiding interference. In this paper we question this long-held design principle. We show that with appropriate design, successful concurrent transmissions can be enabled and exploited on both the uplink and downlink. We show that this counter-intuitive approach of encouraging interference can be exploited to increase network capacity significantly and simplify network design. We design and implement name, a novel MAC and PHY protocol that exploits recently proposed rateless coding techniques to provide such concurrency. We show via a prototype implementation and experimental evaluation that name can provide a 60% increase in network capacity on the uplink compared to traditional Wifi that does omniscient rate adaptation and a $35\\%$ median throughput gain on the downlink PHY layer as compared to an omniscient scheme that picks the best conventional bitrate.",
"title": ""
},
{
"docid": "126d8080f7dd313d534a95d8989b0fbd",
"text": "Intrusion prevention mechanisms are largely insufficient for protection of databases against Information Warfare attacks by authorized users and has drawn interest towards intrusion detection. We visualize the conflicting motives between an attacker and a detection system as a multi-stage game between two players, each trying to maximize his payoff. We consider the specific application of credit card fraud detection and propose a fraud detection system based on a game-theoretic approach. Not only is this approach novel in the domain of Information Warfare, but also it improvises over existing rule-based systems by predicting the next move of the fraudster and learning at each step.",
"title": ""
},
{
"docid": "04aa6c7ede8d418297e498d7a163f996",
"text": "Dual active bridge (DAB) converters have been popular in high voltage, low and medium power DC-DC applications, as well as an intermediate high frequency link in solid state transformers. In this paper, a multilevel DAB (ML-DAB) has been proposed in which two active bridges produce two-level (2L)-5L, 5L-2L and 3L-5L voltage waveforms across the high frequency transformer. The proposed ML-DAB has the advantage of being used in high step-up/down converters, which deal with higher voltages, as compared to conventional two-level DABs. A three-level neutral point diode clamped (NPC) topology has been used in the high voltage bridge, which enables the semiconductor switches to be operated within a higher voltage range without the need for cascaded bridges or multiple two-level DAB converters. A symmetric modulation scheme, based on the least number of angular parameters rather than the duty-ratio, has been proposed for a different combination of bridge voltages. This ML-DAB is also suitable for maximum power point tracking (MPPT) control in photovoltaic applications. Steady-state analysis of the converter with symmetric phase-shift modulation is presented and verified using simulation and hardware experiments.",
"title": ""
}
] |
scidocsrr
|
73e8035d61ff6b469dffb788d4b44865
|
Corda : An Introduction
|
[
{
"docid": "7c0748301936c39166b9f91ba72d92ef",
"text": "methods and native methods are considered to be type safe if they do not override a final method. methodIsTypeSafe(Class, Method) :doesNotOverrideFinalMethod(Class, Method), methodAccessFlags(Method, AccessFlags), member(abstract, AccessFlags). methodIsTypeSafe(Class, Method) :doesNotOverrideFinalMethod(Class, Method), methodAccessFlags(Method, AccessFlags), member(native, AccessFlags). private methods and static methods are orthogonal to dynamic method dispatch, so they never override other methods (§5.4.5). doesNotOverrideFinalMethod(class('java/lang/Object', L), Method) :isBootstrapLoader(L). doesNotOverrideFinalMethod(Class, Method) :isPrivate(Method, Class). doesNotOverrideFinalMethod(Class, Method) :isStatic(Method, Class). doesNotOverrideFinalMethod(Class, Method) :isNotPrivate(Method, Class), isNotStatic(Method, Class), doesNotOverrideFinalMethodOfSuperclass(Class, Method). doesNotOverrideFinalMethodOfSuperclass(Class, Method) :classSuperClassName(Class, SuperclassName), classDefiningLoader(Class, L), loadedClass(SuperclassName, L, Superclass), classMethods(Superclass, SuperMethodList), finalMethodNotOverridden(Method, Superclass, SuperMethodList). 4.10 Verification of class Files THE CLASS FILE FORMAT 202 final methods that are private and/or static are unusual, as private methods and static methods cannot be overridden per se. Therefore, if a final private method or a final static method is found, it was logically not overridden by another method. finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isFinal(Method, Superclass), isPrivate(Method, Superclass). finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isFinal(Method, Superclass), isStatic(Method, Superclass). If a non-final private method or a non-final static method is found, skip over it because it is orthogonal to overriding. finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isNotFinal(Method, Superclass), isPrivate(Method, Superclass), doesNotOverrideFinalMethodOfSuperclass(Superclass, Method). finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isNotFinal(Method, Superclass), isStatic(Method, Superclass), doesNotOverrideFinalMethodOfSuperclass(Superclass, Method). THE CLASS FILE FORMAT Verification of class Files 4.10 203 If a non-final, non-private, non-static method is found, then indeed a final method was not overridden. Otherwise, recurse upwards. finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isNotFinal(Method, Superclass), isNotStatic(Method, Superclass), isNotPrivate(Method, Superclass). finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), notMember(method(_, Name, Descriptor), SuperMethodList), doesNotOverrideFinalMethodOfSuperclass(Superclass, Method). 
4.10 Verification of class Files THE CLASS FILE FORMAT 204 4.10.1.6 Type Checking Methods with Code Non-abstract, non-native methods are type correct if they have code and the code is type correct. methodIsTypeSafe(Class, Method) :doesNotOverrideFinalMethod(Class, Method), methodAccessFlags(Method, AccessFlags), methodAttributes(Method, Attributes), notMember(native, AccessFlags), notMember(abstract, AccessFlags), member(attribute('Code', _), Attributes), methodWithCodeIsTypeSafe(Class, Method). A method with code is type safe if it is possible to merge the code and the stack map frames into a single stream such that each stack map frame precedes the instruction it corresponds to, and the merged stream is type correct. The method's exception handlers, if any, must also be legal. methodWithCodeIsTypeSafe(Class, Method) :parseCodeAttribute(Class, Method, FrameSize, MaxStack, ParsedCode, Handlers, StackMap), mergeStackMapAndCode(StackMap, ParsedCode, MergedCode), methodInitialStackFrame(Class, Method, FrameSize, StackFrame, ReturnType), Environment = environment(Class, Method, ReturnType, MergedCode, MaxStack, Handlers), handlersAreLegal(Environment), mergedCodeIsTypeSafe(Environment, MergedCode, StackFrame). THE CLASS FILE FORMAT Verification of class Files 4.10 205 Let us consider exception handlers first. An exception handler is represented by a functor application of the form: handler(Start, End, Target, ClassName) whose arguments are, respectively, the start and end of the range of instructions covered by the handler, the first instruction of the handler code, and the name of the exception class that this handler is designed to handle. An exception handler is legal if its start (Start) is less than its end (End), there exists an instruction whose offset is equal to Start, there exists an instruction whose offset equals End, and the handler's exception class is assignable to the class Throwable. The exception class of a handler is Throwable if the handler's class entry is 0, otherwise it is the class named in the handler. An additional requirement exists for a handler inside an <init> method if one of the instructions covered by the handler is invokespecial of an <init> method. In this case, the fact that a handler is running means the object under construction is likely broken, so it is important that the handler does not swallow the exception and allow the enclosing <init> method to return normally to the caller. Accordingly, the handler is required to either complete abruptly by throwing an exception to the caller of the enclosing <init> method, or to loop forever. 4.10 Verification of class Files THE CLASS FILE FORMAT 206 handlersAreLegal(Environment) :exceptionHandlers(Environment, Handlers), checklist(handlerIsLegal(Environment), Handlers). handlerIsLegal(Environment, Handler) :Handler = handler(Start, End, Target, _), Start < End, allInstructions(Environment, Instructions), member(instruction(Start, _), Instructions), offsetStackFrame(Environment, Target, _), instructionsIncludeEnd(Instructions, End), currentClassLoader(Environment, CurrentLoader), handlerExceptionClass(Handler, ExceptionClass, CurrentLoader), isBootstrapLoader(BL), isAssignable(ExceptionClass, class('java/lang/Throwable', BL)), initHandlerIsLegal(Environment, Handler). instructionsIncludeEnd(Instructions, End) :member(instruction(End, _), Instructions). instructionsIncludeEnd(Instructions, End) :member(endOfCode(End), Instructions). 
handlerExceptionClass(handler(_, _, _, 0), class('java/lang/Throwable', BL), _) :isBootstrapLoader(BL). handlerExceptionClass(handler(_, _, _, Name), class(Name, L), L) :Name \\= 0. THE CLASS FILE FORMAT Verification of class Files 4.10 207 initHandlerIsLegal(Environment, Handler) :notInitHandler(Environment, Handler). notInitHandler(Environment, Handler) :Environment = environment(_Class, Method, _, Instructions, _, _), isNotInit(Method). notInitHandler(Environment, Handler) :Environment = environment(_Class, Method, _, Instructions, _, _), isInit(Method), member(instruction(_, invokespecial(CP)), Instructions), CP = method(MethodClassName, MethodName, Descriptor), MethodName \\= '<init>'. initHandlerIsLegal(Environment, Handler) :isInitHandler(Environment, Handler), sublist(isApplicableInstruction(Target), Instructions, HandlerInstructions), noAttemptToReturnNormally(HandlerInstructions). isInitHandler(Environment, Handler) :Environment = environment(_Class, Method, _, Instructions, _, _), isInit(Method). member(instruction(_, invokespecial(CP)), Instructions), CP = method(MethodClassName, '<init>', Descriptor). isApplicableInstruction(HandlerStart, instruction(Offset, _)) :Offset >= HandlerStart. noAttemptToReturnNormally(Instructions) :notMember(instruction(_, return), Instructions). noAttemptToReturnNormally(Instructions) :member(instruction(_, athrow), Instructions). 4.10 Verification of class Files THE CLASS FILE FORMAT 208 Let us now turn to the stream of instructions and stack map frames. Merging instructions and stack map frames into a single stream involves four cases: • Merging an empty StackMap and a list of instructions yields the original list of instructions. mergeStackMapAndCode([], CodeList, CodeList). • Given a list of stack map frames beginning with the type state for the instruction at Offset, and a list of instructions beginning at Offset, the merged list is the head of the stack map frame list, followed by the head of the instruction list, followed by the merge of the tails of the two lists. mergeStackMapAndCode([stackMap(Offset, Map) | RestMap], [instruction(Offset, Parse) | RestCode], [stackMap(Offset, Map), instruction(Offset, Parse) | RestMerge]) :mergeStackMapAndCode(RestMap, RestCode, RestMerge). • Otherwise, given a list of stack map frames beginning with the type state for the instruction at OffsetM, and a list of instructions beginning at OffsetP, then, if OffsetP < OffsetM, the merged list consists of the head of the instruction list, followed by the merge of the stack map frame list and the tail of the instruction list. mergeStackMapAndCode([stackMap(OffsetM, Map) | RestMap], [instruction(OffsetP, Parse) | RestCode], [instruction(OffsetP, Parse) | RestMerge]) :OffsetP < OffsetM, mergeStackMapAndCode([stackMap(OffsetM, Map) | RestMap], RestCode, RestMerge). • Otherwise, the merge of the two lists is undefined. Since the instruction list has monotonically increasing offsets, the merge of the two lists is not defined unless every stack map frame offset has a corresponding instruction offset and the stack map frames are in monotonically ",
"title": ""
}
] |
[
{
"docid": "6e74bd999e2155d5e19c2e11e1a0e782",
"text": "The phenomenon of digital transformation received some attention in previous literature concerning industries such as media, entertainment and publishing. However, there is a lack of understanding about digital transformation of primarily physical industries, whose products cannot be completely digitized, e.g., automotive industry. We conducted a rigorous content analysis of substantial secondary data from industry magazines aiming to generate insights to this phenomenon in the automotive industry. We examined the impact of major digital trends on dominant business models. Our findings indicate that trends related to social media, mobile, big data and cloud computing are driving automobile manufactures to extend, revise, terminate, and create business models. By doing so, they contribute to the constitution of a digital layer upon the physical mobility infrastructure. Despite its strong foundation in the physical world, the industry is undergoing important structural changes due to the ongoing digitalization of consumer lives and business.",
"title": ""
},
{
"docid": "f4e96df66db50120d2684835598a569e",
"text": "Many prominent studies of infant cognition over the past two decades have relied on the fact that infants habituate to repeated stimuli - i.e. that their looking times tend to decline upon repeated stimulus presentations. This phenomenon had been exploited to reveal a great deal about the minds of preverbal infants. Many prominent studies of the neural bases of adult cognition over the past decade have relied on the fact that brain regions habituate to repeated stimuli - i.e. that the hemodynamic responses observed in fMRI tend to decline upon repeated stimulus presentations. This phenomenon has been exploited to reveal a great deal about the neural mechanisms of perception and cognition. Similarities in the mechanics of these two forms of habituation suggest that it may be useful to relate them to each other. Here we outline this analogy, explore its nuances, and highlight some ways in which the study of habituation in functional neuroimaging could yield novel insights into the nature of habituation in infant cognition - and vice versa.",
"title": ""
},
{
"docid": "b57006686160241bf118c2c638971764",
"text": "Reproducibility is the hallmark of good science. Maintaining a high degree of transparency in scientific reporting is essential not just for gaining trust and credibility within the scientific community but also for facilitating the development of new ideas. Sharing data and computer code associated with publications is becoming increasingly common, motivated partly in response to data deposition requirements from journals and mandates from funders. Despite this increase in transparency, it is still difficult to reproduce or build upon the findings of most scientific publications without access to a more complete workflow. Version control systems (VCS), which have long been used to maintain code repositories in the software industry, are now finding new applications in science. One such open source VCS, Git, provides a lightweight yet robust framework that is ideal for managing the full suite of research outputs such as datasets, statistical code, figures, lab notes, and manuscripts. For individual researchers, Git provides a powerful way to track and compare versions, retrace errors, explore new approaches in a structured manner, while maintaining a full audit trail. For larger collaborative efforts, Git and Git hosting services make it possible for everyone to work asynchronously and merge their contributions at any time, all the while maintaining a complete authorship trail. In this paper I provide an overview of Git along with use-cases that highlight how this tool can be leveraged to make science more reproducible and transparent, foster new collaborations, and support novel uses.",
"title": ""
},
{
"docid": "d8a194a88ccf20b8160b75d930969c85",
"text": "We describe the design and hardware implementation of our walking and manipulation controllers that are based on a cascade of online optimizations. A virtual force acting at the robot's center of mass (CoM) is estimated and used to compensated for modeling errors of the CoM and unplanned external forces. The proposed controllers have been implemented on the Atlas robot, a full size humanoid robot built by Boston Dynamics, and used in the DARPA Robotics Challenge Finals, which consisted of a wide variety of locomotion and manipulation tasks.",
"title": ""
},
{
"docid": "807564cfc2e90dee21a3efd8dc754ba3",
"text": "The present paper reports two studies designed to test the Dualistic Model of Passion with regard to performance attainment in two fields of expertise. Results from both studies supported the Passion Model. Harmonious passion was shown to be a positive source of activity investment in that it directly predicted deliberate practice (Study 1) and positively predicted mastery goals which in turn positively predicted deliberate practice (Study 2). In turn, deliberate practice had a direct positive impact on performance attainment. Obsessive passion was shown to be a mixed source of activity investment. While it directly predicted deliberate practice (Study 1) and directly predicted mastery goals (which predicted deliberate practice), it also predicted performance-avoidance and performance-approach goals, with the former having a tendency to facilitate performance directly, and the latter to directly negatively impact on performance attainment (Study 2). Finally, harmonious passion was also positively related to subjective well-being (SWB) in both studies, while obsessive passion was either unrelated (Study 1) or negatively related to SWB (Study 2). The conceptual and applied implications of the differential influences of harmonious and obsessive passion in performance are discussed.",
"title": ""
},
{
"docid": "1d5119a4aeb7d678b58fb4e55c43fe94",
"text": "This chapter provides a simplified introduction to cloud computing. This chapter starts by introducing the history of cloud computing and moves on to describe the cloud architecture and operation. This chapter also discusses briefly cloud servicemodels: Infrastructure-as-a-Service, Platform-as-a-Service, and Software-as-a-Service. Clouds are also categorized based on their ownership to private and public clouds. This chapter concludes by explaining the reasons for choosing cloud computing over other technologies by exploring the economic and technological benefits of the cloud.",
"title": ""
},
{
"docid": "e92e53500dea9442f9a98ff67bcb7b62",
"text": "The radar is expected to go beyond the traditional functionality of range and speed estimation to target classification. The complementary use of radar and video is becoming increasingly popular for applications such as autonomous cars, smart home automation etc. Target classification based on radar depends on the characteristic motion patterns of target nonrigidities. The Micro-Doppler (MD) signal captures such motions that have been used to extract reliable distinguishing features for various classes of targets. Popular MD analysis techniques such as Cadence frequency estimation require long captures before reliably identifying the target. Such a latency has an impact on the response times especially in time critical systems such as autonomous cars. Although a finite latency is unavoidable, it is in the interest of the community to keep it as small as possible. In this paper, we use unsupervised learning, specifically auto-encoders for learning Micro-Doppler features. We use a particular fast learning algorithm which learns very quickly with little training data and deliver reliable classification.",
"title": ""
},
{
"docid": "f417966c584f20aecccf80cd8ee2fb32",
"text": "The Routing Protocol for Low-Power and Lossy Networks (RPL) constructs routes by using Objective Functions that optimize or constrain the routes it selects and uses. This specification describes the Minimum Rank with Hysteresis Objective Function (MRHOF), an Objective Function that selects routes that minimize a metric, while using hysteresis to reduce churn in response to small metric changes. MRHOF works with additive metrics along a route, and the metrics it uses are determined by the metrics that the RPL Destination Information Object (DIO) messages advertise.",
"title": ""
},
{
"docid": "128a616b2c33dded974d792579662f2c",
"text": "III Editorial V Corporate social responsibility: Challenges and opportunities for trade unionists, by Dwight W. Justice 1 Sustainable bargaining: Labour agreements go global, by Ian Graham 15 The social responsibilities of business and workers' rights, by Guy Ryder 21 OECD Guidelines – one tool for corporate social accountability, by John Evans 25 Corporate social responsibility – new morals for business?, by Philip Jennings 31 Corporate social responsibility in Europe: A chance for social dialogue?, by Anne Renaut 35 Strengths and weaknesses of Belgium's social label, by Bruno Melckmans 41 Social auditing, freedom of association and the right to collective bargaining, by Philip Hunter and Michael Urminsky 47 Corporate public reporting on labour and employment issues, by Michael Urminsky 55 The ILO Conventions: A \" major reference \" , by Nicole Notat 63 Workers' capital and corporate social responsibility, by Jon Robinson 67 The social responsibility of business, by Reg Green 75",
"title": ""
},
{
"docid": "a73275f83b94ee3fb1675a125edbb55a",
"text": "Treatment of biowaste, the predominant waste fraction in lowand middle-income settings, offers public health, environmental and economic benefits by converting waste into a hygienic product, diverting it from disposal sites, and providing a source of income. This article presents a comprehensive overview of 13 biowaste treatment technologies, grouped into four categories: (1) direct use (direct land application, direct animal feed, direct combustion), (2) biological treatment (composting, vermicomposting, black soldier fly treatment, anaerobic digestion, fermentation), (3) physico-chemical treatment (transesterification, densification), and (4) thermo-chemical treatment (pyrolysis, liquefaction, gasification). Based on a literature review and expert consultation, the main feedstock requirements, process conditions and treatment products are summarized, and the challenges and trends, particularly regarding the applicability of each technology in the urban lowand middle-income context, are critically discussed. An analysis of the scientific articles published from 2005 to 2015 reveals substantial differences in the amount and type of research published for each technology, a fact that can partly be explained with the development stage of the technologies. Overall, publications from case studies and field research seem disproportionately underrepresented for all technologies. One may argue that this reflects the main task of researchers—to conduct fundamental research for enhanced process understanding—but it may also be a result of the traditional embedding of the waste sector in the discipline of engineering science, where socio-economic and management aspects are seldom object of the research. More unbiased, wellstructured and reproducible evidence from case studies at scale could foster the knowledge transfer to practitioners and enhance the exchange between academia, policy and practice.",
"title": ""
},
{
"docid": "0a4fd637914f538a37830655f8c5df01",
"text": "Many children's books contain movable pictures with elements that can be physically opened, closed, pushed, pulled, spun, flipped, or swung. But these tangible, interactive reading experiences are inaccessible to children with visual impairments. This paper presents a set of 3D-printable models designed as building blocks for creating movable tactile pictures that can be touched, moved, and understood by children with visual impairments. Examples of these models are canvases, connectors, hinges, spinners, sliders, lifts, walls, and cutouts. They can be used to compose movable tactile pictures to convey a range of spatial concepts, such as in/out, up/down, and high/low. The design and development of these models were informed by three formative studies including 1) a survey on popular moving mechanisms in children's books and 3D-printed parts to implement them, 2) two workshops on the process creating movable tactile pictures by hand (e.g., Lego, Play-Doh), and 3) creation of wood-based prototypes and an informal testing on sighted preschoolers. Also, we propose a design language based on XML and CSS for specifying the content and structure of a movable tactile picture. Given a specification, our system can generate a 3D-printable model. We evaluate our approach by 1) transcribing six children's books, and 2) conducting six interviews on domain experts including four teachers for the visually impaired, one blind adult, two publishers at the National Braille Press, a renowned tactile artist, and a librarian.",
"title": ""
},
{
"docid": "cbc2b592efc227a5c6308edfbca51bd6",
"text": "The rapidly growing presence of Internet of Things (IoT) devices is becoming a continuously alluring playground for malicious actors who try to harness their vast numbers and diverse locations. One of their primary goals is to assemble botnets that can serve their nefarious purposes, ranging from Denial of Service (DoS) to spam and advertisement fraud. The most recent example that highlights the severity of the problem is the Mirai family of malware, which is accountable for a plethora of massive DDoS attacks of unprecedented volume and diversity. The aim of this paper is to offer a comprehensive state-of-the-art review of the IoT botnet landscape and the underlying reasons of its success with a particular focus on Mirai and major similar worms. To this end, we provide extensive details on the internal workings of IoT malware, examine their interrelationships, and elaborate on the possible strategies for defending against them.",
"title": ""
},
{
"docid": "d141c13cea52e72bb7b84d3546496afb",
"text": "A number of resource-intensive applications, such as augmented reality, natural language processing, object recognition, and multimedia-based software are pushing the computational and energy boundaries of smartphones. Cloud-based services augment the resource-scare capabilities of smartphones while offloading compute-intensive methods to resource-rich cloud servers. The amalgam of cloud and mobile computing technologies has ushered the rise of Mobile Cloud Computing (MCC) paradigm which envisions operating smartphones and modern mobile devices beyond their intrinsic capabilities. System virtualization, application virtualization, and dynamic binary translation (DBT) techniques are required to address the heterogeneity of smartphone and cloud architectures. However, most of the current research work has only focused on the offloading of virtualized applications while giving limited consideration to native code offloading. Moreover, researchers have not attended to the requirements of multimedia based applications in MCC offloading frameworks. In this study, we present a survey and taxonomy of state-of-the-art MCC frameworks, DBT techniques for native offloading, and cross-platform execution techniques for multimedia based applications. We survey the MCC frameworks from the perspective of offload enabling techniques. We focus on native code offloading frameworks and analyze the DBT and emulation techniques of smartphones (ARM) on a cloud server (x86) architectures. Furthermore, we debate the open research issues and challenges to native offloading of multimedia based smartphone applications.",
"title": ""
},
{
"docid": "06ddb465297deb903dc812607d9d7c95",
"text": "Today’s speed of image processing tools as well as the availability of robust techniques for extracting geometric and basic thematic information from image streams makes real-time photogrammetry possible. The paper discusses the basic tools for fully automatic calibration, orientation and surface recontruction as well as for tracking, ego-motion determination and behaviour analysis. Examples demonstrate today’s potential for future applications.",
"title": ""
},
{
"docid": "108a3f06052f615a7ebfc561c3c87cfc",
"text": "There are an estimated 0.5-1 million mite species on earth. Among the many mites that are known to affect humans and animals, only a subset are parasitic but these can cause significant disease. We aim here to provide an overview of the most recent work in this field in order to identify common biological features of these parasites and to inform common strategies for future research. There is a critical need for diagnostic tools to allow for better surveillance and for drugs tailored specifically to the respective parasites. Multi-'omics' approaches represent a logical and timely strategy to identify the appropriate mite molecules. Recent advances in sequencing technology enable us to generate de novo genome sequence data, even from limited DNA resources. Consequently, the field of mite genomics has recently emerged and will now rapidly expand, which is a particular advantage for parasitic mites that cannot be cultured in vitro. Investigations of the microbiota associated with mites will elucidate the link between parasites and pathogens, and define the role of the mite in transmission and pathogenesis. The databases generated will provide the crucial knowledge essential to design novel diagnostic tools, control measures, prophylaxes, drugs and immunotherapies against the mites and associated secondary infections.",
"title": ""
},
{
"docid": "ef53fb4fa95575c6472173db51d77a65",
"text": "I review existing knowledge, unanswered questions, and new directions in research on stress, coping resource, coping strategies, and social support processes. New directions in research on stressors include examining the differing impacts of stress across a range of physical and mental health outcomes, the \"carry-overs\" of stress from one role domain or stage of life into another, the benefits derived from negative experiences, and the determinants of the meaning of stressors. Although a sense of personal control and perceived social support influence health and mental health both directly and as stress buffers, the theoretical mechanisms through which they do so still require elaboration and testing. New work suggests that coping flexibility and structural constraints on individuals' coping efforts may be important to pursue. Promising new directions in social support research include studies of the negative effects of social relationships and of support giving, mutual coping and support-giving dynamics, optimal \"matches\" between individuals' needs and support received, and properties of groups which can provide a sense of social support. Qualitative comparative analysis, optimal matching analysis, and event-structure analysis are new techniques which may help advance research in these broad topic areas. To enhance the effectiveness of coping and social support interventions, intervening mechanisms need to be better understood. Nevertheless, the policy implications of stress research are clear and are important given current interest in health care reform in the United States.",
"title": ""
},
{
"docid": "588fa44a37a2f932182f01f0f0010f3e",
"text": "High attrition rates in massive open online courses (MOOCs) have motivated growing interest in the automatic detection of student “stopout”. Stopout classifiers can be used to orchestrate an intervention before students quit, and to survey students dynamically about why they ceased participation. In this paper we expand on existing stop-out detection research by (1) exploring important elements of classifier design such as generalizability to new courses; (2) developing a novel framework inspired by control theory for how to use a classifier’s outputs to make intelligent decisions; and (3) presenting results from a “dynamic survey intervention” conducted on 2 HarvardX MOOCs, containing over 40000 students, in early 2015. Our results suggest that surveying students based on an automatic stopout classifier achieves higher response rates compared to traditional post-course surveys, and may boost students’ propensity to “come back” into the course.",
"title": ""
},
{
"docid": "32415299f9df0d776bc30b3fcc106113",
"text": "The advent of affordable consumer grade RGB-D cameras has brought about a profound advancement of visual scene reconstruction methods. Both computer graphics and computer vision researchers spend significant effort to develop entirely new algorithms to capture comprehensive shape models of static and dynamic scenes with RGB-D cameras. This led to significant advances of the state of the art along several dimensions. Some methods achieve very high reconstruction detail, despite limited sensor resolution. Others even achieve real-time performance, yet possibly at lower quality. New concepts were developed to capture scenes at larger spatial and temporal extent. Other recent algorithms flank shape reconstruction with concurrent material and lighting estimation, even in general scenes and unconstrained conditions. In this state-of-the-art report, we analyze these recent developments in RGB-D scene reconstruction in detail and review essential related work. We explain, compare, and critically analyze the common underlying algorithmic concepts that enabled these recent advancements. Furthermore, we show how algorithms are designed to best exploit the benefits of RGB-D data while suppressing their often non-trivial data distortions. In addition, this report identifies and discusses important open research questions and suggests relevant directions for future work. CCS Concepts •Computing methodologies , . . ., Reconstruction; Appearance and texture representations; Motion capture;",
"title": ""
},
{
"docid": "68c2b36ae2be6a0bc0a42cb8fcf284fe",
"text": "We present a data-driven shape model for reconstructing human body models from one or more 2D photos. One of the key tasks in reconstructing the 3D model from image data is shape recovery, a task done until now in utterly geometric way, in the domain of human body modeling. In contrast, we adopt a data-driven, parameterized deformable model that is acquired from a collection of range scans of real human body. The key idea is to complement the image-based reconstruction method by leveraging the quality shape and statistic information accumulated from multiple shapes of range-scanned people. In the presence of ambiguity either from the noise or missing views, our technique has a bias towards representing as much as possible the previously acquired ‘knowledge’ on the shape geometry. Texture coordinates are then generated by projecting the modified deformable model onto the front and back images. Our technique has shown to reconstruct successfully human body models from minimum number images, even from a single image input.",
"title": ""
},
{
"docid": "6ed4d5ae29eef70f5aae76ebed76b8ca",
"text": "Web services that thrive on mining user interaction data such as search engines can currently track clicks and mouse cursor activity on their Web pages. Cursor interaction mining has been shown to assist in user modeling and search result relevance, and is becoming another source of rich information that data scientists and search engineers can tap into. Due to the growing popularity of touch-enabled mobile devices, search systems may turn to tracking touch interactions in place of cursor interactions. However, unlike cursor interactions, touch interactions are difficult to record reliably and their coordinates have not been shown to relate to regions of user interest. A better approach may be to track the viewport coordinates instead, which the user must manipulate to view the content on a mobile device. These recorded viewport coordinates can potentially reveal what regions of the page interest users and to what degree. Using this information, search system can then improve the design of their pages or use this information in click models or learning to rank systems. In this position paper, we discuss some of the challenges faced in mining interaction data for new modes of interaction, and future research directions in this field.",
"title": ""
}
] |
scidocsrr
|
ccaae771adaf42a8c6afed7d8a0f2821
|
Benchmarking modern distributed streaming platforms
|
[
{
"docid": "4fa73e04ccc8620c12aaea666ea366a6",
"text": "The popularity of the Web and Internet commerce provides many extremely large datasets from which information can be gleaned by data mining. This book focuses on practical algorithms that have been used to solve key problems in data mining and can be used on even the largest datasets. It begins with a discussion of the map-reduce framework, an important tool for parallelizing algorithms automatically. The tricks of locality-sensitive hashing are explained. This body of knowledge, which deserves to be more widely known, is essential when seeking similar objects in a very large collection without having to compare each pair of objects. Stream processing algorithms for mining data that arrives too fast for exhaustive processing are also explained. The PageRank idea and related tricks for organizing the Web are covered next. Other chapters cover the problems of finding frequent itemsets and clustering, each from the point of view that the data is too large to fit in main memory, and two applications: recommendation systems and Web advertising, each vital in e-commerce. This second edition includes new and extended coverage on social networks, machine learning and dimensionality reduction. Written by leading authorities in database and web technologies, it is essential reading for students and practitioners alike",
"title": ""
}
] |
[
{
"docid": "73cfe07d02651eee42773824d03dcfa1",
"text": "Discovery of usage patterns from Web data is one of the primary purposes for Web Usage Mining. In this paper, a technique to generate Significant Usage Patterns (SUP) is proposed and used to acquire significant “user preferred navigational trails”. The technique uses pipelined processing phases including sub-abstraction of sessionized Web clickstreams, clustering of the abstracted Web sessions, concept-based abstraction of the clustered sessions, and SUP generation. Using this technique, valuable customer behavior information can be extracted by Web site practitioners. Experiments conducted using Web log data provided by J.C.Penney demonstrate that SUPs of different types of customers are distinguishable and interpretable. This technique is particularly suited for analysis of dynamic websites.",
"title": ""
},
{
"docid": "c68ec0f721c8d8bfa27a415ba10708cf",
"text": "Textures are widely used in modern computer graphics. Their size, however, is often a limiting factor. Considering the widespread adaptation of mobile virtual and augmented reality applications, efficient storage of textures has become an important factor.\n We present an approach to analyse textures of a given mesh and compute a new set of textures with the goal of improving storage efficiency and reducing memory requirements. During this process the texture coordinates of the mesh are updated as required. Textures are analysed based on the UV-coordinates of one or more meshes and deconstructed into per-triangle textures. These are further analysed to detect single coloured as well as identical per-triangle textures. Our approach aims to remove these redundancies in order to reduce the amount of memory required to store the texture data. After this analysis, the per-triangle textures are compiled into a new set of texture images of user defined size. Our algorithm aims to pack texture data as tightly as possible in order to reduce the memory requirements.",
"title": ""
},
{
"docid": "5b45931590cb1e20b0a6f546316dc465",
"text": "We consider the task of accurately controlling a complex system, such as autonomously sliding a car sideways into a parking spot. Although certain regions of this domain are extremely hard to model (i.e., the dynamics of the car while skidding), we observe that in practice such systems are often remarkably deterministic over short periods of time, even in difficult-to-model regions. Motivated by this intuition, we develop a probabilistic method for combining closed-loop control in the well-modeled regions and open-loop control in the difficult-to-model regions. In particular, we show that by combining 1) an inaccurate model of the system and 2) a demonstration of the desired behavior, our approach can accurately and robustly control highly challenging systems, without the need to explicitly model the dynamics in the most complex regions and without the need to hand-tune the switching control law. We apply our approach to the task of autonomous sideways sliding into a parking spot, and show that we can repeatedly and accurately control the system, placing the car within about 2 feet of the desired location; to the best of our knowledge, this represents the state of the art in terms of accurately controlling a vehicle in such a maneuver.",
"title": ""
},
{
"docid": "6aed31a677c2fca976c91c67abd1e7b1",
"text": "Facebook is the most popular Social Network Site (SNS) among college students. Despite the popularity and extensive use of Facebook by students, its use has not made significant inroads into classroom usage. In this study, we seek to examine why this is the case and whether it would be worthwhile for faculty to invest the time to integrate Facebook into their teaching. To this end, we decided to undertake a study with a sample of 214 undergraduate students at the University of Huelva (Spain). We applied the structural equation model specifically designed by Mazman and Usluel (2010) to identify the factors that may motivate these students to adopt and use social network tools, specifically Facebook, for educational purposes. According to our results, Social Influence is the most important factor in predicting the adoption of Facebook; students are influenced to adopt it to establish or maintain contact with other people with whom they share interests. Regarding the purposes of Facebook usage, Social Relations is perceived as the most important factor among all of the purposes collected. Our findings also revealed that the educational use of Facebook is explained directly by its purposes of usage and indirectly by its adoption. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "106915eaac271c255aef1f1390577c64",
"text": "Parking is costly and limited in almost every major city in the world. Innovative parking systems for meeting near-term parking demand are needed. This paper proposes a novel, secure, and intelligent parking system (SmartParking) based on secured wireless network and sensor communication. From the point of users' view, SmartParking is a secure and intelligent parking service. The parking reservation is safe and privacy preserved. The parking navigation is convenient and efficient. The whole parking process will be a non-stop service. From the point of management's view, SmartParking is an intelligent parking system. The parking process can be modeled as birth-death stochastic process and the prediction of revenues can be made. Based on the prediction, new business promotion can be made, for example, on-sale prices and new parking fees. In SmartParking, new promotions can be published through wireless network. We address hardware/software architecture, implementations, and analytical models and results. The evaluation of this proposed system proves its efficiency.",
"title": ""
},
{
"docid": "70d7c838e7b5c4318e8764edb5a70555",
"text": "This research developed and tested a model of turnover contagion in which the job embeddedness and job search behaviors of coworkers influence employees’ decisions to quit. In a sample of 45 branches of a regional bank and 1,038 departments of a national hospitality firm, multilevel analysis revealed that coworkers’ job embeddedness and job search behaviors explain variance in individual “voluntary turnover” over and above that explained by other individual and group-level predictors. Broadly speaking, these results suggest that coworkers’ job embeddedness and job search behaviors play critical roles in explaining why people quit their jobs. Implications are discussed.",
"title": ""
},
{
"docid": "e34c102bf9c690e394ce7e373128be10",
"text": "We learn a joint model of sentence extraction and compression for multi-document summarization. Our model scores candidate summaries according to a combined linear model whose features factor over (1) the n-gram types in the summary and (2) the compressions used. We train the model using a marginbased objective whose loss captures end summary quality. Because of the exponentially large set of candidate summaries, we use a cutting-plane algorithm to incrementally detect and add active constraints efficiently. Inference in our model can be cast as an ILP and thereby solved in reasonable time; we also present a fast approximation scheme which achieves similar performance. Our jointly extracted and compressed summaries outperform both unlearned baselines and our learned extraction-only system on both ROUGE and Pyramid, without a drop in judged linguistic quality. We achieve the highest published ROUGE results to date on the TAC 2008 data set.",
"title": ""
},
{
"docid": "28720ce70b52adf92d8924143377ddd6",
"text": "This article describes an approach to building a cost-effective and research-grade visual-inertial (VI) odometry-aided vertical takeoff and landing (VTOL) platform. We utilize an off-the-shelf VI sensor, an onboard computer, and a quadrotor platform, all of which are factory calibrated and mass produced, thereby sharing similar hardware and sensor specifications [e.g., mass, dimensions, intrinsic and extrinsic of camera-inertial measurement unit (IMU) systems, and signal-to-noise ratio]. We then perform system calibration and identification, enabling the use of our VI odometry, multisensor fusion (MSF), and model predictive control (MPC) frameworks with off-the-shelf products. This approach partially circumvents the tedious parameter-tuning procedures required to build a full system. The complete system is extensively evaluated both indoors using a motioncapture system and outdoors using a laser tracker while performing hover and step responses and trajectory-following tasks in the presence of external wind disturbances. We achieve root-mean-square (rms) pose errors of 0.036 m with respect to reference hover trajectories. We also conduct relatively long distance (.180 m) experiments on a farm site, demonstrating a 0.82% drift error of the total flight distance. This article conveys the insights we acquired about the platform and sensor module and offers open-source code with tutorial documentation to the community.",
"title": ""
},
{
"docid": "7ddab8f1a5306062f4b835e7bf696e9e",
"text": "WGCNA begins with the understanding that the information captured by microarray experiments is far richer than a list of differentially expressed genes. Rather, microarray data are more completely represented by considering the relationships between measured transcripts, which can be assessed by pair-wise correlations between gene expression profiles. In most microarray data analyses, however, these relationships go essentially unexplored. WGCNA starts from the level of thousands of genes, identifies clinically interesting gene modules, and finally uses intramodular connectivity, gene significance (e.g. based on the correlation of a gene expression profile with a sample trait) to identify key genes in the disease pathways for further validation. WGCNA alleviates the multiple testing problem inherent in microarray data analysis. Instead of relating thousands of genes to a microarray sample trait, it focuses on the relationship between a few (typically less than 10) modules and the sample trait. Toward this end, it calculates the eigengene significance (correlation between sample trait and eigengene) and the corresponding p-value for each module. The module definition does not make use of a priori defined gene sets. Instead, modules are constructed from the expression data by using hierarchical clustering. Although it is advisable to relate the resulting modules to gene ontology information to assess their biological plausibility, it is not required. Because the modules may correspond to biological pathways, focusing the analysis on intramodular hub genes (or the module eigengenes) amounts to a biologically motivated data reduction scheme. Because the expression profiles of intramodular hub genes are highly correlated, typically dozens of candidate biomarkers result. Although these candidates are statistically equivalent, they may differ in terms of biological plausibility or clinical utility. Gene ontology information can be useful for further prioritizing intramodular hub genes. Examples of biological studies that show the importance of intramodular hub genes can be found reported in [4, 1, 2, 3, 5]. A flow chart of a typical network analysis is shown in Fig. 1. Below we present a short glossary of important network-related terms.",
"title": ""
},
{
"docid": "8f227f66fc7c86c19edae8036c571579",
"text": "Traditionally, the most commonly used source of bibliometric data is Thomson ISI Web of Knowledge, in particular the Web of Science and the Journal Citation Reports (JCR), which provide the yearly Journal Impact Factors (JIF). This paper presents an alternative source of data (Google Scholar, GS) as well as 3 alternatives to the JIF to assess journal impact (h-index, g-index and the number of citations per paper). Because of its broader range of data sources, the use of GS generally results in more comprehensive citation coverage in the area of management and international business. The use of GS particularly benefits academics publishing in sources that are not (well) covered in ISI. Among these are books, conference papers, non-US journals, and in general journals in the field of strategy and international business. The 3 alternative GS-based metrics showed strong correlations with the traditional JIF. As such, they provide academics and universities committed to JIFs with a good alternative for journals that are not ISI-indexed. However, we argue that these metrics provide additional advantages over the JIF and that the free availability of GS allows for a democratization of citation analysis as it provides every academic access to citation data regardless of their institution’s financial means.",
"title": ""
},
{
"docid": "bbee52ebe65b2f7b8d0356a3fbdb80bf",
"text": "Science Study Book Corpus Document Filter [...] enters a d orbital. The valence electrons (those added after the last noble gas configuration) in these elements include the ns and (n \\u2013 1) d electrons. The official IUPAC definition of transition elements specifies those with partially filled d orbitals. Thus, the elements with completely filled orbitals (Zn, Cd, Hg, as well as Cu, Ag, and Au in Figure 6.30) are not technically transition elements. However, the term is frequently used to refer to the entire d block (colored yellow in Figure 6.30), and we will adopt this usage in this textbook. Inner transition elements are metallic elements in which the last electron added occupies an f orbital.",
"title": ""
},
{
"docid": "409d104fa3e992ac72c65b004beaa963",
"text": "The 19-item Body-Image Questionnaire, developed by our team and first published in this journal in 1987 by Bruchon-Schweitzer, was administered to 1,222 male and female French subjects. A principal component analysis of their responses yielded an axis we interpreted as a general Body Satisfaction dimension. The four-factor structure observed in 1987 was not replicated. Body Satisfaction was associated with sex, health, and with current and future emotional adjustment.",
"title": ""
},
{
"docid": "f7dbb8adec55a4c52563194ecb6f3e8a",
"text": "The emotion of gratitude is thought to have social effects, but empirical studies of such effects have focused largely on the repaying of kind gestures. The current research focused on the relational antecedents of gratitude and its implications for relationship formation. The authors examined the role of naturally occurring gratitude in college sororities during a week of gift-giving from older members to new members. New members recorded reactions to benefits received during the week. At the end of the week and 1 month later, the new and old members rated their interactions and their relationships. Perceptions of benefactor responsiveness predicted gratitude for benefits, and gratitude during the week predicted future relationship outcomes. Gratitude may function to promote relationship formation and maintenance.",
"title": ""
},
{
"docid": "2e5a51176d1c0ab5394bb6a2b3034211",
"text": "School transport is used by millions of children worldwide. However, not a substantial effort is done in order to improve the existing school transport systems. This paper presents the development of an IoT based scholar bus monitoring system. The development of new telematics technologies has enabled the development of various Intelligent Transport Systems. However, these are not presented as ITS services to end users. This paper presents the development of an IoT based scholar bus monitoring system that through localization and speed sensors will allow many stakeholders such as parents, the goverment, the school and many other authorities to keep realtime track of the scholar bus behavior, resulting in a better controlled scholar bus.",
"title": ""
},
{
"docid": "61953281f4b568ad15e1f62be9d68070",
"text": "Most of the effort in today’s digital forensics community lies in the retrieval and analysis of existing information from computing systems. Little is being done to increase the quantity and quality of the forensic information on today’s computing systems. In this paper we pose the question of what kind of information is desired on a system by a forensic investigator. We give an overview of the information that exists on current systems and discuss its shortcomings. We then examine the role that file system metadata plays in digital forensics and analyze what kind of information is desirable for different types of forensic investigations, how feasible it is to obtain it, and discuss issues about storing the information.",
"title": ""
},
{
"docid": "2f7edc539bc61f8fc07bc6f5f8e496e0",
"text": "We investigate the contextual multi-armed bandit problem in an adversarial setting and introduce an online algorithm that asymptotically achieves the performance of the best contextual bandit arm selection strategy under certain conditions. We show that our algorithm is highly efficient and provides significantly improved performance with a guaranteed performance upper bound in a strong mathematical sense. We have no statistical assumptions on the context vectors and the loss of the bandit arms, hence our results are guaranteed to hold even in adversarial environments. We use a tree notion in order to partition the space of context vectors in a nested structure. Using this tree, we construct a large class of context dependent bandit arm selection strategies and adaptively combine them to achieve the performance of the best strategy. We use the hierarchical nature of introduced tree to implement this combination with a significantly low computational complexity, thus our algorithm can be efficiently used in applications involving big data. Through extensive set of experiments involving synthetic and real data, we demonstrate significant performance gains achieved by the proposed algorithm with respect to the state-of-the-art adversarial bandit algorithms.",
"title": ""
},
{
"docid": "198944af240d732b6fadcee273c1ba18",
"text": "This paper presents a fast and energy-efficient current mirror based level shifter with wide shifting range from sub-threshold voltage up to I/O voltage. Small delay and low power consumption are achieved by addressing the non-full output swing and charge sharing issues in the level shifter from [4]. The measurement results show that the proposed level shifter can convert from 0.21V up to 3.3V with significantly improved delay and power consumption over the existing level shifters. Compared with [4], the maximum reduction of delay, switching energy and leakage power are 3X, 19X, 29X respectively when converting 0.3V to a higher voltage between 0.6V and 3.3V.",
"title": ""
},
{
"docid": "c221568e2ed4d6192ab04119046c4884",
"text": "An efficient Ultra-Wideband (UWB) Frequency Selective Surface (FSS) is presented to mitigate the potential harmful effects of Electromagnetic Interference (EMI) caused by the radiations emitted by radio devices. The proposed design consists of circular and square elements printed on the opposite surfaces of FR4 substrate of 3.2 mm thickness. It ensures better angular stability by up to 600, bandwidth has been significantly enhanced by up to 16. 21 GHz to provide effective shielding against X-, Ka- and K-bands. While signal attenuation has also been improved remarkably in the desired band compared to the results presented in the latest research. Theoretical results are presented for TE and TM polarization for normal and oblique angles of incidence.",
"title": ""
},
{
"docid": "8143d59b02198a634c15d9f484f37d56",
"text": "The manufacturing industry is faced with strong competition making the companies’ knowledge resources and their systematic management a critical success factor. Yet, existing concepts for the management of process knowledge in manufacturing are characterized by major shortcomings. Particularly, they are either exclusively based on structured knowledge, e. g., formal rules, or on unstructured knowledge, such as documents, and they focus on isolated aspects of manufacturing processes. To address these issues, we present the Manufacturing Knowledge Repository, a holistic repository that consolidates structured and unstructured process knowledge to facilitate knowledge management and process optimization in manufacturing. First, we define requirements, especially the types of knowledge to be handled, e. g., data mining models and text documents. On this basis, we develop a conceptual repository data model associating knowledge items and process components such as machines and process steps. Furthermore, we discuss implementation issues including storage architecture variants and finally present both an evaluation of the data model and a proof of concept based on a prototypical implementation in a case example.",
"title": ""
},
{
"docid": "f119ffed641d2403dbcefad70a0669ac",
"text": "The fast growing market of mobile device adoption and cloud computing has led to exploitation of mobile devices utilizing cloud services. One major challenge facing the usage of mobile devices in the cloud environment is mobile synchronization to the cloud, e.g., synchronizing contacts, text messages, images, and videos. Owing to the expected high volume of traffic and high time complexity required for synchronization, an appropriate synchronization algorithm needs to be developed. Delta synchronization is one method of synchronizing compressed files that requires uploading the whole file, even when no changes were made or if it was only partially changed. In the present study, we proposed an algorithm, based on Delta synchronization, to solve the problem of synchronizing compressed files under various forms of modification (e.g., not modified, partially modified, or completely modified). To measure the efficiency of our proposed algorithm, we compared it to the Dropbox application algorithm. The results demonstrated that our algorithm outperformed the regular Dropbox synchronization mechanism by reducing the synchronization time, cost, and traffic load between clients and the cloud service provider.",
"title": ""
}
] |
scidocsrr
|
ed85393d34027c6aa4816c7d3e47528b
|
On Identifying Strongly Connected Components in Parallel
|
[
{
"docid": "734ca5ac095cc8339056fede2a642909",
"text": "The value of depth-first search or \"bacltracking\" as a technique for solving problems is illustrated by two examples. An improved version of an algorithm for finding the strongly connected components of a directed graph and ar algorithm for finding the biconnected components of an undirect graph are presented. The space and time requirements of both algorithms are bounded by k1V + k2E dk for some constants kl, k2, and ka, where Vis the number of vertices and E is the number of edges of the graph being examined.",
"title": ""
}
] |
[
{
"docid": "40e55e77a59e3ed63ae0a86b0c832f32",
"text": "Decision tree is an important method for both induction research and data mining, which is mainly used for model classification and prediction. ID3 algorithm is the most widely used algorithm in the decision tree so far. Through illustrating on the basic ideas of decision tree in data mining, in this paper, the shortcoming of ID3's inclining to choose attributes with many values is discussed, and then a new decision tree algorithm combining ID3 and Association Function(AF) is presented. The experiment results show that the proposed algorithm can overcome ID3's shortcoming effectively and get more reasonable and effective rules",
"title": ""
},
{
"docid": "88bdaa1ee78dd24f562e632cdb5ed396",
"text": "We present a novel paraphrase fragment pair extraction method that uses a monolingual comparable corpus containing different articles about the same topics or events. The procedure consists of document pair extraction, sentence pair extraction, and fragment pair extraction. At each stage, we evaluate the intermediate results manually, and tune the later stages accordingly. With this minimally supervised approach, we achieve 62% of accuracy on the paraphrase fragment pairs we collected and 67% extracted from the MSR corpus. The results look promising, given the minimal supervision of the approach, which can be further scaled up.",
"title": ""
},
{
"docid": "09c73c2a1eb6c7f12af022fb3accb306",
"text": "Linguistics studies have shown that action verbs often denote some Change of State (CoS) as the result of an action. However, the causality of action verbs and its potential connection with the physical world has not been systematically explored. To address this limitation, this paper presents a study on physical causality of action verbs and their implied changes in the physical world. We first conducted a crowdsourcing experiment and identified eighteen categories of physical causality for action verbs. For a subset of these categories, we then defined a set of detectors that detect the corresponding change from visual perception of the physical environment. We further incorporated physical causality modeling and state detection in grounded language understanding. Our empirical studies have demonstrated the effectiveness of causality modeling in grounding language to perception.",
"title": ""
},
{
"docid": "a5d100fd83620d9cc868a33ab6367be2",
"text": "Identifying the lineage path of neural cells is critical for understanding the development of brain. Accurate neural cell detection is a crucial step to obtain reliable delineation of cell lineage. To solve this task, in this paper we present an efficient neural cell detection method based on SSD (single shot multibox detector) neural network model. Our method adapts the original SSD architecture and removes the unnecessary blocks, leading to a light-weight model. Moreover, we formulate the cell detection as a binary regression problem, which makes our model much simpler. Experimental results demonstrate that, with only a small training set, our method is able to accurately capture the neural cells under severe shape deformation in a fast way.",
"title": ""
},
{
"docid": "2af262d6dda0e4de4abbc593a828326a",
"text": "We investigate strategies for selection of databases and instances for training cross-corpus emotion recognition systems, that is, systems that generalize across different labelling concepts, languages and interaction scenarios. We propose objective measures for prototypicality based on distances in a large space of brute-forced acoustic features and show their relation to the expected performance in cross-corpus testing. We perform extensive evaluation on eight commonly used corpora of emotional speech reaching from acted to fully natural emotion and limited phonetic content to conversational speech. In the result, selecting prototypical training instances by the proposed criterion can deliver a gain of up to 7.5 % unweighted accuracy in cross-corpus arousal recognition, and there is a correlation of .571 between the proposed prototypicality measure of databases and the expected unweighted accuracy in cross-corpus testing by Support Vector Machines.",
"title": ""
},
{
"docid": "e68df5caea19c1df5d29f048be5ef396",
"text": "Ontology alignment is regarded as the most perspective way to achieve semantic interoperability among heterogeneous data. The majority of state of art ontology alignment systems used one or more string similarity metrics, while the performance of these metrics were not given much attention. In this paper we first analyze naming variations in competing ontologies, then we evaluate a wide range of string similarity metrics, from the experimental result we can get some heuristic strategies to achieve better alignment results with regard to effectiveness and efficiency.",
"title": ""
},
{
"docid": "b1f4de8f708d098a0b30217c765ef82c",
"text": "This paper reports the results of extensive numerical studies related to spectral properties of the Laplacian and the scattering matrix for planar domains (called billiards). There is a close connection between eigenvalues of the billiard Laplacian and the scattering phases, basically that every energy at which a scattering phase is 2π corresponds to an eigenenergy of the Laplacian. Interesting phenomena appear when the shape of the domain does not allow an extension of the eigenfunction to the exterior. In this paper these phenomena are studied and illustrated from several points of view. DIETZ, Barbara, et al. Inside-outside duality for planar billiards: A numerical study. Physical Review. E, 1995, vol. 51, no. 5, p. 4222-4231 DOI : 10.1103/PhysRevE.51.4222",
"title": ""
},
{
"docid": "471f4399e42aa0b00effac824a309ad6",
"text": "Resource management in Cloud Computing has been dominated by system-level virtual machines to enable the management of resources using a coarse grained approach, largely in a manner independent from the applications running on these infrastructures. However, in such environments, although different types of applications can be running, the resources are delivered equally to each one, missing the opportunity to manage the available resources in a more efficient and application driven way. So, as more applications target managed runtimes, high level virtualization is a relevant abstraction layer that has not been properly explored to enhance resource usage, control, and effectiveness. We propose a VM economics model to manage cloud infrastructures, governed by a quality-of-execution (QoE) metric and implemented by an extended virtual machine. The Adaptive and Resource-Aware Java Virtual Machine (ARA-JVM) is a cluster-enabled virtual execution environment with the ability to monitor base mechanisms (e.g. thread cheduling, garbage collection, memory or network consumptions) to assess application's performance and reconfigure these mechanisms in runtime according to previously defined resource allocation policies. Reconfiguration is driven by incremental gains in quality-of-execution (QoE), used by the VM economics model to balance relative resource savings and perceived performance degradation. Our work in progress, aims to allow cloud providers to exchange resource slices among virtual machines, continually addressing where those resources are required, while being able to determine where the reduction will be more economically effective, i.e., will contribute in lesser extent to performance degradation.",
"title": ""
},
{
"docid": "d29c4e8598bbe2406ae314402f200f41",
"text": "A big step forward to improve power system monitoring and performance, continued load growth without a corresponding increase in transmission resources has resulted in reduced operational margins for many power systems worldwide and has led to operation of power systems closer to their stability limits and to power exchange in new patterns. These issues, as well as the on-going worldwide trend towards deregulation of the entire industry on the one hand and the increased need for accurate and better network monitoring on the other hand, force power utilities exposed to this pressure to demand new solutions for wide area monitoring, protection and control. Wide-area monitoring, protection, and control require communicating the specific-node information to a remote station but all information should be time synchronized so that to neutralize the time difference between information. It gives a complete simultaneous snap shot of the power system. The conventional system is not able to satisfy the time-synchronized requirement of power system. Phasor Measurement Unit (PMU) is enabler of time-synchronized measurement, it communicate the synchronized local information to remote station.",
"title": ""
},
{
"docid": "c0b71e1120a65af5b71935bd4daa88fc",
"text": "In a last few decades, development in power electronics systems has created its necessity in industrial and domestic applications like electric drives, UPS, solar and wind power conversion and many more. This paper presents the design, simulation, analysis and fabrication of a three phase, two-level inverter. The Space Vector Pulse Width Modulation (SVPWM) technique is used for the generation of gating signals for the three phase inverter. The proposed work is about real time embedded code generation technique that can be implemented using any microprocessor or microcontroller board of choice. The proposed technique reduces the analogue circuitry and eliminates the need of coding for generation of pulses, thereby making it simple and easy to implement. Control structure of SVPWM is simulated in MATLAB Simulink environment for analysis of different parameters of inverter. Comparative analysis of simulation results and hardware results is presented which shows that embedded code generation technique is very reliable and accurate.",
"title": ""
},
{
"docid": "f7366a6a67eb032a1080e000b687929f",
"text": "Internet of things (IoT) is intensely gaining reputation due to its necessity and efficiency in the computer realm. The support of wireless connectivity as well as the emergence of gadgets alleviates its usage essentially in governing systems in various fields. Though these systems are ubiquitous, pervasive and seamless, an issue concerning consumers’ privacy remains debatable. This is most evident in the health sector, as there is an immaculate rise in terms of awareness amongst patients where data privacy is concerned. In this paper, we propose a framework modelling the privacy requirements for IoT-based health applications. We have reviewed several privacy frameworks to derive at the essential principles required to develop privacy-aware IoT health applications. The proposed framework presents important privacy requirements to be addressed in the development of novel IoT health applications.",
"title": ""
},
{
"docid": "b44600830a6aacd0a1b7ec199cba5859",
"text": "Existing e-service quality scales mainly focus on goal-oriented e-shopping behavior excluding hedonic quality aspects. As a consequence, these scales do not fully cover all aspects of consumer's quality evaluation. In order to integrate both utilitarian and hedonic e-service quality elements, we apply a transaction process model to electronic service encounters. Based on this general framework capturing all stages of the electronic service delivery process, we develop a transaction process-based scale for measuring service quality (eTransQual). After conducting exploratory and confirmatory factor analysis, we identify five discriminant quality dimensions: functionality/design, enjoyment, process, reliability and responsiveness. All extracted dimensions of eTransQual show a significant positive impact on important outcome variables like perceived value and customer satisfaction. Moreover, enjoyment is a dominant factor in influencing both relationship duration and repurchase intention as major drivers of customer lifetime value. As a result, we present conceptual and empirical evidence for the need to integrate both utilitarian and hedonic e-service quality elements into one measurement scale. © 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "864d97df4021751abe0aa60964690f9b",
"text": "Due to the increasing deployment of Deep Neural Networks (DNNs) in real-world security-critical domains including autonomous vehicles and collision avoidance systems, formally checking security properties of DNNs, especially under different attacker capabilities, is becoming crucial. Most existing security testing techniques for DNNs try to find adversarial examples without providing any formal security guarantees about the non-existence of such adversarial examples. Recently, several projects have used different types of Satisfiability Modulo Theory (SMT) solvers to formally check security properties of DNNs. However, all of these approaches are limited by the high overhead caused by the solver. In this paper, we present a new direction for formally checking security properties of DNNs without using SMT solvers. Instead, we leverage interval arithmetic to compute rigorous bounds on the DNN outputs. Our approach, unlike existing solver-based approaches, is easily parallelizable. We further present symbolic interval analysis along with several other optimizations to minimize overestimations of output bounds. We design, implement, and evaluate our approach as part of ReluVal, a system for formally checking security properties of Relu-based DNNs. Our extensive empirical results show that ReluVal outperforms Reluplex, a stateof-the-art solver-based system, by 200 times on average. On a single 8-core machine without GPUs, within 4 hours, ReluVal is able to verify a security property that Reluplex deemed inconclusive due to timeout after running for more than 5 days. Our experiments demonstrate that symbolic interval analysis is a promising new direction towards rigorously analyzing different security properties of DNNs.",
"title": ""
},
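Editor's note: the ReluVal abstract above relies on interval arithmetic to bound DNN outputs without an SMT solver. The sketch below shows plain (non-symbolic) interval bound propagation through fully connected ReLU layers with NumPy; ReluVal's symbolic intervals and iterative input splitting are not reproduced, and the `(W, b)` layer format is an assumption.

```python
import numpy as np

def interval_forward(layers, lo, hi):
    """Propagate an input box [lo, hi] through (W, b) layers with ReLU between them.

    layers: list of (W, b) tuples; ReLU is applied after every layer except the last.
    Returns elementwise lower/upper bounds on the network output.
    """
    for i, (W, b) in enumerate(layers):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lo = W_pos @ lo + W_neg @ hi + b   # smallest achievable pre-activation
        new_hi = W_pos @ hi + W_neg @ lo + b   # largest achievable pre-activation
        if i < len(layers) - 1:                # ReLU on hidden layers only
            new_lo, new_hi = np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)
        lo, hi = new_lo, new_hi
    return lo, hi
```

If the returned bounds already satisfy the property of interest (for example, the correct class logit's lower bound exceeds every other logit's upper bound), the property is verified for the whole input box; otherwise a tool like ReluVal would refine by splitting the box.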
{
"docid": "8be48759b1ae6b7d65ff61ebc43dfee6",
"text": "In this study, we introduce a household object dataset for recognition and manipulation tasks, focusing on commonly available objects in order to facilitate sharing of applications and algorithms. The core information available for each object consists of a 3D surface model annotated with a large set of possible grasp points, pre-computed using a grasp simulator. The dataset is an integral part of a complete Robot Operating System (ROS) architecture for performing pick and place tasks. We present our current applications using this data, and discuss possible extensions and future directions for shared datasets for robot operation in unstructured settings. I. DATASETS FOR ROBOTICS RESEARCH Recent years have seen a growing consensus that one of the keys to robotic applications in unstructured environments lies in collaboration and reusable functionality. An immediate result has been the emergence of a number of platforms and frameworks for sharing operational “building blocks,” usually in the form of code modules, with functionality ranging from low-level hardware drivers to complex algorithms such as path or motion planners. By using a set of now well-established guidelines, such as stable documented interfaces and standardized communication protocols, this type of collaboration has accelerated development towards complex applications. However, a similar set of methods for sharing and reusing data has been slower to emerge. In this paper we describe our effort in producing and releasing to the community a complete architecture for performing pick-and-place tasks in unstructured (or semistructured) environments. There are two key components to this architecture: the algorithms themselves, developed using the Robot Operating System (ROS) framework, and the knowledge base that they operate on. In our case, the algorithms provide abilities such as object segmentation and recognition, motion planning with collision avoidance, grasp execution using tactile feedback, etc. The knowledge base, which is the main focus of this study, contains relevant information for object recognition and grasping for a large set of common household objects. Some of the key aspects of combining computational tools with the data that they operate on are: • other researchers will have the option of directly using our dataset over the Internet (in an open, read-only fashion), or downloading and customizing it for their own applications; • defining a stable interface to the dataset component of the release will allow other researchers to provide their own modified and/or extended versions of the data to †Willow Garage Inc., Menlo Park, CA. Email: {matei, bradski, hsiao, pbrook}@willowgarage.com ∗University of Washington, Seattle, WA. the community, knowing that it will be directly usable by anyone running the algorithmic component; • the data and algorithm components can evolve together, like any other components of a large software distribution, with well-defined and documented interfaces, version numbering and control, etc. In particular, our current dataset is available in the form of a relational database, using the SQL standard. This choice provides additional benefits, including optimized relational queries, both for using the data on-line and managing it off-line, and low-level serialization functionality for most major languages. 
We believe that these features can help foster collaboration as well as provide useful tools for benchmarking as we advance towards increasingly complex behavior in unstructured environments. There have been previous examples of datasets released in the research community (as described for example in [3], [7], [13] to name only a few), used either for benchmarking or for data-driven algorithms. However, few of these have been accompanied by the relevant algorithms, or have offered a well-defined interface to be used for extensions. The database component of our architecture was directly inspired by the Columbia Grasp Database (CGDB) [5], [6], released together with processing software integrated with the GraspIt! simulator [9]. The CGDB contains object shape and grasp information for a very large (n = 7,256) set of general shapes from the Princeton Shape Benchmark [12]. The dataset presented here is smaller in scope (n = 180), referring only to actual graspable objects from the real world, and is integrated with a complete manipulation pipeline on the PR2 robot.",
"title": ""
},
{
"docid": "8b2c83868c16536910e7665998b2d87e",
"text": "Nowadays organizations turn to any standard procedure to gain a competitive advantage. If sustainable, competitive advantage can bring about benefit to the organization. The aim of the present study was to introduce competitive advantage as well as to assess the impacts of the balanced scorecard as a means to measure the performance of organizations. The population under study included employees of organizations affiliated to the Social Security Department in North Khorasan Province, of whom a total number of 120 employees were selected as the participants in the research sample. Two researcher-made questionnaires with a 5-point Likert scale were used to measure the competitive advantage and the balanced scorecard. Besides, Cronbach's alpha coefficient was used to measure the reliability of the instruments that was equal to 0.74 and 0.79 for competitive advantage and the balanced scorecard, respectively. The data analysis was performed using the structural equation modeling and the results indicated the significant and positive impact of the implementation of the balanced scorecard on the sustainable competitive advantage. © 2015 AESS Publications. All Rights Reserved.",
"title": ""
},
{
"docid": "e405462fd8764769477e5e16950f1fe8",
"text": "This paper looks briefly at the case study of psychopathic sexual serial killer Frederick Walter Stephen West. His criminal behaviour and other behavioural problems are often assumed to be rooted in the home, inadequate discipline, or poor role models. However, based on research arguments presented in this paper, it is obvious that the answer to what significantly contributed to the development of this complex distorted personality and subsequent violent behaviour is far more multifaceted. It seems to be a result of a highly complex interaction of biological, psychological and sociological factors.",
"title": ""
},
{
"docid": "3c8b9a015157a7dd7ce4a6b0b35847d9",
"text": "While more and more people are relying on social media for news feeds, serious news consumers still resort to well-established news outlets for more accurate and in-depth reporting and analyses. They may also look for reports on related events that have happened before and other background information in order to better understand the event being reported. Many news outlets already create sidebars and embed hyperlinks to help news readers, often with manual efforts. Technologies in IR and NLP already exist to support those features, but standard test collections do not address the tasks of modern news consumption. To help advance such technologies and transfer them to news reporting, NIST, in partnership with the Washington Post, is starting a new TREC track in 2018 known as the News Track.",
"title": ""
},
{
"docid": "73f605a48d0494d0007f242cee5c67ff",
"text": "BACKGROUND\nLarge comparative studies that have evaluated long-term functional outcome of operatively treated ankle fractures are lacking. This study was performed to analyse the influence of several combinations of malleolar fractures on long-term functional outcome and development of osteoarthritis.\n\n\nMETHODS\nRetrospective cohort-study on operated (1995-2007) malleolar fractures. Results were assessed with use of the AAOS- and AOFAS-questionnaires, VAS-pain score, dorsiflexion restriction (range of motion) and osteoarthritis. Categorisation was determined using the number of malleoli involved.\n\n\nRESULTS\n243 participants with a mean follow-up of 9.6 years were included. Significant differences for all outcomes were found between unimalleolar (isolated fibular) and bimalleolar (a combination of fibular and medial) fractures (AOFAS 97 vs 91, p = 0.035; AAOS 97 vs 90, p = 0.026; dorsiflexion restriction 2.8° vs 6.7°, p = 0.003). Outcomes after fibular fractures with an additional posterior fragment were similar to isolated fibular fractures. However, significant differences were found between unimalleolar and trimalleolar (a combination of lateral, medial and posterior) fractures (AOFAS 97 vs 88, p < 0.001; AAOS 97 vs 90, p = 0.003; VAS-pain 1.1 vs 2.3 p < 0.001; dorsiflexion restriction 2.9° vs 6.9°, p < 0.001). There was no significant difference in isolated fibular fractures with or without additional deltoid ligament injury. In addition, no functional differences were found between bimalleolar and trimalleolar fractures. Surprisingly, poor outcomes were found for isolated medial malleolar fractures. Development of osteoarthritis occurred mainly in trimalleolar fractures with a posterior fragment larger than 5 %.\n\n\nCONCLUSIONS\nThe results of our study show that long-term functional outcome is strongly associated to medial malleolar fractures, isolated or as part of bi- or trimalleolar fractures. More cases of osteoarthritis are found in trimalleolar fractures.",
"title": ""
},
{
"docid": "fe126ffb1d1868539bf0ecae638afb38",
"text": "Networks, also called graphs by mathematicians, provide a useful abstraction of the structure of many complex systems, ranging from social systems and computer networks to biological networks and the state spaces of physical systems. In the past decade there have been significant advances in experiments to determine the topological structure of networked systems, but there remain substantial challenges in extracting scientific understanding from the large quantities of data produced by the experiments. A variety of basic measures and metrics are available that can tell us about small-scale structure in networks, such as correlations, connections and recurrent patterns, but it is considerably more difficult to quantify structure on medium and large scales, to understand the ‘big picture’. Important progress has been made, however, within the past few years, a selection of which is reviewed here.",
"title": ""
}
] |
scidocsrr
|
b56abe7d62498573653d23a0b4ebea92
|
Multi-DOF Counterbalance Mechanism for a Service Robot Arm
|
[
{
"docid": "dbc09474868212acf3b29e49a6facbce",
"text": "In this paper, we propose a sophisticated design of human symbiotic robots that provide physical supports to the elderly such as attendant care with high-power and kitchen supports with dexterity while securing contact safety even if physical contact occurs with them. First of all, we made clear functional requirements for such a new generation robot, amounting to fifteen items to consolidate five significant functions such as “safety”, “friendliness”, “dexterity”, “high-power” and “mobility”. In addition, we set task scenes in daily life where support by robot is useful for old women living alone, in order to deduce specifications for the robot. Based on them, we successfully developed a new generation of human symbiotic robot, TWENDY-ONE that has a head, trunk, dual arms with a compact passive mechanism, anthropomorphic dual hands with mechanical softness in joints and skins and an omni-wheeled vehicle. Evaluation experiments focusing on attendant care and kitchen supports using TWENDY-ONE indicate that this new robot will be extremely useful to enhance quality of life for the elderly in the near future where human and robot co-exist.",
"title": ""
}
] |
[
{
"docid": "4487f3713062ef734ceab5c7f9ccc6e3",
"text": "In the analysis of machine learning models, it is often convenient to assume that the parameters are IID. This assumption is not satisfied when the parameters are updated through training processes such as SGD. A relaxation of the IID condition is a probabilistic symmetry known as exchangeability. We show the sense in which the weights in MLPs are exchangeable. This yields the result that in certain instances, the layer-wise kernel of fully-connected layers remains approximately constant during training. We identify a sharp change in the macroscopic behavior of networks as the covariance between weights changes from zero.",
"title": ""
},
{
"docid": "5fc76164af859604c5c2543bce017094",
"text": "We train and validate a semi-supervised, multi-task LSTM on 57,675 person-weeks of data from off-the-shelf wearable heart rate sensors, showing high accuracy at detecting multiple medical conditions, including diabetes (0.8451), high cholesterol (0.7441), high blood pressure (0.8086), and sleep apnea (0.8298). We compare two semi-supervised training methods, semi-supervised sequence learning and heuristic pretraining, and show they outperform hand-engineered biomarkers from the medical literature. We believe our work suggests a new approach to patient risk stratification based on cardiovascular risk scores derived from popular wearables such as Fitbit, Apple Watch, or Android Wear.",
"title": ""
},
{
"docid": "db31a8887bfc1b24c2d2c2177d4ef519",
"text": "The equilibrium microstructure of a fluid may only be described exactly in terms of a complete set of n-body atomic distribution functions, where n is 1, 2, 3 , . . . , N, and N is the total number of particles in the system. The higher order functions, i. e. n > 2, are complex and practically inaccessible but con siderable qualitative information can already be derived from studies of the mean radial occupation function n(r) defined as the average number of atoms in a sphere of radius r centred on a particular atom. The function for a perfect gas of non-inter acting particles is",
"title": ""
},
{
"docid": "c43de372dac79cf922f560450545e5b3",
"text": "Unsupervised learning and supervised learning are key research topics in deep learning. However, as high-capacity supervised neural networks trained with a large amount of labels have achieved remarkable success in many computer vision tasks, the availability of large-scale labeled images reduced the significance of unsupervised learning. Inspired by the recent trend toward revisiting the importance of unsupervised learning, we investigate joint supervised and unsupervised learning in a large-scale setting by augmenting existing neural networks with decoding pathways for reconstruction. First, we demonstrate that the intermediate activations of pretrained large-scale classification networks preserve almost all the information of input images except a portion of local spatial details. Then, by end-to-end training of the entire augmented architecture with the reconstructive objective, we show improvement of the network performance for supervised tasks. We evaluate several variants of autoencoders, including the recently proposed “what-where\" autoencoder that uses the encoder pooling switches, to study the importance of the architecture design. Taking the 16-layer VGGNet trained under the ImageNet ILSVRC 2012 protocol as a strong baseline for image classification, our methods improve the validation-set accuracy by a noticeable margin.",
"title": ""
},
{
"docid": "c4f706ff9ceb514e101641a816ba7662",
"text": "Open set recognition problems exist in many domains. For example in security, new malware classes emerge regularly; therefore malware classication systems need to identify instances from unknown classes in addition to discriminating between known classes. In this paper we present a neural network based representation for addressing the open set recognition problem. In this representation instances from the same class are close to each other while instances from dierent classes are further apart, resulting in statistically signicant improvement when compared to other approaches on three datasets from two dierent domains.",
"title": ""
},
{
"docid": "b27224825bb28b9b8d0eea37f8900d42",
"text": "The use of Convolutional Neural Networks (CNN) in natural im age classification systems has produced very impressive results. Combined wit h the inherent nature of medical images that make them ideal for deep-learning, fu rther application of such systems to medical image classification holds much prom ise. However, the usefulness and potential impact of such a system can be compl etely negated if it does not reach a target accuracy. In this paper, we present a s tudy on determining the optimum size of the training data set necessary to achiev e igh classification accuracy with low variance in medical image classification s ystems. The CNN was applied to classify axial Computed Tomography (CT) imag es into six anatomical classes. We trained the CNN using six different sizes of training data set ( 5, 10, 20, 50, 100, and200) and then tested the resulting system with a total of 6000 CT images. All images were acquired from the Massachusetts G eneral Hospital (MGH) Picture Archiving and Communication System (PACS). U sing this data, we employ the learning curve approach to predict classificat ion ccuracy at a given training sample size. Our research will present a general me thodology for determining the training data set size necessary to achieve a cert in target classification accuracy that can be easily applied to other problems within such systems.",
"title": ""
},
{
"docid": "5e240ad1d257a90c0ca414ce8e7e0949",
"text": "Improving Cloud Security using Secure Enclaves by Jethro Gideon Beekman Doctor of Philosophy in Engineering – Electrical Engineering and Computer Sciences University of California, Berkeley Professor David Wagner, Chair Internet services can provide a wealth of functionality, yet their usage raises privacy, security and integrity concerns for users. This is caused by a lack of guarantees about what is happening on the server side. As a worst case scenario, the service might be subjected to an insider attack. This dissertation describes the unalterable secure service concept for trustworthy cloud computing. Secure services are a powerful abstraction that enables viewing the cloud as a true extension of local computing resources. Secure services combine the security benefits one gets locally with the manageability and availability of the distributed cloud. Secure services are implemented using secure enclaves. Remote attestation of the server is used to obtain guarantees about the programming of the service. This dissertation addresses concerns related to using secure enclaves such as providing data freshness and distributing identity information. Certificate Transparency is augmented to distribute information about which services exist and what they do. All combined, this creates a platform that allows legacy clients to obtain security guarantees about Internet services.",
"title": ""
},
{
"docid": "82af5212b43e8dfe6d54582de621d96c",
"text": "The use of multiple radar configurations can overcome some of the geometrical limitations that exist when obtaining radar images of a target using inverse synthetic aperture radar (ISAR) techniques. It is shown here how a particular bistatic configuration can produce three view angles and three ISAR images simultaneously. A new ISAR signal model is proposed and the applicability of employing existing monostatic ISAR techniques to bistatic configurations is analytically demonstrated. An analysis of the distortion introduced by the bistatic geometry to the ISAR image point spread function (PSF) is then carried out and the limits of the applicability of ISAR techniques (without the introduction of additional signal processing) are found and discussed. Simulations and proof of concept experimental data are also provided that support the theory.",
"title": ""
},
{
"docid": "d8f6f4bef57e26e9d2dc3684ea07a2f4",
"text": "Alzheimer's disease is a progressive neurodegenerative disease that typically manifests clinically as an isolated amnestic deficit that progresses to a characteristic dementia syndrome. Advances in neuroimaging research have enabled mapping of diverse molecular, functional, and structural aspects of Alzheimer's disease pathology in ever increasing temporal and regional detail. Accumulating evidence suggests that distinct types of imaging abnormalities related to Alzheimer's disease follow a consistent trajectory during pathogenesis of the disease, and that the first changes can be detected years before the disease manifests clinically. These findings have fuelled clinical interest in the use of specific imaging markers for Alzheimer's disease to predict future development of dementia in patients who are at risk. The potential clinical usefulness of single or multimodal imaging markers is being investigated in selected patient samples from clinical expert centres, but additional research is needed before these promising imaging markers can be successfully translated from research into clinical practice in routine care.",
"title": ""
},
{
"docid": "f2d8ee741a61b1f950508ac57b2aa379",
"text": "The concentrations of cellulose chemical markers, in oil, are influenced by various parameters due to the partition between the oil and the cellulose insulation. One major parameter is the oil temperature which is a function of the transformer load, ambient temperature and the type of cooling. To accurately follow the chemical markers concentration trends during all the transformer life, it is crucial to normalize the concentrations at a specific temperature. In this paper, we propose equations for the normalization of methanol, ethanol and 2-furfural at 20 °C. The proposed equations have been validated on some real power transformers.",
"title": ""
},
{
"docid": "b85112d759d9facedacb3935ce2d0de5",
"text": "Internet is one of the primary sources of Big Data. Rise of the social networking platforms are creating enormous amount of data in every second where human emotions are constantly expressed in real-time. The sentiment behind each post, comments, likes can be found using opinion mining. It is possible to determine business values from these objects and events if sentiment analysis is done on the huge amount of data. Here, we have chosen FOODBANK which is a very popular Facebook group in Bangladesh; to analyze sentiment of the data to find out their market values.",
"title": ""
},
{
"docid": "977efac2809f4dc455e1289ef54008b0",
"text": "A novel 3-D NAND flash memory device, VSAT (Vertical-Stacked-Array-Transistor), has successfully been achieved. The VSAT was realized through a cost-effective and straightforward process called PIPE (planarized-Integration-on-the-same-plane). The VSAT combined with PIPE forms a unique 3-D vertical integration method that may be exploited for ultra-high-density Flash memory chip and solid-state-drive (SSD) applications. The off-current level in the polysilicon-channel transistor dramatically decreases by five orders of magnitude by using an ultra-thin body of 20 nm thick and a double-gate-in-series structure. In addition, hydrogen annealing improves the subthreshold swing and the mobility of the polysilicon-channel transistor.",
"title": ""
},
{
"docid": "3a092c071129e2ffced1800f2b4d519c",
"text": "Human actions captured in video sequences are threedimensional signals characterizing visual appearance and motion dynamics. To learn action patterns, existing methods adopt Convolutional and/or Recurrent Neural Networks (CNNs and RNNs). CNN based methods are effective in learning spatial appearances, but are limited in modeling long-term motion dynamics. RNNs, especially Long Short- Term Memory (LSTM), are able to learn temporal motion dynamics. However, naively applying RNNs to video sequences in a convolutional manner implicitly assumes that motions in videos are stationary across different spatial locations. This assumption is valid for short-term motions but invalid when the duration of the motion is long.,,In this work, we propose Lattice-LSTM (L2STM), which extends LSTM by learning independent hidden state transitions of memory cells for individual spatial locations. This method effectively enhances the ability to model dynamics across time and addresses the non-stationary issue of long-term motion dynamics without significantly increasing the model complexity. Additionally, we introduce a novel multi-modal training procedure for training our network. Unlike traditional two-stream architectures which use RGB and optical flow information as input, our two-stream model leverages both modalities to jointly train both input gates and both forget gates in the network rather than treating the two streams as separate entities with no information about the other. We apply this end-to-end system to benchmark datasets (UCF-101 and HMDB-51) of human action recognition. Experiments show that on both datasets, our proposed method outperforms all existing ones that are based on LSTM and/or CNNs of similar model complexities.",
"title": ""
},
{
"docid": "ee1e2400ed5c944826747a8e616b18c1",
"text": "Metastasis remains the greatest challenge in the clinical management of cancer. Cell motility is a fundamental and ancient cellular behaviour that contributes to metastasis and is conserved in simple organisms. In this Review, we evaluate insights relevant to human cancer that are derived from the study of cell motility in non-mammalian model organisms. Dictyostelium discoideum, Caenorhabditis elegans, Drosophila melanogaster and Danio rerio permit direct observation of cells moving in complex native environments and lend themselves to large-scale genetic and pharmacological screening. We highlight insights derived from each of these organisms, including the detailed signalling network that governs chemotaxis towards chemokines; a novel mechanism of basement membrane invasion; the positive role of E-cadherin in collective direction-sensing; the identification and optimization of kinase inhibitors for metastatic thyroid cancer on the basis of work in flies; and the value of zebrafish for live imaging, especially of vascular remodelling and interactions between tumour cells and host tissues. While the motility of tumour cells and certain host cells promotes metastatic spread, the motility of tumour-reactive T cells likely increases their antitumour effects. Therefore, it is important to elucidate the mechanisms underlying all types of cell motility, with the ultimate goal of identifying combination therapies that will increase the motility of beneficial cells and block the spread of harmful cells.",
"title": ""
},
{
"docid": "812c1713c1405c4925c6c6057624465b",
"text": "Fuel cell hybrid tramway has gained increasing attention recently and energy management strategy (EMS) is one of its key technologies. A hybrid tramway power system consisting of proton exchange membrane fuel cell (PEMFC) and battery is designed in the MATLAB /SIMULINK software as basic for the energy management strategy research. An equivalent consumption minimization strategy (ECMS) for hybrid tramway is proposed and embedded into the aforementioned hybrid model. In order to evaluate the proposed energy management, a real tramway driving cycle is adopted to simulate in RT-LAB platform. The simulation results prove the effectiveness of the proposed EMS.",
"title": ""
},
{
"docid": "cfd60f60a0a0bcc16ede57c7cee4fd23",
"text": "A compact planar multiband four-unit multiple-input multiple-output (MIMO) antenna system with high isolation is developed. At VSWR ≤ 2.75, the proposed MIMO antenna operates in the frequency range of LTE Band-1, 2, 3, 7, 40 and WLAN 2.4 GHz band. A T-strip and dumbbell shaped slots are studied to mitigate mutual coupling effects. The measured worst case isolation is better that 15.3 dB and envelope correlation coefficient is less than 0.01. The received signals satisfy the equal power gain condition and radiation patterns confirm the pattern diversity to combat multipath fading effects. At 29 dB SNR, the achieved MIMO channel capacity is about 22.2 b/s/Hz. These results infer that the proposed MIMO antenna is an attractive candidate for 4G-LTE mobile phone applications.",
"title": ""
},
{
"docid": "93bc26aa1a020f178692f40f4542b691",
"text": "The \"Fast Fourier Transform\" has now been widely known for about a year. During that time it has had a major effect on several areas of computing, the most striking example being techniques of numerical convolution, which have been completely revolutionized. What exactly is the \"Fast Fourier Transform\"?",
"title": ""
},
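Editor's note: the abstract above discusses the Fast Fourier Transform and its impact on numerical convolution. For concreteness, a minimal recursive radix-2 Cooley-Tukey FFT sketch follows; it assumes the input length is a power of two and is meant purely for illustration (in practice one would use a library routine such as numpy.fft).

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])          # FFT of even-indexed samples
    odd = fft(x[1::2])           # FFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out
```

The divide-and-conquer recurrence gives the familiar O(n log n) cost, which is what made FFT-based convolution so much faster than the direct O(n^2) approach the abstract alludes to.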
{
"docid": "9988f6dc4a2241e2a9025fd7b76ef4ee",
"text": "In this paper we present a multimodal approach for the recognition of eight emotions that integrates information from facial expressions, body movement and gestures and speech. We trained and tested a model with a Bayesian classifier, using a multimodal corpus with eight emotions and ten subjects. First individual classifiers were trained for each modality. Then data were fused at the feature level and the decision level. Fusing multimodal data increased very much the recognition rates in comparison with the unimodal systems: the multimodal approach gave an improvement of more than 10% with respect to the most successful unimodal system. Further, the fusion performed at the feature level showed better results than the one performed at the decision level.",
"title": ""
}
] |
scidocsrr
|
1806f96192a12df943c552df15ea61e0
|
Wearable and Implantable Wireless Sensor Network Solutions for Healthcare Monitoring
|
[
{
"docid": "f921555856d856eef308af6e987c1fbb",
"text": "Wireless Body Area Networks (WBANs) provide efficient communication solutions to the ubiquitous healthcare systems. Health monitoring, telemedicine, military, interactive entertainment, and portable audio/video systems are some of the applications where WBANs can be used. The miniaturized sensors together with advance micro-electro-mechanical systems (MEMS) technology create a WBAN that continuously monitors the health condition of a patient. This paper presents a comprehensive discussion on the applications of WBANs in smart healthcare systems. We highlight a number of projects that enable WBANs to provide unobtrusive long-term healthcare monitoring with real-time updates to the health center. In addition, we list many potential medical applications of a WBAN including epileptic seizure warning, glucose monitoring, and cancer detection.",
"title": ""
}
] |
[
{
"docid": "e766e5a45936c53767898c591e6126f8",
"text": "Video completion is a computer vision technique to recover the missing values in video sequences by filling the unknown regions with the known information. In recent research, tensor completion, a generalization of matrix completion for higher order data, emerges as a new solution to estimate the missing information in video with the assumption that the video frames are homogenous and correlated. However, each video clip often stores the heterogeneous episodes and the correlations among all video frames are not high. Thus, the regular tenor completion methods are not suitable to recover the video missing values in practical applications. To solve this problem, we propose a novel spatiallytemporally consistent tensor completion method for recovering the video missing data. Instead of minimizing the average of the trace norms of all matrices unfolded along each mode of a tensor data, we introduce a new smoothness regularization along video time direction to utilize the temporal information between consecutive video frames. Meanwhile, we also minimize the trace norm of each individual video frame to employ the spatial correlations among pixels. Different to previous tensor completion approaches, our new method can keep the spatio-temporal consistency in video and do not assume the global correlation in video frames. Thus, the proposed method can be applied to the general and practical video completion applications. Our method shows promising results in all evaluations on both 3D biomedical image sequence and video benchmark data sets. Video completion is the process of filling in missing pixels or replacing undesirable pixels in a video. The missing values in a video can be caused by many situations, e.g., the natural noise in video capture equipment, the occlusion from the obstacles in environment, segmenting or removing interested objects from videos. Video completion is of great importance to many applications such as video repairing and editing, movie post-production (e.g., remove unwanted objects), etc. Missing information recovery in images is called inpaint∗To whom all correspondence should be addressed. This work was partially supported by US NSF IIS-1117965, IIS-1302675, IIS-1344152. Copyright c © 2014, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. ing, which is usually accomplished by inferring or guessing the missing information from the surrounding regions, i.e. the spatial information. Video completion can be considered as an extension of 2D image inpainting to 3D. Video completion uses the information from the past and the future frames to fill the pixels in the missing region, i.e. the spatiotemporal information, which has been getting increasing attention in recent years. In computer vision, an important application area of artificial intelligence, there are many video completion algorithms. The most representative approaches include video inpainting, analogous to image inpainting (Bertalmio, Bertozzi, and Sapiro 2001), motion layer video completion, which splits the video sequence into different motion layers and completes each motion layer separately (Shiratori et al. 2006), space-time video completion, which is based on texture synthesis and is good but slow (Wexler, Shechtman, and Irani 2004), and video repairing, which repairs static background with motion layers and repairs moving foreground using model alignment (Jia et al. 2004). 
Many video completion methods are less effective because the video is often treated as a set of independent 2D images. Although the temporal independence assumption simplifies the problem, losing temporal consistency in recovered pixels leads to the unsatisfactory performance. On the other hand, temporal information can improve the video completion results (Wexler, Shechtman, and Irani 2004; Matsushita et al. 2005), but to exploit it the computational speeds of most methods are significantly reduced. Thus, how to efficiently and effectively utilize both spatial and temporal information is a challenging problem in video completion. In most recent work, Liu et al. (Liu et al. 2013) estimated the missing data in video via tensor completion which was generalized from matrix completion methods. In these methods, the rank or rank approximation (trace norm) is used, as a powerful tool, to capture the global information. The tensor completion method (Liu et al. 2013) minimizes the trace norm of a tensor, i.e. the average of the trace norms of all matrices unfolded along each mode. Thus, it assumes the video frames are highly correlated in the temporal direction. If the video records homogenous episodes and all frames describe the similar information, this assumption has no problem. However, one video clip usually includes multiple different episodes and the frames from different episodes",
"title": ""
},
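Editor's note: the passage above centers on minimizing the trace (nuclear) norm of tensor unfoldings. The core building block of such solvers is singular value thresholding applied to a mode-n unfolding; the NumPy sketch below illustrates only that step, with the folding convention and the threshold `tau` assumed, and is not the complete completion algorithm of either cited paper.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the remaining axes."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(matrix, mode, shape):
    """Inverse of `unfold` for a tensor of the given shape."""
    full_shape = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape(full_shape), 0, mode)

def svd_threshold(tensor, mode, tau):
    """Shrink the singular values of the mode-n unfolding by tau (nuclear-norm prox)."""
    U, s, Vt = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return fold((U * s) @ Vt, mode, tensor.shape)
```

A tensor-completion solver typically alternates this shrinkage over all modes with a step that re-imposes the known (observed) entries, iterating until the recovered tensor stabilizes.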
{
"docid": "a2e2e49ba695f81eed05abaa9333b4f2",
"text": "This paper presents an automatic lesion segmentation method based on similarities between multichannel patches. A patch database is built using training images for which the label maps are known. For each patch in the testing image, k similar patches are retrieved from the database. The matching labels for these k patches are then combined to produce an initial segmentation map for the test case. Finally an iterative patch-based label refinement process based on the initial segmentation map is performed to ensure the spatial consistency of the detected lesions. The method was evaluated in experiments on multiple sclerosis (MS) lesion segmentation in magnetic resonance images (MRI) of the brain. An evaluation was done for each image in the MICCAI 2008 MS lesion segmentation challenge. Results are shown to compete with the state of the art in the challenge. We conclude that the proposed algorithm for segmentation of lesions provides a promising new approach for local segmentation and global detection in medical images.",
"title": ""
},
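Editor's note: the abstract above describes retrieving the k most similar labelled patches from a training database and fusing their labels. A toy nearest-neighbour label-fusion sketch is shown below using flattened patches and Euclidean distance; the paper's search structure, multichannel features and iterative refinement are not reproduced, and the inverse-distance weighting is an assumption.

```python
import numpy as np

def knn_label_fusion(query_patch, db_patches, db_labels, k=5, eps=1e-8):
    """Fuse the labels of the k database patches closest to `query_patch`.

    query_patch: 1-D feature vector; db_patches: (N, D) array; db_labels: (N,) in {0, 1}.
    Returns a soft label in [0, 1] (e.g. probability that the centre voxel is lesion).
    """
    dists = np.linalg.norm(db_patches - query_patch, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + eps)     # closer patches count more (assumed)
    return float(np.dot(weights, db_labels[nearest]) / weights.sum())
```

Running this for every voxel-centred patch of the test image yields the initial segmentation map that the paper then refines for spatial consistency.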
{
"docid": "6dfe8b18e3d825b2ecfa8e6b353bbb99",
"text": "In the last decade tremendous effort has been put in the study of the Apollonian circle packings. Given the great variety of mathematics it exhibits, this topic has attracted experts from different fields: number theory, homogeneous dynamics, expander graphs, group theory, to name a few. The principle investigator (PI) contributed to this program in his PhD studies. The scenery along the way formed the horizon of the PI at his early mathematical career. After his PhD studies, the PI has successfully applied tools and ideas from Apollonian circle packings to the studies of topics from various fields, and will continue this endeavor in his proposed research. The proposed problems are roughly divided into three categories: number theory, expander graphs, geometry. Each of which will be discussed in depth in later sections. Since Apollonian circle packing provides main inspirations for this proposal, let’s briefly review how it comes up and what has been done. We start with four mutually circles, with one circle bounding the other three. We can repeatedly inscribe more and more circles into curvilinear triangular gaps as illustrated in Figure 1, and we call the resultant set an Apollonian circle packing, which consists of infinitely many circles.",
"title": ""
},
{
"docid": "3f8860bc21f26b81b066f4c75b9390e1",
"text": "Adaptive filter algorithms are extensively use in active control applications and the availability of low cost powerful digital signal processor (DSP) platforms has opened the way for new applications and further research opportunities in e.g. the active control area. The field of active control demands a solid exposure to practical systems and DSP platforms for a comprehensive understanding of the theory involved. Traditional laboratory experiments prove to be insufficient to fulfill these demands and need to be complemented with more flexible and economic remotely controlled laboratories. The purpose of this thesis project is to implement a number of different adaptive control algorithms in the recently developed remotely controlled Virtual Instrument Systems in Reality (VISIR) ANC/DSP remote laboratory at Blekinge Institute of Technology and to evaluate the performance of these algorithms in the remote laboratory. In this thesis, performance of different filtered-x versions adaptive algorithms (NLMS, LLMS, RLS and FuRLMS) has been evaluated in a remote Laboratory. The adaptive algorithms were implemented remotely on a Texas Instrument DSP TMS320C6713 in an ANC system to attenuate low frequency noise which ranges from 0-200 Hz in a circular ventilation duct using single channel feed forward control. Results show that the remote lab can handle complex and advanced control algorithms. These algorithms were tested and it was found that remote lab works effectively and the achieved attenuation level for the algorithms used on the duct system is comparable to similar applications.",
"title": ""
},
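Editor's note: the thesis abstract above evaluates filtered-x adaptive algorithms such as NLMS for active noise control. For orientation only, a plain (non-filtered-x) NLMS weight update is sketched below; the filtered-x variants used in ANC additionally filter the reference signal through a secondary-path model, which is omitted here, and the step size and regularization constant are assumed values.

```python
import numpy as np

def nlms(x, d, num_taps=32, mu=0.5, eps=1e-6):
    """Normalized LMS: adapt an FIR filter so its output tracks the desired signal d.

    x: reference signal, d: desired signal (same length). Returns (weights, error signal).
    """
    w = np.zeros(num_taps)
    e = np.zeros(len(x))
    for n in range(num_taps, len(x)):
        u = x[n - num_taps:n][::-1]            # most recent samples first
        y = np.dot(w, u)                       # filter output
        e[n] = d[n] - y                        # instantaneous error
        w += (mu / (eps + np.dot(u, u))) * e[n] * u   # normalized gradient step
    return w, e
```

The normalization by the input power is what makes the step size robust to varying signal levels, one reason NLMS is a common baseline in the ANC experiments described above.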
{
"docid": "94485a72ab9392be5398322e651e553a",
"text": "The current study, integrating relevant concepts derived from self-regulatory focus, prospect/involvement and knowledge structure theories, proposes a conceptual framework that depicts how the message framing strategy of advertising may have an impact on the persuasiveness of brand marketing. As empirical examination of the framework shows, the consumer characteristics of self-construal, consumer involvement, and product knowledge are three key antecedents of the persuasiveness that message framing generates at the dimensions of advertising attitude, brand attitude, and purchase intention. Besides, significant interaction exists among these three variables. Implications of the research findings, both for academics and practitioners, are discussed.",
"title": ""
},
{
"docid": "206dc1a4a27b603360888d414e0b5cf6",
"text": "Standard deep reinforcement learning methods such as Deep Q-Networks (DQN) for multiple tasks (domains) face scalability problems due to large search spaces. This paper proposes a three-stage method for multi-domain dialogue policy learning-termed NDQN, and applies it to an information-seeking spoken dialogue system in the domains of restaurants and hotels. In this method, the first stage does multi-policy learning via a network of DQN agents; the second makes use of compact state representations by compressing raw inputs; and the third stage applies a pre-training phase for bootstraping the behaviour of agents in the network. Experimental results comparing DQN (baseline) versus NDQN (proposed) using simulations report that the proposed method exhibits better scalability and is promising for optimising the behaviour of multi-domain dialogue systems. An additional evaluation reports that the NDQN agents outperformed a K-Nearest Neighbour baseline in task success and dialogue length, yielding more efficient and successful dialogues.",
"title": ""
},
{
"docid": "f7ec4acfd6c4916f3fec0dfa26db558c",
"text": "In the real-world online social networks, users tend to form different social communities. Due to its extensive applications, community detection in online social networks has been a hot research topic in recent years. In this chapter, we will focus on introducing the social community detection problem in online social networks. To be more specific, we will take the hard community detection problem as an example to introduce the existing models proposed for conventional (one single) homogeneous social network, and the recent broad learning based (multiple aligned) heterogeneous social networks respectively. Key Word: Community Detection; Social Media; Aligned Heterogeneous Networks; Broad Learning",
"title": ""
},
{
"docid": "ac168ff92c464cb90a9a4ca0eb5bfa5c",
"text": "Path computing is a new paradigm that generalizes the edge computing vision into a multi-tier cloud architecture deployed over the geographic span of the network. Path computing supports scalable and localized processing by providing storage and computation along a succession of datacenters of increasing sizes, positioned between the client device and the traditional wide-area cloud data-center. CloudPath is a platform that implements the path computing paradigm. CloudPath consists of an execution environment that enables the dynamic installation of light-weight stateless event handlers, and a distributed eventual consistent storage system that replicates application data on-demand. CloudPath handlers are small, allowing them to be rapidly instantiated on demand on any server that runs the CloudPath execution framework. In turn, CloudPath automatically migrates application data across the multiple datacenter tiers to optimize access latency and reduce bandwidth consumption.",
"title": ""
},
{
"docid": "69b631f179ea3c521f1dde75be537279",
"text": "A conceptually simple but effective noise smoothing algorithm is described. This filter is motivated by the sigma probability of the Gaussian distribution, and it smooths the image noise by averaging only those neighborhood pixels which have the intensities within a fixed sigma range of the center pixel. Consequently, image edges are preserved, and subtle details and thin tines such as roads are retained. The characteristics of this smoothing algorithm are analyzed and compared with several other known filtering algorithms by their ability to retain subtle details, preserving edge shapes, sharpening ramp edges, etc. The comparison also indicates that the sigma filter is the most computationally efficient filter among those evaluated. The filter can be easily extended into several forms which can be used in contrast enhancement, image segmentation, and smoothing signal-dependent noisy images. Several test images 128 X 128 and 256 X 256 pixels in size are used to substantiate its characteristics. The algorithm can be easily extended to 3-D image smoothing.",
"title": ""
},
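The sigma filter described in the record above lends itself to a compact sketch: average only the neighbourhood pixels whose intensity lies within a fixed range of the centre pixel, which preserves edges while smoothing noise. The window radius, sigma value and the synthetic test image below are arbitrary choices for illustration, not values from the paper.

```python
import numpy as np

def sigma_filter(img, radius=2, sigma=10.0):
    """Average only neighbours within +/- 2*sigma of the centre pixel's intensity."""
    img = img.astype(float)
    padded = np.pad(img, radius, mode="reflect")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            centre = img[y, x]
            mask = np.abs(window - centre) <= 2.0 * sigma
            out[y, x] = window[mask].mean()   # the centre pixel is always inside the mask
    return out

noisy = np.random.default_rng(0).normal(128.0, 15.0, size=(64, 64))
smoothed = sigma_filter(noisy)
```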
{
"docid": "cc10051c413cfb6f87d0759100bc5182",
"text": "Social Media Hate Speech has continued to grow both locally and globally due to the increase of Online Social Media web forums like Facebook, Twitter and blogging. This has been propelled even further by smartphones and mobile data penetration locally. Global and Local terrorism has posed a vital question for technologists to investigate, prosecute, predict and prevent Social Media Hate Speech. This study provides a social media digital forensics tool through the design, development and implementation of a software application. The study will develop an application using Linux Apache MySQL PHP and Python. The application will use Scrapy Python page ranking algorithm to perform web crawling and the data will be placed in a MySQL database for data mining. The application used Agile Software development methodology with twenty websites being the subject of interest. The websites will be the sample size to demonstrate how the application",
"title": ""
},
{
"docid": "f87e64901ede5cc11dbb14f59cd95e80",
"text": "This paper presents a methodology to develop a dimensional data warehouse by integrating all three development approaches such as supply-driven, goal-driven and demand-driven. By having the combination of all three approaches, the final design will ensure that user requirements, company interest and existing source of data are included in the model. We proposed an automatic system using ontology as the knowledge domain. Starting from operational ER-D (Entity Relationship-Diagram), the selection of facts table, verification of terms and consistency checking will utilize domain ontology. The model will also be verified against user and company requirements. Any discrepancy in the final design requires designer and user intervention. The proposed methodology is supported by a prototype using a business data warehouse example.",
"title": ""
},
{
"docid": "ed9e53f132eada9ceb1f943cce00f20a",
"text": "With the proliferation of e-commerce websites and the ubiquitousness of smart phones, cross-domain image retrieval using images taken by smart phones as queries to search products on e-commerce websites is emerging as a popular application. One challenge of this task is to locate the attention of both the query and database images. In particular, database images, e.g. of fashion products, on e-commerce websites are typically displayed with other accessories, and the images taken by users contain noisy background and large variations in orientation and lighting. Consequently, their attention is difficult to locate. In this paper, we exploit the rich tag information available on the e-commerce websites to locate the attention of database images. For query images, we use each candidate image in the database as the context to locate the query attention. Novel deep convolutional neural network architectures, namely TagYNet and CtxYNet, are proposed to learn the attention weights and then extract effective representations of the images. Experimental results on public datasets confirm that our approaches have significant improvement over the existing methods in terms of the retrieval accuracy and efficiency.",
"title": ""
},
{
"docid": "cfe5d769b9d479dccd543f8a4d23fcf9",
"text": "This paper aims to describe the role of advanced sensing systems in the electric grid of the future. In detail, the project, development, and experimental validation of a smart power meter are described in the following. The authors provide an outline of the potentialities of the sensing systems and IoT to monitor efficiently the energy flow among nodes of an electric network. The described power meter uses the metrics proposed in the IEEE Standard 1459–2010 to analyze and process voltage and current signals. Information concerning the power consumption and power quality could allow the power grid to route efficiently the energy by means of more suitable decision criteria. The new scenario has changed the way to exchange energy in the grid. Now, energy flow must be able to change its direction according to needs. Energy cannot be now routed by considering just only the criterion based on the simple shortening of transmission path. So, even energy coming from a far node should be preferred, if it has higher quality standards. In this view, the proposed smart power meter intends to support the smart power grid to monitor electricity among different nodes in an efficient and effective way.",
"title": ""
},
{
"docid": "8c5726817049b2f5a77f6c1ba32b1254",
"text": "A memory leak occurs when a program allocates a block of memory, but does not release it after its last use. In case such a block is still referenced by one or more reachable pointers at the end of the execution, fixing the leak is often quite simple as long as it is known where the block was allocated. If, however, all references to the block are overwritten or lost during the program’s execution, only knowing the allocation site is not enough in most cases. This paper describes an approach based on dynamic instrumentation and garbage collection techniques, which enables us to also inform the user about where the last reference to a lost memory block was created and where it was lost, without the need for recompilation or relinking.",
"title": ""
},
{
"docid": "d21ce518c0186c15f93348bb43273655",
"text": "On the basis of current evidence regarding human papillomavirus (HPV) and cancer, this chapter provides estimates of the global burden of HPV-related cancers, and the proportion that are actually \"caused\" by infection with HPV types, and therefore potentially preventable. We also present trends in incidence and mortality of these cancers in the past, and consider their likely future evolution.",
"title": ""
},
{
"docid": "ea9bafe86af4418fa51abe27a2c2180b",
"text": "In this work, we propose a novel phenomenological model of the EEG signal based on the dynamics of a coupled Duffing-van der Pol oscillator network. An optimization scheme is adopted to match data generated from the model with clinically obtained EEG data from subjects under resting eyes-open (EO) and eyes-closed (EC) conditions. It is shown that a coupled system of two Duffing-van der Pol oscillators with optimized parameters yields signals with characteristics that match those of the EEG in both the EO and EC cases. The results, which are reinforced using statistical analysis, show that the EEG recordings under EC and EO resting conditions are clearly distinct realizations of the same underlying model occurring due to parameter variations with qualitatively different nonlinear dynamic characteristics. In addition, the interplay between noise and nonlinearity is addressed and it is shown that, for appropriately chosen values of noise intensity in the model, very good agreement exists between the model output and the EEG in terms of the power spectrum as well as Shannon entropy. In summary, the results establish that an appropriately tuned stochastic coupled nonlinear oscillator network such as the Duffing-van der Pol system could provide a useful framework for modeling and analysis of the EEG signal. In turn, design of algorithms based on the framework has the potential to positively impact the development of novel diagnostic strategies for brain injuries and disorders. © 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a7d957e619fa7b3fbca8073be818fc94",
"text": "The dielectric properties of epoxy nanocomposites with insulating nano-fillers, viz., TiO2, ZnO and AI2O3 were investigated at low filler concentrations by weight. Epoxy nanocomposite samples with a good dispersion of nanoparticles in the epoxy matrix were prepared and experiments were performed to measure the dielectric permittivity and tan delta (400 Hz-1 MHz), dc volume resistivity and ac dielectric strength. At very low nanoparticle loadings, results demonstrate some interesting dielectric behaviors for nanocomposites and some of the electrical properties are found to be unique and advantageous for use in several existing and potential electrical systems. The nanocomposite dielectric properties are analyzed in detail with respect to different experimental parameters like frequency (for permittivity/tan delta), filler size, filler concentration and filler permittivity. In addition, epoxy microcomposites for the same systems were synthesized and their dielectric properties were compared to the results already obtained for nanocomposites. The interesting dielectric characteristics for epoxy based nanodielectric systems are attributed to the large volume fraction of interfaces in the bulk of the material and the ensuing interactions between the charged nanoparticle surface and the epoxy chains.",
"title": ""
},
{
"docid": "5cb18c0ac81c6ead1892c699d43224b4",
"text": "We discuss algorithms for performing canonical correlation analysis. In canonical correlation analysis we try to find correlations between two data sets. The canonical correlation coefficients can be calculated directly from the two data sets or from (reduced) representations such as the covariance matrices. The algorithms for both representations are based on singular value decomposition. The methods described here have been implemented in the speech analysis program PRAAT (Boersma & Weenink, 1996), and some examples will be demonstated for formant frequency and formant level data from 50 male Dutch speakers as were reported by Pols et al. (1973).",
"title": ""
},
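A minimal sketch of the SVD route to canonical correlations mentioned in the record above (this is not the PRAAT implementation): centre each data set, take an orthonormal basis of each via its own SVD, and read the canonical correlation coefficients off the singular values of the product of the two bases. The data below are random placeholders standing in for formant measurements.

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlation coefficients of two data sets with matched rows (cases)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Ux, _, _ = np.linalg.svd(X, full_matrices=False)    # orthonormal basis of column space
    Uy, _, _ = np.linalg.svd(Y, full_matrices=False)
    corr = np.linalg.svd(Ux.T @ Uy, compute_uv=False)   # singular values = correlations
    return np.clip(corr, 0.0, 1.0)

rng = np.random.default_rng(1)
formant_freqs = rng.normal(size=(50, 3))                               # placeholder data
formant_levels = formant_freqs @ rng.normal(size=(3, 3)) + 0.1 * rng.normal(size=(50, 3))
print(canonical_correlations(formant_freqs, formant_levels))
```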
{
"docid": "ec9f13212368d59ff737a0e87939ccd2",
"text": "Abstract words refer to things that can not be seen, heard, felt, smelled, or tasted as opposed to concrete words. Among other applications, the degree of abstractness has been shown to be a useful information for metaphor detection. Our contribution to this topic are as follows: i) we compare supervised techniques to learn and extend abstractness ratings for huge vocabularies ii) we learn and investigate norms for multi-word units by propagating abstractness to verb-noun pairs which lead to better metaphor detection, iii) we overcome the limitation of learning a single rating per word and show that multisense abstractness ratings are potentially useful for metaphor detection. Finally, with this paper we publish automatically created abstractness norms for 3 million English words and multi-words as wellwords refer to things that can not be seen, heard, felt, smelled, or tasted as opposed to concrete words. Among other applications, the degree of abstractness has been shown to be a useful information for metaphor detection. Our contribution to this topic are as follows: i) we compare supervised techniques to learn and extend abstractness ratings for huge vocabularies ii) we learn and investigate norms for multi-word units by propagating abstractness to verb-noun pairs which lead to better metaphor detection, iii) we overcome the limitation of learning a single rating per word and show that multisense abstractness ratings are potentially useful for metaphor detection. Finally, with this paper we publish automatically created abstractness norms for 3 million English words and multi-words as well as automatically created sense-specific abstractness ratings.",
"title": ""
}
] |
scidocsrr
|
dde98cb5f741899d9bc63d0cefec8c62
|
Conditional Random Field (CRF)-Boosting: Constructing a Robust Online Hybrid Boosting Multiple Object Tracker Facilitated by CRF Learning
|
[
{
"docid": "2959be17f8186f6db5c479d39cc928db",
"text": "Bourdev and Malik (ICCV 09) introduced a new notion of parts, poselets, constructed to be tightly clustered both in the configuration space of keypoints, as well as in the appearance space of image patches. In this paper we develop a new algorithm for detecting people using poselets. Unlike that work which used 3D annotations of keypoints, we use only 2D annotations which are much easier for naive human annotators. The main algorithmic contribution is in how we use the pattern of poselet activations. Individual poselet activations are noisy, but considering the spatial context of each can provide vital disambiguating information, just as object detection can be improved by considering the detection scores of nearby objects in the scene. This can be done by training a two-layer feed-forward network with weights set using a max margin technique. The refined poselet activations are then clustered into mutually consistent hypotheses where consistency is based on empirically determined spatial keypoint distributions. Finally, bounding boxes are predicted for each person hypothesis and shape masks are aligned to edges in the image to provide a segmentation. To the best of our knowledge, the resulting system is the current best performer on the task of people detection and segmentation with an average precision of 47.8% and 40.5% respectively on PASCAL VOC 2009.",
"title": ""
},
{
"docid": "d158d2d0b24fe3766b6ddb9bff8e8010",
"text": "We introduce an online learning approach for multitarget tracking. Detection responses are gradually associated into tracklets in multiple levels to produce final tracks. Unlike most previous approaches which only focus on producing discriminative motion and appearance models for all targets, we further consider discriminative features for distinguishing difficult pairs of targets. The tracking problem is formulated using an online learned CRF model, and is transformed into an energy minimization problem. The energy functions include a set of unary functions that are based on motion and appearance models for discriminating all targets, as well as a set of pairwise functions that are based on models for differentiating corresponding pairs of tracklets. The online CRF approach is more powerful at distinguishing spatially close targets with similar appearances, as well as in dealing with camera motions. An efficient algorithm is introduced for finding an association with low energy cost. We evaluate our approach on three public data sets, and show significant improvements compared with several state-of-art methods.",
"title": ""
},
{
"docid": "e702b39e13d308fa264cb6a421792f5c",
"text": "Multi-object tracking has been recently approached with the min-cost network flow optimization techniques. Such methods simultaneously resolve multiple object tracks in a video and enable modeling of dependencies among tracks. Min-cost network flow methods also fit well within the “tracking-by-detection” paradigm where object trajectories are obtained by connecting per-frame outputs of an object detector. Object detectors, however, often fail due to occlusions and clutter in the video. To cope with such situations, we propose to add pairwise costs to the min-cost network flow framework. While integer solutions to such a problem become NP-hard, we design a convex relaxation solution with an efficient rounding heuristic which empirically gives certificates of small suboptimality. We evaluate two particular types of pairwise costs and demonstrate improvements over recent tracking methods in real-world video sequences.",
"title": ""
}
] |
[
{
"docid": "7540f1a40efd116ad712562d1fb5de23",
"text": "Optical coherence tomography (OCT) enables noninvasive high-resolution 3D imaging of the human retina and thus, plays a fundamental role in detecting a wide range of ocular diseases. Despite OCT’s diagnostic value, managing and analyzing resulting data is challenging. We apply two visual analytics strategies for supporting retinal assessment in practice. First, we provide an interface for unifying and structuring data from different sources into a common basis. Fusing that basis with medical records and augmenting it with analytically derived information facilitates thorough investigations. Second, we present a tailored visual analysis tool for presenting, selecting, and emphasizing different aspects of the attributed data. This enables free exploration, reducing the data to relevant subsets, and focusing on details. By applying both strategies, we effectively enhance the management and the analysis of OCT data for assisting medical diagnoses.",
"title": ""
},
{
"docid": "566e703c70f4d43bf1890761dc5a3861",
"text": "In this paper a novel technique for detecting and correcting errors in the RNS representation is presented. It is based on the selection of a particular subset of the legitimate range of the RNS representation characterized by the property that each element is a multiple of a suitable integer number m. This method allows to detect and correct any single error in the modular processors of the RNS based computational unit. This subset of the legitimate range can be used to perform addition and multiplication in the RNS domain allowing the design of complex arithmetic structures like FIR filters. In the paper, the architecture of a FIR filter with error detection and correction capabilities is presented showing the advantages with respect to filters in which the error detection and correction are obtained by using the traditional RNS technique.",
"title": ""
},
{
"docid": "b152e2a688321659c7c18cd1a7304854",
"text": "Mobile Ad Hoc Networking (MANET) is a key technology enabler in the tactical communication domain for the Network Centric Warfare.[1] A self-forming, self-healing, infrastructure-less network consisting of mobile nodes is very attractive for beyond line of sight (BLOS) voice and data range extension as well as tactical networking applications in general. Current research and development mostly focus on implementing MANET over new wideband data waveforms. However, a large number of currently fielded tactical radios and the next generation software defined radios (SDR) support various legacy tactical radio waveforms. A mobile ad hoc network over such legacy tactical radio links not only provides war fighters mission critical networking applications such as Situational Awareness and short payload messaging, the MANET nodes can also support voice and legacy data interoperation with the existing fielded legacy radios. Furthermore, the small spectrum footprint of current narrowband tactical radio waveforms can be complementary to the new wideband data waveforms for providing networking access in a spectrum constrained environment. This paper first describes the networking usage requirements for MANET over legacy narrowband tactical waveforms. Next, the common characteristics of legacy tactical radio waveforms and the implications of such characteristics for the MANET implementation are discussed. Then an actual MANET implementation over a legacy tactical radio waveform on a SDR is presented with the results of actual field tests. Finally, several improvements to this implementation are proposed.",
"title": ""
},
{
"docid": "6e8b6f8d0d69d7fcdec560a536c5cd57",
"text": "Networks have become multipath: mobile devices have multiple radio interfaces, datacenters have redundant paths and multihoming is the norm for big server farms. Meanwhile, TCP is still only single-path. Is it possible to extend TCP to enable it to support multiple paths for current applications on today’s Internet? The answer is positive. We carefully review the constraints—partly due to various types of middleboxes— that influenced the design of Multipath TCP and show how we handled them to achieve its deployability goals. We report our experience in implementing Multipath TCP in the Linux kernel and we evaluate its performance. Our measurements focus on the algorithms needed to efficiently use paths with different characteristics, notably send and receive buffer tuning and segment reordering. We also compare the performance of our implementation with regular TCP on web servers. Finally, we discuss the lessons learned from designing MPTCP.",
"title": ""
},
{
"docid": "69d296d1302d9e0acd7fb576f551118d",
"text": "Event detection is a research area that attracted attention during the last years due to the widespread availability of social media data. The problem of event detection has been examined in multiple social media sources like Twitter, Flickr, YouTube and Facebook. The task comprises many challenges including the processing of large volumes of data and high levels of noise. In this article, we present a wide range of event detection algorithms, architectures and evaluation methodologies. In addition, we extensively discuss on available datasets, potential applications and open research issues. The main objective is to provide a compact representation of the recent developments in the field and aid the reader in understanding the main challenges tackled so far as well as identifying interesting future research directions.",
"title": ""
},
{
"docid": "d03cda3a3e4deb5e249af7f3bcec0bee",
"text": "In this research, we investigate the process of producing allicin in garlic. With regard to the chemical compositions of garlic (Allium Sativum L.), allicin is among the active sulfuric materials in garlic that has a lot of benefits such as anti-bacterial, anti-oxidant and deradicalizing properties.",
"title": ""
},
{
"docid": "8d848e28f5b1187b0abea06ed53eed7b",
"text": "Vector Space Model (VSM) has been at the core of information retrieval for the past decades. VSM considers the documents as vectors in high dimensional space.In such a vector space, techniques like Latent Semantic Indexing (LSI), Support Vector Machines (SVM), Naive Bayes, etc., can be then applied for indexing and classification. However, in some cases, the dimensionality of the document space might be extremely large, which makes these techniques infeasible due to the curse of dimensionality. In this paper, we propose a novel Tensor Space Model for document analysis. We represent documents as the second order tensors, or matrices. Correspondingly, a novel indexing algorithm called Tensor Latent Semantic Indexing (TensorLSI) is developed in the tensor space. Our theoretical analysis shows that TensorLSI is much more computationally efficient than the conventional Latent Semantic Indexing, which makes it applicable for extremely large scale data set. Several experimental results on standard document data sets demonstrate the efficiency and effectiveness of our algorithm.",
"title": ""
},
{
"docid": "56ed1b2d57e2a76ce35f8ac93baf185e",
"text": "This study investigated the relationship between sprint start performance (5-m time) and strength and power variables. Thirty male athletes [height: 183.8 (6.8) cm, and mass: 90.6 (9.3) kg; mean (SD)] each completed six 10-m sprints from a standing start. Sprint times were recorded using a tethered running system and the force-time characteristics of the first ground contact were recorded using a recessed force plate. Three to six days later subjects completed three concentric jump squats, using a traditional and split technique, at a range of external loads from 30–70% of one repetition maximum (1RM). Mean (SD) braking impulse during acceleration was negligible [0.009 (0.007) N/s/kg) and showed no relationship with 5 m time; however, propulsive impulse was substantial [0.928 (0.102) N/s/kg] and significantly related to 5-m time (r=−0.64, P<0.001). Average and peak power were similar during the split squat [7.32 (1.34) and 17.10 (3.15) W/kg] and the traditional squat [7.07 (1.25) and 17.58 (2.85) W/kg], and both were significantly related to 5-m time (r=−0.64 to −0.68, P<0.001). Average power was maximal at all loads between 30% and 60% of 1RM for both squats. Split squat peak power was also maximal between 30% and 60% of 1RM; however, traditional squat peak power was maximal between 50% and 70% of 1RM. Concentric force development is critical to sprint start performance and accordingly maximal concentric jump power is related to sprint acceleration.",
"title": ""
},
{
"docid": "415076b6961220393217bc18d9ae99ce",
"text": "Support Vector Machines (SVM) have been extensively studied and have shown remarkable success in many applications. However the success of SVM is very limited when it is applied to the problem of learning from imbalanced datasets in which negative instances heavily outnumber the positive instances (e.g. in gene profiling and detecting credit card fraud). This paper discusses the factors behind this failure and explains why the common strategy of undersampling the training data may not be the best choice for SVM. We then propose an algorithm for overcoming these problems which is based on a variant of the SMOTE algorithm by Chawla et al, combined with Veropoulos et al’s different error costs algorithm. We compare the performance of our algorithm against these two algorithms, along with undersampling and regular SVM and show that our algorithm outperforms all of them.",
"title": ""
},
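The combination described in the record above (minority oversampling plus class-dependent error costs in the SVM) can be sketched with off-the-shelf libraries. The snippet below uses scikit-learn and imbalanced-learn as stand-ins and invented cost weights; it illustrates the general recipe rather than the authors' exact variant or parameters.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE

# Imbalanced toy problem: roughly 5% positive examples.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# 1) SMOTE: synthesise minority-class examples, on the training split only.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

# 2) Different error costs: penalise mistakes on the minority class more heavily.
clf = SVC(kernel="rbf", class_weight={0: 1.0, 1: 5.0}).fit(X_res, y_res)

print(classification_report(y_te, clf.predict(X_te)))
```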
{
"docid": "b0c62e2049ea4f8ada0d506e06adb4bb",
"text": "In the past year, convolutional neural networks have been shown to perform extremely well for stereo estimation. However, current architectures rely on siamese networks which exploit concatenation followed by further processing layers, requiring a minute of GPU computation per image pair. In contrast, in this paper we propose a matching network which is able to produce very accurate results in less than a second of GPU computation. Towards this goal, we exploit a product layer which simply computes the inner product between the two representations of a siamese architecture. We train our network by treating the problem as multi-class classification, where the classes are all possible disparities. This allows us to get calibrated scores, which result in much better matching performance when compared to existing approaches.",
"title": ""
},
{
"docid": "3d267b494eda6271ca9ce5037a2a4c5c",
"text": "The Web of Linked Data forms a single, globally distributed dataspace. Due to the openness of this dataspace, it is not possible to know in advance all data sources that might be relevant for query answering. This openness poses a new challenge that is not addressed by traditional research on federated query processing. In this paper we present an approach to execute SPARQL queries over the Web of Linked Data. The main idea of our approach is to discover data that might be relevant for answering a query during the query execution itself. This discovery is driven by following RDF links between data sources based on URIs in the query and in partial results. The URIs are resolved over the HTTP protocol into RDF data which is continuously added to the queried dataset. This paper describes concepts and algorithms to implement our approach using an iterator-based pipeline. We introduce a formalization of the pipelining approach and show that classical iterators may cause blocking due to the latency of HTTP requests. To avoid blocking, we propose an extension of the iterator paradigm. The evaluation of our approach shows its strengths as well as the still existing challenges.",
"title": ""
},
{
"docid": "b9e71509ab12a3963f069ad8fa6d3baf",
"text": "Data mining can provide support for bank managers to effectively analyze and predict customer churn in the era of big data. After analyzing the reasons for the bank customer churn and the defects of FCM algorithm as a data mining algorithm, a new method of calculating the effectiveness function to improve the FCM algorithm was raised. At the same time, it has been applied to predict bank customer churn. Through data mining experiments of customer information conducted on a commercial bank, it's found out the clients have been lost and will be lost. Contrast of confusion matrixes shows that the improved FCM algorithm has high accuracy, which can provide new ideas and new methods for the analysis and prediction of bank customer churn.",
"title": ""
},
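The record above improves FCM for churn prediction; for orientation, the sketch below implements the standard fuzzy c-means iteration (alternating membership and centroid updates), not the paper's modified effectiveness function. The two-dimensional toy data and the reading of the clusters as churned versus retained customers are assumptions made for the example.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means; returns (centroids, membership matrix U of shape (n, c))."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                  # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        centroids = (W.T @ X) / W.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        U = 1.0 / dist ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centroids, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)), rng.normal(5.0, 1.0, (100, 2))])
centroids, U = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)        # hard assignment, e.g. "likely to churn" vs "retained"
```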
{
"docid": "e644b698d2977a2c767fe86a1445e23c",
"text": "This paper describes the E2E data, a new dataset for training end-to-end, datadriven natural language generation systems in the restaurant domain, which is ten times bigger than existing, frequently used datasets in this area. The E2E dataset poses new challenges: (1) its human reference texts show more lexical richness and syntactic variation, including discourse phenomena; (2) generating from this set requires content selection. As such, learning from this dataset promises more natural, varied and less template-like system utterances. We also establish a baseline on this dataset, which illustrates some of the difficulties associated with this data.",
"title": ""
},
{
"docid": "db31a02d996b0a36d0bf215b7b7e8354",
"text": "This paper presents methods to analyze functional brain networks and signals from graph spectral perspectives. The notion of frequency and filters traditionally defined for signals supported on regular domains such as discrete time and image grids has been recently generalized to irregular graph domains and defines brain graph frequencies associated with different levels of spatial smoothness across the brain regions. Brain network frequency also enables the decomposition of brain signals into pieces corresponding to smooth or rapid variations. We relate graph frequency with principal component analysis when the networks of interest denote functional connectivity. The methods are utilized to analyze brain networks and signals as subjects master a simple motor skill. We observe that brain signals corresponding to different graph frequencies exhibit different levels of adaptability throughout learning. Further, we notice a strong association between graph spectral properties of brain networks and the level of exposure to tasks performed and recognize the most contributing and important frequency signatures at different levels of task familiarity.",
"title": ""
},
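The graph-frequency decomposition referred to in the record above can be sketched directly: eigenvectors of the graph Laplacian define graph frequencies, and projecting a signal onto them splits it into smooth and rapidly varying parts across the network. The weighted graph and the signal below are random placeholders, not connectome data, and the split point between low and high frequencies is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                        # number of brain regions (toy size)
A = rng.random((n, n))
A = (A + A.T) / 2.0                          # symmetric functional-connectivity weights
np.fill_diagonal(A, 0.0)
L = np.diag(A.sum(axis=1)) - A               # combinatorial graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L)         # ascending eigenvalues: low to high graph frequency
signal = rng.normal(size=n)                  # one signal sample over the regions

coeffs = eigvecs.T @ signal                  # graph Fourier transform
k = n // 2
smooth = eigvecs[:, :k] @ coeffs[:k]         # low graph-frequency component
rapid = eigvecs[:, k:] @ coeffs[k:]          # high graph-frequency component
assert np.allclose(signal, smooth + rapid)   # the two parts reconstruct the signal
```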
{
"docid": "cfe5d769b9d479dccd543f8a4d23fcf9",
"text": "This paper aims to describe the role of advanced sensing systems in the electric grid of the future. In detail, the project, development, and experimental validation of a smart power meter are described in the following. The authors provide an outline of the potentialities of the sensing systems and IoT to monitor efficiently the energy flow among nodes of an electric network. The described power meter uses the metrics proposed in the IEEE Standard 1459–2010 to analyze and process voltage and current signals. Information concerning the power consumption and power quality could allow the power grid to route efficiently the energy by means of more suitable decision criteria. The new scenario has changed the way to exchange energy in the grid. Now, energy flow must be able to change its direction according to needs. Energy cannot be now routed by considering just only the criterion based on the simple shortening of transmission path. So, even energy coming from a far node should be preferred, if it has higher quality standards. In this view, the proposed smart power meter intends to support the smart power grid to monitor electricity among different nodes in an efficient and effective way.",
"title": ""
},
{
"docid": "e159ffe1f686e400b28d398127edfc5c",
"text": "In this paper, we present an in-vehicle computing system capable of localizing lane markings and communicating them to drivers. To the best of our knowledge, this is the first system that combines the Maximally Stable Extremal Region (MSER) technique with the Hough transform to detect and recognize lane markings (i.e., lines and pictograms). Our system begins by localizing the region of interest using the MSER technique. A three-stage refinement computing algorithm is then introduced to enhance the results of MSER and to filter out undesirable information such as trees and vehicles. To achieve the requirements of real-time systems, the Progressive Probabilistic Hough Transform (PPHT) is used in the detection stage to detect line markings. Next, the recognition of the color and the form of line markings is performed; this it is based on the results of the application of the MSER to left and right line markings. The recognition of High-Occupancy Vehicle pictograms is performed using a new algorithm, based on the results of MSER regions. In the tracking stage, Kalman filter is used to track both ends of each detected line marking. Several experiments are conducted to show the efficiency of our system. © 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f996b9911692cc835e55e561c3a501db",
"text": "This study proposes a clustering-based Wi-Fi fingerprinting localization algorithm. The proposed algorithm first presents a novel support vector machine based clustering approach, namely SVM-C, which uses the margin between two canonical hyperplanes for classification instead of using the Euclidean distance between two centroids of reference locations. After creating the clusters of fingerprints by SVM-C, our positioning system embeds the classification mechanism into a positioning task and compensates for the large database searching problem. The proposed algorithm assigns the matched cluster surrounding the test sample and locates the user based on the corresponding cluster's fingerprints to reduce the computational complexity and remove estimation outliers. Experimental results from realistic Wi-Fi test-beds demonstrated that our approach apparently improves the positioning accuracy. As compared to three existing clustering-based methods, K-means, affinity propagation, and support vector clustering, the proposed algorithm reduces the mean localization errors by 25.34%, 25.21%, and 26.91%, respectively.",
"title": ""
},
{
"docid": "597d49edde282e49703ba0d9e02e3f1e",
"text": "BACKGROUND\nThe vitamin D receptor (VDR) pathway is important in the prevention and potentially in the treatment of many cancers. One important mechanism of VDR action is related to its interaction with the Wnt/beta-catenin pathway. Agonist-bound VDR inhibits the oncogenic Wnt/beta-catenin/TCF pathway by interacting directly with beta-catenin and in some cells by increasing cadherin expression which, in turn, recruits beta-catenin to the membrane. Here we identify TCF-4, a transcriptional regulator and beta-catenin binding partner as an indirect target of the VDR pathway.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nIn this work, we show that TCF-4 (gene name TCF7L2) is decreased in the mammary gland of the VDR knockout mouse as compared to the wild-type mouse. Furthermore, we show 1,25(OH)2D3 increases TCF-4 at the RNA and protein levels in several human colorectal cancer cell lines, the effect of which is completely dependent on the VDR. In silico analysis of the human and mouse TCF7L2 promoters identified several putative VDR binding elements. Although TCF7L2 promoter reporters responded to exogenous VDR, and 1,25(OH)2D3, mutation analysis and chromatin immunoprecipitation assays, showed that the increase in TCF7L2 did not require recruitment of the VDR to the identified elements and indicates that the regulation by VDR is indirect. This is further confirmed by the requirement of de novo protein synthesis for this up-regulation.\n\n\nCONCLUSIONS/SIGNIFICANCE\nAlthough it is generally assumed that binding of beta-catenin to members of the TCF/LEF family is cancer-promoting, recent studies have indicated that TCF-4 functions instead as a transcriptional repressor that restricts breast and colorectal cancer cell growth. Consequently, we conclude that the 1,25(OH)2D3/VDR-mediated increase in TCF-4 may have a protective role in colon cancer as well as diabetes and Crohn's disease.",
"title": ""
},
{
"docid": "aa83af152739ac01ba899d186832ee62",
"text": "Predicting user \"ratings\" on items is a crucial task in recommender systems. Matrix factorization methods that computes a low-rank approximation of the incomplete user-item rating matrix provide state-of-the-art performance, especially for users and items with several past ratings (warm starts). However, it is a challenge to generalize such methods to users and items with few or no past ratings (cold starts). Prior work [4][32] have generalized matrix factorization to include both user and item features for performing better regularization of factors as well as provide a model for smooth transition from cold starts to warm starts. However, the features were incorporated via linear regression on factor estimates. In this paper, we generalize this process to allow for arbitrary regression models like decision trees, boosting, LASSO, etc. The key advantage of our approach is the ease of computing --- any new regression procedure can be incorporated by \"plugging\" in a standard regression routine into a few intermediate steps of our model fitting procedure. With this flexibility, one can leverage a large body of work on regression modeling, variable selection, and model interpretation. We demonstrate the usefulness of this generalization using the MovieLens and Yahoo! Buzz datasets.",
"title": ""
}
] |
scidocsrr
|
1d27839b112aeb226d6897c9f2819d5f
|
Interpretable 3D Human Action Analysis with Temporal Convolutional Networks
|
[
{
"docid": "2e8251644f82f3a965cf6360416eaaaa",
"text": "The past decade has witnessed a rapid proliferation of video cameras in all walks of life and has resulted in a tremendous explosion of video content. Several applications such as content-based video annotation and retrieval, highlight extraction and video summarization require recognition of the activities occurring in the video. The analysis of human activities in videos is an area with increasingly important consequences from security and surveillance to entertainment and personal archiving. Several challenges at various levels of processing-robustness against errors in low-level processing, view and rate-invariant representations at midlevel processing and semantic representation of human activities at higher level processing-make this problem hard to solve. In this review paper, we present a comprehensive survey of efforts in the past couple of decades to address the problems of representation, recognition, and learning of human activities from video and related applications. We discuss the problem at two major levels of complexity: 1) \"actions\" and 2) \"activities.\" \"Actions\" are characterized by simple motion patterns typically executed by a single human. \"Activities\" are more complex and involve coordinated actions among a small number of humans. We will discuss several approaches and classify them according to their ability to handle varying degrees of complexity as interpreted above. We begin with a discussion of approaches to model the simplest of action classes known as atomic or primitive actions that do not require sophisticated dynamical modeling. Then, methods to model actions with more complex dynamics are discussed. The discussion then leads naturally to methods for higher level representation of complex activities.",
"title": ""
},
{
"docid": "e41680f7ade6fa91d275e5e5137b4750",
"text": "The ability to identify and temporally segment fine-grained human actions throughout a video is crucial for robotics, surveillance, education, and beyond. Typical approaches decouple this problem by first extracting local spatiotemporal features from video frames and then feeding them into a temporal classifier that captures high-level temporal patterns. We describe a class of temporal models, which we call Temporal Convolutional Networks (TCNs), that use a hierarchy of temporal convolutions to perform fine-grained action segmentation or detection. Our Encoder-Decoder TCN uses pooling and upsampling to efficiently capture long-range temporal patterns whereas our Dilated TCN uses dilated convolutions. We show that TCNs are capable of capturing action compositions, segment durations, and long-range dependencies, and are over a magnitude faster to train than competing LSTM-based Recurrent Neural Networks. We apply these models to three challenging fine-grained datasets and show large improvements over the state of the art.",
"title": ""
},
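A hedged PyTorch sketch of the dilated-convolution idea behind the Dilated TCN variant described above: a stack of 1-D convolutions with exponentially growing dilation followed by a per-frame classifier. Channel sizes, depth and the number of action classes are arbitrary placeholders, and this is not the authors' released architecture.

```python
import torch
import torch.nn as nn

class DilatedTCN(nn.Module):
    """Per-frame action classification with exponentially dilated 1-D convolutions."""
    def __init__(self, in_ch=64, hidden=64, n_classes=10, n_layers=4):
        super().__init__()
        layers, ch = [], in_ch
        for i in range(n_layers):
            d = 2 ** i                                   # dilation: 1, 2, 4, 8
            layers += [nn.Conv1d(ch, hidden, kernel_size=3, dilation=d, padding=d),
                       nn.ReLU()]
            ch = hidden
        self.backbone = nn.Sequential(*layers)
        self.head = nn.Conv1d(hidden, n_classes, kernel_size=1)

    def forward(self, x):                                # x: (batch, channels, time)
        return self.head(self.backbone(x))               # (batch, n_classes, time)

model = DilatedTCN()
frames = torch.randn(2, 64, 100)                         # 2 clips, 64-d features, 100 frames
logits = model(frames)                                   # per-frame class scores
```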
{
"docid": "71b5c8679979cccfe9cad229d4b7a952",
"text": "Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one.\n In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally varound the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.",
"title": ""
}
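The core recipe of the method summarised above (perturb the instance, weight perturbations by proximity, fit a simple linear surrogate locally) can be sketched in a few lines. The snippet below is a simplification using scikit-learn pieces, not the authors' released library; the black-box model, perturbation scale and kernel width are invented for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)   # model to be explained

def explain_locally(x, n_perturb=1000, width=1.0, seed=0):
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=0.5, size=(n_perturb, x.size))       # local perturbations
    preds = black_box.predict_proba(Z)[:, 1]                      # black-box outputs
    weights = np.exp(-np.sum((Z - x) ** 2, axis=1) / width ** 2)  # proximity kernel
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_                                        # local feature importances

print(explain_locally(X[0]))
```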
] |
[
{
"docid": "a7db9f3f1bb5883f6a5a873dd661867b",
"text": "Psychologists and sociologists usually interpret happiness scores as cardinal and comparable across respondents, and thus run OLS regressions on happiness and changes in happiness. Economists usually assume only ordinality and have mainly used ordered latent response models, thereby not taking satisfactory account of fixed individual traits. We address this problem by developing a conditional estimator for the fixed-effect ordered logit model. We find that assuming ordinality or cardinality of happiness scores makes little difference, whilst allowing for fixed-effects does change results substantially. We call for more research into the determinants of the personality traits making up these fixed-effects.",
"title": ""
},
{
"docid": "2e82b7c84ed48dbeb95564eb1c63ecb6",
"text": "Received Nov 26, 2017 Revised Jan 22, 2018 Accepted Feb 25, 2018 This paper presents the simulation of the control of doubly star induction motor using Direct Torque Control (DTC) based on Proportional and Integral controller (PI) and Fuzzy Logic Controller (FLC). In addition, the work describes a model of doubly star induction motor in α-β reference frame theory and its computer simulation in MATLAB/SIMULINK®.The structure of the DTC has several advantages such as the short sampling time required by the TC schemes makes them suited to a very fast flux and torque controlled drives as well as the simplicity of the control algorithm.the generalpurpose induction drives in very wide range using DTC because it is the excellent solution. The performances of the DTC with a PI controller and FLC are tested under differents speeds command values and load torque. Keyword:",
"title": ""
},
{
"docid": "bef3c65efd72249fb3d668438f9961e5",
"text": "This study investigates the effects of earthquake types, magnitudes, and hysteretic behavior on the peak and residual ductility demands of inelastic single-degree-of-freedom systems and evaluates the effects of major aftershocks on the non-linear structural responses. An extensive dataset of real mainshock–aftershock sequences for Japanese earthquakes is developed. The constructed dataset is large, compared with previous datasets of similar kinds, and includes numerous sequences from the 2011 Tohoku earthquake, facilitating an investigation of spatial aspects of the aftershock effects. The empirical assessment of peak and residual ductility demands of numerous inelastic systems having different vibration periods, yield strengths, and hysteretic characteristics indicates that the increase in seismic demand measures due to aftershocks occurs rarely but can be significant. For a large mega-thrust subduction earthquake, a critical factor for major aftershock damage is the spatial occurrence process of aftershocks.",
"title": ""
},
{
"docid": "35aa75f5bd79c8d97e374c33f5bad615",
"text": "Historically, much attention has been given to the unit processes and the integration of those unit processes to improve product yield. Less attention has been given to the wafer environment, either during or post processing. This paper contains a detailed discussion on how particles and Airborne Molecular Contaminants (AMCs) from the wafer environment interact and produce undesired effects on the wafer. Sources of wafer environmental contamination are the process itself, ambient environment, outgassing from wafers, and FOUP contamination. Establishing a strategy that reduces contamination inside the FOUP will increase yield and decrease defect variability. Three primary variables that greatly impact this strategy are FOUP contamination mitigation, FOUP material, and FOUP metrology and cleaning method.",
"title": ""
},
{
"docid": "15bf072dd0195fa8a9eb19fb82862a4e",
"text": "Recent developments in Graphics Processing Units (GPUs) have enabled inexpensive high performance computing for general-purpose applications. Due to GPU's tremendous computing capability, it has emerged as the co-processor of the CPU to achieve a high overall throughput. CUDA programming model provides the programmers adequate C language like APIs to better exploit the parallel power of the GPU. K-nearest neighbor (KNN) is a widely used classification technique and has significant applications in various domains, especially in text classification. The computational-intensive nature of KNN requires a high performance implementation. In this paper, we present a CUDA-based parallel implementation of KNN, CUKNN, using CUDA multi-thread model, where the data elements are processed in a data-parallel fashion. Various CUDA optimization techniques are applied to maximize the utilization of the GPU. CUKNN outperforms the serial KNN on an HP xw8600 workstation significantly, achieving up to 46.71X speedup including I/O time. It also shows good scalability when varying the dimension of the reference dataset, the number of records in the reference dataset, and the number of records in the query dataset.",
"title": ""
},
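For orientation, the data-parallel brute-force KNN that the record above maps onto the GPU reduces to two steps: compute all query-to-reference distances, then take the k smallest per query. The NumPy sketch below is a CPU stand-in for the CUDA kernels, with toy sizes; it is not the CUKNN code.

```python
import numpy as np

def knn(reference, query, k=5):
    """Indices of the k nearest reference rows for every query row (brute force)."""
    # All-pairs squared Euclidean distances, shape (n_query, n_reference); this is the
    # step CUKNN distributes across GPU threads.
    d2 = ((query[:, None, :] - reference[None, :, :]) ** 2).sum(axis=2)
    return np.argsort(d2, axis=1)[:, :k]

rng = np.random.default_rng(0)
reference = rng.normal(size=(2000, 16))
query = rng.normal(size=(100, 16))
neighbours = knn(reference, query, k=5)      # shape (100, 5)
```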
{
"docid": "8231e10912b42e0f8ac90392e6e0efbb",
"text": "Zobrist Hashing: An Efficient Work Distribution Method for Parallel Best-First Search Yuu Jinnai, Alex Fukunaga VIS: Text and Vision Oral Presentations 1326 SentiCap: Generating Image Descriptions with Sentiments Alexander Patrick Mathews, Lexing Xie, Xuming He 1950 Reading Scene Text in Deep Convolutional Sequences Pan He, Weilin Huang, Yu Qiao, Chen Change Loy, Xiaoou Tang 1247 Creating Images by Learning Image Semantics Using Vector Space Models Derrall Heath, Dan Ventura Poster Spotlight Talks 655 Towards Domain Adaptive Vehicle Detection in Satellite Image by Supervised SuperResolution Transfer Liujuan Cao, Rongrong Ji, Cheng Wang, Jonathan Li 499 Transductive Zero-Shot Recognition via Shared Model Space Learning Yuchen Guo, Guiguang Ding, Xiaoming Jin, Jianmin Wang 1255 Exploiting View-Specific Appearance Similarities Across Classes for Zero-shot Pose Prediction: A Metric Learning Approach Alina Kuznetsova, Sung Ju Hwang, Bodo Rosenhahn, Leonid Sigal NLP: Topic Flow Oral Presentations 744 Topical Analysis of Interactions Between News and Social Media Ting Hua, Yue Ning, Feng Chen, Chang-Tien Lu, Naren Ramakrishnan 1561 Tracking Idea Flows between Social Groups Yangxin Zhong, Shixia Liu, Xiting Wang, Jiannan Xiao, Yangqiu Song 1201 Modeling Evolving Relationships Between Characters in Literary Novels Snigdha Chaturvedi, Shashank Srivastava, Hal Daume III, Chris Dyer Poster Spotlight Talks 405 Identifying Search",
"title": ""
},
{
"docid": "d40584c70648ec82a4f59d835ddfd1a2",
"text": "Objective To evaluate efficacy of probiotics in prevention and treatment of diarrhoea associated with the use of antibiotics. Design Meta-analysis; outcome data (proportion of patients not getting diarrhoea) were analysed, pooled, and compared to determine odds ratios in treated and control groups. Identification Studies identified by searching Medline between 1966 and 2000 and the Cochrane Library. Studies reviewed Nine randomised, double blind, placebo controlled trials of probiotics. Results Two of the nine studies investigated the effects of probiotics in children. Four trials used a yeast (Saccharomyces boulardii), four used lactobacilli, and one used a strain of enterococcus that produced lactic acid. Three trials used a combination of probiotic strains of bacteria. In all nine trials, the probiotics were given in combination with antibiotics and the control groups received placebo and antibiotics. The odds ratio in favour of active treatment over placebo in preventing diarrhoea associated with antibiotics was 0.39 (95% confidence interval 0.25 to 0.62; P < 0.001) for the yeast and 0.34 (0.19 to 0.61; P < 0.01 for lactobacilli. The combined odds ratio was 0.37 (0.26 to 0.53; P < 0.001) in favour of active treatment over placebo. Conclusions The meta-analysis suggests that probiotics can be used to prevent antibiotic associated diarrhoea and that S boulardii and lactobacilli have the potential to be used in this situation. The efficacy of probiotics in treating antibiotic associated diarrhoea remains to be proved. A further large trial in which probiotics are used as preventive agents should look at the costs of and need for routine use of these agents.",
"title": ""
},
{
"docid": "166230b235fe0c18a80041741a7c5e4a",
"text": "Most existing neural network models for music generation use recurrent neural networks. However, the recent WaveNet model proposed by DeepMind shows that convolutional neural networks (CNNs) can also generate realistic musical waveforms in the audio domain. Following this light, we investigate using CNNs for generating melody (a series of MIDI notes) one bar after another in the symbolic domain. In addition to the generator, we use a discriminator to learn the distributions of melodies, making it a generative adversarial network (GAN). Moreover, we propose a novel conditional mechanism to exploit available prior knowledge, so that the model can generate melodies either from scratch, by following a chord sequence, or by conditioning on the melody of previous bars (e.g. a priming melody), among other possibilities. The resulting model, named MidiNet, can be expanded to generate music with multiple MIDI channels (i.e. tracks). We conduct a user study to compare the melody of eight-bar long generated by MidiNet and by Google’s MelodyRNN models, each time using the same priming melody. Result shows that MidiNet performs comparably with MelodyRNN models in being realistic and pleasant to listen to, yet MidiNet’s melodies are reported to be much more interesting.",
"title": ""
},
{
"docid": "f85a8a7e11a19d89f2709cc3c87b98fc",
"text": "This paper presents novel store-and-forward packet routing algorithms for Wireless Body Area Networks (WBAN) with frequent postural partitioning. A prototype WBAN has been constructed for experimentally characterizing on-body topology disconnections in the presence of ultra short range radio links, unpredictable RF attenuation, and human postural mobility. On-body DTN routing protocols are then developed using a stochastic link cost formulation, capturing multi-scale topological localities in human postural movements. Performance of the proposed protocols are evaluated experimentally and via simulation, and are compared with a number of existing single-copy DTN routing protocols and an on-body packet flooding mechanism that serves as a performance benchmark with delay lower-bound. It is shown that via multi-scale modeling of the spatio-temporal locality of on-body link disconnection patterns, the proposed algorithms can provide better routing performance compared to a number of existing probabilistic, opportunistic, and utility-based DTN routing protocols in the literature.",
"title": ""
},
{
"docid": "774f1a2403acf459a4eb594c5772a362",
"text": "motion selection DTU Orbit (12/12/2018) ISSARS: An integrated software environment for structure-specific earthquake ground motion selection Current practice enables the design and assessment of structures in earthquake prone areas by performing time history analysis with the use of appropriately selected strong ground motions. This study presents a Matlab-based software environment, which is integrated with a finite element analysis package, and aims to improve the efficiency of earthquake ground motion selection by accounting for the variability of critical structural response quantities. This additional selection criterion, which is tailored to the specific structure studied, leads to more reliable estimates of the mean structural response quantities used in design, while fulfils the criteria already prescribed by the European and US seismic codes and guidelines. To demonstrate the applicability of the software environment developed, an existing irregular, multi-storey, reinforced concrete building is studied for a wide range of seismic scenarios. The results highlight the applicability of the software developed and the benefits of applying a structure-specific criterion in the process of selecting suites of earthquake motions for the seismic design and assessment. (C) 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "da7d45d2cbac784d31e4d3957f4799e6",
"text": "Malicious Uniform Resource Locator (URL) detection is an important problem in web search and mining, which plays a critical role in internet security. In literature, many existing studies have attempted to formulate the problem as a regular supervised binary classification task, which typically aims to optimize the prediction accuracy. However, in a real-world malicious URL detection task, the ratio between the number of malicious URLs and legitimate URLs is highly imbalanced, making it very inappropriate for simply optimizing the prediction accuracy. Besides, another key limitation of the existing work is to assume a large amount of training data is available, which is impractical as the human labeling cost could be potentially quite expensive. To solve these issues, in this paper, we present a novel framework of Cost-Sensitive Online Active Learning (CSOAL), which only queries a small fraction of training data for labeling and directly optimizes two cost-sensitive measures to address the class-imbalance issue. In particular, we propose two CSOAL algorithms and analyze their theoretical performance in terms of cost-sensitive bounds. We conduct an extensive set of experiments to examine the empirical performance of the proposed algorithms for a large-scale challenging malicious URL detection task, in which the encouraging results showed that the proposed technique by querying an extremely small-sized labeled data (about 0.5% out of 1-million instances) can achieve better or highly comparable classification performance in comparison to the state-of-the-art cost-insensitive and cost-sensitive online classification algorithms using a huge amount of labeled data.",
"title": ""
},
{
"docid": "284c52c29b5a5c2d3fbd0a7141353e35",
"text": "This paper presents results of patient experiments using a new gait-phase detection sensor (GPDS) together with a programmable functional electrical stimulation (FES) system for subjects with a dropped-foot walking dysfunction. The GPDS (sensors and processing unit) is entirely embedded in a shoe insole and detects in real time four phases (events) during the gait cycle: stance, heel off, swing, and heel strike. The instrumented GPDS insole consists of a miniature gyroscope that measures the angular velocity of the foot and three force sensitive resistors that measure the force load on the shoe insole at the heel and the metatarsal bones. The extracted gait-phase signal is transmitted from the embedded microcontroller to the electrical stimulator and used in a finite state control scheme to time the electrical stimulation sequences. The electrical stimulations induce muscle contractions in the paralyzed muscles leading to a more physiological motion of the affected leg. The experimental results of the quantitative motion analysis during walking of the affected and nonaffected sides showed that the use of the combined insole and FES system led to a significant improvement in the gait-kinematics of the affected leg. This combined sensor and stimulation system has the potential to serve as a walking aid for rehabilitation training or permanent use in a wide range of gait disabilities after brain stroke, spinal-cord injury, or neurological diseases.",
"title": ""
},
{
"docid": "494d720d5a8c7c58b795c5c6131fa8d1",
"text": "The increasing emergence of pervasive information systems requires a clearer understanding of the underlying characteristics in relation to user acceptance. Based on the integration of UTAUT2 and three pervasiveness constructs, we derived a comprehensive research model to account for pervasive information systems. Data collected from 346 participants in an online survey was analyzed to test the developed model using structural equation modeling and taking into account multigroup analysis. The results confirm the applicability of the integrated UTAUT2 model to measure pervasiveness. Implications for research and practice are discussed together with future research opportunities.",
"title": ""
},
{
"docid": "a34e04069b232309b39994d21bb0f89a",
"text": "In the near future, i.e., beyond 4G, some of the prime objectives or demands that need to be addressed are increased capacity, improved data rate, decreased latency, and better quality of service. To meet these demands, drastic improvements need to be made in cellular network architecture. This paper presents the results of a detailed survey on the fifth generation (5G) cellular network architecture and some of the key emerging technologies that are helpful in improving the architecture and meeting the demands of users. In this detailed survey, the prime focus is on the 5G cellular network architecture, massive multiple input multiple output technology, and device-to-device communication (D2D). Along with this, some of the emerging technologies that are addressed in this paper include interference management, spectrum sharing with cognitive radio, ultra-dense networks, multi-radio access technology association, full duplex radios, millimeter wave solutions for 5G cellular networks, and cloud technologies for 5G radio access networks and software defined networks. In this paper, a general probable 5G cellular network architecture is proposed, which shows that D2D, small cell access points, network cloud, and the Internet of Things can be a part of 5G cellular network architecture. A detailed survey is included regarding current research projects being conducted in different countries by research groups and institutions that are working on 5G technologies.",
"title": ""
},
{
"docid": "ccb1634d00239d7b08946144f3e0a763",
"text": "Designing RNA sequences that fold into specific structures and perform desired biological functions is an emerging field in bioengineering with broad applications from intracellular chemical catalysis to cancer therapy via selective gene silencing. Effective RNA design requires first solving the inverse folding problem: given a target structure, propose a sequence that folds into that structure. Although significant progress has been made in developing computational algorithms for this purpose, current approaches are ineffective at designing sequences for complex targets, limiting their utility in real-world applications. However, an alternative that has shown significantly higher performance are human players of the online RNA design game EteRNA. Through many rounds of gameplay, these players have developed a collective library of \"human\" rules and strategies for RNA design that have proven to be more effective than current computational approaches, especially for complex targets. Here, we present an RNA design agent, SentRNA, which consists of a fully-connected neural network trained using the eternasolves dataset, a set of 1.8 x 104 player-submitted sequences across 724 unique targets. The agent first predicts an initial sequence for a target using the trained network, and then refines that solution if necessary using a short adaptive walk utilizing a canon of standard design moves. Through this approach, we observe SentRNA can learn and apply human-like design strategies to solve several complex targets previously unsolvable by any computational approach. We thus demonstrate that incorporating a prior of human design strategies into a computational agent can significantly boost its performance, and suggests a new paradigm for machine-based RNA design. Introduction: Solving the inverse folding problem for RNA is a critical prerequisite to effective RNA design, an emerging field of modern bioengineering research.1,2,3,4,5 A RNA molecule's function is highly dependent on the structure into which it folds, which in turn is determined by the sequence of nucleotides that comprise it. Therefore, designing RNA molecules to perform specific functions requires designing sequences that fold into specific structures. As such, significant efforts have been made over the past several decades in developing computational algorithms to reliably predict RNA sequences that fold into a given target.6,7,8,9,10,11 Existing computational methods for inverse RNA folding can be roughly separated into two types. The first type generates an initial guess of a sequence and then refines the sequence using some form of stochastic search. Published algorithms that fall under this category include RNAInverse,6 RNA-SSD,7 INFO-RNA,8 NUPACK,10 and MODENA.11 RNAInverse, one of the first inverse folding algorithms, initializes the sequence randomly and then uses a simple adaptive walk in which random single or pair mutations are successively performed, and a mutation is accepted if it improves the structural similarity between the current and the target structure. RNA-SSD first performs hierarchical decomposition of the target and then performs adaptive walk separately on each substructure to reduce the size of the search space. INFO-RNA first generates an initial guess of the sequence using dynamic programming to estimate the minimum energy sequence for a target structure, and then performs simulated annealing on the sequence. 
NUPACK performs hierarchical decomposition of the target and assigns an initial sequence to each substructure. For each sequence, it then generates a thermodynamic ensemble of possible structures and stochastically perturbs the sequence to optimize the \"ensemble defect\" term, which represents the average number of improperly paired bases relative to the target over the entire ensemble. Finally, one of the most recent algorithms, MODENA, generates an ensemble of initial sequences using a genetic algorithm, and then performs stochastic search using crossovers and single-point mutations. The second type of design algorithm, exemplified by programs such as DSS-Opt, foregoes stochastic search and instead attempts to generate a valid sequence directly from gradient-based optimization. Given a target, DSS-Opt generates an initial sequence and then performs a gradient-based optimization of an objective function that includes the predicted free energy of the target and a \"negative design\" term that punishes improperly paired bases. Both types of algorithms have proven effective given simple to moderately complex structures. However, there is still much room for improvement. A recent benchmark of these algorithms showed that they consistently fail given large or structurally complex targets, 12 limiting their applicability to designing RNA molecules for real-world biological applications. A promising alternative approach to RNA design that has consistently outperformed current computational methods is EteRNA, a web-based graphical interface in which the RNA design problem is presented to humans as a game. 13 Players of the game are shown 2D representations of target RNA structures (\"puzzles\") and asked to propose sequences that fold into them. These sequences are first judged using the ViennaRNA 1.8.5 software package6 and then validated experimentally. Through this cycle of design and evaluation, players build a collective library of design strategies that can then be applied to new, more complex puzzles. These strategies are distinct from those employed by design algorithms such as DSS-Opt and NUPACK in that they are honed through visual pattern recognition and experience. Remarkably, these human-developed strategies have proven more effective for RNA design than current computational methods. For example, EteRNA players significantly outperform even the best computational algorithms on the Eterna100, a set of 100 challenging puzzles designed by EteRNA players to showcase a variety of RNA structural elements that make design difficult. While top-ranking human players can solve all 100 puzzles, even the best-scoring computational algorithm, MODENA, could only solve 54 / 100 puzzles.12 Given the success of these strategies, we decided to investigate whether incorporating these strategies into a computational agent can increase its performance beyond that of current state-of-the-art methods. In this study, we present SentRNA, a computational agent for RNA design that significantly outperforms existing computational algorithms by learning human-like design strategies in a data driven manner. The agent consists of a fully-connected neural network that takes as input a featurized representation of the local environment around a given position in a puzzle. The output is length-4, corresponding to the four RNA nucleotides (bases): A, U, C, or G. The model is trained using the eternasolves dataset, a custom-compiled collection of 1.8 x 104 playersubmitted solutions across 724 unique puzzles. 
These puzzles comprise both the “Progression” puzzles, designed for beginning EteRNA players, as well as several “Lab” puzzles for which solutions were experimentally synthesized and tested. During validation and testing the agent takes an initially blank puzzle and assigns bases to every position greedily based on the output values. If this initial prediction is not valid, as judged by ViennaRNA 1.8.5, it is further refined via an adaptive walk using a canon of standard design moves compiled by players and taught to new players through the game's puzzle progression. Overall, we trained and tested an ensemble of 165 models, each using a distinct training set and model input (see Methods). Collectively, the ensemble of models can solve 42 / 100 puzzles from the Eterna100 by neural network prediction alone, and 80 / 100 puzzles using neural network prediction + refinement. Among these 80 puzzles are all 15 puzzles highlighted during a previous benchmark by Anderson Lee et al.12 Notably, among these 15 puzzles are 7 puzzles yet unsolvable by any computational algorithm. This study demonstrates that teaching human design strategies to a computational RNA design agent in a data-driven manner can lead to significant increases in performance over previous methods, and represents a new paradigm in machine-based RNA design in which both human and computational design strategies are united into a single agent. Methods: Code availability: The source code for SentRNA, all our trained models, and the full eternasolves dataset can be found on GitHub: https://github.com/jadeshi/SentRNA. Hardware: We performed all computations (training, validation, testing, and refinement) using a desktop computer with an Intel Core i7-6700K @ 4.00 GHz CPU and 16 GB of RAM. Creating 2D structural representations of puzzles: During training and testing of almost all models, we used the RNAplot function from ViennaRNA 1.8.5 to render puzzles as 2D structures given their dot-bracket representations. However, when training and testing two specific models M6 and M8 on two highly symmetric puzzles, “Mat Lot 2-2 B\" and “Mutated chicken feet” (see Results and Discussion), we decided to use an in-house rendering algorithm (hereafter called EteRNA rendering) in place of RNAplot, as we found the RNAplot was unable to properly render the symmetric structure of the puzzles. Neural network architecture: Our goal is to create an RNA design agent that can propose a sequence of RNA bases that folds into a given target structure, i.e. fill in an initially blank EteRNA puzzle. To do this, we employ a fully connected neural network that assigns an identity of A, U, C, or G to each position in the puzzle given a featurized representation of its local environment. During test time, we expose the agent to every position in the puzzle sequentially and have it predict its identity. The neural network was implemented using TensorFlow14 and contains three hidden layers of 100 nodes with ReLU nonlinearitie",
"title": ""
},
{
"docid": "b68336c869207720d6ab1880744b70be",
"text": "Particle Swarm Optimization (PSO) algorithms represent a new approach for optimization. In this paper image enhancement is considered as an optimization problem and PSO is used to solve it. Image enhancement is mainly done by maximizing the information content of the enhanced image with intensity transformation function. In the present work a parameterized transformation function is used, which uses local and global information of the image. Here an objective criterion for measuring image enhancement is used which considers entropy and edge information of the image. We tried to achieve the best enhanced image according to the objective criterion by optimizing the parameters used in the transformation function with the help of PSO. Results are compared with other enhancement techniques, viz. histogram equalization, contrast stretching and genetic algorithm based image enhancement.",
"title": ""
},
{
"docid": "d34d8dd7ba59741bb5e28bba3e870ac4",
"text": "Among those who have recently lost a job, social networks in general and online ones in particular may be useful to cope with stress and find new employment. This study focuses on the psychological and practical consequences of Facebook use following job loss. By pairing longitudinal surveys of Facebook users with logs of their online behavior, we examine how communication with different kinds of ties predicts improvements in stress, social support, bridging social capital, and whether they find new jobs. Losing a job is associated with increases in stress, while talking with strong ties is generally associated with improvements in stress and social support. Weak ties do not provide these benefits. Bridging social capital comes from both strong and weak ties. Surprisingly, individuals who have lost a job feel greater stress after talking with strong ties. Contrary to the \"strength of weak ties\" hypothesis, communication with strong ties is more predictive of finding employment within three months.",
"title": ""
},
{
"docid": "bbd378407abb1c2a9a5016afee40c385",
"text": "One approach to the generation of natural-sounding synthesized speech waveforms is to select and concatenate units from a large speech database. Units (in the current work, phonemes) are selected to produce a natural realisation of a target phoneme sequence predicted from text which is annotated with prosodic and phonetic context information. We propose that the units in a synthesis database can be considered as a state transition network in which the state occupancy cost is the distance between a database unit and a target, and the transition cost is an estimate of the quality of concatenation of two consecutive units. This framework has many similarities to HMM-based speech recognition. A pruned Viterbi search is used to select the best units for synthesis from the database. This approach to waveform synthesis permits training from natural speech: two methods for training from speech are presented which provide weights which produce more natural speech than can be obtained by hand-tuning.",
"title": ""
},
{
"docid": "47a12c3101f0aa6cd7f9675a211bcfae",
"text": "This paper describes the OpenViBE software platform which enables researchers to design, test, and use braincomputer interfaces (BCIs). BCIs are communication systems that enable users to send commands to computers solely by means of brain activity. BCIs are gaining interest among the virtual reality (VR) community since they have appeared as promising interaction devices for virtual environments (VEs). The key features of the platform are (1) high modularity, (2) embedded tools for visualization and feedback based on VR and 3D displays, (3) BCI design made available to non-programmers thanks to visual programming, and (4) various tools offered to the different types of users. The platform features are illustrated in this paper with two entertaining VR applications based on a BCI. In the first one, users can move a virtual ball by imagining hand movements, while in the second one, they can control a virtual spaceship using real or imagined foot movements. Online experiments with these applications together with the evaluation of the platform computational performances showed its suitability for the design of VR applications controlled with a BCI. OpenViBE is a free software distributed under an open-source license.",
"title": ""
},
{
"docid": "0be69ebc6297e7a4fb71594d7c38cb86",
"text": "Internet of Things (IoT), which will create a huge network of billions or trillions of “Things” communicating with one another, are facing many technical and application challenges. This paper introduces the status of IoT development in China, including policies, R&D plans, applications, and standardization. With China's perspective, this paper depicts such challenges on technologies, applications, and standardization, and also proposes an open and general IoT architecture consisting of three platforms to meet the architecture challenge. Finally, this paper discusses the opportunity and prospect of IoT.",
"title": ""
}
] |
scidocsrr
|
499ee0581ada2467ddd3d3fda1e8054c
|
Fooling OCR Systems with Adversarial Text Images
|
[
{
"docid": "22accfa74592e8424bdfe74224365425",
"text": "In the SQuaD reading comprehension task systems are given a paragraph from Wikipedia and have to answer a question about it. The answer is guaranteed to be contained within the paragraph. There are 107,785 such paragraph-question-answer tuples in the dataset. Human performance on this task achieves 91.2% accuracy (F1), and the current state-of-the-art system obtains a respectably close 84.7%. Not so fast though! If we adversarially add a single sentence to those paragraphs, in such a way that the added sentences do not contradict the correct answer, nor do they confuse humans, the accuracy of the published models studied plummets from an average of 75% to just 36%.",
"title": ""
}
] |
[
{
"docid": "f1635351e7d3c308eeca5df314b18b8f",
"text": "The vertex cover problem Find a set of vertices that cover the graph LP rounding is a 4 step scheme to approximate combinatorial problems with theoretical guarantees on solution quality. Several problems in machine learning, computer vision and data analysis can be formulated using NP-‐hard combinatorial optimization problems. In many of these applications, approximate solutions for these NP-‐hard problems are 'good enough'.",
"title": ""
},
{
"docid": "47f9724fd9dc25eda991854074ac0afa",
"text": "This paper reviews the state of the art in piezoelectric energy harvesting. It presents the basics of piezoelectricity and discusses materials choice. The work places emphasis on material operating modes and device configurations, from resonant to non-resonant devices and also to rotational solutions. The reviewed literature is compared based on power density and bandwidth. Lastly, the question of power conversion is addressed by reviewing various circuit solutions.",
"title": ""
},
{
"docid": "2a14bb86d6e758f547a024cfdac125de",
"text": "Nurses' judgements and decisions have the potential to help healthcare systems allocate resources efficiently, promote health gain and patient benefit and prevent harm. Evidence from healthcare systems throughout the world suggests that judgements and decisions made by clinicians could be improved: around half of all adverse events have some kind of error at their core. For nursing to contribute to raising quality though improved judgements and decisions within health systems we need to know more about the decisions and judgements themselves, the interventions likely to improve judgement and decision processes and outcomes, and where best to target finite intellectual and educational resources. There is a rich heritage of research into decision making and judgement, both from within the discipline of nursing and from other perspectives, but which focus on nurses. Much of this evidence plays only a minor role in the development of educational and technological efforts at decision improvement. This paper presents nine unanswered questions that researchers and educators might like to consider as a potential agenda for the future of research into this important area of nursing practice, training and development.",
"title": ""
},
{
"docid": "b8b1c342a2978f74acd38bed493a77a5",
"text": "With the rapid growth of battery-powered portable electronics, an efficient power management solution is necessary for extending battery life. Generally, basic switching regulators, such as buck and boost converters, may not be capable of using the entire battery output voltage range (e.g., 2.5-4.7 V for Li-ion batteries) to provide a fixed output voltage (e.g., 3.3 V). In this paper, an average-current-mode noninverting buck-boost dc-dc converter is proposed. It is not only able to use the full output voltage range of a Li-ion battery, but it also features high power efficiency and excellent noise immunity. The die area of this chip is 2.14 × 1.92 mm2, fabricated by using TSMC 0.35 μm 2P4M 3.3 V/5 V mixed-signal polycide process. The input voltage of the converter may range from 2.3 to 5 V with its output voltage set to 3.3 V, and its switching frequency is 500 kHz. Moreover, it can provide up to 400-mA load current, and the maximal measured efficiency is 92.01%.",
"title": ""
},
{
"docid": "517ec608208a669872a1d11c1d7836a3",
"text": "Hafez is an automatic poetry generation system that integrates a Recurrent Neural Network (RNN) with a Finite State Acceptor (FSA). It generates sonnets given arbitrary topics. Furthermore, Hafez enables users to revise and polish generated poems by adjusting various style configurations. Experiments demonstrate that such “polish” mechanisms consider the user’s intention and lead to a better poem. For evaluation, we build a web interface where users can rate the quality of each poem from 1 to 5 stars. We also speed up the whole system by a factor of 10, via vocabulary pruning and GPU computation, so that adequate feedback can be collected at a fast pace. Based on such feedback, the system learns to adjust its parameters to improve poetry quality.",
"title": ""
},
{
"docid": "7608d2d2331cb306e380ed4163ee448b",
"text": "We establish two fixed-point theorems for mappings satisfying a general contrac-tive inequality of integral type. These results substantially extend the theorem of Branciari (2002). In a recent paper [1], Branciari established the following theorem. Theorem 1. Let (X, d) be a complete metric space, c ∈ [0, 1), f : X → X a mapping such that, for each x, y ∈ X, d(f x,f y) 0 ϕ(t)dt ≤ c d(x,y) 0 ϕ(t)dt, (1) where ϕ : R + → R + is a Lebesgue-integrable mapping which is summable, non-negative, and such that, for each > 0, 0 ϕ(t)dt > 0. Then f has a unique fixed point z ∈ X such that, for each x ∈ X, lim n f n x = z. In [1], it was mentioned that (1) could be extended to more general contrac-tive conditions. It is the purpose of this paper to make such an extension to two of the most general contractive conditions. Define m(x, y) = max d(x,y),d(x,f x),d(y,f y), d(x, f y) + d(y, f x) 2. (2) Our first result is the following theorem. Theorem 2. Let (X, d) be a complete metric space, k ∈ [0, 1), f : X → X a mapping such that, for each x, y ∈ X, d(f x,f y) 0 ϕ(t)dt ≤ k m(x,y) 0 ϕ(t)dt, (3)",
"title": ""
},
{
"docid": "bfb0de9970cf1970f98c4fa78c2ec4d7",
"text": "The problem of matching between binaries is important for software copyright enforcement as well as for identifying disclosed vulnerabilities in software. We present a search engine prototype called Rendezvous which enables indexing and searching for code in binary form. Rendezvous identifies binary code using a statistical model comprising instruction mnemonics, control flow sub-graphs and data constants which are simple to extract from a disassembly, yet normalising with respect to different compilers and optimisations. Experiments show that Rendezvous achieves F2 measures of 86.7% and 83.0% on the GNU C library compiled with different compiler optimisations and the GNU coreutils suite compiled with gcc and clang respectively. These two code bases together comprise more than one million lines of code. Rendezvous will bring significant changes to the way patch management and copyright enforcement is currently performed.",
"title": ""
},
{
"docid": "ad9f3510ffaf7d0bdcf811a839401b83",
"text": "The stator permanent magnet (PM) machines have simple and robust rotor structure as well as high torque density. The hybrid excitation topology can realize flux regulation and wide constant power operating capability of the stator PM machines when used in dc power systems. This paper compares and analyzes the electromagnetic performance of different hybrid excitation stator PM machines according to different combination modes of PMs, excitation winding, and iron flux bridge. Then, the control strategies for voltage regulation of dc power systems are discussed based on different critical control variables including the excitation current, the armature current, and the electromagnetic torque. Furthermore, an improved direct torque control (DTC) strategy is investigated to improve system performance. A parallel hybrid excitation flux-switching generator employing the improved DTC which shows excellent dynamic and steady-state performance has been achieved experimentally.",
"title": ""
},
{
"docid": "9cf8a2f73a906f7dc22c2d4fbcf8fa6b",
"text": "In this paper the effect of spoilers on aerodynamic characteristics of an airfoil were observed by CFD.As the experimental airfoil NACA 2415 was choosen and spoiler was extended from five different positions based on the chord length C. Airfoil section is designed with a spoiler extended at an angle of 7 degree with the horizontal.The spoiler extends to 0.15C.The geometry of 2-D airfoil without spoiler and with spoiler was designed in GAMBIT.The numerical simulation was performed by ANS YS Fluent to observe the effect of spoiler position on the aerodynamic characteristics of this particular airfoil. The results obtained from the computational process were plotted on graph and the conceptual assumptions were verified as the lift is reduced and the drag is increased that obeys the basic function of a spoiler. Coefficient of drag. I. INTRODUCTION An airplane wing has a special shape called an airfoil. As a wing moves through air, the air is split and passes above and below the wing. The wing's upper surface is shaped so the air rushing over the top speeds up and stretches out. This decreases the air pressure above the wing. The air flowing below the wing moves in a straighter line, so its speed and air pressure remains the same. Since high air pressure always moves toward low air pressure, the air below the wing pushes upward toward the air above the wing. The wing is in the middle, and the whole wing is ―lifted‖. The faster an airplane moves, the more lift there is and when the force of lift is greater than the force of gravity, the airplane is able to fly. [1] A spoiler, sometimes called a lift dumper is a device intended to reduce lift in an aircraft. Spoilers are plates on the top surface of a wing which can be extended upward into the airflow and spoil it. By doing so, the spoiler creates a carefully controlled stall over the portion of the wing behind it, greatly reducing the lift of that wing section. Spoilers are designed to reduce lift also making considerable increase in drag. Spoilers increase drag and reduce lift on the wing. If raised on only one wing, they aid roll control, causing that wing to drop. If the spoilers rise symmetrically in flight, the aircraft can either be slowed in level flight or can descend rapidly without an increase in airspeed. When the …",
"title": ""
},
{
"docid": "40fc164e3e98c81a9f2b60f6a2254961",
"text": "Is software-defined networking (SDN) friend or foe in terms of security? Drawing on a recent Dagstuhl seminar, the authors discuss SDN's security challenges, debate strategies to monitor and protect SDN-enabled networks, and propose methods and strategies to leverage SDN's flexibility for designing new security mechanisms.",
"title": ""
},
{
"docid": "3a7c00a5f2a42037d81441611507268c",
"text": "OBJECTIVE\nEffective play and coping skills may be important determinants of children's adaptive behavior. Play and coping have undergone extensive individual study; however, these two variables have not been explored in relationship to each other.\n\n\nMETHOD\nThe play behaviors of 19 randomly selected preschool children were rated by researchers using The Test of Playfulness. The children's coping skills were rated by their teachers with the Coping Inventory.\n\n\nRESULTS\nA positive, significant correlation was found between children's level of playfulness and their coping skills. Overall, girls were rated as more playful than boys and scored higher in coping skills. Younger children (36-47 months of age) were rated as better players and copers than older children (47-57 months of age).\n\n\nCONCLUSION\nThis pilot study supports occupational therapy intervention in children's play environments and playful interactions as a way of influencing their adaptability in all life skills.",
"title": ""
},
{
"docid": "83c97ebf212a2c183fae2220bcc6a985",
"text": "Fourier ptychography is an imaging technique that overcomes the diffraction limit of conventional cameras with applications in microscopy and long range imaging. Diffraction blur causes resolution loss in both cases. In Fourier ptychography, a coherent light source illuminates an object, which is then imaged from multiple viewpoints. The reconstruction of the object from these set of recordings can be obtained by an iterative phase retrieval algorithm. However, the retrieval process is slow and does not work well under certain conditions. In this paper, we propose a new reconstruction algorithm that is based on convolutional neural networks and demonstrate its advantages in terms of speed and performance.",
"title": ""
},
{
"docid": "abdb5215202ac788004c5bf534b09c9b",
"text": "Social Networking sites have grown in popularity with the number of users who utilize the medium to share aspects of their personal lives increasing yearly. Security issues have been of concern to users, site designers, as well as Security specialists who must find ways to defend corporate networks from malicious attacks initiated from these sites. Social engineering attacks can lead to targeted spear phishing attacks, all with a monetary motive at the root. This study describes and discusses some of these security issues.",
"title": ""
},
{
"docid": "3f5a9a14c9e02e832358deeff3e2509c",
"text": "Modern science of graphs has emerged the last few years as a field of interest and has been bringing significant advances to our knowledge about networks. Until recently the existing data mining algorithms were destined for structured/relational data while many datasets exist that require graph representation such as social networks, networks generated by textual data, 3D protein structures and chemical compounds. It has become therefore of crucial importance to be able to extract in an efficient and effective way meaningful information from that kind of data and towards this end graph mining and analysis methods have been proven essential. The goal of this thesis is to study problems in the area of graph mining focusing especially on designing new algorithms and tools related to information spreading and specifically on how to locate influential entities in real-world social networks. This task is crucial in many applications such as information diffusion, epidemic control and viral marketing. In the first part of the thesis, we have studied spreading processes in social networks focusing on finding topological characteristics that rank entities in the network based on their influential capabilities. We have specifically focused on the K-truss decomposition which is an extension of the k-core decomposition of the graph. Both methods partition a graph into subgraphs whose nodes and/or edges have some common characteristics. For the case of the K-truss, the edges belonging to such subgraph are contained to at least K-2 triangles. After extensive experimental analysis in real-world networks, we showed that the nodes that belong to the maximal K-truss subgraph show a better spreading behavior when compared to baseline criteria such as degree and k-core centralities. Such spreaders have the ability to influence a greater part of the network during the first steps of a spreading process but also the total fraction of the influenced nodes at the end of the epidemic is greater. We have also observed that node members of such dense subgraphs are those achieving the optimal spreading in the network. In the second part of the thesis, we focused on identifying a group of nodes that by acting all together maximize the expected number of influenced nodes at the end of the spreading process, formally called Influence Maximization. The Influence Maximization problem is actually NP-hard though there exist approximation guarantees for efficient algorithms that can solve the problem while obtaining a solution within the 63% of optimal classes of models. As those guarantees propose a greedy approximation which is computationally expensive especially for large graphs, we proposed the MATI algorithm which succeeds in locating the group of users that maximize the influence while also being scalable. The algorithm takes advantage of the possible paths created in each node’s neighborhood and precalculates each node’s potential influence and achieves to produce competitive results in quality compared to those of baseline algorithms such as the Greedy, LDAG and SimPath. In the last part of the thesis, we study the privacy point of view of sharing such metrics that are good influential indicators in a social network. We have focused on designing an algorithm that addresses the problem of computing through an efficient, correct, secure, and privacy-preserving algorithm the k-core metric which measures the influence of each node of the network. 
We have specifically adopted a decentralization approach where the social network is considered as a Peer-to-peer (P2P) system. The algorithm is",
"title": ""
},
{
"docid": "be4a4e3385067ce8642ff83ed76c4dcf",
"text": "We examine what makes a search system domain-specific and find that previous definitions are incomplete. We propose a new definition of domain specific search, together with a corresponding model, to assist researchers, systems designers and system beneficiaries in their analysis of their own domain. This model is then instantiated for two domains: intellectual property search (i.e. patent search) and medical or healthcare search. For each of the two we follow the theoretical model and identify outstanding issues. We find that the choice of dimensions is still an open issue, as linear independence is often absent and specific use-cases, particularly those related to interactive IR, still cannot be covered by the proposed model.",
"title": ""
},
{
"docid": "26d7628ee9b06bd9b46965fb5989d245",
"text": "This paper introduces a new perspective on information behavior in Web 2.0 environments, including the role of mobile access in bridging formal to informal learning. Kuhlthau’s (1991, 2007) Information Search Process (ISP) model is identified as a theoretical basis for exploring Information Seeking attitudes and behaviors, while social learning and literacy concepts of Vygotsky (1962, 1978), Bruner (1962, 1964) and Jenkins (2010) are identified as foundations for Information Sharing. The Guided Inquiry Spaces model (Maniotes, 2005) is proposed as an approach to bridging the student’s informal learning world and the curriculum-based teacher’s world. Research within this framework is operationalized through a recently validated Information and Communications Technology Learning (ICTL) survey instrument measuring learners’ preferences for self-expression, sharing, and knowledge acquisition interactions in technology-pervasive environments. Stepwise refinement of ICTL produced two reliable and valid psychometric scales, Information Sharing (alpha = .77) and Information Seeking (alpha = .72). Cross-validation with an established Mobile Learning Scale (Khaddage & Knezek, 2013) indicates that Information Sharing aligns significantly (p < .05) with Mobile Learning. Information Seeking, Information Sharing, and mobile access are presented as important, complimentary components important, complimentary components in the shift along the formal to informal learning continuum. Therefore, measures of these constructs can assist in understanding students’ preferences for 21st century learning. 2013 The Authors. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fa9a2112a687063c2fb3733af7b1ea61",
"text": "Prediction without justification has limited utility. Much of the success of neural models can be attributed to their ability to learn rich, dense and expressive representations. While these representations capture the underlying complexity and latent trends in the data, they are far from being interpretable. We propose a novel variant of denoising k-sparse autoencoders that generates highly efficient and interpretable distributed word representations (word embeddings), beginning with existing word representations from state-of-the-art methods like GloVe and word2vec. Through large scale human evaluation, we report that our resulting word embedddings are much more interpretable than the original GloVe and word2vec embeddings. Moreover, our embeddings outperform existing popular word embeddings on a diverse suite of benchmark downstream tasks.",
"title": ""
},
{
"docid": "193cb03ebb59935ea33d23daaebbfb74",
"text": "We study semi-supervised learning when the data consists of multiple intersecting manifolds. We give a finite sample analysis to quantify the potential gain of using unlabeled data in this multi-manifold setting. We then propose a semi-supervised learning algorithm that separates different manifolds into decision sets, and performs supervised learning within each set. Our algorithm involves a novel application of Hellinger distance and size-constrained spectral clustering. Experiments demonstrate the benefit of our multimanifold semi-supervised learning approach.",
"title": ""
},
{
"docid": "b3068a1b1acb0782d2c2b1dac65042cf",
"text": "Measurement of N (nitrogen), P (phosphorus) and K ( potassium) contents of soil is necessary to decide how much extra contents of these nutrients are to b e added in the soil to increase crop fertility. Thi s improves the quality of the soil which in turn yields a good qua lity crop. In the present work fiber optic based c olor sensor has been developed to determine N, P, and K values in t he soil sample. Here colorimetric measurement of aq ueous solution of soil has been carried out. The color se nsor is based on the principle of absorption of col or by solution. It helps in determining the N, P, K amounts as high, m edium, low, or none. The sensor probe along with p roper signal conditioning circuits is built to detect the defici ent component of the soil. It is useful in dispensi ng only required amount of fertilizers in the soil.",
"title": ""
},
{
"docid": "d973047c3143043bb25d4a53c6b092ec",
"text": "Persian License Plate Detection and Recognition System is an image-processing technique used to identify a vehicle by its license plate. In fact this system is one kind of automatic inspection of transport, traffic and security systems and is of considerable interest because of its potential applications to areas such as automatic toll collection, traffic law enforcement and security control of restricted areas. License plate location is an important stage in vehicle license plate recognition for automated transport system. This paper presents a real time and robust method of license plate detection and recognition from cluttered images based on the morphology and template matching. In this system main stage is the isolation of the license plate from the digital image of the car obtained by a digital camera under different circumstances such as illumination, slop, distance, and angle. The algorithm starts with preprocessing and signal conditioning. Next license plate is localized using morphological operators. Then a template matching scheme will be used to recognize the digits and characters within the plate. This system implemented with help of Isfahan Control Traffic organization and the performance was 98.2% of correct plates identification and localization and 92% of correct recognized characters. The results regarding the complexity of the problem and diversity of the test cases show the high accuracy and robustness of the proposed method. The method could also be applicable for other applications in the transport information systems, where automatic recognition of registration plates, shields, signs, and so on is often necessary. This paper presents a morphology-based method.",
"title": ""
}
] |
scidocsrr
|
d5e1646fa4a6fa74251f19eebe3cc2c5
|
Lowering the barriers to large-scale mobile crowdsensing
|
[
{
"docid": "b8808d637dcb8bbb430d68196587b3a4",
"text": "Crowd sourcing is based on a simple but powerful concept: Virtually anyone has the potential to plug in valuable information. The concept revolves around large groups of people or community handling tasks that have traditionally been associated with a specialist or small group of experts. With the advent of the smart devices, many mobile applications are already tapping into crowd sourcing to report community issues and traffic problems, but more can be done. While most of these applications work well for the average user, it neglects the information needs of particular user communities. We present CROWDSAFE, a novel convergence of Internet crowd sourcing and portable smart devices to enable real time, location based crime incident searching and reporting. It is targeted to users who are interested in crime information. The system leverages crowd sourced data to provide novel features such as a Safety Router and value added crime analytics. We demonstrate the system by using crime data in the metropolitan Washington DC area to show the effectiveness of our approach. Also highlighted is its ability to facilitate greater collaboration between citizens and civic authorities. Such collaboration shall foster greater innovation to turn crime data analysis into smarter and safe decisions for the public.",
"title": ""
},
{
"docid": "513ae13c6848f3a83c36dc43d34b43a5",
"text": "In this paper, we describe the design, analysis, implementation, and operational deployment of a real-time trip information system that provides passengers with the expected fare and trip duration of the taxi ride they are planning to take. This system was built in cooperation with a taxi operator that operates more than 15,000 taxis in Singapore. We first describe the overall system design and then explain the efficient algorithms used to achieve our predictions based on up to 21 months of historical data consisting of approximately 250 million paid taxi trips. We then describe various optimisations (involving region sizes, amount of history, and data mining techniques) and accuracy analysis (involving routes and weather) we performed to increase both the runtime performance and prediction accuracy. Our large scale evaluation demonstrates that our system is (a) accurate --- with the mean fare error under 1 Singapore dollar (~ 0.76 US$) and the mean duration error under three minutes, and (b) capable of real-time performance, processing thousands to millions of queries per second. Finally, we describe the lessons learned during the process of deploying this system into a production environment.",
"title": ""
}
] |
[
{
"docid": "f720554ba9cff8bec781f4ad2ec538aa",
"text": "English. Hate speech is prevalent in social media platforms. Systems that can automatically detect offensive content are of great value to assist human curators with removal of hateful language. In this paper, we present machine learning models developed at UW Tacoma for detection of misogyny, i.e. hate speech against women, in English tweets, and the results obtained with these models in the shared task for Automatic Misogyny Identification (AMI) at EVALITA2018. Italiano. Commenti offensivi nei confronti di persone con diversa orientazione sessuale o provenienza sociale sono oggigiorno prevalenti nelle piattaforme di social media. A tale fine, sistemi automatici in grado di rilevare contenuti offensivi nei confronti di alcuni gruppi sociali sono importanti per facilitare il lavoro dei moderatori di queste piattaforme a rimuovere ogni commento offensivo usato nei social media. In questo articolo, vi presentiamo sia dei modelli di apprendimento automatico sviluppati all’Università di Washington in Tacoma per il rilevamento della misoginia, ovvero discorsi offensivi usati nei tweet in lingua inglese contro le donne, sia i risultati ottenuti con questi modelli nel processo per l’identificazione automatica della misoginia in EVALITA2018.",
"title": ""
},
{
"docid": "78fe279ca9a3e355726ffacb09302be5",
"text": "In present, dynamically developing organizations, that often realize business tasks using the project-based approach, effective project management is of paramount importance. Numerous reports and scientific papers present lists of critical success factors in project management, and communication management is usually at the very top of the list. But even though the communication practices are found to be associated with most of the success dimensions, they are not given enough attention and the communication processes and practices formalized in the company's project management methodology are neither followed nor prioritized by project managers. This paper aims at supporting project managers and teams in more effective implementation of best practices in communication management by proposing a set of communication management patterns, which promote a context-problem-solution approach to communication management in projects.",
"title": ""
},
{
"docid": "f70bd0a47eac274a1bb3b964f34e0a63",
"text": "Although deep neural network (DNN) has achieved many state-of-the-art results, estimating the uncertainty presented in the DNN model and the data is a challenging task. Problems related to uncertainty such as classifying unknown classes (class which does not appear in the training data) data as known class with high confidence, is critically concerned in the safety domain area (e.g, autonomous driving, medical diagnosis). In this paper, we show that applying current Bayesian Neural Network (BNN) techniques alone does not effectively capture the uncertainty. To tackle this problem, we introduce a simple way to improve the BNN by using one class classification (in this paper, we use the term ”set classification” instead). We empirically show the result of our method on an experiment which involves three datasets: MNIST, notMNIST and FMNIST.",
"title": ""
},
{
"docid": "02ea5b61b22d5af1b9362ca46ead0dea",
"text": "This paper describes a student project examining mechanisms with which to attack Bluetooth-enabled devices. The paper briefly describes the protocol architecture of Bluetooth and the Java interface that programmers can use to connect to Bluetooth communication services. Several types of attacks are described, along with a detailed example of two attack tools, Bloover II and BT Info.",
"title": ""
},
{
"docid": "1aa3d2456e34c8ab59a340fd32825703",
"text": "It is well known that guided soft tissue healing with a provisional restoration is essential to obtain optimal anterior esthetics in the implant prosthesis. What is not well known is how to transfer a record of beautiful anatomically healed tissue to the laboratory. With the advent of emergence profile healing abutments and corresponding impression copings, there has been a dramatic improvement over the original 4.0-mm diameter design. This is a great improvement, however, it still does not accurately transfer a record of anatomically healed tissue, which is often triangularly shaped, to the laboratory, because the impression coping is a round cylinder. This article explains how to fabricate a \"custom impression coping\" that is an exact record of anatomically healed tissue for accurate duplication. This technique is significant because it allows an even closer replication of the natural dentition.",
"title": ""
},
{
"docid": "3005c32c7cf0e90c59be75795e1c1fbc",
"text": "In this paper, a novel AR interface is proposed that provides generic solutions to the tasks involved in augmenting simultaneously different types of virtual information and processing of tracking data for natural interaction. Participants within the system can experience a real-time mixture of 3D objects, static video, images, textual information and 3D sound with the real environment. The user-friendly AR interface can achieve maximum interaction using simple but effective forms of collaboration based on the combinations of human–computer interaction techniques. To prove the feasibility of the interface, the use of indoor AR techniques are employed to construct innovative applications and demonstrate examples from heritage to learning systems. Finally, an initial evaluation of the AR interface including some initial results is presented.",
"title": ""
},
{
"docid": "695766e9a526a0a25c4de430242e46d2",
"text": "In the large-scale Wireless Metropolitan Area Network (WMAN) consisting of many wireless Access Points (APs),choosing the appropriate position to place cloudlet is very important for reducing the user's access delay. For service provider, it isalways very costly to deployment cloudlets. How many cloudletsshould be placed in a WMAN and how much resource eachcloudlet should have is very important for the service provider. In this paper, we study the cloudlet placement and resourceallocation problem in a large-scale Wireless WMAN, we formulatethe problem as an novel cloudlet placement problem that givenan average access delay between mobile users and the cloudlets, place K cloudlets to some strategic locations in the WMAN withthe objective to minimize the number of use cloudlet K. Wethen propose an exact solution to the problem by formulatingit as an Integer Linear Programming (ILP). Due to the poorscalability of the ILP, we devise a clustering algorithm K-Medoids(KM) for the problem. For a special case of the problem whereall cloudlets computing capabilities have been given, we proposean efficient heuristic for it. We finally evaluate the performanceof the proposed algorithms through experimental simulations. Simulation result demonstrates that the proposed algorithms areeffective.",
"title": ""
},
{
"docid": "d4f3dc5efe166df222b2a617d5fbd5e4",
"text": "IKEA is the largest furniture retailer in the world. Their critical success factor is that IKEA can seamlessly integrate and optimize end-to-end supply chain to maximize customer value, eventually build their dominate position in entire value chain. This article summarizes and analyzes IKEA's successful practices of value chain management. Hopefully it can be a good reference or provide strategic insight for Chinese enterprises.",
"title": ""
},
{
"docid": "935282c2cbfa34ed24bc598a14a85273",
"text": "Cybersecurity is a national priority in this big data era. Because of negative externalities and the resulting lack of economic incentives, companies often underinvest in security controls, despite government and industry recommendations. Although many existing studies on security have explored technical solutions, only a few have looked at the economic motivations. To fill the gap, we propose an approach to increase the incentives of organizations to address security problems. Specifically, we utilize and process existing security vulnerability data, derive explicit security performance information, and disclose the information as feedback to organizations and the public. We regularly release information on the organizations with the worst security behaviors, imposing reputation loss on them. The information is also used by organizations for self-evaluation in comparison to others. Therefore, additional incentives are solicited out of reputation concern and social comparison. To test the effectiveness of our approach, we conducted a field quasi-experiment for outgoing spam for 1,718 autonomous systems in eight countries and published SpamRankings.net, the website we created to release information. We found that the treatment group subject to information disclosure reduced outgoing spam approximately by 16%. We also found that the more observed outgoing spam from the top spammer, the less likely an organization would be to reduce its own outgoing spam, consistent with the prediction by social comparison theory. Our results suggest that social information and social comparison can be effectively leveraged to encourage desirable behavior. Our study contributes to both information architecture design and public policy by suggesting how information can be used as intervention to impose economic incentives. The usual disclaimers apply for NSF grants 1228990 and 0831338.",
"title": ""
},
{
"docid": "72485a3c94c2dfa5121e91f2a3fc0f4a",
"text": "Four experiments support the hypothesis that syntactically relevant information about verbs is encoded in the lexicon in semantic event templates. A verb's event template represents the participants in an event described by the verb and the relations among the participants. The experiments show that lexical decision times are longer for verbs with more complex templates than verbs with less complex templates and that, for both transitive and intransitive sentences, sentences containing verbs with more complex templates take longer to process. In contrast, sentence processing times did not depend on the probabilities with which the verbs appear in transitive versus intransitive constructions in a large corpus of naturally produced sentences.",
"title": ""
},
{
"docid": "ef4272cd4b0d4df9aa968cc9ff528c1e",
"text": "Estimating action quality, the process of assigning a \"score\" to the execution of an action, is crucial in areas such as sports and health care. Unlike action recognition, which has millions of examples to learn from, the action quality datasets that are currently available are small-typically comprised of only a few hundred samples. This work presents three frameworks for evaluating Olympic sports which utilize spatiotemporal features learned using 3D convolutional neural networks (C3D) and perform score regression with i) SVR ii) LSTM and iii) LSTM followed by SVR. An efficient training mechanism for the limited data scenarios is presented for clip-based training with LSTM. The proposed systems show significant improvement over existing quality assessment approaches on the task of predicting scores of diving, vault, figure skating. SVR-based frameworks yield better results, LSTM-based frameworks are more natural for describing an action and can be used for improvement feedback.",
"title": ""
},
{
"docid": "f376948c1b8952b0b19efad3c5ca0471",
"text": "This essay grew out of an examination of one-tailed significance testing. One-tailed tests were little advocated by the founders of modern statistics but are widely used and recommended nowadays in the biological, behavioral and social sciences. The high frequency of their use in ecology and animal behavior and their logical indefensibil-ity have been documented in a companion review paper. In the present one, we trace the roots of this problem and counter some attacks on significance testing in general. Roots include: the early but irrational dichotomization of the P scale and adoption of the 'significant/non-significant' terminology; the mistaken notion that a high P value is evidence favoring the null hypothesis over the alternative hypothesis; and confusion over the distinction between statistical and research hypotheses. Resultant widespread misuse and misinterpretation of significance tests have also led to other problems, such as unjustifiable demands that reporting of P values be disallowed or greatly reduced and that reporting of confidence intervals and standardized effect sizes be required in their place. Our analysis of these matters thus leads us to a recommendation that for standard types of significance assessment the paleoFisherian and Neyman-Pearsonian paradigms be replaced by a neoFisherian one. The essence of the latter is that a critical α (probability of type I error) is not specified, the terms 'significant' and 'non-significant' are abandoned, that high P values lead only to suspended judgments, and that the so-called \" three-valued logic \" of Cox, Kaiser, Tukey, Tryon and Harris is adopted explicitly. Confidence intervals and bands, power analyses, and severity curves remain useful adjuncts in particular situations. Analyses conducted under this paradigm we term neoFisherian significance assessments (NFSA). Their role is assessment of the existence, sign and magnitude of statistical effects. The common label of null hypothesis significance tests (NHST) is retained for paleoFisherian and Neyman-Pearsonian approaches and their hybrids. The original Neyman-Pearson framework has no utility outside quality control type applications. Some advocates of Bayesian, likelihood and information-theoretic approaches to model selection have argued that P values and NFSAs are of little or no value, but those arguments do not withstand critical review. Champions of Bayesian methods in particular continue to overstate their value and relevance. 312 Hurlbert & Lombardi • ANN. ZOOL. FeNNICI Vol. 46 \" … the object of statistical methods is the reduction of data. A quantity of data … is to be replaced by relatively few quantities which shall …",
"title": ""
},
{
"docid": "4ad3c199ad1ba51372e9f314fc1158be",
"text": "Inner lead bonding (ILB) is used to thermomechanically join the Cu inner leads on a flexible film tape and Au bumps on a driver IC chip to form electrical paths. With the newly developed film carrier assembly technology, called chip on film (COF), the bumps are prepared separately on a film tape substrate and bonded on the finger lead ends beforehand; therefore, the assembly of IC chips can be made much simpler and cheaper. In this paper, three kinds of COF samples, namely forming, wrinkle, and flat samples, were prepared using conventional gang bonder. The peeling test was used to examine the bondability of ILB in terms of the adhesion strength between the inner leads and the bumps. According to the peeling test results, flat samples have competent strength, less variation, and better appearance than when using flip-chip bonder.",
"title": ""
},
{
"docid": "c89ca701d947ba6594be753470f152ac",
"text": "The visualization of an image collection is the process of displaying a collection of images on a screen under some specific layout requirements. This paper focuses on an important problem that is not well addressed by the previous methods: visualizing image collections into arbitrary layout shapes while arranging images according to user-defined semantic or visual correlations (e.g., color or object category). To this end, we first propose a property-based tree construction scheme to organize images of a collection into a tree structure according to user-defined properties. In this way, images can be adaptively placed with the desired semantic or visual correlations in the final visualization layout. Then, we design a two-step visualization optimization scheme to further optimize image layouts. As a result, multiple layout effects including layout shape and image overlap ratio can be effectively controlled to guarantee a satisfactory visualization. Finally, we also propose a tree-transfer scheme such that visualization layouts can be adaptively changed when users select different “images of interest.” We demonstrate the effectiveness of our proposed approach through the comparisons with state-of-the-art visualization techniques.",
"title": ""
},
{
"docid": "e787a1486a6563c15a74a07ed9516447",
"text": "This chapter describes how engineering principles can be used to estimate joint forces. Principles of static and dynamic analysis are reviewed, with examples of static analysis applied to the hip and elbow joints and to the analysis of joint forces in human ancestors. Applications to indeterminant problems of joint mechanics are presented and utilized to analyze equine fetlock joints.",
"title": ""
},
{
"docid": "ade59b46fca7fbf99800370435e1afe6",
"text": "etretinate to PUVA was associated with better treatment response. In our patients with psoriasis, topical PUVA achieved improvement rates comparable with oral PUVA, with a mean cumulative UVA dose of 187.5 J ⁄ cm. Our study contradicts previous observations made in other studies on vitiligo and demonstrates that topical PUVA does have a limited therapeutic effect in vitiligo. Oral and topical PUVA showed low but equal efficacy in patients with vitiligo with a similar mean number of treatments to completion. Approximately one-quarter of our patients with vitiligo had discontinued PUVA therapy, which probably affected the outcome. It has been shown that at least 1 year of continuous and regular therapy with oral PUVA is needed to achieve a sufficient degree of repigmentation. Shorter periods were not found to be sufficient to observe clinical signs of repigmentation. Currently it is not known if the same holds true for topical PUVA. In conclusion, our results show that the efficacy of topical PUVA is comparable with that of oral PUVA, and favoured topical PUVA treatment especially in the eczema group with respect to cumulative UVA doses and success rates. Given the necessity for long-term treatment with local PUVA therapies, successful management requires maintenance of full patient compliance. Because of this, the results in this study should not only be attributed to the therapies. Because of its safety and the simplicity, topical PUVA should be considered as an alternative therapy to other phototherapy methods.",
"title": ""
},
{
"docid": "5a0fe40414f7881cc262800a43dfe4d0",
"text": "In this work, a passive rectifier circuit is presented, which is operating at 868 MHz. It allows energy harvesting from low power RF waves with a high efficiency. It consists of a novel multiplier circuit design and high quality components to reduce parasitic effects, losses and reaches a low startup voltage. Using lower capacitor rises up the switching speed of the whole circuit. An inductor L serves to store energy in a magnetic field during the negative cycle wave and returns it during the positive one. A low pass filter is arranged in cascade with the rectifier circuit to reduce ripple at high frequencies and to get a stable DC signal. A 50 kΩ load is added at the output to measure the output power and to visualize the behavior of the whole circuit. Simulation results show an outstanding potential of this RF-DC converter witch has a relative high sensitivity beginning with -40 dBm.",
"title": ""
},
{
"docid": "ed1a3ca3e558eeb33e2841fa4b9c28d2",
"text": "© 2010 ETRI Journal, Volume 32, Number 4, August 2010 In this paper, we present a low-voltage low-dropout voltage regulator (LDO) for a system-on-chip (SoC) application which, exploiting the multiplication of the Miller effect through the use of a current amplifier, is frequency compensated up to 1-nF capacitive load. The topology and the strategy adopted to design the LDO and the related compensation frequency network are described in detail. The LDO works with a supply voltage as low as 1.2 V and provides a maximum load current of 50 mA with a drop-out voltage of 200 mV: the total integrated compensation capacitance is about 40 pF. Measurement results as well as comparison with other SoC LDOs demonstrate the advantage of the proposed topology.",
"title": ""
},
{
"docid": "e5d323fe9bf2b5800043fa0e4af6849a",
"text": "A central question in psycholinguistic research is how listeners isolate words from connected speech despite the paucity of clear word-boundary cues in the signal. A large body of empirical evidence indicates that word segmentation is promoted by both lexical (knowledge-derived) and sublexical (signal-derived) cues. However, an account of how these cues operate in combination or in conflict is lacking. The present study fills this gap by assessing speech segmentation when cues are systematically pitted against each other. The results demonstrate that listeners do not assign the same power to all segmentation cues; rather, cues are hierarchically integrated, with descending weights allocated to lexical, segmental, and prosodic cues. Lower level cues drive segmentation when the interpretive conditions are altered by a lack of contextual and lexical information or by white noise. Taken together, the results call for an integrated, hierarchical, and signal-contingent approach to speech segmentation.",
"title": ""
}
] |
scidocsrr
|
c28e3c84c62220fd491ecc62bacc1c08
|
Trust cues fostering initial consumers' trust: usability inspection of nutrition and healthcare websites
|
[
{
"docid": "9b7654390d496cb041f3073dcfb07e67",
"text": "Electronic commerce (EC) transactions are subject to multiple information security threats. Proposes that consumer trust in EC transactions is influenced by perceived information security and distinguishes it from the objective assessment of security threats. Proposes mechanisms of encryption, protection, authentication, and verification as antecedents of perceived information security. These mechanisms are derived from technological solutions to security threats that are visible to consumers and hence contribute to actual consumer perceptions. Tests propositions in a study of 179 consumers and shows a significant relationship between consumers’ perceived information security and trust in EC transactions. Explores the role of limited financial liability as a surrogate for perceived security. However, the findings show that there is a minimal effect of financial liability on consumers’ trust in EC. Engenders several new insights regarding the role of perceived security in EC transactions.",
"title": ""
}
] |
[
{
"docid": "8ea2dadd6024e2f1b757818e0c5d76fa",
"text": "BACKGROUND\nLysergic acid diethylamide (LSD) is a potent serotonergic hallucinogen or psychedelic that modulates consciousness in a marked and novel way. This study sought to examine the acute and mid-term psychological effects of LSD in a controlled study.\n\n\nMETHOD\nA total of 20 healthy volunteers participated in this within-subjects study. Participants received LSD (75 µg, intravenously) on one occasion and placebo (saline, intravenously) on another, in a balanced order, with at least 2 weeks separating sessions. Acute subjective effects were measured using the Altered States of Consciousness questionnaire and the Psychotomimetic States Inventory (PSI). A measure of optimism (the Revised Life Orientation Test), the Revised NEO Personality Inventory, and the Peter's Delusions Inventory were issued at baseline and 2 weeks after each session.\n\n\nRESULTS\nLSD produced robust psychological effects; including heightened mood but also high scores on the PSI, an index of psychosis-like symptoms. Increased optimism and trait openness were observed 2 weeks after LSD (and not placebo) and there were no changes in delusional thinking.\n\n\nCONCLUSIONS\nThe present findings reinforce the view that psychedelics elicit psychosis-like symptoms acutely yet improve psychological wellbeing in the mid to long term. It is proposed that acute alterations in mood are secondary to a more fundamental modulation in the quality of cognition, and that increased cognitive flexibility subsequent to serotonin 2A receptor (5-HT2AR) stimulation promotes emotional lability during intoxication and leaves a residue of 'loosened cognition' in the mid to long term that is conducive to improved psychological wellbeing.",
"title": ""
},
{
"docid": "41d546266db9b3e9ec5071e4926abb8d",
"text": "Estimating the shape of transparent and refractive objects is one of the few open problems in 3D reconstruction. Under the assumption that the rays refract only twice when traveling through the object, we present the first approach to simultaneously reconstructing the 3D positions and normals of the object's surface at both refraction locations. Our acquisition setup requires only two cameras and one monitor, which serves as the light source. After acquiring the ray-ray correspondences between each camera and the monitor, we solve an optimization function which enforces a new position-normal consistency constraint. That is, the 3D positions of surface points shall agree with the normals required to refract the rays under Snell's law. Experimental results using both synthetic and real data demonstrate the robustness and accuracy of the proposed approach.",
"title": ""
},
{
"docid": "4f478443484f0eb9f9fec5a6a0966544",
"text": "The data warehouse facilitates knowledge workers in decision making process. A good DW design can actually reduce the report processing time but, it requires substantial efforts in ETL design and implementation. In this paper, the authors have focused on the working of Extraction, Transformation and Loading. The focus has also been laid on the data quality problem which in result leads to falsification of analysis based on that data. The authors have also analyzed and compared various ETL modeling processes. So this study would be substantially fruitful for understanding various approaches of ETL modeling in data warehousing.",
"title": ""
},
{
"docid": "4d56e63ea8d3b4325aa5c7f9baa9eaeb",
"text": "In this paper, the concepts of input/output-to-state stability (IOSS) and state-norm estimators are considered for switched nonlinear systems under average dwell-time switching signals. We show that when the average dwell-time is large enough, a switched system is IOSS if all of its constituent subsystems are IOSS. Moreover, under the same conditions, a non-switched state-norm estimator exists for the switched system. Furthermore, if some of the constituent subsystems are not IOSS, we show that still IOSS can be established for the switched system, if the activation time of the non-IOSS subsystems is not too big. Again, under the same conditions, a state-norm estimator exists for the switched system. However, in this case, the state-norm estimator is a switched system itself, consisting of two subsystems. We show that this state-norm estimator can be constructed such that its switching times are independent of the switching times of the switched system it is designed for. © 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e4632cf52719eea1565d04ec4e068e16",
"text": "This study examined the correlation between body mass index as independent variable, and body image and fear of negative evaluation as dependent variables, as well as the moderating role of self-esteem in these correlations. A total of 318 Malaysian young adults were conveniently recruited to do the self-administered survey on the demographic characteristics body image, fear of negative evaluation, and self-esteem. Partial least squares structural equation modeling was used to test the research hypotheses. The results revealed that body mass index was negatively associated with body image, while no such correlation was found with fear of negative evaluation. Meanwhile, the negative correlation of body mass index with body image was stronger among those with lower self-esteem, while a positive association of body mass index with fear of negative evaluation was significant only among individuals with low self-esteem.",
"title": ""
},
{
"docid": "61f9b5b698c847bfb6316fdb5481d529",
"text": "We present a feature vector formation technique for documents Sparse Composite Document Vector (SCDV) which overcomes several shortcomings of the current distributional paragraph vector representations that are widely used for text representation. In SCDV, word embeddings are clustered to capture multiple semantic contexts in which words occur. They are then chained together to form document topic-vectors that can express complex, multi-topic documents. Through extensive experiments on multi-class and multi-label classification tasks, we outperform the previous state-of-the-art method, NTSG (Liu et al., 2015a). We also show that SCDV embeddings perform well on heterogeneous tasks like Topic Coherence, context-sensitive Learning and Information Retrieval. Moreover, we achieve significant reduction in training and prediction times compared to other representation methods. SCDV achieves best of both worlds better performance with lower time and space complexity.",
"title": ""
},
{
"docid": "f2ab3ba4503f4c6173e3ea1d273791ac",
"text": "Our starting point for developing the Studierstube system was the belief that augmented reality, the less obtrusive cousin of virtual reality, has a better chance of becoming a viable user interface for applications requiring manipulation of complex three-dimensional information as a daily routine. In essence, we are searching for a 3-D user interface metaphor as powerful as the desktop metaphor for 2-D. At the heart of the Studierstube system, collaborative augmented reality is used to embed computer-generated images into the real work environment. In the first part of this paper, we review the user interface of the initial Studierstube system, in particular the implementation of collaborative augmented reality, and the Personal Interaction Panel, a two-handed interface for interaction with the system. In the second part, an extended Studierstube system based on a heterogeneous distributed architecture is presented. This system allows the user to combine multiple approaches augmented reality, projection displays, and ubiquitous computingto the interface as needed. The environment is controlled by the Personal Interaction Panel, a twohanded, pen-and-pad interface that has versatile uses for interacting with the virtual environment. Studierstube also borrows elements from the desktop, such as multitasking and multi-windowing. The resulting software architecture is a user interface management system for complex augmented reality applications. The presentation is complemented by selected application examples.",
"title": ""
},
{
"docid": "3d20ba5dc32270cb75df7a2d499a70e4",
"text": "The Maximum Margin Planning (MMP) (Ratliff et al., 2006) algorithm solves imitation learning problems by learning linear mappings from features to cost functions in a planning domain. The learned policy is the result of minimum-cost planning using these cost functions. These mappings are chosen so that example policies (or trajectories) given by a teacher appear to be lower cost (with a lossscaled margin) than any other policy for a given planning domain. We provide a novel approach, MMPBOOST , based on the functional gradient descent view of boosting (Mason et al., 1999; Friedman, 1999a) that extends MMP by “boosting” in new features. This approach uses simple binary classification or regression to improve performance of MMP imitation learning, and naturally extends to the class of structured maximum margin prediction problems. (Taskar et al., 2005) Our technique is applied to navigation and planning problems for outdoor mobile robots and robotic legged locomotion.",
"title": ""
},
{
"docid": "5db506a8f6b962362e4d8bf2214afab6",
"text": "There are three main issues in non-invasive (NI) glucose measurements: namely, specificity, compartmentalization of glucose values, and calibration. There has been progress in the use of near-infrared and mid-infrared spectroscopy. Recently new glucose measurement methods have been developed, exploiting the effect of glucose on erythrocyte scattering, new photoacoustic phenomenon, optical coherence tomography, thermo-optical studies on human skin, Raman spectroscopy studies, fluorescence measurements, and use of photonic crystals. In addition to optical methods, in vivo electrical impedance results have been reported. Some of these methods measure intrinsic properties of glucose; others deal with its effect on tissue or blood properties. Recent studies on skin from individuals with diabetes and its response to stimuli, skin thermo-optical response, peripheral blood flow, and red blood cell rheology in diabetes shed new light on physical and physiological changes resulting from the disease that can affect NI glucose measurements. There have been advances in understanding compartmentalization of glucose values by targeting certain regions of human tissue. Calibration of NI measurements and devices is still an open question. More studies are needed to understand the specific glucose signals and signals that are due to the effect of glucose on blood and tissue properties. These studies should be performed under normal physiological conditions and in the presence of other co-morbidities.",
"title": ""
},
{
"docid": "13529522be402878286138168f264478",
"text": "I. Cantador (), P. Castells Universidad Autónoma de Madrid 28049 Madrid, Spain e-mails: ivan.cantador@uam.es, pablo.castells@uam.es Abstract An increasingly important type of recommender systems comprises those that generate suggestions for groups rather than for individuals. In this chapter, we revise state of the art approaches on group formation, modelling and recommendation, and present challenging problems to be included in the group recommender system research agenda in the context of the Social Web.",
"title": ""
},
{
"docid": "88def96b7287ce217f1abf8fb1b413a5",
"text": "Designing a metric manually for unsupervised sequence generation tasks, such as text generation, is essentially difficult. In a such situation, learning a metric of a sequence from data is one possible solution. The previous study, SeqGAN, proposed the framework for unsupervised sequence generation, in which a metric is learned from data, and a generator is optimized with regard to the learned metric with policy gradient, inspired by generative adversarial nets (GANs) and reinforcement learning. In this paper, we make two proposals to learn better metric than SeqGAN’s: partial reward function and expert-based reward function training. The partial reward function is a reward function for a partial sequence of a certain length. SeqGAN employs a reward function for completed sequence only. By combining long-scale and short-scale partial reward functions, we expect a learned metric to be able to evaluate a partial correctness as well as a coherence of a sequence, as a whole. In expert-based reward function training, a reward function is trained to discriminate between an expert (or true) sequence and a fake sequence that is produced by editing an expert sequence. Expert-based reward function training is not a kind of GAN frameworks. This makes the optimization of the generator easier. We examine the effect of the partial reward function and expert-based reward function training on synthetic data and real text data, and show improvements over SeqGAN and the model trained with MLE. Specifically, whereas SeqGAN gains 0.42 improvement of NLL over MLE on synthetic data, our best model gains 3.02 improvement, and whereas SeqGAN gains 0.029 improvement of BLEU over MLE, our best model gains 0.250 improvement.",
"title": ""
},
{
"docid": "63893d6406c581e9598b00f7ba95a065",
"text": "Security researchers can send vulnerability notifications to take proactive measures in securing systems at scale. However, the factors affecting a notification’s efficacy have not been deeply explored. In this paper, we report on an extensive study of notifying thousands of parties of security issues present within their networks, with an aim of illuminating which fundamental aspects of notifications have the greatest impact on efficacy. The vulnerabilities used to drive our study span a range of protocols and considerations: exposure of industrial control systems; apparent firewall omissions for IPv6-based services; and exploitation of local systems in DDoS amplification attacks. We monitored vulnerable systems for several weeks to determine their rate of remediation. By comparing with experimental controls, we analyze the impact of a number of variables: choice of party to contact (WHOIS abuse contacts versus national CERTs versus US-CERT), message verbosity, hosting an information website linked to in the message, and translating the message into the notified party’s local language. We also assess the outcome of the emailing process itself (bounces, automated replies, human replies, silence) and characterize the sentiments and perspectives expressed in both the human replies and an optional anonymous survey that accompanied our notifications. We find that various notification regimens do result in different outcomes. The best observed process was directly notifying WHOIS contacts with detailed information in the message itself. These notifications had a statistically significant impact on improving remediation, and human replies were largely positive. However, the majority of notified contacts did not take action, and even when they did, remediation was often only partial. Repeat notifications did not further patching. These results are promising but ultimately modest, behooving the security community to more deeply investigate ways to improve the effectiveness of vulnerability notifications.",
"title": ""
},
{
"docid": "4e0ff4875a4dff6863734c964db54540",
"text": "We present a personalized recommender system using neural network for recommending products, such as eBooks, audio-books (“Anonymous audio book service”), Mobile Apps, Video and Music. It produces recommendations based on user consumption history: purchases, listens or watches. Our key contribution is to formulate recommendation problem as a model that encodes historical behavior to predict the future behavior using soft data split, combining predictor and autoencoder models. We introduce convolutional layer for learning the importance (time decay) of the purchases depending on their purchase date and demonstrate that the shape of the time decay function can be well approximated by a parametrical function. We present offline experimental results showing that neural networks with two hidden layers can capture seasonality changes, and at the same time outperform other modeling techniques, including our recommender in production. Most importantly, we demonstrate that our model can be scaled to all digital categories. Finally, we show online A/B test results, discuss key improvements to the neural network model, and describe our production pipeline.",
"title": ""
},
{
"docid": "d580021d1e7cfe44e58dbace3d5c7bee",
"text": "We believe that humanoid robots provide new tools to investigate human social cognition, the processes underlying everyday interactions between individuals. Resonance is an emerging framework to understand social interactions that is based on the finding that cognitive processes involved when experiencing a mental state and when perceiving another individual experiencing the same mental state overlap, both at the behavioral and neural levels. We will first review important aspects of his framework. In a second part, we will discuss how this framework is used to address questions pertaining to artificial agents' social competence. We will focus on two types of paradigm, one derived from experimental psychology and the other using neuroimaging, that have been used to investigate humans' responses to humanoid robots. Finally, we will speculate on the consequences of resonance in natural social interactions if humanoid robots are to become integral part of our societies.",
"title": ""
},
{
"docid": "86de6e4d945f0d1fa7a0b699064d7bd5",
"text": "BACKGROUND\nTo increase understanding of the relationships among sexual violence, paraphilias, and mental illness, the authors assessed the legal and psychiatric features of 113 men convicted of sexual offenses.\n\n\nMETHOD\n113 consecutive male sex offenders referred from prison, jail, or probation to a residential treatment facility received structured clinical interviews for DSM-IV Axis I and II disorders, including sexual disorders. Participants' legal, sexual and physical abuse, and family psychiatric histories were also evaluated. We compared offenders with and without paraphilias.\n\n\nRESULTS\nParticipants displayed high rates of lifetime Axis I and Axis II disorders: 96 (85%) had a substance use disorder; 84 (74%), a paraphilia; 66 (58%), a mood disorder (40 [35%], a bipolar disorder and 27 [24%], a depressive disorder); 43 (38%), an impulse control disorder; 26 (23%), an anxiety disorder; 10 (9%), an eating disorder; and 63 (56%), antisocial personality disorder. Presence of a paraphilia correlated positively with the presence of any mood disorder (p <.001), major depression (p =.007), bipolar I disorder (p =.034), any anxiety disorder (p=.034), any impulse control disorder (p =.006), and avoidant personality disorder (p =.013). Although offenders without paraphilias spent more time in prison than those with paraphilias (p =.019), paraphilic offenders reported more victims (p =.014), started offending at a younger age (p =.015), and were more likely to perpetrate incest (p =.005). Paraphilic offenders were also more likely to be convicted of (p =.001) or admit to (p <.001) gross sexual imposition of a minor. Nonparaphilic offenders were more likely to have adult victims exclusively (p =.002), a prior conviction for theft (p <.001), and a history of juvenile offenses (p =.058).\n\n\nCONCLUSIONS\nSex offenders in the study population displayed high rates of mental illness, substance abuse, paraphilias, personality disorders, and comorbidity among these conditions. Sex offenders with paraphilias had significantly higher rates of certain types of mental illness and avoidant personality disorder. Moreover, paraphilic offenders spent less time in prison but started offending at a younger age and reported more victims and more non-rape sexual offenses against minors than offenders without paraphilias. On the basis of our findings, we assert that sex offenders should be carefully evaluated for the presence of mental illness and that sex offender management programs should have a capacity for psychiatric treatment.",
"title": ""
},
{
"docid": "60fdd64d8d715820afbadb91bcebfbe1",
"text": "We present a joint modeling approach to identify salient discussion points in spoken meetings as well as to label the discourse relations between speaker turns. A variation of our model is also discussed when discourse relations are treated as latent variables. Experimental results on two popular meeting corpora show that our joint model can outperform state-of-the-art approaches for both phrasebased content selection and discourse relation prediction tasks. We also evaluate our model on predicting the consistency among team members’ understanding of their group decisions. Classifiers trained with features constructed from our model achieve significant better predictive performance than the state-of-the-art.",
"title": ""
},
{
"docid": "792c0ac288242cedad24627df3092a94",
"text": "The popular media have publicized the idea that social networking Web sites (e.g., Facebook) may enrich the interpersonal lives of people who struggle to make social connections. The opportunity that such sites provide for self-disclosure-a necessary component in the development of intimacy--could be especially beneficial for people with low self-esteem, who are normally hesitant to self-disclose and who have difficulty maintaining satisfying relationships. We suspected that posting on Facebook would reduce the perceived riskiness of self-disclosure, thus encouraging people with low self-esteem to express themselves more openly. In three studies, we examined whether such individuals see Facebook as a safe and appealing medium for self-disclosure, and whether their actual Facebook posts enabled them to reap social rewards. We found that although people with low self-esteem considered Facebook an appealing venue for self-disclosure, the low positivity and high negativity of their disclosures elicited undesirable responses from other people.",
"title": ""
},
{
"docid": "395ae6ab0506de5fc2f1b9815b49cfec",
"text": "Subjectivity and sentiment analysis focuses on the automatic identification of private states, such as opinions, emotions, sentiments, evaluations, beliefs, and speculations in natural language. While subjectivity classification labels text as either subjective or objective, sentiment classification adds an additional level of granularity, by further classifying subjective text as either positive, negative or neutral. To date, a large number of text processing applications have already used techniques for automatic sentiment and subjectivity analysis, including automatic expressive text-to-speech synthesis [1], tracking sentiment timelines in on-line forums and news [22, 2], and mining opinions from product reviews [11]. In many natural language processing tasks, subjectivity and sentiment classification have been used as a first phase filtering to generate more viable data. Research that benefitted from this additional layering ranges from question answering [48], to conversation summarization [7] and text semantic analysis [41, 8]. Much of the research work to date on sentiment and subjectivity analysis has been applied to English, but work on other languages is growing, including Japanese [19, 34, 35, 15], Chinese [12, 49], German [18], and Romanian [23, 4]. In addition, several participants in the Chinese and Japanese Opinion Extraction tasks of NTCIR-6 [17] performed subjectivity and sentiment analysis in languages other than English.1 As only 29.4% of Internet users speak English,2 the construction of resources and tools for subjectivity and sentiment analysis in languages other than English is a growing need. In this chapter, we review the main directions of research focusing on the development of resources and tools for multilingual subjectivity and sentiment analysis. Specifically, we identify and overview three main categories of methods: (1) those focusing on word and phrase level annotations, overviewed in Section 4; (2) methods targeting the labeling of sentences, described in Section 5; and finally (3) methods for document-level annotations, presented in Section 6. We address both multilingual and cross-lingual methods. For multilingual methods, we review work concerned with languages other than English, where the resources and tools have been specifically developed for a given target language. In this category, in Section 3 we also briefly overview the main directions of work on English data, highlighting the methods that can be easily",
"title": ""
},
{
"docid": "47faebfa7d65ebf277e57436cf7c2ca4",
"text": "Steganography is a method which can put data into a media without a tangible impact on the cover media. In addition, the hidden data can be extracted with minimal differences. In this paper, twodimensional discrete wavelet transform is used for steganography in 24-bit color images. This steganography is of blind type that has no need for original images to extract the secret image. In this algorithm, by the help of a structural similarity and a two-dimensional correlation coefficient, it is tried to select part of sub-band cover image instead of embedding location. These sub-bands are obtained by 3levels of applying the DWT. Also to increase the steganography resistance against cropping or insert visible watermark, two channels of color image is used simultaneously. In order to raise the security, an encryption algorithm based on Arnold transform was also added to the steganography operation. Because diversity of chaos scenarios is limited in Arnold transform, it could be improved by its mirror in order to increase the diversity of key. Additionally, an ability is added to encryption algorithm that can still maintain its efficiency against image crop. Transparency of steganography image is measured by the peak signalto-noise ratio that indicates the adequate transparency of steganography process. Extracted image similarity is also measured by two-dimensional correlation coefficient with more than 99% similarity. Moreover, steganography resistance against increasing and decreasing brightness and contrast, lossy compression, cropping image, changing scale and adding noise is acceptable",
"title": ""
},
{
"docid": "d411b5b732f9d7eec4fc065bc410ae1b",
"text": "What do you do to start reading robot hands and the mechanics of manipulation? Searching the book that you love to read first or find an interesting book that will make you want to read? Everybody has difference with their reason of reading a book. Actuary, reading habit must be from earlier. Many people may be love to read, but not a book. It's not fault. Someone will be bored to open the thick book with small words to read. In more, this is the real condition. So do happen probably with this robot hands and the mechanics of manipulation.",
"title": ""
}
] |
scidocsrr
|
b8fc65054d305075957542df91cc1e79
|
Towards unified depth and semantic prediction from a single image
|
[
{
"docid": "bd042b5e8d2d92966bde5e224bb8220b",
"text": "Output of our system is a 3D semantic+occupancy map. However due to lack of ground truth in that form, we need to evaluate using indirect approaches. To evaluate the segmentation accuracy, we evaluated it with standard 2D semantic segmentation methods for which human annotated ground truth exists. The 2D segmentation is obtained by back-projecting our 3D map to the camera images. However these kind of evaluation negatively harms our scores for the following reasons:",
"title": ""
},
{
"docid": "8a77882cfe06eaa88db529432ed31b0c",
"text": "We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation.",
"title": ""
},
{
"docid": "cc4c58f1bd6e5eb49044353b2ecfb317",
"text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.",
"title": ""
}
] |
[
{
"docid": "427ce3159bf598c306645a5b9e670c95",
"text": "In recent years, microblogging platforms have become good places to spread various spams, making the problem of gauging information credibility on social networks receive considerable attention especially under an emergency situation. Unlike previous studies on detecting rumors using tweets' inherent attributes generally, in this work, we shift the premise and focus on identifying event rumors on Weibo by extracting features from crowd responses that are texts of retweets (reposting tweets) and comments under a certain social event. Firstly the paper proposes a method of collecting theme data, including a sample set of tweets which have been confirmed to be false rumors based on information from the official rumor-busting service provided by Weibo. Secondly clustering analysis of tweets are made to examine the text features extracted from retweets and comments, and a classifier is trained based on observed feature distribution to automatically judge rumors from a mixed set of valid news and false information. The experiments show that the new features we propose are indeed effective in the classification, and especially some stop words and punctuations which are treated as noises in previous works can play an important role in rumor detection. To the best of our knowledge, this work is the first to detect rumors in Chinese via crowd responses under an emergency situation.",
"title": ""
},
{
"docid": "c3c5931200ff752d8285cc1068e779ee",
"text": "Speech-driven facial animation is the process which uses speech signals to automatically synthesize a talking character. The majority of work in this domain creates a mapping from audio features to visual features. This often requires post-processing using computer graphics techniques to produce realistic albeit subject dependent results. We present a system for generating videos of a talking head, using a still image of a person and an audio clip containing speech, that doesn’t rely on any handcrafted intermediate features. To the best of our knowledge, this is the first method capable of generating subject independent realistic videos directly from raw audio. Our method can generate videos which have (a) lip movements that are in sync with the audio and (b) natural facial expressions such as blinks and eyebrow movements 1. We achieve this by using a temporal GAN with 2 discriminators, which are capable of capturing different aspects of the video. The effect of each component in our system is quantified through an ablation study. The generated videos are evaluated based on their sharpness, reconstruction quality, and lip-reading accuracy. Finally, a user study is conducted, confirming that temporal GANs lead to more natural sequences than a static GAN-based approach.",
"title": ""
},
{
"docid": "dd740ca578eefd345b9b137210fdad82",
"text": "The new ultrafast cardiac single photon emission computed tomography (SPECT) cameras with cadmium-zinc-telluride (CZT)-based detectors are faster and produce higher quality images as compared to conventional SPECT cameras. We assessed the need for additional imaging, total imaging time, tracer dose and 1-year outcome between patients scanned with the CZT camera and a conventional SPECT camera. A total of 456 consecutive stable patients without known coronary artery disease underwent myocardial perfusion imaging on a hybrid SPECT/CT (64-slice) scanner using either conventional (n = 225) or CZT SPECT (n = 231). All patients started with low-dose stress imaging, combined with coronary calcium scoring. Rest imaging was only done when initial stress SPECT testing was equivocal or abnormal. Coronary CT angiography was subsequently performed in cases of ischaemic or equivocal SPECT findings. Furthermore, 1-year clinical follow-up was obtained with regard to coronary revascularization, nonfatal myocardial infarction or death. Baseline characteristics were comparable between the two groups. With the CZT camera, the need for rest imaging (35 vs 56%, p < 0.001) and additional coronary CT angiography (20 vs 28%, p = 0.025) was significantly lower as compared with the conventional camera. This resulted in a lower mean total administered isotope dose per patient (658 ± 390 vs 840 ± 421 MBq, p < 0.001) and shorter imaging time (6.39 ± 1.91 vs 20.40 ± 7.46 min, p < 0.001) with the CZT camera. After 1 year, clinical outcome was comparable between the two groups. As compared to images on a conventional SPECT camera, stress myocardial perfusion images acquired on a CZT camera are more frequently interpreted as normal with identical clinical outcome after 1-year follow-up. This lowers the need for additional testing, results in lower mean radiation dose and shortens imaging time.",
"title": ""
},
{
"docid": "0209132c7623c540c125a222552f33ac",
"text": "This paper reviews the criticism on the 4Ps Marketing Mix framework, the most popular tool of traditional marketing management, and categorizes the main objections of using the model as the foundation of physical marketing. It argues that applying the traditional approach, based on the 4Ps paradigm, is also a poor choice in the case of virtual marketing and identifies two main limitations of the framework in online environments: the drastically diminished role of the Ps and the lack of any strategic elements in the model. Next to identifying the critical factors of the Web marketing, the paper argues that the basis for successful E-Commerce is the full integration of the virtual activities into the company’s physical strategy, marketing plan and organisational processes. The four S elements of the Web-Marketing Mix framework present a sound and functional conceptual basis for designing, developing and commercialising Business-to-Consumer online projects. The model was originally developed for educational purposes and has been tested and refined by means of field projects; two of them are presented as case studies in the paper. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "38c96356f5fd3daef5f1f15a32971b57",
"text": "Recommendation systems make suggestions about artifacts to a user. For instance, they may predict whether a user would be interested in seeing a particular movie. Social recomendation methods collect ratings of artifacts from many individuals and use nearest-neighbor techniques to make recommendations to a user concerning new artifacts. However, these methods do not use the significant amount of other information that is often available about the nature of each artifact -such as cast lists or movie reviews, for example. This paper presents an inductive learning approach to recommendation that is able to use both ratings information and other forms of information about each artifact in predicting user preferences. We show that our method outperforms an existing social-filtering method in the domain of movie recommendations on a dataset of more than 45,000 movie ratings collected from a community of over 250 users. Introduction Recommendations are a part of everyday life. We usually rely on some external knowledge to make informed decisions about a particular artifact or action, for instance when we are going to see a movie or going to see a doctor. This knowledge can be derived from social processes. At other times, our judgments may be based on available information about an artifact and our known preferences. There are many factors which may influence a person in making choices, and ideally one would like to model as many of these factors as possible in a recommendation system. There are some general approaches to this problem. In one approach, the user of the system provides ratings of some artifacts or items. The system makes informed guesses about other items the user may like based on ratings other users have provided. This is the framework for social-filtering methods (Hill, Stead, Rosenstein Furnas 1995; Shardanand & Maes 1995). In a second approach, the system accepts information describing the nature of an item, and based on a sample of the user’s preferences, learns to predict which items the user will like (Lang 1995; Pazzani, Muramatsu, & Billsus 1996). We will call this approach content-based filtering, as it does not rely on social information (in the form of other users’ ratings). Both social and content-based filtering can be cast as learning problems: the objective is to *Department of Computer Science, Rutgers University, Piscataway, NJ 08855 We would like to thank Susan Dumais for useful discussions during the early stages of this work. Copyright ~)1998, American Association for Artificial Intelligence (www.aaai.org). All rights reserved. learn a function that can take a description of a user and an artifact and predict the user’s preferences concerning the artifact. Well-known recommendation systems like Recommender (Hill, Stead, Rosenstein & Furnas 1995) and Firefly (http: //www.firefly.net) (Shardanand & Maes 1995) are based on social-filtering principles. Recommender, the baseline system used in the work reported here, recommends as yet unseen movies to a user based on his prior ratings of movies and their similarity to the ratings of other users. Social-filtering systems perform well using only numeric assessments of worth, i.e., ratings. However, social-filtering methods leave open the question of what role content can play in the recommen-",
"title": ""
},
{
"docid": "168f901cbecec27a71122eea607d17ce",
"text": "This paper introduces Cartograph, a visualization system that harnesses the vast amount of world knowledge encoded within Wikipedia to create thematic maps of almost any data. Cartograph extends previous systems that visualize non-spatial data using geographic approaches. While these systems required data with an existing semantic structure, Cartograph unlocks spatial visualization for a much larger variety of datasets by enhancing input datasets with semantic information extracted from Wikipedia. Cartograph's map embeddings use neural networks trained on Wikipedia article content and user navigation behavior. Using these embeddings, the system can reveal connections between points that are unrelated in the original data sets, but are related in meaning and therefore embedded close together on the map. We describe the design of the system and key challenges we encountered, and we present findings from an exploratory user study",
"title": ""
},
{
"docid": "b5babae9b9bcae4f87f5fe02459936de",
"text": "The study evaluated the effects of formocresol (FC), ferric sulphate (FS), calcium hydroxide (Ca[OH](2)), and mineral trioxide aggregate (MTA) as pulp dressing agents in pulpotomized primary molars. Sixteen children each with at least four primary molars requiring pulpotomy were selected. Eighty selected teeth were divided into four groups and treated with one of the pulpotomy agent. The children were recalled for clinical and radiographic examination every 6 months during 2 years of follow-up. Eleven children with 56 teeth arrived for clinical and radiographic follow-up evaluation at 24 months. The follow-up evaluations revealed that the success rate was 76.9% for FC, 73.3% for FS, 46.1% for Ca(OH)(2), and 66.6% for MTA. In conclusion, Ca(OH)(2)is less appropriate for primary teeth pulpotomies than the other pulpotomy agents. FC and FS appeared to be superior to the other agents. However, there was no statistically significant difference between the groups.",
"title": ""
},
{
"docid": "4be087f37232aefa30da1da34a5e9ff5",
"text": "Many clinical studies have shown that electroencephalograms (EEG) of Alzheimer patients (AD) often have an abnormal power spectrum. In this paper a frequency band analysis of AD EEG signals is presented, with the aim of improving the diagnosis of AD from EEG signals. Relative power in different EEG frequency bands is used as features to distinguish between AD patients and healthy control subjects. Many different frequency bands between 4 and 30Hz are systematically tested, besides the traditional frequency bands, e.g., theta band (4–8Hz). The discriminative power of the resulting spectral features is assessed through statistical tests (Mann-Whitney U test). Moreover, linear discriminant analysis is conducted with those spectral features. The optimized frequency ranges (4–7Hz, 8–15Hz, 19–24Hz) yield substantially better classification performance than the traditional frequency bands (4–8Hz, 8–12Hz, 12–30Hz); the frequency band 4–7Hz is the optimal frequency range for detecting AD, which is similar to the classical theta band. The frequency bands were also optimized as features through leave-one-out crossvalidation, resulting in error-free classification. The optimized frequency bands may improve existing EEG based diagnostic tools for AD. Additional testing on larger AD datasets is required to verify the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "e1ecae98985cf87523492605bcfb468c",
"text": "This four-part series of articles provides an overview of the neurological examination of the elderly patient, particularly as it applies to patients with cognitive impairment, dementia or cerebrovascular disease.The focus is on the method and interpretation of the bedside physical examination; the mental state and cognitive examinations are not covered in this review.Part 1 (featured in the September issue) began with an approach to the neurological examination in normal aging and in disease, and reviewed components of the general physical,head and neck,neurovascular and cranial nerve examinations relevant to aging and dementia.Part 2 (featured in the October issue) covered the motor examination with an emphasis on upper motor neuron signs and movement disorders. Part 3(featured in the November issue) reviewed the assessment of coordination,balance and gait,and Part 4, featured here, discusses the muscle stretch reflexes, pathological and primitive reflexes, and sensory examination, and offers concluding remarks.Throughout this series, special emphasis is placed on the evaluation and interpretation of neurological signs in light of findings considered normal in the elderly.",
"title": ""
},
{
"docid": "25d2a1234952508c351fceb6b8d964ea",
"text": "This article provides an introduction and overview of sensory integration theory as it is used in occupational therapy practice for children with developmental disabilities. This review of the theoretical tenets of the theory, its historical foundations, and early research provides the reader with a basis for exploring current uses and applications. The key principles of the sensory integrative approach, including concepts such as \"the just right challenge\" and \"the adaptive response\" as conceptualized by A. Jean Ayres, the theory's founder, are presented to familiarize the reader with the approach. The state of research in this area is presented, including studies underway to further delineate the subtypes of sensory integrative dysfunction, the neurobiological mechanisms of poor sensory processing, advances in theory development, and the development of a fidelity measure for use in intervention studies. Finally, this article reviews the current state of the evidence to support this approach and suggests that consensual knowledge and empirical research are needed to further elucidate the theory and its utility for a variety of children with developmental disabilities. This is especially critical given the public pressure by parents of children with autism and other developmental disabilities to obtain services and who have anecdotally noted the utility of sensory integration therapy for helping their children function more independently. Key limiting factors to research include lack of funding, paucity of doctorate trained clinicians and researchers in occupational therapy, and the inherent heterogeneity of the population of children affected by sensory integrative dysfunction. A call to action for occupational therapy researchers, funding agencies, and other professions is made to support ongoing efforts and to develop initiatives that will lead to better diagnoses and effective intervention for sensory integrative dysfunction, which will improve the lives of children and their families.",
"title": ""
},
{
"docid": "4e88d4afb9a11713a7396612863e4176",
"text": "Wind turbines are typically operated to maximize their own performance without considering the impact of wake effects on nearby turbines. There is the potential to increase total power and reduce structural loads by properly coordinating the individual turbines in a wind farm. The effective design and analysis of such coordinated controllers requires turbine wake models of sufficient accuracy but low computational complexity. This paper first formulates a coordinated control problem for a two-turbine array. Next, the paper reviews several existing simulation tools that range from low-fidelity, quasi-static models to high-fidelity, computational fluid dynamic models. These tools are compared by evaluating the power, loads, and flow characteristics for the coordinated two-turbine array. The results in this paper highlight the advantages and disadvantages of existing wake models for design and analysis of coordinated wind farm controllers.",
"title": ""
},
{
"docid": "766b18cdae33d729d21d6f1b2b038091",
"text": "1.1 Terminology Intercultural communication or communication between people of different cultural backgrounds has always been and will probably remain an important precondition of human co-existance on earth. The purpose of this paper is to provide a framework of factors thatare important in intercultural communication within a general model of human, primarily linguistic, communication. The term intercultural is chosen over the largely synonymousterm cross-cultural because it is linked to language use such as “interdisciplinary”, that is cooperation between people with different scientific backgrounds. Perhaps the term also has somewhat fewer connotations than crosscultural. It is not cultures that communicate, whatever that might imply, but people (and possibly social institutions) with different cultural backgrounds that do. In general, the term”cross-cultural” is probably best used for comparisons between cultures (”crosscultural comparison”).",
"title": ""
},
{
"docid": "c4a74726ac56b0127e5920098e6f0258",
"text": "BACKGROUND\nAttention Deficit Hyperactivity disorder (ADHD) is one of the most common and challenging childhood neurobehavioral disorders. ADHD is known to negatively impact children, their families, and their community. About one-third to one-half of patients with ADHD will have persistent symptoms into adulthood. The prevalence in the United States is estimated at 5-11%, representing 6.4 million children nationwide. The variability in the prevalence of ADHD worldwide and within the US may be due to the wide range of factors that affect accurate assessment of children and youth. Because of these obstacles to assessment, ADHD is under-diagnosed, misdiagnosed, and undertreated.\n\n\nOBJECTIVES\nWe examined factors associated with making and receiving the diagnosis of ADHD. We sought to review the consequences of a lack of diagnosis and treatment for ADHD on children's and adolescent's lives and how their families and the community may be involved in these consequences.\n\n\nMETHODS\nWe reviewed scientific articles looking for factors that impact the identification and diagnosis of ADHD and articles that demonstrate naturalistic outcomes of diagnosis and treatment. The data bases PubMed and Google scholar were searched from the year 1995 to 2015 using the search terms \"ADHD, diagnosis, outcomes.\" We then reviewed abstracts and reference lists within those articles to rule out or rule in these or other articles.\n\n\nRESULTS\nMultiple factors have significant impact in the identification and diagnosis of ADHD including parents, healthcare providers, teachers, and aspects of the environment. Only a few studies detailed the impact of not diagnosing ADHD, with unclear consequences independent of treatment. A more significant number of studies have examined the impact of untreated ADHD. The experience around receiving a diagnosis described by individuals with ADHD provides some additional insights.\n\n\nCONCLUSION\nADHD diagnosis is influenced by perceptions of many different members of a child's community. A lack of clear understanding of ADHD and the importance of its diagnosis and treatment still exists among many members of the community including parents, teachers, and healthcare providers. More basic and clinical research will improve methods of diagnosis and information dissemination. Even before further advancements in science, strong partnerships between clinicians and patients with ADHD may be the best way to reduce the negative impacts of this disorder.",
"title": ""
},
{
"docid": "8f7428569e1d3036cdf4842d48b56c22",
"text": "This paper describes a unified model for role-based access control (RBAC). RBAC is a proven technology for large-scale authorization. However, lack of a standard model results in uncertainty and confusion about its utility and meaning. The NIST model seeks to resolve this situation by unifying ideas from prior RBAC models, commercial products and research prototypes. It is intended to serve as a foundation for developing future standards. RBAC is a rich and open-ended technology which is evolving as users, researchers and vendors gain experience with it. The NIST model focuses on those aspects of RBAC for which consensus is available. It is organized into four levels of increasing functional capabilities called flat RBAC, hierarchical RBAC, constrained RBAC and symmetric RBAC. These levels are cumulative and each adds exactly one new requirement. An alternate approach comprising flat and hierarchical RBAC in an ordered sequence and two unordered features—constraints and symmetry—is also presented. The paper furthermore identifies important attributes of RBAC not included in the NIST model. Some are not suitable for inclusion in a consensus document. Others require further work and agreement before standardization is feasible.",
"title": ""
},
{
"docid": "73a998535ab03730595ce5d9c1f071f7",
"text": "This article familiarizes counseling psychologists with qualitative research methods in psychology developed in the tradition of European phenomenology. A brief history includes some of Edmund Husserl’s basic methods and concepts, the adoption of existential-phenomenology among psychologists, and the development and formalization of qualitative research procedures in North America. The choice points and alternatives in phenomenological research in psychology are delineated. The approach is illustrated by a study of a recovery program for persons repeatedly hospitalized for chronic mental illness. Phenomenological research is compared with other qualitative methods, and some of its benefits for counseling psychology are identified.",
"title": ""
},
{
"docid": "7755e8c9234f950d0d5449602269e34b",
"text": "In this paper we describe a privacy-preserving method for commissioning an IoT device into a cloud ecosystem. The commissioning consists of the device proving its manufacturing provenance in an anonymous fashion without reliance on a trusted third party, and for the device to be anonymously registered through the use of a blockchain system. We introduce the ChainAnchor architecture that provides device commissioning in a privacy-preserving fashion. The goal of ChainAnchor is (i) to support anonymous device commissioning, (ii) to support device-owners being remunerated for selling their device sensor-data to service providers, and (iii) to incentivize device-owners and service providers to share sensor-data in a privacy-preserving manner.",
"title": ""
},
{
"docid": "07e2dae7b1ed0c7164e59bd31b0d3f87",
"text": "The requirement to perform complicated statistic analysis of big data by institutions of engineering, scientific research, health care, commerce, banking and computer research is immense. However, the limitations of the widely used current desktop software like R, excel, minitab and spss gives a researcher limitation to deal with big data. The big data analytic tools like IBM Big Insight, Revolution Analytics, and tableau software are commercial and heavily license. Still, to deal with big data, client has to invest in infrastructure, installation and maintenance of hadoop cluster to deploy these analytical tools. Apache Hadoop is an open source distributed computing framework that uses commodity hardware. With this project, I intend to collaborate Apache Hadoop and R software over the on the Cloud. Objective is to build a SaaS (Software-as-a-Service) analytic platform that stores & analyzes big data using open source Apache Hadoop and open source R software. The benefits of this cloud based big data analytical service are user friendliness & cost as it is developed using open-source software. The system is cloud based so users have their own space in cloud where user can store there data. User can browse data, files, folders using browser and arrange datasets. User can select dataset and analyze required dataset and store result back to cloud storage. Enterprise with a cloud environment can save cost of hardware, upgrading software, maintenance or network configuration, thus it making it more economical.",
"title": ""
},
{
"docid": "7c0328e05e30a11729bc80255e09a5b8",
"text": "This paper presents a preliminary design for a moving-target defense (MTD) for computer networks to combat an attacker's asymmetric advantage. The MTD system reasons over a set of abstract models that capture the network's configuration and its operational and security goals to select adaptations that maintain the operational integrity of the network. The paper examines both a simple (purely random) MTD system as well as an intelligent MTD system that uses attack indicators to augment adaptation selection. A set of simulation-based experiments show that such an MTD system may in fact be able to reduce an attacker's success likelihood. These results are a preliminary step towards understanding and quantifying the impact of MTDs on computer networks.",
"title": ""
},
{
"docid": "70bed43cdfd50586e803bf1a9c8b3c0a",
"text": "We design a way to model apps as vectors, inspired by the recent deep learning approach to vectorization of words called word2vec. Our method relies on how users use apps. In particular, we visualize the time series of how each user uses mobile apps as a “document”, and apply the recent word2vec modeling on these documents, but the novelty is that the training context is carefully weighted by the time interval between the usage of successive apps. This gives us the app2vec vectorization of apps. We apply this to industrial scale data from Yahoo! and (a) show examples that app2vec captures semantic relationships between apps, much as word2vec does with words, (b) show using Yahoo!'s extensive human evaluation system that 82% of the retrieved top similar apps are semantically relevant, achieving 37% lift over bag-of-word approach and 140% lift over matrix factorization approach to vectorizing apps, and (c) finally, we use app2vec to predict app-install conversion and improve ad conversion prediction accuracy by almost 5%. This is the first industry scale design, training and use of app vectorization.",
"title": ""
}
] |
scidocsrr
|
cf16b8c55bbe6d9614987461925f2800
|
Fast Scene Understanding for Autonomous Driving
|
[
{
"docid": "7af26168ae1557d8633a062313d74b78",
"text": "This paper addresses the problem of estimating the depth map of a scene given a single RGB image. We propose a fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps. In order to improve the output resolution, we present a novel way to efficiently learn feature map up-sampling within the network. For optimization, we introduce the reverse Huber loss that is particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. Our model is composed of a single architecture that is trained end-to-end and does not rely on post-processing techniques, such as CRFs or other additional refinement steps. As a result, it runs in real-time on images or videos. In the evaluation, we show that the proposed model contains fewer parameters and requires fewer training data than the current state of the art, while outperforming all approaches on depth estimation. Code and models are publicly available.",
"title": ""
},
{
"docid": "b8b73a2f4924aaa34cf259d0f5eca3ba",
"text": "Semantic segmentation and object detection research have recently achieved rapid progress. However, the former task has no notion of different instances of the same object, and the latter operates at a coarse, bounding-box level. We propose an Instance Segmentation system that produces a segmentation map where each pixel is assigned an object class and instance identity label. Most approaches adapt object detectors to produce segments instead of boxes. In contrast, our method is based on an initial semantic segmentation module, which feeds into an instance subnetwork. This subnetwork uses the initial category-level segmentation, along with cues from the output of an object detector, within an end-to-end CRF to predict instances. This part of our model is dynamically instantiated to produce a variable number of instances per image. Our end-to-end approach requires no post-processing and considers the image holistically, instead of processing independent proposals. Therefore, unlike some related work, a pixel cannot belong to multiple instances. Furthermore, far more precise segmentations are achieved, as shown by our substantial improvements at high APr thresholds.",
"title": ""
},
{
"docid": "5f4d10a1a180f6af3d35ca117cd4ee19",
"text": "This work addresses the task of instance-aware semantic segmentation. Our key motivation is to design a simple method with a new modelling-paradigm, which therefore has a different trade-off between advantages and disadvantages compared to known approaches. Our approach, we term InstanceCut, represents the problem by two output modalities: (i) an instance-agnostic semantic segmentation and (ii) all instance-boundaries. The former is computed from a standard convolutional neural network for semantic segmentation, and the latter is derived from a new instance-aware edge detection model. To reason globally about the optimal partitioning of an image into instances, we combine these two modalities into a novel MultiCut formulation. We evaluate our approach on the challenging CityScapes dataset. Despite the conceptual simplicity of our approach, we achieve the best result among all published methods, and perform particularly well for rare object classes.",
"title": ""
}
] |
[
{
"docid": "42c6ec7e27bc1de6beceb24d52b7216c",
"text": "Internet of Things (IoT) refers to the expansion of Internet technologies to include wireless sensor networks (WSNs) and smart objects by extensive interfacing of exclusively identifiable, distributed communication devices. Due to the close connection with the physical world, it is an important requirement for IoT technology to be self-secure in terms of a standard information security model components. Autonomic security should be considered as a critical priority and careful provisions must be taken in the design of dynamic techniques, architectures and self-sufficient frameworks for future IoT. Over the years, many researchers have proposed threat mitigation approaches for IoT and WSNs. This survey considers specific approaches requiring minimal human intervention and discusses them in relation to self-security. This survey addresses and brings together a broad range of ideas linked together by IoT, autonomy and security. More particularly, this paper looks at threat mitigation approaches in IoT using an autonomic taxonomy and finally sets down future directions. & 2014 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "1bf69a2bffe2652e11ff8ec7f61b7c0d",
"text": "This research proposes and validates a design theory for digital platforms that support online communities (DPsOC). It addresses ways in which digital platforms can effectively support social interactions in online communities. Drawing upon prior literature on IS design theory, online communities, and platforms, we derive an initial set of propositions for designing effective DPsOC. Our overarching proposition is that three components of digital platform architecture (core, interface, and complements) should collectively support the mix of the three distinct types of social interaction structures of online community (information sharing, collaboration, and collective action). We validate the initial propositions and generate additional insights by conducting an in-depth analysis of an European digital platform for elderly care assistance. We further validate the propositions by analyzing three widely used digital platforms, including Twitter, Wikipedia, and Liquidfeedback, and we derive additional propositions and insights that can guide DPsOC design. We discuss the implications of this research for research and practice. Journal of Information Technology advance online publication, 10 February 2015; doi:10.1057/jit.2014.37",
"title": ""
},
{
"docid": "45494f14c2d9f284dd3ad3a5be49ca78",
"text": "Developing segmentation techniques for overlapping cells has become a major hurdle for automated analysis of cervical cells. In this paper, an automated three-stage segmentation approach to segment the nucleus and cytoplasm of each overlapping cell is described. First, superpixel clustering is conducted to segment the image into small coherent clusters that are used to generate a refined superpixel map. The refined superpixel map is passed to an adaptive thresholding step to initially segment the image into cellular clumps and background. Second, a linear classifier with superpixel-based features is designed to finalize the separation between nuclei and cytoplasm. Finally, edge and region based cell segmentation are performed based on edge enhancement process, gradient thresholding, morphological operations, and region properties evaluation on all detected nuclei and cytoplasm pairs. The proposed framework has been evaluated using the ISBI 2014 challenge dataset. The dataset consists of 45 synthetic cell images, yielding 270 cells in total. Compared with the state-of-the-art approaches, our approach provides more accurate nuclei boundaries, as well as successfully segments most of overlapping cells.",
"title": ""
},
{
"docid": "ba36f2cabea51ed99621a7aa104fed08",
"text": "Plant identification and classification play an important role in ecology, but the manual process is cumbersome even for experimented taxonomists. Technological advances allows the development of strategies to make these tasks easily and faster. In this context, this paper describes a methodology for plant identification and classification based on leaf shapes, that explores the discriminative power of the contour-centroid distance in the Fourier frequency domain in which some invariance (e.g. Rotation and scale) are guaranteed. In addition, it is also investigated the influence of feature selection techniques regarding classification accuracy. Our results show that by combining a set of features vectors - in the principal components space - and a feed forward neural network, an accuracy of 97.45% was achieved.",
"title": ""
},
{
"docid": "6761bd757cdd672f60c980b081d4dbc8",
"text": "Real-time eye and iris tracking is important for handsoff gaze-based password entry, instrument control by paraplegic patients, Internet user studies, as well as homeland security applications. In this project, a smart camera, LabVIEW and vision software tools are utilized to generate eye detection and tracking algorithms. The algorithms are uploaded to the smart camera for on-board image processing. Eye detection refers to finding eye features in a single frame. Eye tracking is achieved by detecting the same eye features across multiple image frames and correlating them to a particular eye. The algorithms are tested for eye detection and tracking under different conditions including different angles of the face, head motion speed, and eye occlusions to determine their usability for the proposed applications. This paper presents the implemented algorithms and performance results of these algorithms on the smart camera.",
"title": ""
},
{
"docid": "84e0b31ca5cbd158a673f59019d3ace3",
"text": "This paper presents a compact bi-directional solidstate transformer (SST) based on a current-source topology, featuring high-frequency galvanic isolation and only two minimal power conversion stages. The topology, referenced as Dynamic Current or Dyna-C, can be configured for multiterminal DC and/or multi-phase AC applications. The Dyna-C is robust, can be stacked for MV applications, and be paralleled for high current and high power applications. The paper will present some of the possible configurations of Dyna-C, and will discuss challenges associated with the control and operation. One core innovation presented in this paper is the management of transformer leakage energy when transitioning from one bridge to another while maintaining low device stresses and losses. Simulation and experimental results are used to validate operation of the topology.",
"title": ""
},
{
"docid": "361511f6c0e068442cd12377b9c3c9a6",
"text": "Machine learning methods are widely used for a variety of prediction problems. Prediction as a service is a paradigm in which service providers with technological expertise and computational resources may perform predictions for clients. However, data privacy severely restricts the applicability of such services, unless measures to keep client data private (even from the service provider) are designed. Equally important is to minimize the amount of computation and communication required between client and server. Fully homomorphic encryption offers a possible way out, whereby clients may encrypt their data, and on which the server may perform arithmetic computations. The main drawback of using fully homomorphic encryption is the amount of time required to evaluate large machine learning models on encrypted data. We combine ideas from the machine learning literature, particularly work on binarization and sparsification of neural networks, together with algorithmic tools to speed-up and parallelize computation using encrypted data.",
"title": ""
},
{
"docid": "6b1dc94c4c70e1c78ea32a760b634387",
"text": "3d reconstruction from a single image is inherently an ambiguous problem. Yet when we look at a picture, we can often infer 3d information about the scene. Humans perform single-image 3d reconstructions by using a variety of singleimage depth cues, for example, by recognizing objects and surfaces, and reasoning about how these surfaces are connected to each other. In this paper, we focus on the problem of automatic 3d reconstruction of indoor scenes, specifically ones (sometimes called “Manhattan worlds”) that consist mainly of orthogonal planes. We use a Markov random field (MRF) model to identify the different planes and edges in the scene, as well as their orientations. Then, an iterative optimization algorithm is applied to infer the most probable position of all the planes, and thereby obtain a 3d reconstruction. Our approach is fully automatic—given an input image, no human intervention is necessary to obtain an approximate 3d reconstruction.",
"title": ""
},
{
"docid": "32764726652b5f95aa2d208f80e967c0",
"text": "Simulation is a technique-not a technology-to replace or amplify real experiences with guided experiences that evoke or replicate substantial aspects of the real world in a fully interactive manner. The diverse applications of simulation in healthcare can be categorized by 11 dimensions: aims and purposes of the simulation activity; unit of participation; experience level of participants; healthcare domain; professional discipline of participants; type of knowledge, skill, attitudes, or behaviors addressed; the simulated patient's age; technology applicable or required; site of simulation; extent of direct participation; and method of feedback used. Using simulation to improve safety will require full integration of its applications into the routine structures and practices of healthcare. The costs and benefits of simulation are difficult to determine, especially for the most challenging applications, where long-term use may be required. Various driving forces and implementation mechanisms can be expected to propel simulation forward, including professional societies, liability insurers, healthcare payers, and ultimately the public. The future of simulation in healthcare depends on the commitment and ingenuity of the healthcare simulation community to see that improved patient safety using this tool becomes a reality.",
"title": ""
},
{
"docid": "84307c2dd94ebe89c46a535b31b4b51b",
"text": "Building systems that autonomously create temporal abstractions from data is a key challenge in scaling learning and planning in reinforcement learning. One popular approach for addressing this challenge is the options framework [41]. However, only recently in [1] was a policy gradient theorem derived for online learning of general purpose options in an end to end fashion. In this work, we extend previous work on this topic that only focuses on learning a two-level hierarchy including options and primitive actions to enable learning simultaneously at multiple resolutions in time. We achieve this by considering an arbitrarily deep hierarchy of options where high level temporally extended options are composed of lower level options with finer resolutions in time. We extend results from [1] and derive policy gradient theorems for a deep hierarchy of options. Our proposed hierarchical option-critic architecture is capable of learning internal policies, termination conditions, and hierarchical compositions over options without the need for any intrinsic rewards or subgoals. Our empirical results in both discrete and continuous environments demonstrate the efficiency of our framework.",
"title": ""
},
{
"docid": "fcf0ac3b52a1db116463e7376dae4950",
"text": "Although the ability to perform complex cognitive operations is assumed to be impaired following acute marijuana smoking, complex cognitive performance after acute marijuana use has not been adequately assessed under experimental conditions. In the present study, we used a within-participant double-blind design to evaluate the effects acute marijuana smoking on complex cognitive performance in experienced marijuana smokers. Eighteen healthy research volunteers (8 females, 10 males), averaging 24 marijuana cigarettes per week, completed this three-session outpatient study; sessions were separated by at least 72-hrs. During sessions, participants completed baseline computerized cognitive tasks, smoked a single marijuana cigarette (0%, 1.8%, or 3.9% Δ9-THC w/w), and completed additional cognitive tasks. Blood pressure, heart rate, and subjective effects were also assessed throughout sessions. Marijuana cigarettes were administered in a double-blind fashion and the sequence of Δ9-THC concentration order was balanced across participants. Although marijuana significantly increased the number of premature responses and the time participants required to complete several tasks, it had no effect on accuracy on measures of cognitive flexibility, mental calculation, and reasoning. Additionally, heart rate and several subjective-effect ratings (e.g., “Good Drug Effect,” “High,” “Mellow”) were significantly increased in a Δ9-THC concentration-dependent manner. These data demonstrate that acute marijuana smoking produced minimal effects on complex cognitive task performance in experienced marijuana users.",
"title": ""
},
{
"docid": "32e430c84b64d123763ed2e034696e20",
"text": "The Internet of Things (IoT) is becoming a key infrastructure for the development of smart ecosystems. However, the increased deployment of IoT devices with poor security has already rendered them increasingly vulnerable to cyber attacks. In some cases, they can be used as a tool for committing serious crimes. Although some researchers have already explored such issues in the IoT domain and provided solutions for them, there remains the need for a thorough analysis of the challenges, solutions, and open problems in this domain. In this paper, we consider this research gap and provide a systematic analysis of security issues of IoT-based systems. Then, we discuss certain existing research projects to resolve the security issues. Finally, we highlight a set of open problems and provide a detailed description for each. We posit that our systematic approach for understanding the nature and challenges in IoT security will motivate researchers to addressing and solving these problems.",
"title": ""
},
{
"docid": "48d2f38037b0cab83ca4d57bf19ba903",
"text": "The term sentiment analysis can be used to refer to many different, but related, problems. Most commonly, it is used to refer to the task of automatically determining the valence or polarity of a piece of text, whether it is positive, negative, or neutral. However, more generally, it refers to determining one’s attitude towards a particular target or topic. Here, attitude can mean an evaluative judgment, such as positive or negative, or an emotional or affectual attitude such as frustration, joy, anger, sadness, excitement, and so on. Note that some authors consider feelings to be the general category that includes attitude, emotions, moods, and other affectual states. In this chapter, we use ‘sentiment analysis’ to refer to the task of automatically determining feelings from text, in other words, automatically determining valence, emotions, and other affectual states from text. Osgood, Suci, and Tannenbaum (1957) showed that the three most prominent dimensions of meaning are evaluation (good–bad), potency (strong–weak), and activity (active– passive). Evaluativeness is roughly the same dimension as valence (positive–negative). Russell (1980) developed a circumplex model of affect characterized by two primary dimensions: valence and arousal (degree of reactivity to stimulus). Thus, it is not surprising that large amounts of work in sentiment analysis are focused on determining valence. (See survey articles by Pang and Lee (2008), Liu and Zhang (2012), and Liu (2015).) However, there is some work on automatically detecting arousal (Thelwall, Buckley, Paltoglou, Cai, & Kappas, 2010; Kiritchenko, Zhu, & Mohammad, 2014b; Mohammad, Kiritchenko, & Zhu, 2013a) and growing interest in detecting emotions such as anger, frustration, sadness, and optimism in text (Mohammad, 2012; Bellegarda, 2010; Tokuhisa, Inui, & Matsumoto, 2008; Strapparava & Mihalcea, 2007; John, Boucouvalas, & Xu, 2006; Mihalcea & Liu, 2006; Genereux & Evans, 2006; Ma, Prendinger, & Ishizuka, 2005; Holzman & Pottenger, 2003; Boucouvalas, 2002; Zhe & Boucouvalas, 2002). Further, massive amounts of data emanating from social media have led to significant interest in analyzing blog posts, tweets, instant messages, customer reviews, and Facebook posts for both valence (Kiritchenko et al., 2014b; Kiritchenko, Zhu, Cherry, & Mohammad, 2014a; Mohammad et al., 2013a; Aisopos, Papadakis, Tserpes, & Varvarigou, 2012; Bakliwal, Arora, Madhappan, Kapre, Singh, & Varma, 2012; Agarwal, Xie, Vovsha, Rambow, & Passonneau, 2011; Thelwall, Buckley, & Paltoglou, 2011; Brody & Diakopoulos, 2011; Pak & Paroubek, 2010) and emotions (Hasan, Rundensteiner, & Agu, 2014; Mohammad & Kiritchenko, 2014; Mohammad, Zhu, Kiritchenko, & Martin, 2014; Choudhury, Counts, & Gamon, 2012; Mohammad, 2012a; Wang, Chen, Thirunarayan, & Sheth, 2012; Tumasjan, Sprenger, Sandner, & Welpe, 2010b; Kim, Gilbert, Edwards, &",
"title": ""
},
{
"docid": "b7600e8798f867fb267cfdd9129948c7",
"text": "In this paper, we consider an interesting vision problem—salient instance segmentation. Other than producing approximate bounding boxes, our network also outputs high-quality instance-level segments. Taking into account the category-independent property of each target, we design a single stage salient instance segmentation framework, with a novel segmentation branch. Our new branch regards not only local context inside each detection window but also its surrounding context, enabling us to distinguish the instances in the same scope even with obstruction. Our network is end-to-end trainable and runs at a fast speed (40 fps when processing an image with resolution 320 × 320). We evaluate our approach on a public available benchmark and show that it outperforms other alternative solutions. In addition, we also provide a thorough analysis of the design choices to help readers better understand the functions of each part in our network. To facilitate the development of this area, our code will be available at https://github.com/RuochenFan/S4Net.",
"title": ""
},
{
"docid": "734638df47b05b425b0dcaaab11d886e",
"text": "Satisfying the needs of users of online video streaming services requires not only to manage the network Quality of Service (QoS), but also to address the user's Quality of Experience (QoE) expectations. While QoS factors reflect the status of individual networks, they do not comprehensively capture the end-to-end features affecting the quality delivered to the user. In this situation, QoE management is the better option. However, traditionally used QoE management models require human interaction and have stringent requirements in terms of time and complexity. Thus, they fail to achieve successful performance in terms of real-timeliness, accuracy, scalability and adaptability. This dissertation work investigates new methods to bring QoE management to the level required by the real-time management of video services. In this paper, we highlight our main contributions. First, with the aim to perform a combined network-service assessment, we designed an experimental methodology able to map network QoS onto service QoE. Our methodology is meant to provide service and network providers with the means to pinpoint the working boundaries of their video-sets and to predict the effect of network policies on perception. Second, we developed a generic machine learning framework that allows deriving accurate predictive No Reference (NR) assessment metrics, based on simplistic NR QoE methods, that are functionally and computationally viable for real-time QoE evaluation. The tools, methods and conclusions derived from this dissertation conform a solid contribution to QoE management of video streaming services, opening new venues for further research.",
"title": ""
},
{
"docid": "05a788c8387e58e59e8345f343b4412a",
"text": "We deal with the problem of recognizing social roles played by people in an event. Social roles are governed by human interactions, and form a fundamental component of human event description. We focus on a weakly supervised setting, where we are provided different videos belonging to an event class, without training role labels. Since social roles are described by the interaction between people in an event, we propose a Conditional Random Field to model the inter-role interactions, along with person specific social descriptors. We develop tractable variational inference to simultaneously infer model weights, as well as role assignment to all people in the videos. We also present a novel YouTube social roles dataset with ground truth role annotations, and introduce annotations on a subset of videos from the TRECVID-MED11 [1] event kits for evaluation purposes. The performance of the model is compared against different baseline methods on these datasets.",
"title": ""
},
{
"docid": "ae454338771f068e2b8a1f475855de11",
"text": "For powder-bed electron beam additive manufacturing (EBAM), support structures are required when fabricating an overhang to prevent defects such as curling, which is due to the complex thermomechanical process in EBAM. In this study, finite element modeling is developed to simulate the thermomechanical process in EBAM in building overhang part. Thermomechanical characteristics such as thermal gradients and thermal stresses around the overhang build are evaluated and analyzed. The model is applied to evaluate process parameter effects on the severity of thermal stresses. The major results are summarized as follows. For a uniform set of process parameters, the overhang areas have a higher maximum temperature, a higher tensile stress, and a larger distortion than the areas above a solid substrate. A higher energy density input, e.g., a lower beam speed or a higher beam current may cause more severe curling at the overhang area.",
"title": ""
},
{
"docid": "6dbfefb384a3dbd28beee2d0daebae52",
"text": "Many NLP applications require disambiguating polysemous words. Existing methods that learn polysemous word vector representations involve first detecting various senses and optimizing the sensespecific embeddings separately, which are invariably more involved than single sense learning methods such as word2vec. Evaluating these methods is also problematic, as rigorous quantitative evaluations in this space is limited, especially when compared with single-sense embeddings. In this paper, we propose a simple method to learn a word representation, given any context. Our method only requires learning the usual single sense representation, and coefficients that can be learnt via a single pass over the data. We propose several new test sets for evaluating word sense induction, relevance detection, and contextual word similarity, significantly supplementing the currently available tests. Results on these and other tests show that while our method is embarrassingly simple, it achieves excellent results when compared to the state of the art models for unsupervised polysemous word representation learning. Our code and data are at https://github.com/dingwc/",
"title": ""
},
{
"docid": "9d7a67f2cd12a6fd033ad102fb9c526e",
"text": "We begin by pretraining the source task model, fS , using the task loss on the labeled source data. Next, we perform pixel-level adaptation using our image space GAN losses together with semantic consistency and cycle consistency losses. This yeilds learned parameters for the image transformations, GS!T and GT!S , image discriminators, DS and DT , as well as an initial setting of the task model, fT , which is trained using pixel transformed source images and the corresponding source pixel labels. Finally, we perform feature space adpatation in order to update the target semantic model, fT , to have features which are aligned between the source images mapped into target style and the real target images. During this phase, we learn the feature discriminator, Dfeat and use this to guide the representation update to fT . In general, our method could also perform phases 2 and 3 simultaneously, but this would require more GPU memory then available at the time of these experiments.",
"title": ""
},
{
"docid": "bf7e67dededd5f4585aaefecc60e7c1a",
"text": "Multidimensional long short-term memory recurrent neural networks achieve impressive results for handwriting recognition. However, with current CPU-based implementations, their training is very expensive and thus their capacity has so far been limited. We release an efficient GPU-based implementation which greatly reduces training times by processing the input in a diagonal-wise fashion. We use this implementation to explore deeper and wider architectures than previously used for handwriting recognition and show that especially the depth plays an important role. We outperform state of the art results on two databases with a deep multidimensional network.",
"title": ""
}
] |
scidocsrr
|