Column schema (name: dtype, value range):
  query_id:           stringlengths (32–32)
  query:              stringlengths (5–5.38k)
  positive_passages:  listlengths (1–23)
  negative_passages:  listlengths (7–25)
  subset:             stringclasses (5 values)
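Each record that follows pairs one query (a paper title) with a list of positive passages and a list of negative passages; every passage is an object carrying a docid, a text body, and a (possibly empty) title, and the record closes with its subset label. As a rough, illustrative sketch only, the snippet below shows how rows of this shape might be flattened into (query, passage, label) pairs in Python; the JSON-lines file name and the helper function are assumptions for illustration, not part of the dataset.

```python
import json

# Hypothetical helper: the field names (query_id, query, positive_passages,
# negative_passages, subset) are taken from the schema above; everything else
# (file name, flattening scheme) is an assumption for illustration.
def iter_pairs(row):
    """Flatten one retrieval row into (query, passage_text, label) tuples."""
    for passage in row["positive_passages"]:
        yield row["query"], passage["text"], 1
    for passage in row["negative_passages"]:
        yield row["query"], passage["text"], 0

if __name__ == "__main__":
    # Assumed JSON-lines dump of rows like the ones shown below.
    with open("scidocsrr_sample.jsonl") as f:
        for line in f:
            row = json.loads(line)
            for query, text, label in iter_pairs(row):
                print(label, query[:40], "->", text[:60])
```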
4cc97f01ed0be002bc18abbe3dc0a186
Random Faces Guided Sparse Many-to-One Encoder for Pose-Invariant Face Recognition
[ { "docid": "0102748c7f9969fb53a3b5ee76b6eefe", "text": "Face veri cation is the task of deciding by analyzing face images, whether a person is who he/she claims to be. This is very challenging due to image variations in lighting, pose, facial expression, and age. The task boils down to computing the distance between two face vectors. As such, appropriate distance metrics are essential for face veri cation accuracy. In this paper we propose a new method, named the Cosine Similarity Metric Learning (CSML) for learning a distance metric for facial veri cation. The use of cosine similarity in our method leads to an e ective learning algorithm which can improve the generalization ability of any given metric. Our method is tested on the state-of-the-art dataset, the Labeled Faces in the Wild (LFW), and has achieved the highest accuracy in the literature. Face veri cation has been extensively researched for decades. The reason for its popularity is the non-intrusiveness and wide range of practical applications, such as access control, video surveillance, and telecommunication. The biggest challenge in face veri cation comes from the numerous variations of a face image, due to changes in lighting, pose, facial expression, and age. It is a very di cult problem, especially using images captured in totally uncontrolled environment, for instance, images from surveillance cameras, or from the Web. Over the years, many public face datasets have been created for researchers to advance state of the art and make their methods comparable. This practice has proved to be extremely useful. FERET [1] is the rst popular face dataset freely available to researchers. It was created in 1993 and since then research in face recognition has advanced considerably. Researchers have come very close to fully recognizing all the frontal images in FERET [2,3,4,5,6]. However, these methods are not robust to deal with non-frontal face images. Recently a new face dataset named the Labeled Faces in the Wild (LFW) [7] was created. LFW is a full protocol for evaluating face veri cation algorithms. Unlike FERET, LFW is designed for unconstrained face veri cation. Faces in LFW can vary in all possible ways due to pose, lighting, expression, age, scale, and misalignment (Figure 1). Methods for frontal images cannot cope with these variations and as such many researchers have turned to machine learning to 2 Hieu V. Nguyen and Li Bai Fig. 1. From FERET to LFW develop learning based face veri cation methods [8,9]. One of these approaches is to learn a transformation matrix from the data so that the Euclidean distance can perform better in the new subspace. Learning such a transformation matrix is equivalent to learning a Mahalanobis metric in the original space [10]. Xing et al. [11] used semide nite programming to learn a Mahalanobis distance metric for clustering. Their algorithm aims to minimize the sum of squared distances between similarly labeled inputs, while maintaining a lower bound on the sum of distances between di erently labeled inputs. Goldberger et al. [10] proposed Neighbourhood Component Analysis (NCA), a distance metric learning algorithm especially designed to improve kNN classi cation. The algorithm is to learn a Mahalanobis distance by minimizing the leave-one-out cross validation error of the kNN classi er on a training set. Because it uses softmax activation function to convert distance to probability, the gradient computation step is expensive. Weinberger et al. 
[12] proposed a method that learns a matrix designed to improve the performance of kNN classi cation. The objective function is composed of two terms. The rst term minimizes the distance between target neighbours. The second term is a hinge-loss that encourages target neighbours to be at least one distance unit closer than points from other classes. It requires information about the class of each sample. As a result, their method is not applicable for the restricted setting in LFW (see section 2.1). Recently, Davis et al. [13] have taken an information theoretic approach to learn a Mahalanobis metric under a wide range of possible constraints and prior knowledge on the Mahalanobis distance. Their method regularizes the learned matrix to make it as close as possible to a known prior matrix. The closeness is measured as a Kullback-Leibler divergence between two Gaussian distributions corresponding to the two matrices. In this paper, we propose a new method named Cosine Similarity Metric Learning (CSML). There are two main contributions. The rst contribution is Cosine Similarity Metric Learning for Face Veri cation 3 that we have shown cosine similarity to be an e ective alternative to Euclidean distance in metric learning problem. The second contribution is that CSML can improve the generalization ability of an existing metric signi cantly in most cases. Our method is di erent from all the above methods in terms of distance measures. All of the other methods use Euclidean distance to measure the dissimilarities between samples in the transformed space whilst our method uses cosine similarity which leads to a simple and e ective metric learning method. The rest of this paper is structured as follows. Section 2 presents CSML method in detail. Section 3 present how CSML can be applied to face veri cation. Experimental results are presented in section 4. Finally, conclusion is given in section 5. 1 Cosine Similarity Metric Learning The general idea is to learn a transformation matrix from training data so that cosine similarity performs well in the transformed subspace. The performance is measured by cross validation error (cve). 1.1 Cosine similarity Cosine similarity (CS) between two vectors x and y is de ned as: CS(x, y) = x y ‖x‖ ‖y‖ Cosine similarity has a special property that makes it suitable for metric learning: the resulting similarity measure is always within the range of −1 and +1. As shown in section 1.3, this property allows the objective function to be simple and e ective. 1.2 Metric learning formulation Let {xi, yi, li}i=1 denote a training set of s labeled samples with pairs of input vectors xi, yi ∈ R and binary class labels li ∈ {1, 0} which indicates whether xi and yi match or not. The goal is to learn a linear transformation A : R → R(d ≤ m), which we will use to compute cosine similarities in the transformed subspace as: CS(x, y,A) = (Ax) (Ay) ‖Ax‖ ‖Ay‖ = xAAy √ xTATAx √ yTATAy Speci cally, we want to learn the linear transformation that minimizes the cross validation error when similarities are measured in this way. We begin by de ning the objective function. 4 Hieu V. Nguyen and Li Bai 1.3 Objective function First, we de ne positive and negative sample index sets Pos and Neg as:", "title": "" } ]
[ { "docid": "fa88a823e05586bd3000461992a29af9", "text": "Evaluation metrics for image captioning face two challenges. Firstly, commonly used metrics such as CIDEr, METEOR, ROUGE and BLEU often do not correlate well with human judgments. Secondly, each metric has well known blind spots to pathological caption constructions, and rule-based metrics lack provisions to repair such blind spots once identified. For example, the newly proposed SPICE correlates well with human judgments, but fails to capture the syntactic structure of a sentence. To address these two challenges, we propose a novel learning based discriminative evaluation metric that is directly trained to distinguish between human and machine-generated captions. In addition, we further propose a data augmentation scheme to explicitly incorporate pathological transformations as negative examples during training. The proposed metric is evaluated with three kinds of robustness tests and its correlation with human judgments. Extensive experiments show that the proposed data augmentation scheme not only makes our metric more robust toward several pathological transformations, but also improves its correlation with human judgments. Our metric outperforms other metrics on both caption level human correlation in Flickr 8k and system level human correlation in COCO. The proposed approach could be served as a learning based evaluation metric that is complementary to existing rule-based metrics.", "title": "" }, { "docid": "cae689b8a27b05318088a16eaccd85b4", "text": "In recent years, electronic product have been demanded more functionalities, miniaturization, higher performance, reliability and low cost. Therefore, IC chip is required to deliver more signal I/O and better electrical characteristics under the same package footprint. None-Lead Bump Array (NBA) Chip Scale Structure is then developed to meet those requirements offering better electrical performance, more I/O accommodation and high transmission speed. To evaluate NBA package capability, the solder joint life, package warpage, die corner stress and thermal performance are characterized. Firstly, investigations on the warpage, die corner stress and thermal performance of NBA-QFN structure are performed by the use of Finite Element Method (FEM). Secondly, experiments are conducted for the solder joint reliability performance with different solder coverage and standoff height In the conclusion of this study, NBA-QFN would have no warpage risk, lower die corner stress and better thermal performance than TFBGA from simulation result. Beside that, the simulation result shows good agreement with experimental data. From the drop test study, with solder coverage less than 50% and standoff height lower than 40um would perform better solder joint life than others.", "title": "" }, { "docid": "38fd6a2b2ea49fda599a70ec7e803cde", "text": "The role of trace elements in biological systems has been described in several animals. However, the knowledge in fish is mainly limited to iron, copper, manganese, zinc and selenium as components of body fluids, cofactors in enzymatic reactions, structural units of non-enzymatic macromolecules, etc. Investigations in fish are comparatively complicated as both dietary intake and waterborne mineral uptake have to be considered in determining the mineral budgets. 
The importance of trace minerals as essential ingredients in diets, although in small quantities, is also evident in fish.", "title": "" }, { "docid": "4ac083b7e2900eb5cc80efd6022c76c1", "text": "We investigate the problem of reconstructing normals, albedo and lights of Lambertian surfaces in uncalibrated photometric stereo under the perspective projection model. Our analysis is based on establishing the integrability constraint. In the orthographic projection case, it is well-known that when such constraint is imposed, a solution can be identified only up to 3 parameters, the so-called generalized bas-relief (GBR) ambiguity. We show that in the perspective projection case the solution is unique. We also propose a closed-form solution which is simple, efficient and robust. We test our algorithm on synthetic data and publicly available real data. Our quantitative tests show that our method outperforms all prior work of uncalibrated photometric stereo under orthographic projection.", "title": "" }, { "docid": "264fef3aa71df1f661f2b94461f9634c", "text": "This paper presents a new control method for cascaded connected H-bridge converter-based static compensators. These converters have classically been commutated at fundamental line frequencies, but the evolution of power semiconductors has allowed the increase of switching frequencies and power ratings of these devices, permitting the use of pulsewidth modulation techniques. This paper mainly focuses on dc-bus voltage balancing problems and proposes a new control technique (individual voltage balancing strategy), which solves these balancing problems, maintaining the delivered reactive power equally distributed among all the H-bridges of the converter.", "title": "" }, { "docid": "c926d9a6b6fe7654e8409ae855bdeb20", "text": "A low-power, 40-Gb/s optical transceiver front-end is demonstrated in a 45-nm silicon-on-insulator (SOI) CMOS process. Both single-ended and differential optical modulators are demonstrated with floating-body transistors to reach output swings of more than 2 VPP and 4 VPP, respectively. A single-ended gain of 7.6 dB is measured over 33 GHz. The optical receiver consists of a transimpedance amplifier (TIA) and post-amplifier with 55 dB ·Ω of transimpedance over 30 GHz. The group-delay variation is ±3.9 ps over the 3-dB bandwidth and the average input-referred noise density is 20.5 pA/(√Hz) . The TIA consumes 9 mW from a 1-V supply for a transimpedance figure of merit of 1875 Ω /pJ. This represents the lowest power consumption for a transmitter and receiver operating at 40 Gb/s in a CMOS process.", "title": "" }, { "docid": "60556a58af0196cc0032d7237636ec52", "text": "This paper investigates what students understand about algorithm efficiency before receiving any formal instruction on the topic. We gave students a challenging search problem and two solutions, then asked them to identify the more efficient solution and to justify their choice. Many students did not use the standard worst-case analysis of algorithms; rather they chose other metrics, including average-case, better for more cases, better in all cases, one algorithm being more correct, and better for real-world scenarios. 
Students were much more likely to choose the correct algorithm when they were asked to trace the algorithms on specific examples; this was true even if they traced the algorithms incorrectly.", "title": "" }, { "docid": "368996ab544c51c540afe129ffb65275", "text": "Humans are experts at high-fidelity imitation – closely mimicking a demonstration, often in one attempt. Humans use this ability to quickly solve a task instance, and to bootstrap learning of new tasks. Achieving these abilities in autonomous agents is an open problem. In this paper, we introduce an off-policy RL algorithm (MetaMimic) to narrow this gap. MetaMimic can learn both (i) policies for high-fidelity one-shot imitation of diverse novel skills, and (ii) policies that enable the agent to solve tasks more efficiently than the demonstrators. MetaMimic relies on the principle of storing all experiences in a memory and replaying these to learn massive deep neural network policies by off-policy RL. This paper introduces, to the best of our knowledge, the largest existing neural networks for deep RL and shows that larger networks with normalization are needed to achieve one-shot high-fidelity imitation on a challenging manipulation task. The results also show that both types of policy can be learned from vision, in spite of the task rewards being sparse, and without access to demonstrator actions.", "title": "" }, { "docid": "771b1e44b26f749f6ecd9fe515159d9c", "text": "In spoken dialog systems, dialog state tracking refers to the task of correctly inferring the user's goal at a given turn, given all of the dialog history up to that turn. This task is challenging because of speech recognition and language understanding errors, yet good dialog state tracking is crucial to the performance of spoken dialog systems. This paper presents results from the third Dialog State Tracking Challenge, a research community challenge task based on a corpus of annotated logs of human-computer dialogs, with a blind test set evaluation. The main new feature of this challenge is that it studied the ability of trackers to generalize to new entities - i.e. new slots and values not present in the training data. This challenge received 28 entries from 7 research teams. About half the teams substantially exceeded the performance of a competitive rule-based baseline, illustrating not only the merits of statistical methods for dialog state tracking but also the difficulty of the problem.", "title": "" }, { "docid": "c171254eae86ce30c475c4355ed8879f", "text": "The rapid growth of connected things across the globe has been brought about by the deployment of the Internet of things (IoTs) at home, in organizations and industries. The innovation of smart things is envisioned through various protocols, but the most prevalent protocols are pub-sub protocols such as Message Queue Telemetry Transport (MQTT) and Advanced Message Queuing Protocol (AMQP). An emerging paradigm of communication architecture for IoTs support is Fog computing in which events are processed near to the place they occur for efficient and fast response time. One of the major concerns in the adoption of Fog computing based publishsubscribe protocols for the Internet of things is the lack of security mechanisms because the existing security protocols such as SSL/TSL have a large overhead of computations, storage and communications. 
To address these issues, we propose a secure, Fog computing based publish-subscribe lightweight protocol using Elliptic Curve Cryptography (ECC) for the Internet of Things. We present analytical proofs and results for resource efficient security, comparing to the existing protocols of traditional Internet.", "title": "" }, { "docid": "e016c72bf2c3173d5c9f4973d03ab380", "text": "SDN controllers demand tight performance guarantees over the control plane actions performed by switches. For example, traffic engineering techniques that frequently reconfigure the network require guarantees on the speed of reconfiguring the network. Initial experiments show that poor performance of Ternary Content-Addressable Memory (TCAM) control actions (e.g., rule insertion) can inflate application performance by a factor of 2x! Yet, modern switches provide no guarantees for these important control plane actions -- inserting, modifying, or deleting rules.\n In this paper, we present the design and evaluation of Hermes, a practical and immediately deployable framework that offers a novel method for partitioning and optimizing switch TCAM to enable performance guarantees. Hermes builds on recent studies on switch performance and provides guarantees by trading-off a nominal amount of TCAM space for assured performance. We evaluated Hermes using large-scale simulations. Our evaluations show that with less than 5% overheads, Hermes provides 5ms insertion guarantees that translates into an improvement of application level metrics by up to 80%. Hermes is more than 50% better than existing state of the art techniques and provides significant improvement for traditional networks running BGP.", "title": "" }, { "docid": "501d6ec6163bc8b93fd728412a3e97f3", "text": "This short paper describes our ongoing research on Greenhouse a zero-positive machine learning system for time-series anomaly detection.", "title": "" }, { "docid": "0b2ae99927b9006fd41b07e4d58a2e82", "text": "Our increasingly digital life provides a wealth of data about our behavior, beliefs, mood, and well-being. This data provides some insight into the lives of patients outside the healthcare setting, and in aggregate can be insightful for the person's mental health and emotional crisis. Here, we introduce this community to some of the recent advancement in using natural language processing and machine learning to provide insight into mental health of both individuals and populations. We advocate using these linguistic signals as a supplement to those that are collected in the health care system, filling in some of the so-called “whitespace” between visits.", "title": "" }, { "docid": "e9229d3ab3e9ec7e5020e50ca23ada0b", "text": "Human beings have been recently reviewed as ‘metaorganisms’ as a result of a close symbiotic relationship with the intestinal microbiota. This assumption imposes a more holistic view of the ageing process where dynamics of the interaction between environment, intestinal microbiota and host must be taken into consideration. Age-related physiological changes in the gastrointestinal tract, as well as modification in lifestyle, nutritional behaviour, and functionality of the host immune system, inevitably affect the gut microbial ecosystem. Here we review the current knowledge of the changes occurring in the gut microbiota of old people, especially in the light of the most recent applications of the modern molecular characterisation techniques. 
The hypothetical involvement of the age-related gut microbiota unbalances in the inflamm-aging, and immunosenescence processes will also be discussed. Increasing evidence of the importance of the gut microbiota homeostasis for the host health has led to the consideration of medical/nutritional applications of this knowledge through the development of probiotic and prebiotic preparations specific for the aged population. The results of the few intervention trials reporting the use of pro/prebiotics in clinical conditions typical of the elderly will be critically reviewed.", "title": "" }, { "docid": "c9bc670fae6dd0f2274bb18492260372", "text": "We present an efficient GPU-based parallel LSH algorithm to perform approximate k-nearest neighbor computation in high-dimensional spaces. We use the Bi-level LSH algorithm, which can compute k-nearest neighbors with higher accuracy and is amenable to parallelization. During the first level, we use the parallel RP-tree algorithm to partition datasets into several groups so that items similar to each other are clustered together. The second level involves computing the Bi-Level LSH code for each item and constructing a hierarchical hash table. The hash table is based on parallel cuckoo hashing and Morton curves. In the query step, we use GPU-based work queues to accelerate short-list search, which is one of the main bottlenecks in LSH-based algorithms. We demonstrate the results on large image datasets with 200,000 images which are represented as 512 dimensional vectors. In practice, our GPU implementation can obtain more than 40X acceleration over a single-core CPU-based LSH implementation.", "title": "" }, { "docid": "fdba7b3ae6e266b938eeb73f5fd93962", "text": "Prostatic artery embolization (PAE) is an alternative treatment for benign prostatic hyperplasia. Complications are primarily related to non-target embolization. We report a case of ischemic rectitis in a 76-year-old man with significant lower urinary tract symptoms due to benign prostatic hyperplasia, probably related to nontarget embolization. Magnetic resonance imaging revealed an 85.5-g prostate and urodynamic studies confirmed Inferior vesical obstruction. PAE was performed bilaterally. During the first 3 days of follow-up, a small amount of blood mixed in the stool was observed. Colonoscopy identified rectal ulcers at day 4, which had then disappeared by day 16 post PAE without treatment. PAE is a safe, effective procedure with a low complication rate, but interventionalists should be aware of the risk of rectal nontarget embolization.", "title": "" }, { "docid": "e79e94549bca30e3a4483f7fb9992932", "text": "The use of semantic technologies and Semantic Web ontologies in particular have enabled many recent developments in information integration, search engines, and reasoning over formalised knowledge. Ontology Design Patterns have been proposed to be useful in simplifying the development of Semantic Web ontologies by codifying and reusing modelling best practices. This thesis investigates the quality of Ontology Design Patterns. The main contribution of the thesis is a theoretically grounded and partially empirically evaluated quality model for such patterns including a set of quality characteristics, indicators, measurement methods and recommendations. The quality model is based on established theory on information system quality, conceptual model quality, and ontology evaluation. It has been tested in a case study setting and in two experiments. 
The main findings of this thesis are that the quality of Ontology Design Patterns can be identified, formalised and measured, and furthermore, that these qualities interact in such a way that ontology engineers using patterns need to make tradeoffs regarding which qualities they wish to prioritise. The developed model may aid them in making these choices. This work has been supported by Jönköping University. Department of Computer and Information Science Linköping University SE-581 83 Linköping, Sweden", "title": "" }, { "docid": "ecd144226fdb065c2325a0d3131fd802", "text": "The unknown and the invisible exploit the unwary and the uninformed for illicit financial gain and reputation damage.", "title": "" }, { "docid": "a2e597c8e4ff156eaa72a4981b81df8d", "text": "OBJECTIVE\nAggregation and deposition of amyloid beta (Abeta) in the brain is thought to be central to the pathogenesis of Alzheimer's disease (AD). Recent studies suggest that cerebrospinal fluid (CSF) Abeta levels are strongly correlated with AD status and progression, and may be a meaningful endophenotype for AD. Mutations in presenilin 1 (PSEN1) are known to cause AD and change Abeta levels. In this study, we have investigated DNA sequence variation in the presenilin (PSEN1) gene using CSF Abeta levels as an endophenotype for AD.\n\n\nMETHODS\nWe sequenced the exons and flanking intronic regions of PSEN1 in clinically characterized research subjects with extreme values of CSF Abeta levels.\n\n\nRESULTS\nThis novel approach led directly to the identification of a disease-causing mutation in a family with late-onset AD.\n\n\nINTERPRETATION\nThis finding suggests that CSF Abeta may be a useful endophenotype for genetic studies of AD. Our results also suggest that PSEN1 mutations can cause AD with a large range in age of onset, spanning both early- and late-onset AD.", "title": "" }, { "docid": "72fec6dc287b0aa9aea97a22268c1125", "text": "Given a symmetric matrix what is the nearest correlation matrix, that is, the nearest symmetric positive semidefinite matrix with unit diagonal? This problem arises in the finance industry, where the correlations are between stocks. For distance measured in two weighted Frobenius norms we characterize the solution using convex analysis. We show how the modified alternating projections method can be used to compute the solution for the more commonly used of the weighted Frobenius norms. In the finance application the original matrix has many zero or negative eigenvalues; we show that for a certain class of weights the nearest correlation matrix has correspondingly many zero eigenvalues and that this fact can be exploited in the computation.", "title": "" } ]
scidocsrr
f1ee3d65fae8212a76e30e038be722c6
How to Protect ADS-B: Confidentiality Framework and Efficient Realization Based on Staged Identity-Based Encryption
[ { "docid": "47c723b0c41fb26ed7caa077388e2e1b", "text": "Automatic dependent surveillance-broadcast (ADS-B) is the communications protocol currently being rolled out as part of next-generation air transportation systems. As the heart of modern air traffic control, it will play an essential role in the protection of two billion passengers per year, in addition to being crucial to many other interest groups in aviation. The inherent lack of security measures in the ADS-B protocol has long been a topic in both the aviation circles and in the academic community. Due to recently published proof-of-concept attacks, the topic is becoming ever more pressing, particularly with the deadline for mandatory implementation in most airspaces fast approaching. This survey first summarizes the attacks and problems that have been reported in relation to ADS-B security. Thereafter, it surveys both the theoretical and practical efforts that have been previously conducted concerning these issues, including possible countermeasures. In addition, the survey seeks to go beyond the current state of the art and gives a detailed assessment of security measures that have been developed more generally for related wireless networks such as sensor networks and vehicular ad hoc networks, including a taxonomy of all considered approaches.", "title": "" }, { "docid": "6d18ef0d7e78a970c46c7c8f68675e85", "text": "Aircraft data communications and networking are key enablers for civilian air transportation systems to meet projected aviation demands of the next 20 years and beyond. In this paper, we show how the envisioned e-enabled aircraft plays a central role in streamlining system modernization efforts. We show why performance targets such as safety, security, capacity, efficiency, environmental benefit, travel comfort, and convenience will heavily depend on communications, networking and cyber-physical security capabilities of the e-enabled aircraft. The paper provides a comprehensive overview of the state-of-the-art research and standardization efforts. We highlight unique challenges, recent advances, and open problems in enhancing operations as well as certification of the future e-enabled aircraft.", "title": "" }, { "docid": "d83853692581644f3a86ad0e846c48d2", "text": "This paper investigates cyber security issues with automatic dependent surveillance broadcast (ADS-B) based air traffic control. Before wide-scale deployment in civil aviation, any airborne or ground-based technology must be ensured to have no adverse impact on safe and profitable system operations, both under normal conditions and failures. With ADS-B, there is a lack of a clear understanding about vulnerabilities, how they can impact airworthiness and what failure conditions they can potentially induce. The proposed work streamlines a threat assessment methodology for security evaluation of ADS-B based surveillance. To the best of our knowledge, this work is the first to identify the need for mechanisms to secure ADS-B based airborne surveillance and propose a security solution. This paper presents preliminary findings and results of the ongoing investigation.12", "title": "" } ]
[ { "docid": "a49b2152082aa23f9b90d298064b9733", "text": "The number of steps required to compute a function depends, in general, on the type of computer that is used, on the choice of computer program, and on the input-output code. Nevertheless, the results obtained in this paper are so general as to be nearly independent of these considerations.\nA function is exhibited that requires an enormous number of steps to be computed, yet has a “nearly quickest” program: Any other program for this function, no matter how ingeniously designed it may be, takes practically as many steps as this nearly quickest program.\nA different function is exhibited with the property that no matter how fast a program may be for computing this function another program exists for computing the function very much faster.", "title": "" }, { "docid": "75519b3621d66f55202ce4cbecc8bff1", "text": "belief-network inference Adnan Darwiche and Gregory Provan Rockwell Science Center 1049 Camino Dos Rios Thousand Oaks, CA 91360 fdarwiche, provang@risc.rockwell.com Abstract We describe a new paradigm for implementing inference in belief networks, which consists of two steps: (1) compiling a belief network into an arithmetic expression called a Query DAG (Q-DAG); and (2) answering queries using a simple evaluation algorithm. Each non-leaf node of a Q-DAG represents a numeric operation, a number, or a symbol for evidence. Each leaf node of a Q-DAG represents the answer to a network query, that is, the probability of some event of interest. It appears that Q-DAGs can be generated using any of the standard algorithms for exact inference in belief networks | we show how they can be generated using the clustering algorithm. The time and space complexity of a Q-DAG generation algorithm is no worse than the time complexity of the inference algorithm on which it is based. The complexity of a Q-DAG evaluation algorithm is linear in the size of the Q-DAG, and such inference amounts to a standard evaluation of the arithmetic expression it represents. The main value of Q-DAGs is in reducing the software and hardware resources required to utilize belief networks in on-line, real-world applications. The proposed framework also facilitates the development of on-line inference on di erent software and hardware platforms due to the simplicity of the Q-DAG evaluation algorithm.", "title": "" }, { "docid": "9817009ca281ae09baf45b5f8bdef87d", "text": "The rise of graph-structured data such as social networks, regulatory networks, citation graphs, and functional brain networks, in combination with resounding success of deep learning in various applications, has brought the interest in generalizing deep learning models to non-Euclidean domains. In this paper, we introduce a new spectral domain convolutional architecture for deep learning on graphs. The core ingredient of our model is a new class of parametric rational complex functions (Cayley polynomials) allowing to efficiently compute spectral filters on graphs that specialize on frequency bands of interest. Our model generates rich spectral filters that are localized in space, scales linearly with the size of the input data for sparsely-connected graphs, and can handle different constructions of Laplacian operators. 
Extensive experimental results show the superior performance of our approach on spectral image classification, community detection, vertex classification and matrix completion tasks.", "title": "" }, { "docid": "df155f17d4d810779ee58bafcaab6f7b", "text": "OBJECTIVE\nTo explore the types, prevalence and associated variables of cyberbullying among students with intellectual and developmental disability attending special education settings.\n\n\nMETHODS\nStudents (n = 114) with intellectual and developmental disability who were between 12-19 years of age completed a questionnaire containing questions related to bullying and victimization via the internet and cellphones. Other questions concerned sociodemographic characteristics (IQ, age, gender, diagnosis), self-esteem and depressive feelings.\n\n\nRESULTS\nBetween 4-9% of students reported bullying or victimization of bullying at least once a week. Significant associations were found between cyberbullying and IQ, frequency of computer usage and self-esteem and depressive feelings. No associations were found between cyberbullying and age and gender.\n\n\nCONCLUSIONS\nCyberbullying is prevalent among students with intellectual and developmental disability in special education settings. Programmes should be developed to deal with this issue in which students, teachers and parents work together.", "title": "" }, { "docid": "fe194d00c129e05f17e7926d15f37c37", "text": "Synthesis, simulation and experiment of unequally spaced resonant slotted-waveguide antenna arrays based on the infinite wavelength propagation property of composite right/left-handed (CRLH) waveguide has been demonstrated in this paper. Both the slot element spacing and excitation amplitude of the antenna array can be adjusted to tailor the radiation pattern. A specially designed shorted CRLH waveguide, as the feed structure of the antenna array, is to work at the infinite wavelength propagation frequency. This ensures that all unequally spaced slot elements along the shorted CRLH waveguide wall can be excited either inphase or antiphase. Four different unequally spaced resonant slotted-waveguide antenna arrays are designed to form pencil, flat-topped and difference beam patterns. Through the synthesis, simulation and experiment, it proves that the proposed arrays are able to exhibit better radiation performances than conventional resonant slotted-waveguide antenna arrays.", "title": "" }, { "docid": "7ff0befa9e6d5694228a8199cd3c1c8c", "text": "This article examined the effects of product aesthetics on several outcome variables in usability tests. Employing a computer simulation of a mobile phone, 60 adolescents (14-17 yrs) were asked to complete a number of typical tasks of mobile phone users. Two functionally identical mobile phones were manipulated with regard to their visual appearance (highly appealing vs not appealing) to determine the influence of appearance on perceived usability, performance measures and perceived attractiveness. The results showed that participants using the highly appealing phone rated their appliance as being more usable than participants operating the unappealing model. Furthermore, the visual appearance of the phone had a positive effect on performance, leading to reduced task completion times for the attractive model. 
The study discusses the implications for the use of adolescents in ergonomic research.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "c04ae48f1ff779da8a565653c0976636", "text": "It is widely agreed on that most cognitive processes are contextual in the sense that they depend on the environment, or context, inside which they are carried on. Even concentrating on the issue of contextuality in reasoning, many different notions of context can be found in the Artificial Intelligence literature, see for instance [Giunchiglia 1991a, Giunchiglia & Weyhrauch 1988, Guha 1990, Guha & Lenat 1990, Shoham 1991, McCarthy 1990b]. Our intuition is that reasoning is usually performed on a subset of the global knowledge base; we never consider all we know but only a very small subset of it. The notion of context is used as a means of formalizing this idea of localization. Roughly speaking, we take a context to be the set of facts used locally to prove a given goal plus the inference routines used to reason about them (which in general are different for different sets of facts). Our perspective is similar to that proposed in [McCarthy 1990b, McCarthy 1991]. The goal of this paper is to propose an epistemologically adequate theory of reasoning with contexts. The emphasis is on motivations and intuitions, rather than on technicalities. The two basic definitions are reported in appendix A. Ideas are described incrementally with increasing level of detail. Thus, section 2 describes why contexts are an important notion to consider as part of our ontology. This is achieved also by comparing contexts with situations, another ontologically very important concept. Section 3 then goes more into the technical details and proposes that contexts should be formalized as particular mathematical objects, namely as logical theories. Reasoning with contexts is then formalized as a set of deductions, each deduction carried out inside a context, connected by appropriate \"bridge rules\". Finally, section 4 describes how an important example of common sense reasoning, reasoning about reasoning, can be formalized as multicontextual reasoning.", "title": "" }, { "docid": "9c183992492880d8b6e1a644e014a72f", "text": "Repeated measures analyses of variance are the method of choice in many studies from experimental psychology and the neurosciences. Data from these fields are often characterized by small sample sizes, high numbers of factor levels of the within-subjects factor(s), and nonnormally distributed response variables such as response times. For a design with a single within-subjects factor, we investigated Type I error control in univariate tests with corrected degrees of freedom, the multivariate approach, and a mixed-model (multilevel) approach (SAS PROC MIXED) with Kenward-Roger's adjusted degrees of freedom. We simulated multivariate normal and nonnormal distributions with varied population variance-covariance structures (spherical and nonspherical), sample sizes (N), and numbers of factor levels (K). 
For normally distributed data, as expected, the univariate approach with Huynh-Feldt correction controlled the Type I error rate with only very few exceptions, even if samples sizes as low as three were combined with high numbers of factor levels. The multivariate approach also controlled the Type I error rate, but it requires N ≥ K. PROC MIXED often showed acceptable control of the Type I error rate for normal data, but it also produced several liberal or conservative results. For nonnormal data, all of the procedures showed clear deviations from the nominal Type I error rate in many conditions, even for sample sizes greater than 50. Thus, none of these approaches can be considered robust if the response variable is nonnormally distributed. The results indicate that both the variance heterogeneity and covariance heterogeneity of the population covariance matrices affect the error rates.", "title": "" }, { "docid": "2119a6fcc721124690d6cc2fe6552724", "text": "A development of humanoid robot HRP-2 is presented in this paper. HRP-2 is a humanoid robotics platform, which we developed in phase two of HRP. HRP was a humanoid robotics project, which had run by the Ministry of Economy, Trade and Industry (METI) of Japan from 1998FY to 2002FY for five years. The ability of the biped locomotion of HRP-2 is improved so that HRP-2 can cope with uneven surface, can walk at two third level of human speed, and can walk on a narrow path. The ability of whole body motion of HRP-2 is also improved so that HRP-2 can get up by a humanoid robot's own self if HRP-2 tips over safely. In this paper, the appearance design, the mechanisms, the electrical systems, specifications, and features upgraded from its prototype are also introduced.", "title": "" }, { "docid": "9df6e9bd41b7a5c48f10cd542fa5e6d9", "text": "Many machine learning problems can be interpreted as learning for matching two types of objects (e.g., images and captions, users and products, queries and documents, etc.). The matching level of two objects is usually measured as the inner product in a certain feature space, while the modeling effort focuses on mapping of objects from the original space to the feature space. This schema, although proven successful on a range of matching tasks, is insufficient for capturing the rich structure in the matching process of more complicated objects. In this paper, we propose a new deep architecture to more effectively model the complicated matching relations between two objects from heterogeneous domains. More specifically, we apply this model to matching tasks in natural language, e.g., finding sensible responses for a tweet, or relevant answers to a given question. This new architecture naturally combines the localness and hierarchy intrinsic to the natural language problems, and therefore greatly improves upon the state-of-the-art models.", "title": "" }, { "docid": "13800973a4bc37f26319c0bb76fce731", "text": "Light fields are a powerful concept in computational imaging and a mainstay in image-based rendering; however, so far their acquisition required either carefully designed and calibrated optical systems (micro-lens arrays), or multi-camera/multi-shot settings. Here, we show that fully calibrated light field data can be obtained from a single ordinary photograph taken through a partially wetted window. Each drop of water produces a distorted view on the scene, and the challenge of recovering the unknown mapping from pixel coordinates to refracted rays in space is a severely underconstrained problem. 
The key idea behind our solution is to combine ray tracing and low-level image analysis techniques (extraction of 2D drop contours and locations of scene features seen through drops) with state-of-the-art drop shape simulation and an iterative refinement scheme to enforce photo-consistency across features that are seen in multiple views. This novel approach not only recovers a dense pixel-to-ray mapping, but also the refractive geometry through which the scene is observed, to high accuracy. We therefore anticipate that our inherently self-calibrating scheme might also find applications in other fields, for instance in materials science where the wetting properties of liquids on surfaces are investigated.", "title": "" }, { "docid": "37653b46f34b1418ad7dbfc59cbfe16a", "text": "The Nonlinear autoregressive exogenous (NARX) model, which predicts the current value of a time series based upon its previous values as well as the current and past values of multiple driving (exogenous) series, has been studied for decades. Despite the fact that various NARX models have been developed, few of them can capture the long-term temporal dependencies appropriately and select the relevant driving series to make predictions. In this paper, we propose a dual-stage attention-based recurrent neural network (DA-RNN) to address these two issues. In the first stage, we introduce an input attention mechanism to adaptively extract relevant driving series (a.k.a., input features) at each time step by referring to the previous encoder hidden state. In the second stage, we use a temporal attention mechanism to select relevant encoder hidden states across all time steps. With this dual-stage attention scheme, our model can not only make predictions effectively, but can also be easily interpreted. Thorough empirical studies based upon the SML 2010 dataset and the NASDAQ 100 Stock dataset demonstrate that the DA-RNN can outperform state-of-the-art methods for time series prediction.", "title": "" }, { "docid": "7bc81d5c42266a75fe46d99a76b0861d", "text": "Stem cells continue to garner attention by the news media and play a role in public and policy discussions of emerging technologies. As new media platforms develop, it is important to understand how different news media represents emerging stem cell technologies and the role these play in public discussions. We conducted a comparative analysis of newspaper and sports websites coverage of one recent high profile case: Gordie Howe’s stem cell treatment in Mexico. Using qualitative coding methods, we analyzed news articles and readers’ comments from Canadian and US newspapers and sports websites. Results indicate that the efficacy of stem cell treatments is often assumed in news coverage and readers’ comments indicate a public with a wide array of beliefs and perspectives on stem cells and their clinical efficacy. Media coverage that presents uncritical perspectives on unproven stem cell therapies may create patient expectations, may have an affect on policy discussions, and help to feed the marketing of unproven therapies. 
However, news coverage that provides more balanced or critical coverage of unproven stem cell treatments may also inspire more critical discussion, as reflected in readers’ comments.", "title": "" }, { "docid": "9e35454e25d78714576f140928d4a666", "text": "Learning commonsense knowledge from natural language text is nontrivial due to reporting bias: people rarely state the obvious, e.g., “My house is bigger than me.” However, while rarely stated explicitly, this trivial everyday knowledge does influence the way people talk about the world, which provides indirect clues to reason about the world. For example, a statement like, “Tyler entered his house” implies that his house is bigger than Tyler. In this paper, we present an approach to infer relative physical knowledge of actions and objects along five dimensions (e.g., size, weight, and strength) from unstructured natural language text. We frame knowledge acquisition as joint inference over two closely related problems: learning (1) relative physical knowledge of object pairs and (2) physical implications of actions when applied to those object pairs. Empirical results demonstrate that it is possible to extract knowledge of actions and objects from language and that joint inference over different types of knowledge improves performance.", "title": "" }, { "docid": "b3449b09e45cb56e2dbd91d82c18752a", "text": "Applications with a dynamic workload demand need access to a flexible infrastructure to meet performance guarantees and minimize resource costs. While cloud computing provides the elasticity to scale the infrastructure on demand, cloud service providers lack control and visibility of user space applications, making it difficult to accurately scale the underlying infrastructure. Thus, the burden of scaling falls on the user. In this paper, we propose a new cloud service, Dependable Compute Cloud (DC2), that automatically scales the infrastructure to meet the user-specified performance requirements. DC2 employs Kalman filtering to automatically learn the (possibly changing) system parameters for each application, allowing it to proactively scale the infrastructure to meet performance guarantees. DC2 is designed for the cloud it is application-agnostic and does not require any offline application profiling or benchmarking. Our implementation results on OpenStack using a multi-tier application under a range of workload traces demonstrate the robustness and superiority of DC2 over existing rule-based approaches.", "title": "" }, { "docid": "42b1052a0d1e1536228b1b90602051ea", "text": "Improving the quality of healthcare and the prospects of \"aging in place\" using wireless sensor technology requires solving difficult problems in scale, energy management, data access, security, and privacy. We present AlarmNet, a novel system for assisted living and residential monitoring that uses a two-way flow of data and analysis between the front- and back-ends to enable context-aware protocols that are tailored to residents' individual patterns of living. AlarmNet integrates environmental, physiological, and activity sensors in a scalable heterogeneous architecture. The SenQ query protocol provides real-time access to data and lightweight in-network processing. 
Circadian activity rhythm analysis learns resident activity patterns and feeds them back into the network to aid context-aware power management and dynamic privacy policies.", "title": "" }, { "docid": "61f5586aa35d4804c336f88603fc18a6", "text": "The authors use the term, “Group Model Building” (Richardson and Andersen 1995; Vennix 1996; 1999) to refer to a bundle of techniques used to construct system dynamics models working directly with client groups on key strategic decisions. We use facilitated face-to-face meetings to elicit model structure and to engage client teams directly in the process of model conceptualization, formulation, analysis, and decision making.", "title": "" }, { "docid": "b4462bf06bac13af9e40023019619a78", "text": "Successful schools ensure that all students master basic skills such as reading and math and have strong backgrounds in other subject areas, including science, history, and foreign language. Recently, however, educators and parents have begun to support a broader educational agenda – one that enhances teachers’ and students’ social and emotional skills. Research indicates that social and emotional skills are associated with success in many areas of life, including effective teaching, student learning, quality relationships, and academic performance. Moreover, a recent meta-analysis of over 300 studies showed that programs designed to enhance social and emotional learning significantly improve students’ social and emotional competencies as well as academic performance. Incorporating social and emotional learning programs into school districts can be challenging, as programs must address a variety of topics in order to be successful. One organization, the Collaborative for Academic, Social, and Emotional Learning (CASEL), provides leadership for researchers, educators, and policy makers to advance the science and practice of school-based social and emotional learning programs. According to CASEL, initiatives to integrate programs into schools should include training on social and emotional skills for both teachers and students, and should receive backing from all levels of the district, including the superintendent, school principals, and teachers. Additionally, programs should be field-tested, evidence-based, and founded on sound", "title": "" } ]
scidocsrr
00c876c636eb89f05b9aedcdca7fcee3
Modeling avalanche breakdown for ESD diodes in integrated circuits
[ { "docid": "3c4219212dfeb01d2092d165be0cfb44", "text": "Classical substrate noise analysis considers the silicon resistivity of an integrated circuit only as doping dependent besides neglecting diffusion currents as well. In power circuits minority carriers are injected into the substrate and propagate by drift–diffusion. In this case the conductivity of the substrate is spatially modulated and this effect is particularly important in high injection regime. In this work a description of the coupling between majority and minority drift–diffusion currents is presented. A distributed model of the substrate is then proposed to take into account the conductivity modulation and its feedback on diffusion processes. The model is expressed in terms of equivalent circuits in order to be fully compatible with circuit simulators. The simulation results are then discussed for diodes and bipolar transistors and compared to the ones obtained from physical device simulations and measurements. 2014 Published by Elsevier Ltd.", "title": "" } ]
[ { "docid": "d7ee1f283cf930310743c98ad8137bcf", "text": "The volume and complexity of diagnostic imaging is increasing at a pace faster than the availability of human expertise to interpret it. Artificial intelligence has shown great promise in classifying two-dimensional photographs of some common diseases and typically relies on databases of millions of annotated images. Until now, the challenge of reaching the performance of expert clinicians in a real-world clinical pathway with three-dimensional diagnostic scans has remained unsolved. Here, we apply a novel deep learning architecture to a clinically heterogeneous set of three-dimensional optical coherence tomography scans from patients referred to a major eye hospital. We demonstrate performance in making a referral recommendation that reaches or exceeds that of experts on a range of sight-threatening retinal diseases after training on only 14,884 scans. Moreover, we demonstrate that the tissue segmentations produced by our architecture act as a device-independent representation; referral accuracy is maintained when using tissue segmentations from a different type of device. Our work removes previous barriers to wider clinical use without prohibitive training data requirements across multiple pathologies in a real-world setting. A novel deep learning architecture performs device-independent tissue segmentation of clinical 3D retinal images followed by separate diagnostic classification that meets or exceeds human expert clinical diagnoses of retinal disease.", "title": "" }, { "docid": "24c744337d831e541f347bbdf9b6b48a", "text": "Modelling and animation of crawler UGV's caterpillars is a complicated task, which has not been completely resolved in ROS/Gazebo simulators. In this paper, we proposed an approximation of track-terrain interaction of a crawler UGV, perform modelling and simulation of Russian crawler robot \"Engineer\" within ROS/Gazebo and visualize its motion in ROS/RViz software. Finally, we test the proposed model in heterogeneous robot group navigation scenario within uncertain Gazebo environment.", "title": "" }, { "docid": "f1582ae3d1ce78c1ad84ab5e552e29bd", "text": "The emergence of sensory-guided behavior depends on sensorimotor coupling during development. How sensorimotor experience shapes neural processing is unclear. Here, we show that the coupling between motor output and visual feedback is necessary for the functional development of visual processing in layer 2/3 (L2/3) of primary visual cortex (V1) of the mouse. Using a virtual reality system, we reared mice in conditions of normal or random visuomotor coupling. We recorded the activity of identified excitatory and inhibitory L2/3 neurons in response to transient visuomotor mismatches in both groups of mice. Mismatch responses in excitatory neurons were strongly experience dependent and driven by a transient release from inhibition mediated by somatostatin-positive interneurons. These data are consistent with a model in which L2/3 of V1 computes a difference between an inhibitory visual input and an excitatory locomotion-related input, where the balance between these two inputs is finely tuned by visuomotor experience.", "title": "" }, { "docid": "d9324f415de22d8f2dfbc49c0f81d241", "text": "Agriculture has been one of the most important industries in human history since it provides humans with absolutely indispensable resources such as food, fiber, and energy. 
The agriculture industry could be further developed by employing new technologies, in particular, the Internet of Things (IoT). In this paper, we present a connected farm based on IoT systems, which aims to provide smart farming systems for end users. A detailed design and implementation for connected farms are illustrated, and their advantages are explained with service scenarios compared to previous smart farms. We hope this work will show the power of IoT as a disruptive technology helping across multiple industries including agriculture.", "title": "" }, { "docid": "d527daf7ae59c7bcf0989cad3183efbe", "text": "In today’s Web, Web services are created and updated on the fly. It’s already beyond the human ability to analyze them and generate the composition plan manually. A number of approaches have been proposed to tackle that problem. Most of them are inspired by research in cross-enterprise workflow and AI planning. This paper gives an overview of recent research efforts on automatic Web service composition from both the workflow and AI planning research communities.", "title": "" }, { "docid": "03a6425423516d0f978bb5f8abe0d62d", "text": "Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence/robotics communities. We will argue that the attempts to allow machines to make ethical decisions or to have rights are misguided. Instead we propose a new science of safety engineering for intelligent artificial agents. In particular we issue a challenge to the scientific community to develop intelligent systems capable of proving that they are in fact safe even under recursive self-improvement.", "title": "" }, { "docid": "fd9411cfa035139010be0935d9e52865", "text": "This paper presents a robotic manipulation system capable of autonomously positioning a multi-segment soft fluidic elastomer robot in three dimensions. Specifically, we present an extremely soft robotic manipulator morphology that is composed entirely from low durometer elastomer, powered by pressurized air, and designed to be both modular and durable. To understand the deformation of a single arm segment, we develop and experimentally validate a static deformation model. Then, to kinematically model the multi-segment manipulator, we use a piece-wise constant curvature assumption consistent with more traditional continuum manipulators. In addition, we define a complete fabrication process for this new manipulator and use this process to make multiple functional prototypes. In order to power the robot’s spatial actuation, a high capacity fluidic drive cylinder array is implemented, providing continuously variable, closed-circuit gas delivery. Next, using real-time data from a vision system, we develop a processing and control algorithm that generates realizable kinematic curvature trajectories and controls the manipulator’s configuration along these trajectories. Lastly, we experimentally demonstrate new capabilities offered by this soft fluidic elastomer manipulation system such as entering and advancing through confined three-dimensional environments as well as conforming to goal shape-configurations within a sagittal plane under closed-loop control.", "title": "" }, { "docid": "b9b634c93f2cc216370a94128aeab596", "text": "Life-cycle models of labor supply predict a positive relationship between hours supplied and transitory changes in wages. 
We tested this prediction   ", "title": "" }, { "docid": "670556463e3204a98b1e407ea0619a1f", "text": "1 Ekaterina Prasolova-Forland, IDI, NTNU, Sem Salandsv 7-9, N-7491 Trondheim, Norway ekaterip@idi.ntnu.no Abstract  This paper discusses awareness support in educational context, focusing on the support offered by collaborative virtual environments. Awareness plays an important role in everyday educational activities, especially in engineering courses where projects and group work is an integral part of the curriculum. In this paper we will provide a general overview of awareness in computer supported cooperative work and then focus on the awareness mechanisms offered by CVEs. We will also discuss the role and importance of these mechanisms in educational context and make some comparisons between awareness support in CVEs and in more traditional tools.", "title": "" }, { "docid": "5e9d63bfc3b4a66e0ead79a2d883adfe", "text": "Cloud computing is becoming a major trend for delivering and accessing infrastructure on demand via the network. Meanwhile, the usage of FPGAs (Field Programmable Gate Arrays) for computation acceleration has made significant inroads into multiple application domains due to their ability to achieve high throughput and predictable latency, while providing programmability, low power consumption and time-to-value. Many types of workloads, e.g. databases, big data analytics, and high performance computing, can be and have been accelerated by FPGAs. As more and more workloads are being deployed in the cloud, it is appropriate to consider how to make FPGAs and their capabilities available in the cloud. However, such integration is non-trivial due to issues related to FPGA resource abstraction and sharing, compatibility with applications and accelerator logics, and security, among others. In this paper, a general framework for integrating FPGAs into the cloud is proposed and a prototype of the framework is implemented based on OpenStack, Linux-KVM and Xilinx FPGAs. The prototype enables isolation between multiple processes in multiple VMs, precise quantitative acceleration resource allocation, and priority-based workload scheduling. Experimental results demonstrate the effectiveness of this prototype, an acceptable overhead, and good scalability when hosting multiple VMs and processes.", "title": "" }, { "docid": "958cde8dec4d8df9c6b6d83a7740e2d0", "text": "Distributed applications use replication, implemented by protocols like Paxos, to ensure data availability and transparently mask server failures. This paper presents a new approach to achieving replication in the data center without the performance cost of traditional methods. Our work carefully divides replication responsibility between the network and protocol layers. The network orders requests but does not ensure reliable delivery – using a new primitive we call ordered unreliable multicast (OUM). Implementing this primitive can be achieved with near-zero-cost in the data center. Our new replication protocol, NetworkOrdered Paxos (NOPaxos), exploits network ordering to provide strongly consistent replication without coordination. 
The resulting system not only outperforms both latency- and throughput-optimized protocols on their respective metrics, but also yields throughput within 2% and latency within 16 μs of an unreplicated system – providing replication without the performance cost.", "title": "" }, { "docid": "4f2e6de82e2a79ce26a4b26b3177e977", "text": "The World Wide Web has become the hotbed of a multi-billion dollar underground economy among cyber criminals whose victims range from individual Internet users to large corporations and even government organizations. As phishing attacks are increasingly being used by criminals to facilitate their cyber schemes, it is important to develop effective phishing detection tools. In this paper, we propose a rule-based method to detect phishing webpages. We first study a number of phishing websites to examine various tactics employed by phishers and generate a rule set based on observations. We then use Decision Tree and Logistic Regression learning algorithms to apply the rules and achieve 95-99% accuracy, with a false positive rate of 0.5-1.5% and modest false negatives. Thus, it is demonstrated that our rule-based method for phishing detection achieves performance comparable to learning machine based methods, with the great advantage of understandable rules derived from experience. Keywords: Phishing attack, phishing website, rule-based, machine learning, phishing detection, decision tree", "title": "" }, { "docid": "a814fedf9bedf31911f8db43b0d494a5", "text": "A critical period for language learning is often defined as a sharp decline in learning outcomes with age. This study examines the relevance of the critical period for English speaking proficiency among immigrants in the US. It uses microdata from the 2000 US Census, a model of language acquisition, and a flexible specification of an estimating equation based on 64 age-at-migration dichotomous variables. Self-reported English speaking proficiency among immigrants declines more-or-less monotonically with age at migration, and this relationship is not characterized by any sharp decline or discontinuity that might be considered consistent with a “critical” period. The findings are robust across the various immigrant samples, and between the genders. (110 words).", "title": "" }, { "docid": "f6266e5c4adb4fa24cc353dccccaf6db", "text": "Clustering plays an important role in many large-scale data analyses providing users with an overall understanding of their data. Nonetheless, clustering is not an easy task due to noisy features and outliers existing in the data, and thus the clustering results obtained from automatic algorithms often do not make clear sense. To remedy this problem, automatic clustering should be complemented with interactive visualization strategies. This paper proposes an interactive visual analytics system for document clustering, called iVisClustering, based on a widely used topic modeling method, latent Dirichlet allocation (LDA). iVisClustering provides a summary of each cluster in terms of its most representative keywords and visualizes soft clustering results in parallel coordinates. The main view of the system provides a 2D plot that visualizes cluster similarities and the relation among data items with a graph-based representation. iVisClustering provides several other views, which contain useful interaction methods. 
With the help of these visualization modules, we can interactively refine the clustering results in various ways.", "title": "" }, { "docid": "218c5fdd541a839094e8010ed6a56d22", "text": "In this paper, we propose a consistent-aware deep learning (CADL) framework for person re-identification in a camera network. Unlike most existing person re-identification methods which identify whether two body images are from the same person, our approach aims to obtain the maximal correct matches for the whole camera network. Different from recently proposed camera network based re-identification methods which only consider the consistent information in the matching stage to obtain a global optimal association, we exploit such consistent-aware information under a deep learning framework where both feature representation and image matching are automatically learned with certain consistent constraints. Specifically, we reach the global optimal solution and balance the performance between different cameras by optimizing the similarity and association iteratively. Experimental results show that our method obtains significant performance improvement and outperforms the state-of-the-art methods by large margins.", "title": "" }, { "docid": "1b5427ff132a4ace0031b667eb6ff5f3", "text": "The obesity epidemic shows no signs of abating. There is an urgent need to push back against the environmental forces that are producing gradual weight gain in the population. Using data from national surveys, we estimate that affecting energy balance by 100 kilocalories per day (by a combination of reductions in energy intake and increases in physical activity) could prevent weight gain in most of the population. This can be achieved by small changes in behavior, such as 15 minutes per day of walking or eating a few less bites at each meal. Having a specific behavioral target for the prevention of weight gain may be key to arresting the obesity epidemic.", "title": "" }, { "docid": "34976e12739060a443ad0cfbb373fd3b", "text": "The detection of failures is a fundamental issue for fault-tolerance in distributed systems. Recently, many people have come to realize that failure detection ought to be provided as some form of generic service, similar to IP address lookup or time synchronization. However, this has not been successful so far; one of the reasons being the fact that classical failure detectors were not designed to satisfy several application requirements simultaneously. We present a novel abstraction, called accrual failure detectors, that emphasizes flexibility and expressiveness and can serve as a basic building block to implementing failure detectors in distributed systems. Instead of providing information of a binary nature (trust vs. suspect), accrual failure detectors output a suspicion level on a continuous scale. The principal merit of this approach is that it favors a nearly complete decoupling between application requirements and the monitoring of the environment. In this paper, we describe an implementation of such an accrual failure detector, that we call the φ failure detector. The particularity of the φ failure detector is that it dynamically adjusts to current network conditions the scale on which the suspicion level is expressed. We analyzed the behavior of our φ failure detector over an intercontinental communication link over a week. 
Our experimental results show that it performs as well as other known adaptive failure detection mechanisms, with improved flexibility.", "title": "" }, { "docid": "35724d9d93c5780cac4287fc866a3529", "text": "Advancing research into autonomous micro aerial vehicle navigation requires data structures capable of representing indoor and outdoor 3D environments. The vehicle must be able to update the map structure in real time using readings from range-finding sensors when mapping unknown areas; it must also be able to look up occupancy information from the map for the purposes of localization and path-planning. Mapping models that have been used for these tasks include voxel grids, multi-level surface maps, and octrees. In this paper, we suggest a new approach to 3D mapping using a multi-volume occupancy grid, or MVOG. MVOGs explicitly store information about both obstacles and free space. This allows us to correct previous potentially erroneous sensor readings by incrementally fusing in new positive or negative sensor information. In turn, this enables extracting more reliable probabilistic information about the occupancy of 3D space. MVOGs outperform existing probabilistic 3D mapping methods in terms of memory usage, due to the fact that observations are grouped together into continuous vertical volumes to save space. We describe the techniques required for mapping using MVOGs, and analyze their performance using indoor and outdoor experimental data.", "title": "" }, { "docid": "a688f040f616faff3db13be4b1c052df", "text": "Intracellular fucoidanase was isolated from the marine bacterium, Formosa algae strain KMM 3553. The first appearance of fucoidan enzymatic hydrolysis products in a cell-free extract was detected after 4 h of bacterial growth, and maximal fucoidanase activity was observed after 12 h of growth. The fucoidanase displayed maximal activity in a wide range of pH values, from 6.5 to 9.1. The presence of Mg2+, Ca2+ and Ba2+ cations strongly activated the enzyme; however, Cu2+ and Zn2+ cations had inhibitory effects on the enzymatic activity. The enzymatic activity of fucoidanase was considerably reduced after prolonged (about 60 min) incubation of the enzyme solution at 45 °C. The fucoidanase catalyzed the hydrolysis of fucoidans from Fucus evanescens and Fucus vesiculosus, but not from Saccharina cichorioides. The fucoidanase also did not hydrolyze carrageenan. Desulfated fucoidan from F. evanescens was hydrolysed very weakly in contrast to deacetylated fucoidan, which was hydrolysed more actively compared to the native fucoidan from F. evanescens. Analysis of the structure of the enzymatic products showed that the marine bacterium, F. algae, synthesized an α-l-fucanase with an endo-type action that is specific for 1→4-bonds in a polysaccharide molecule built up of alternating three- and four-linked α-l-fucopyranose residues sulfated mainly at position 2.", "title": "" }, { "docid": "0d95f43ba40942b83e5f118b01ebf923", "text": "Containers are a lightweight virtualization method for running multiple isolated Linux systems under a common host operating system. Container-based computing is revolutionizing the way applications are developed and deployed. A new ecosystem has emerged around the Docker platform to enable container based computing. However, this revolution has yet to reach the HPC community. In this paper, we provide background on Linux Containers and Docker, and how they can be of value to the scientific and HPC community. 
We will explain some of the use cases that motivate the need for user defined images and the uses of Docker. We will describe early work in deploying and integrating Docker into an HPC environment, and some of the pitfalls and challenges we encountered. We will discuss some of the security implications of using Docker and how we have addressed those for a shared user system typical of HPC centers. We will also provide performance measurements to illustrate the low overhead of containers. While our early work has been on cluster-based/CS-series systems, we will describe some preliminary assessment of supporting Docker on Cray XC series supercomputers, and a potential partnership with Cray to explore the feasibility and approaches to using Docker on large systems. Keywords: Docker; User Defined Images; containers; HPC systems", "title": "" } ]
scidocsrr
1f5ff340f15bcde7cc6736ffda487d6c
Adaptive Loss Minimization for Semi-Supervised Elastic Embedding
[ { "docid": "04ba17b4fc6b506ee236ba501d6cb0cf", "text": "We propose a family of learning algorithms based on a new form f regularization that allows us to exploit the geometry of the marginal distribution. We foc us on a semi-supervised framework that incorporates labeled and unlabeled data in a general-p u pose learner. Some transductive graph learning algorithms and standard methods including Suppor t Vector Machines and Regularized Least Squares can be obtained as special cases. We utilize pr op rties of Reproducing Kernel Hilbert spaces to prove new Representer theorems that provide theor e ical basis for the algorithms. As a result (in contrast to purely graph-based approaches) we ob tain a natural out-of-sample extension to novel examples and so are able to handle both transductive and truly semi-supervised settings. We present experimental evidence suggesting that our semiupervised algorithms are able to use unlabeled data effectively. Finally we have a brief discuss ion of unsupervised and fully supervised learning within our general framework.", "title": "" }, { "docid": "6228f059be27fa5f909f58fb60b2f063", "text": "We propose a unified manifold learning framework for semi-supervised and unsupervised dimension reduction by employing a simple but effective linear regression function to map the new data points. For semi-supervised dimension reduction, we aim to find the optimal prediction labels F for all the training samples X, the linear regression function h(X) and the regression residue F0 = F - h(X) simultaneously. Our new objective function integrates two terms related to label fitness and manifold smoothness as well as a flexible penalty term defined on the residue F0. Our Semi-Supervised learning framework, referred to as flexible manifold embedding (FME), can effectively utilize label information from labeled data as well as a manifold structure from both labeled and unlabeled data. By modeling the mismatch between h(X) and F, we show that FME relaxes the hard linear constraint F = h(X) in manifold regularization (MR), making it better cope with the data sampled from a nonlinear manifold. In addition, we propose a simplified version (referred to as FME/U) for unsupervised dimension reduction. We also show that our proposed framework provides a unified view to explain and understand many semi-supervised, supervised and unsupervised dimension reduction techniques. Comprehensive experiments on several benchmark databases demonstrate the significant improvement over existing dimension reduction algorithms.", "title": "" }, { "docid": "da168a94f6642ee92454f2ea5380c7f3", "text": "One of the central problems in machine learning and pattern recognition is to develop appropriate representations for complex data. We consider the problem of constructing a representation for data lying on a low-dimensional manifold embedded in a high-dimensional space. Drawing on the correspondence between the graph Laplacian, the Laplace Beltrami operator on the manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for representing the high-dimensional data. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality-preserving properties and a natural connection to clustering. Some potential applications and illustrative examples are discussed.", "title": "" } ]
[ { "docid": "e0cc48dc60f6c79befb8584cee95e9ea", "text": "Neural Network approaches to time series prediction are briefly discussed, and the need to specify an appropriately sized input window identified. Relevant theoretical results from dynamic systems theory are introduced, and the number of false neighbours heuristic is described, as a means of finding the correct embedding dimension, and thence window size. The method is applied to three time series and the resulting generalisation performance of the trained feed-forward neural network predictors is analysed. It is shown that the heuristics can provide useful information in defining the appropriate network architecture.", "title": "" }, { "docid": "dbafe7db0387b56464ac630404875465", "text": "Recognition of body posture and motion is an important physiological function that can keep the body in balance. Man-made motion sensors have also been widely applied for a broad array of biomedical applications including diagnosis of balance disorders and evaluation of energy expenditure. This paper reviews the state-of-the-art sensing components utilized for body motion measurement. The anatomy and working principles of a natural body motion sensor, the human vestibular system, are first described. Various man-made inertial sensors are then elaborated based on their distinctive sensing mechanisms. In particular, both the conventional solid-state motion sensors and the emerging non solid-state motion sensors are depicted. With their lower cost and increased intelligence, man-made motion sensors are expected to play an increasingly important role in biomedical systems for basic research as well as clinical diagnostics.", "title": "" }, { "docid": "6fa454fc02b5f52e08e6ab0de657ed6b", "text": "Large numbers of children in the world are acquiring one language as their native language and subsequently learning another. There are also many children who are acquiring two or more languages simultaneously in early childhood as part of the natural consequences of being a member of bilingual families and communities. Because bilingualism brings about advantages to children that have an effect on their future development, understanding differences between monolinguals and bilinguals becomes a question of interest. However, on tests of vocabulary bilinguals frequently seem to perform at lower levels than monolinguals (Ben Zeev, 1977b; Doyle, Champagne, & Segalowitz, 1978). The reason for this seems to be that bilingual children have to learn two different labels for everything, which reduces the frequency of a particular word in either language (Ben Zeev, 1977b). This makes the task of acquiring, sorting, and differentiating vocabulary and meaning in two languages much more difficult when compared to the monolingual child’s task in one language (Doyle et al., 1978). Many researchers (Genesee & Nicoladis, 1995; Patterson, 1998; Pearson, Fernandez, and Oller, 1993) have raised questions about the appropriateness of using monolingual vocabulary norms to evaluate bilinguals. In the past, when comparing monolingual and bilingual performance, researchers mainly considered only one language of the bilingual (Ben Zeev, 1977b; Bialystok, 1988; Doyle et al., 1978). However, there is considerable evidence of a vocabulary overlap in the lexicon of bilingual children’s two languages, differing from child to child (Umbel, Pearson, Fernandez, and Oller, 1992). 
This vocabulary overlap is attributed to the child acquiring each language in different contexts resulting in some areas of complementary knowledge across the two languages (Saunders, 1982). It is crucial to examine both languages of bilingual children and account for this overlap in order to assess the size of bilinguals’ vocabulary with validity. This has been very difficult to do, since there are a few standardized measures for vocabulary knowledge in two languages concurrently and no measure are normed for bilingual preschool age children. It has been suggested that when the vocabulary scores of tests in both languages of the bilingual child are combined, their vocabulary equals or exceeds that of monolingual children (Bialystok, 1988; Doyle et al., 1978; Genesee & Nicoladis, 1995). However, this measure of Total Vocabulary (total scores achieved in language A + language B) is not sufficient for the examination of differences in vocabulary size of bilinguals and monolinguals due to the vocabulary overlap. A measure of total unique words or Conceptual Vocabulary, which is a combination of vocabulary scores in both languages considering words describing the same concept as one word, provides additional information about bilinguals’ vocabulary size with regards to knowledge of concepts. Pearson et al. (1993) conducted the only study considering both Total Vocabulary (language A + language B) and Conceptual Vocabulary (language A U language B) for bilingual children in comparison to their monolingual peers. Based on a sample of 25 simultaneous English/Spanish bilinguals and 35 monolinguals it was suggested that there exists no basis for concluding that the bilingual children were slower to develop early vocabulary than were their monolingual peers. There is a possibility that quite the opposite is true with regards to vocabulary comprehension when both languages are involved. There is a need for further study evaluating vocabulary size of preschool bilinguals to verify patterns identified by Pearson et al. (1993).", "title": "" }, { "docid": "7ec6790b96e9185bf822eea3a27ad7ab", "text": "Multi-level converter architectures have been explored for a variety of applications including high-power DC-AC inverters and DC-DC converters. In this work, we explore flying-capacitor multi-level (FCML) DC-DC topologies as a class of hybrid switched-capacitor/inductive converter. Compared to other candidate architectures in this area (e.g. Series-Parallel, Dickson), FCML converters have notable advantages such as the use of single-rated low-voltage switches, potentially lower switching loss, lower passive component volume, and enable regulation across the full VDD-VOUT range. It is shown that multimode operation, including previously published resonant and dynamic off-time modulation, form a single set of techniques that can be used to extend high efficiency over a wide power density range. Some of the general operating considerations of FCML converters, such as the challenge of maintaining voltage balance on flying capacitors, are shown to be of equal concern in other soft-switched SC converter topologies. Experimental verification from a 24V:12V, 3-level converter is presented to show multimode operation with a nominally 2:1 topology. A second 50V:7V 4-level FCML converter demonstrates operation with variable regulation. 
A method is presented to balance flying capacitor voltages through low frequency closed-loop feedback.", "title": "" }, { "docid": "b69dd5f570f9a1996fe743d5038dbc6c", "text": "With the development of deep learning and artificial intelligence, more and more research apply neural networks to natural language processing tasks. However, while the majority of these research take English corpus as the dataset, few studies have been done using Chinese corpus. Meanwhile, Existing Chinese processing algorithms typically regard Chinese word or Chinese character as the basic unit but ignore the deeper information into the Chinese character. In Chinese linguistic, strokes are the basic unit of Chinese character who are similar to letters of the English word. Inspired by the recent success of deep learning at character-level, we delve deeper to Chinese stroke level for Chinese language processing and developed it into service for Chinese text classification. In this paper, we dig the basic feature of the strokes considering the similar Chinese character components and propose a new method to leverage Chinese stroke for learning the continuous representation of Chinese character and develop it into a service for Chinese text classification. We develop a dedicated neural architecture based on the convolutional neural network to effectively learn character embedding and apply it to Chinese word similarity judgment and Chinese text classification. Both experiments results show that the stroke level method is effective for Chinese language processing.", "title": "" }, { "docid": "7530a79035a1d2b73d7ef5e38dda942b", "text": "Representing images and videos with Symmetric Positive Definite (SPD) matrices, and considering the Riemannian geometry of the resulting space, has been shown to yield high discriminative power in many visual recognition tasks. Unfortunately, computation on the Riemannian manifold of SPD matrices –especially of high-dimensional ones– comes at a high cost that limits the applicability of existing techniques. In this paper, we introduce algorithms able to handle high-dimensional SPD matrices by constructing a lower-dimensional SPD manifold. To this end, we propose to model the mapping from the high-dimensional SPD manifold to the low-dimensional one with an orthonormal projection. This lets us formulate dimensionality reduction as the problem of finding a projection that yields a low-dimensional manifold either with maximum discriminative power in the supervised scenario, or with maximum variance of the data in the unsupervised one. We show that learning can be expressed as an optimization problem on a Grassmann manifold and discuss fast solutions for special cases. Our evaluation on several classification tasks evidences that our approach leads to a significant accuracy gain over state-of-the-art methods.", "title": "" }, { "docid": "6dfe8b18e3d825b2ecfa8e6b353bbb99", "text": "In the last decade tremendous effort has been put in the study of the Apollonian circle packings. Given the great variety of mathematics it exhibits, this topic has attracted experts from different fields: number theory, homogeneous dynamics, expander graphs, group theory, to name a few. The principle investigator (PI) contributed to this program in his PhD studies. The scenery along the way formed the horizon of the PI at his early mathematical career. 
After his PhD studies, the PI has successfully applied tools and ideas from Apollonian circle packings to the study of topics from various fields, and will continue this endeavor in his proposed research. The proposed problems are roughly divided into three categories: number theory, expander graphs, and geometry, each of which will be discussed in depth in later sections. Since Apollonian circle packing provides the main inspiration for this proposal, let’s briefly review how it comes up and what has been done. We start with four mutually tangent circles, with one circle bounding the other three. We can repeatedly inscribe more and more circles into curvilinear triangular gaps as illustrated in Figure 1, and we call the resultant set an Apollonian circle packing, which consists of infinitely many circles.", "title": "" }, { "docid": "1757d8eee607b80b6b590ed8ca1e77b2", "text": "The proximity of cells in three-dimensional (3D) organization maximizes the cell-cell communication and signaling that are critical for cell function. In this study, 3D cell aggregates composed of human umbilical vein endothelial cells (HUVECs) and cord-blood mesenchymal stem cells (cbMSCs) were used for therapeutic neovascularization to rescue tissues from critical limb ischemia. Within the cell aggregates, homogeneously mixed HUVECs and cbMSCs had direct cell-cell contact with expressions of endogenous extracellular matrices and adhesion molecules. Although dissociated HUVECs/cbMSCs initially formed tubular structures on Matrigel, the grown tubular network substantially regressed over time. Conversely, 3D HUVEC/cbMSC aggregates seeded on Matrigel exhibited an extensive tubular network that continued to expand without regression. Immunostaining experiments show that, by differentiating into smooth muscle cell (SMC) lineages, the cbMSCs stabilize the HUVEC-derived tubular network. The real-time PCR analysis results suggest that, through myocardin, TGF-β signaling regulates the differentiation of cbMSCs into SMCs. Transplantation of 3D HUVEC/cbMSC aggregates recovered blood perfusion in a mouse model of hindlimb ischemia more effectively compared to their dissociated counterparts. The experimental results confirm that the transplanted 3D HUVEC/cbMSC aggregates enhanced functional vessel formation within the ischemic limb and protected it from degeneration. The 3D HUVEC/cbMSC aggregates can therefore facilitate the cell-based therapeutic strategies for modulating postnatal neovascularization.", "title": "" }, { "docid": "9151a96cd2d1552dc15e0d5ff07b6108", "text": "Making correct decisions often requires analysing large volumes of textual information. Text Mining is a budding new field that endeavours to garner meaningful information from natural language text. Text Mining is the process of applying automatic methods to analyse and structure textual data in order to create useable knowledge from previously unstructured information. Text Mining is inherently interdisciplinary, borrowing heavily from neighbouring fields such as data mining and computational linguistics. Some real applications to define the state-of-the-art in Text Mining and to single out future needs and scenarios are collected.", "title": "" }, { "docid": "c59cae78ce3482450776755b9d9d5199", "text": "Traditional information systems return answers after a user submits a complete query. Users often feel “left in the dark” when they have limited knowledge about the underlying data and have to use a try-and-see approach for finding information. 
A recent trend of supporting autocomplete in these systems is a first step toward solving this problem. In this paper, we study a new information-access paradigm, called “type-ahead search” in which the system searches the underlying data “on the fly” as the user types in query keywords. It extends autocomplete interfaces by allowing keywords to appear at different places in the underlying data. This framework allows users to explore data as they type, even in the presence of minor errors. We study research challenges in this framework for large amounts of data. Since each keystroke of the user could invoke a query on the backend, we need efficient algorithms to process each query within milliseconds. We develop various incremental-search algorithms for both single-keyword queries and multi-keyword queries, using previously computed and cached results in order to achieve a high interactive speed. We develop novel techniques to support fuzzy search by allowing mismatches between query keywords and answers. We have deployed several real prototypes using these techniques. One of them has been deployed to support type-ahead search on the UC Irvine people directory, which has been used regularly and well received by users due to its friendly interface and high efficiency.", "title": "" }, { "docid": "5d953232681e6815ccd85e2b1b600465", "text": "Bandwidth and power constraints are the main concerns in current wireless networks because multihop ad hoc mobile wireless networks rely on each node in the network to act as a router and packet forwarder. This dependency places bandwidth, power and computation demands on mobile hosts which must be taken into account when choosing the best routing protocol. In recent years, protocols that build routes based on demand have been proposed. The major goal of on demand routing protocols is to minimize control traffic overhead. In this paper, we perform a simulation and performance study on some routing protocols for ad hoc networks. Distributed Bellman Ford, a traditional table driven routing algorithm, is simulated to evaluate its performance in multihop wireless networks. In addition, two on demand routing protocols, Dynamic Source Routing and Associativity Based Routing, with distinctive route selection algorithms are simulated in a common environment to quantitatively measure and contrast their performance. The final selection of an appropriate protocol will depend on a variety of factors, which are discussed in this paper.", "title": "" }, { "docid": "9128809af50519d2d0ef3a0ee520e569", "text": "It has been experimentally observed that distributed implementations of mini-batch stochastic gradient descent (SGD) algorithms exhibit speedup saturation and decaying generalization ability beyond a particular batch-size. In this work, we present an analysis hinting that high similarity between concurrently processed gradients may be a cause of this performance degradation. We introduce the notion of gradient diversity that measures the dissimilarity between concurrent gradient updates, and show its key role in the performance of mini-batch SGD. We prove that on problems with high gradient diversity, mini-batch SGD is amenable to better speedups, while maintaining the generalization performance of serial (one sample) SGD. We further establish lower bounds on convergence where mini-batch SGD slows down beyond a particular batch-size, solely due to the lack of gradient diversity. 
We provide experimental evidence indicating the key role of gradient diversity in distributed learning, and discuss how heuristics like dropout, Langevin dynamics, and quantization can improve it.", "title": "" }, { "docid": "d880535f198a1f0a26b18572f674b829", "text": "Human Activity Recognition (HAR) aims to identify the actions performed by humans using signals collected from various sensors embedded in mobile devices. In recent years, deep learning techniques have further improved HAR performance on several benchmark datasets. In this paper, we propose one-dimensional Convolutional Neural Network (1D CNN) for HAR that employs a divide and conquer-based classifier learning coupled with test data sharpening. Our approach leverages a two-stage learning of multiple 1D CNN models; we first build a binary classifier for recognizing abstract activities, and then build two multi-class 1D CNN models for recognizing individual activities. We then introduce test data sharpening during prediction phase to further improve the activity recognition accuracy. While there have been numerous researches exploring the benefits of activity signal denoising for HAR, few researches have examined the effect of test data sharpening for HAR. We evaluate the effectiveness of our approach on two popular HAR benchmark datasets, and show that our approach outperforms both the two-stage 1D CNN-only method and other state of the art approaches.", "title": "" }, { "docid": "e7ff760dddadf1de42cfc0553f286fe6", "text": "Fluorine-containing amino acids are valuable probes for the biophysical characterization of proteins. Current methods for (19)F-labeled protein production involve time-consuming genetic manipulation, compromised expression systems and expensive reagents. We show that Escherichia coli BL21, the workhorse of protein production, can utilise fluoroindole for the biosynthesis of proteins containing (19)F-tryptophan.", "title": "" }, { "docid": "c2c38481e67fa3cb63a6e784c9d9144d", "text": "Some property and casualty insurers use automated detection systems to help to decide whether or not to investigate claims suspected of fraud. Claim screening systems benefit from the coded experience of previously investigated claims. The embedded detection models typically consist of scoring devices relating fraud indicators to some measure of suspicion of fraud. In practice these scoring models often focus on minimizing the error rate rather than on the cost of (mis)classification. We show that focusing on cost is a profitable approach. We analyse the effects of taking into account information on damages and audit costs early on in the screening process. We discuss several scenarios using real-life data. The findings suggest that with claim amount information available at screening time detection rules can be accommodated to increase expected profits. Our results show the value of cost-sensitive claim fraud screening and provide guidance on how to render this strategy operational. 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "d1622f3a2cf81758fa2084506dcd65f2", "text": "Students who enrol in the undergraduate program on informatics at the Hellenic Open University (HOU) demonstrate significant difficulties in advancing beyond the introductory courses. We have embarked in an effort to analyse their academic performance throughout the academic year, as measured by the homework assignments, and attempt to derive short rules that explain and predict success or failure in the final exams. 
In this paper we review previous approaches, compare them with genetic algorithm based induction of decision trees and argue why our approach has a potential for developing into an alert tool.", "title": "" }, { "docid": "5df21fff08a770787ddce9224c611364", "text": "Data clustering is an important data mining technology that plays a crucial role in numerous scientific applications. However, it is challenging due to the size of datasets has been growing rapidly to extra-large scale in the real world. Meanwhile, MapReduce is a desirable parallel programming platform that is widely applied in kinds of data process fields. In this paper, we propose an efficient parallel density-based clustering algorithm and implement it by a 4-stages MapReduce paradigm. Furthermore, we adopt a quick partitioning strategy for large scale non-indexed data. We study the metric of merge among bordering partitions and make optimizations on it. At last, we evaluate our work on real large scale datasets using Hadoop platform. Results reveal that the speedup and scale up of our work are very efficient.", "title": "" }, { "docid": "3a1cc60b1b6729e06f178ab62d19c59c", "text": "The Web 2.0 wave brings, among other aspects, the Programmable Web:increasing numbers of Web sites provide machine-oriented APIs and Web services. However, most APIs are only described with text in HTML documents. The lack of machine-readable API descriptions affects the feasibility of tool support for developers who use these services. We propose a microformat called hRESTS (HTML for RESTful Services) for machine-readable descriptions of Web APIs, backed by a simple service model. The hRESTS microformat describes main aspects of services, such as operations, inputs and outputs. We also present two extensions of hRESTS:SA-REST, which captures the facets of public APIs important for mashup developers, and MicroWSMO, which provides support for semantic automation.", "title": "" }, { "docid": "b342443400c85277d4f980a39198ded0", "text": "We present several optimizations to SPHINCS, a stateless hash-based signature scheme proposed by Bernstein et al. in 2015: PORS, a more secure variant of the HORS few-time signature scheme used in SPHINCS; secret key caching, to speed-up signing and reduce signature size; batch signing, to amortize signature time and reduce signature size when signing multiple messages at once; mask-less constructions to reduce the key size and simplify the scheme; and Octopus, a technique to eliminate redundancies from authentication paths in Merkle trees. Based on a refined analysis of the subset resilience problem, we show that SPHINCS’ parameters can be modified to reduce the signature size while retaining a similar security level and computation time. We then propose Gravity-SPHINCS, our variant of SPHINCS embodying the aforementioned tricks. Gravity-SPHINCS has shorter keys (32 and 64 bytes instead of ≈ 1 KB), shorter signatures (≈ 30 KB instead of 41 KB), and faster signing and verification for the same security level as SPHINCS.", "title": "" }, { "docid": "99ebd04c11db731653ba4b8f26c46208", "text": "This letter presents a novel computationally efficient and robust pattern tracking method based on a time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous, temporal events. These events encode temporal contrast and intensity locally in space and time. 
We show that the sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometry, similarities, and affine distortions and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time that is currently underexploited by most artificial vision systems, the method we present is able to solve ambiguous cases of object occlusions that classical frame-based techniques handle poorly.", "title": "" } ]
scidocsrr
7ecdfc08152fce6e5449249f3a8cafd3
Word Embeddings, Analogies, and Machine Learning: Beyond king - man + woman = queen
[ { "docid": "f8cb44e765ad86bd18e5401283c7e0bf", "text": "Distributional models represent a word through the contexts in which it has been observed. They can be used to predict similarity in meaning, based on the distributional hypothesis, which states that two words that occur in similar contexts tend to have similar meanings. Distributional approaches are often implemented in vector space models. They represent a word as a point in high-dimensional space, where each dimension stands for a context item, and a word’s coordinates represent its context counts. Occurrence in similar contexts then means proximity in space. In this survey we look at the use of vector space models to describe the meaning of words and phrases: the phenomena that vector space models address, and the techniques that they use to do so. Many word meaning phenomena can be described in terms of semantic similarity: synonymy, priming, categorization, and the typicality of a predicate’s arguments. But vector space models can do more than just predict semantic similarity. They are a very flexible tool, because they can make use of all of linear algebra, with all its data structures and operations. The dimensions of a vector space can stand for many things: context words, or non-linguistic context like images, or properties of a concept. And vector space models can use matrices or higher-order arrays instead of vectors for representing more complex relationships. Polysemy is a tough problem for distributional approaches, as a representation that is learned from all of a word’s contexts will conflate the different senses of the word. It can be addressed, using either clustering or vector combination techniques. Finally, we look at vector space models for phrases, which are usually constructed by combining word vectors. Vector space models for phrases can predict phrase similarity, and some argue that they can form the basis for a general-purpose representation framework for natural language semantics.", "title": "" }, { "docid": "c45517d21c40bb935b0e1ff4d4ecdf85", "text": "Recognizing analogies, synonyms, antonyms, and associations appear to be four distinct tasks, requiring distinct NLP algorithms. In the past, the four tasks have been treated independently, using a wide variety of algorithms. These four semantic classes, however, are a tiny sample of the full range of semantic phenomena, and we cannot afford to create ad hoc algorithms for each semantic phenomenon; we need to seek a unified approach. We propose to subsume a broad range of phenomena under analogies. To limit the scope of this paper, we restrict our attention to the subsumption of synonyms, antonyms, and associations. We introduce a supervised corpus-based machine learning algorithm for classifying analogous word pairs, and we show that it can solve multiple-choice SAT analogy questions, TOEFL synonym questions, ESL synonym-antonym questions, and similar-associated-both questions from cognitive psychology.", "title": "" }, { "docid": "7d11d25dc6cd2822d7f914b11b7fe640", "text": "The authors analyze three critical components in training word embeddings: model, corpus, and training parameters. They systematize existing neural-network-based word embedding methods and experimentally compare them using the same corpus. They then evaluate each word embedding in three ways: analyzing its semantic properties, using it as a feature for supervised tasks, and using it to initialize neural networks. 
They also provide several simple guidelines for training good word embeddings.", "title": "" }, { "docid": "79a02a35c02858a6510fc92b9eadde4e", "text": "Distributed word representations have been demonstrated to be effective in capturing semantic and syntactic regularities. Unsupervised representation learning from large unlabeled corpora can learn similar representations for those words that present similar cooccurrence statistics. Besides local occurrence statistics, global topical information is also important knowledge that may help discriminate a word from another. In this paper, we incorporate category information of documents in the learning of word representations and to learn the proposed models in a documentwise manner. Our models outperform several state-of-the-art models in word analogy and word similarity tasks. Moreover, we evaluate the learned word vectors on sentiment analysis and text classification tasks, which shows the superiority of our learned word vectors. We also learn high-quality category embeddings that reflect topical meanings.", "title": "" } ]
[ { "docid": "0614f84f0a5d62f707d545943b936667", "text": "A new input-output coupled inductor (IOCI) is proposed for reducing current ripples and magnetic components. Moreover, a current-source-type circuit using active-clamp mechanism and a current doubler with synchronous rectifier are presented to achieve high efficiency in low input-output voltage applications. The configuration of the IOCI is realized by three windings on a common core, and has the properties of an input inductor at the input-side and two output inductors at the output- side. An active clamped ripple-free dc-dc converter using the proposed IOCI is analyzed in detail and optimized for high power efficiency. Experimental results for 80 W (5 V/16 A) at a constant switching frequency of 100 kHz are obtained to show the performance of the proposed converter.", "title": "" }, { "docid": "850483f2db17a4f5d5a48db80d326dd3", "text": "The Internet has revolutionized healthcare by offering medical information ubiquitously to patients via the web search. The healthcare status, complex medical information needs of patients are expressed diversely and implicitly in their medical text queries. Aiming to better capture a focused picture of user's medical-related information search and shed insights on their healthcare information access strategies, it is challenging yet rewarding to detect structured user intentions from their diversely expressed medical text queries. We introduce a graph-based formulation to explore structured concept transitions for effective user intent detection in medical queries, where each node represents a medical concept mention and each directed edge indicates a medical concept transition. A deep model based on multi-task learning is introduced to extract structured semantic transitions from user queries, where the model extracts word-level medical concept mentions as well as sentence-level concept transitions collectively. A customized graph-based mutual transfer loss function is designed to impose explicit constraints and further exploit the contribution of mentioning a medical concept word to the implication of a semantic transition. We observe an 8% relative improvement in AUC and 23% relative reduction in coverage error by comparing the proposed model with the best baseline model for the concept transition inference task on real-world medical text queries.", "title": "" }, { "docid": "8f6806ba2f75e3671efa2aa390d79b40", "text": "Applying amendments to multi-element contaminated soils can have contradictory effects on the mobility, bioavailability and toxicity of specific elements, depending on the amendment. Trace elements and PAHs were monitored in a contaminated soil amended with biochar and greenwaste compost over 60 days field exposure, after which phytotoxicity was assessed by a simple bio-indicator test. Copper and As concentrations in soil pore water increased more than 30 fold after adding both amendments, associated with significant increases in dissolved organic carbon and pH, whereas Zn and Cd significantly decreased. Biochar was most effective, resulting in a 10 fold decrease of Cd in pore water and a resultant reduction in phytotoxicity. Concentrations of PAHs were also reduced by biochar, with greater than 50% decreases of the heavier, more toxicologically relevant PAHs. 
The results highlight the potential of biochar for contaminated land remediation.", "title": "" }, { "docid": "0ce06f95b1dafcac6dad4413c8b81970", "text": "User acceptance of artificial intelligence agents might depend on their ability to explain their reasoning, which requires adding an interpretability layer that facilitates users to understand their behavior. This paper focuses on adding an interpretable layer on top of Semantic Textual Similarity (STS), which measures the degree of semantic equivalence between two sentences. The interpretability layer is formalized as the alignment between pairs of segments across the two sentences, where the relation between the segments is labeled with a relation type and a similarity score. We present a publicly available dataset of sentence pairs annotated following the formalization. We then develop a system trained on this dataset which, given a sentence pair, explains what is similar and different, in the form of graded and typed segment alignments. When evaluated on the dataset, the system performs better than an informed baseline, showing that the dataset and task are well-defined and feasible. Most importantly, two user studies show how the system output can be used to automatically produce explanations in natural language. Users performed better when having access to the explanations, providing preliminary evidence that our dataset and method to automatically produce explanations is useful in real applications.", "title": "" }, { "docid": "513239885e48a729e6f80a2df2e061c7", "text": "Schemes for FPE enable one to encrypt Social Security numbers (SSNs), credit card numbers (CCNs), and the like, doing so in such a way that the ciphertext has the same format as the plaintext. In the case of SSNs, for example, this means that the ciphertext, like the plaintext, consists of a nine decimal-digit string. Similarly, encryption of a 16-digit CCN results in a 16-digit ciphertext. FPE is rapidly emerging as a useful cryptographic tool, with applications including financial-information security, data sanitization, and transparently encrypting fields in a legacy database.", "title": "" }, { "docid": "4107e9288ea64d039211acf48a091577", "text": "The trisomy 18 syndrome can result from a full, mosaic, or partial trisomy 18. The main clinical findings of full trisomy 18 consist of prenatal and postnatal growth deficiency, characteristic facial features, clenched hands with overriding fingers and nail hypoplasia, short sternum, short hallux, major malformations, especially of the heart, and profound intellectual disability in the surviving older children. The phenotype of partial trisomy 18 is extremely variable. The aim of this article is to systematically review the scientific literature on patients with partial trisomy 18 in order to identify regions of chromosome 18 that may be responsible for the specific clinical features of the trisomy 18 syndrome. We confirmed that trisomy of the short arm of chromosome 18 does not seem to cause the major features. However, we found candidate regions on the long arm of chromosome 18 for some of the characteristic clinical features, and thus a phenotypic map is proposed. 
Our findings confirm the hypothesis that single critical regions/candidate genes are likely to be responsible for specific characteristics of the syndrome, while a single critical region for the whole Edwards syndrome phenotype is unlikely to exist.", "title": "" }, { "docid": "f0d17b259b699bc7fb7e8f525ec64db0", "text": "Developing Intelligent Systems involves artificial intelligence approaches including artificial neural networks. Here, we present a tutorial of Deep Neural Networks (DNNs), and some insights about the origin of the term “deep”; references to deep learning are also given. Restricted Boltzmann Machines, which are the core of DNNs, are discussed in detail. An example of a simple two-layer network, performing unsupervised learning for unlabeled data, is shown. Deep Belief Networks (DBNs), which are used to build networks with more than two layers, are also described. Moreover, examples for supervised learning with DNNs performing simple prediction and classification tasks, are presented and explained. This tutorial includes two intelligent pattern recognition applications: handwritten digits (benchmark known as MNIST) and speech recognition.", "title": "" }, { "docid": "0b87e22007cef7546d7503821919e50b", "text": "This review focuses on the antibacterial activities of visible light-responsive titanium dioxide (TiO2) photocatalysts. These photocatalysts have a range of applications including disinfection, air and water cleaning, deodorization, and pollution and environmental control. Titanium dioxide is a chemically stable and inert material, and can continuously exert antimicrobial effects when illuminated. The energy source could be solar light; therefore, TiO2 photocatalysts are also useful in remote areas where electricity is insufficient. However, because of its large band gap for excitation, only biohazardous ultraviolet (UV) light irradiation can excite TiO2, which limits its application in the living environment. To extend its application, impurity doping, through metal coating and controlled calcination, has successfully modified the substrates of TiO2 to expand its absorption wavelengths to the visible light region. Previous studies have investigated the antibacterial abilities of visible light-responsive photocatalysts using the model bacteria Escherichia coli and human pathogens. The modified TiO2 photocatalysts significantly reduced the numbers of surviving bacterial cells in response to visible light illumination. They also significantly reduced the activity of bacterial endospores; reducing their toxicity while retaining their germinating abilities. It is suggested that the photocatalytic killing mechanism initially damages the surfaces weak points of the bacterial cells, before totally breakage of the cell membranes. The internal bacterial components then leak from the cells through the damaged sites. Finally, the photocatalytic reaction oxidizes the cell debris. In summary, visible light-responsive TiO2 photocatalysts are more convenient than the traditional UV light-responsive TiO2 photocatalysts because they do not require harmful UV light irradiation to function. These photocatalysts, thus, provide a promising and feasible approach for disinfection of pathogenic bacteria; facilitating the prevention of infectious diseases.", "title": "" }, { "docid": "78829447a6cbf0aa020ef098a275a16d", "text": "Black soldier fly (BSF), Hermetia illucens (L.) is widely used in bio-recycling of human food waste and manure of livestock. 
Eggs of BSF were commonly collected by egg-trapping technique for mass rearing. To find an efficient lure for BSF egg-trapping, this study compared the number of egg batch trapped by different lures, including fruit, food waste, chicken manure, pig manure, and dairy manure. The result showed that fruit wastes are the most efficient on trapping BSF eggs. To test the effects of fruit species, number of egg batch trapped by three different fruit species, papaya, banana, and pineapple were compared, and no difference were found among fruit species. Environmental factors including temperature, relative humidity, and light intensity were measured and compared in different study sites to examine their effects on egg-trapping. The results showed no differences on temperature, relative humidity, and overall light intensity between sites, but the stability of light environment differed between sites. BSF tend to lay more eggs in site with stable light environment.", "title": "" }, { "docid": "08a62894bac4e272530d1630e720c7ad", "text": "Recently, along with the rapid development of mobile communication technology, edge computing theory and techniques have been attracting more and more attentions from global researchers and engineers, which can significantly bridge the capacity of cloud and requirement of devices by the network edges, and thus can accelerate the content deliveries and improve the quality of mobile services. In order to bring more intelligence to the edge systems, compared to traditional optimization methodology, and driven by the current deep learning techniques, we propose to integrate the Deep Reinforcement Learning techniques and Federated Learning framework with the mobile edge systems, for optimizing the mobile edge computing, caching and communication. And thus, we design the “In-Edge AI” framework in order to intelligently utilize the collaboration among devices and edge nodes to exchange the learning parameters for a better training and inference of the models, and thus to carry out dynamic system-level optimization and application-level enhancement while reducing the unnecessary system communication load. “In-Edge AI” is evaluated and proved to have near-optimal performance but relatively low overhead of learning, while the system is cognitive and adaptive to the mobile communication systems. Finally, we discuss several related challenges and opportunities for unveiling a promising upcoming future of “In-Edge AI”.", "title": "" }, { "docid": "74959e138f7defce9bf7df2198b46a90", "text": "In the game industry, especially for free to play games, player retention and purchases are important issues. There have been several approaches investigated towards predicting them by players' behaviours during game sessions. However, most current methods are only available for specific games because the data representations utilised are usually game specific. This work intends to use frequency of game events as data representations to predict both players' disengagement from game and the decisions of their first purchases. This method is able to provide better generality because events exist in every game and no knowledge of any event but their frequency is needed. In addition, this event frequency based method will also be compared with a recent work by Runge et al. [1] in terms of disengagement prediction.", "title": "" }, { "docid": "c995426196ad943df2f5a4028a38b781", "text": "Today it is quite common for people to exchange hundreds of comments in online conversations (e.g., blogs). 
Often, it can be very difficult to analyze and gain insights from such long conversations. To address this problem, we present a visual text analytic system that tightly integrates interactive visualization with novel text mining and summarization techniques to fulfill information needs of users in exploring conversations. At first, we perform a user requirement analysis for the domain of blog conversations to derive a set of design principles. Following these principles, we present an interface that visualizes a combination of various metadata and textual analysis results, supporting the user to interactively explore the blog conversations. We conclude with an informal user evaluation, which provides anecdotal evidence about the effectiveness of our system and directions for further design.", "title": "" }, { "docid": "99549d037b403f78f273b3c64181fd21", "text": "From social media has emerged continuous needs for automatic travel recommendations. Collaborative filtering (CF) is the most well-known approach. However, existing approaches generally suffer from various weaknesses. For example , sparsity can significantly degrade the performance of traditional CF. If a user only visits very few locations, accurate similar user identification becomes very challenging due to lack of sufficient information for effective inference. Moreover, existing recommendation approaches often ignore rich user information like textual descriptions of photos which can reflect users' travel preferences. The topic model (TM) method is an effective way to solve the “sparsity problem,” but is still far from satisfactory. In this paper, an author topic model-based collaborative filtering (ATCF) method is proposed to facilitate comprehensive points of interest (POIs) recommendations for social users. In our approach, user preference topics, such as cultural, cityscape, or landmark, are extracted from the geo-tag constrained textual description of photos via the author topic model instead of only from the geo-tags (GPS locations). Advantages and superior performance of our approach are demonstrated by extensive experiments on a large collection of data.", "title": "" }, { "docid": "ad96c93d4a27ec8a5a1a8168519977ff", "text": "BACKGROUND\nMovement velocity is an acute resistance-training variable that can be manipulated to potentially optimize dynamic muscular strength development. However, it is unclear whether performing faster or slower repetitions actually influences dynamic muscular strength gains.\n\n\nOBJECTIVE\nWe conducted a systematic review and meta-analysis to examine the effect of movement velocity during resistance training on dynamic muscular strength.\n\n\nMETHODS\nFive electronic databases were searched using terms related to movement velocity and resistance training. Studies were deemed eligible for inclusion if they met the following criteria: randomized and non-randomized comparative studies; published in English; included healthy adults; used isotonic resistance-exercise interventions directly comparing fast or explosive training to slower movement velocity training; matched in prescribed intensity and volume; duration ≥4 weeks; and measured dynamic muscular strength changes.\n\n\nRESULTS\nA total of 15 studies were identified that investigated movement velocity in accordance with the criteria outlined. Fast and moderate-slow resistance training were found to produce similar increases in dynamic muscular strength when all studies were included. 
However, when intensity was accounted for, there was a trend for a small effect favoring fast compared with moderate-slow training when moderate intensities, defined as 60-79% one repetition maximum, were used (effect size 0.31; p = 0.06). Strength gains between conditions were not influenced by training status and age.\n\n\nCONCLUSIONS\nOverall, the results suggest that fast and moderate-slow resistance training improve dynamic muscular strength similarly in individuals within a wide range of training statuses and ages. Resistance training performed at fast movement velocities using moderate intensities showed a trend for superior muscular strength gains as compared to moderate-slow resistance training. Both training practices should be considered for novice to advanced, young and older resistance trainers targeting dynamic muscular strength.", "title": "" }, { "docid": "f7e773113b9006256ab51d975c8f53c5", "text": "Received 12/4/2013 Accepted 19/6/2013 (006063) 1 Laboratorio Integral de Investigación en Alimentos – LIIA, Instituto Tecnológico de Tepic – ITT, Av. Tecnológico, 2595, CP 63175, Tepic, Nayarit, México, e-mail: efimontalvo@gmail.com 2 Dirección General de Innovación Tecnológica, Centro de Excelencia, Universidad Autónoma de Tamaulipas – UAT, Ciudad Victoria, Tamaulipas, México 3 Centro de Investigación en Ciencia Aplicada y Tecnología Avanzada – CICATA, Instituto Politécnico Nacional – IPN, Querétaro, Querétaro, México *Corresponding author Effect of high hydrostatic pressure on antioxidant content of ‘Ataulfo’ mango during postharvest maturation Viviana Guadalupe ORTEGA1, José Alberto RAMÍREZ2, Gonzalo VELÁZQUEZ3, Beatriz TOVAR1, Miguel MATA1, Efigenia MONTALVO1*", "title": "" }, { "docid": "47d2ebd3794647708d41c6b3d604e796", "text": "Most stream data classification algorithms apply the supervised learning strategy which requires massive labeled data. Such approaches are impractical since labeled data are usually hard to obtain in reality. In this paper, we build a clustering feature decision tree model, CFDT, from data streams having both unlabeled and a small number of labeled examples. CFDT applies a micro-clustering algorithm that scans the data only once to provide the statistical summaries of the data for incremental decision tree induction. Micro-clusters also serve as classifiers in tree leaves to improve classification accuracy and reinforce the any-time property. Our experiments on synthetic and real-world datasets show that CFDT is highly scalable for data streams while generating high classification accuracy with high speed.", "title": "" }, { "docid": "510a43227819728a77ff0c7fa06fa2d0", "text": "The ubiquity of time series data across almost all human endeavors has produced a great interest in time series data mining in the last decade. While there is a plethora of classification algorithms that can be applied to time series, all of the current empirical evidence suggests that simple nearest neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest neighbor algorithm depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping. In this work we make a surprising claim. There is an invariance that the community has missed, complexity invariance. 
Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest neighbor classification, where complex objects are incorrectly assigned to a simpler class. We introduce the first complexity-invariant distance measure for time series, and show that it generally produces significant improvements in classification accuracy. We further show that this improvement does not compromise efficiency, since we can lower bound the measure and use a modification of triangular inequality, thus making use of most existing indexing and data mining algorithms. We evaluate our ideas with the largest and most comprehensive set of time series classification experiments ever attempted, and show that complexity-invariant distance measures can produce improvements in accuracy in the vast majority of cases.", "title": "" }, { "docid": "4c4bfcadd71890ccce9e58d88091f6b3", "text": "With the dramatic growth of the game industry over the past decade, its rapid inclusion in many sectors of today’s society, and the increased complexity of games, game development has reached a point where it is no longer humanly possible to use only manual techniques to create games. Large parts of games need to be designed, built, and tested automatically. In recent years, researchers have delved into artificial intelligence techniques to support, assist, and even drive game development. Such techniques include procedural content generation, automated narration, player modelling and adaptation, and automated game design. This research is still very young, but already the games industry is taking small steps to integrate some of these techniques in their approach to design. The goal of this seminar was to bring together researchers and industry representatives who work at the forefront of artificial intelligence (AI) and computational intelligence (CI) in games, to (1) explore and extend the possibilities of AI-driven game design, (2) to identify the most viable applications of AI-driven game design in the game industry, and (3) to investigate new approaches to AI-driven game design. To this end, the seminar included a wide range of researchers and developers, including specialists in AI/CI for abstract games, commercial video games, and serious games. Thus, it fostered a better understanding of and unified vision on AI-driven game design, using input from both scientists as well as AI specialists from industry. Seminar November 19–24, 2017 – http://www.dagstuhl.de/17471 1998 ACM Subject Classification I.2.1 Artificial Intelligence Games", "title": "" }, { "docid": "b19fa7fa211e36b0049fd5745e30f0c3", "text": "Multilevel clock-and-data recovery (CDR) systems are analyzed, modeled, and designed. A stochastic analysis provides probability density functions that are used to estimate the effect of intersymbol interference (ISI) and additive white noise on the characteristics of the phase detector (PD) in the CDR. A slope detector based novel multilevel bang-bang CDR architecture is proposed and modeled using the stochastic analysis and its performance compared with a typical multilevel Alexander PD-based CDR for equal-loop bandwidths. The rms jitter of the CDRs are predicted using a linear jitter model and a Markov chain and verified using behavioral simulations. 
Jitter tolerance simulations are also employed to compare the two CDRs. Both analytical calculations and behavioral simulations predict that at equal-loop bandwidths, the proposed architecture is superior to the Alexander type CDR at large ISI and low signal-to-noise ratios.", "title": "" }, { "docid": "4c0427bd87ef200484f0a510e8acb0de", "text": "Recent deep learning (DL) models are moving more and more to dynamic neural network (NN) architectures, where the NN structure changes for every data sample. However, existing DL programming models are inefficient in handling dynamic network architectures because of: (1) substantial overhead caused by repeating dataflow graph construction and processing every example; (2) difficulties in batched execution of multiple samples; (3) inability to incorporate graph optimization techniques such as those used in static graphs. In this paper, we present “Cavs”, a runtime system that overcomes these bottlenecks and achieves efficient training and inference of dynamic NNs. Cavs represents a dynamic NN as a static vertex function F and a dynamic instance-specific graph G. It avoids the overhead of repeated graph construction by only declaring and constructing F once, and allows for the use of static graph optimization techniques on pre-defined operations in F . Cavs performs training and inference by scheduling the execution of F following the dependencies in G, hence naturally exposing batched execution opportunities over different samples. Experiments comparing Cavs to state-of-the-art frameworks for dynamic NNs (TensorFlow Fold, PyTorch and DyNet) demonstrate the efficacy of our approach: Cavs achieves a near one order of magnitude speedup on training of dynamic NN architectures, and ablations verify the effectiveness of our proposed design and optimizations.", "title": "" } ]
scidocsrr
fed9694336c6085ed06a590e0c821402
New Simple-Structured AC Solid-State Circuit Breaker
[ { "docid": "6af7f70f0c9b752d3dbbe701cb9ede2a", "text": "This paper addresses real and reactive power management strategies of electronically interfaced distributed generation (DG) units in the context of a multiple-DG microgrid system. The emphasis is primarily on electronically interfaced DG (EI-DG) units. DG controls and power management strategies are based on locally measured signals without communications. Based on the reactive power controls adopted, three power management strategies are identified and investigated. These strategies are based on 1) voltage-droop characteristic, 2) voltage regulation, and 3) load reactive power compensation. The real power of each DG unit is controlled based on a frequency-droop characteristic and a complimentary frequency restoration strategy. A systematic approach to develop a small-signal dynamic model of a multiple-DG microgrid, including real and reactive power management strategies, is also presented. The microgrid eigen structure, based on the developed model, is used to 1) investigate the microgrid dynamic behavior, 2) select control parameters of DG units, and 3) incorporate power management strategies in the DG controllers. The model is also used to investigate sensitivity of the design to changes of parameters and operating point and to optimize performance of the microgrid system. The results are used to discuss applications of the proposed power management strategies under various microgrid operating conditions", "title": "" }, { "docid": "d8255047dc2e28707d711f6d6ff19e30", "text": "This paper discusses the design of a 10 kV and 200 A hybrid dc circuit breaker suitable for the protection of the dc power systems in electric ships. The proposed hybrid dc circuit breaker employs a Thompson coil based ultrafast mechanical switch (MS) with the assistance of two additional solid-state power devices. A low-voltage (80 V) metal–oxide–semiconductor field-effect transistors (MOSFETs)-based commutating switch (CS) is series connected with the MS to realize the zero current turn-OFF of the MS. In this way, the arcing issue with the MS is avoided. A 15 kV SiC emitter turn-OFF thyristor-based main breaker (MB) is parallel connected with the MS and CS branch to interrupt the fault current. A stack of MOVs parallel with the MB are used to clamp the voltage across the hybrid dc circuit breaker during interruption. This paper focuses on the electronic parts of the hybrid dc circuit breaker, and a companion paper will elucidate the principle and operation of the fast acting MS and the overall operation of the hybrid dc circuit breaker. The selection and design of both the high-voltage and low-voltage electronic components in the hybrid dc circuit breaker are presented in this paper. The turn-OFF capability of the MB with and without snubber circuit is experimentally tested, validating its suitability for the hybrid dc circuit breaker application. The CSs’ conduction performances are tested up to 200 A, and its current commutating during fault current interruption is also analyzed. Finally, the hybrid dc circuit breaker demonstrated a fast current interruption within 2 ms at 7 kV and 100 A.", "title": "" } ]
[ { "docid": "d38f9ef3248bf54b7a073beaa186ad42", "text": "Tracking-by-detection methods have demonstrated competitive performance in recent years. In these approaches, the tracking model heavily relies on the quality of the training set. Due to the limited amount of labeled training data, additional samples need to be extracted and labeled by the tracker itself. This often leads to the inclusion of corrupted training samples, due to occlusions, misalignments and other perturbations. Existing tracking-by-detection methods either ignore this problem, or employ a separate component for managing the training set. We propose a novel generic approach for alleviating the problem of corrupted training samples in tracking-by-detection frameworks. Our approach dynamically manages the training set by estimating the quality of the samples. Contrary to existing approaches, we propose a unified formulation by minimizing a single loss over both the target appearance model and the sample quality weights. The joint formulation enables corrupted samples to be downweighted while increasing the impact of correct ones. Experiments are performed on three benchmarks: OTB-2015 with 100 videos, VOT-2015 with 60 videos, and Temple-Color with 128 videos. On the OTB-2015, our unified formulation significantly improves the baseline, with a gain of 3:8% in mean overlap precision. Finally, our method achieves state-of-the-art results on all three datasets.", "title": "" }, { "docid": "99ba1fd6c96dad6d165c4149ac2ce27a", "text": "In order to solve unsupervised domain adaptation problem, recent methods focus on the use of adversarial learning to learn the common representation among domains. Although many designs are proposed, they seem to ignore the negative influence of domain-specific characteristics in transferring process. Besides, they also tend to obliterate these characteristics when extracted, although they are useful for other tasks and somehow help preserve the data. Take into account these issues, in this paper, we want to design a novel domainadaptation architecture which disentangles learned features into multiple parts to answer the questions: what features to transfer across domains and what to preserve within domains for other tasks. Towards this, besides jointly matching domain distributions in both image-level and feature-level, we offer new idea on feature exchange across domains combining with a novel feed-back loss and a semantic consistency loss to not only enhance the transferability of learned common feature but also preserve data and semantic information during exchange process. By performing domain adaptation on two standard digit datasets – MNIST and USPS, we show that our architecture can solve not only the full transfer problem but also partial transfer problem efficiently. The translated image results also demonstrate the potential of our architecture in image style transfer application.", "title": "" }, { "docid": "04d7b3e3584d89d5a3bc5c22c3fd1438", "text": "With the widespread use of information technologies, information networks are becoming increasingly popular to capture complex relationships across various disciplines, such as social networks, citation networks, telecommunication networks, and biological networks. Analyzing these networks sheds light on different aspects of social life such as the structure of societies, information diffusion, and communication patterns. 
In reality, however, the large scale of information networks often makes network analytic tasks computationally expensive or intractable. Network representation learning has been recently proposed as a new learning paradigm to embed network vertices into a low-dimensional vector space, by preserving network topology structure, vertex content, and other side information. This facilitates the original network to be easily handled in the new vector space for further analysis. In this survey, we perform a comprehensive review of the current literature on network representation learning in the data mining and machine learning field. We propose new taxonomies to categorize and summarize the state-of-the-art network representation learning techniques according to the underlying learning mechanisms, the network information intended to preserve, as well as the algorithmic designs and methodologies. We summarize evaluation protocols used for validating network representation learning including published benchmark datasets, evaluation methods, and open source algorithms. We also perform empirical studies to compare the performance of representative algorithms on common datasets, and analyze their computational complexity. Finally, we suggest promising research directions to facilitate future study.", "title": "" }, { "docid": "0742314b8099dce0eadaa12f96579209", "text": "Smart utility network (SUN) communications are an essential part of the smart grid. Major vendors realized the importance of universal standards and participated in the IEEE802.15.4g standardization effort. Due to the fact that many vendors already have proprietary solutions deployed in the field, the standardization effort was a challenge, but after three years of hard work, the IEEE802.15.4g standard published on April 28th, 2012. The publication of this standard is a first step towards establishing common and consistent communication specifications for utilities deploying smart grid technologies. This paper summaries the technical essence of the standard and how it can be used in smart utility networks.", "title": "" }, { "docid": "38d7107de35f3907c0e42b111883613e", "text": "On-line social networks have become a massive communication and information channel for users world-wide. In particular, the microblogging platform Twitter, is characterized by short-text message exchanges at extremely high rates. In this type of scenario, the detection of emerging topics in text streams becomes an important research area, essential for identifying relevant new conversation topics, such as breaking news and trends. Although emerging topic detection in text is a well established research area, its application to large volumes of streaming text data is quite novel. Making scalability, efficiency and rapidness, the key aspects for any emerging topic detection algorithm in this type of environment.\n Our research addresses the aforementioned problem by focusing on detecting significant and unusual bursts in keyword arrival rates or bursty keywords. We propose a scalable and fast on-line method that uses normalized individual frequency signals per term and a windowing variation technique. This method reports keyword bursts which can be composed of single or multiple terms, ranked according to their importance. The average complexity of our method is O(n log n), where n is the number of messages in the time window. This complexity allows our approach to be scalable for large streaming datasets. 
If bursts are only detected and not ranked, the algorithm retains linear complexity O(n), making it the fastest in comparison to the current state-of-the-art. We validate our approach by comparing our performance to similar systems using the TREC Tweet 2011 Challenge tweets, obtaining 91% of matches with LDA, an off-line gold standard used in similar evaluations. In addition, we study Twitter messages related to the SuperBowl football events in 2011 and 2013.", "title": "" }, { "docid": "c69d15a44bcb779394df5776e391ec23", "text": "Ankylosing spondylitis (AS) is a chronic and inflammatory rheumatic disease, characterized by pain and structural and functional impairments, such as reduced mobility and axial deformity, which lead to diminished quality of life. Its treatment includes not only drugs, but also nonpharmacological therapy. Exercise appears to be a promising modality. The aim of this study is to review the current evidence and evaluate the role of exercise either on land or in water for the management of patients with AS in the biological era. A systematic review of the literature published until November 2016 was conducted in the Medline, Embase, Cochrane Library, Web of Science and Scopus databases. Thirty-five studies were included for further analysis (30 concerning land exercise and 5 concerning water exercise; combined or not with biological drugs), comprising a total of 2515 patients. Most studies showed a positive effect of exercise on the Bath Ankylosing Spondylitis Disease Activity Index, the Bath Ankylosing Spondylitis Functional Index, pain, mobility, function and quality of life. The benefit was statistically significant in randomized controlled trials. Results support a multimodal approach, including educational sessions and maintaining a home-based program. This study highlights the important role of exercise in the management of AS; therefore, it should be encouraged and individually prescribed. More studies with good methodological quality are needed to strengthen the results and to define the specific characteristics of exercise programs that determine better results.", "title": "" }, { "docid": "699836a5b2caf6acde02c4bad16c2795", "text": "The drilling end-effector is a key unit in an autonomous drilling robot. The perpendicularity of the hole has an important influence on the quality of airplane assembly. Aiming at robot drilling perpendicularity, a micro-adjusting attitude mechanism and a surface normal measurement algorithm are proposed in this paper. In the mechanism, two rounded eccentric discs are used and the small one is embedded in the big one, which keeps the drill's point static when adjusting the drill's attitude. Thus, displacement of the drill's point after adjusting the drill attitude can be avoided. Before the micro-adjusting process, four non-coplanar points in space are used to determine a unique sphere. The normal at the drilling point is measured by four laser ranging sensors. The adjusting angles by which the motors should be rotated to adjust the attitude can be calculated from the deviation between the normal and the drill axis. Finally, the motors drive the two eccentric discs to carry out the micro-adjusting process. Experiments on the drilling robot system demonstrate that the adjusting mechanism and the surface normal measurement algorithm are effective, with high accuracy and efficiency. (1) A miniature attitude-adjusting mechanism is designed to adjust the drill's attitude so that drilling follows the normal at the drilling point, improving hole perpendicularity; the drill tip remains fixed before and after the adjustment, which improves drilling efficiency.
(2) Four laser ranging sensors are used to measure the normal vector at the drilling point, exploiting the fact that four non-coplanar points in space determine a unique sphere, in preparation for adjusting the drill's attitude.", "title": "" }, { "docid": "a05a953097e5081670f26e85c4b8e397", "text": "In European science and technology policy, various styles have been developed and institutionalised to govern the ethical challenges of science and technology innovations. In this paper, we give an account of the most dominant styles of the past 30 years, particularly in Europe, seeking to show their specific merits and problems. We focus on three styles of governance: a technocratic style, an applied ethics style, and a public participation style. We discuss their merits and deficits, and use this analysis to assess the potential of the recently established governance approach of 'Responsible Research and Innovation' (RRI). Based on this analysis, we reflect on the current shaping of RRI in terms of 'doing governance'.", "title": "" }, { "docid": "80666930dbabe1cd9d65af762cc4b150", "text": "Accurate electronic health records are important for clinical care and research as well as ensuring patient safety. It is crucial for misspelled words to be corrected in order to ensure that medical records are interpreted correctly. This paper describes the development of a spelling correction system for medical text. Our spell checker is based on Shannon's noisy channel model, and uses an extensive dictionary compiled from many sources. We also use named entity recognition, so that names are not wrongly corrected as misspellings. We apply our spell checker to three different types of free-text data: clinical notes, allergy entries, and medication orders; and evaluate its performance on both misspelling detection and correction. Our spell checker achieves detection performance of up to 94.4% and correction accuracy of up to 88.2%.", "title": "" }, { "docid": "78bc13c6b86ea9a8fda75b66f665c39f", "text": "We propose a stochastic answer network (SAN) to explore multi-step inference strategies in Natural Language Inference. Rather than directly predicting the results given the inputs, the model maintains a state and iteratively refines its predictions. Our experiments show that SAN achieves state-of-the-art results on three benchmarks: the Stanford Natural Language Inference (SNLI) dataset, the Multi-Genre Natural Language Inference (MultiNLI) dataset and the Quora Question Pairs dataset.", "title": "" }, { "docid": "53ae229e708297bf73cf3a33b32e42da", "text": "Signal-dependent phase variation, AM/PM, along with amplitude variation, AM/AM, are known to determine the nonlinear distortion characteristics of current-mode PAs. However, these distortion effects have been treated separately, putting more weight on the amplitude distortion, while the AM/PM generation mechanisms are yet to be fully understood. Hence, the aim of this work is to present a large-signal physical model that can describe both the AM/AM and AM/PM PA nonlinear distortion characteristics and their internal relationship.", "title": "" }, { "docid": "c6d25017a6cba404922933672a18d08a", "text": "The Internet of Things (IoT) makes smart objects the ultimate building blocks in the development of cyber-physical smart pervasive frameworks. The IoT has a variety of application domains, including health care. The IoT revolution is redesigning modern health care with promising technological, economic, and social prospects.
This paper surveys advances in IoT-based health care technologies and reviews the state-of-the-art network architectures/platforms, applications, and industrial trends in IoT-based health care solutions. In addition, this paper analyzes distinct IoT security and privacy features, including security requirements, threat models, and attack taxonomies from the health care perspective. Further, this paper proposes an intelligent collaborative security model to minimize security risk; discusses how different innovations such as big data, ambient intelligence, and wearables can be leveraged in a health care context; addresses various IoT and eHealth policies and regulations across the world to determine how they can facilitate economies and societies in terms of sustainable development; and provides some avenues for future research on IoT-based health care based on a set of open issues and challenges.", "title": "" }, { "docid": "e33fd686860657a93a0e47807b4cbe24", "text": "Planning optimal paths for large numbers of robots is computationally expensive. In this thesis, we present a new framework for multirobot path planning called subdimensional expansion, which initially plans for each robot individually, and then coordinates motion among the robots as needed. More specifically, subdimensional expansion initially creates a one-dimensional search space embedded in the joint configuration space of the multirobot system. When the search space is found to be blocked during planning by a robot-robot collision, the dimensionality of the search space is locally increased to ensure that an alternative path can be found. As a result, robots are only coordinated when necessary, which reduces the computational cost of finding a path. Subdimensional expansion is a flexible framework that can be used with multiple planning algorithms. For discrete planning problems, subdimensional expansion can be combined with A* to produce the M* algorithm, a complete and optimal multirobot path planning problem. When the configuration space of individual robots is too large to be explored effectively with A*, subdimensional expansion can be combined with probabilistic planning algorithms to produce sRRT and sPRM. M* is then extended to solve variants of the multirobot path planning algorithm. We present the Constraint Manifold Subsearch (CMS) algorithm to solve problems where robots must dynamically form and dissolve teams with other robots to perform cooperative tasks. Uncertainty M* (UM*) is a variant of M* that handles systems with probabilistic dynamics. Finally, we apply M* to multirobot sequential composition. Results are validated with extensive simulations and experiments on multiple physical robots.", "title": "" }, { "docid": "73d31d63cfaeba5fa7c2d2acc4044ca0", "text": "Plastics in the marine environment have become a major concern because of their persistence at sea, and adverse consequences to marine life and potentially human health. Implementing mitigation strategies requires an understanding and quantification of marine plastic sources, taking spatial and temporal variability into account. Here we present a global model of plastic inputs from rivers into oceans based on waste management, population density and hydrological information. Our model is calibrated against measurements available in the literature. We estimate that between 1.15 and 2.41 million tonnes of plastic waste currently enters the ocean every year from rivers, with over 74% of emissions occurring between May and October. 
The top 20 polluting rivers, mostly located in Asia, account for 67% of the global total. The findings of this study provide baseline data for ocean plastic mass balance exercises, and assist in prioritizing future plastic debris monitoring and mitigation strategies.", "title": "" }, { "docid": "e3853e259c3ae6739dcae3143e2074a8", "text": "A new reference collection of patent documents for training and testing automated categorization systems is established and described in detail. This collection is tailored for automating the attribution of international patent classification codes to patent applications and is made publicly available for future research work. We report the results of applying a variety of machine learning algorithms to the automated categorization of English-language patent documents. This procedure involves a complex hierarchical taxonomy, within which we classify documents into 114 classes and 451 subclasses. Several measures of categorization success are described and evaluated. We investigate how best to resolve the training problems related to the attribution of multiple classification codes to each patent document.", "title": "" }, { "docid": "f160dd844c54dafc8c5265ff0e4d4a05", "text": "The increasing number of smart phones presents a significant opportunity for the development of m-payment services. Despite the predicted success of m-payment, the market remains immature in most countries. This can be explained by the lack of agreement on standards and business models for all stakeholders in m-payment ecosystem. In this paper, the STOF business model framework is employed to analyze m-payment services from the point of view of one of the key players in the ecosystem i.e., banks. We apply Analytic Hierarchy Process (AHP) method to analyze the critical design issues for four domains of the STOF model. The results of the analysis show that service domain is the most important, followed by technology, organization and finance domains. Security related issues are found to be the most important by bank representatives. The future research can be extended to the m-payment ecosystem by collecting data from different actors from the ecosystem.", "title": "" }, { "docid": "f3d0ae1db485b95b8b6931f8c6f2ea40", "text": "Spoken language understanding (SLU) is a core component of a spoken dialogue system. In the traditional architecture of dialogue systems, the SLU component treats each utterance independent of each other, and then the following components aggregate the multi-turn information in the separate phases. However, there are two challenges: 1) errors from previous turns may be propagated and then degrade the performance of the current turn; 2) knowledge mentioned in the long history may not be carried into the current turn. This paper addresses the above issues by proposing an architecture using end-to-end memory networks to model knowledge carryover in multi-turn conversations, where utterances encoded with intents and slots can be stored as embeddings in the memory and the decoding phase applies an attention model to leverage previously stored semantics for intent prediction and slot tagging simultaneously. 
The experiments on Microsoft Cortana conversational data show that the proposed memory network architecture can effectively extract salient semantics for modeling knowledge carryover in the multi-turn conversations and outperform the results using the state-of-the-art recurrent neural network framework (RNN) designed for single-turn SLU.", "title": "" }, { "docid": "b2283fb23a199dbfec42b76dec31ac69", "text": "High accurate indoor localization and tracking of smart phones is critical to pervasive applications. Most radio-based solutions either exploit some error prone power-distance models or require some labor-intensive process of site survey to construct RSS fingerprint database. This study offers a new perspective to exploit RSS readings by their contrast relationship rather than absolute values, leading to three observations and functions called turn verifying, room distinguishing and entrance discovering. On this basis, we design WaP (WiFi-Assisted Particle filter), an indoor localization and tracking system exploiting particle filters to combine dead reckoning, RSS-based analyzing and knowledge of floor plan together. All the prerequisites of WaP are the floor plan and the coarse locations on which room the APs reside. WaP prototype is realized on off-the-shelf smartphones with limited particle number typically 400, and validated in a college building covering 1362m2. Experiment results show that WaP can achieve average localization error of 0.71m for 100 trajectories by 8 pedestrians.", "title": "" }, { "docid": "10634117fd51d94f9b12b9f0ed034f65", "text": "Our corpus of descriptive text contains a significant number of long-distance pronominal references (8.4% of the total). In order to account for how these pronouns are interpreted, we re-examine Grosz and Sidner’s theory of the attentional state, and in particular the use of the global focus to supplement centering theory. Our corpus evidence concerning these long-distance pronominal references, as well as studies of the use of descriptions, proper names and ambiguous uses of pronouns, lead us to conclude that a discourse focus stack mechanism of the type proposed by Sidner is essential to account for the use of these referring expressions. We suggest revising the Grosz & Sidner framework by allowing for the possibility that an entity in a focus space may have special status.", "title": "" }, { "docid": "1840d879044662bfb1e6b2ea3ee9c2c8", "text": "Working memory (WM) training has been reported to benefit abilities as diverse as fluid intelligence (Jaeggi et al., Proceedings of the National Academy of Sciences of the United States of America, 105:6829-6833, 2008) and reading comprehension (Chein & Morrison, Psychonomic Bulletin & Review, 17:193-199, 2010), but transfer is not always observed (for reviews, see Morrison & Chein, Psychonomics Bulletin & Review, 18:46-60, 2011; Shipstead et al., Psychological Bulletin, 138:628-654, 2012). In contrast, recent WM training studies have consistently reported improvement on the trained tasks. The basis for these training benefits has received little attention, however, and it is not known which WM components and/or processes are being improved. Therefore, the goal of the present study was to investigate five possible mechanisms underlying the effects of adaptive dual n-back training on working memory (i.e., improvements in executive attention, updating, and focus switching, as well as increases in the capacity of the focus of attention and short-term memory). 
In addition to a no-contact control group, the present study also included an active control group whose members received nonadaptive training on the same task. All three groups showed significant improvements on the n-back task from pretest to posttest, but adaptive training produced larger improvements than did nonadaptive training, which in turn produced larger improvements than simply retesting. Adaptive, but not nonadaptive, training also resulted in improvements on an untrained running span task that measured the capacity of the focus of attention. No other differential improvements were observed, suggesting that increases in the capacity of the focus of attention underlie the benefits of adaptive dual n-back training.", "title": "" } ]
scidocsrr
df343f7c434386cbe83582a84e00fc2a
On Feature Matching and Image Registration for Two-dimensional Forward-scan Sonar Imaging
[ { "docid": "ed9e22167d3e9e695f67e208b891b698", "text": "ÐIn k-means clustering, we are given a set of n data points in d-dimensional space R and an integer k and the problem is to determine a set of k points in R, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation. Index TermsÐPattern recognition, machine learning, data mining, k-means clustering, nearest-neighbor searching, k-d tree, computational geometry, knowledge discovery.", "title": "" } ]
[ { "docid": "f1e5f8ab0b2ce32553dd5e08f1113b36", "text": "We examined the hypothesis that an excess accumulation of intramuscular lipid (IMCL) is associated with insulin resistance and that this may be mediated by the oxidative capacity of muscle. Nine sedentary lean (L) and 11 obese (O) subjects, 8 obese subjects with type 2 diabetes mellitus (D), and 9 lean, exercise-trained (T) subjects volunteered for this study. Insulin sensitivity (M) determined during a hyperinsulinemic (40 mU x m(-2)min(-1)) euglycemic clamp was greater (P < 0.01) in L and T, compared with O and D (9.45 +/- 0.59 and 10.26 +/- 0.78 vs. 5.51 +/- 0.61 and 1.15 +/- 0.83 mg x min(-1)kg fat free mass(-1), respectively). IMCL in percutaneous vastus lateralis biopsy specimens by quantitative image analysis of Oil Red O staining was approximately 2-fold higher in D than in L (3.04 +/- 0.39 vs. 1.40 +/- 0.28% area as lipid; P < 0.01). IMCL was also higher in T (2.36 +/- 0.37), compared with L (P < 0.01). The oxidative capacity of muscle determined with succinate dehydrogenase staining of muscle fibers was higher in T, compared with L, O, and D (50.0 +/- 4.4, 36.1 +/- 4.4, 29.7 +/- 3.8, and 33.4 +/- 4.7 optical density units, respectively; P < 0.01). IMCL was negatively associated with M (r = -0.57, P < 0.05) when endurance-trained subjects were excluded from the analysis, and this association was independent of body mass index. However, the relationship between IMCL and M was not significant when trained individuals were included. There was a positive association between the oxidative capacity and M among nondiabetics (r = 0.37, P < 0.05). In summary, skeletal muscle of trained endurance athletes is markedly insulin sensitive and has a high oxidative capacity, despite having an elevated lipid content. In conclusion, the capacity for lipid oxidation may be an important mediator of the association between excess muscle lipid accumulation and insulin resistance.", "title": "" }, { "docid": "d69a8dde296d21f4e3334f436deefdf1", "text": "In this work, we demonstrate that 3D poses in video can be effectively estimated with a fully convolutional model based on dilated temporal convolutions over 2D keypoints. We also introduce back-projection, a simple and effective semi-supervised training method that leverages unlabeled video data. We start with predicted 2D keypoints for unlabeled video, then estimate 3D poses and finally back-project to the input 2D keypoints. In the supervised setting, our fully-convolutional model outperforms the previous best result from the literature by 6 mm mean per-joint position error on Human3.6M, corresponding to an error reduction of 11%, and the model also shows significant improvements on HumanEva-I. Moreover, experiments with back-projection show that it comfortably outperforms previous state-of-the-art results in semisupervised settings where labeled data is scarce. Code and models are available at https://github.com/ facebookresearch/VideoPose3D", "title": "" }, { "docid": "68bb5cb195c910e0a52c81a42a9e141c", "text": "With advances in brain-computer interface (BCI) research, a portable few- or single-channel BCI system has become necessary. Most recent BCI studies have demonstrated that the common spatial pattern (CSP) algorithm is a powerful tool in extracting features for multiple-class motor imagery. However, since the CSP algorithm requires multi-channel information, it is not suitable for a few- or single-channel system. 
In this study, we applied a short-time Fourier transform to decompose a single-channel electroencephalography signal into the time-frequency domain and construct multi-channel information. Using the reconstructed data, the CSP was combined with a support vector machine to obtain high classification accuracies from channels of both the sensorimotor and forehead areas. These results suggest that motor imagery can be detected with a single channel not only from the traditional sensorimotor area but also from the forehead area.", "title": "" }, { "docid": "473f51629f0267530a02472fb1e5b7ac", "text": "It has been widely reported that a large number of ERP implementations fail to meet expectations. This is indicative, firstly, of the magnitude of the problems involved in ERP systems implementation and, secondly, of the importance of the ex-ante evaluation and selection process of ERP software. This paper argues that ERP evaluation should extend its scope beyond operational improvements arising from the ERP software/product per se to the strategic impact of ERP on the competitive position of the organisation. Due to the complexity of ERP software, the intangible nature of both costs and benefits, which evolve over time, and the organisational, technological and behavioural impact of ERP, a broad perspective of the ERP systems evaluation process is needed. The evaluation has to be both quantitative and qualitative and requires an estimation of the perceived costs and benefits throughout the life-cycle of ERP systems. The paper concludes by providing a framework of the key issues involved in the selection process of ERP software and the associated costs and benefits. European Journal of Information Systems (2001) 10, 204–215.", "title": "" }, { "docid": "4a80d4ecb00fd27b29f342794213fc41", "text": "Rapid and accurate analysis of platelet count plays an important role in evaluating hemorrhagic status. Therefore, we evaluated the platelet counting performance of a hematology analyzer, Celltac F (MEK-8222, Nihon Kohden Corporation, Tokyo, Japan), that features easy use with low reagent consumption and high throughput while occupying minimal space in the clinical laboratory. All blood samples were anticoagulated with dipotassium ethylenediaminetetraacetic acid (EDTA-2K). The samples were stored at room temperature (18°C-22°C) and tested within 4 hours of phlebotomy. We evaluated the counting ability of the Celltac F hematology analyzer by comparing it with the platelet counts obtained by the flow cytometry method that ISLH and ICSH recommended, and also the manual visual method by Unopette (Becton Dickinson Vacutainer Systems). The ICSH/ISLH reference method is based on the fact that platelets can be stained with monoclonal antibodies to CD41 and/or CD61. The dilution ratio was optimized after the precision, coincidence events, and debris counts were confirmed by the reference method. Good correlations of platelet count between the Celltac F and the ICSH/ISLH reference method (r = 0.99) and the manual visual method (r = 0.93) were obtained. The regression equations were y = 0.90x + 9.0 and y = 1.11x + 8.4, respectively. We conclude that the Celltac F hematology analyzer for platelet counting was well suited to the ICSH/ISLH reference method in terms of rapidity and reliability.", "title": "" }, { "docid": "dd6b922a2cced45284cd1c67ad3be247", "text": "Today’s interconnected socio-economic and environmental challenges require the combination and reuse of existing integrated modelling solutions.
This paper contributes to this overall research area, by reviewing a wide range of currently available frameworks, systems and emerging technologies for integrated modelling in the environmental sciences. Based on a systematic review of the literature, we group related studies and papers into viewpoints and elaborate on shared and diverging characteristics. Our analysis shows that component-based modelling frameworks and scientific workflow systems have been traditionally used for solving technical integration challenges, but ultimately, the appropriate framework or system strongly depends on the particular environmental phenomenon under investigation. The study also shows that in general individual integrated modelling solutions do not benefit from components and models that are provided by others. It is this island (or silo) situation, which results in low levels of model reuse for multi-disciplinary settings. This seems mainly due to the fact that the field as such is highly complex and diverse. A unique integrated modelling solution, which is capable of dealing with any environmental scenario, seems to be unaffordable because of the great variety of data formats, models, environmental phenomena, stakeholder networks, user perspectives and social aspects. Nevertheless, we conclude that the combination of modelling tools, which address complementary viewpoints such as service-based combined with scientific workflow systems, or resource-modelling on top of virtual research environments could lead to sustainable information systems, which would advance model sharing, reuse and integration. Next steps for improving this form of multi-disciplinary interoperability are sketched.", "title": "" }, { "docid": "f418441593da8db1dcbaa922cccc21fa", "text": "Sentiment analysis, as a heatedly-discussed research topic in the area of information extraction, has attracted more attention from the beginning of this century. With the rapid development of the Internet, especially the rising popularity of Web2.0 technology, network user has become not only the content maker, but also the receiver of information. Meanwhile, benefiting from the development and maturity of the technology in natural language processing and machine learning, we can widely employ sentiment analysis on subjective texts. In this paper, we propose a supervised learning method on fine-grained sentiment analysis to meet the new challenges by exploring new research ideas and methods to further improve the accuracy and practicability of sentiment analysis. First, this paper presents an improved strength computation method of sentiment word. Second, this paper introduces a sentiment information joint recognition model based on Conditional Random Fields and analyzes the related knowledge of the basic and semantic features. Finally, the experimental results show that our approach and a demo system are feasible and effective.", "title": "" }, { "docid": "a73275f83b94ee3fb1675a125edbb55a", "text": "Treatment of biowaste, the predominant waste fraction in lowand middle-income settings, offers public health, environmental and economic benefits by converting waste into a hygienic product, diverting it from disposal sites, and providing a source of income. 
This article presents a comprehensive overview of 13 biowaste treatment technologies, grouped into four categories: (1) direct use (direct land application, direct animal feed, direct combustion), (2) biological treatment (composting, vermicomposting, black soldier fly treatment, anaerobic digestion, fermentation), (3) physico-chemical treatment (transesterification, densification), and (4) thermo-chemical treatment (pyrolysis, liquefaction, gasification). Based on a literature review and expert consultation, the main feedstock requirements, process conditions and treatment products are summarized, and the challenges and trends, particularly regarding the applicability of each technology in the urban low- and middle-income context, are critically discussed. An analysis of the scientific articles published from 2005 to 2015 reveals substantial differences in the amount and type of research published for each technology, a fact that can partly be explained with the development stage of the technologies. Overall, publications from case studies and field research seem disproportionately underrepresented for all technologies. One may argue that this reflects the main task of researchers—to conduct fundamental research for enhanced process understanding—but it may also be a result of the traditional embedding of the waste sector in the discipline of engineering science, where socio-economic and management aspects are seldom object of the research. More unbiased, well-structured and reproducible evidence from case studies at scale could foster the knowledge transfer to practitioners and enhance the exchange between academia, policy and practice.", "title": "" }, { "docid": "ba94bc5f5762017aed0c307ce89c0558", "text": "Carsharing has emerged as an alternative to vehicle ownership and is a rapidly expanding global market. Particularly through the flexibility of free-floating models, car sharing complements public transport since customers do not need to return cars to specific stations. We present a novel data analytics approach that provides decision support to car sharing operators -- from local start-ups to global players -- in maneuvering this constantly growing and changing market environment. Using a large set of rental data, as well as zero-inflated and geographically weighted regression models, we derive indicators for the attractiveness of certain areas based on points of interest in their vicinity. These indicators are valuable for a variety of operational and strategic decisions. As a demonstration project, we present a case study of Berlin, where the indicators are used to identify promising regions for business area expansion.", "title": "" }, { "docid": "422caa6ceb9713bee7ebfb64f9c46b8f", "text": "The persistence of illegal activity throughout human history and some of its apparent regularities have long attracted the attention of economists. For example, Adam Smith (1776 [1937], p. 670) observed that crime and the demand for protection from crime are both motivated by the accumulation of property. William Paley (1785 [1822]) presented a penetrating analysis of factors responsible for differences in the actual magnitudes of probability and severity of sanctions for different crimes. Jeremy Bentham, the father of utilitarianism, focused considerable attention on the calculus of both offenders' behavior and the optimal response by the legal authorities. It was not until the late 1960s, however, that economists reconnected with the subject, using modern economic analysis.'
In this paper I shall focus on two of the main themes that characterize the literature on crime in the last three decades. The first is the evolution of a \"market model\" that offers a comprehensive framework for studying the problem. Like the classical approach, the model builds on the assumption that offenders, as members of the human race, respond to incentives. Of course, not every single offender does so. But willful engagement in even the most reprehensible violations of legal and moral codes does not preclude an ability to make self-serving choices, and this has been the justification for applying economic analysis to all illegal activities, from speeding and tax evasion to murder.", "title": "" }, { "docid": "9dd245f75092adc8d8bb2b151275789b", "text": "Current model free learning-based robot grasping approaches exploit human-labeled datasets for training the models. However, there are two problems with such a methodology: (a) since each object can be grasped in multiple ways, manually labeling grasp locations is not a trivial task; (b) human labeling is biased by semantics. While there have been attempts to train robots using trial-and-error experiments, the amount of data used in such experiments remains substantially low and hence makes the learner prone to over-fitting. In this paper, we take the leap of increasing the available training data to 40 times more than prior work, leading to a dataset size of 50K data points collected over 700 hours of robot grasping attempts. This allows us to train a Convolutional Neural Network (CNN) for the task of predicting grasp locations without severe overfitting. In our formulation, we recast the regression problem to an 18-way binary classification over image patches. We also present a multi-stage learning approach where a CNN trained in one stage is used to collect hard negatives in subsequent stages. Our experiments clearly show the benefit of using large-scale datasets (and multi-stage training) for the task of grasping. We also compare to several baselines and show state-of-the-art performance on generalization to unseen objects for grasping.", "title": "" }, { "docid": "3867ff9ac24349b17e50ec2a34e84da4", "text": "Each generation that enters the workforce brings with it its own unique perspectives and values, shaped by the times of their life, about work and the work environment; thus posing atypical human resources management challenges. Following the completion of an extensive quantitative study conducted in Cyprus, and by adopting a qualitative methodology, the researchers aim to further explore the occupational similarities and differences of the two prevailing generations, X and Y, currently active in the workplace. Moreover, the study investigates the effects of the perceptual generational differences on managing the diverse hospitality workplace. Industry implications, recommendations for stakeholders as well as directions for further scholarly research are discussed.", "title": "" }, { "docid": "b0e94a0fdaf280d9e1942befdc4ac660", "text": "In SCARA robots, which are often used in industrial applications, all joint axes are parallel, covering three degrees of freedom in translation and one degree of freedom in rotation. Therefore, conventional approaches for the hand-eye calibration of articulated robots cannot be used for SCARA robots. In this paper, we present a new linear method that is based on dual quaternions and extends the work of Daniilid is 1999 (IJRR) for SCARA robots. 
To improve the accuracy, a subsequent nonlinear optimization is proposed. We address several practical implementation issues and show the effectiveness of the method by evaluating it on synthetic and real data.", "title": "" }, { "docid": "3194a0dd979b668bb25afb10260c30d2", "text": "An octa-band antenna for 5.7-in mobile phones with the size of 80 mm × 6 mm × 5.8 mm is proposed and studied. The proposed antenna is composed of a coupled line, a monopole branch, and a ground branch. By using the 0.25-, 0.5-, and 0.75-wavelength modes, the lower band (704–960 MHz) and the higher band (1710–2690 MHz) are covered. The working mechanism is analyzed based on the S-parameters and the surface current distributions. The attractive merits of the proposed antenna are that the nonground portion height is only 6 mm and any lumped element is not used. A prototype of the proposed antenna is fabricated and measured. The measured −6 dB impedance bandwidths are 350 MHz (0.67–1.02 GHz) and 1.27 GHz (1.65–2.92 GHz) at the lower and higher bands, respectively, which can cover the LTE700, GSM850, GSM900, GSM1800, GSM1900, UMTS, LTE2300, and LTE2500 bands. The measured patterns, gains, and efficiencies are presented.", "title": "" }, { "docid": "b819c10fb84e576cb6444023246b91b0", "text": "BCAAs (leucine, isoleucine, and valine), particularly leucine, have anabolic effects on protein metabolism by increasing the rate of protein synthesis and decreasing the rate of protein degradation in resting human muscle. Also, during recovery from endurance exercise, BCAAs were found to have anabolic effects in human muscle. These effects are likely to be mediated through changes in signaling pathways controlling protein synthesis. This involves phosphorylation of the mammalian target of rapamycin (mTOR) and sequential activation of 70-kD S6 protein kinase (p70 S6 kinase) and the eukaryotic initiation factor 4E-binding protein 1. Activation of p70 S6 kinase, and subsequent phosphorylation of the ribosomal protein S6, is associated with enhanced translation of specific mRNAs. When BCAAs were supplied to subjects during and after one session of quadriceps muscle resistance exercise, an increase in mTOR, p70 S6 kinase, and S6 phosphorylation was found in the recovery period after the exercise with no effect of BCAAs on Akt or glycogen synthase kinase 3 (GSK-3) phosphorylation. Exercise without BCAA intake led to a partial phosphorylation of p70 S6 kinase without activating the enzyme, a decrease in Akt phosphorylation, and no change in GSK-3. It has previously been shown that leucine infusion increases p70 S6 kinase phosphorylation in an Akt-independent manner in resting subjects; however, a relation between mTOR and p70 S6 kinase has not been reported previously. The results suggest that BCAAs activate mTOR and p70 S6 kinase in human muscle in the recovery period after exercise and that GSK-3 is not involved in the anabolic action of BCAAs on human muscle. J. Nutr. 136: 269S–273S, 2006.", "title": "" }, { "docid": "d6ed9594536cada2d857a876fd9e21ae", "text": "As computing and network technology continue to grow, data storage demands also increase. Data security has become a crucial issue in electronic communication. Secret writing has come up as a solution, and plays a vital role in data security systems.
It uses some algorithms to scramble data into unreadable text which might be only being decrypted by party those having the associated key. These algorithms consume a major amount of computing resources such as memory and battery power and computation time. This paper accomplishes comparative analysis of encryption standards DES, AES and RSA considering various parameters such as computation time, memory usages. A cryptographic tool is used for performing experiments. Experiments results are given to analyses the effectiveness of symmetric and asymmetric algorithms. Keywords— Encryption, secret key encryption, public key encryption, DES, AES, RSA encryption, Symmetric", "title": "" }, { "docid": "9a57dfbbd233c851ae972403c67c35d5", "text": "It is well established that women’s preferences for masculinity are contingent on their own market-value and the duration of the sought relationship, but few studies have investigated similar effects in men. Here, we tested whether men’s attractiveness predicts their preferences for feminine face shape in women when judging for longand short-term relationship partners. We found that attractive men expressed a stronger preference for facial femininity compared to less attractive men. The relationship was evident when men judged women for a short-term, but not for a long-term, relationship. These findings suggest that market-value may influence men’s preferences for feminine characteristics in women’s faces and indicate that men’s preferences may be subject to facultative variation to a greater degree than was previously thought. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "5457f45fa815a4d96b39e982b79836bd", "text": "Liu, He M , Purdue University, August 2016. Image Quality Estimation: Soft-ware for Objective Evaluation . Major Professor: Amy R. Reibman. Digital images are widely used in our daily lives and the quality of images is important to the viewing experience. Low quality images may be blurry or contain noise or compression artifacts. Humans can easily estimate image quality, but it is not practical to use human subjects to measure image quality in real applications. Image Quality Estimators (QE) are algorithms that evaluate image qualities automatically. These QEs compute scores of any input images to represent their qualities. This thesis mainly focuses on evaluating the performance of QEs. Two approaches used in this work are objective software analysis and the subjective database design. For the first, we create a software consisting of functional modules to test QE performances. These modules can load images from subjective databases or generate distortion images from any input images. Their QE scores are computed and analyzed by the statistical method module so that they can be easily interpreted and reported. Some modules in this software are combined and formed into a published software package: Stress Testing Image Quality Estimators (STIQE). In addition to the QE analysis software, a new subjective database is designed and implemented using both online and in-lab subjective tests. The database is designed using the pairwise comparison method and the subjective quality scores are computed using the Bradley-Terry model and Maximum Likelihood Estimation (MLE). 
While four testing phases are designed for this databases, only phase 1 is reported in this", "title": "" }, { "docid": "d76d09ca1e87eb2e08ccc03428c62be0", "text": "Face recognition has the perception of a solved problem, however when tested at the million-scale exhibits dramatic variation in accuracies across the different algorithms [11]. Are the algorithms very different? Is access to good/big training data their secret weapon? Where should face recognition improve? To address those questions, we created a benchmark, MF2, that requires all algorithms to be trained on same data, and tested at the million scale. MF2 is a public large-scale set with 672K identities and 4.7M photos created with the goal to level playing field for large scale face recognition. We contrast our results with findings from the other two large-scale benchmarks MegaFace Challenge and MS-Celebs-1M where groups were allowed to train on any private/public/big/small set. Some key discoveries: 1) algorithms, trained on MF2, were able to achieve state of the art and comparable results to algorithms trained on massive private sets, 2) some outperformed themselves once trained on MF2, 3) invariance to aging suffers from low accuracies as in MegaFace, identifying the need for larger age variations possibly within identities or adjustment of algorithms in future testing.", "title": "" }, { "docid": "bb75aa9bbe07e635493b123eaaadf74d", "text": "Right ventricular (RV) pacing increases the incidence of atrial fibrillation (AF) and hospitalization rate for heart failure. Many patients with sinus node dysfunction (SND) are implanted with a DDDR pacemaker to ensure the treatment of slowly conducted atrial fibrillation and atrioventricular (AV) block. Many pacemakers are never reprogrammed after implantation. This study aims to evaluate the effectiveness of programming DDIR with a long AV delay in patients with SND and preserved AV conduction as a possible strategy to reduce RV pacing in comparison with a nominal DDDR setting including an AV search hysteresis. In 61 patients (70 ± 10 years, 34 male, PR < 200 ms, AV-Wenckebach rate at ≥130 bpm) with symptomatic SND a DDDR pacemaker was implanted. The cumulative prevalence of right ventricular pacing was assessed according to the pacemaker counter in the nominal DDDR-Mode (AV delay 150/120 ms after atrial pacing/sensing, AV search hysteresis active) during the first postoperative days and in DDIR with an individually programmed long fixed AV delay after 100 days (median). With the nominal DDDR mode the median incidence of right ventricular pacing amounted to 25.2%, whereas with DDIR and long AV delay the median prevalence of RV pacing was significantly reduced to 1.1% (P < 0.001). In 30 patients (49%) right ventricular pacing was almost completely (<1%) eliminated, n = 22 (36%) had >1% <20% and n = 4 (7%) had >40% right ventricular pacing. The median PR interval was 161 ms. The median AV interval with DDIR was 280 ms. The incidence of right ventricular pacing in patients with SND and preserved AV conduction, who are treated with a dual chamber pacemaker, can significantly be reduced by programming DDIR with a long, individually adapted AV delay when compared with a nominal DDDR setting, but nonetheless in some patients this strategy produces a high proportion of disadvantageous RV pacing. The DDIR mode with long AV delay provides an effective strategy to reduce unnecessary right ventricular pacing but the effect has to be verified in every single patient.", "title": "" } ]
scidocsrr
607048a795d01591be1876e687ee657c
A Study of Reinforcement Learning for Neural Machine Translation
[ { "docid": "0f699e9f14753b2cbfb7f7a3c7057f40", "text": "There has been much recent work on training neural attention models at the sequencelevel using either reinforcement learning-style methods or by optimizing the beam. In this paper, we survey a range of classical objective functions that have been widely used to train linear models for structured prediction and apply them to neural sequence to sequence models. Our experiments show that these losses can perform surprisingly well by slightly outperforming beam search optimization in a like for like setup. We also report new state of the art results on both IWSLT’14 German-English translation as well as Gigaword abstractive summarization. On the large WMT’14 English-French task, sequence-level training achieves 41.5 BLEU which is on par with the state of the art.1", "title": "" } ]
[ { "docid": "97e80e728b53042d6e9962dd03b5df87", "text": "(1) is an example of an adjectival comparative. In it, the adjective important is flanked by more and a comparative clause headed by than. This article is a survey of recent ideas about the interpretation of comparatives, including (i) the underlying semantics based on the idea of a threshold; (ii) the interpretation of comparative clauses that include quantifiers (brighter than on many other days); (iii) remarks on differentials such as much in (1) above: what they do in the comparative and what they do elsewhere in the language; (iv) the relationship between comparatives and other Degree constructions (e.g. as important, too important); and (v) the types of phrases in which comparatives are found (adjective: tighter versus noun: more water). Given the nature and purpose of this essay, I have tried not to presuppose background in formal semantics and I have departed from standard practice in journal articles by, as much as possible, not interrupting the flow with footnotes and references. There are two appendices. The first provides more analytical detail and there I do rely on formal techniques of natural language semantics. The second covers the sources for the ideas surveyed here.", "title": "" }, { "docid": "408d3db3b2126990611fdc3a62a985ea", "text": "Multi-choice reading comprehension is a challenging task, which involves the matching between a passage and a question-answer pair. This paper proposes a new co-matching approach to this problem, which jointly models whether a passage can match both a question and a candidate answer. Experimental results on the RACE dataset demonstrate that our approach achieves state-of-the-art performance.", "title": "" }, { "docid": "e62daef8b5273096e0f174c73e3674a8", "text": "A wide range of human-robot collaborative applications in diverse domains such as manufacturing, search-andrescue, health care, the entertainment industry, and social interactions, require an autonomous robot to follow its human companion. Different working environments and applications pose diverse challenges by adding constraints on the choice of sensors, the degree of autonomy, and dynamics of the person-following robot. Researchers have addressed these challenges in many ways and contributed to the development of a large body of literature. This paper provides a comprehensive overview of the literature by categorizing different aspects of person-following by autonomous robots. Also, the corresponding operational challenges are identified based on various design choices for ground, underwater, and aerial scenarios. In addition, state-of-the-art methods for perception, planning, control, and interaction are elaborately discussed and their applicability in varied operational scenarios are presented. Then, qualitative evaluations of some of the prominent methods are performed, corresponding practicalities are illustrated, and their feasibility is analyzed in terms of standard metrics. Furthermore, several prospective application areas are identified, and open problems are highlighted for future research.", "title": "" }, { "docid": "4829d8c0dd21f84c3afbe6e1249d6248", "text": "We present an action recognition and detection system from temporally untrimmed videos by combining motion and appearance features. Motion and appearance are two kinds of complementary cues for human action understanding from video. 
For motion features, we adopt the Fisher vector representation with improved dense trajectories due to its rich descriptive capacity. For appearance feature, we choose the deep convolutional neural network activations due to its recent success in image based tasks. With this fused feature of iDT and CNN, we train a SVM classifier for each action class in the one-vs-all scheme. We report both the recognition and detection results of our system on Thumos 14 Challenge. From the results, we see that our method rank 4 in the action recognition task and 2 in the action detection task.", "title": "" }, { "docid": "3427740a87691629bd6cf97792089f62", "text": "Maintainers face the daunting task of wading through a collection of both new and old revisions, trying to ferret out revisions which warrant personal inspection. One can rank revisions by size/lines of code (LOC), but often, due to the distribution of the size of changes, revisions will be of similar size. If we can't rank revisions by LOC perhaps we can rank by Halstead's and McCabe's complexity metrics? However, these metrics are problematic when applied to code fragments (revisions) written in multiple languages: special parsers are required which may not support the language or dialect used; analysis tools may not understand code fragments. We propose using the statistical moments of indentation as a lightweight, language independent, revision/diff friendly metric which actually proxies classical complexity metrics. We have extensively evaluated our approach against the entire CVS histories of the 278 of the most popular and most active SourceForge projects. We found that our results are linearly correlated and rank-correlated with traditional measures of complexity, suggesting that measuring indentation is a cheap and accurate proxy for code complexity of revisions. Thus ranking revisions by the standard deviation and summation of indentation will be very similar to ranking revisions by complexity.", "title": "" }, { "docid": "b50498964a73a59f54b3a213f2626935", "text": "To reduce the significant redundancy in deep Convolutional Neural Networks (CNNs), most existing methods prune neurons by only considering the statistics of an individual layer or two consecutive layers (e.g., prune one layer to minimize the reconstruction error of the next layer), ignoring the effect of error propagation in deep networks. In contrast, we argue that for a pruned network to retain its predictive power, it is essential to prune neurons in the entire neuron network jointly based on a unified goal: minimizing the reconstruction error of important responses in the \"final response layer\" (FRL), which is the second-to-last layer before classification. Specifically, we apply feature ranking techniques to measure the importance of each neuron in the FRL, formulate network pruning as a binary integer optimization problem, and derive a closed-form solution to it for pruning neurons in earlier layers. Based on our theoretical analysis, we propose the Neuron Importance Score Propagation (NISP) algorithm to propagate the importance scores of final responses to every neuron in the network. The CNN is pruned by removing neurons with least importance, and it is then fine-tuned to recover its predictive power. 
NISP is evaluated on several datasets with multiple CNN models and demonstrated to achieve significant acceleration and compression with negligible accuracy loss.", "title": "" }, { "docid": "11d06fb5474df44a6bc733bd5cd1263d", "text": "Understanding how materials that catalyse the oxygen evolution reaction (OER) function is essential for the development of efficient energy-storage technologies. The traditional understanding of the OER mechanism on metal oxides involves four concerted proton-electron transfer steps on metal-ion centres at their surface and product oxygen molecules derived from water. Here, using in situ 18O isotope labelling mass spectrometry, we provide direct experimental evidence that the O2 generated during the OER on some highly active oxides can come from lattice oxygen. The oxides capable of lattice-oxygen oxidation also exhibit pH-dependent OER activity on the reversible hydrogen electrode scale, indicating non-concerted proton-electron transfers in the OER mechanism. Based on our experimental data and density functional theory calculations, we discuss mechanisms that are fundamentally different from the conventional scheme and show that increasing the covalency of metal-oxygen bonds is critical to trigger lattice-oxygen oxidation and enable non-concerted proton-electron transfers during OER.", "title": "" }, { "docid": "058a4f93fb5c24c0c9967fca277ee178", "text": "We report on the SUM project which applies automatic summarisation techniques to the legal domain. We describe our methodology whereby sentences from the text are classified according to their rhetorical role in order that particular types of sentence can be extracted to form a summary. We describe some experiments with judgments of the House of Lords: we have performed automatic linguistic annotation of a small sample set and then hand-annotated the sentences in the set in order to explore the relationship between linguistic features and argumentative roles. We use state-of-the-art NLP techniques to perform the linguistic annotation using XML-based tools and a combination of rule-based and statistical methods. We focus here on the predictive capacity of tense and aspect features for a classifier.", "title": "" }, { "docid": "1463e545177c0ad5ab87c394b504b1ee", "text": "The term Cyber-Physical Systems (CPS) typically refers to engineered, physical and biological systems monitored and/or controlled by an embedded computational core. The behaviour of a CPS over time is generally characterised by the evolution of physical quantities, and discrete software and hardware states. In general, these can be mathematically modelled by the evolution of continuous state variables for the physical components interleaved with discrete events. Despite large effort and progress in the exhaustive verification of such hybrid systems, the complexity of CPS models limits formal verification of safety of their behaviour only to small instances. An alternative approach, closer to the practice of simulation and testing, is to monitor and to predict CPS behaviours at simulation-time or at runtime. In this chapter, we summarise the state-of-the-art techniques for qualitative and quantitative monitoring of CPS behaviours. 
We present an overview of some of the important applications and, finally, we describe the tools supporting CPS monitoring and compare their main features.", "title": "" }, { "docid": "1f45397efcbe3db84f45a1498267593c", "text": "Multiobjective evolutionary algorithm based on decomposition (MOEA/D) decomposes a multiobjective optimization problem into a set of scalar optimization subproblems and optimizes them in a collaborative manner. Subproblems and solutions are two sets of agents that naturally exist in MOEA/D. The selection of promising solutions for subproblems can be regarded as a matching between subproblems and solutions. Stable matching, proposed in economics, can effectively resolve conflicts of interests among selfish agents in the market. In this paper, we advocate the use of a simple and effective stable matching (STM) model to coordinate the selection process in MOEA/D. In this model, subproblem agents can express their preferences over the solution agents, and vice versa. The stable outcome produced by the STM model matches each subproblem with one single solution, and it tradeoffs convergence and diversity of the evolutionary search. Comprehensive experiments have shown the effectiveness and competitiveness of our MOEA/D algorithm with the STM model. We have also demonstrated that user-preference information can be readily used in our proposed algorithm to find a region that decision makers are interested in.", "title": "" }, { "docid": "c721a66169e3ded24c814b16604855f2", "text": "When it comes to smart cities, one of the most important components is data. To enable smart city applications, data needs to be collected, stored, and processed to accomplish intelligent tasks. In this paper we discuss smart cities and the use of new and existing technologies to improve multiple aspects of these cities. There are also social and environmental aspects that have become important in smart cities that create concerns regarding ethics and ethical conduct. Thus we discuss various issues relating to the appropriate and ethical use of smart city applications and their data. Many smart city projects are being implemented and here we showcase several examples to provide context for our ethical analysis. Law enforcement, structure efficiency, utility efficiency, and traffic flow control applications are some areas that could have the most gains in smart cities; yet, they are the most pervasive as the applications performing these activities must collect and process the most private data about the citizens. The secure and ethical use of this data must be a top priority within every project. The paper also provides a list of challenges for smart city applications pertaining in some ways to ethics. These challenges are drawn from the studied examples of smart city projects to bring attention to ethical issues and raise awareness of the need to address and regulate such use of data.", "title": "" }, { "docid": "aec273859fedb6550c461548e9ab7c53", "text": "In this paper, we describe our contribution for the NTCIR-13 Short Text Conversation (STC) Chinese task. Short text conversation remains an important part on social media gathering much attention recently. The task aims to retrieve or generate a relevant comment given a post. We consider both closed and open domain STC for retrieval–based and generation-based track. To be more specific, the former applies a retrieval-based approach from the given corpus, while the later utilizes the Web to fulfill the generation-based track. 
Evaluation results show that our retrieval–based approach performs better than the generation-based one.", "title": "" }, { "docid": "8bd0c280a95f549bd5596fb1f7499e44", "text": "Mobile devices are becoming ubiquitous. People take pictures via their phone cameras to explore the world on the go. In many cases, they are concerned with the picture-related information. Understanding user intent conveyed by those pictures therefore becomes important. Existing mobile applications employ visual search to connect the captured picture with the physical world. However, they only achieve limited success due to the ambiguity nature of user intent in the picture-one picture usually contains multiple objects. By taking advantage of multitouch interactions on mobile devices, this paper presents a prototype of interactive mobile visual search, named TapTell, to help users formulate their visual intent more conveniently. This kind of search leverages limited yet natural user interactions on the phone to achieve more effective visual search while maintaining a satisfying user experience. We make three contributions in this work. First, we conduct a focus study on the usage patterns and concerned factors for mobile visual search, which in turn leads to the interactive design of expressing visual intent by gesture. Second, we introduce four modes of gesture-based interactions (crop, line, lasso, and tap) and develop a mobile prototype. Third, we perform an in-depth usability evaluation on these different modes, which demonstrates the advantage of interactions and shows that lasso is the most natural and effective interaction mode. We show that TapTell provides a natural user experience to use phone camera and gesture to explore the world. Based on the observation and conclusion, we also suggest some design principles for interactive mobile visual search in the future.", "title": "" }, { "docid": "e9a46aa0c797520a9b192fc5607b3521", "text": "A common setting for novelty detection assumes that labeled xamples from the nominal class are available, but that labeled examples of novelties are un available. The standard (inductive) approach is to declare novelties where the nominal density is l ow, which reduces the problem to density level set estimation. In this paper, we consider the setting where an unlabeled and possibly contaminated sample is also available at learning tim e. We argue that novelty detection in this semi-supervised setting is naturally solved by a gener al r duction to a binary classification problem. In particular, a detector with a desired false posi tive rate can be achieved through a reduction to Neyman-Pearson classification. Unlike the induc tive approach, semi-supervised novelty detection (SSND) yields detectors that are optimal (e.g., s tatistically consistent) regardless of the distribution on novelties. Therefore, in novelty detectio n, unlabeled data have a substantial impact on the theoretical properties of the decision rule. We valid ate the practical utility of SSND with an extensive experimental study. We also show that SSND provides distribution-free, learnin g-theoretic solutions to two well known problems in hypothesis testing. First, our results pr ovide a general solution to the general two-sample problem, that is, the problem of determining whe ther two random samples arise from the same distribution. Second, a specialization of SSND coi ncides with the standard p-value approach to multiple testing under the so-called random effec ts model. 
Unlike standard rejection regions based on thresholded p-values, the general SSND framework allows for adaptation t o arbitrary alternative distributions in multiple dimensions.", "title": "" }, { "docid": "d593c18bf87daa906f83d5ff718bdfd0", "text": "Information and communications technologies (ICTs) have enabled the rise of so-called “Collaborative Consumption” (CC): the peer-to-peer-based activity of obtaining, giving, or sharing the access to goods and services, coordinated through community-based online services. CC has been expected to alleviate societal problems such as hyper-consumption, pollution, and poverty by lowering the cost of economic coordination within communities. However, beyond anecdotal evidence, there is a dearth of understanding why people participate in CC. Therefore, in this article we investigate people’s motivations to participate in CC. The study employs survey data (N = 168) gathered from people registered onto a CC site. The results show that participation in CC is motivated by many factors such as its sustainability, enjoyment of the activity as well as economic gains. An interesting detail in the result is that sustainability is not directly associated with participation unless it is at the same time also associated with positive attitudes towards CC. This suggests that sustainability might only be an important factor for those people for whom ecological consumption is important. Furthermore, the results suggest that in CC an attitudebehavior gap might exist; people perceive the activity positively and say good things about it, but this good attitude does not necessary translate into action. Introduction", "title": "" }, { "docid": "870ac1e223cc937e5f4416c9b2ee4a89", "text": "Effective weed control, using either mechanical or chemical means, relies on knowledge of the crop and weed plant occurrences in the field. This knowledge can be obtained automatically by analyzing images collected in the field. Many existing methods for plant detection in images make the assumption that plant foliage does not overlap. This assumption is often violated, reducing the performance of existing methods. This study overcomes this issue by training a convolutional neural network to create a pixel-wise classification of crops, weeds and soil in RGB images from fields, in order to know the exact position of the plants. This training is based on simulated top-down images of weeds and maize in fields. The results show an pixel accuracy over 94% and a 100% detection rate of both maize and weeds, when tested on real images, while a high intersection over union is kept. The system can handle 2.4 images per second for images with a resolution of 1MPix, when using an Nvidia Titan X GPU.", "title": "" }, { "docid": "eaec68f19fc5a168d5bee7b359ae2789", "text": "New technology in knowledge discovery and data mining (KDD) make it possible to extract valuable information from operational data. Private businesses already use the technology for better management, planning, and marketing. Social welfare government agencies have a wealth of information about the experiences of families and individuals that are the most needy in our society in their administrative databases. These data too can be mined and analyzed with proper application of KDD technology. Such social science research could be priceless for better welfare program administration, program evaluation, and policy analysis. 
In this paper, we discuss a successful case study involving research in computer science as well as social welfare. In a long standing collaboration between the North Carolina DHHS and the University of North Carolina, we have (1) successfully built a longitudinal information system that tracks the experiences of families and individuals on welfare in NC since 1995 (2) developed a dynamic website reporting on the various aspects of the welfare program at the county level in order to assist county staff in the administration of the welfare program and (3) developed a new method to analyze sequential data, which can detect common patterns of welfare services given over time.", "title": "" }, { "docid": "11b857de21829051b55aa8318c4c97f7", "text": "An optimized split-gate-enhanced UMOSFET (SGE-UMOS) layout design is proposed, and its mechanism is investigated by 2-D and 3-D simulations. The layout features trench surrounding mesa (TSM): First, it optimizes the distribution of electric field density in the outer active mesa, reduces the electric-field crowding effect, and improves the breakdown voltage of the SGE-UMOS device. Second, it is unnecessary to design the layout corner with a large diameter in the termination region for the TSM structure as the conventional mesa surrounding trench (MST) structure, which is more efficient in terms of silicon usage. Rsp.on is reduced when compared with the MST structure within the same rectangular chip area. The BV of SGE-UMOS is increased from 72 to 115 V, and Rsp.on is reduced by approximately 3.5% as compared with the MST structure, due to the application of the TSM. Finally, it needs five masks in the process, and the trenches in active and termination regions are formed with the same processing steps; hence, the manufacturing process is simplified, and the cost is reduced as well.", "title": "" }, { "docid": "5184b25a4d056b861f5dbae34300344a", "text": "AFFILIATIONS: Ashouri, Hsu, Sorooshian, and Braithwaite—Center for Hydrometeorology and Remote Sensing, Henry Samueli School of Engineering, Department of Civil and Environmental Engineering, University of California, Irvine, Irvine, California; Knapp and Nelson—NOAA/National Climatic Data Center, Asheville, North Carolina; Cecil—Global Science & Technology, Inc., Asheville, North Carolina; Prat—Cooperative Institute for Climate and Satellites, North Carolina State University, and NOAA/National Climatic Data Center, Asheville, North Carolina. CORRESPONDING AUTHOR: Hamed Ashouri, Center for Hydrometeorology and Remote Sensing, Department of Civil and Environmental Engineering, University of California, Irvine, CA 92697. E-mail: h.ashouri@uci.edu", "title": "" }, { "docid": "6a89658d1200b6d2ee6a33e3bf9cb01f", "text": "No single software fault-detection technique is capable of addressing all fault-detection concerns. Similarly to software reviews and testing, static analysis tools (or automated static analysis) can be used to remove defects prior to release of a software product. To determine to what extent automated static analysis can help in the economic production of a high-quality product, we have analyzed static analysis faults and test and customer-reported failures for three large-scale industrial software systems developed at Nortel Networks. The data indicate that automated static analysis is an affordable means of software fault detection.
Using the orthogonal defect classification scheme, we found that automated static analysis is effective at identifying assignment and checking faults, allowing the later software production phases to focus on more complex, functional, and algorithmic faults. A majority of the defects found by automated static analysis appear to be produced by a few key types of programmer errors and some of these types have the potential to cause security vulnerabilities. Statistical analysis results indicate the number of automated static analysis faults can be effective for identifying problem modules. Our results indicate static analysis tools are complementary to other fault-detection techniques for the economic production of a high-quality software product.", "title": "" } ]
scidocsrr
670089b7b19ec3fd4d3c5a3551b9e38d
A culturally and linguistically responsive vocabulary approach for young Latino dual language learners.
[ { "docid": "e9477e72249764e28945e4bc3a7e6b1e", "text": "English language learners (ELLs) who experience slow vocabulary development are less able to comprehend text at grade level than their English-only peers. Such students are likely to perform poorly on assessments in these areas and are at risk of being diagnosed as learning disabled. In this article, we review the research on methods to develop the vocabulary knowledge of ELLs and present lessons learned from the research concerning effective instructional practices for ELLs. The review suggests that several strategies are especially valuable for ELLs, including taking advantage of students’ first language if the language shares cognates with English; ensuring that ELLs know the meaning of basic words, and providing sufficient review and reinforcement. Finally, we discuss challenges in designing effective vocabulary instruction for ELLs. Important issues are determining which words to teach, taking into account the large deficits in second-language vocabulary of ELLs, and working with the limited time that is typically available for direct instruction in vocabulary.", "title": "" } ]
[ { "docid": "cb4f78047b92b773bc30509ca80438a4", "text": "In this article, we exploit the problem of annotating a large-scale image corpus by label propagation over noisily tagged web images. To annotate the images more accurately, we propose a novel kNN-sparse graph-based semi-supervised learning approach for harnessing the labeled and unlabeled data simultaneously. The sparse graph constructed by datum-wise one-vs-kNN sparse reconstructions of all samples can remove most of the semantically unrelated links among the data, and thus it is more robust and discriminative than the conventional graphs. Meanwhile, we apply the approximate k nearest neighbors to accelerate the sparse graph construction without loosing its effectiveness. More importantly, we propose an effective training label refinement strategy within this graph-based learning framework to handle the noise in the training labels, by bringing in a dual regularization for both the quantity and sparsity of the noise. We conduct extensive experiments on a real-world image database consisting of 55,615 Flickr images and noisily tagged training labels. The results demonstrate both the effectiveness and efficiency of the proposed approach and its capability to deal with the noise in the training labels.", "title": "" }, { "docid": "8c63ce71aaa0409372efeb3ea392394f", "text": "This paper describes the application of evolutionary fuzzy systems for subgroup discovery to a medical problem, the study on the type of patients who tend to visit the psychiatric emergency department in a given period of time of the day. In this problem, the objective is to characterise subgroups of patients according to their time of arrival at the emergency department. To solve this problem, several subgroup discovery algorithms have been applied to determine which of them obtains better results. The multiobjective evolutionary algorithm MESDIF for the extraction of fuzzy rules obtains better results and so it has been used to extract interesting information regarding the rate of admission to the psychiatric emergency department.", "title": "" }, { "docid": "c07a0053f43d9e1f98bb15d4af92a659", "text": "We present a zero-shot learning approach for text classification, predicting which natural language understanding domain can handle a given utterance. Our approach can predict domains at runtime that did not exist at training time. We achieve this extensibility by learning to project utterances and domains into the same embedding space while generating each domain-specific embedding from a set of attributes that characterize the domain. Our model is a neural network trained via ranking loss. We evaluate the performance of this zero-shot approach on a subset of a virtual assistant’s third-party domains and show the effectiveness of the technique on new domains not observed during training. We compare to generative baselines and show that our approach requires less storage and performs better on new domains.", "title": "" }, { "docid": "40ba65504518383b4ca2a6fabff261fe", "text": "Fig. 1. Noirot and Quennedey's original classification of insect exocrine glands, based on a rhinotermitid sternal gland. The asterisk indicates a subcuticular space. Abbreviations: C, cuticle; D, duct cells; G1, secretory cells class 1; G2, secretory cells class 2; G3, secretory cells class 3; S, campaniform sensilla (modified after Noirot and Quennedey, 1974). 
‘Describe the differences between endocrine and exocrine glands’, it sounds a typical exam question from a general biology course during our time at high school. Because of their secretory products being released to the outside world, exocrine glands definitely add flavour to our lives. Everybody is familiar with their secretions, from the salty and perhaps unpleasantly smelling secretions from mammalian sweat glands to the sweet exudates of the honey glands used by some caterpillars to attract ants, from the most painful venoms of bullet ants and scorpions to the precious wax that honeybees use to make their nest combs. Besides these functions, exocrine glands are especially known for the elaboration of a broad spectrum of pheromonal substances, and can also be involved in the production of antibiotics, lubricants, and digestive enzymes. Modern research in insect exocrinology started with the classical works of Charles Janet, who introduced a histological approach to the insect world (Billen and Wilson, 2007). The French school of insect anatomy remained strong since then, and the commonly used classification of insect exocrine glands generally follows the pioneer paper of Charles Noirot and Andr e Quennedey (1974). These authors were leading termite researchers using their extraordinary knowledge on termite glands to understand related phenomena, such as foraging and reproductive behaviour. They distinguish between class 1 with secretory cells adjoining directly to the cuticle, and class 3 with bicellular units made up of a large secretory cell and its accompanying duct cell that carries the secretion to the exterior (Fig. 1). The original classification included also class 2 secretory cells, but these are very rare and are only found in sternal and tergal glands of a cockroach and many termites (and also in the novel nasus gland described in this issue!). This classification became universally used, with the rather strange consequence that the vast majority of insect glands is illogically made up of class 1 and class 3 cells. In a follow-up paper, the uncommon class 2 cells were re-considered as oenocyte homologues (Noirot and Quennedey, 1991). Irrespectively of these objections, their 1974 pioneer paper is a cornerstone of modern works dealing with insect exocrine glands, as is also obvious in the majority of the papers in this special issue. This paper already received 545 citations at Web of Science and 588 at Google Scholar (both on 24 Aug 2015), so one can easily say that all researchers working on insect glands consider this work truly fundamental. Exocrine glands are organs of cardinal importance in all insects. The more common ones include mandibular and labial", "title": "" }, { "docid": "74ea9bde4e265dba15cf9911fce51ece", "text": "We consider a system aimed at improving the resolution of a conventional airborne radar, looking in the forward direction, by forming an end-fire synthetic array along the airplane line of flight. The system is designed to operate even in slant (non-horizontal) flight trajectories, and it allows imaging along the line of flight. By using the array theory, we analyze system geometry and ambiguity problems, and analytically evaluate the achievable resolution and the required pulse repetition frequency. Processing computational burden is also analyzed, and finally some simulation results are provided.", "title": "" }, { "docid": "7fbc78aead9d65201d921c828b6396cd", "text": "In developing a humanoid robot, there are two major objectives. 
One is developing a physical robot having body, hands, and feet resembling those of human beings and being able to similarly control them. The other is to develop a control system that works similarly to our brain, to feel, think, act, and learn like ours. In this article, an architecture of a control systemwith a brain-oriented logical structure for the second objective is proposed. The proposed system autonomously adapts to the environment and implements a clearly defined “consciousness” function, through which both habitual behavior and goaldirected behavior are realized. Consciousness is regarded as a function for effective adaptation at the system-level, based on matching and organizing the individual results of the underlying parallel-processing units. This consciousness is assumed to correspond to how our mind is “aware” when making our moment to moment decisions in our daily life. The binding problem and the basic causes of delay in Libet’s experiment are also explained by capturing awareness in this manner. The goal is set as an image in the system, and efficient actions toward achieving this goal are selected in the goaldirected behavior process. The system is designed as an artificial neural network and aims at achieving consistent and efficient system behavior, through the interaction of highly independent neural nodes. The proposed architecture is based on a two-level design. The first level, which we call the “basic-system,” is an artificial neural network system that realizes consciousness, habitual behavior and explains the binding problem. The second level, which we call the “extended-system,” is an artificial neural network system that realizes goal-directed behavior.", "title": "" }, { "docid": "290b56471b64e150e40211f7a51c1237", "text": "Industrial robots are flexible machines that can be equipped with various sensors and tools to perform complex tasks. However, current robot programming languages are reaching their limits. They are not flexible and powerful enough to master the challenges posed by the intended future application areas. In the research project SoftRobot, a consortium of science and industry partners developed a software architecture that enables object-oriented software development for industrial robot systems using general-purpose programming languages. The requirements of current and future applications of industrial robots have been analysed and are reflected in the developed architecture. In this paper, an overview is given about this architecture as well as the goals that guided its development. A special focus is put on the design of the object-oriented Robotics API, which serves as a framework for developing complex robotic applications. It allows specifying real-time critical operations of robots and tools, including advanced concepts like sensor-based motions and multi-robot synchronization. The power and usefulness of the architecture is illustrated by several application examples. Its extensibility and reusability is evaluated and a comparison to other robotics frameworks is drawn.", "title": "" }, { "docid": "1c60ddeb7e940992094cb8f3913e811a", "text": "In this paper, we address the scene segmentation task by capturing rich contextual dependencies based on the selfattention mechanism. Unlike previous works that capture contexts by multi-scale features fusion, we propose a Dual Attention Networks (DANet) to adaptively integrate local features with their global dependencies. 
Specifically, we append two types of attention modules on top of traditional dilated FCN, which model the semantic interdependencies in spatial and channel dimensions respectively. The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve feature representation which contributes to more precise segmentation results. We achieve new state-of-the-art segmentation performance on three challenging scene segmentation datasets, i.e., Cityscapes, PASCAL Context and COCO Stuff dataset. In particular, a Mean IoU score of 81.5% on Cityscapes test set is achieved without using coarse data. we make the code and trained models publicly available at https://github.com/junfu1115/DANet", "title": "" }, { "docid": "31d2e56c01f53c25c6c9bfcabe21fcbe", "text": "In this paper, we propose a novel computer vision-based fall detection system for monitoring an elderly person in a home care, assistive living application. Initially, a single camera covering the full view of the room environment is used for the video recording of an elderly person's daily activities for a certain time period. The recorded video is then manually segmented into short video clips containing normal postures, which are used to compose the normal dataset. We use the codebook background subtraction technique to extract the human body silhouettes from the video clips in the normal dataset and information from ellipse fitting and shape description, together with position information, is used to provide features to describe the extracted posture silhouettes. The features are collected and an online one class support vector machine (OCSVM) method is applied to find the region in feature space to distinguish normal daily postures and abnormal postures such as falls. The resultant OCSVM model can also be updated by using the online scheme to adapt to new emerging normal postures and certain rules are added to reduce false alarm rate and thereby improve fall detection performance. From the comprehensive experimental evaluations on datasets for 12 people, we confirm that our proposed person-specific fall detection system can achieve excellent fall detection performance with 100% fall detection rate and only 3% false detection rate with the optimally tuned parameters. This work is a semiunsupervised fall detection system from a system perspective because although an unsupervised-type algorithm (OCSVM) is applied, human intervention is needed for segmenting and selecting of video clips containing normal postures. As such, our research represents a step toward a complete unsupervised fall detection system.", "title": "" }, { "docid": "78744205cf17be3ee5a61d12e6a44180", "text": "Modeling of photovoltaic (PV) systems is essential for the designers of solar generation plants to do a yield analysis that accurately predicts the expected power output under changing environmental conditions. This paper presents a comparative analysis of PV module modeling methods based on the single-diode model with series and shunt resistances. Parameter estimation techniques within a modeling method are used to estimate the five unknown parameters in the single diode model. 
Two sets of estimated parameters were used to plot the I-V characteristics of two PV modules, i.e., SQ80 and KC200GT, for the different sets of modeling equations, which are classified into models 1 to 5 in this study. Each model is based on the different combinations of diode saturation current and photogenerated current plotted under varying irradiance and temperature. Modeling was done using MATLAB/Simulink software, and the results from each model were first verified for correctness against the results produced by their respective authors. Then, a comparison was made among the different models (models 1 to 5) with respect to experimentally measured and datasheet I-V curves. The resultant plots were used to draw conclusions on which combination of parameter estimation technique and modeling method best emulates the manufacturer-specified characteristics.", "title": "" }, { "docid": "519ca18e1450581eb3a7387568dce7cf", "text": "This paper illustrates the design of a process-compensated bias for asynchronous CML dividers for a low-power, high-performance LO divide chain operating at 4 GHz of input RF frequency. The divider chain provides division by 4, 8, 12, 16, 20, and 24. It provides a differential CML-level signal for the in-loop modulated transmitter, and 25% duty cycle non-overlapping rail-to-rail waveforms for the I/Q receiver for driving a passive mixer. Asynchronous dividers have been used to realize divide by 3 and 5 with 50% duty cycle, quadrature outputs. All the CML dividers use a process-compensated bias to compensate for load resistor variation and tail current variation using dual analog feedback loops. Fabricated in 180nm CMOS technology, the divider chain operates over the industrial temperature range (−40 to 90°C) and provides outputs in the 138–960 MHz range, consuming 2.2mA from a 1.8V regulated supply at the highest output frequency.", "title": "" }, { "docid": "36b232e486ee4c9885a51a1aefc8f12b", "text": "Graphics processing units (GPUs) are a powerful platform for building high-speed network traffic processing applications using low-cost hardware. Existing systems tap the massively parallel architecture of GPUs to speed up certain computationally intensive tasks, such as cryptographic operations and pattern matching. However, they still suffer from significant overheads due to critical-path operations that are still being carried out on the CPU, and redundant inter-device data transfers. In this paper we present GASPP, a programmable network traffic processing framework tailored to modern graphics processors. GASPP integrates optimized GPU-based implementations of a broad range of operations commonly used in network traffic processing applications, including the first purely GPU-based implementation of network flow tracking and TCP stream reassembly. GASPP also employs novel mechanisms for tackling control flow irregularities across SIMT threads, and sharing memory context between the network interface and the GPU. Our evaluation shows that GASPP can achieve multi-gigabit traffic forwarding rates even for computationally intensive and complex network operations such as stateful traffic classification, intrusion detection, and packet encryption.
Especially when consolidating multiple network applications on the same device, GASPP achieves up to 16.2× speedup compared to standalone GPU-based implementations of the same applications.", "title": "" }, { "docid": "12d564ad22b33ee38078f18a95ed670f", "text": "Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts focused on either designing more complicated triple scoring models, or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The constraints imposed indeed improve model interpretability, leading to a substantially increased structuring of the embedding space. Code and data are available at https://github.com/i ieir-km/ComplEx-NNE_AER.", "title": "" }, { "docid": "256376e1867ee923ff72d3376c3be918", "text": "Driven by recent vision and graphics applications such as image segmentation and object recognition, computing pixel-accurate saliency values to uniformly highlight foreground objects becomes increasingly important. In this paper, we propose a unified framework called pixelwise image saliency aggregating (PISA) various bottom-up cues and priors. It generates spatially coherent yet detail-preserving, pixel-accurate, and fine-grained saliency, and overcomes the limitations of previous methods, which use homogeneous superpixel based and color only treatment. PISA aggregates multiple saliency cues in a global context, such as complementary color and structure contrast measures, with their spatial priors in the image domain. The saliency confidence is further jointly modeled with a neighborhood consistence constraint into an energy minimization formulation, in which each pixel will be evaluated with multiple hypothetical saliency levels. Instead of using global discrete optimization methods, we employ the cost-volume filtering technique to solve our formulation, assigning the saliency levels smoothly while preserving the edge-aware structure details. In addition, a faster version of PISA is developed using a gradient-driven image subsampling strategy to greatly improve the runtime efficiency while keeping comparable detection accuracy. Extensive experiments on a number of public data sets suggest that PISA convincingly outperforms other state-of-the-art approaches. In addition, with this work, we also create a new data set containing 800 commodity images for evaluating saliency detection.", "title": "" }, { "docid": "9e359f0d7df4e35c934ce01bf5619622", "text": "This paper presents a computationally efficient machine-learned method for natural language response suggestion. 
Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.", "title": "" }, { "docid": "67ba6914f8d1a50b7da5024567bc5936", "text": "Abstract—Braille alphabet is an important tool that enables visually impaired individuals to have a comfortable life like those who have normal vision. For this reason, new applications related to the Braille alphabet are being developed. In this study, a new Refreshable Braille Display was developed to help visually impaired individuals learn the Braille alphabet easier. By means of this system, any text downloaded on a computer can be read by the visually impaired individual at that moment by feeling it by his/her hands. Through this electronic device, it was aimed to make learning the Braille alphabet easier for visually impaired individuals with whom the necessary tests were conducted.", "title": "" }, { "docid": "ae5bf888ce9a61981be60b9db6fc2d9c", "text": "Inverting the hash values by performing brute force computation is one of the latest security threats on password based authentication technique. New technologies are being developed for brute force computation and these increase the success rate of inversion attack. Honeyword base authentication protocol can successfully mitigate this threat by making password cracking detectable. However, the existing schemes have several limitations like Multiple System Vulnerability, Weak DoS Resistivity, Storage Overhead, etc. In this paper we have proposed a new honeyword generation approach, identified as Paired Distance Protocol (PDP) which overcomes almost all the drawbacks of previously proposed honeyword generation approaches. The comprehensive analysis shows that PDP not only attains a high detection rate of 97.23% but also reduces the storage cost to a great extent.", "title": "" }, { "docid": "03aec14861b2b1b4e6f091dc77913a5b", "text": "Taxonomy is indispensable in understanding natural language. A variety of large scale, usage-based, data-driven lexical taxonomies have been constructed in recent years. Hypernym-hyponym relationship, which is considered as the backbone of lexical taxonomies can not only be used to categorize the data but also enables generalization. In particular, we focus on one of the most prominent properties of the hypernym-hyponym relationship, namely, transitivity, which has a significant implication for many applications. We show that, unlike human crafted ontologies and taxonomies, transitivity does not always hold in data-driven lexical taxonomies. We introduce a supervised approach to detect whether transitivity holds for any given pair of hypernym-hyponym relationships. Besides solving the inferencing problem, we also use the transitivity to derive new hypernym-hyponym relationships for data-driven lexical taxonomies. We conduct extensive experiments to show the effectiveness of our approach.", "title": "" }, { "docid": "d284fff9eed5e5a332bb3cfc612a081a", "text": "This paper describes the NILC USP system that participated in SemEval-2013 Task 2: Sentiment Analysis in Twitter. 
Our system adopts a hybrid classification process that uses three classification approaches: rule-based, lexicon-based, and machine learning approaches. We suggest a pipeline architecture that extracts the best characteristics from each classifier. Our system achieved an F-score of 56.31% in the Twitter message-level subtask.", "title": "" }, { "docid": "3ff58e78ac9fe623e53743ad05248a30", "text": "Clock gating is an effective technique for minimizing dynamic power in sequential circuits. Applying clock-gating at gate-level not only saves time compared to implementing clock-gating in the RTL code but also saves power and can easily be automated in the synthesis process. This paper presents simulation results on various types of clock-gating at different hierarchical levels on a serial peripheral interface (SPI) design. In general, power savings of about 30% and a 36% reduction in toggle rate can be seen with different complex clock-gating methods with respect to no clock-gating in the design.", "title": "" } ]
scidocsrr
3be88697b9e1f17720d351238afc6a71
Addictive use of social networking sites can be explained by the interaction of Internet use expectancies, Internet literacy, and psychopathological symptoms
[ { "docid": "9325b8aefbdf9b28f71f891f9f82fd00", "text": "The aim of this study was to evaluate the extent to which gender and other factors predict the severity of online gaming addiction among Taiwanese adolescents. A total of 395 junior high school students were recruited for evaluation of their experiences playing online games. Severity of addiction, behavioral characteristics, number of stressors, and level of satisfaction with daily life were compared between males and females who had previously played online games. Multiple regression analysis was used to explore gender differences in the relationships between severity of online gaming addiction and a number of variables. This study found that subjects who had previously played online games were predominantly male. Gender differences were also found in the severity of online gaming addiction and motives for playing. Older age, lower self-esteem, and lower satisfaction with daily life were associated with more severe addiction among males, but not among females. Special strategies accounting for gender differences must be implemented to prevent adolescents with risk factors from becoming addicted to online gaming.", "title": "" }, { "docid": "20e10963c305ca422fb025cafc807301", "text": "The new psychological disorder of Internet addiction is fast accruing both popular and professional recognition. Past studies have indicated that some patterns of Internet use are associated with loneliness, shyness, anxiety, depression, and self-consciousness, but there appears to be little consensus about Internet addiction disorder. This exploratory study attempted to examine the potential influences of personality variables, such as shyness and locus of control, online experiences, and demographics on Internet addiction. Data were gathered from a convenient sample using a combination of online and offline methods. The respondents comprised 722 Internet users mostly from the Net-generation. Results indicated that the higher the tendency of one being addicted to the Internet, the shyer the person is, the less faith the person has, the firmer belief the person holds in the irresistible power of others, and the higher trust the person places on chance in determining his or her own course of life. People who are addicted to the Internet make intense and frequent use of it both in terms of days per week and in length of each session, especially for online communication via e-mail, ICQ, chat rooms, newsgroups, and online games. Furthermore, full-time students are more likely to be addicted to the Internet, as they are considered high-risk for problems because of free and unlimited access and flexible time schedules. Implications to help professionals and student affairs policy makers are addressed.", "title": "" } ]
[ { "docid": "bef86730221684b8e9236cb44179b502", "text": "secure software. In order to find the real-life issues, this case study was initiated to investigate whether the existing FDD can withstand requirements change and software security altogether. The case study was performed in controlled environment – in a course called Application Development—a four credit hours course at UTM. The course began by splitting up the class to seven software development groups and two groups were chosen to implement the existing process of FDD. After students were given an introduction to FDD, they started to adapt the processes to their proposed system. Then students were introduced to the basic concepts on how to make software systems secure. Though, they were still new to security and FDD, however, this study produced a lot of interest among the students. The students seemed to enjoy the challenge of creating secure system using FDD model.", "title": "" }, { "docid": "639c8142b14f0eed40b63c0fa7580597", "text": "The purpose of this study is to give an overlook and comparison of best known data warehouse architectures. Single-layer, two-layer, and three-layer architectures are structure-oriented one that are depending on the number of layers used by the architecture. In independent data marts architecture, bus, hub-and-spoke, centralized and distributed architectures, the main layers are differently combined. Listed data warehouse architectures are compared based on organizational structures, with its similarities and differences. The second comparison gives a look into information quality (consistency, completeness, accuracy) and system quality (integration, flexibility, scalability). Bus, hub-and-spoke and centralized data warehouse architectures got the highest scores in information and system quality assessment.", "title": "" }, { "docid": "5f4c9518ad93c7916010efcae888cefe", "text": "Honeypots and similar sorts of decoys represent only the most rudimentary uses of deception in protection of information systems. But because of their relative popularity and cultural interest, they have gained substantial attention in the research and commercial communities. In this paper we will introduce honeypots and similar sorts of decoys, discuss their historical use in defense of information systems, and describe some of their uses today. We will then go into a bit of the theory behind deceptions, discuss their limitations, and put them in the greater context of information protection. 1. Background and History Honeypots and other sorts of decoys are systems or components intended to cause malicious actors to attack the wrong targets. Along the way, they produce potentially useful information for defenders. 1.1 Deception fundamentals According to the American Heritage Dictionary of the English Language (1981): \"deception\" is defined as \"the act of deceit\" \"deceit\" is defined as \"deception\". Fundamentally, deception is about exploiting errors in cognitive systems for advantage. History shows that deception is achieved by systematically inducing and suppressing signals entering the target cognitive system. There have been many approaches to the identification of cognitive errors and methods for their exploitation, and some of these will be explored here. For more thorough coverage, see [68]. Honeypots and decoys achieve this by presenting targets that appear to be useful targets for attackers. 
To quote Jesus Torres, who worked on honeypots as part of his graduate degree at the Naval Postgraduate School: “For a honeypot to work, it needs to have some honey.” Honeypots work by providing something that appears to be desirable to the attacker. The attacker, in searching for the honey of interest, comes across the honeypot, and starts to taste of its wares. If they are appealing enough, the attacker spends significant time and effort getting at the honey provided. If the attacker has finite resources, the time spent going after the honeypot is time not spent going after other things the honeypot is intended to protect. If the attacker uses tools and techniques in attacking the honeypot, some aspects of those tools and techniques are revealed to the defender in the attack on the honeypot. Decoys, like the chaff used to cause information systems used in missiles to go after the wrong objective, induce some signals into the cognitive system of their target (the missile) that, if successful, cause the missile to go after the chaff instead of its real objective. While some readers might be confused for a moment about the relevance of military operations to normal civilian use of deceptions, this example is particularly useful because it shows how information systems are used to deceive other information systems, and it is an example in which only the induction of signals is applied. Of course, in tactical situations, the real object of the missile attack may also take other actions to suppress its own signals, and this makes the analogy even better suited for this use. Honeypots and decoys only induce signals; they do not suppress them. While other deceptions that suppress signals may be used in concert with honeypots and decoys, the remainder of this paper will focus on signal induction as a deceptive technique and shy away from signal suppression and combinations of signal suppression and induction. 1.2 Historical Deceptions Since long before 800 B.C., when Sun Tzu wrote \"The Art of War\" [28], deception has been key to success in warfare. Similarly, information protection as a field of study has been around for at least 4,000 years [41]. And long before humans documented the use of deceptions, even before humans existed, deception was common in nature. Just as baboons beat their chests, so did early humans, and of course who has not seen the films of Khrushchev at the United Nations beating his shoe on the table and stating “We will bury you!”. While this article is about deceptions involving computer systems, understanding cognitive issues in deception is fundamental to understanding any deception. 1.3 Cognitive Deception Background Many authors have examined facets of deception from both an experiential and cognitive perspective. Chuck Whitlock has built a large part of his career on identifying and demonstrating these sorts of deceptions. [12] His book includes detailed descriptions and examples of scores of common street deceptions. Fay Faron points out that most such confidence efforts are carried out as specific 'plays' and details the anatomy of a 'con' [30]. Bob Fellows [13] takes a detailed approach to how 'magic' and similar techniques exploit human fallibility and cognitive limits to deceive people. Thomas Gilovich [14] provides in-depth analysis of human reasoning fallibility by presenting evidence from psychological studies that demonstrate a number of human reasoning mechanisms resulting in erroneous conclusions. Charles K.
West [32] describes the steps in psychological and social distortion of information and provides detailed support for cognitive limits leading to deception. Al Seckel [15] provides about 100 excellent examples of various optical illusions, many of which work regardless of the knowledge of the observer, and some of which are defeated after the observer sees them only once. Donald D. Hoffman [36] expands this into a detailed examination of visual intelligence and how the brain processes visual information. It is particularly noteworthy that the visual cortex consumes a great deal of the total human brain space and that it has a great deal of effect on cognition. Deutsch [47] provides a series of demonstrations of interpretation and misinterpretation of audio information. First Karrass [33] and then Cialdini [34] have provided excellent summaries of negotiation strategies and the use of influence to gain advantage. Both also explain how to defend against influence tactics. Cialdini [34] provides a simple structure for influence and asserts that much of the effect of influence techniques is built in and occurs below the conscious level for most people. Robertson and Powers [31] have worked out a more detailed low-level theoretical model of cognition based on \"Perceptual Control Theory\" (PCT), but extensions to higher levels of cognition have been highly speculative to date. They define a set of levels of cognition in terms of their order in the control system, but beyond the lowest few levels they have inadequate basis for asserting that these are orders of complexity in the classic control-theoretical sense. Their higher-level analysis results have also not been shown to be realistic representations of human behaviors. David Lambert [2] provides an extensive collection of examples of deceptions and deceptive techniques mapped into a cognitive model intended for modeling deception in military situations. These are categorized into cognitive levels in Lambert's cognitive model. Charles Handy [37] discusses organizational structures and behaviors and the roles of power and influence within organizations. The National Research Council (NRC) [38] discusses models of human and organizational behavior and how automation has been applied in this area. The NRC report includes scores of examples of modeling techniques and details of simulation implementations based on those models and their applicability to current and future needs. Greene [46] describes the 48 laws of power and, along the way, demonstrates 48 methods that exert compliance forces in an organization. These can be traced to cognitive influences and mapped out using models like Lambert's, Cialdini's, and the one we describe later in this paper. Closely related to the subject of deception is the work done by the CIA on the MKULTRA project. [52] A good summary of some of the pre-1990 results on psychological aspects of self-deception is provided in Heuer's CIA book on the psychology of intelligence analysis. [49] Heuer goes one step further in trying to start assessing ways to counter deception, and concludes that intelligence analysts can make improvements in their presentation and analysis process.
Several other papers on deception detection have been written and substantially summarized in Vrij's book on the subject.[50] All of these books and papers are summarized in more detail in “A Framework for Deception” [68] which provides much of the basis for the historical issues in this paper as well as other related issues in deception not limited to honeypots, decoys, and signal induction deceptions. In addition, most of the computer deception background presented next is derived from this paper. 1.4 Computer Deception Background The most common example of a computer security mechanism based on deception is the response to attempted logins on most modern computer systems. When a user first attempts to access a system, they are asked for a user identification (UID) and password. Regardless of whether the cause of a failed access attempt was the result of a nonexistent UID or an invalid password for that UID, a failed attempt is met with the same message. In text based access methods, the UID is typically requested first and, even if no such UID exists in the system, a password is requested. Clearly, in such systems, the computer can identify that no such UID exists without asking for a password. And yet these systems intentionally suppress the information that no such UID exist and induce a message designed to indicate that the UID does exist. In earlier systems where this was not done, attackers exploited the result so as to gain additional information about which UIDs were on the system and this dramatically reduced their difficulty in attack. This is a very widely accepted practice, and when presented as a deception, many people who otherwise object to deceptions in computer systems indicate that this somehow doesn’t count as a d", "title": "" }, { "docid": "02c1c424e4511219cc2e857a3c39de32", "text": "We propose a unified architecture for next generation cognitive, low cost, mobile internet. The end user platform is able to scale as per the application and network requirements. It takes computing out of the data center and into end user platform. Internet enables open standards, accessible computing and applications programmability on a commodity platform. The architecture is a super-set to present day infrastructure web computing. The Java virtual machine (JVM) derives from the stack architecture. Applications can be developed and deployed on a multitude of host platforms. O(1)→ O(N). Computing and the internet today are more accessible and available to the larger community. Machine learning has made extensive advances with the availability of modern computing. It is used widely in NLP, Computer Vision, Deep learning and AI. A prototype device for mobile could contain N compute and N MB of memory. Keywords— mobile, server, internet", "title": "" }, { "docid": "1474c61cba04ac391079082d175c5532", "text": "With an increasing understanding of the aging process and the rapidly growing interest in minimally invasive treatments, injectable facial fillers have changed the perspective for the treatment and rejuvenation of the aging face. Other than autologous fat and certain preformed implants, the collagen family products were the only Food and Drug Administration approved soft tissue fillers. But the overwhelming interest in soft tissue fillers had led to the increase in research and development of other products including bioengineered nonpermanent implants and permanent alloplastic implants. 
As multiple injectable soft tissue fillers and biostimulators are continuously becoming available, it is important to understand the biophysical properties inherent in each, as these constitute the clinical characteristics of the product. This article will review the materials and properties of the currently available soft tissue fillers: hyaluronic acid, calcium hydroxylapatite, poly-l-lactic acid, polymethylmethacrylate, and autologous fat (and aspirated tissue including stem cells).", "title": "" }, { "docid": "bf9e828c9e3ee8d64d387cd518fb6b2d", "text": "As smartphone penetration saturates, we are witnessing a new trend in personal mobile devices—wearable mobile devices or simply wearables as it is often called. Wearables come in many different forms and flavors targeting different accessories and clothing that people wear. Although small in size, they are often expected to continuously sense, collect, and upload various physiological data to improve quality of life. These requirements put significant demand on improving communication security and reducing power consumption of the system, fueling new research in these areas. In this paper, we first provide a comprehensive survey and classification of commercially available wearables and research prototypes. We then examine the communication security issues facing the popular wearables followed by a survey of solutions studied in the literature. We also categorize and explain the techniques for improving the power efficiency of wearables. Next, we survey the research literature in wearable computing. We conclude with future directions in wearable market and research.", "title": "" }, { "docid": "0e0b0b6b0fdab06fa9d3ebf6a8aefd6b", "text": "Hippocampal place fields have been shown to reflect behaviorally relevant aspects of space. For instance, place fields tend to be skewed along commonly traveled directions, they cluster around rewarded locations, and they are constrained by the geometric structure of the environment. We hypothesize a set of design principles for the hippocampal cognitive map that explain how place fields represent space in a way that facilitates navigation and reinforcement learning. In particular, we suggest that place fields encode not just information about the current location, but also predictions about future locations under the current transition distribution. Under this model, a variety of place field phenomena arise naturally from the structure of rewards, barriers, and directional biases as reflected in the transition policy. Furthermore, we demonstrate that this representation of space can support efficient reinforcement learning. We also propose that grid cells compute the eigendecomposition of place fields in part because is useful for segmenting an enclosure along natural boundaries. When applied recursively, this segmentation can be used to discover a hierarchical decomposition of space. Thus, grid cells might be involved in computing subgoals for hierarchical reinforcement learning.", "title": "" }, { "docid": "985e8fae88a81a2eec2ca9cc73740a0f", "text": "Negative symptoms account for much of the functional disability associated with schizophrenia and often persist despite pharmacological treatment. Cognitive behavioral therapy (CBT) is a promising adjunctive psychotherapy for negative symptoms. 
The treatment is based on a cognitive formulation in which negative symptoms arise and are maintained by dysfunctional beliefs that are a reaction to the neurocognitive impairment and discouraging life events frequently experienced by individuals with schizophrenia. This article outlines recent innovations in tailoring CBT for negative symptoms and functioning, including the use of a strong goal-oriented recovery approach, in-session exercises designed to disconfirm dysfunctional beliefs, and adaptations to circumvent neurocognitive and engagement difficulties. A case illustration is provided.", "title": "" }, { "docid": "37a0c6ac688c7d7f2dd622ebbe3ec184", "text": "Prior research shows that directly applying phrase-based SMT on lexical tokens to migrate Java to C# produces much semantically incorrect code. A key limitation is the use of sequences in phrase-based SMT to model and translate source code with well-formed structures. We propose mppSMT, a divide-and-conquer technique to address that with novel training and migration algorithms using phrase-based SMT in three phases. First, mppSMT treats a program as a sequence of syntactic units and maps/translates such sequences in two languages to one another. Second, with a syntax-directed fashion, it deals with the tokens within syntactic units by encoding them with semantic symbols to represent their data and token types. This encoding via semantic symbols helps better migration of API usages. Third, the lexical tokens corresponding to each sememe are mapped or migrated. The resulting sequences of tokens are merged together to form the final migrated code. Such divide-and-conquer and syntax-direction strategies enable phrase-based SMT to adapt well to syntactical structures in source code, thus, improving migration accuracy. Our empirical evaluation on several real-world systems shows that 84.8 -- 97.9% and 70 -- 83% of the migrated methods are syntactically and semantically correct, respectively. 26.3 -- 51.2% of total migrated methods are exactly matched to the human-written C# code in the oracle. Compared to Java2CSharp, a rule-based migration tool, it achieves higher semantic accuracy from 6.6 -- 57.7% relatively. Importantly, it does not require manual labeling for training data or manual definition of rules.", "title": "" }, { "docid": "1debcbf981ae6115efcc4a853cd32bab", "text": "Vision and language understanding has emerged as a subject undergoing intense study in Artificial Intelligence. Among many tasks in this line of research, visual question answering (VQA) has been one of the most successful ones, where the goal is to learn a model that understands visual content at region-level details and finds their associations with pairs of questions and answers in the natural language form. Despite the rapid progress in the past few years, most existing work in VQA have focused primarily on images. In this paper, we focus on extending VQA to the video domain and contribute to the literature in three important ways. First, we propose three new tasks designed specifically for video VQA, which require spatio-temporal reasoning from videos to answer questions correctly. Next, we introduce a new large-scale dataset for video VQA named TGIF-QA that extends existing VQA work with our new tasks. 
Finally, we propose a dual-LSTM based approach with both spatial and temporal attention, and show its effectiveness over conventional VQA techniques through empirical evaluations.", "title": "" }, { "docid": "4adfa3026fbfceca68a02ee811d8a302", "text": "Designing a new domain specific language is as any other complex task sometimes error-prone and usually time consuming, especially if the language shall be of high-quality and comfortably usable. Existing tool support focuses on the simplification of technical aspects but lacks support for an enforcement of principles for a good language design. In this paper we investigate guidelines that are useful for designing domain specific languages, largely based on our experience in developing languages as well as relying on existing guidelines on general purpose (GPLs) and modeling languages. We defined guidelines to support a DSL developer to achieve better quality of the language design and a better acceptance among its users.", "title": "" }, { "docid": "9d95535e6aee8acb6a613211223c3341", "text": "We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation. This model allows us to generate new molecules for efficient exploration and optimization through open-ended spaces of chemical compounds. A deep neural network was trained on hundreds of thousands of existing chemical structures to construct three coupled functions: an encoder, a decoder, and a predictor. The encoder converts the discrete representation of a molecule into a real-valued continuous vector, and the decoder converts these continuous vectors back to discrete molecular representations. The predictor estimates chemical properties from the latent continuous vector representation of the molecule. Continuous representations of molecules allow us to automatically generate novel chemical structures by performing simple operations in the latent space, such as decoding random vectors, perturbing known chemical structures, or interpolating between molecules. Continuous representations also allow the use of powerful gradient-based optimization to efficiently guide the search for optimized functional compounds. We demonstrate our method in the domain of drug-like molecules and also in a set of molecules with fewer that nine heavy atoms.", "title": "" }, { "docid": "b7ec7f1c2cef561a979dae311322dd39", "text": "We envision that the physical architectural space we inhabit will be a new form of interface between humans and digital information. This paper and video present the design of the ambientROOM, an interface to information for processing in the background of awareness. This information is displayed through various subtle displays of light, sound, and movement. Physical objects are also employed as controls for these “ambient media.”", "title": "" }, { "docid": "984dba43888e7a3572d16760eba6e9a5", "text": "This study developed an integrated model to explore the antecedents and consequences of online word-of-mouth in the context of music-related communication. Based on survey data from college students, online word-of-mouth was measured with two components: online opinion leadership and online opinion seeking. The results identified innovativeness, Internet usage, and Internet social connection as significant predictors of online word-of-mouth, and online forwarding and online chatting as behavioral consequences of online word-of-mouth. 
Contrary to the original hypothesis, music involvement was found not to be significantly related to online word-of-mouth. Theoretical implications of the findings and future research directions are discussed.", "title": "" }, { "docid": "3f5aa023f0cda7e56c0004e57a8b60e3", "text": "The contribution of this paper is two-fold. First, a connection is established between approximating the size of the largest clique in a graph and multi-prover interactive proofs. Second, an efficient multi-prover interactive proof for NP languages is constructed, where the verifier uses very few random bits and communication bits. Last, the connection between cliques and efficient multi-prover interaction proofs, is shown to yield hardness results on the complexity of approximating the size of the largest clique in a graph.\nOf independent interest is our proof of correctness for the multilinearity test of functions.", "title": "" }, { "docid": "c40168c28ca6ae6174ede1046eb2ec8c", "text": "This paper proposes a wide pulse combined with a narrow-pulse generator for solid-food sterilization. The proposed generator is composed of a full-bridge converter in phase-shift control to generate a high dc-link voltage and a full-bridge inverter associated with an L-C network and a transformer to generate wide pulses combined with narrow pulses. These combined pulses can prevent undesired strong air arcing in free space, reduce power consumption, and save power components, while sterilizing food effectively. The converter and inverter can be operated at high frequencies and with pulse width-modulation control; thus, its weight and size can be reduced significantly, and its efficiency can correspondingly be improved. Experimental results obtained from a prototype with plusmn10-kV wide pulses combined with plusmn10-kV narrow pulses and with 10- to 50-kW peak output power, depending on pulsewidth of the output pulses, have demonstrated its feasibility.", "title": "" }, { "docid": "8a4ff0af844823400d1ce707fd57e16f", "text": "In this work, we propose a new language modeling paradigm that has the ability to perform both prediction and moderation of information flow at multiple granularities: neural lattice language models. These models construct a lattice of possible paths through a sentence and marginalize across this lattice to calculate sequence probabilities or optimize parameters. This approach allows us to seamlessly incorporate linguistic intuitions — including polysemy and the existence of multiword lexical items — into our language model. Experiments on multiple language modeling tasks show that English neural lattice language models that utilize polysemous embeddings are able to improve perplexity by 9.95% relative to a word-level baseline, and that a Chinese model that handles multi-character tokens is able to improve perplexity by 20.94% relative to a character-level baseline.", "title": "" }, { "docid": "b2db53f203f2b168ec99bd8e544ff533", "text": "BACKGROUND\nThis study aimed to analyze the scientific outputs of esophageal and esophagogastric junction (EGJ) cancer and construct a model to quantitatively and qualitatively evaluate pertinent publications from the past decade.\n\n\nMETHODS\nPublications from 2007 to 2016 were retrieved from the Web of Science Core Collection database. 
Microsoft Excel 2016 (Redmond, WA) and the CiteSpace (Drexel University, Philadelphia, PA) software were used to analyze publication outcomes, journals, countries, institutions, authors, research areas, and research frontiers.\n\n\nRESULTS\nA total of 12,978 publications on esophageal and EGJ cancer were identified published until March 23, 2017. The Journal of Clinical Oncology had the largest number of publications, the USA was the leading country, and the University of Texas MD Anderson Cancer Center was the leading institution. Ajani JA published the most papers, and Jemal A had the highest co-citation counts. Esophageal squamous cell carcinoma ranked the first in research hotspots, and preoperative chemotherapy/chemoradiotherapy ranked the first in research frontiers.\n\n\nCONCLUSION\nThe annual number of publications steadily increased in the past decade. A considerable number of papers were published in journals with high impact factor. Many Chinese institutions engaged in esophageal and EGJ cancer research but significant collaborations among them were not noted. Jemal A, Van Hagen P, Cunningham D, and Enzinger PC were identified as good candidates for research collaboration. Neoadjuvant therapy and genome-wide association study in esophageal and EGJ cancer research should be closely observed.", "title": "" }, { "docid": "4110d0601a31430dd5d415fea453ae43", "text": "With the fast development of mobile Internet, Internet of Things (IoT) has been found in many important applications recently. However, it still faces many challenges in security and privacy. Blockchain (BC) technology, which underpins the cryptocurrency Bitcoin, has played an important role in the development of decentralized and data intensive applications running on millions of devices. In this paper, to establish the relationship between IoT and BC for device credibility verification, we propose a framework with layers, intersect, and self-organization Blockchain Structures (BCS). In this new framework, each BCS is organized by Blockchain technology. We describe the credibility verification method and show how it provide the verification. The efficiency and security analysis are also given in this paper, including its response time, storage efficiency, and verification. The conducted experiments have been shown to demonstrate the validity of the proposed method in satisfying the credible requirement achieved by Blockchain technology and certain advantages in storage space and response time.", "title": "" }, { "docid": "b576ffcda7637e3c2e45194ab16f8c26", "text": "This paper presents an asynchronous pipelined all-digital 10-b time-to-digital converter (TDC) with fine resolution, good linearity, and high throughput. Using a 1.5-b/stage pipeline architecture, an on-chip digital background calibration is implemented to correct residue subtraction error in the seven MSB stages. An asynchronous clocking scheme realizes pipeline operation for higher throughput. The TDC was implemented in standard 0.13-μm CMOS technology and has a maximum throughput of 300 MS/s and a resolution of 1.76 ps with a total conversion range of 1.8 ns. The measured DNL and INL were 0.6 LSB and 1.9 LSB, respectively.", "title": "" } ]
scidocsrr
968058449c28baf1c6060e88d9e49636
Dynamics of facial expression: recognition of facial actions and their temporal segments from face profile image sequences
[ { "docid": "69fd3e6e9a1fc407d20b0fb19fc536e3", "text": "In the last decade, the research topic of automatic analysis of facial expressions has become a central topic in machine vision research. Nonetheless, there is a glaring lack of a comprehensive, readily accessible reference set of face images that could be used as a basis for benchmarks for efforts in the field. This lack of easily accessible, suitable, common testing resource forms the major impediment to comparing and extending the issues concerned with automatic facial expression analysis. In this paper, we discuss a number of issues that make the problem of creating a benchmark facial expression database difficult. We then present the MMI facial expression database, which includes more than 1500 samples of both static images and image sequences of faces in frontal and in profile view displaying various expressions of emotion, single and multiple facial muscle activation. It has been built as a Web-based direct-manipulation application, allowing easy access and easy search of the available images. This database represents the most comprehensive reference set of images for studies on facial expression analysis to date.", "title": "" } ]
[ { "docid": "bbdf68b20aed9801ece9dc2adaa46ba5", "text": "Coflow is a collection of parallel flows, while a job consists of a set of coflows. A job is completed if all of the flows completes in the coflows. Therefore, the completion time of a job is affected by the latest flows in the coflows. To guarantee the job completion time and service performance, the job deadline and the dependency of coflows needs to be considered in the scheduling process. However, most existing methods ignore the dependency of coflows which is important to guarantee the job completion. In this paper, we take the dependency of coflows into consideration. To guarantee job completion for performance, we formulate a deadline and dependency-based model called MTF scheduler model. The purpose of MTF model is to minimize the overall completion time with the constraints of deadline and network capacity. Accordingly, we propose our method to schedule dependent coflows. Especially, we consider the dependent coflows as an entirety and propose a valuable coflow scheduling first MTF algorithm. We conduct extensive simulations to evaluate MTF method which outperforms the conventional short job first method as well as guarantees the job deadline.", "title": "" }, { "docid": "9ca208c420c5c9e4592bf86f1245056d", "text": "No one had given Muhammad Ali a chance against George Foreman in the World Heavyweight Championship Žght of October 30, 1974. Foreman, none of whose opponents had lasted more than three rounds in the ring, was the strongest, hardest hitting boxer of his generation. Ali, though not as powerful as Foreman, had a slightly faster punch and was lighter on his feet. In the weeks leading up to the Žght, however, Foreman had practiced against nimble sparring partners. He was ready. But when the bell rang just after 4:00 a.m. in Kinshasa, something completely unexpected happened. In round two, instead of moving into the ring to meet Foreman, Ali appeared to cower against the ropes. Foreman, now conŽdent of victory, pounded him again and again, while Ali whispered hoarse taunts: “George, you’re not hittin’,” “George, you disappoint me.” Foreman lost his temper, and his punches became a furious blur. To spectators, unaware that the elastic ring ropes were absorbing much of the force of Foreman’s blows, it looked as if Ali would surely fall. By the Žfth round, however, Foreman was worn out. And in round eight, as stunned commentators and a delirious crowd looked on, Muhammad Ali knocked George Foreman to the canvas, and the Žght was over. The outcome of that now-famous “rumble in the jungle” was completely unexpected. The two Žghters were equally motivated to win: Both had boasted of victory, and both had enormous egos. Yet in the end, a Žght that should have been over in three rounds went eight, and Foreman’s prodigious punches proved useless against Ali’s rope-a-dope strategy. This Žght illustrates an important yet relatively unexplored feature of interstate conict: how a weak actor’s strategy can make a strong actor’s power irHow the Weak Win Wars Ivan Arreguín-Toft", "title": "" }, { "docid": "9acc03449f1b51188257b7e05c561c2a", "text": "When neural networks process images which do not resemble the distribution seen during training, so called out-of-distribution images, they often make wrong predictions, and do so too confidently. The capability to detect out-of-distribution images is therefore crucial for many real-world applications. 
We divide out-of-distribution detection between novelty detection —images of classes which are not in the training set but are related to those—, and anomaly detection —images with classes which are unrelated to the training set. By related we mean they contain the same type of objects, like digits in MNIST and SVHN. Most existing work has focused on anomaly detection, and has addressed this problem considering networks trained with the cross-entropy loss. Differently from them, we propose to use metric learning which does not have the drawback of the softmax layer (inherent to cross-entropy methods), which forces the network to divide its prediction power over the learned classes. We perform extensive experiments and evaluate both novelty and anomaly detection, even in a relevant application such as traffic sign recognition, obtaining comparable or better results than previous works.", "title": "" }, { "docid": "5c04f381c2b3de1377e1988b4fb64ecd", "text": "The study of bullying behavior and its consequences for young people depends on valid and reliable measurement of bullying victimization and perpetration. Although numerous self-report bullying-related measures have been developed, robust evidence of their psychometric properties is scant, and several limitations inhibit their applicability. The Forms of Bullying Scale (FBS), with versions to measure bullying victimization (FBS-V) and perpetration (FBS-P), was developed on the basis of existing instruments, for use with 12- to 15-year-old adolescents to economically, yet comprehensively measure both bullying perpetration and victimization. Measurement properties were estimated. Scale validity was tested using data from 2 independent studies of 3,496 Grade 8 and 783 Grade 8-10 students, respectively. Construct validity of scores on the FBS was shown in confirmatory factor analysis. The factor structure was not invariant across gender. Strong associations between the FBS-V and FBS-P and separate single-item bullying items demonstrated adequate concurrent validity. Correlations, in directions as expected with social-emotional outcomes (i.e., depression, anxiety, conduct problems, and peer support), provided robust evidence of convergent and discriminant validity. Responses to the FBS items were found to be valid and concurrently reliable measures of self-reported frequency of bullying victimization and perpetration, as well as being useful to measure involvement in the different forms of bullying behaviors. (PsycINFO Database Record (c) 2013 APA, all rights reserved).", "title": "" }, { "docid": "627587e2503a2555846efb5f0bca833b", "text": "Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. Recent work has shown that self-attention is an effective way of modeling textual sequences. In this work, we generalize a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood. By restricting the selfattention mechanism to attend to local neighborhoods we significantly increase the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks. While conceptually simple, our generative models significantly outperform the current state of the art in image generation on ImageNet, improving the best published negative log-likelihood on ImageNet from 3.83 to 3.77. 
We also present results on image super-resolution with a large magnification ratio, applying an encoder-decoder configuration of our architecture. In a human evaluation study, we find that images generated by our super-resolution model fool human observers three times more often than the previous state of the art.", "title": "" }, { "docid": "98e9dff9ba946dc1ea6d50b1271a0685", "text": "OBJECTIVES\nTo evaluate the effect of Carbopol gel formulations containing pilocarpine on the morphology and morphometry of the vaginal epithelium of castrated rats.\n\n\nMETHODS\nThirty-one female Wistar-Hannover rats were randomly divided into four groups: the control Groups I (n=7, rats in persistent estrus; positive controls) and II (n=7, castrated rats, negative controls) and the experimental Groups, III (n=8) and IV (n=9). Persistent estrus (Group I) was achieved with a subcutaneous injection of testosterone propionate on the second postnatal day. At 90 days postnatal, rats in Groups II, III and IV were castrated and treated vaginally for 14 days with Carbopol gel (vehicle alone) or Carbopol gel containing 5% and 15% pilocarpine, respectively. Next, all of the animals were euthanized and their vaginas were removed for histological evaluation. A non-parametric test with a weighted linear regression model was used for data analysis (p<0.05).\n\n\nRESULTS\nThe morphological evaluation showed maturation of the vaginal epithelium with keratinization in Group I, whereas signs of vaginal atrophy were present in the rats of the other groups. Morphometric examinations showed mean thickness values of the vaginal epithelium of 195.10±12.23 μm, 30.90±1.14 μm, 28.16±2.98 μm and 29.84±2.30 μm in Groups I, II, III and IV, respectively, with statistically significant differences between Group I and the other three groups (p<0.0001) and no differences between Groups II, III and IV (p=0.0809).\n\n\nCONCLUSION\nTopical gel formulations containing pilocarpine had no effect on atrophy of the vaginal epithelium in the castrated female rats.", "title": "" }, { "docid": "23615c8affc64304b2dab6b5d7e9b77b", "text": "Softmax loss is widely used in deep neural networks for multi-class classification, where each class is represented by a weight vector, a sample is represented as a feature vector, and the feature vector has the largest projection on the weight vector of the correct category when the model correctly classifies a sample. To ensure generalization, weight decay that shrinks the weight norm is often used as regularizer. Different from traditional learning algorithms where features are fixed and only weights are tunable, features are also tunable as representation learning in deep learning. Thus, we propose feature incay to also regularize representation learning, which favors feature vectors with large norm when the samples can be correctly classified. With the feature incay, feature vectors are further pushed away from the origin along the direction of their corresponding weight vectors, which achieves better inter-class separability. In addition, the proposed feature incay encourages intra-class compactness along the directions of weight vectors by increasing the small feature norm faster than the large ones. Empirical results on MNIST, CIFAR10 and CIFAR100 demonstrate feature incay can improve the generalization ability.", "title": "" }, { "docid": "831836deb75aacb54513004daa92e1bf", "text": "Jean Watson introduced The Theory of Human Caring over thirty years ago to the nursing profession. 
In the theory it is stated that caring is the essence of nursing and that professional nurses have an obligation to provide the best environment for healing to take place. The theory’s carative factors outlines principles and ideas that should be used by the professional nurse to create the best environment for healing of the patient and of the nurse. This paper will describe and critique Jean Watson’s Theory of Human Caring and discuss how this model has influenced nursing practice. Reflections on Jean Watson's Theory of Human Caring Florence Nightingale helped define the role of the nurse over one hundred and fifty years ago. Even so, nursing has struggled to find an identity apart from medicine. For years nursing theorists have examined how nursing is unique from medicine. While it was obvious that nursing was a different art than medicine, there was not any scholarly work to illustrate the difference. During the 1950’s nursing began building a body of knowledge, which interpreted and conceptualized the intricacies of nursing. Over the next several decades, nurse theorists rapidly grew the discipline’s foundation. One of the concepts that emerged was nursing as caring. Several theorists have identified caring as being central to nursing; however, Watson’s Theory of Human Caring offers a unique perspective. The theory blends the beliefs and ideas from Eastern and Western cultures to create a spiritual philosophy that can be used throughout nursing practice. This paper will describe and critique Jean Watson’s Theory of Human Caring and discuss how this model has influenced nursing practice. Introduction to the Theory The Theory of Human Caring evolved from Jean Watson’s own desire to develop a deeper understanding of the meaning of humanity and life. She was also greatly influenced by her background in philosophy, psychology and nursing science. Watson’s first book Nursing: The Philosophy and Science of Caring (1979) was developed to bring a “new meaning and dignity” to nursing care (Watson, 2008). The first book introduced carative factors, which are the foundation of Watson’s Theory of Human Caring. The carative factors offered a holistic perspective to caring for a patient, juxtaposed to the reductionist, biophysical model that was prevalent at the time. Watson believed that without incorporating the carative factors, a nurse was only performing tasks when treating a patient and not offering professional nursing care (Watson, 2008). In Watson’s second book, Nursing: Human Science and Human Care, A Theory of Nursing (1985), she discusses the philosophical and spiritual components of the Theory of Human Caring, as well as expands upon the definition and implications of the transpersonal moment. The second book redefines caring as a combination of scientific actions, consciousness and intentionality, as well as defines the transcendental phenomenology of a transpersonal caring occasion and expands upon the idea of human-to-human connection. Watson’s third book, Postmodern Nursing and Beyond (1999), focuses on the evolution of the consciousness of the clinician. The third book reinforces the ideas of the first two books and further evolves several concepts to include the spiritual realm, the energetic realm, the interconnectedness to all things and the higher power.
The philosophy behind each book and the Theory of Human Caring is that all human beings are connected to each other and to a divine spirit or higher power. Furthermore, each interaction between human beings, but specifically between nurses and patients, should be entered into with the intention of connecting with the patient’s spirit or higher source. Each moment or each act can and should not only facilitate healing in the patient and the nurse, but also transcend both space and time. The components of Watson’s theories include the 10 carative factors, the caritas process, the transpersonal caring relationship, caring moments and caring/healing modalities. Carative factors are the essential characteristics needed by the professional nurse to establish a therapeutic relationship and promote healing. Carative factors are the core of Watson’s philosophy and they are (i) formation of a humanistic-altruistic systems of values, (ii) instillation of faith-hope, (iii) cultivation of sensitivity to one’s self and to others, (iv) development of a helping-trusting human caring relationship, (v) promotion and acceptance of the expression of positive and negative feelings, (vi) systematic use of a creative problem solving and caring process, (vii) promotion of transpersonal teaching-learning, (viii) provision for supportive, protective, and/or corrective mental, physical, societal and spiritual environment, (ix) assistance with gratification of human needs and (x) allowance for existential-phenomenological-spiritual forces. Carative factors are intended to provide a foundation for the discipline of nursing that is developed from understanding and compassion. Watson’s caritas processes are the expansion of the original carative factors and are reflective of Watson’s own personal evolution. The caritas processes provide the tenets for a professional approach to caring, a means by which to practice caring in a spiritual and loving fashion. The transpersonal caring relationship is a relationship that goes beyond one’s self and creates a change in the energetic environment of the nurse and the patient. A transpersonal caring relationship allows for a relationship between the souls of the individuals and because of this authentic relationship, optimal caring and healing can take place (Watson, 1985). In the transpersonal relationship the caregiver is aware of his/her intention and performs care that is emanating from the heart. When intentionality is focused and delivered from the heart, unseen energetic fields can change and promote an environment for healing. When a nurse is more conscious of his or her self and surroundings, he or she acts from a place of love with each caring moment. Caring moments are any moments in which a nurse has an interaction with a patient or family and is using the carative factors or the caritas process. In order for a caring moment to occur the participation of the nurse and the patient is required. Practice based on the carative factors presents an opportunity for both the nurse and patient to engage in a transpersonal caring moment that benefits the mind, body and soul of each person. The caring/healing modalities are practices that enhance the ability of the care provider to engage in transpersonal relationship and caring moments. Caring/healing exercises can be as simple as centering, being attentive to touch or the communication of specific knowledge. 
The goal of using Watson’s principles in practice is to enhance the life and experience of the nurse and of the patient. Description of Theory Purpose The Theory of Human Caring was developed based on Watson’s desire to reestablish holistic practice in nursing care and move away from the cold and disconnected scientific model while infusing feeling and caring back into nursing practice (Watson, 2008). The purpose of the theory was to provide a philosophical-ethical foundation from which the nurse could provide care. The proposed benefit of this theory for both the nurse and the patient is that when each person reveals his or her authentic self and engages in interactions with another being, the energetic field around both of them will change and enhance the healing environment. The theory’s purpose is quite broad, promoting healing and oneness with the universe through caring. The positive impact of these practices is phenomenal and the beauty of the theory is that the caritas processes can be used to enhance any practice. When applied to nursing practice, the theory reestablishes Florence Nightingale’s vision that nursing is a spiritual calling. The deeper message within the theory is that being/relating to others from a place of love can transcend the planes and energetic fields of the universe and promote healing to one’s self and to", "title": "" }, { "docid": "6f05e76961d4ef5fc173bafd5578081f", "text": "Edmodo is simply a controlled online networking application that can be used by teachers and students to communicate and remain connected. This paper explores the experiences from a group of students who were using Edmodo platform in their course work. It attempts to use the SAMR (Substitution, Augmentation, Modification and Redefinition) framework of technology integration in education to access and evaluate technology use in the classroom. The respondents were a group of 62 university students from a Kenyan University whose lecturer had created an Edmodo account and introduced the students to participate in their course work during the September to December 2015 semester. More than 82% of the students found that they had a personal stake in the quality of work presented through the platforms and that they were able to take on different subtopics and collaborate to create one final product. This underscores the importance of Edmodo as an environment with skills already in the hands of the students that we can use to integrate technology in the classroom.", "title": "" }, { "docid": "d836f8b9c13ba744f39daa5887bed52e", "text": "Cerebral palsy is the most common cause of childhood-onset, lifelong physical disability in most countries, affecting about 1 in 500 neonates with an estimated prevalence of 17 million people worldwide. Cerebral palsy is not a disease entity in the traditional sense but a clinical description of children who share features of a non-progressive brain injury or lesion acquired during the antenatal, perinatal or early postnatal period. The clinical manifestations of cerebral palsy vary greatly in the type of movement disorder, the degree of functional ability and limitation and the affected parts of the body. There is currently no cure, but progress is being made in both the prevention and the amelioration of the brain injury. For example, administration of magnesium sulfate during premature labour and cooling of high-risk infants can reduce the rate and severity of cerebral palsy. 
Although the disorder affects individuals throughout their lifetime, most cerebral palsy research efforts and management strategies currently focus on the needs of children. Clinical management of children with cerebral palsy is directed towards maximizing function and participation in activities and minimizing the effects of the factors that can make the condition worse, such as epilepsy, feeding challenges, hip dislocation and scoliosis. These management strategies include enhancing neurological function during early development; managing medical co-morbidities, weakness and hypertonia; using rehabilitation technologies to enhance motor function; and preventing secondary musculoskeletal problems. Meeting the needs of people with cerebral palsy in resource-poor settings is particularly challenging.", "title": "" }, { "docid": "baad68c1adef7b72d78745fe03db0c57", "text": "In this paper, we propose a new visualization approach based on a Sensitivity Analysis (SA) to extract human understandable knowledge from supervised learning black box data mining models, such as Neural Networks (NNs), Support Vector Machines (SVMs) and ensembles, including Random Forests (RFs). Five SA methods (three of which are purely new) and four measures of input importance (one novel) are presented. Also, the SA approach is adapted to handle discrete variables and to aggregate multiple sensitivity responses. Moreover, several visualizations for the SA results are introduced, such as input pair importance color matrix and variable effect characteristic surface. A wide range of experiments was performed in order to test the SA methods and measures by fitting four well-known models (NN, SVM, RF and decision trees) to synthetic datasets (five regression and five classification tasks). In addition, the visualization capabilities of the SA are demonstrated using four real-world datasets (e.g., bank direct marketing and white wine quality).", "title": "" }, { "docid": "58de521ab563333c2051b590592501a8", "text": "Prognostics and systems health management (PHM) is an enabling discipline that uses sensors to assess the health of systems, diagnoses anomalous behavior, and predicts the remaining useful performance over the life of the asset. The advent of the Internet of Things (IoT) enables PHM to be applied to all types of assets across all sectors, thereby creating a paradigm shift that is opening up significant new business opportunities. This paper introduces the concepts of PHM and discusses the opportunities provided by the IoT. Developments are illustrated with examples of innovations from manufacturing, consumer products, and infrastructure. From this review, a number of challenges that result from the rapid adoption of IoT-based PHM are identified. These include appropriate analytics, security, IoT platforms, sensor energy harvesting, IoT business models, and licensing approaches.", "title": "" }, { "docid": "e0fc6fc1425bb5786847c3769c1ec943", "text": "Developing manufacturing simulation models usually requires experts with knowledge of multiple areas including manufacturing, modeling, and simulation software. The expertise requirements increase for virtual factory models that include representations of manufacturing at multiple resolution levels. 
This paper reports on an initial effort to automatically generate virtual factory models using manufacturing configuration data in standard formats as the primary input. The execution of the virtual factory generates time series data in standard formats mimicking a real factory. Steps are described for auto-generation of model components in a software environment primarily oriented for model development via a graphic user interface. Advantages and limitations of the approach and the software environment used are discussed. The paper concludes with a discussion of challenges in verification and validation of the virtual factory prototype model with its multiple hierarchical models and future directions.", "title": "" }, { "docid": "f6e90401ea52689801b164ef8167814c", "text": "In this paper, we develop novel, efficient 2D encodings for 3D geometry, which enable reconstructing full 3D shapes from a single image at high resolution. The key idea is to pose 3D shape reconstruction as a 2D prediction problem. To that end, we first develop a simple baseline network that predicts entire voxel tubes at each pixel of a reference view. By leveraging well-proven architectures for 2D pixel-prediction tasks, we attain state-of-the-art results, clearly outperforming purely voxel-based approaches. We scale this baseline to higher resolutions by proposing a memory-efficient shape encoding, which recursively decomposes a 3D shape into nested shape layers, similar to the pieces of a Matryoshka doll. This allows reconstructing highly detailed shapes with complex topology, as demonstrated in extensive experiments; we clearly outperform previous octree-based approaches despite having a much simpler architecture using standard network components. Our Matryoshka networks further enable reconstructing shapes from IDs or shape similarity, as well as shape sampling.", "title": "" }, { "docid": "f022871509e863f6379d76ba80afaa2f", "text": "Neuroeconomics seeks to gain a greater understanding of decision making by combining theoretical and methodological principles from the fields of psychology, economics, and neuroscience. Initial studies using this multidisciplinary approach have found evidence suggesting that the brain may be employing multiple levels of processing when making decisions, and this notion is consistent with dual-processing theories that have received extensive theoretical consideration in the field of cognitive psychology, with these theories arguing for the dissociation between automatic and controlled components of processing. While behavioral studies provide compelling support for the distinction between automatic and controlled processing in judgment and decision making, less is known if these components have a corresponding neural substrate, with some researchers arguing that there is no evidence suggesting a distinct neural basis. This chapter will discuss the behavioral evidence supporting the dissociation between automatic and controlled processing in decision making and review recent literature suggesting potential neural systems that may underlie these processes.", "title": "" }, { "docid": "a2f062482157efb491ca841cc68b7fd3", "text": "Coping with malware is getting more and more challenging, given their relentless growth in complexity and volume. One of the most common approaches in literature is using machine learning techniques, to automatically learn models and patterns behind such complexity, and to develop technologies to keep pace with malware evolution. 
This survey aims at providing an overview on the way machine learning has been used so far in the context of malware analysis in Windows environments, i.e. for the analysis of Portable Executables. We systematize surveyed papers according to their objectives (i.e., the expected output), what information about malware they specifically use (i.e., the features), and what machine learning techniques they employ (i.e., what algorithm is used to process the input and produce the output). We also outline a number of issues and challenges, including those concerning the used datasets, and identify the main current topical trends and how to possibly advance them. In particular, we introduce the novel concept of malware analysis economics, regarding the study of existing trade-offs among key metrics, such as analysis accuracy and economical costs.", "title": "" }, { "docid": "982ebb6c33a1675d3073896e3768212a", "text": "Morphometric analysis of nuclei play an essential role in cytological diagnostics. Cytological samples contain hundreds or thousands of nuclei that need to be examined for cancer. The process is tedious and time-consuming but can be automated. Unfortunately, segmentation of cytological samples is very challenging due to the complexity of cellular structures. To deal with this problem, we are proposing an approach, which combines convolutional neural network and ellipse fitting algorithm to segment nuclei in cytological images of breast cancer. Images are preprocessed by the colour deconvolution procedure to extract hematoxylin-stained objects (nuclei). Next, convolutional neural network is performing semantic segmentation of preprocessed image to extract nuclei silhouettes. To find the exact location of nuclei and to separate touching and overlapping nuclei, we approximate them using ellipses of various sizes and orientations. They are fitted using the Bayesian object recognition approach. The accuracy of the proposed approach is evaluated with the help of reference nuclei segmented manually. Tests carried out on breast cancer images have shown that the proposed method can accurately segment elliptic-shaped objects.", "title": "" }, { "docid": "bb74cbb76c6efb4a030d2c5653e18842", "text": "Two new wideband in-phase and out-of-phase balanced power dividing/combining networks are proposed in this paper. Based on matrix transformation, the differential-mode and common-mode equivalent circuits of the two wideband in-phase and out-of-phase networks can be easily deduced. A patterned ground-plane technique is used to realize the strong coupling of the shorted coupled lines for the differential mode. Two planar wideband in-phase and out-of-phase balanced networks with bandwidths of 55.3% and 64.4% for the differential mode with wideband common-mode suppression are designed and fabricated. The theoretical and measured results agree well with each other and show good in-band performances.", "title": "" }, { "docid": "1994ae6f7de73b30729f274e70e4899f", "text": "Being symmetric positive-definite (SPD), covariance matrix has traditionally been used to represent a set of local descriptors in visual recognition. Recent study shows that kernel matrix can give considerably better representation by modelling the nonlinearity in the local descriptor set. Nevertheless, neither the descriptors nor the kernel matrix is deeply learned. Worse, they are considered separately, hindering the pursuit of an optimal SPD representation. 
This work proposes a deep network that jointly learns local descriptors, kernel-matrix-based SPD representation, and the classifier via an end-to-end training process. We derive the derivatives for the mapping from a local descriptor set to the SPD representation to carry out backpropagation. Also, we exploit the Daleckǐi-Krěin formula in operator theory to give a concise and unified result on differentiating SPD matrix functions, including the matrix logarithm to handle the Riemannian geometry of kernel matrix. Experiments not only show the superiority of kernel-matrix-based SPD representation with deep local descriptors, but also verify the advantage of the proposed deep network in pursuing better SPD representations for fine-grained image recognition tasks.", "title": "" }, { "docid": "fee7fc5639a66e68666a58ecec8e88d1", "text": "Most previous studies assert the negative effect of loneliness on social life and an individual's well-being when individuals use the Internet. To expand this previous research tradition, the current study proposes a model to test whether loneliness has a direct or indirect effect on well-being when mediated by self-disclosure and social support. The results show that loneliness has a direct negative impact on well-being but a positive effect on self-disclosure. While self-disclosure positively influences social support, self-disclosure has no impact on well-being, and social support positively influences well-being. The results also show a full mediation effect of social support in the self-disclosure to well-being link. The results imply that even if lonely people's well-being is poor, their well-being can be enhanced through the use of SNSs, including self-presentation and social support from their friends.", "title": "" } ]
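One of the abstracts near the end of the list above (the kernel-matrix SPD representation for fine-grained recognition) builds a fixed-size image representation from a kernel matrix over deep local descriptors and uses the matrix logarithm to respect the Riemannian geometry of SPD matrices. The following is a minimal numpy/scipy sketch of that non-learned core only, assuming an RBF kernel computed between feature dimensions; the kernel choice, the gamma and ridge values, and all names are illustrative, and the end-to-end learning of descriptors and representation described in the abstract is not reproduced here.

```python
import numpy as np
from scipy.linalg import logm

def kernel_spd_representation(descriptors, gamma=0.5, ridge=1e-6):
    # descriptors: (n, d) array of n local descriptors with d dimensions.
    # An RBF kernel between feature *dimensions* gives a d x d SPD matrix,
    # so the representation size is independent of the number of descriptors.
    X = np.asarray(descriptors, dtype=float).T                    # (d, n)
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, -1)   # (d, d)
    K = np.exp(-gamma * sq_dists)
    return K + ridge * np.eye(K.shape[0])                         # keep strictly SPD

def log_euclidean_vector(K):
    # The matrix logarithm maps the SPD manifold to a flat space where
    # ordinary Euclidean classifiers can be applied.
    L = np.real(logm(K))
    return L[np.triu_indices_from(L)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(196, 64))        # e.g. a 14x14 spatial grid, 64 channels
    K = kernel_spd_representation(feats)
    v = log_euclidean_vector(K)
    print(K.shape, v.shape)                   # (64, 64) (2080,)
```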
scidocsrr
943b80cba2f5739940c34b988807349a
Apache REEF: Retainable Evaluator Execution Framework
[ { "docid": "47ac4b546fe75f2556a879d6188d4440", "text": "There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics.", "title": "" } ]
[ { "docid": "6ee26f725bfb63a6ff72069e48404e68", "text": "OBJECTIVE\nTo determine which routinely collected exercise test variables most strongly correlate with survival and to derive a fitness risk score that can be used to predict 10-year survival.\n\n\nPATIENTS AND METHODS\nThis was a retrospective cohort study of 58,020 adults aged 18 to 96 years who were free of established heart disease and were referred for an exercise stress test from January 1, 1991, through May 31, 2009. Demographic, clinical, exercise, and mortality data were collected on all patients as part of the Henry Ford ExercIse Testing (FIT) Project. Cox proportional hazards models were used to identify exercise test variables most predictive of survival. A \"FIT Treadmill Score\" was then derived from the β coefficients of the model with the highest survival discrimination.\n\n\nRESULTS\nThe median age of the 58,020 participants was 53 years (interquartile range, 45-62 years), and 28,201 (49%) were female. Over a median of 10 years (interquartile range, 8-14 years), 6456 patients (11%) died. After age and sex, peak metabolic equivalents of task and percentage of maximum predicted heart rate achieved were most highly predictive of survival (P<.001). Subsequent addition of baseline blood pressure and heart rate, change in vital signs, double product, and risk factor data did not further improve survival discrimination. The FIT Treadmill Score, calculated as [percentage of maximum predicted heart rate + 12(metabolic equivalents of task) - 4(age) + 43 if female], ranged from -200 to 200 across the cohort, was near normally distributed, and was found to be highly predictive of 10-year survival (Harrell C statistic, 0.811).\n\n\nCONCLUSION\nThe FIT Treadmill Score is easily attainable from any standard exercise test and translates basic treadmill performance measures into a fitness-related mortality risk score. The FIT Treadmill Score should be validated in external populations.", "title": "" }, { "docid": "88bd6fe890ed385ae60ace44ab71db3e", "text": "Background: While concerns about adverse health outcomes of unintended pregnancies for the mother have been expressed, there has only been limited research on the outcomes of unintended pregnancies. This review provides an overview of antecedents and maternal health outcomes of unintended pregnancies (UIPs) carried to term live", "title": "" }, { "docid": "ee9709e756c90f20506ebbddefaeb309", "text": "OBJECTIVES/HYPOTHESIS\nTo compare three existing endoscopic scoring systems and a newly proposed modified scoring system for the assessment of patients with chronic rhinosinusitis (CRS).\n\n\nSTUDY DESIGN\nBlinded, prospective cohort study.\n\n\nMETHODS\nCRS patients completed two patient-reported outcome measures (PROMs)-the visual analogue scale (VAS) symptom score and the Sino-Nasal Outcome Test-22 (SNOT-22)-and then underwent a standardized, recorded sinonasal endoscopy. Videos were scored by three blinded rhinologists using three scoring systems: the Lund-Kennedy (LK) endoscopic score; the Discharge, Inflammation, Polyp (DIP) score; and the Perioperative Sinonasal Endoscopic score. The videos were further scored using a modified Lund-Kennedy (MLK) endoscopic scoring system, which retains the LK subscores of polyps, edema, and discharge but eliminates the scoring of scarring and crusting. The systems were compared for test-retest and inter-rater reliability as well as for their correlation with PROMs.\n\n\nRESULTS\nOne hundred two CRS patients were enrolled. 
The MLK system showed the highest inter-rater and test-retest reliability of all scoring systems. All systems except for the DIP correlated with total VAS scores. The MLK was the only system that correlated with the symptom subscore of the SNOT-22 in both unoperated and postoperative patients.\n\n\nCONCLUSIONS\nModification of the LK system by excluding the subscores of scarring and crusting improves its reliability and its correlation with PROMs. In addition, the MLK system retains the familiarity of the widely used LK system and is applicable to any patient irrespective of surgical status. The MLK system may be a more suitable and reliable endoscopic scoring system for clinical practice and outcomes research.", "title": "" }, { "docid": "79cb7d3bbdb6ebedc3941e8f35897fc9", "text": "Occurrences of entrapment neuropathies of the lower extremity are relatively infrequent; therefore, these conditions may be underappreciated and difficult to diagnose. Understanding the anatomy of the peripheral nerves and their potential entrapment sites is essential. A detailed physical examination and judicious use of imaging modalities are also vital when establishing a diagnosis. Once an accurate diagnosis is obtained, treatment is aimed at reducing external pressure, minimizing inflammation, correcting any causative foot and ankle deformities, and ultimately releasing any constrictive tissues.", "title": "" }, { "docid": "f3ed5e6eb8fd450830360e9bc1bad340", "text": "Musical performance requires prediction to operate instruments, to perform in groups and to improvise. We argue, with reference to a number of digital music instruments (DMIs), including two of our own, that predictive machine learning models can help interactive systems to understand their temporal context and ensemble behaviour. We also discuss how recent advances in deep learning highlight the role of prediction in DMIs, by allowing data-driven predictive models with a long memory of past states. We advocate for predictive musical interaction, where a predictive model is embedded in a musical interface, assisting users by predicting unknown states of musical processes. We propose a framework for characterising prediction as relating to the instrumental sound, ongoing musical process, or between members of an ensemble. Our framework shows that different musical interface design configurations lead to different types of prediction. We show that our framework accommodates deep generative models, as well as models for predicting gestural states, or other high-level musical information. We apply our framework to examples from our recent work and the literature, and discuss the benefits and challenges revealed by these systems as well as musical use-cases where prediction is a necessary component.", "title": "" }, { "docid": "ad6672657fc07ed922f1e2c0212b30bc", "text": "As a generalization of the ordinary wavelet transform, the fractional wavelet transform (FRWT) is a very promising tool for signal analysis and processing. Many of its fundamental properties are already known; however, little attention has been paid to its sampling theory. In this paper, we first introduce the concept of multiresolution analysis associated with the FRWT, and then propose a sampling theorem for signals in FRWT-based multiresolution subspaces. The necessary and sufficient condition for the sampling theorem is derived. Moreover, sampling errors due to truncation and aliasing are discussed. 
The validity of the theoretical derivations is demonstrated via simulations.", "title": "" }, { "docid": "afeb909f4be9da56dcaeb86d464ec75e", "text": "Synthesizing expressive speech with appropriate prosodic variations, e.g., various styles, still has much room for improvement. Previous methods have explored to use manual annotations as conditioning attributes to provide variation information. However, the related training data are expensive to obtain and the annotated style codes can be ambiguous and unreliable. In this paper, we explore utilizing the residual error as conditioning attributes. The residual error is the difference between the prediction of a trained average model and the ground truth. We encode the residual error into a style embedding via a neural networkbased error encoder. The style embedding is then fed to the target synthesis model to provide information for modeling various style distributions more accurately. The average model and the error encoder are jointly optimized with the target synthesis model. Our proposed method has two advantages: 1) the embedding is automatically learned with no need of manual style annotations, which helps overcome data sparsity and ambiguity limitations; 2) For any unseen audio utterance, the style embedding can be efficiently generated. This enables rapid adaptation to the desired style to be achieved with only a single adaptation utterance. Experimental results show that our proposed method outperforms the baseline model in both speech quality and style similarity.", "title": "" }, { "docid": "b637196c4627fd463ca54d0efeb87370", "text": "Vision-based lane detection is a critical component of modern automotive active safety systems. Although a number of robust and accurate lane estimation (LE) algorithms have been proposed, computationally efficient systems that can be realized on embedded platforms have been less explored and addressed. This paper presents a framework that incorporates contextual cues for LE to further enhance the performance in terms of both computational efficiency and accuracy. The proposed context-aware LE framework considers the state of the ego vehicle, its surroundings, and the system-level requirements to adapt and scale the LE process resulting in substantial computational savings. This is accomplished by synergistically fusing data from multiple sensors along with the visual data to define the context around the ego vehicle. The context is then incorporated as an input to the LE process to scale it depending on the contextual requirements. A detailed evaluation of the proposed framework on real-world driving conditions shows that the dynamic and static configuration of the lane detection process results in computation savings as high as 90%, without compromising on the accuracy of LE.", "title": "" }, { "docid": "d54ad1a912a0b174d1f565582c6caf1c", "text": "This paper presents a new novel design of a smart walker for rehabilitation purpose by patients in hospitals and rehabilitation centers. The design features a full frame walker that provides secured and stable support while being foldable and compact. 
It also has smart features such as telecommunication and patient activity monitoring.", "title": "" }, { "docid": "1eca0e6a170470a483dc25196e6cca63", "text": "Benchmarks for Cloud Robotics", "title": "" }, { "docid": "987de36823c8dbb9ff13aec4fecd6c9a", "text": "Previous research has been done on mindfulness and nursing stress but no review has been done to highlight the most up-to-date findings, to justify the recommendation of mindfulness training for the nursing field. The present paper aims to review the relevant studies, derive conclusions, and discuss future direction of research in this field.A total of 19 research papers were reviewed. The majority was intervention studies on the effects of mindfulness-training programs on nursing stress. Higher mindfulness is correlated with lower nursing stress. Mindfulness-based training programs were found to have significant positive effects on nursing stress and psychological well-being. The studies were found to have non-standardized intervention methods, inadequate research designs, small sample size, and lack of systematic follow-up on the sustainability of treatment effects, limiting the generalizability of the results. There is also a lack of research investigation into the underlying mechanism of action of mindfulness on nursing stress. Future research that addresses these limitations is indicated.", "title": "" }, { "docid": "51a2d48f43efdd8f190fd2b6c9a68b3c", "text": "Textual passwords are often the only mechanism used to authenticate users of a networked system. Unfortunately, many passwords are easily guessed or cracked. In an attempt to strengthen passwords, some systems instruct users to create mnemonic phrase-based passwords. A mnemonic password is one where a user chooses a memorable phrase and uses a character (often the first letter) to represent each word in the phrase.In this paper, we hypothesize that users will select mnemonic phrases that are commonly available on the Internet, and that it is possible to build a dictionary to crack mnemonic phrase-based passwords. We conduct a survey to gather user-generated passwords. We show the majority of survey respondents based their mnemonic passwords on phrases that can be found on the Internet, and we generate a mnemonic password dictionary as a proof of concept. Our 400,000-entry dictionary cracked 4% of mnemonic passwords; in comparison, a standard dictionary with 1.2 million entries cracked 11% of control passwords. The user-generated mnemonic passwords were also slightly more resistant to brute force attacks than control passwords. These results suggest that mnemonic passwords may be appropriate for some uses today. However, mnemonic passwords could become more vulnerable in the future and should not be treated as a panacea.", "title": "" }, { "docid": "851de4b014dfeb6f470876896b0416b3", "text": "The design of bioinspired systems for chemical sensing is an engaging line of research in machine olfaction. Developments in this line could increase the lifetime and sensitivity of artificial chemo-sensory systems. Such approach is based on the sensory systems known in live organisms, and the resulting developed artificial systems are targeted to reproduce the biological mechanisms to some extent. 
Sniffing behaviour, sampling odours actively, has been studied recently in neuroscience, and it has been suggested that the respiration frequency is an important parameter of the olfactory system, since the odour perception, especially in complex scenarios such as novel odourants exploration, depends on both the stimulus identity and the sampling method. In this work we propose a chemical sensing system based on an array of 16 metal-oxide gas sensors that we combined with an external mechanical ventilator to simulate the biological respiration cycle. The tested gas classes formed a relatively broad combination of two analytes, acetone and ethanol, in binary mixtures. Two sets of low-frequency and high-frequency features were extracted from the acquired signals to show that the high-frequency features contain information related to the gas class. In addition, such information is available at early stages of the measurement, which could make the technique suitable in early detection scenarios. The full data set is made publicly available to the community.", "title": "" }, { "docid": "e9103d50d367787a5bfa68a38d6ea059", "text": "This article proposes the development of a dynamic virtual environment that consumes real-time data about the state of a place to offer the tourist an immersion similar to being at the desired location. The development implements a communication and loading structure for multiple information sources: manual data loaded from mobile devices and data loaded from collecting equipment that gathers environmental and atmospheric data. The virtual reality application uses Google Maps and a worldwide heightmap to obtain 3D geographic map models; HTC VIVE and the Oculus SDK to support the virtual reality experience; and a weather API to show the weather information for the desired location in real time. In addition, the proposed virtual reality application emphasizes user interaction within the virtual environment by displaying dynamic and up-to-date information about tourism services.", "title": "" }, { "docid": "9e0a28a8205120128938b52ba8321561", "text": "Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience, and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. 
Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.", "title": "" }, { "docid": "a759ddc24cebbbf0ac71686b179962df", "text": "Most proteins must fold into defined three-dimensional structures to gain functional activity. But in the cellular environment, newly synthesized proteins are at great risk of aberrant folding and aggregation, potentially forming toxic species. To avoid these dangers, cells invest in a complex network of molecular chaperones, which use ingenious mechanisms to prevent aggregation and promote efficient folding. Because protein molecules are highly dynamic, constant chaperone surveillance is required to ensure protein homeostasis (proteostasis). Recent advances suggest that an age-related decline in proteostasis capacity allows the manifestation of various protein-aggregation diseases, including Alzheimer's disease and Parkinson's disease. Interventions in these and numerous other pathological states may spring from a detailed understanding of the pathways underlying proteome maintenance.", "title": "" }, { "docid": "941df83e65700bc2e5ee7226b96e4f54", "text": "This paper presents design and analysis of a three phase induction motor drive using IGBT‟s at the inverter power stage with volts hertz control (V/F) in closed loop using dsPIC30F2010 as a controller. It is a 16 bit high-performance digital signal controller (DSC). DSC is a single chip embedded controller that integrates the controller attributes of a microcontroller with the computation and throughput capabilities of a DSP in a single core. A 1HP, 3-phase, 415V, 50Hz induction motor is used as load for the inverter. Digital Storage Oscilloscope Textronix TDS2024B is used to record and analyze the various waveforms. The experimental results for V/F control of 3Phase induction motor using dsPIC30F2010 chip clearly shows constant volts per hertz and stable inverter line to line output voltage. Keywords--DSC, constant volts per hertz, PWM inverter, ACIM.", "title": "" }, { "docid": "cc1876cf1d71be6c32c75bd2ded25e65", "text": "Traditional anomaly detection on social media mostly focuses on individual point anomalies while anomalous phenomena usually occur in groups. Therefore, it is valuable to study the collective behavior of individuals and detect group anomalies. Existing group anomaly detection approaches rely on the assumption that the groups are known, which can hardly be true in real world social media applications. In this article, we take a generative approach by proposing a hierarchical Bayes model: Group Latent Anomaly Detection (GLAD) model. GLAD takes both pairwise and point-wise data as input, automatically infers the groups and detects group anomalies simultaneously. To account for the dynamic properties of the social media data, we further generalize GLAD to its dynamic extension d-GLAD. We conduct extensive experiments to evaluate our models on both synthetic and real world datasets. The empirical results demonstrate that our approach is effective and robust in discovering latent groups and detecting group anomalies.", "title": "" }, { "docid": "c6a36cd9165d073d037245505f1cf710", "text": "Most drugs of abuse easily cross the placenta and can affect fetal brain development. 
In utero exposures to drugs thus can have long-lasting implications for brain structure and function. These effects on the developing nervous system, before homeostatic regulatory mechanisms are properly calibrated, often differ from their effects on mature systems. In this review, we describe current knowledge on how alcohol, nicotine, cocaine, amphetamine, Ecstasy, and opiates (among other drugs) produce alterations in neurodevelopmental trajectory. We focus both on animal models and available clinical and imaging data from cross-sectional and longitudinal human studies. Early studies of fetal exposures focused on classic teratological methods that are insufficient for revealing more subtle effects that are nevertheless very behaviorally relevant. Modern mechanistic approaches have informed us greatly as to how to potentially ameliorate the induced deficits in brain formation and function, but conclude that better delineation of sensitive periods, dose–response relationships, and long-term longitudinal studies assessing future risk of offspring to exhibit learning disabilities, mental health disorders, and limited neural adaptations are crucial to limit the societal impact of these exposures.", "title": "" } ]
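One abstract in the negative-passages list above quotes the FIT Treadmill Score formula explicitly: percentage of maximum predicted heart rate + 12*METs - 4*age, plus 43 for women. A small helper that evaluates that quoted formula is sketched below; the function and argument names and the example numbers are mine, not from the original study.

```python
def fit_treadmill_score(pct_max_predicted_hr, mets, age, is_female):
    # Score = %max predicted HR + 12*METs - 4*age (+43 if female),
    # per the formula quoted in the exercise-testing abstract above.
    score = pct_max_predicted_hr + 12 * mets - 4 * age
    if is_female:
        score += 43
    return score

# Example: a 53-year-old woman reaching 95% of predicted heart rate at 8 METs.
print(fit_treadmill_score(95, 8, 53, True))   # 95 + 96 - 212 + 43 = 22
```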
scidocsrr
d9045cce7af90cef04ea6d41238b7bd1
Low-loss 0.13-µm CMOS 50 – 70 GHz SPDT and SP4T switches
[ { "docid": "b929cbcaf8de8e845d1cf7f59d3eca63", "text": "This paper presents 35 GHz single-pole-single-throw (SPST) and single-pole-double-throw (SPDT) CMOS switches using a 0.13 mum BiCMOS process (IBM 8 HP). The CMOS transistors are designed to have a high substrate resistance to minimize the insertion loss and improve power handling capability. The SPST/SPDT switches have a insertion loss of 1.8 dB/2.2 dB, respectively, and an input 1-dB compression point (P1 dB) greater than 22 dBm. The isolation is greater than 30 dB at 35-40 GHz and is achieved using two parallel resonant networks. To our knowledge, this is the first demonstration of low-loss, high-isolation CMOS switches at Ka-band frequencies.", "title": "" } ]
[ { "docid": "e9e11d96e26708c380362847094113db", "text": "Orthogonal frequency-division multiplexing (OFDM) is a modulation technology that has been widely adopted in many new and emerging broadband wireless and wireline communication systems. Due to its capability to transmit a high-speed data stream using multiple spectral-overlapped lower-speed subcarriers, OFDM technology offers superior advantages of high spectrum efficiency, robustness against inter-carrier and inter-symbol interference, adaptability to server channel conditions, etc. In recent years, there have been intensive studies on optical OFDM (O-OFDM) transmission technologies, and it is considered a promising technology for future ultra-high-speed optical transmission. Based on O-OFDM technology, a novel elastic optical network architecture with immense flexibility and scalability in spectrum allocation and data rate accommodation could be built to support diverse services and the rapid growth of Internet traffic in the future. In this paper, we present a comprehensive survey on OFDM-based elastic optical network technologies, including basic principles of OFDM, O-OFDM technologies, the architectures of OFDM-based elastic core optical networks, and related key enabling technologies. The main advantages and issues of OFDM-based elastic core optical networks that are under research are also discussed.", "title": "" }, { "docid": "e460b586a78b334f1faaab0ad77a2a82", "text": "This paper introduces an allocation and scheduling algorithm that efficiently handles conditional execution in multi-rate embedded system. Control dependencies are introduced into the task graph model. We propose a mutual exclusion detection algorithm that helps the scheduling algorithm to exploit the resource sharing. Allocation and scheduling are performed simultaneously to take advantage of the resource sharing among those mutual exclusive tasks. The algorithm is fast and efficient,and so is suitable to be used in the inner loop of our hardware/software co-synthesis framework which must call the scheduling routine many times.", "title": "" }, { "docid": "94a35547a45c06a90f5f50246968b77e", "text": "In this paper we present a process called color transfer which can borrow one image's color characteristics from another. Recently Reinhard and his colleagues reported a pioneering work of color transfer. Their technology can produce very believable results, but has to transform pixel values from RGB to lαβ. Inspired by their work, we advise an approach which can directly deal with the color transfer in any 3D space.From the view of statistics, we consider pixel's value as a three-dimension stochastic variable and an image as a set of samples, so the correlations between three components can be measured by covariance. Our method imports covariance between three components of pixel values while calculate the mean along each of the three axes. Then we decompose the covariance matrix using SVD algorithm and get a rotation matrix. Finally we can scale, rotate and shift pixel data of target image to fit data points' cluster of source image in the current color space and get resultant image which takes on source image's look and feel. Besides the global processing, a swatch-based method is introduced in order to manipulate images' color more elaborately. 
Experimental results confirm the validity and usefulness of our method.", "title": "" }, { "docid": "f6574fbbdd53b2bc92af485d6c756df0", "text": "A comparative analysis between Nigerian English (NE) and American English (AE) is presented in this article. The study is aimed at highlighting differences in the speech parameters, and how they influence speech processing and automatic speech recognition (ASR). The UILSpeech corpus of Nigerian-Accented English isolated word recordings, read speech utterances, and video recordings are used as a reference for Nigerian English. The corpus captures the linguistic diversity of Nigeria with data collected from native speakers of Hausa, Igbo, and Yoruba languages. The UILSpeech corpus is intended to provide a unique opportunity for application and expansion of speech processing techniques to a limited resource language dialect. The acoustic-phonetic differences between American English (AE) and Nigerian English (NE) are studied in terms of pronunciation variations, vowel locations in the formant space, mean fundamental frequency, and phone model distances in the acoustic space, as well as through visual speech analysis of the speakers’ articulators. A strong impact of the AE–NE acoustic mismatch on ASR is observed. A combination of model adaptation and extension of the AE lexicon for newly established NE pronunciation variants is shown to substantially improve performance of the AE-trained ASR system in the new NE task. This study is a part of the pioneering efforts towards incorporating speech technology in Nigerian English and is intended to provide a development basis for other low resource language dialects and languages.", "title": "" }, { "docid": "153a22e4477a0d6ce98b9a0fba2ab595", "text": "Uninterruptible power supplies (UPSs) have been used in many installations for critical loads that cannot afford power failure or surge during operation. It is often difficult to upgrade the UPS system as the load grows over time. Due to lower cost and maintenance, as well as ease of increasing system capacity, the parallel operation of modularized small-power UPS has attracted much attention in recent years. In this paper, a new scheme for parallel operation of inverters is introduced. A multiple-input-multiple-output state-space model is developed to describe the parallel-connected inverters system, and a model-predictive-control scheme suitable for paralleled inverters control is proposed. In this algorithm, the control objectives of voltage tracking and current sharing are formulated using a weighted cost function. The effectiveness and the hot-swap capability of the proposed parallel-connected inverters system have been verified with experimental results.", "title": "" }, { "docid": "d0d114e862c2b8aa81ba4c1815b00764", "text": "It has been commonly acknowledged that the acceptance of a product depends on both its utilitarian and non-utilitarian properties. The non-utilitarian properties can elicit generally pleasurable and particularly playful experiences in the product’s users. Product design needs to improve the support of playful experiences in order to fit in with the users’ multi-faceted needs. However, designing for fun and pleasure is not an easy task, and there is an urgent need in user experience research and design practices to better understand the role of playfulness in overall user experience of the product. In this paper, we present an initial framework of playful experiences which are derived from studies in interactive art and videogames. 
We conducted a user study to verify that these experiences are valid. We interviewed 13 videogame players about their experiences with games and what triggers these experiences. The results indicate that the players are experiencing the videogames in many different ways which can be categorized using the framework. We propose that the framework could help the design of interactive products from an experience point of view and make them more engaging, attractive, and most importantly, more playful for the users.", "title": "" }, { "docid": "bc1efec6824aae80c9cae7ea2b2c4842", "text": "State-of-the-art natural language processing systems rely on supervision in the form of annotated data to learn competent models. These models are generally trained on data in a single language (usually English), and cannot be directly used beyond that language. Since collecting data in every language is not realistic, there has been a growing interest in crosslingual language understanding (XLU) and low-resource cross-language transfer. In this work, we construct an evaluation set for XLU by extending the development and test sets of the Multi-Genre Natural Language Inference Corpus (MultiNLI) to 15 languages, including low-resource languages such as Swahili and Urdu. We hope that our dataset, dubbed XNLI, will catalyze research in cross-lingual sentence understanding by providing an informative standard evaluation task. In addition, we provide several baselines for multilingual sentence understanding, including two based on machine translation systems, and two that use parallel data to train aligned multilingual bag-of-words and LSTM encoders. We find that XNLI represents a practical and challenging evaluation suite, and that directly translating the test data yields the best performance among available baselines.", "title": "" }, { "docid": "d0f9bf7511bcaced02838aa1c2d8785b", "text": "A folksonomy consists of three basic entities, namely users, tags and resources. This kind of social tagging system is a good way to index information, facilitate searches and navigate resources. The main objective of this paper is to present a novel method to improve the quality of tag recommendation. According to the statistical analysis, we find that the total number of tags used by a user changes over time in a social tagging system. Thus, this paper introduces the concept of user tagging status, namely the growing status, the mature status and the dormant status. Then, the determining user tagging status algorithm is presented considering a user’s current tagging status to be one of the three tagging status at one point. Finally, three corresponding strategies are developed to compute the tag probability distribution based on the statistical language model in order to recommend tags most likely to be used by users. Experimental results show that the proposed method is better than the compared methods at the accuracy of tag recommendation.", "title": "" }, { "docid": "00904281e8f6d5770e1ba3ff7febd20b", "text": "This paper proposes a data-driven method for concept-to-text generation, the task of automatically producing textual output from non-linguistic input. A key insight in our approach is to reduce the tasks of content selection (“what to say”) and surface realization (“how to say”) into a common parsing problem. We define a probabilistic context-free grammar that describes the structure of the input (a corpus of database records and text describing some of them) and represent it compactly as a weighted hypergraph. 
The hypergraph structure encodes exponentially many derivations, which we rerank discriminatively using local and global features. We propose a novel decoding algorithm for finding the best scoring derivation and generating in this setting. Experimental evaluation on the ATIS domain shows that our model outperforms a competitive discriminative system both using BLEU and in a judgment elicitation study.", "title": "" }, { "docid": "40043360644ded6950e1f46bd2caaf96", "text": "Recently, there has been a rapidly growing interest in deep learning research and their applications to real-world problems. In this paper, we aim at evaluating and comparing LSTM deep learning architectures for short-and long-term prediction of financial time series. This problem is often considered as one of the most challenging real-world applications for time-series prediction. Unlike traditional recurrent neural networks, LSTM supports time steps of arbitrary sizes and without the vanishing gradient problem. We consider both bidirectional and stacked LSTM predictive models in our experiments and also benchmark them with shallow neural networks and simple forms of LSTM networks. The evaluations are conducted using a publicly available dataset for stock market closing prices.", "title": "" }, { "docid": "9692ab0e46c6e370aeb171d3224f5d23", "text": "With the advent technology of Remote Sensing (RS) and Geographic Information Systems (GIS), a network transportation (Road) analysis within this environment has now become a common practice in many application areas. But a main problem in the network transportation analysis is the less quality and insufficient maintenance policies. This is because of the lack of funds for infrastructure. This demand for information requires new approaches in which data related to transportation network can be identified, collected, stored, retrieved, managed, analyzed, communicated and presented, for the decision support system of the organization. The adoption of newly emerging technologies such as Geographic Information System (GIS) can help to improve the decision making process in this area for better use of the available limited funds. The paper reviews the applications of GIS technology for transportation network analysis.", "title": "" }, { "docid": "4fa99994915bba8621e186a7e6804743", "text": "We address the problem of synthesizing a robust data-extractor from a family of websites that contain the same kind of information. This problem is common when trying to aggregate information from many web sites, for example, when extracting information for a price-comparison site.\n Given a set of example annotated web pages from multiple sites in a family, our goal is to synthesize a robust data extractor that performs well on all sites in the family (not only on the provided example pages). The main challenge is the need to trade off precision for generality and robustness. Our key contribution is the introduction of forgiving extractors that dynamically adjust their precision to handle structural changes, without sacrificing precision on the training set.\n Our approach uses decision tree learning to create a generalized extractor and converts it into a forgiving extractor, inthe form of an XPath query. The forgiving extractor captures a series of pruned decision trees with monotonically decreasing precision, and monotonically increasing recall, and dynamically adjusts precision to guarantee sufficient recall. 
We have implemented our approach in a tool called TREEX and applied it to synthesize extractors for real-world large scale web sites. We evaluate the robustness and generality of the forgiving extractors by evaluating their precision and recall on: (i) different pages from sites in the training set (ii) pages from different versions of sites in the training set (iii) pages from different (unseen) sites. We compare the results of our synthesized extractor to those of classifier-based extractors, and pattern-based extractors, and show that TREEX significantly improves extraction accuracy.", "title": "" }, { "docid": "2c3e6373feb4352a68ec6fd109df66e0", "text": "A broadband transition design between broadside coupled stripline (BCS) and conductor-backed coplanar waveguide (CBCPW) is proposed and studied. The E-field of CBCPW is designed to be gradually changed to that of BCS via a simple linear tapered structure. Two back-to-back transitions are simulated, fabricated and measured. It is reported that maximum insertion loss of 2.3 dB, return loss of higher than 10 dB and group delay flatness of about 0.14 ns are obtained from 50 MHz to 20 GHz.", "title": "" }, { "docid": "d59b7281c896bcd99902b8fb13951f98", "text": "The History of financial services in Tanzania shows that there were poor financial services before and soon after the independence. The financial services improved slowly after the liberalization of financial services in the 1990s. This paper uses the empirical literature to compare the current financial services providers serving the middle and lower income groups in Tanzania. The analysis of findings indicates that cooperative financial institutions (VICOBA and SACCOS) and mobile money services serve the majority of Tanzanians both in rural and urban areas. The paper recommends that policymakers should favor the semi-formal MFIs to enable them to serve the majority of Tanzanians and the security of mobile monetary transactions should be strengthened since is the most reliable monetary services used by all categories of Tanzanians throughout the country.", "title": "" }, { "docid": "8a4b1c87b85418ce934f16003a481f27", "text": "Current parking space vacancy detection systems use simple trip sensors at the entry and exit points of parking lots. Unfortunately, this type of system fails when a vehicle takes up more than one spot or when a parking lot has different types of parking spaces. Therefore, I propose a camera-based system that would use computer vision algorithms for detecting vacant parking spaces. My algorithm uses a combination of car feature point detection and color histogram classification to detect vacant parking spaces in static overhead images.", "title": "" }, { "docid": "d8c5ff196db9acbea12e923b2dcef276", "text": "MoS<sub>2</sub>-graphene-based hybrid structures are biocompatible and useful in the field of biosensors. Herein, we propose a heterostructured MoS<sub>2</sub>/aluminum (Al) film/MoS<sub>2</sub>/graphene as a highly sensitive surface plasmon resonance (SPR) biosensor based on the Otto configuration. The sensitivity of the proposed biosensor is enhanced by using three methods. First, prisms of different refractive index have been discussed and it is found that sensitivity can be enhanced by using a low refractive index prism. Second, the influence of the thickness of the air layer on the sensitivity is analyzed and the optimal thickness of air is obtained. 
Finally, the sensitivity improvement and mechanism by using molybdenum disulfide (MoS<sub>2</sub>)–graphene hybrid structure is revealed. The maximum sensitivity ∼ 190.83°/RIU is obtained with six layers of MoS<sub>2</sub> coating on both surfaces of Al thin film.", "title": "" }, { "docid": "b97c9e8238f74539e8a17dcffecdd35f", "text": "This paper presents a novel approach to the task of automatic music genre classification which is based on multiple feature vectors and ensemble of classifiers. Multiple feature vectors are extracted from a single music piece. First, three 30-second music segments, one from the beginning, one from the middle and one from end part of a music piece are selected and feature vectors are extracted from each segment. Individual classifiers are trained to account for each feature vector extracted from each music segment. At the classification, the outputs provided by each individual classifier are combined through simple combination rules such as majority vote, max, sum and product rules, with the aim of improving music genre classification accuracy. Experiments carried out on a large dataset containing more than 3,000 music samples from ten different Latin music genres have shown that for the task of automatic music genre classification, the features extracted from the middle part of the music provide better results than using the segments from the beginning or end part of the music. Furthermore, the proposed ensemble approach, which combines the multiple feature vectors, provides better accuracy than using single classifiers and any individual music segment.", "title": "" }, { "docid": "3e01af44d4819d8c78615e66f56e5983", "text": "The amount of dynamic content on the web has been steadily increasing. Scripting languages such as JavaScript and browser extensions such as Adobe's Flash have been instrumental in creating web-based interfaces that are similar to those of traditional applications. Dynamic content has also become popular in advertising, where Flash is used to create rich, interactive ads that are displayed on hundreds of millions of computers per day. Unfortunately, the success of Flash-based advertisements and applications attracted the attention of malware authors, who started to leverage Flash to deliver attacks through advertising networks. This paper presents a novel approach whose goal is to automate the analysis of Flash content to identify malicious behavior. We designed and implemented a tool based on the approach, and we tested it on a large corpus of real-world Flash advertisements. The results show that our tool is able to reliably detect malicious Flash ads with limited false positives. We made our tool available publicly and it is routinely used by thousands of users.", "title": "" }, { "docid": "f5ce4a13a8d081243151e0b3f0362713", "text": "Despite the growing popularity of digital imaging devices, the problem of accurately estimating the spatial frequency response or optical transfer function (OTF) of these devices has been largely neglected. Traditional methods for estimating OTFs were designed for film cameras and other devices that form continuous images. These traditional techniques do not provide accurate OTF estimates for typical digital image acquisition devices because they do not account for the fixed sampling grids of digital devices . This paper describes a simple method for accurately estimating the OTF of a digital image acquisition device. The method extends the traditional knife-edge technique''3 to account for sampling. 
One of the principal motivations for digital imaging systems is the utility of digital image processing algorithms, many of which require an estimate of the OTF. Algorithms for enhancement, spatial registration, geometric transformations, and other purposes involve restoration—removing the effects of the image acquisition device. Nearly all restoration algorithms (e.g., the", "title": "" } ]
scidocsrr
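As a concrete illustration of the late-fusion rules described in the music-genre passage in the record above (majority vote, max, sum, and product applied to per-segment classifier outputs), the following minimal Python/NumPy sketch shows one plausible way to implement those combination rules. The function name, array shapes, and example probabilities are illustrative assumptions rather than details taken from the cited work.

```python
import numpy as np

def combine_segment_predictions(probs, rule="sum"):
    """Fuse per-segment class probabilities with a simple combination rule.

    probs: array of shape (n_segments, n_classes), one row per music segment
    (e.g., beginning, middle, end), holding that segment classifier's
    class-probability estimates. Returns the index of the winning class.
    """
    probs = np.asarray(probs, dtype=float)
    if rule == "majority":
        votes = np.argmax(probs, axis=1)                      # each segment votes for one class
        counts = np.bincount(votes, minlength=probs.shape[1])
        return int(np.argmax(counts))
    if rule == "max":
        scores = probs.max(axis=0)                            # highest per-class confidence
    elif rule == "sum":
        scores = probs.sum(axis=0)                            # sum of per-class probabilities
    elif rule == "product":
        scores = probs.prod(axis=0)                           # product of per-class probabilities
    else:
        raise ValueError(f"unknown rule: {rule}")
    return int(np.argmax(scores))

# Usage example: three 30-second segments, four hypothetical genres.
segment_probs = [
    [0.10, 0.60, 0.20, 0.10],   # beginning segment
    [0.05, 0.70, 0.15, 0.10],   # middle segment
    [0.40, 0.30, 0.20, 0.10],   # end segment
]
for rule in ("majority", "max", "sum", "product"):
    print(rule, combine_segment_predictions(segment_probs, rule))
```

The sum (or average) rule is often a reasonable default when the individual classifiers output calibrated probabilities; majority vote ignores confidence and only counts per-segment decisions.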
78140758171ada124132bbeac9aa671c
Low-Rank Matrix Completion by Riemannian Optimization
[ { "docid": "19518604892789208000e970747d0c3d", "text": "Given a partial symmetric matrixA with only certain elements specified, the Euclidean distance matrix completion problem (EDMCP) is to find the unspecified elements of A that makeA a Euclidean distance matrix (EDM). In this paper, we follow the successful approach in [20] and solve the EDMCP by generalizing the completion problem to allow for approximate completions. In particular, we introduce a primal-dual interiorpoint algorithm that solves an equivalent (quadratic objective function) semidefinite programming problem (SDP). Numerical results are included which illustrate the efficiency and robustness of our approach. Our randomly generated problems consistently resulted in low dimensional solutions when no completion existed.", "title": "" } ]
[ { "docid": "6b846d082123ca7319af0a4321f45a86", "text": "Mutations that exaggerate signalling of the receptor tyrosine kinase fibroblast growth factor receptor 3 (FGFR3) give rise to achondroplasia, the most common form of dwarfism in humans. Here we review the clinical features, genetic aspects and molecular pathogenesis of achondroplasia and examine several therapeutic strategies designed to target the mutant receptor or its signalling pathways, including the use of kinase inhibitors, blocking antibodies, physiologic antagonists, RNAi and chaperone inhibitors. We conclude by discussing the challenges of treating growth plate disorders in children.", "title": "" }, { "docid": "9b8ba583adc6df6e02573620587be68a", "text": "BACKGROUND\nTraditional one-session exposure therapy (OST) in which a patient is gradually exposed to feared stimuli for up to 3 h in a one-session format has been found effective for the treatment of specific phobias. However, many individuals with specific phobia are reluctant to seek help, and access to care is lacking due to logistic challenges of accessing, collecting, storing, and/or maintaining stimuli. Virtual reality (VR) exposure therapy may improve upon existing techniques by facilitating access, decreasing cost, and increasing acceptability and effectiveness. The aim of this study is to compare traditional OST with in vivo spiders and a human therapist with a newly developed single-session gamified VR exposure therapy application with modern VR hardware, virtual spiders, and a virtual therapist.\n\n\nMETHODS/DESIGN\nParticipants with specific phobia to spiders (N = 100) will be recruited from the general public, screened, and randomized to either VR exposure therapy (n = 50) or traditional OST (n = 50). A behavioral approach test using in vivo spiders will serve as the primary outcome measure. Secondary outcome measures will include spider phobia questionnaires and self-reported anxiety, depression, and quality of life. Outcomes will be assessed using a non-inferiority design at baseline and at 1, 12, and 52 weeks after treatment.\n\n\nDISCUSSION\nVR exposure therapy has previously been evaluated as a treatment for specific phobias, but there has been a lack of high-quality randomized controlled trials. A new generation of modern, consumer-ready VR devices is being released that are advancing existing technology and have the potential to improve clinical availability and treatment effectiveness. The VR medium is also particularly suitable for taking advantage of recent phobia treatment research emphasizing engagement and new learning, as opposed to physiological habituation. This study compares a market-ready, gamified VR spider phobia exposure application, delivered using consumer VR hardware, with the current gold standard treatment. Implications are discussed.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov identifier NCT02533310. Registered on 25 August 2015.", "title": "" }, { "docid": "646f6456904a6ffe968c0f79a5286f65", "text": "Both ray tracing and point-based representations provide means to efficiently display very complex 3D models. Computational efficiency has been the main focus of previous work on ray tracing point-sampled surfaces. For very complex models efficient storage in the form of compression becomes necessary in order to avoid costly disk access. However, as ray tracing requires neighborhood queries, existing compression schemes cannot be applied because of their sequential nature. 
This paper introduces a novel acceleration structure called the quantized kd-tree, which offers both efficient traversal and storage. The gist of our new representation lies in quantizing the kd-tree splitting plane coordinates. We show that the quantized kd-tree reduces the memory footprint up to 18 times, not compromising performance. Moreover, the technique can also be employed to provide LOD (level-of-detail) to reduce aliasing problems, with little additional storage cost", "title": "" }, { "docid": "59791087d518577c20708e544a5eec26", "text": "This paper proposes an innovative fraud detection method, built upon existing fraud detection research and Minority Report, to deal with the data mining problem of skewed data distributions. This method uses backpropagation (BP), together with naive Bayesian (NB) and C4.5 algorithms, on data partitions derived from minority oversampling with replacement. Its originality lies in the use of a single meta-classifier (stacking) to choose the best base classifiers, and then combine these base classifiers' predictions (bagging) to improve cost savings (stacking-bagging). Results from a publicly available automobile insurance fraud detection data set demonstrate that stacking-bagging performs slightly better than the best performing bagged algorithm, C4.5, and its best classifier, C4.5 (2), in terms of cost savings. Stacking-bagging also outperforms the common technique used in industry (BP without both sampling and partitioning). Subsequently, this paper compares the new fraud detection method (meta-learning approach) against C4.5 trained using undersampling, oversampling, and SMOTEing without partitioning (sampling approach). Results show that, given a fixed decision threshold and cost matrix, the partitioning and multiple algorithms approach achieves marginally higher cost savings than varying the entire training data set with different class distributions. The most interesting find is confirming that the combination of classifiers to produce the best cost savings has its contributions from all three algorithms.", "title": "" }, { "docid": "e7772ed75853d4d16641b41ad2abdcfe", "text": "A 3D shape signature is a compact representation for some essence of a shape. Shape signatures are commonly utilized as a fast indexing mechanism for shape retrieval. Effective shape signatures capture some global geometric properties which are scale, translation, and rotation invariant. In this paper, we introduce an effective shape signature which is also pose-oblivious. This means that the signature is also insensitive to transformations which change the pose of a 3D shape such as skeletal articulations. Although some topology-based matching methods can be considered pose-oblivious as well, our new signature retains the simplicity and speed of signature indexing. Moreover, contrary to topology-based methods, the new signature is also insensitive to the topology change of the shape, allowing us to match similar shapes with different genus. Our shape signature is a 2D histogram which is a combination of the distribution of two scalar functions defined on the boundary surface of the 3D shape. The first is a definition of a novel function called the local-diameter function. This function measures the diameter of the 3D shape in the neighborhood of each vertex. The histogram of this function is an informative measure of the shape which is insensitive to pose changes. 
The second is the centricity function that measures the average geodesic distance from one vertex to all other vertices on the mesh. We evaluate and compare a number of methods for measuring the similarity between two signatures, and demonstrate the effectiveness of our pose-oblivious shape signature within a 3D search engine application for different databases containing hundreds of models", "title": "" }, { "docid": "ecaf322e67c43b7d54a05de495a443eb", "text": "Recently, considerable effort has been devoted to deep domain adaptation in computer vision and machine learning communities. However, most of existing work only concentrates on learning shared feature representation by minimizing the distribution discrepancy across different domains. Due to the fact that all the domain alignment approaches can only reduce, but not remove the domain shift, target domain samples distributed near the edge of the clusters, or far from their corresponding class centers are easily to be misclassified by the hyperplane learned from the source domain. To alleviate this issue, we propose to joint domain alignment and discriminative feature learning, which could benefit both domain alignment and final classification. Specifically, an instance-based discriminative feature learning method and a center-based discriminative feature learning method are proposed, both of which guarantee the domain invariant features with better intra-class compactness and inter-class separability. Extensive experiments show that learning the discriminative features in the shared feature space can significantly boost the performance of deep domain adaptation methods.", "title": "" }, { "docid": "03097e1239e5540fe1ec45729d1cbbc2", "text": "Policy gradient is an efficient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and not able to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. We refer to the new technique as ‘PGQ’, for policy gradient and Q-learning. We also establish an equivalency between action-value fitting techniques and actor-critic algorithms, showing that regularized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate improved data efficiency and stability of PGQ. In particular, we tested PGQ on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning.", "title": "" }, { "docid": "4bee6ec901c365f3780257ed62b7c020", "text": "There is no explicitly known example of a triple (g, a, x), where g ≥ 3 is an integer, a a digit in {0, . . . , g − 1} and x a real algebraic irrational number, for which one can claim that the digit a occurs infinitely often in the g–ary expansion of x. In 1909 and later in 1950, É. Borel considered such questions and suggested that the g–ary expansion of any algebraic irrational number in any base g ≥ 2 satisfies some of the laws that are satisfied by almost all numbers. 
For instance, the frequency with which a given finite sequence of digits occurs should depend only on the base and on the length of the sequence. Hence there is a huge gap between the established theory and the expected state of the art. However, some progress has been made recently, mainly thanks to clever use of Schmidt's subspace theorem. We review some of these results.", "title": "" }, { "docid": "d7e6b07fee74d6efd97733ac0b22f92c", "text": "Low-level optimisations from conventional compiler technology often give very poor results when applied to code from lazy functional languages, mainly because of the completely different structure of the code, unknown control flow, etc. A novel approach to compiling laziness is needed. We describe a complete back end for lazy functional languages, which uses various interprocedural optimisations to produce highly optimised code. The main features of our new back end are the following. It uses a monadic intermediate code, called GRIN (Graph Reduction Intermediate Notation). This code has a very functional flavour, making it well suited for analysis and program transformations, but at the same time provides the low-level machinery needed to express many concrete implementation concerns. Using a heap points-to analysis, we are able to eliminate most unknown control flow due to evals (i.e., forcing of closures) and applications of higher-order functions in the program. A transformation machinery uses many, each very simple, GRIN program transformations to optimise the intermediate code. Eventually, the GRIN code is translated into RISC machine code, and we apply an interprocedural register allocation algorithm, followed by many other low-level optimisations. The elimination of unknown control flow, made earlier, helps greatly in making the low-level optimisations work well. Preliminary measurements look very promising: we are currently twice as fast as the Glasgow Haskell Compiler for some small programs. Our approach still gives us many opportunities for further optimisations (though yet unexplored).", "title": "" }, { "docid": "fca63f719115e863f5245f15f6b1be50", "text": "Model-based testing (MBT) in a hardware-in-the-loop (HIL) platform is a simulation and testing environment for embedded systems, in which test design automation provided by MBT is combined with HIL methodology. A HIL platform is a testing environment in which the embedded system under test (SUT) is assumed to be operating with real-world inputs and outputs. In this paper, we focus on presenting the novel methodologies and tools that were used to conduct the validation of the MBT in HIL platform. Another novelty of the validation approach is that it aims to provide a comprehensive and many-sided process view of validating MBT- and HIL-related systems, including different component, integration and system-level testing activities. The research is based on the constructive method of the related scientific literature and testing technologies, and the results are derived through testing and validating the implemented MBT in HIL platform. The testing process used indicated that the functionality of the constructed MBT in HIL prototype platform was validated.", "title": "" }, { "docid": "87f7c3cfe6ca262e1f8716bf8ee16d2b", "text": "Existing deep multitask learning (MTL) approaches align layers shared between tasks in a parallel ordering. Such an organization significantly constricts the types of shared structure that can be learned. 
The necessity of parallel ordering for deep MTL is first tested by comparing it with permuted ordering of shared layers. The results indicate that a flexible ordering can enable more effective sharing, thus motivating the development of a soft ordering approach, which learns how shared layers are applied in different ways for different tasks. Deep MTL with soft ordering outperforms parallel ordering methods across a series of domains. These results suggest that the power of deep MTL comes from learning highly general building blocks that can be assembled to meet the demands of each task.", "title": "" }, { "docid": "4feab0c5f92502011ed17a425b0f800b", "text": "This paper gives an insight of how we can store healthcare data digitally like patient's records as an Electronic Health Record (EHR) and how we can generate useful information from these records by using analytics techniques and tools which will help in saving time and money of patients as well as the doctors. This paper is fully focused towards the Maharaja Yeshwantrao Hospital (M.Y.) located in Indore, Madhya Pradesh, India. M.Y hospital is the central India's largest government hospital. It generates large amount of heterogeneous data from different sources like patients health records, laboratory test result, electronic medical equipment, health insurance data, social media, drug research, genome research, clinical outcome, transaction and from Mahatma Gandhi Memorial medical college which is under MY hospital. To manage this data, data analytics may be used to make it useful for retrieval. Hence the concept of \"big data\" can be applied. Big data is characterized as extremely large data sets that can be analysed computationally to find patterns, trends, and associations, visualization, querying, information privacy and predictive analytics on large wide spread collection of data. Big data analytics can be done using Hadoop which plays an effective role in performing meaningful real-time analysis on the large volume of this data to predict the emergency situations before it happens. This paper also discusses about the EHR and the big data usage and its analytics at M.Y. hospital.", "title": "" }, { "docid": "a5999023893d996f0485abcf991ffbe1", "text": "In this paper, we address the issue of recovering and segmenting the apparent velocity field in sequences of images. As for motion estimation, we minimize an objective function involving two robust terms. The first one cautiously captures the optical flow constraint, while the second (a priori) term incorporates a discontinuity-preserving smoothness constraint. To cope with the nonconvex minimization problem thus defined, we design an efficient deterministic multigrid procedure. It converges fast toward estimates of good quality, while revealing the large discontinuity structures of flow fields. We then propose an extension of the model by attaching to it a flexible object-based segmentation device based on deformable closed curves (different families of curve equipped with different kinds of prior can be easily supported). Experimental results on synthetic and natural sequences are presented, including an analysis of sensitivity to parameter tuning.", "title": "" }, { "docid": "c533f33f95fd993e3bceffab85e9d851", "text": "Deep Convolutional Neural Networks (CNNs) offer remarkable performance of classifications and regressions in many high-dimensional problems and have been widely utilized in real-word cognitive applications. 
However, the high computational cost of CNNs greatly hinders their deployment in resource-constrained applications, real-time systems and edge computing platforms. To overcome this challenge, we propose a novel filter-pruning framework, two-phase filter pruning based on conditional entropy, namely 2PFPCE, to compress CNN models and reduce the inference time with marginal performance degradation. In our proposed method, we formulate the filter pruning process as an optimization problem and propose a novel filter selection criterion measured by conditional entropy. Based on the assumption that the representation of neurons shall be evenly distributed, we also develop a maximum-entropy filter freeze technique that can reduce overfitting. Two filter pruning strategies, global and layer-wise, are compared. Our experimental results show that combining these two strategies can achieve a higher neural network compression ratio than applying only one of them under the same accuracy drop threshold. Two-phase pruning, that is, combining both global and layer-wise strategies, achieves ∼10× FLOPs reduction and 46% inference time reduction on VGG-16, with a 2% accuracy drop.", "title": "" }, { "docid": "e5687e8ac3eb1fbac18d203c049d9446", "text": "Information security policy compliance is one of the key concerns that face organizations today. Although technical and procedural security measures help improve information security, there is an increased need to accommodate human, social and organizational factors. While employees are considered the weakest link in the information security domain, they are also assets that organizations need to leverage effectively. Employees' compliance with Information Security Policies (ISPs) is critical to the success of an information security program. The purpose of this research is to develop a measurement tool that provides better measures for predicting and explaining employees' compliance with ISPs by examining the role of information security awareness in enhancing employees' compliance with ISPs. The study is the first to address compliance intention from a user's perspective. Overall, analysis results indicate strong support for the proposed instrument and represent an early confirmation of the validation of the underlying theoretical model.", "title": "" }, { "docid": "97310173da47afec3cb3af2c3f985079", "text": "While machine learning (ML) models are being increasingly trusted to make decisions in different and varying areas, the safety of systems using such models has become an increasing concern. In particular, ML models are often trained on data from potentially untrustworthy sources, providing adversaries with the opportunity to manipulate them by inserting carefully crafted samples into the training set. Recent work has shown that this type of attack, called a poisoning attack, allows adversaries to insert backdoors or trojans into the model, enabling malicious behavior with simple external backdoor triggers at inference time and only a black-box perspective of the model itself. Detecting this type of attack is challenging because the unexpected behavior occurs only when a backdoor trigger, which is known only to the adversary, is present. Model users, either direct users of training data or users of pre-trained models from a catalog, may not be able to guarantee the safe operation of their ML-based system. In this paper, we propose a novel approach to backdoor detection and removal for neural networks. 
Through extensive experimental results, we demonstrate its effectiveness for neural networks classifying text and images. To the best of our knowledge, this is the first methodology capable of detecting poisonous data crafted to insert backdoors and repairing the model that does not require a verified and trusted dataset.", "title": "" }, { "docid": "1a143ebc85d6284c075dd1fc915a56c8", "text": "Neural models assist in characterizing the processes carried out by cortical and hippocampal memory circuits. Recent models of memory have addressed issues including recognition and recall dynamics, sequences of activity as the unit of storage, and consolidation of intermediate-term episodic memory into long-term memory.", "title": "" }, { "docid": "7d687eb0a853c2faed5d4109f3cdb023", "text": "This paper presents a new method for vehicle logo detection and recognition from images of front and back views of vehicle. The proposed method is a two-stage scheme which combines Convolutional Neural Network (CNN) and Pyramid of Histogram of Gradient (PHOG) features. CNN is applied as the first stage for candidate region detection and recognition of the vehicle logos. Then, PHOG with Support Vector Machine (SVM) classifier is employed in the second stage to verify the results from the first stage. Experiments are performed with dataset of vehicle images collected from internet. The results show that the proposed method can accurately locate and recognize the vehicle logos with higher robustness in comparison with the other conventional schemes. The proposed methods can provide up to 100% in recall, 96.96% in precision and 99.99% in recognition rate in dataset of 20 classes of the vehicle logo.", "title": "" }, { "docid": "eaa2ed7e15a3b0a3ada381a8149a8214", "text": "This paper describes a new robust regular polygon detector. The regular polygon transform is posed as a mixture of regular polygons in a five dimensional space. Given the edge structure of an image, we derive the a posteriori probability for a mixture of regular polygons, and thus the probability density function for the appearance of a mixture of regular polygons. Likely regular polygons can be isolated quickly by discretising and collapsing the search space into three dimensions. The remaining dimensions may be efficiently recovered subsequently using maximum likelihood at the locations of the most likely polygons in the subspace. This leads to an efficient algorithm. Also the a posteriori formulation facilitates inclusion of additional a priori information leading to real-time application to road sign detection. The use of gradient information also reduces noise compared to existing approaches such as the generalised Hough transform. Results are presented for images with noise to show stability. The detector is also applied to two separate applications: real-time road sign detection for on-line driver assistance; and feature detection, recovering stable features in rectilinear environments.", "title": "" }, { "docid": "610769d8ac53d5708f3a699f3f4436f9", "text": "For modeling the 3D world behind 2D images, which 3D representation is most appropriate? A polygon mesh is a promising candidate for its compactness and geometric properties. However, it is not straightforward to model a polygon mesh from 2D images using neural networks because the conversion from a mesh to an image, or rendering, involves a discrete operation called rasterization, which prevents back-propagation. 
Therefore, in this work, we propose an approximate gradient for rasterization that enables the integration of rendering into neural networks. Using this renderer, we perform single-image 3D mesh reconstruction with silhouette image supervision and our system outperforms the existing voxel-based approach. Additionally, we perform gradient-based 3D mesh editing operations, such as 2D-to-3D style transfer and 3D DeepDream, with 2D supervision for the first time. These applications demonstrate the potential of the integration of a mesh renderer into neural networks and the effectiveness of our proposed renderer.", "title": "" } ]
scidocsrr
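The query and positive passage in the record above concern matrix completion (Euclidean distance matrix completion via an SDP relaxation, and low-rank completion by Riemannian optimization in the query title). As a minimal illustration of the completion problem itself, and not of either of those specific methods, the following Python/NumPy sketch uses a simple hard-impute-style iteration: fill the missing entries, project onto rank-k matrices with a truncated SVD, and restore the observed entries. The matrix sizes, rank, and iteration count are arbitrary choices made for illustration.

```python
import numpy as np

def complete_low_rank(M_obs, mask, rank, n_iters=200):
    """Iterative hard-impute sketch for low-rank matrix completion.

    M_obs: matrix whose entries are trusted only where mask is True.
    mask:  boolean array marking observed entries.
    rank:  target rank of the completed matrix.
    """
    X = np.where(mask, M_obs, 0.0)                  # start with missing entries set to zero
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0                              # keep only the top-`rank` singular values
        X = (U * s) @ Vt                            # rank-`rank` projection
        X[mask] = M_obs[mask]                       # re-impose the observed entries
    return X

# Usage example: rank-2 ground truth with roughly 60% of entries observed.
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 20))
mask = rng.random(A.shape) < 0.6
X = complete_low_rank(A, mask, rank=2)
print("relative error on missing entries:",
      np.linalg.norm((X - A)[~mask]) / np.linalg.norm(A[~mask]))
```

This baseline only serves to make the problem setup concrete; Riemannian or SDP-based solvers differ substantially in how they parameterize the low-rank constraint and in their convergence behavior.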
6d761f362f01fefc189f6720ef654d66
SafeRoute: Learning to Navigate Streets Safely in an Urban Environment
[ { "docid": "2b57b32fcb378fe6a9a78699142d36c6", "text": "Navigating through unstructured environments is a basic capability of intelligent creatures, and thus is of fundamental interest in the study and development of artificial intelligence. Long-range navigation is a complex cognitive task that relies on developing an internal representation of space, grounded by recognisable landmarks and robust visual processing, that can simultaneously support continuous self-localisation (“I am here”) and a representation of the goal (“I am going there”). Building upon recent research that applies deep reinforcement learning to maze navigation problems, we present an end-to-end deep reinforcement learning approach that can be applied on a city scale. Recognising that successful navigation relies on integration of general policies with locale-specific knowledge, we propose a dual pathway architecture that allows locale-specific features to be encapsulated, while still enabling transfer to multiple cities. A key contribution of this paper is an interactive navigation environment that uses Google Street View for its photographic content and worldwide coverage. Our baselines demonstrate that deep reinforcement learning agents can learn to navigate in multiple cities and to traverse to target destinations that may be kilometres away. The project webpage http://streetlearn.cc contains a video summarizing our research and showing the trained agent in diverse city environments and on the transfer task, the form to request the StreetLearn dataset and links to further resources. The StreetLearn environment code is available at https://github.com/deepmind/streetlearn.", "title": "" }, { "docid": "318514ff3b6fc3d60fbb403c5db28687", "text": "Imitation learning techniques aim to mimic human behavior in a given task. An agent (a learning machine) is trained to perform a task from demonstrations by learning a mapping between observations and actions. The idea of teaching by imitation has been around for many years; however, the field is gaining attention recently due to advances in computing and sensing as well as rising demand for intelligent applications. The paradigm of learning by imitation is gaining popularity because it facilitates teaching complex tasks with minimal expert knowledge of the tasks. Generic imitation learning methods could potentially reduce the problem of teaching a task to that of providing demonstrations, without the need for explicit programming or designing reward functions specific to the task. Modern sensors are able to collect and transmit high volumes of data rapidly, and processors with high computational power allow fast processing that maps the sensory data to actions in a timely manner. This opens the door for many potential AI applications that require real-time perception and reaction such as humanoid robots, self-driving vehicles, human computer interaction, and computer games, to name a few. However, specialized algorithms are needed to effectively and robustly learn models as learning by imitation poses its own set of challenges. In this article, we survey imitation learning methods and present design options in different steps of the learning process. We introduce a background and motivation for the field as well as highlight challenges specific to the imitation problem. Methods for designing and evaluating imitation learning tasks are categorized and reviewed. 
Special attention is given to learning methods in robotics and games as these domains are the most popular in the literature and provide a wide array of problems and methodologies. We extensively discuss combining imitation learning approaches using different sources and methods, as well as incorporating other motion learning methods to enhance imitation. We also discuss the potential impact on industry, present major applications, and highlight current and future research directions.", "title": "" }, { "docid": "8092fcd0f4beae6f26fa40a78d1408aa", "text": "Existing research studies on vision and language grounding for robot navigation focus on improving model-free deep reinforcement learning (DRL) models in synthetic environments. However, model-free DRL models do not consider the dynamics in the real-world environments, and they often fail to generalize to new scenes. In this paper, we take a radical approach to bridge the gap between synthetic studies and real-world practices—We propose a novel, planned-ahead hybrid reinforcement learning model that combines model-free and model-based reinforcement learning to solve a real-world vision-language navigation task. Our look-ahead module tightly integrates a look-ahead policy model with an environment model that predicts the next state and the reward. Experimental results suggest that our proposed method significantly outperforms the baselines and achieves the best on the real-world Room-toRoom dataset. Moreover, our scalable method is more generalizable when transferring to unseen environments.", "title": "" } ]
[ { "docid": "4eead577c1b3acee6c93a62aee8a6bb5", "text": "The present study examined teacher attitudes toward dyslexia and the effects of these attitudes on teacher expectations and the academic achievement of students with dyslexia compared to students without learning disabilities. The attitudes of 30 regular education teachers toward dyslexia were determined using both an implicit measure and an explicit, self-report measure. Achievement scores for 307 students were also obtained. Implicit teacher attitudes toward dyslexia related to teacher ratings of student achievement on a writing task and also to student achievement on standardized tests of spelling but not math for those students with dyslexia. Self-reported attitudes of the teachers toward dyslexia did not relate to any of the outcome measures. Neither the implicit nor the explicit measures of teacher attitudes related to teacher expectations. The results show implicit attitude measures to be a more valuable predictor of the achievement of students with dyslexia than explicit, self-report attitude measures.", "title": "" }, { "docid": "262be71d64eef2534fab547ec3db6b9a", "text": "In the past few decades, the rise in attacks on communication devices in networks has resulted in a reduction of network functionality, throughput, and performance. To detect and mitigate these network attacks, researchers, academicians, and practitioners developed Intrusion Detection Systems (IDSs) with automatic response systems. The response system is considered an important component of IDS, since without a timely response IDSs may not function properly in countering various attacks, especially on a real-time basis. To respond appropriately, IDSs should select the optimal response option according to the type of network attack. This research study provides a complete survey of IDSs and Intrusion Response Systems (IRSs) on the basis of our in-depth understanding of the response option for different types of network attacks. Knowledge of the path from IDS to IRS can assist network administrators and network staffs in understanding how to tackle different attacks with state-of-the-art technologies.", "title": "" }, { "docid": "8c467cec76d31fee70e8206769b121c3", "text": "Color preference is an important aspect of visual experience, but little is known about why people in general like some colors more than others. Previous research suggested explanations based on biological adaptations [Hurlbert AC, Ling YL (2007) Curr Biol 17:623-625] and color-emotions [Ou L-C, Luo MR, Woodcock A, Wright A (2004) Color Res Appl 29:381-389]. In this article we articulate an ecological valence theory in which color preferences arise from people's average affective responses to color-associated objects. An empirical test provides strong support for this theory: People like colors strongly associated with objects they like (e.g., blues with clear skies and clean water) and dislike colors strongly associated with objects they dislike (e.g., browns with feces and rotten food). 
Relative to alternative theories, the ecological valence theory both fits the data better (even with fewer free parameters) and provides a more plausible, comprehensive causal explanation of color preferences.", "title": "" }, { "docid": "2c38b6af96d8393660c4c700b9322f7a", "text": "According to what we call the Principle of Procreative Beneficence (PB),couples who decide to have a child have a significant moral reason to select the child who, given his or her genetic endowment, can be expected to enjoy the most well-being. In the first part of this paper, we introduce PB,explain its content, grounds, and implications, and defend it against various objections. In the second part, we argue that PB is superior to competing principles of procreative selection such as that of procreative autonomy.In the third part of the paper, we consider the relation between PB and disability. We develop a revisionary account of disability, in which disability is a species of instrumental badness that is context- and person-relative.Although PB instructs us to aim to reduce disability in future children whenever possible, it does not privilege the normal. What matters is not whether future children meet certain biological or statistical norms, but what level of well-being they can be expected to have.", "title": "" }, { "docid": "7e994507b7d1986bbc02411b221e9223", "text": "Users of online social networks voluntarily participate in different user groups or communities. Researches suggest the presence of strong local community structure in these social networks, i.e., users tend to meet other people via mutual friendship. Recently, different approaches have considered communities structure information for increasing the link prediction accuracy. Nevertheless, these approaches consider that users belong to just one community. In this paper, we propose three measures for the link prediction task which take into account all different communities that users belong to. We perform experiments for both unsupervised and supervised link prediction strategies. The evaluation method considers the links imbalance problem. Results show that our proposals outperform state-of-the-art unsupervised link prediction measures and help to improve the link prediction task approached as a supervised strategy.", "title": "" }, { "docid": "5063a63d425b5ceebbadfbab14a0a75d", "text": "Two studies investigated young infants' use of the word-learning principle Mutual Exclusivity. In Experiment 1, a linear relationship between age and performance was discovered. Seventeen-month-old infants successfully used Mutual Exclusivity to map novel labels to novel objects in a preferential looking paradigm. That is, when presented a familiar and a novel object (e.g. car and phototube) and asked to \"look at the dax\", 17-month-olds increased looking to the novel object (i.e. phototube) above baseline preference. On these trials, 16-month-olds were at chance. And, 14-month-olds systematically increased looking to the familiar object (i.e. car) in response to hearing the novel label \"dax\". Experiment 2 established that this increase in looking to the car was due solely to hearing the novel label \"dax\". Several possible interpretations of the surprising form of failure at 14 months are discussed.", "title": "" }, { "docid": "fad6716fef303435fd3724364ebd2741", "text": "1567-4223/$ see front matter 2011 Elsevier B.V. A doi:10.1016/j.elerap.2011.06.003 ⇑ Corresponding author. Tel.: +82 10 8976 4410. E-mail addresses: kimhw@yonsei.ac.kr (H.-W. K (Y. 
Xu), sumeetgupta@ssitm.ac.in (S. Gupta). 1 Tel.: +86 21 2501 1198. 2 Tel.: +91 788 2291621. 3 There are two types of products: search product experience product (i.e., high touch product) (Klein 19 those where there is great variation is product qua luxurious items, and apparels. Search products are th quality does not vary across stores. For example, bo quality and therefore their quality is standard do not v store. Price and trust are considered to be two important factors that influence customer purchasing decisions in Internet shopping. This paper examines the relative influence they have on online purchasing decisions for both potential and repeat customers. The knowledge of their relative impacts and changes in their relative roles over customer transaction experience is useful in developing customized sales strategies to target different groups of customers. The results of this study revealed that perceived trust exerted a stronger effect than perceived price on purchase intentions for both potential and repeat customers of an online store. The results also revealed that perceived price exerted a stronger influence on purchase decisions of repeat customers as compared to that of potential customers. Perceived trust exerted a stronger influence on purchase decisions of potential customers as compared to that of repeat customers. 2011 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "1245c626f26dd7fe799d862b6f56a6af", "text": "The emergence of cloud services brings new possibilities for constructing and using HPC platforms. However, while cloud services provide the flexibility and convenience of customized, pay-as-you-go parallel computing, multiple previous studies in the past three years have indicated that cloud-based clusters need a significant performance boost to become a competitive choice, especially for tightly coupled parallel applications.\n In this work, we examine the feasibility of running HPC applications in clouds. This study distinguishes itself from existing investigations in several ways: 1) We carry out a comprehensive examination of issues relevant to the HPC community, including performance, cost, user experience, and range of user activities. 2) We compare an Amazon EC2-based platform built upon its newly available HPC-oriented virtual machines with typical local cluster and supercomputer options, using benchmarks and applications with scale and problem size unprecedented in previous cloud HPC studies. 3) We perform detailed performance and scalability analysis to locate the chief limiting factors of the state-of-the-art cloud based clusters. 4) We present a case study on the impact of per-application parallel I/O system configuration uniquely enabled by cloud services. Our results reveal that though the scalability of EC2-based virtual clusters still lags behind traditional HPC alternatives, they are rapidly gaining in overall performance and cost-effectiveness, making them feasible candidates for performing tightly coupled scientific computing. In addition, our detailed benchmarking and profiling discloses and analyzes several problems regarding the performance and performance stability on EC2.", "title": "" }, { "docid": "77555e0c16077cfa50682de2669b9abd", "text": "The demand for knowledge extraction has been increasing. 
With the growing amount of data being generated by global data sources (e.g., social media and mobile apps) and the popularization of context-specific data (e.g., the Internet of Things), companies and researchers need to connect all these data and extract valuable information. Machine learning has been gaining much attention in data mining, leveraging the birth of new solutions. This paper proposes an architecture to create a flexible and scalable machine learning as a service. An open source solution was implemented and presented. As a case study, a forecast of electricity demand was generated using real-world sensor and weather data by running different algorithms at the same time.", "title": "" }, { "docid": "2a361656df03b330abe665e6f40559aa", "text": "Sentiment analysis plays a big role in brand and product positioning, consumer attitude detection, market research and customer relationship management. Essential part of information-gathering for market research is to find the opinion of people about the product. With availability and popularity of like online review sites and personal blogs, more chances and challenges arise as people now can, and do use information technologies to understand others opinions. In this paper, a Multi-Layer Perceptron (MLP) is used to classify the features extracted from the movie reviews. A Decision Tree-based Feature Ranking is proposed for feature selection. The ranking is based on Manhattan Hierarchical Cluster Criterion In the proposed feature selection; a decision tree induction selects relevant features. Decision tree induction constructs a tree structure with internal nodes denoting an attribute test with the branch representing test outcome and external node denotes class prediction. In this paper, a hybrid algorithm based on Differential Evolution (DE) and Genetic Algorithm (GA) for weight optimization algorithm to optimize MLPNN is proposed. IMDb dataset is used to evaluate the proposed method. Experimental results showed that the MLP with proposed feature selection improves the performance of MLP significantly by 3.96% to 6.56%. Classification accuracy of 81.25% was achieved when 70 or 90 features were selected.", "title": "" }, { "docid": "148b3fa74867f67fa1a7196b3a10038a", "text": "Sentiment analysis of customer reviews has a crucial impact on a business's development strategy. Despite the fact that a repository of reviews evolves over time, sentiment analysis often relies on offline solutions where training data is collected before the model is built. If we want to avoid retraining the entire model from time to time, incremental learning becomes the best alternative solution for this task. In this work, we present a variant of online random forests to perform sentiment analysis on customers' reviews. Our model is able to achieve accuracy similar to offline methods and comparable to other online models.", "title": "" }, { "docid": "ea8290fda2918a4618b268db502b9e69", "text": "Managing raw alerts generated by various sensors are becoming of more significance to intrusion detection systems as more sensors with different capabilities are distributed spatially in the network. Alert Correlation addresses this issue by reducing, fusing and correlating raw alerts to provide a condensed, yet more meaningful view of the network from the intrusion standpoint. Techniques from a divers range of disciplines have been used by researchers for different aspects of correlation. 
This paper provides a survey of the state of the art in alert correlation techniques. Our main contribution is a two-fold classification of literature based on correlation framework and applied techniques. The previous works in each category have been described alongside with their strengths and weaknesses from our viewpoint.", "title": "" }, { "docid": "38624083e36ff9f2ea988de0eb685528", "text": "We present Convolutional Oriented Boundaries (COB), which produces multiscale oriented contours and region hierarchies starting from generic image classification Convolutional Neural Networks (CNNs). COB is computationally efficient, because it requires a single CNN forward pass for contour detection and it uses a novel sparse boundary representation for hierarchical segmentation; it gives a significant leap in performance over the state-of-the-art, and it generalizes very well to unseen categories and datasets. Particularly, we show that learning to estimate not only contour strength but also orientation provides more accurate results. We perform extensive experiments on BSDS, PASCAL Context, PASCAL Segmentation, and MS-COCO, showing that COB provides state-of-the-art contours, region hierarchies, and object proposals in all datasets.", "title": "" }, { "docid": "6ccb8a904748cbb263f9edb6cf82ff92", "text": "IMPORTANCE\nThe Affordable Care Act is the most important health care legislation enacted in the United States since the creation of Medicare and Medicaid in 1965. The law implemented comprehensive reforms designed to improve the accessibility, affordability, and quality of health care.\n\n\nOBJECTIVES\nTo review the factors influencing the decision to pursue health reform, summarize evidence on the effects of the law to date, recommend actions that could improve the health care system, and identify general lessons for public policy from the Affordable Care Act.\n\n\nEVIDENCE\nAnalysis of publicly available data, data obtained from government agencies, and published research findings. The period examined extends from 1963 to early 2016.\n\n\nFINDINGS\nThe Affordable Care Act has made significant progress toward solving long-standing challenges facing the US health care system related to access, affordability, and quality of care. Since the Affordable Care Act became law, the uninsured rate has declined by 43%, from 16.0% in 2010 to 9.1% in 2015, primarily because of the law's reforms. Research has documented accompanying improvements in access to care (for example, an estimated reduction in the share of nonelderly adults unable to afford care of 5.5 percentage points), financial security (for example, an estimated reduction in debts sent to collection of $600-$1000 per person gaining Medicaid coverage), and health (for example, an estimated reduction in the share of nonelderly adults reporting fair or poor health of 3.4 percentage points). The law has also begun the process of transforming health care payment systems, with an estimated 30% of traditional Medicare payments now flowing through alternative payment models like bundled payments or accountable care organizations. These and related reforms have contributed to a sustained period of slow growth in per-enrollee health care spending and improvements in health care quality. 
Despite this progress, major opportunities to improve the health care system remain.\n\n\nCONCLUSIONS AND RELEVANCE\nPolicy makers should build on progress made by the Affordable Care Act by continuing to implement the Health Insurance Marketplaces and delivery system reform, increasing federal financial assistance for Marketplace enrollees, introducing a public plan option in areas lacking individual market competition, and taking actions to reduce prescription drug costs. Although partisanship and special interest opposition remain, experience with the Affordable Care Act demonstrates that positive change is achievable on some of the nation's most complex challenges.", "title": "" }, { "docid": "a839d9e4a80d9a8715119bc53eddbce1", "text": "Reliable and comprehensive measurement data from large-scale fire tests is needed for validation of computer fire models, but is subject to various uncertainties, including radiation errors in temperature measurement. Here, a simple method for post-processing thermocouple data is demonstrated, within the scope of a series of large-scale fire tests, in order to establish a well characterised dataset of physical parameter values which can be used with confidence in model validation. Sensitivity analyses reveal the relationship of the correction uncertainty to the assumed optical properties and the thermocouple distribution. The analysis also facilitates the generation of maps of an equivalent radiative flux within the fire compartment, a quantity which usefully characterises the thermal exposures of structural components. Large spatial and temporal variations are found, with regions of most severe exposures not being collocated with the peak gas temperatures; this picture is at variance with the assumption of uniform heating conditions often adopted for post-flashover fires.", "title": "" }, { "docid": "d685e84f8ddc55f2391a9feffc88889f", "text": "Little is known about how Agile developers and UX designers integrate their work on a day-to-day basis. While accounts in the literature attempt to integrate Agile development and UX design by combining their processes and tools, the contradicting claims found in the accounts complicate extracting advice from such accounts. This paper reports on three ethnographically-informed field studies of the day-today practice of developers and designers in organisational settings. Our results show that integration is achieved in practice through (1) mutual awareness, (2) expectations about acceptable behaviour, (3) negotiating progress and (4) engaging with each other. Successful integration relies on practices that support and maintain these four aspects in the day-to-day work of developers and designers.", "title": "" }, { "docid": "c9f6de422e349ac1319b1017d2a6547b", "text": "This paper attempts a preliminary analysis of the global desirability of different forms of openness in AI development (including openness about source code, science, data, safety techniques, capabilities, and goals). Short-term impacts of increased openness appear mostly socially beneficial in expectation. The strategic implications of medium and long-term impacts are complex. The evaluation of long-term impacts, in particular, may depend on whether the objective is to benefit the present generation or to promote a time-neutral aggregate of well-being of future generations. Some forms of openness are plausibly positive on both counts (openness about safety measures, openness about goals). 
Others (openness about source code, science, and possibly capability) could lead to a tightening of the competitive situation around the time of the introduction of advanced AI, increasing the probability that winning the AI race is incompatible with using any safety method that incurs a delay or limits performance. We identify several key factors that must be taken into account by any well-founded opinion on the matter. Policy Implications • The global desirability of openness in AI development – sharing e.g. source code, algorithms, or scientific insights – depends – on complex tradeoffs. • A central concern is that openness could exacerbate a racing dynamic: competitors trying to be the first to develop advanced (superintelligent) AI may accept higher levels of existential risk in order to accelerate progress. • Openness may reduce the probability of AI benefits being monopolized by a small group, but other potential political consequences are more problematic. • Partial openness that enables outsiders to contribute to an AI project’s safety work and to supervise organizational plans and goals appears desirable. The goal of this paper is to conduct a preliminary analysis of the long-term strategic implications of openness in AI development. What effects would increased openness in AI development have, on the margin, on the long-term impacts of AI? Is the expected value for society of these effects positive or negative? Since it is typically impossible to provide definitive answers to this type of question, our ambition here is more modest: to introduce some relevant considerations and develop some thoughts on their weight and plausibility. Given recent interest in the topic of openness in AI and the absence (to our knowledge) of any academic work directly addressing this issue, even this modest ambition would offer scope for a worthwhile contribution. Openness in AI development can refer to various things. For example, we could use this phrase to refer to open source code, open science, open data, or to openness about safety techniques, capabilities, and organizational goals, or to a non-proprietary development regime generally. We will have something to say about each of those different aspects of openness – they do not all have the same strategic implications. But unless we specify otherwise, we will use the shorthand ‘openness’ to refer to the practice of releasing into the public domain (continuously and as promptly as is practicable) all relevant source code and platforms and publishing freely about algorithms and scientific insights and ideas gained in the course of the research. Currently, most leading AI developers operate with a high but not maximal degree of openness. AI researchers at Google, Facebook, Microsoft and Baidu regularly present their latest work at technical conferences and post it on preprint servers. So do researchers in academia. Sometimes, but not always, these publications are accompanied by a release of source code, which makes it easier for outside researchers to replicate the work and build on it. Each of the aforementioned companies have developed and released under open source licences source code for platforms that help researchers (and students and other interested folk) implement machine learning architectures. The movement of staff and interns is another important vector for the spread of ideas. The recently announced OpenAI initiative even has openness explicitly built into its brand identity. 
Global Policy (2017) doi: 10.1111/1758-5899.12403 © 2017 The Authors Global Policy published by Durham University and John Wiley & Sons, Ltd. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. Global Policy", "title": "" }, { "docid": "260e574e9108e05b98df7e4ed489e5fc", "text": "Why are we not living yet with robots? If robots are not common everyday objects, it is maybe because we have looked for robotic applications without considering with sufficient attention what could be the experience of interacting with a robot. This article introduces the idea of a value profile, a notion intended to capture the general evolution of our experience with different kinds of objects. After discussing value profiles of commonly used objects, it offers a rapid outline of the challenging issues that must be investigated concerning immediate, short-term and long-term experience with robots. Beyond science-fiction classical archetypes, the picture emerging from this analysis is the one of versatile everyday robots, autonomously developing in interaction with humans, communicating with one another, changing shape and body in order to be adapted to their various context of use. To become everyday objects, robots will not necessary have to be useful, but they will have to be at the origins of radically new forms of experiences.", "title": "" }, { "docid": "6a6f7493f38248b06fe67039143bda82", "text": "Time series forecasting techniques have been widely applied in domains such as weather forecasting, electric power demand forecasting, earthquake forecasting, and financial market forecasting. Because of the fact that these time series are affected by a multitude of interrelating macroscopic and microscopic variables, the underlying models that generate these time series are nonlinear and extremely complex. Therefore, it is computationally infeasible to develop full-scale models with the present computing technology. Therefore, researchers have resorted to smaller-scale models that require frequent recalibration. Despite advances in forecasting technology over the past few decades, there have not been algorithms that can consistently produce accurate forecasts with statistical significance. This is mainly because state-of-the-art forecasting algorithms essentially perform single-horizon forecasts and produce continuous numbers as outputs. This paper proposes a novel multi-horizon ternary forecasting algorithm that forecasts whether a time series is heading for an uptrend or downtrend, or going sideways. The proposed system utilizes a cascade of support vector machines, each of which is trained to forecast a specific horizon. Individual forecasts of these support vector machines are combined to form an extrapolated time series. A higher level forecasting system then forward-runs the extrapolated time series and then forecasts the future trend of the input time series in accordance with some volatility measure. Experiments have been carried out on some datasets. Over these datasets, this system achieves accuracy rates well above the baseline accuracy rate, implying statistical significance. 
The experimental results demonstrate the efficacy of our framework.", "title": "" }, { "docid": "2cfc7eeae3259a43a24ef56932d8b27f", "text": "This paper presents Platener, a system that allows quickly fabricating intermediate design iterations of 3D models, a process also known as low-fidelity fabrication. Platener achieves its speed-up by extracting straight and curved plates from the 3D model and substituting them with laser cut parts of the same size and thickness. Only the regions that are of relevance to the current design iteration are executed as full-detail 3D prints. Platener connects the parts it has created by automatically inserting joints. To help fast assembly it engraves instructions. Platener allows users to customize substitution results by (1) specifying fidelity-speed tradeoffs, (2) choosing whether or not to convert curved surfaces to plates bent using heat, and (3) specifying the conversion of individual plates and joints interactively. Platener is designed to best preserve the fidelity of func-tional objects, such as casings and mechanical tools, all of which contain a large percentage of straight/rectilinear elements. Compared to other low-fab systems, such as faBrickator and WirePrint, Platener better preserves the stability and functionality of such objects: the resulting assemblies have fewer parts and the parts have the same size and thickness as in the 3D model. To validate our system, we converted 2.250 3D models downloaded from a 3D model site (Thingiverse). Platener achieves a speed-up of 10 or more for 39.5% of all objects.", "title": "" } ]
scidocsrr
b72e173b5fad75e08ff6d4676fca5c3b
Bus Architectures for Safety-Critical Embedded Systems
[ { "docid": "ed6d9fd7ef8ec0f2509b6dec0ea4f77b", "text": "Avionics and control systems for aircraft use distributed, fault-tolerant computer systems to provide safety-critical functions such as flight and engine control. These systems are becoming modular, meaning that they are based on standardized architectures and components, and integrated, meaning that some of the components are shared by different functions—of possibly different criticality levels. The modular architectures that support these functions must provide mechanisms for coordinating the distributed components that provide a single function (e.g., distributing sensor readings and actuator commands appropriately, and assisting replicated components to perform the function in a fault-tolerant manner), while protecting functions from faults in each other. Such an architecture must tolerate hardware faults in its own components and must provide very strong guarantees on the correctness and reliability of its own mechanisms and services. One of the essential services provided by this kind of modular architecture is communication of information from one distributed component to another, so a (physical or logical) communication bus is one of its principal components, and the protocols used for control and communication on the bus are among its principal mechanisms. Consequently, these architectures are often referred to as buses (or databuses), although this term understates their complexity, sophistication, and criticality. The capabilities once found in aircraft buses are becoming available in buses aimed at the automobile market, where the economies of scale ensure low prices. The low price of the automobile buses then renders them attractive to certain aircraft applications—provided they can achieve the safety required. In this report, I describe and compare the architectures of two avionics and two automobile buses in the interest of deducing principles common to all of them, the main differences in their design choices, and the tradeoffs made. The avionics buses considered are the Honeywell SAFEbus (the backplane data bus used in the Boeing 777 Airplane Information Management System) and the NASA SPIDER (an architecture being developed as a demonstrator for certification under the new DO-254 guidelines); the automobile buses considered are the TTTech Time-Triggered Architecture (TTA), recently adopted by Audi for automobile applications, and by Honeywell for avionics and aircraft control functions, and FlexRay, which is being developed by a consortium of BMW, DaimlerChrysler, Motorola, and Philips. I consider these buses from the perspective of their fault hypotheses, mechanisms, services, and assurance.", "title": "" } ]
[ { "docid": "78bd1c7ea28a4af60991b56ccd658d7f", "text": "The number of categories for action recognition is growing rapidly. It is thus becoming increasingly hard to collect sufficient training data to learn conventional models for each category. This issue may be ameliorated by the increasingly popular “zero-shot learning” (ZSL) paradigm. In this framework a mapping is constructed between visual features and a human interpretable semantic description of each category, allowing categories to be recognised in the absence of any training data. Existing ZSL studies focus primarily on image data, and attribute-based semantic representations. In this paper, we address zero-shot recognition in contemporary video action recognition tasks, using semantic word vector space as the common space to embed videos and category labels. This is more challenging because the mapping between the semantic space and space-time features of videos containing complex actions is more complex and harder to learn. We demonstrate that a simple self-training and data augmentation strategy can significantly improve the efficacy of this mapping. Experiments on human action datasets including HMDB51 and UCF101 demonstrate that our approach achieves the state-of-the-art zero-shot action recognition performance.", "title": "" }, { "docid": "a73b9ce3d0808177c9f0739b67a1a3f3", "text": "Multiword expressions (MWEs) are lexical items that can be decomposed into multiple component words, but have properties that are unpredictable with respect to their component words. In this paper we propose the first deep learning models for token-level identification of MWEs. Specifically, we consider a layered feedforward network, a recurrent neural network, and convolutional neural networks. In experimental results we show that convolutional neural networks are able to outperform the previous state-of-the-art for MWE identification, with a convolutional neural network with three hidden layers giving the best performance.", "title": "" }, { "docid": "fc70a1820f838664b8b51b5adbb6b0db", "text": "This paper presents a method for identifying an opinion with its holder and topic, given a sentence from online news media texts. We introduce an approach of exploiting the semantic structure of a sentence, anchored to an opinion bearing verb or adjective. This method uses semantic role labeling as an intermediate step to label an opinion holder and topic using data from FrameNet. We decompose our task into three phases: identifying an opinion-bearing word, labeling semantic roles related to the word in the sentence, and then finding the holder and the topic of the opinion word among the labeled semantic roles. For a broader coverage, we also employ a clustering technique to predict the most probable frame for a word which is not defined in FrameNet. Our experimental results show that our system performs significantly better than the baseline.", "title": "" }, { "docid": "3a8f14166954036f85914183dd7a7ee4", "text": "Abused and nonabused child witnesses to parental violence temporarily residing in a battered women's shelter were compared to children from a similar economic background on measures of self-esteem, anxiety, depression, and behavior problems, using mothers' and self-reports. Results indicated significantly more distress in the abused-witness children than in the comparison group, with nonabused witness children's scores falling between the two. Age of child and types of violence were mediating factors. 
Implications of the findings are discussed.", "title": "" }, { "docid": "2f270908a1b4897b7d008d9673e3300b", "text": "The implementation of 3D stereo matching in real time is an important problem for many vision applications and algorithms. The current work, extending previous results by the same authors, presents in detail an architecture which combines the methods of Absolute Differences, Census, and Belief Propagation in an integrated architecture suitable for implementation with Field Programmable Gate Array (FPGA) logic. Emphasis on the present work is placed on the justification of dimensioning the system, as well as detailed design and testing information for a fully placed and routed design to process 87 frames per sec (fps) in 1920 × 1200 resolution, and a fully implemented design for 400 × 320 which runs up to 1570 fps.", "title": "" }, { "docid": "55fdf6b013aa8e4082137a4c84a2873d", "text": "The Named Data Networking (NDN) project is emerging as one of the most promising information-centric future Internet architectures. Besides NDN recognized potential as a content retrieval solution in wired and wireless domains, its innovative concepts, such as named content, name-based routing and in-network caching, particularly suit the requirements of Internet of Things (IoT) interconnecting billions of heterogeneous objects. IoT highly differs from today's Internet due to resource-constrained devices, massive volumes of small exchanged data, and traffic type diversity. The study in this paper addresses the design of a high-level NDN architecture, whose main components are overhauled to specifically meet the IoT challenges.", "title": "" }, { "docid": "608bf85fa593c7ddff211c5bcc7dd20a", "text": "We introduce a composite deep neural network architecture for supervised and language independent context sensitive lemmatization. The proposed method considers the task as to identify the correct edit tree representing the transformation between a word-lemma pair. To find the lemma of a surface word, we exploit two successive bidirectional gated recurrent structures the first one is used to extract the character level dependencies and the next one captures the contextual information of the given word. The key advantages of our model compared to the state-of-the-art lemmatizers such as Lemming and Morfette are (i) it is independent of human decided features (ii) except the gold lemma, no other expensive morphological attribute is required for joint learning. We evaluate the lemmatizer on nine languages Bengali, Catalan, Dutch, Hindi, Hungarian, Italian, Latin, Romanian and Spanish. It is found that except Bengali, the proposed method outperforms Lemming and Morfette on the other languages. To train the model on Bengali, we develop a gold lemma annotated dataset1 (having 1, 702 sentences with a total of 20, 257 word tokens), which is an additional contribution of this work.", "title": "" }, { "docid": "0d1f88dbd4a04748a83fe741a86518c1", "text": "The focus of this paper is to investigate how writing computer programs can help children develop their storytelling and creative writing abilities. The process of writing a program---coding---has long been considered only in terms of computer science, but such coding is also reflective of the imaginative and narrative elements of fiction writing workshops. 
Writing to program can also serve as programming to write, in which a child learns the importance of sequence, structure, and clarity of expression---three aspects characteristic of effective coding and good storytelling alike. While there have been efforts examining how learning to write code can be facilitated by storytelling, there has been little exploration as to how such creative coding can also be directed to teach students about the narrative and storytelling process. Using the introductory programming language Scratch, this paper explores the potential of having children create their own digital stories with the software and how the narrative structure of these stories offers kids the opportunity to better understand the process of expanding an idea into the arc of a story.", "title": "" }, { "docid": "592b959fb3beef020e9dbafd804d897f", "text": "In this paper, we study the effectiveness of phishing blacklists. We used 191 fresh phish that were less than 30 minutes old to conduct two tests on eight anti-phishing toolbars. We found that 63% of the phishing campaigns in our dataset lasted less than two hours. Blacklists were ineffective when protecting users initially, as most of them caught less than 20% of phish at hour zero. We also found that blacklists were updated at different speeds, and varied in coverage, as 47% 83% of phish appeared on blacklists 12 hours from the initial test. We found that two tools using heuristics to complement blacklists caught significantly more phish initially than those using only blacklists. However, it took a long time for phish detected by heuristics to appear on blacklists. Finally, we tested the toolbars on a set of 13,458 legitimate URLs for false positives, and did not find any instance of mislabeling for either blacklists or heuristics. We present these findings and discuss ways in which anti-phishing tools can be improved.", "title": "" }, { "docid": "eddeeb5b00dc7f82291b3880956e2f01", "text": "This study aims at building a robust method for semiautomated information extraction of pavement markings detected from mobile laser scanning (MLS) point clouds. The proposed workflow consists of three components: 1) preprocessing, 2) extraction, and 3) classification. In preprocessing, the three-dimensional (3-D) MLS point clouds are converted into radiometrically corrected and enhanced two-dimensional (2-D) intensity imagery of the road surface. Then, the pavement markings are automatically extracted with the intensity using a set of algorithms, including Otsu's thresholding, neighbor-counting filtering, and region growing. Finally, the extracted pavement markings are classified with the geometric parameters by using a manually defined decision tree. A study was conducted by using the MLS dataset acquired in Xiamen, Fujian, China. The results demonstrated that the proposed workflow and method can achieve 92% in completeness, 95% in correctness, and 94% in F-score.", "title": "" }, { "docid": "1386c523706fdd4535a8a75c33c4e615", "text": "People have a basic need to maintain the integrity of the self, a global sense of personal adequacy. Events that threaten self-integrity arouse stress and self-protective defenses that can hamper performance and growth. However, an intervention known as self-affirmation can curb these negative outcomes. Self-affirmation interventions typically have people write about core personal values. 
The interventions bring about a more expansive view of the self and its resources, weakening the implications of a threat for personal integrity. Timely affirmations have been shown to improve education, health, and relationship outcomes, with benefits that sometimes persist for months and years. Like other interventions and experiences, self-affirmations can have lasting benefits when they touch off a cycle of adaptive potential, a positive feedback loop between the self-system and the social system that propagates adaptive outcomes over time. The present review highlights both connections with other disciplines and lessons for a social psychological understanding of intervention and change.", "title": "" }, { "docid": "99ffc7cd601d1c43bbf7e3537632e95c", "text": "Despite numerous advances in IT security, many computer users are still vulnerable to security-related risks because they do not comply with organizational policies and procedures. In a network setting, individual risk can extend to all networked users. Endpoint security refers to the set of organizational policies, procedures, and practices directed at securing the endpoint of the network connections – the individual end user. As such, the challenges facing IT managers in providing effective endpoint security are unique in that they often rely heavily on end user participation. But vulnerability can be minimized through modification of desktop security programs and increased vigilance on the part of the system administrator or CSO. The cost-prohibitive nature of these measures generally dictates targeting high-risk users on an individual basis. It is therefore important to differentiate between individuals who are most likely to pose a security risk and those who will likely follow most organizational policies and procedures.", "title": "" }, { "docid": "45bb19cdb9508acf8796e9f43951571f", "text": "The biomedical literature is expanding at ever-increasing rates, and it has become extremely challenging for researchers to keep abreast of new data and discoveries even in their own domains of expertise. We introduce PaperBot, a configurable, modular, open-source crawler to automatically find and efficiently index peer-reviewed publications based on periodic full-text searches across publisher web portals. PaperBot may operate stand-alone or it can be easily integrated with other software platforms and knowledge bases. Without user interactions, PaperBot retrieves and stores the bibliographic information (full reference, corresponding email contact, and full-text keyword hits) based on pre-set search logic from a wide range of sources including Elsevier, Wiley, Springer, PubMed/PubMedCentral, Nature, and Google Scholar. Although different publishing sites require different search configurations, the common interface of PaperBot unifies the process from the user perspective. Once saved, all information becomes web accessible allowing efficient triage of articles based on their actual relevance and seamless annotation of suitable metadata content. The platform allows the agile reconfiguration of all key details, such as the selection of search portals, keywords, and metadata dimensions. The tool also provides a one-click option for adding articles manually via digital object identifier or PubMed ID. The microservice architecture of PaperBot implements these capabilities as a loosely coupled collection of distinct modules devised to work separately, as a whole, or to be integrated with or replaced by additional software. 
All metadata is stored in a schema-less NoSQL database designed to scale efficiently in clusters by minimizing the impedance mismatch between relational model and in-memory data structures. As a testbed, we deployed PaperBot to help identify and manage peer-reviewed articles pertaining to digital reconstructions of neuronal morphology in support of the NeuroMorpho.Org data repository. PaperBot enabled the custom definition of both general and neuroscience-specific metadata dimensions, such as animal species, brain region, neuron type, and digital tracing system. Since deployment, PaperBot helped NeuroMorpho.Org more than quintuple the yearly volume of processed information while maintaining a stable personnel workforce.", "title": "" }, { "docid": "73531bf62f19857e68e04e8b6470679e", "text": "What will it take for drones—and the whole associated ecosystem—to take off? Arguably, infallible command and control (C&C) channels for safe and autonomous flying, and high-throughput links for multi-purpose live video streaming. And indeed, meeting these aspirations may entail full cellular support, provided through 5G-and-beyond hardware and software upgrades by both mobile operators and manufacturers of these unmanned aerial vehicles (UAVs). In this article, we vouch for massive MIMO as the key building block to realize 5G-connected UAVs. Through the sheer evidence of 3GPP-compliant simulations, we demonstrate how massive MIMO can be enhanced by complementary network-based and UAV-based solutions, resulting in consistent UAV C&C support, large UAV uplink data rates, and harmonious coexistence with legacy ground users.", "title": "" }, { "docid": "40cd4d0863ed757709530af59e928e3b", "text": "Kynurenic acid (KYNA) is an endogenous antagonist of ionotropic glutamate receptors and the α7 nicotinic acetylcholine receptor, showing anticonvulsant and neuroprotective activity. In this study, the presence of KYNA in food and honeybee products was investigated. KYNA was found in all 37 tested samples of food and honeybee products. The highest concentration of KYNA was obtained from honeybee products’ samples, propolis (9.6 nmol/g), honey (1.0–4.8 nmol/g) and bee pollen (3.4 nmol/g). A high concentration was detected in fresh broccoli (2.2 nmol/g) and potato (0.7 nmol/g). Only traces of KYNA were found in some commercial baby products. KYNA administered intragastrically in rats was absorbed from the intestine into the blood stream and transported to the liver and to the kidney. In conclusion, we provide evidence that KYNA is a constituent of food and that it can be easily absorbed from the digestive system.", "title": "" }, { "docid": "139d9d5866a1e455af954b2299bdbcf6", "text": "1. Introduction. Reasoning about knowledge and belief has long been an issue of concern in philosophy and artificial intelligence (cf. [Hil],[MH],[Mo]). Recently we have argued that reasoning about knowledge is also crucial in understanding and reasoning about protocols in distributed systems, since messages can be viewed as changing the state of knowledge of a system [HM]; knowledge also seems to be of vital importance in cryptography theory [Me] and database theory. In order to formally reason about knowledge, we need a good semantic model. Part of the difficulty in providing such a model is that there is no agreement on exactly what the properties of knowledge are or should be. (* This author's work was supported in part by DARPA contract N00039-82-C-0250.) For example, is it the case that you know what facts you know?
Do you know what you don't know? Do you know only true things, or can something you \"know\" actually be false? Possible-worlds semantics provide a good formal tool for \"customizing\" a logic so that, by making minor changes in the semantics, we can capture different sets of axioms. The idea, first formalized by Hintikka [Hil], is that in each state of the world, an agent (or knower or player: we use all these words interchangeably) has other states or worlds that he considers possible. An agent knows p exactly if p is true in all the worlds that he considers possible. As Kripke pointed out [Kr], by imposing various conditions on this possibility relation, we can capture a number of interesting axioms. For example, if we require that the real world always be one of the possible worlds (which amounts to saying that the possibility relation is reflexive), then it follows that you can't know anything false. Similarly, we can show that if the relation is transitive, then you know what you know. If the relation is transitive and symmetric, then you also know what you don't know. (The one-knower models where the possibility relation is reflexive corresponds to the classical modal logic T, while the reflexive and transitive case corresponds to S4, and the reflexive, symmetric and transitive case corresponds to S5.) Once we have a general framework for modelling knowledge, a reasonable question to ask is how hard it is to reason about knowledge. In particular, how hard is it to decide if a given formula is valid or satisfiable? The answer to this question depends crucially on the choice of axioms. For example, in the one-knower case, Ladner [La] has shown that for T and S4 the problem of deciding satisfiability is complete in polynomial space, while for S5 it is NP-complete, and thus no harder than the satisfiability problem for propositional logic. Our aim in this paper is to reexamine the possible-worlds framework for knowledge and belief with four particular points of emphasis: (1) we show how general techniques for finding decision procedures and complete axiomatizations apply to models for knowledge and belief, (2) we show how sensitive the difficulty of the decision procedure is to such issues as the choice of modal operators and the axiom system, (3) we discuss how notions of common knowledge and implicit knowledge among a group of agents fit into the possible-worlds framework, and, finally, (4) we consider to what extent the possible-worlds approach is a viable one for modelling knowledge and belief. We begin in Section 2 by reviewing possible-world semantics in detail, and proving that the many-knower versions of T, S4, and S5 do indeed capture some of the more common axiomatizations of knowledge. In Section 3 we turn to complexity-theoretic issues. We review some standard notions from complexity theory, and then reprove and extend Ladner's results to show that the decision procedures for the many-knower versions of T, S4, and S5 are all complete in polynomial space.* This suggests that for S5, reasoning about many agents' knowledge is qualitatively harder than just reasoning about one agent's knowledge of the real world and of his own knowledge. In Section 4 we turn our attention to modifying the model so that it can deal with belief rather than knowledge, where one can believe something that is false.
This turns out to be somewhat more complicated than dropping the assumption of reflexivity, but it can still be done in the possible-worlds framework. Results about decision procedures and complete axiomatizations for belief parallel those for knowledge. In Section 5 we consider what happens when operators for common knowledge and implicit knowledge are added to the language. A group has common knowledge of a fact p exactly when everyone knows that everyone knows that everyone knows ... that p is true. (Common knowledge is essentially what McCarthy's \"fool\" knows; cf. [MSHI].) A group has implicit knowledge of p if, roughly speaking, when the agents pool their knowledge together they can deduce p. (Note our usage of the notion of \"implicit knowledge\" here differs slightly from the way it is used in [Lev2] and [FH].) As shown in [HM1], common knowledge is an essential state for reaching agreements and coordinating action. (* A problem is said to be complete with respect to a complexity class if, roughly speaking, it is the hardest problem in that class; see Section 3 for more details.) For very similar reasons, common knowledge also seems to play an important role in human understanding of speech acts (cf. [CM]). The notion of implicit knowledge arises when reasoning about what states of knowledge a group can attain through communication, and thus is also crucial when reasoning about the efficacy of speech acts and about communication protocols in distributed systems. It turns out that adding an implicit knowledge operator to the language does not substantially change the complexity of deciding the satisfiability of formulas in the language, but this is not the case for common knowledge. Using standard techniques from PDL (Propositional Dynamic Logic; cf. [FL],[Pr]), we can show that when we add common knowledge to the language, the satisfiability problem for the resulting logic (whether it is based on T, S4, or S5) is complete in deterministic exponential time, as long as there are at least two knowers. Thus, adding a common knowledge operator renders the decision procedure qualitatively more complex. (Common knowledge does not seem to be of much interest in the case of one knower. In fact, in the case of S4 and S5, if there is only one knower, knowledge and common knowledge are identical.) We conclude in Section 6 with some discussion of the appropriateness of the possible-worlds approach for capturing knowledge and belief, particularly in light of our results on computational complexity. Detailed proofs of the theorems stated here, as well as further discussion of these results, can be found in the full paper ([HM2]). 2.2 Possible-worlds semantics: Following Hintikka [Hil], Sato [Sa], Moore [Mo], and others, we use a possible-worlds semantics to model knowledge. This provides us with a general framework for our semantical investigations of knowledge and belief. (Everything we say about \"knowledge\" in this subsection applies equally well to belief.) The essential idea behind possible-worlds semantics is that an agent's state of knowledge corresponds to the extent to which he can determine what world he is in. In a given world, we can associate with each agent the set of worlds that, according to the agent's knowledge, could possibly be the real world.
An agent is then said to know a fact p exactly if p is true in all the worlds in this set; he does not know p if there is at least one world that he considers possible where p does not hold. (* We discuss the ramifications of this point in Section 6.) (** The name K(m) is inspired by the fact that for one knower, the system reduces to the well-known modal logic K.) [...] that can be said is that we are modelling a rather idealised reasoner, who knows all tautologies and all the logical consequences of his knowledge. If we take the classical interpretation of knowledge as true, justified belief, then an axiom such as A3 seems to be necessary. On the other hand, philosophers have shown that axiom A5 does not hold with respect to this interpretation ([Len]). However, the S5 axioms do capture an interesting interpretation of knowledge appropriate for reasoning about distributed systems (see [HM1] and Section 6). We continue here with our investigation of all these logics, deferring further comments on their appropriateness to Section 6. Theorem 3 implies that the provable formulas of K(m) correspond precisely to the formulas that are valid for Kripke worlds. As Kripke showed [Kr], there are simple conditions that we can impose on the possibility relations Pi so that the valid formulas of the resulting worlds are exactly the provable formulas of T(m), S4(m), and S5(m) respectively. We will try to motivate these conditions, but first we need a few definitions. (* Since Lemma 4(b) says that a relation that is both reflexive and Euclidean must also be transitive, the reader may suspect that axiom A4 is redundant in S5. This indeed is the case.)", "title": "" }, { "docid": "906c92a4e913d2b7e478155492a69013", "text": "Most investigations into near-memory hardware accelerators for deep neural networks have primarily focused on inference, while the potential of accelerating training has received relatively little attention so far. Based on an in-depth analysis of the key computational patterns in state-of-the-art gradient-based training methods, we propose an efficient near-memory acceleration engine called NTX that can be used to train state-of-the-art deep convolutional neural networks at scale. Our main contributions are: (i) a loose coupling of RISC-V cores and NTX co-processors reducing offloading overhead by 7× over previously published results; (ii) an optimized IEEE 754 compliant data path for fast high-precision convolutions and gradient propagation; (iii) evaluation of near-memory computing with NTX embedded into residual area on the Logic Base die of a Hybrid Memory Cube; and (iv) a scaling analysis to meshes of HMCs in a data center scenario.
We demonstrate a 2.7× energy efficiency improvement of NTX over contemporary GPUs at 4.4× less silicon area, and a compute performance of 1.2 Tflop/s for training large state-of-the-art networks with full floating-point precision. At the data center scale, a mesh of NTX achieves above 95 percent parallel and energy efficiency, while providing 2.1× energy savings or 3.1× performance improvement over a GPU-based system.", "title": "" }, { "docid": "38419655a4a8fedfd9e0c3001741f165", "text": "Convolutional Neural Networks (CNNs) have achieved great success in image recognition tasks by automatically learning a hierarchical feature representation from raw data. While the majority of Time-Series Classification (TSC) literature is focused on 1D signals, this paper uses Recurrence Plots (RP) to transform time-series into 2D texture images and then takes advantage of the deep CNN classifier. Image representation of time-series introduces different feature types that are not available for 1D signals, and therefore TSC can be treated as a texture image recognition task. The CNN model also allows learning different levels of representations together with a classifier, jointly and automatically. Therefore, using RP and CNN in a unified framework is expected to boost the recognition rate of TSC. Experimental results on the UCR time-series classification archive demonstrate competitive accuracy of the proposed approach, compared not only to the existing deep architectures, but also to the state-of-the-art TSC algorithms.", "title": "" }, { "docid": "1328ced6939005175d3fbe2ef95fd067", "text": "We present SNIPER, an algorithm for performing efficient multi-scale training in instance level visual recognition tasks. Instead of processing every pixel in an image pyramid, SNIPER processes context regions around ground-truth instances (referred to as chips) at the appropriate scale. For background sampling, these context-regions are generated using proposals extracted from a region proposal network trained with a short learning schedule. Hence, the number of chips generated per image during training adaptively changes based on the scene complexity. SNIPER only processes 30% more pixels compared to the commonly used single scale training at 800x1333 pixels on the COCO dataset. But, it also observes samples from extreme resolutions of the image pyramid, like 1400x2000 pixels.
As SNIPER operates on resampled low resolution chips (512x512 pixels), it can have a batch size as large as 20 on a single GPU even with a ResNet-101 backbone. Therefore it can benefit from batch-normalization during training without the need for synchronizing batch-normalization statistics across GPUs. SNIPER brings training of instance level recognition tasks like object detection closer to the protocol for image classification and suggests that the commonly accepted guideline that it is important to train on high resolution images for instance level visual recognition tasks might not be correct. Our implementation based on Faster-RCNN with a ResNet-101 backbone obtains an mAP of 47.6% on the COCO dataset for bounding box detection and can process 5 images per second during inference with a single GPU. Code is available at https://github.com/mahyarnajibi/SNIPER/.", "title": "" }, { "docid": "ae579fccab792401cbbd7b6225c17e1b", "text": "The generalized assignment problem can be viewed as the following problem of scheduling parallel machines with costs. Each job is to be processed by exactly one machine; processing job j on machine i requires time p_ij and incurs a cost of c_ij; each machine i is available for T_i time units, and the objective is to minimize the total cost incurred. Our main result is as follows. There is a polynomial-time algorithm that, given a value C, either proves that no feasible schedule of cost C exists, or else finds a schedule of cost at most C where each machine i is used for at most 2T_i time units. We also extend this result to a variant of the problem where, instead of a fixed processing time p_ij, there is a range of possible processing times for each machine-job pair, and the cost linearly increases as the processing time decreases. We show that these results imply a polynomial-time 2-approximation algorithm to minimize a weighted sum of the cost and the makespan, i.e., the maximum job completion time. We also consider the objective of minimizing the mean job completion time. We show that there is a polynomial-time algorithm that, given values M and T, either proves that no schedule of mean job completion time M and makespan T exists, or else finds a schedule of mean job completion time at most M and makespan at most 2T.", "title": "" } ]
scidocsrr
2119d665534a15b04e49f996db25ac47
The contribution of attentional bias to worry: Distinguishing the roles of selective engagement and disengagement
[ { "docid": "1c7131fcb031497b2c1487f9b25d8d4e", "text": "Biases in information processing undoubtedly play an important role in the maintenance of emotion and emotional disorders. In an attentional cueing paradigm, threat words and angry faces had no advantage over positive or neutral words (or faces) in attracting attention to their own location, even for people who were highly state-anxious. In contrast, the presence of threatening cues (words and faces) had a strong impact on the disengagement of attention. When a threat cue was presented and a target subsequently presented in another location, high state-anxious individuals took longer to detect the target relative to when either a positive or a neutral cue was presented. It is concluded that threat-related stimuli affect attentional dwell time and the disengage component of attention, leaving the question of whether threat stimuli affect the shift component of attention open to debate.", "title": "" } ]
[ { "docid": "42452d6df7372cdc9c2cdebd8f0475cb", "text": "This paper presents SgxPectre Attacks that exploit the recently disclosed CPU bugs to subvert the confidentiality and integrity of SGX enclaves. Particularly, we show that when branch prediction of the enclave code can be influenced by programs outside the enclave, the control flow of the enclave program can be temporarily altered to execute instructions that lead to observable cache-state changes. An adversary observing such changes can learn secrets inside the enclave memory or its internal registers, thus completely defeating the confidentiality guarantee offered by SGX. To demonstrate the practicality of our SgxPectre Attacks, we have systematically explored the possible attack vectors of branch target injection, approaches to win the race condition during enclave’s speculative execution, and techniques to automatically search for code patterns required for launching the attacks. Our study suggests that any enclave program could be vulnerable to SgxPectre Attacks since the desired code patterns are available in most SGX runtimes (e.g., Intel SGX SDK, Rust-SGX, and Graphene-SGX). Most importantly, we have applied SgxPectre Attacks to steal seal keys and attestation keys from Intel signed quoting enclaves. The seal key can be used to decrypt sealed storage outside the enclaves and forge valid sealed data; the attestation key can be used to forge attestation signatures. For these reasons, SgxPectre Attacks practically defeat SGX’s security protection. This paper also systematically evaluates Intel’s existing countermeasures against SgxPectre Attacks and discusses the security implications.", "title": "" }, { "docid": "7c2c987c2fc8ea0b18d8361072fa4e31", "text": "Information Retrieval (IR) and Answer Extraction are often designed as isolated or loosely connected components in Question Answering (QA), with repeated overengineering on IR, and not necessarily performance gain for QA. We propose to tightly integrate them by coupling automatically learned features for answer extraction to a shallow-structured IR model. Our method is very quick to implement, and significantly improves IR for QA (measured in Mean Average Precision and Mean Reciprocal Rank) by 10%-20% against an uncoupled retrieval baseline in both document and passage retrieval, which further leads to a downstream 20% improvement in QA F1.", "title": "" }, { "docid": "57261e77a6e8f6a0c984f5e199a71554", "text": "We present a software framework for simulating the HCF Controlled Channel Access (HCCA) in an IEEE 802.11e system. The proposed approach allows for flexible integration of different scheduling algorithms with the MAC. The 802.11e system consists of three modules: Classifier, HCCA Scheduler, MAC. We define a communication interface exported by the MAC module to the HCCA Scheduler. A Scheduler module implementing the reference scheduler defined in the draft IEEE 802.11e document is also described. The software framework reported in this paper has been implemented using the Network Simulator 2 platform. A preliminary performance analysis of the reference scheduler is also reported.", "title": "" }, { "docid": "fba0ff24acbe07e1204b5fe4c492ab72", "text": "To ensure high quality software, it is crucial that non‐functional requirements (NFRs) are well specified and thoroughly tested in parallel with functional requirements (FRs). Nevertheless, in requirement specification the focus is mainly on FRs, even though NFRs have a critical role in the success of software projects. 
This study presents a systematic literature review of the NFR specification in order to identify the current state of the art and needs for future research. The systematic review summarizes the 51 relevant papers found and discusses them within seven major sub categories with “combination of other approaches” being the one with most prior results.", "title": "" }, { "docid": "a7c2c2889b54a4f0e22b1cb09bbd8d6b", "text": "In this paper we present an efficient algorithm for multi-layer depth peeling via bucket sort of fragments on GPU, which makes it possible to capture up to 32 layers simultaneously with correct depth ordering in a single geometry pass. We exploit multiple render targets (MRT) as storage and construct a bucket array of size 32 per pixel. Each bucket is capable of holding only one fragment, and can be concurrently updated using the MAX/MIN blending operation. During the rasterization, the depth range of each pixel location is divided into consecutive subintervals uniformly, and a linear bucket sort is performed so that fragments within each subintervals will be routed into the corresponding buckets. In a following fullscreen shader pass, the bucket array can be sequentially accessed to get the sorted fragments for further applications. Collisions will happen when more than one fragment is routed to the same bucket, which can be alleviated by multi-pass approach. We also develop a two-pass approach to further reduce the collisions, namely adaptive bucket depth peeling. In the first geometry pass, the depth range is redivided into non-uniform subintervals according to the depth distribution to make sure that there is only one fragment within each subinterval. In the following bucket sorting pass, there will be only one fragment routed into each bucket and collisions will be substantially reduced. Our algorithm shows up to 32 times speedup to the classical depth peeling especially for large scenes with high depth complexity, and the experimental results are visually faithful to the ground truth. Also it has no requirement of pre-sorting geometries or post-sorting fragments, and is free of read-modify-write (RMW) hazards.", "title": "" }, { "docid": "af7736d4e796d3439613ed06ca4e4b72", "text": "The past few years have witnessed the fast development of different regularization methods for deep learning models such as fully-connected deep neural networks (DNNs) and Convolutional Neural Networks (CNNs). Most of previous methods mainly consider to drop features from input data and hidden layers, such as Dropout, Cutout and DropBlocks. DropConnect select to drop connections between fully-connected layers. By randomly discard some features or connections, the above mentioned methods control the overfitting problem and improve the performance of neural networks. In this paper, we proposed two novel regularization methods, namely DropFilter and DropFilter-PLUS, for the learning of CNNs. Different from the previous methods, DropFilter and DropFilter-PLUS selects to modify the convolution filters. For DropFilter-PLUS, we find a suitable way to accelerate the learning process based on theoretical analysis. Experimental results on MNISTshow that using DropFilter and DropFilter-PLUS may improve performance on image classification tasks.", "title": "" }, { "docid": "3ea9d312027505fb338a1119ff01d951", "text": "Many experiments provide evidence that practicing retrieval benefits retention relative to conditions of no retrieval practice. 
Nearly all prior research has employed retrieval practice requiring overt responses, but a few experiments have shown that covert retrieval also produces retention advantages relative to control conditions. However, direct comparisons between overt and covert retrieval are scarce: Does covert retrieval-thinking of but not producing responses-on a first test produce the same benefit as overt retrieval on a criterial test given later? We report 4 experiments that address this issue by comparing retention on a second test following overt or covert retrieval on a first test. In Experiment 1 we used a procedure designed to ensure that subjects would retrieve on covert as well as overt test trials and found equivalent testing effects in the 2 cases. In Experiment 2 we replicated these effects using a procedure that more closely mirrored natural retrieval processes. In Experiment 3 we showed that overt and covert retrieval produced equivalent testing effects after a 2-day delay. Finally, in Experiment 4 we showed that covert retrieval benefits retention more than restudying. We conclude that covert retrieval practice is as effective as overt retrieval practice, a conclusion that contravenes hypotheses in the literature proposing that overt responding is better. This outcome has an important educational implication: Students can learn as much from covert self-testing as they would from overt responding.", "title": "" }, { "docid": "7709df997c72026406d257c85dacb271", "text": "This paper addresses the task of document retrieval based on the degree of document relatedness to the meanings of a query by presenting a semantic-enabled language model. Our model relies on the use of semantic linking systems for forming a graph representation of documents and queries, where nodes represent concepts extracted from documents and edges represent semantic relatedness between concepts. Based on this graph, our model adopts a probabilistic reasoning model for calculating the conditional probability of a query concept given values assigned to document concepts. We present an integration framework for interpolating other retrieval systems with the presented model in this paper. Our empirical experiments on a number of TREC collections show that the semantic retrieval has a synergetic impact on the results obtained through state of the art keyword-based approaches, and the consideration of semantic information obtained from entity linking on queries and documents can complement and enhance the performance of other retrieval models.", "title": "" }, { "docid": "7d02f07418dc82b0645b6933a3fecfc0", "text": "This article is part of a For-Discussion-Section of Methods of Information in Medicine about the paper \"Evidence-based Health Informatics: How Do We Know What We Know?\" written by Elske Ammenwerth [1]. It is introduced by an editorial. This article contains the combined commentaries invited to independently comment on the Ammenwerth paper. In subsequent issues the discussion can continue through letters to the editor. With these comments on the paper \"Evidence-based Health Informatics: How do we know what we know?\", written by Elske Ammenwerth [1], the journal seeks to stimulate a broad discussion on the challenges of evaluating information processing and information technology in health care. An international group of experts has been invited by the editor of Methods to comment on this paper. 
Each of the invited commentaries forms one section of this paper.", "title": "" }, { "docid": "498eada57edb9120da164c5cb396198b", "text": "We propose a passive blackbox-based technique for determining the type of access point (AP) connected to a network. Essentially, a stimulant (i.e., packet train) that emulates normal data transmission is sent through the access point. Since access points from different vendors are architecturally heterogeneous (e.g., chipset, firmware, driver), each AP will act upon the packet train differently. By applying wavelet analysis to the resultant packet train, a distinct but reproducible pattern is extracted allowing a clear classification of different AP types. This has two important applications: (1) as a system administrator, this technique can be used to determine if a rogue access point has connected to the network; and (2) as an attacker, fingerprinting the access point is necessary to launch driver/firmware specific attacks. Extensive experiments were conducted (over 60GB of data was collected) to differentiate 6 APs. We show that this technique can classify APs with a high accuracy (in some cases, we can classify successfully 100% of the time) with as little as 100000 packets. Further, we illustrate that this technique is independent of the stimulant traffic type (e.g., TCP or UDP). Finally, we show that the AP profile is stable across multiple models of the same AP.", "title": "" }, { "docid": "dd1f8a5eae50d0a026387ba1b6695bef", "text": "Cloud computing is one of the significant development that utilizes progressive computational power and upgrades data distribution and data storing facilities. With cloud information services, it is essential for information to be saved in the cloud and also distributed across numerous customers. Cloud information repository is involved with issues of information integrity, data security and information access by unapproved users. Hence, an autonomous reviewing and auditing facility is necessary to guarantee that the information is effectively accommodated and used in the cloud. In this paper, a comprehensive survey on the state-of-art techniques in data auditing and security are discussed. Challenging problems in information repository auditing and security are presented. Finally, directions for future research in data auditing and security have been discussed.", "title": "" }, { "docid": "013c6f8931a8f9e0cff4fb291571e5bf", "text": "Herrmann-Pillath, Carsten, Libman, Alexander, and Yu, Xiaofan—Economic integration in China: Politics and culture The aim of the paper is to explicitly disentangle the role of political and cultural boundaries as factors of fragmentation of economies within large countries. On the one hand, local protectionism plays a substantial role in many federations and decentralized states. On the other hand, if the country exhibits high level of cultural heterogeneity, it may also contribute to the economic fragmentation; however, this topic has received significantly less attention in the literature. This paper looks at the case of China and proxies the cultural heterogeneity by the heterogeneity of local dialects. It shows that the effect of politics clearly dominates that of culture: while provincial borders seem to have a strong influence disrupting economic ties, economic linkages across provinces, even if the regions fall into the same linguistic zone, are rather weak and, on the contrary, linguistic differences within provinces do not prevent economic integration. 
For some language zones we do, however, find a stronger effect on economic integration. Journal of Comparative Economics 42 (2) (2014) 470–492. Frankfurt School of Finance and Management, Germany; Russian Academy of Sciences, Russia. 2013 Association for Comparative Economic Studies Published by Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "1eea111c3efcc67fcc1bb6f358622475", "text": "Methyl Cellosolve (the monomethyl ether of ethylene glycol) has been widely used as the organic solvent in ninhydrin reagents for amino acid analysis; it has, however, properties that are disadvantageous in a reagent for everyday employment. The solvent is toxic and it is difficult to keep the ether peroxide-free. A continuing effort to arrive at a chemically preferable and relatively nontoxic substitute for methyl Cellosolve has led to experiments with dimethyl s&oxide, which proves to be a better solvent for the reduced form of ninhydrin (hydrindantin) than is methyl Cellosolve. Dimethyl sulfoxide can replace the latter, volume for volume, in a ninhydrin reagent mixture that gives equal performance and has improved stability. The result is a ninhydrin-hydrindantin solution in 75% dimethyl sulfoxide25 % 4 M lithium acetate buffer at pH 5.2. This type of mixture, with appropriate hydrindantin concentrations, is recommended to replace methyl Cellosolve-containing reagents in the quantitative determination of amino acids by automatic analyzers and by the manual ninhydrin method.", "title": "" }, { "docid": "11828571b57966958bd364947f41ad40", "text": "A smart city is developed, deployed and maintained with the help of Internet of Things (IoT). The smart cities have become an emerging phenomena with rapid urban growth and boost in the field of information technology. However, the function and operation of a smart city is subject to the pivotal development of security architectures. The contribution made in this paper is twofold. Firstly, it aims to provide a detailed, categorized and comprehensive overview of the research on security problems and their existing solutions for smart cities. The categorization is based on several factors such as governance, socioeconomic and technological factors. This classification provides an easy and concise view of the security threats, vulnerabilities and available solutions for the respective technologies areas that are proposed over the period 2010-2015. Secondly, an IoT testbed for smart cities architecture, i.e., SmartSantander is also analyzed with respect to security threats and vulnerabilities to smart cities. The existing best practices regarding smart city security are discussed and analyzed with respect to their performance, which could be used by different stakeholders of the smart cities.", "title": "" }, { "docid": "02c8093183af96808a71b93ee3103996", "text": "The medical field stands to see significant benefits from the recent advances in deep learning. Knowing the uncertainty in the decision made by any machine learning algorithm is of utmost importance for medical practitioners. This study demonstrates the utility of using Bayesian LSTMs for classification of medical time series. Four medical time series datasets are used to show the accuracy improvement Bayesian LSTMs provide over standard LSTMs. Moreover, we show cherry-picked examples of confident and uncertain classifications of the medical time series. 
With simple modifications of the common practice for deep learning, significant improvements can be made for the medical practitioner and patient.", "title": "" }, { "docid": "812687a5291d786ecda102adda03700c", "text": "The overall goal is to show that conceptual spaces are more promising than other ways of modelling the semantics of natural language. In particular, I will show how they can be used to model actions and events. I will also outline how conceptual spaces provide a cognitive grounding for word classes, including nouns, adjectives, prepositions and verbs.", "title": "" }, { "docid": "e573d85271e3f3cc54b774de8a5c6dd9", "text": "This paper explores the use of a learned classifier for post-OCR text correction. Experiments with the Arabic language show that this approach, which integrates a weighted confusion matrix and a shallow language model, improves the vast majority of segmentation and recognition errors, the most frequent types of error on our dataset.", "title": "" }, { "docid": "e87c93e13f94191450216e308215ff38", "text": "Fair scheduling of delay and rate-sensitive packet flows over a wireless channel is not addressed effectively by most contemporary wireline fair scheduling algorithms because of two unique characteristics of wireless media: (a) bursty channel errors, and (b) location-dependent channel capacity and errors. Besides, in packet cellular networks, the base station typically performs the task of packet scheduling for both downlink and uplink flows in a cell; however a base station has only a limited knowledge of the arrival processes of uplink flows.In this paper, we propose a new model for wireless fair scheduling based on an adaptation of fluid fair queueing to handle location-dependent error bursts. We describe an ideal wireless fair scheduling algorithm which provides a packetized implementation of the fluid model while assuming full knowledge of the current channel conditions. For this algorithm, we derive the worst-case throughput and delay bounds. Finally, we describe a practical wireless scheduling algorithm which approximates the ideal algorithm. Through simulations, we show that the algorithm achieves the desirable properties identified in the wireless fluid fair queueing model.", "title": "" }, { "docid": "bb01b5e24d7472ab52079dcb8a65358d", "text": "There are plenty of classification methods that perform well when training and testing data are drawn from the same distribution. However, in real applications, this condition may be violated, which causes degradation of classification accuracy. Domain adaptation is an effective approach to address this problem. In this paper, we propose a general domain adaptation framework from the perspective of prediction reweighting, from which a novel approach is derived. Different from the major domain adaptation methods, our idea is to reweight predictions of the training classifier on testing data according to their signed distance to the domain separator, which is a classifier that distinguishes training data (from source domain) and testing data (from target domain). We then propagate the labels of target instances with larger weights to ones with smaller weights by introducing a manifold regularization method. It can be proved that our reweighting scheme effectively brings the source and target domains closer to each other in an appropriate sense, such that classification in target domain becomes easier. 
The proposed method can be implemented efficiently by a simple two-stage algorithm, and the target classifier has a closed-form solution. The effectiveness of our approach is verified by the experiments on artificial datasets and two standard benchmarks, a visual object recognition task and a cross-domain sentiment analysis of text. Experimental results demonstrate that our method is competitive with the state-of-the-art domain adaptation algorithms.", "title": "" } ]
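The prediction-reweighting idea in the domain-adaptation passage above can be sketched with standard tools. The snippet below is only an illustration under assumptions, not the authors' implementation: the synthetic data, the logistic-regression domain separator, and the exponential weight function are placeholders chosen for clarity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy source (labeled) and target (unlabeled) data in one feature space;
# the target domain is shifted to simulate a distribution mismatch.
X_src = rng.normal(0.0, 1.0, size=(200, 5))
y_src = (X_src[:, 0] > 0).astype(int)
X_tgt = rng.normal(0.4, 1.0, size=(150, 5))

# Classifier trained on the source domain only.
clf = SVC(kernel="linear").fit(X_src, y_src)

# Domain separator: distinguishes source (label 0) from target (label 1) samples.
X_dom = np.vstack([X_src, X_tgt])
y_dom = np.concatenate([np.zeros(len(X_src)), np.ones(len(X_tgt))])
separator = LogisticRegression(max_iter=1000).fit(X_dom, y_dom)

# Reweight each target prediction by its signed distance to the separator:
# target points that look source-like (negative distance) are trusted more.
signed_dist = separator.decision_function(X_tgt)
weights = np.exp(-signed_dist)          # illustrative monotone weighting, an assumption
weights /= weights.sum()

predictions = clf.predict(X_tgt)
print(predictions[:10])
print(np.round(weights[:10], 4))
```

The label-propagation and manifold-regularization steps described in the passage are omitted here; the sketch only shows how a domain separator can supply per-sample weights.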
scidocsrr
41e5cf3b4e9ff8becbcee94599f49029
Adaptive assist-as-needed controller to improve gait symmetry in robot-assisted gait training
[ { "docid": "ec230707da4dc2085863fffb990e5259", "text": "We propose a novel method for movement assistance that is based on adaptive oscillators, i.e., mathematical tools that are capable of extracting the high-level features (amplitude, frequency, and offset) of a periodic signal. Such an oscillator acts like a filter on these features, but keeps its output in phase with respect to the input signal. Using a simple inverse model, we predicted the torque produced by human participants during rhythmic flexion extension of the elbow. Feeding back a fraction of this estimated torque to the participant through an elbow exoskeleton, we were able to prove the assistance efficiency through a marked decrease of the biceps and triceps electromyography. Importantly, since the oscillator adapted to the movement imposed by the user, the method flexibly allowed us to change the movement pattern and was still efficient during the nonstationary epochs. This method holds promise for the development of new robot-assisted rehabilitation protocols because it does not require prespecifying a reference trajectory and does not require complex signal sensing or single-user calibration: the only signal that is measured is the position of the augmented joint. In this paper, we further demonstrate that this assistance was very intuitive for the participants who adapted almost instantaneously.", "title": "" }, { "docid": "d42f5fdbcaf8933dc97b377a801ef3e0", "text": "Bodyweight supported treadmill training has become a prominent gait rehabilitation method in leading rehabilitation centers. This type of locomotor training has many functional benefits but the labor costs are considerable. To reduce therapist effort, several groups have developed large robotic devices for assisting treadmill stepping. A complementary approach that has not been adequately explored is to use powered lower limb orthoses for locomotor training. Recent advances in robotic technology have made lightweight powered orthoses feasible and practical. An advantage to using powered orthoses as rehabilitation aids is they allow practice starting, turning, stopping, and avoiding obstacles during overground walking.", "title": "" } ]
[ { "docid": "499ad54ed1b02115fd42d2f0972c7abb", "text": "This paper has been commissioned by the World Bank Group for the \"Social Dimensions of Climate Change\" workshop. Views represented are those of the authors, and do not represent an official position of the World Bank Group or those of the Executive Directors of the World Bank or the overnments they represent. The World Bank does not guarantee the accuracy of data presented in g this paper. *This paper was written under contract 7145451 between PRIO and the World Bank for the program on 'Exploring the Social Dimensions of Climate Change'. The opinions expressed in this document represent the views of the authors and do not necessarily state or reflect those of the World Bank. Bibliography 41 iii Executive Summary Climate change is expected to bring about significant changes in migration patterns throughout the developing world. Increases in the frequency and severity of chronic environmental hazards and sudden onset disasters are projected to alter the typical migration patterns of communities and entire countries. We examine evidence for such claims and roundly conclude that large scale community relocation due to either chronic or sudden onset hazards is and continues to be an unlikely response. We propose an alternate framework through which to examine the likely consequences of increased hazards. It is built upon the five major conclusions of this paper: First, disasters vary considerably in their potential to instigate migration. Moreover, individual, community and national vulnerabilities shape responses as much as disaster effects do. Focussing on how people are vulnerable as a function of political, economic and social forces leads to an in-depth understanding of post-disaster human security. Second, individuals and communities in the developing world incorporate environmental risk into their livelihoods. Their ability to do so effectively is contingent upon their available assets. Diversifying income streams is the predominant avenue through which people mitigate increased hazards from climate changes. Labour migration to rural and urban areas is a common component of diversified local economies. In lesser developed countries, labour migration is typically internal, temporary and circular. Third, during periods of chronic environmental degradation, such as increased soil salinization or land degradation, the most common responses by individuals and communities is to intensify labour migration patterns. By doing so, families increase remittances and lessen immediate burdens to provide. Fourth, with the onset of a sudden disaster or the continued presence of a chronic disaster (i.e. drought or famine), communities engage in …", "title": "" }, { "docid": "41a0b9797c556368f84e2a05b80645f3", "text": "This paper describes and evaluates log-linear parsing models for Combinatory Categorial Grammar (CCG). A parallel implementation of the L-BFGS optimisation algorithm is described, which runs on a Beowulf cluster allowing the complete Penn Treebank to be used for estimation. We also develop a new efficient parsing algorithm for CCG which maximises expected recall of dependencies. We compare models which use all CCG derivations, including nonstandard derivations, with normal-form models. 
The performances of the two models are comparable and the results are competitive with existing wide-coverage CCG parsers.", "title": "" }, { "docid": "7caf6388da49eafe48ce70b205c6223d", "text": "Competitive Computer Games, such as StarCraft II, remain a largely unexplored and active application of Machine Learning, Artificial Intelligence, and Computer Vision. These games are highly complex as they typically 1) involve incomplete information, 2) include multiple strategies and elements that usually happen concurrently, and 3) run in real-time. For this project, we dive into a minigame for StarCraft II that involves many engagement skills such as focus fire, splitting, and kiting to win battles. This paper goes into the details of implementing an algorithm using behavioral cloning, a subset of imitation learning, to tackle the problem. Human expert replay data is used to train different systems that are evaluated on the minigame. Supervised learning, Convolutional Neural Networks, and Combined Loss Functions are all used in this project. While we have created an agent that shows some basic understanding of the game, the strategies performed are rather primitive. Nevertheless, this project establishes a useful framework that can be used for future expansion. (This project was completed in tandem with a related CS221 project.)", "title": "" }, { "docid": "90c3543eca7a689188725e610e106ce9", "text": "Lithium-based battery technology offers performance advantages over traditional battery technologies at the cost of increased monitoring and controls overhead. Multiple-cell Lead-Acid battery packs can be equalized by a controlled overcharge, eliminating the need to periodically adjust individual cells to match the rest of the pack. Lithium-based based batteries cannot be equalized by an overcharge, so alternative methods are required. This paper discusses several cell-balancing methodologies. Active cell balancing methods remove charge from one or more high cells and deliver the charge to one or more low cells. Dissipative techniques find the high cells in the pack, and remove excess energy through a resistive element until their charges match the low cells. This paper presents the theory of charge balancing techniques and the advantages and disadvantages of the presented methods. INTRODUCTION Lithium Ion and Lithium Polymer battery chemistries cannot be overcharged without damaging active materials [1-5]. The electrolyte breakdown voltage is precariously close to the fully charged terminal voltage, typically in the range of 4.1 to 4.3 volts/cell. Therefore, careful monitoring and controls must be implemented to avoid any single cell from experiencing an overvoltage due to excessive charging. Single lithium-based cells require monitoring so that cell voltage does not exceed predefined limits of the chemistry. Series connected lithium cells pose a more complex problem: each cell in the string must be monitored and controlled. Even though the pack voltage may appear to be within acceptable limits, one cell of the series string may be experiencing damaging voltage due to cell-to-cell imbalances. Traditionally, cell-to-cell imbalances in lead-acid batteries have been solved by controlled overcharging [6,7]. Leadacid batteries can be brought into overcharge conditions without permanent cell damage, as the excess energy is released by gassing. This gassing mechanism is the natural method for balancing a series string of lead acid battery cells. 
Other chemistries, such as NiMH, exhibit similar natural cell-to-cell balancing mechanisms [8]. Because a Lithium battery cannot be overcharged, there is no natural mechanism for cell equalization. Therefore, an alternative method must be employed. This paper discusses three categories of cell balancing methodologies: charging methods, active methods, and passive methods. Cell balancing is necessary for highly transient lithium battery applications, especially those applications where charging occurs frequently, such as regenerative braking in electric vehicle (EV) or hybrid electric vehicle (HEV) applications. Regenerative braking can cause problems for Lithium Ion batteries because the instantaneous regenerative braking current inrush can cause battery voltage to increase suddenly, possibly over the electrolyte breakdown threshold voltage. Deviations in cell behaviors generally occur because of two phenomenon: changes in internal impedance or cell capacity reduction due to aging. In either case, if one cell in a battery pack experiences deviant cell behavior, that cell becomes a likely candidate to overvoltage during high power charging events. Cells with reduced capacity or high internal impedance tend to have large voltage swings when charging and discharging. For HEV applications, it is necessary to cell balance lithium chemistry because of this overvoltage potential. For EV applications, cell balancing is desirable to obtain maximum usable capacity from the battery pack. During charging, an out-of-balance cell may prematurely approach the end-of-charge voltage (typically 4.1 to 4.3 volts/cell) and trigger the charger to turn off. Cell balancing is useful to control the higher voltage cells until the rest of the cells can catch up. In this way, the charger is not turned off until the cells simultaneously reach the end-of-charge voltage. END-OF-CHARGE CELL BALANCING METHODS Typically, cell-balancing methods employed during and at end-of-charging are useful only for electric vehicle purposes. This is because electric vehicle batteries are generally fully charged between each use cycle. Hybrid electric vehicle batteries may or may not be maintained fully charged, resulting in unpredictable end-of-charge conditions to enact the balancing mechanism. Hybrid vehicle batteries also require both high power charge (regenerative braking) and discharge (launch assist or boost) capabilities. For this reason, their batteries are usually maintained at a SOC that can discharge the required power but still have enough headroom to accept the necessary regenerative power. To fully charge the HEV battery for cell balancing would diminish charge acceptance capability (regenerative braking). CHARGE SHUNTING The charge-shunting cell balancing method selectively shunts the charging current around each cell as they become fully charged (Figure 1). This method is most efficiently employed on systems with known charge rates. The shunt resistor R is sized to shunt exactly the charging current I when the fully charged cell voltage V is reached. If the charging current decreases, resistor R will discharge the shunted cell. To avoid extremely large power dissipations due to R, this method is best used with stepped-current chargers with a small end-of-charge current.", "title": "" }, { "docid": "b3dcbd8a41e42ae6e748b07c18dbe511", "text": "There is inconclusive evidence whether practicing tasks with computer agents improves people’s performance on these tasks. 
This paper studies this question empirically using extensive experiments involving bilateral negotiation and threeplayer coordination tasks played by hundreds of human subjects. We used different training methods for subjects, including practice interactions with other human participants, interacting with agents from the literature, and asking participants to design an automated agent to serve as their proxy in the task. Following training, we compared the performance of subjects when playing state-of-the-art agents from the literature. The results revealed that in the negotiation settings, in most cases, training with computer agents increased people’s performance as compared to interacting with people. In the three player coordination game, training with computer agents increased people’s performance when matched with the state-of-the-art agent. These results demonstrate the efficacy of using computer agents as tools for improving people’s skills when interacting in strategic settings, saving considerable effort and providing better performance than when interacting with human counterparts.", "title": "" }, { "docid": "956cf3bf67aa60391b7c96162a5013bd", "text": "Transferring artistic styles onto everyday photographs has become an extremely popular task in both academia and industry. Recently, offline training has replaced online iterative optimization, enabling nearly real-time stylization. When those stylization networks are applied directly to high-resolution images, however, the style of localized regions often appears less similar to the desired artistic style. This is because the transfer process fails to capture small, intricate textures and maintain correct texture scales of the artworks. Here we propose a multimodal convolutional neural network that takes into consideration faithful representations of both color and luminance channels, and performs stylization hierarchically with multiple losses of increasing scales. Compared to state-of-the-art networks, our network can also perform style transfer in nearly real-time by performing much more sophisticated training offline. By properly handling style and texture cues at multiple scales using several modalities, we can transfer not just large-scale, obvious style cues but also subtle, exquisite ones. That is, our scheme can generate results that are visually pleasing and more similar to multiple desired artistic styles with color and texture cues at multiple scales.", "title": "" }, { "docid": "903a5b7fb82d3d46b02e720b2db9c982", "text": "A heuristic recursive algorithm for the two-dimensional rectangular strip packing problem is presented. It is based on a recursive structure combined with branch-and-bound techniques. Several lengths are tried to determine the minimal plate length to hold all the items. Initially the plate is taken as a block. For the current block considered, the algorithm selects an item, puts it at the bottom-left corner of the block, and divides the unoccupied region into two smaller blocks with an orthogonal cut. The dividing cut is vertical if the block width is equal to the plate width; otherwise it is horizontal. Both lower and upper bounds are used to prune unpromising branches. The computational results on a class of benchmark problems indicate that the algorithm performs better than several recently published algorithms. 2006 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "2c04fd272c90a8c0a74a16980fcb5b03", "text": "We propose a multimodal, decomposable model for articulated human pose estimation in monocular images. A typical approach to this problem is to use a linear structured model, which struggles to capture the wide range of appearance present in realistic, unconstrained images. In this paper, we instead propose a model of human pose that explicitly captures a variety of pose modes. Unlike other multimodal models, our approach includes both global and local pose cues and uses a convex objective and joint training for mode selection and pose estimation. We also employ a cascaded mode selection step which controls the trade-off between speed and accuracy, yielding a 5x speedup in inference and learning. Our model outperforms state-of-the-art approaches across the accuracy-speed trade-off curve for several pose datasets. This includes our newly-collected dataset of people in movies, FLIC, which contains an order of magnitude more labeled data for training and testing than existing datasets.", "title": "" }, { "docid": "3bebd1c272b1cba24f6aeeabaa5c54d2", "text": "Cloacal anomalies occur when failure of the urogenital septum to separate the cloacal membrane results in the urethra, vagina, rectum and anus opening into a single common channel. The reported incidence is 1:50,000 live births. Short-term paediatric outcomes of surgery are well reported and survival into adulthood is now usual, but long-term outcome data are less comprehensive. Chronic renal failure is reported to occur in 50 % of patients with cloacal anomalies, and 26–72 % (dependant on the length of the common channel) of patients experience urinary incontinence in adult life. Defaecation is normal in 53 % of patients, with some managed by methods other than surgery, including medication, washouts, stoma and antegrade continent enema. Gynaecological anomalies are common and can necessitate reconstructive surgery at adolescence for menstrual obstruction. No data are currently available on sexual function and little on the quality of life. Pregnancy is extremely rare and highly risky. Patient care should be provided by a multidisciplinary team with experience in managing these and other related complex congenital malformations. However, there is an urgent need for a well-planned, collaborative multicentre prospective study on the urological, gastrointestinal and gynaecological aspects of this rare group of complex conditions.", "title": "" }, { "docid": "c3325bcfa1b1a9c9012c50fe0bd11161", "text": "We consider the problem of identifying authoritative users in Yahoo! Answers. A common approach is to use link analysis techniques in order to provide a ranked list of users based on their degree of authority. A major problem for such an approach is determining how many users should be chosen as authoritative from a ranked list. To address this problem, we propose a method for automatic identification of authoritative actors. In our approach, we propose to model the authority scores of users as a mixture of gamma distributions. The number of components in the mixture is estimated by the Bayesian Information Criterion (BIC) while the parameters of each component are estimated using the Expectation-Maximization (EM) algorithm. This method allows us to automatically discriminate between authoritative and non-authoritative users. The suitability of our proposal is demonstrated in an empirical study using datasets from Yahoo! 
Answers.", "title": "" }, { "docid": "a57bdfa9c48a76d704258f96874ea700", "text": "BACKGROUND\nPrevious state-of-the-art systems on Drug Name Recognition (DNR) and Clinical Concept Extraction (CCE) have focused on a combination of text \"feature engineering\" and conventional machine learning algorithms such as conditional random fields and support vector machines. However, developing good features is inherently heavily time-consuming. Conversely, more modern machine learning approaches such as recurrent neural networks (RNNs) have proved capable of automatically learning effective features from either random assignments or automated word \"embeddings\".\n\n\nOBJECTIVES\n(i) To create a highly accurate DNR and CCE system that avoids conventional, time-consuming feature engineering. (ii) To create richer, more specialized word embeddings by using health domain datasets such as MIMIC-III. (iii) To evaluate our systems over three contemporary datasets.\n\n\nMETHODS\nTwo deep learning methods, namely the Bidirectional LSTM and the Bidirectional LSTM-CRF, are evaluated. A CRF model is set as the baseline to compare the deep learning systems to a traditional machine learning approach. The same features are used for all the models.\n\n\nRESULTS\nWe have obtained the best results with the Bidirectional LSTM-CRF model, which has outperformed all previously proposed systems. The specialized embeddings have helped to cover unusual words in DrugBank and MedLine, but not in the i2b2/VA dataset.\n\n\nCONCLUSIONS\nWe present a state-of-the-art system for DNR and CCE. Automated word embeddings has allowed us to avoid costly feature engineering and achieve higher accuracy. Nevertheless, the embeddings need to be retrained over datasets that are adequate for the domain, in order to adequately cover the domain-specific vocabulary.", "title": "" }, { "docid": "e66ae650db7c4c75a88ee6cf1ea8694d", "text": "Traditional recommender systems minimize prediction error with respect to users' choices. Recent studies have shown that recommender systems have a positive effect on the provider's revenue.\n In this paper we show that by providing a set of recommendations different than the one perceived best according to user acceptance rate, the recommendation system can further increase the business' utility (e.g. revenue), without any significant drop in user satisfaction. Indeed, the recommendation system designer should have in mind both the user, whose taste we need to reveal, and the business, which wants to promote specific content.\n We performed a large body of experiments comparing a commercial state-of-the-art recommendation engine with a modified recommendation list, which takes into account the utility (or revenue) which the business obtains from each suggestion that is accepted by the user. We show that the modified recommendation list is more desirable for the business, as the end result gives the business a higher utility (or revenue). To study possible reduce in satisfaction by providing the user worse suggestions, we asked the users how they perceive the list of recommendation that they received. 
Differences in user satisfaction between the lists are negligible, and not statistically significant.\n We also uncover a phenomenon where movie consumers prefer watching and even paying for movies that they have already seen in the past than movies that are new to them.", "title": "" }, { "docid": "4c64b652d9135dae74de4f167c61e896", "text": "An important task in computational statistics and machine learning is to approximate a posterior distribution p(x) with an empirical measure supported on a set of representative points {x_i}_{i=1}^n. This paper focuses on methods where the selection of points is essentially deterministic, with an emphasis on achieving accurate approximation when n is small. To this end, we present Stein Points. The idea is to exploit either a greedy or a conditional gradient method to iteratively minimise a kernel Stein discrepancy between the empirical measure and p(x). Our empirical results demonstrate that Stein Points enable accurate approximation of the posterior at modest computational cost. In addition, theoretical results are provided to establish convergence of the method.", "title": "" }, { "docid": "cea53ea6ff16808a2dbc8680d3ef88ee", "text": "Applying deep reinforcement learning (RL) on real systems suffers from slow data sampling. We propose an enhanced generative adversarial network (EGAN) to initialize an RL agent in order to achieve faster learning. The EGAN utilizes the relation between states and actions to enhance the quality of data samples generated by a GAN. Pre-training the agent with the EGAN shows a steeper learning curve with a 20% improvement of training time in the beginning of learning, compared to no pre-training, and an improvement compared to training with GAN by about 5% with smaller variations. For real time systems with sparse and slow data sampling the EGAN could be used to speed up the early phases of the training process.", "title": "" }, { "docid": "314e10ba42a13a84b40a1b0367bd556e", "text": "This study evaluated the critical period hypothesis for second language (L2) acquisition. The participants were 240 native speakers of Korean who differed according to age of arrival (AOA) in the United States (1 to 23 years), but were all experienced in English (mean length of residence = 15 years). 
The native Korean participants’ pronunciation of English was evaluated by having listeners rate their sentences for overall degree of foreign accent; knowledge of English morphosyntax was evaluated using a 144-item grammaticality judgment test. As AOA increased, the foreign accents grew stronger, and the grammaticality judgment test scores decreased steadily. However, unlike the case for the foreign accent ratings, the effect of AOA on the grammaticality judgment test scores became nonsignificant when variables confounded with AOA were controlled. This suggested that the observed decrease in morphosyntax scores was not the result of passing a maturationally defined critical period. Additional analyses showed that the score for sentences testing knowledge of rule based, generalizable aspects of English morphosyntax varied as a function of how much education the Korean participants had received in the United States. The scores for sentences testing lexically based aspects of English morphosyntax, on the other hand, depended on how much the Koreans used English. © 1999 Academic Press", "title": "" }, { "docid": "3a897419e218dc20e71a596cbe4c9c58", "text": "This paper is the first of a two-part series analyzing human grasping behavior during a wide range of unstructured tasks. The results help clarify overall characteristics of human hand to inform many domains, such as the design of robotic manipulators, targeting rehabilitation toward important hand functionality, and designing haptic devices for use by the hand. It investigates the properties of objects grasped by two housekeepers and two machinists during the course of almost 10,000 grasp instances and correlates the grasp types used to the properties of the object. We establish an object classification that assigns each object properties from a set of seven classes, including mass, shape and size of the grasp location, grasped dimension, rigidity, and roundness. The results showed that 55 percent of grasped objects had at least one dimension larger than 15 cm, suggesting that more than half of objects cannot physically be grasped using their largest axis. Ninety-two percent of objects had a mass of 500 g or less, implying that a high payload capacity may be unnecessary to accomplish a large subset of human grasping behavior. In terms of grasps, 96 percent of grasp locations were 7 cm or less in width, which can help to define requirements for hand rehabilitation and defines a reasonable grasp aperture size for a robotic hand. Subjects grasped the smallest overall major dimension of the object in 94 percent of the instances. This suggests that grasping the smallest axis of an object could be a reliable default behavior to implement in grasp planners.", "title": "" }, { "docid": "224ec7b58d17f4ffb9753ac85bf29456", "text": "This paper presents Venus, a service for securing user interaction with untrusted cloud storage. Specifically, Venus guarantees integrity and consistency for applications accessing a key-based object store service, without requiring trusted components or changes to the storage provider. Venus completes all operations optimistically, guaranteeing data integrity. It then verifies operation consistency and notifies the application. Whenever either integrity or consistency is violated, Venus alerts the application. We implemented Venus and evaluated it with Amazon S3 commodity storage service. 
The evaluation shows that it adds no noticeable overhead to storage operations.", "title": "" }, { "docid": "94a6d693d3b3b9273335ef35a61d9f2f", "text": "Twitter is one of the most popular social platforms for online users to share trendy information and views on any event. Twitter reports an event faster than any other medium and contains enormous information and views regarding an event. Consequently, Twitter topic summarization is one of the most convenient ways to get instant gist of any event. However, the information shared on Twitter is often full of nonstandard abbreviations, acronyms, out of vocabulary (OOV) words and with grammatical mistakes which create challenges to find reliable and useful information related to any event. Undoubtedly, Twitter event summarization is a challenging task where traditional text summarization methods do not work well. In last decade, various research works introduced different approaches for automatic Twitter topic summarization. The main aim of this survey work is to make a broad overview of promising summarization approaches on a Twitter topic. We also focus on automatic evaluation of summarization techniques by surveying recent evaluation methodologies. At the end of the survey, we emphasize on both current and future research challenges in this domain through a level of depth analysis of the most recent summarization approaches.", "title": "" } ]
scidocsrr
f11c3d4c30f5c3fc47836c033ce8ea87
Reconfigurable circularly polarized antenna for short-range communication systems
[ { "docid": "dcafec84cfcfad2c9c679e43eb87949a", "text": "A novel design of a microstrip patch antenna with switchable slots (PASS) is proposed to achieve circular polarization diversity. Two orthogonal slots are incorporated into the patch and two pin diodes are utilized to switch the slots on and off. By turning the diodes on or off, this antenna can radiate with either right hand circular polarization (RHCP) or left hand circular polarization (LHCP) using the same feeding probe. Experimental results validate this concept. This design demonstrates useful features for wireless communication applications and future planetary missions.", "title": "" } ]
[ { "docid": "88f43c85c32254a5c2859e983adf1c43", "text": "This study observed naturally occurring emergent leadership behavior in distributed virtual teams. The goal of the study was to understand how leadership behaviors emerge and are distributed in these kinds of teams. Archived team interaction captured during the course of a virtual collaboration exercise was analyzed using an a priori content analytic scheme derived from behaviorally-based leadership theory to capture behavior associated with leadership in virtual environments. The findings lend support to the notion that behaviorally-based leadership theory can provide insights into emergent leadership in virtual environments. This study also provides additional insights into the patterns of leadership that emerge in virtual environments and relationship to leadership behaviors.", "title": "" }, { "docid": "46ddd7d456553927f8522802f7fb4cc2", "text": "An effective supplier selection process is very important to the success of any manufacturing organization. The main objective of supplier selection process is to reduce purchase risk, maximize overall value to the purchaser, and develop closeness and long-term relationships between buyers and suppliers in today’s competitive industrial scenario. The literature on supplier selection criteria and methods is full of various analytical and heuristic approaches. Some researchers have developed hybrid models by combining more than one type of selection methods. It is felt that supplier selection criteria and method is still a critical issue for the manufacturing industries therefore in the present paper the literature has been thoroughly reviewed and critically analyzed to address the issue. Keywords—Supplier selection, AHP, ANP, TOPSIS, Mathematical Programming.", "title": "" }, { "docid": "4d964a5cfd5b21c6196a31f4b204361d", "text": "Edge detection is a fundamental tool in the field of image processing. Edge indicates sudden change in the intensity level of image pixels. By detecting edges in the image, one can preserve its features and eliminate useless information. In the recent years, especially in the field of Computer Vision, edge detection has been emerged out as a key technique for image processing. There are various gradient based edge detection algorithms such as Robert, Prewitt, Sobel, Canny which can be used for this purpose. This paper reviews all these gradient based edge detection techniques and provides comparative analysis. MATLAB/Simulink is used as a simulation tool. System is designed by configuring ISE Design suit with MATLAB. Hardware Description Language (HDL) is generated using Xilinx System Generator. HDL code is synthesized and implemented using Field Programmable Gate Array (FPGA).", "title": "" }, { "docid": "512c0d3d9ad6d6a4d139a5e7e0bd3a4e", "text": "The epidermal growth factor receptor (EGFR) contributes to the pathogenesis of head&neck squamous cell carcinoma (HNSCC). However, only a subset of HNSCC patients benefit from anti-EGFR targeted therapy. By performing an unbiased proteomics screen, we found that the calcium-activated chloride channel ANO1 interacts with EGFR and facilitates EGFR-signaling in HNSCC. Using structural mutants of EGFR and ANO1 we identified the trans/juxtamembrane domain of EGFR to be critical for the interaction with ANO1. Our results show that ANO1 and EGFR form a functional complex that jointly regulates HNSCC cell proliferation. 
Expression of ANO1 affected EGFR stability, while EGFR-signaling elevated ANO1 protein levels, establishing a functional and regulatory link between ANO1 and EGFR. Co-inhibition of EGFR and ANO1 had an additive effect on HNSCC cell proliferation, suggesting that co-targeting of ANO1 and EGFR could enhance the clinical potential of EGFR-targeted therapy in HNSCC and might circumvent the development of resistance to single agent therapy. HNSCC cell lines with amplification and high expression of ANO1 showed enhanced sensitivity to Gefitinib, suggesting ANO1 overexpression as a predictive marker for the response to EGFR-targeting agents in HNSCC therapy. Taken together, our results introduce ANO1 as a promising target and/or biomarker for EGFR-directed therapy in HNSCC.", "title": "" }, { "docid": "8b51bcd5d36d9e15419d09b5fc8995b5", "text": "In this technical report, we study estimator inconsistency in Vision-aided Inertial Navigation Systems (VINS) from a standpoint of system observability. We postulate that a leading cause of inconsistency is the gain of spurious information along unobservable directions, resulting in smaller uncertainties, larger estimation errors, and divergence. We support our claim with an analytical study of the Observability Gramian, along with its right nullspace, which constitutes the basis of the unobservable directions of the system. We develop an Observability-Constrained VINS (OC-VINS), which explicitly enforces the unobservable directions of the system, hence preventing spurious information gain and reducing inconsistency. Our analysis, along with the proposed method for reducing inconsistency, are extensively validated with simulation trials and real-world experimentation.", "title": "" }, { "docid": "3fd747a983ef1a0e5eff117b8765d4b3", "text": "We study centrality in urban street patterns of different world cities represented as networks in geographical space. The results indicate that a spatial analysis based on a set of four centrality indices allows an extended visualization and characterization of the city structure. A hierarchical clustering analysis based on the distributions of centrality has a certain capacity to distinguish different classes of cities. In particular, self-organized cities exhibit scale-free properties similar to those found in nonspatial networks, while planned cities do not.", "title": "" }, { "docid": "e17558c5a39f3e231aa6d09c8e2124fc", "text": "Surveys of child sexual abuse in large nonclinical populations of adults have been conducted in at least 19 countries in addition to the United States and Canada, including 10 national probability samples. All studies have found rates in line with comparable North American research, ranging from 7% to 36% for women and 3% to 29% for men. Most studies found females to be abused at 1 1/2 to 3 times the rate for males. Few comparisons among countries are possible because of methodological and definitional differences. However, they clearly confirm sexual abuse to be an international problem.", "title": "" }, { "docid": "f27ad6bf5c65fdea1a98b118b1a43c85", "text": "Localization is one of the problems that often appears in the world of robotics. Monte Carlo Localization (MCL) are the one of the popular algorithms in localization because easy to implement on issues Global Localization. This algorithm using particles to represent the robot position. MCL can simulated by Robot Operating System (ROS) using robot type is Pioneer3-dx. 
In this paper we will discuss about this algorithm on ROS, by analyzing the influence of the number particle that are used for localization of the actual robot position.", "title": "" }, { "docid": "8edc51b371d7551f9f7e69149cd4ece0", "text": "Though many previous studies has proved the importance of trust from various perspectives, the researches about online consumer’s trust are fragmented in nature and still it need more attention from academics. Lack of consumers trust in online systems is a critical impediment to the success of e-Commerce. Therefore it is important to explore the critical factors that affect the formation of user’s trust in online environments. The main objective of this paper is to analyze the effects of various antecedents of online trust and to predict the user’s intention to engage in online transaction based on their trust in the Information systems. This study is conducted among Asian online consumers and later the results were compared with those from Non-Asian regions. Another objective of this paper is to integrate De Lone and McLean model of IS Success and Technology Acceptance Model (TAM) for measuring the significance of online trust in e-Commerce adoption. The results of this study show that perceived security, perceived privacy, vendor familiarity, system quality and service quality are the significant antecedents of online trust in a B2C e-Commerce context.", "title": "" }, { "docid": "52796981853b05fb29dcfd223a732866", "text": "OBJECTIVE\nTo investigate whether intrapericardial urokinase irrigation along with pericardiocentesis could prevent pericardial constriction in patients with infectious exudative pericarditis.\n\n\nMETHODS\nA total of 94 patients diagnosed as infectious exudative pericarditis (34 patients with purulent pericarditis and 60 with tuberculous pericarditis, the disease courses of all patients were less than 1 month), 44 males and 50 females, aged from 9 to 66 years (mean 45.4 +/- 14.7 years), were consecutively recruited from 1993 to 2002. All individuals were randomly given either intrapericardial urokinase along with conventional treatment in study group, or conventional treatment alone (including pericardiocentesis and drainage) in control group. The dosage of urokinase ranged from 200000 to 600000 U (mean 320000 +/- 70000 U). The immediate effects were detected by pericardiography with sterilized air and diatrizoate meglumine as contrast media. The long-term investigation depended on the telephonic survey and echocardiographic examination. The duration of following-up ranged from 8 to 120 months (mean 56.8 +/- 29.0 months).\n\n\nRESULTS\nPercutaneous intrapericardial urokinase irrigation promoted complete drainage of pericardial effusion, significantly reduced the thickness of pericardium (from 3.1 +/- 1.6 mm to 1.6 +/- 1.0 mm in study group, P < 0.001; from 3.4 +/- 1.6 mm to 3.2 +/- 1.8 mm in control group, P > 0.05, respectively), and alleviated the adhesion. Intrapericardial bleeding related to fibrinolysis was found in 6 of 47 patients with non-blood pericardial effusion and no systemic bleeding and severe puncture-related complication was observed. In follow-up, there was no cardiac death, and pericardial constriction events were observed in 9 (19.1%) of study group and 27 (57.4%) of control group. 
Cox analysis illustrated that urokinase could significantly reduce the occurrence of pericardial constriction (relative hazard coefficient = 0.185, P < 0.0001).\n\n\nCONCLUSION\nThe early employment of intrapericardial fibrinolysis with urokinase and pericardiocentesis appears to be safe and effective in preventing the development of pericardial constriction in patients with infectious exudative pericarditis.", "title": "" }, { "docid": "f833db8a1e61634f1ff20be721bd7c64", "text": "Low-rank modeling has many important applications in computer vision and machine learning. While the matrix rank is often approximated by the convex nuclear norm, the use of nonconvex low-rank regularizers has demonstrated better empirical performance. However, the resulting optimization problem is much more challenging. Recent state-of-the-art requires an expensive full SVD in each iteration. In this paper, we show that for many commonly-used nonconvex low-rank regularizers, the singular values obtained from the proximal operator can be automatically threshold. This allows the proximal operator to be efficiently approximated by the power method. We then develop a fast proximal algorithm and its accelerated variant with inexact proximal step. It can be guaranteed that the squared distance between consecutive iterates converges at a rate of , where is the number of iterations. Furthermore, we show the proposed algorithm can be parallelized, and the resultant algorithm achieves nearly linear speedup w.r.t. the number of threads. Extensive experiments are performed on matrix completion and robust principal component analysis. Significant speedup over the state-of-the-art is observed.", "title": "" }, { "docid": "42d3adba03f835f120404cfe7571a532", "text": "This study investigated the psychometric properties of the Arabic version of the SMAS. SMAS is a variant of IAT customized to measure addiction to social media instead of the Internet as a whole. Using a self-report instrument on a cross-sectional sample of undergraduate students, the results revealed the following. First, the exploratory factor analysis showed that a three-factor model fits the data well. Second, concurrent validity analysis showed the SMAS to be a valid measure of social media addiction. However, further studies and data should verify the hypothesized model. Finally, this study showed that the Arabic version of the SMAS is a valid and reliable instrument for use in measuring social media addiction in the Arab world.", "title": "" }, { "docid": "149073f577d0e1fb380ae395ff1ca0c5", "text": "A complete kinematic model of the 5 DOF-Mitsubishi RV-M1 manipulator is presented in this paper. The forward kinematic model is based on the Modified Denavit-Hartenberg notation, and the inverse one is derived in closed form by fixing the orientation of the tool. A graphical interface is developed using MATHEMATICA software to illustrate the forward and inverse kinematics, allowing student or researcher to have hands-on of virtual graphical model that fully describe both the robot's geometry and the robot's motion in its workspace before to tackle any real task.", "title": "" }, { "docid": "3db1c2e951f464238b887b4ceda470a4", "text": "Assuming that migration threat is multi-dimensional, this article seeks to investigate how various types of threats associated with immigration affect attitudes towards immigration and civil liberties. 
Through experimentation, the study unpacks the ‘securitization of migration’ discourse by disaggregating the nature of immigration threat, and its impact on policy positions and ideological patterns at the individual level. Based on framing and attitudinal analysis, we argue that physical security in distinction from cultural insecurity is enough to generate important ideological variations stemming from strategic input (such as framing and issue-linkage). We expect then that as immigration shifts from a cultural to a physical threat, immigration issues may become more politically salient but less politicized and subject to consensus. Interestingly, however, the findings reveal that the effects of threat framing are not ubiquitous, and may be conditional upon ideology. Liberals were much more susceptible to the frames than were conservatives. Potential explanations for the ideological effects of framing, as well as their implications, are explored.", "title": "" }, { "docid": "0dc0565b364defdd1c23c4367a4bb87e", "text": "A procedure involving reverse transcription followed by the polymerase chain reaction (RT-PCR) using a single primer pair was developed for the detection of five tobamovirus species which are related serologically. Either with a subsequent restriction enzyme analysis (RT-PCR-RFLP) or with a RT-PCR using species specific primers the five species can be differentiated. To differentiate those species by serological means is time consuming and might give ambiguous results. With the example of the isolate OHIO V, which is known to break the resistance in a selection of Lycopersicon peruvianum, the suitability of the RT-PCR-RFLP technique to detect variability at the species level was shown. In sequence analysis 47 codons of the coat protein gene of this isolate were found to be mutated compared to a tobacco mosaic virus (TMV) coat protein gene sequence. Forty of these mutations were silent and did not change the amino acid sequence. Both procedures are suitable to detect mixed infections. In addition, the RT-PCR-RFLP give information on the relative amounts of the viruses that are present in a doubly infected plant. The RT-PCR-RFLP using general primers as well as the RT-PCR using species specific primers were proven to be useful for the diagnosis and control of the disease and will be helpful for resistance breeding, epidemiological investigations and plant virus collections.", "title": "" }, { "docid": "56e520f27f7411979e901318c5979fcf", "text": "With the development of intelligent device and social media, the data bulk on Internet has grown with high speed. As an important aspect of image processing, object detection has become one of the international popular research fields. In recent years, the powerful ability with feature learning and transfer learning of Convolutional Neural Network (CNN) has received growing interest within the computer vision community, thus making a series of important breakthroughs in object detection. So it is a significant survey that how to apply CNN to object detection for better performance. First the paper introduced the basic concept and architecture of CNN. Secondly the methods that how to solve the existing problems of conventional object detection are surveyed, mainly analyzing the detection algorithm based on region proposal and based on regression. Thirdly it mentioned some means which improve the performance of object detection. Then the paper introduced some public datasets of object detection and the concept of evaluation criterion. 
Finally, it combed the current research achievements and thoughts of object detection, summarizing the important progress and discussing the future directions.", "title": "" }, { "docid": "a526cf2212f8233be7c8e20c9619ec31", "text": "Patients with rheumatoid arthritis can be divided into two major subsets characterized by the presence versus absence of antibodies to citrullinated protein antigens (ACPAs) and of rheumatoid factor (RF). The antibody-positive subset of disease, also known as seropositive rheumatoid arthritis, constitutes approximately two-thirds of all cases of rheumatoid arthritis and generally has a more severe disease course. ACPAs and RF are often present in the blood long before any signs of joint inflammation, which suggests that the triggering of autoimmunity may occur at sites other than the joints (for example, in the lung). This Review summarizes recent progress in our understanding of this gradual disease development in seropositive patients. We also emphasize the implications of this new understanding for the development of preventive and therapeutic strategies. Similar temporal and spatial separation of immune triggering and clinical manifestations, with novel opportunities for early intervention, may also occur in other immune-mediated diseases.", "title": "" }, { "docid": "933623750ec9ebbbb79a5fea3b03fae1", "text": "It is natural to ask if one can perform a computational task considerably faster by using a different architecture (i.e., a different computational model). The answer to this question is a resounding yes. A cute example is the Macaroni sort. We are given a set S = {s_1, ..., s_n} of n real numbers in the range (say) [1, 2]. We get a lot of Macaroni (these are longish and very narrow tubes of pasta), and cut the i-th piece to be of length s_i, for i = 1, ..., n. Next, take all these pieces of pasta in your hand, make them stand up vertically, with their bottom end lying on a horizontal surface. Next, lower your handle till it hits the first (i.e., tallest) piece of pasta. Take it out, measure its height, write down its number, and continue in this fashion till you have extracted all the pieces of pasta. Clearly, this is a sorting algorithm that works in linear time. But we know that sorting takes Ω(n log n) time. Thus, this algorithm is much faster than the standard sorting algorithms. This faster algorithm was achieved by changing the computation model. We allowed new \"strange\" operations (cutting a piece of pasta into a certain length, picking the longest one in constant time, and measuring the length of a pasta piece in constant time). Using these operations we can sort in linear time. If this was all we can do with this approach, that would have only been a curiosity. However, interestingly enough, there are natural computation models which are considerably stronger than the standard model of computation. Indeed, consider the task of computing the output of the circuit on the right (here, the input is boolean values on the input wires on the left, and the output is the single output on the right). Clearly, this can be solved by ordering the gates in the \"right\" order (this can be done by topological sorting), and then computing the value of the gates one by one in this order, in such a way.", "title": "" },
{ "docid": "8a7ea746acbfd004d03d4918953d283a", "text": "Sentiment analysis is an important current research area. This paper combines rule-based classification, supervised learning and machine learning into a new combined method. This method is tested on movie reviews, product reviews and MySpace comments. The results show that a hybrid classification can improve the classification effectiveness in terms of micro- and macro-averaged F1. F1 is a measure that takes both the precision and recall of a classifier’s effectiveness into account. In addition, we propose a semi-automatic, complementary approach in which each classifier can contribute to other classifiers to achieve a good level of effectiveness.", "title": "" } ]
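The Macaroni-sort passage earlier in this list describes sorting by repeatedly grabbing the tallest remaining piece of pasta. The sketch below simulates that physical model in software; note that on a conventional machine each "grab the tallest" step costs linear time, so the simulation runs in O(n^2), and the O(n) claim only holds in the physical model where that step counts as a single operation.

```python
def macaroni_sort(values):
    """Simulate the pasta model: repeatedly remove the tallest remaining piece.
    Each 'lower the hand' step is O(1) physically, but O(n) in this simulation."""
    pieces = list(values)          # cut one piece of pasta per input number
    out = []
    while pieces:
        tallest = max(pieces)      # the piece your palm hits first
        pieces.remove(tallest)
        out.append(tallest)
    return out[::-1]               # reverse to report the numbers in ascending order

print(macaroni_sort([1.7, 1.2, 1.9, 1.4]))   # [1.2, 1.4, 1.7, 1.9]
```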
scidocsrr
8c1163ec955b50f7e1e0b02c08b57b5c
Building an Argument Search Engine for the Web
[ { "docid": "fdc01b87195272f8dec8ed32dfe8e664", "text": "Future search engines are expected to deliver pro and con arguments in response to queries on controversial topics. While argument mining is now in the focus of research, the question of how to retrieve the relevant arguments remains open. This paper proposes a radical model to assess relevance objectively at web scale: the relevance of an argument’s conclusion is decided by what other arguments reuse it as a premise. We build an argument graph for this model that we analyze with a recursive weighting scheme, adapting key ideas of PageRank. In experiments on a large ground-truth argument graph, the resulting relevance scores correlate with human average judgments. We outline what natural language challenges must be faced at web scale in order to stepwise bring argument relevance to web search engines.", "title": "" } ]
[ { "docid": "123b35d403447a29eaf509fa707eddaa", "text": "Technology is the vital criteria to boosting the quality of life for everyone from new-borns to senior citizens. Thus, any technology to enhance the quality of life society has a value that is priceless. Nowadays Smart Wearable Technology (SWTs) innovation has been coming up to different sectors and is gaining momentum to be implemented in everyday objects. The successful adoption of SWTs by consumers will allow the production of new generations of innovative and high value-added products. The study attempts to predict the dynamics that play a role in the process through which consumers accept wearable technology. The research build an integrated model based on UTAUT2 and some external variables in order to investigate the direct and moderating effects of human expectation and behaviour on the awareness and adoption of smart products such as watch and wristband fitness. Survey will be chosen in order to test our model based on consumers. In addition, our study focus on different rate of adoption and expectation differences between early adopters and early majority in order to explore those differences and propose techniques to successfully cross the chasm between these two groups according to “Chasm theory”. For this aim and due to lack of prior research, Semi-structured focus groups will be used to obtain qualitative data for our research. Originality/value: To date, a few research exists addressing the adoption of smart wearable technologies. Therefore, the examination of consumers behaviour towards SWTs may provide orientations into the future that are useful for managers who can monitor how consumers make choices, how manufacturers should design successful market strategies, and how regulators can proscribe manipulative behaviour in this industry.", "title": "" }, { "docid": "5fce5ef4a25f242d60aff766e1d7ba1c", "text": "Mental toughness (MT) is an umbrella term that entails positive psychological resources, which are crucial across a wide range of achievement contexts and in the domain of mental health. We systematically review empirical studies that explored the associations between the concept of MT and individual differences in learning, educational and work performance, psychological well-being, personality, and other psychological attributes. Studies that explored the genetic and environmental contributions to individual differences in MT are also reviewed. The findings suggest that MT is associated with various positive psychological traits, more efficient coping strategies and positive outcomes in education and mental health. Approximately 50% of the variation in MT can be accounted for by genetic factors. Furthermore, the associations between MT and psychological traits can be explained mainly by either common genetic or non-shared environmental factors. Taken together, our findings suggest a 'mental toughness advantage' with possible implications for developing interventions to facilitate achievement in a variety of settings.", "title": "" }, { "docid": "851fd19525da9dc5a46e3146948109df", "text": "As computation becomes increasingly limited by data movement and energy consumption, exploiting locality throughout the memory hierarchy becomes critical for maintaining the performance scaling that many have come to expect from the computing industry. Moving computation closer to main memory presents an opportunity to reduce the overheads associated with data movement. 
We explore the potential of using 3D die stacking to move memory-intensive computations closer to memory. This approach to processing-in-memory addresses some drawbacks of prior research on in-memory computing and appears commercially viable in the foreseeable future. We show promising early results from this approach and identify areas that are in need of research to unlock its full potential.", "title": "" }, { "docid": "85c124fd317dc7c2e5999259d26aa1db", "text": "This paper presents a method for extracting rotation-invariant features from images of handwriting samples that can be used to perform writer identification. The proposed features are based on the Hinge feature [1], but incorporating the derivative between several points along the ink contours. Finally, we concatenate the proposed features into one feature vector to characterize the writing styles of the given handwritten text. The proposed method has been evaluated using Fire maker and IAM datasets in writer identification, showing promising performance gains.", "title": "" }, { "docid": "893c7a1694596d0c8d58b819500ff9f9", "text": "A recently introduced deep neural network (DNN) has achieved some unprecedented gains in many challenging automatic speech recognition (ASR) tasks. In this paper deep neural network hidden Markov model (DNN-HMM) acoustic models is introduced to phonotactic language recognition and outperforms artificial neural network hidden Markov model (ANN-HMM) and Gaussian mixture model hidden Markov model (GMM-HMM) acoustic model. Experimental results have confirmed that phonotactic language recognition system using DNN-HMM acoustic model yields relative equal error rate reduction of 28.42%, 14.06%, 18.70% and 12.55%, 7.20%, 2.47% for 30s, 10s, 3s comparing with the ANN-HMM and GMM-HMM approaches respectively on National Institute of Standards and Technology language recognition evaluation (NIST LRE) 2009 tasks.", "title": "" }, { "docid": "37a838344c441bcb8bc1c1f233b2f0e7", "text": "Cloud computing platforms enable applications to offer low latency access to user data by offering storage services in several geographically distributed data centers. In this paper, we identify the high tail latency problem in cloud CDN via analyzing a large-scale dataset collected from 783,944 users in a major cloud CDN. We find that the data downloading latency in cloud CDN is highly variable, which may significantly degrade the user experience of applications. To address the problem, we present TailCutter, a workload scheduling mechanism that aims at optimizing the tail latency while meeting the cost constraint given by application providers. We further design the Maximum Tail Minimization Algorithm (MTMA) working in TailCutter mechanism to optimally solve the Tail Latency Minimization (TLM) problem in polynomial time. We implement TailCutter across data centers of Amazon S3 and Microsoft Azure. Our extensive evaluation using large-scale real world data traces shows that TailCutter can reduce up to 68% 99th percentile user-perceived latency in comparison with alternative solutions under cost constraints.", "title": "" }, { "docid": "36357f48cbc3ed4679c679dcb77bdd81", "text": "In this paper, we review research and applications in the area of mediated or remote social touch. Whereas current communication media rely predominately on vision and hearing, mediated social touch allows people to touch each other over a distance by means of haptic feedback technology. 
Overall, the reviewed applications have interesting potential, such as the communication of simple ideas (e.g., through Hapticons), establishing a feeling of connectedness between distant lovers, or the recovery from stress. However, the beneficial effects of mediated social touch are usually only assumed and have not yet been submitted to empirical scrutiny. Based on social psychological literature on touch, communication, and the effects of media, we assess the current research and design efforts and propose future directions for the field of mediated social touch.", "title": "" }, { "docid": "f3fdc63904e2bf79df8b6ca30a864fd3", "text": "Although the potential benefits of a powered ankle-foot prosthesis have been well documented, no one has successfully developed and verified that such a prosthesis can improve amputee gait compared to a conventional passive-elastic prosthesis. One of the main hurdles that hinder such a development is the challenge of building an ankle-foot prosthesis that matches the size and weight of the intact ankle, but still provides a sufficiently large instantaneous power output and torque to propel an amputee. In this paper, we present a novel, powered ankle-foot prosthesis that overcomes these design challenges. The prosthesis comprises an unidirectional spring, configured in parallel with a force-controllable actuator with series elasticity. With this architecture, the ankle-foot prosthesis matches the size and weight of the human ankle, and is shown to be satisfying the restrictive design specifications dictated by normal human ankle walking biomechanics.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "0b06586502303b6796f1f512129b5cbe", "text": "This paper introduces an extension of collocational analysis that takes into account grammatical structure and is specifically geared to investigating the interaction of lexemes and the grammatical constructions associated with them. The method is framed in a construction-based approach to language, i.e. it assumes that grammar consists of signs (form-meaning pairs) and is thus not fundamentally different from the lexicon. The method is applied to linguistic expressions at various levels of abstraction (words, semi-fixed phrases, argument structures, tense, aspect and mood). The method has two main applications: first, to increase the adequacy of grammatical description by providing an objective way of identifying the meaning of a grammatical construction and determining the degree to which particular slots in it prefer or are restricted to a particular set of lexemes; second, to provide data for linguistic theory-building.", "title": "" }, { "docid": "9b6a16b84d4aadf582c16a8adb4e4830", "text": "This paper presents a new in-vehicle real-time vehicle detection strategy which hypothesizes the presence of vehicles in rectangular sub-regions based on the robust classification of features vectors result of a combination of multiple morphological vehicle features. 
One vector is extracted for each region of the image likely containing vehicles as a multidimensional likelihood measure with respect to a simplified vehicle model. A supervised training phase set the representative vectors of the classes vehicle and non-vehicle, so that the hypothesis is verified or not according to the Mahalanobis distance between the feature vector and the representative vectors. Excellent results have been obtained in several video sequences accurately detecting vehicles with very different aspect-ratio, color, size, etc, while minimizing the number of missing detections and false alarms.", "title": "" }, { "docid": "78ec5db757e26ce5cd1f594839169573", "text": "Thailand and an additional Australian study Synthesis report by Vittorio di Martino 2002-Workplace violence in the health sector.doc iii Foreword Violence at work has become an alarming phenomenon worldwide. The real size of the problem is largely unknown and recent information shows that the current knowledge is only the tip of the iceberg. The enormous cost of violence at work for the individual, the workplace and the community at large is becoming more and more apparent. Although incidents of violence are known to occur in all work environments, some employment sectors are particularly exposed to it. Violence includes both physical and non-physical violence. Violence is defined as being destructive towards another person. It finds its expression in physical assault, homicide, verbal abuse, bullying, sexual harassment and threat. Violence at work is often considered to be just a reflection of the more general and increasing phenomenon of violence in many areas of social life which has to be dealt with at the level of the whole society. Its prevalence has, however, increased at the workplace, traditionally viewed as a violence-free environment. Employers and workers are equally interested in the prevention of violence at the workplace. Society at large has a stake in preventing violence spreading to working life and recognizing the potential of the workplace by removing such obstacles to productivity, development and peace. Violence is common to such an extent among workers who have direct contact with people in distress, that it may be considered an inevitable part of the job. This is often the case in the health sector (violence in this sector may constitute almost a quarter of all violence at work). 1 While ambulance staff are reported to be at greatest risk, nurses are three times more likely on average to experience violence in the workplace than other occupational groups. Since the large majority of the health workforce is female, the gender dimension of the problem is very evident. Besides concern about the human right of health workers to have a decent work environment, there is concern about the consequences of violence at work. These have a significant impact on the effectiveness of health systems, particularly in developing countries. The equal access of people to primary health care is endangered if a scarce human resource, the health workers, feel under threat in certain geographical and social environments, in situations of general conflict, in work situations where transport …", "title": "" }, { "docid": "65bc99201599ec17347d3fe0857cd39a", "text": "Many children strive to attain excellence in sport. 
However, although talent identification and development programmes have gained popularity in recent decades, there remains a lack of consensus in relation to how talent should be defined or identified and there is no uniformly accepted theoretical framework to guide current practice. The success rates of talent identification and development programmes have rarely been assessed and the validity of the models applied remains highly debated. This article provides an overview of current knowledge in this area with special focus on problems associated with the identification of gifted adolescents. There is a growing agreement that traditional cross-sectional talent identification models are likely to exclude many, especially late maturing, 'promising' children from development programmes due to the dynamic and multidimensional nature of sport talent. A conceptual framework that acknowledges both genetic and environmental influences and considers the dynamic and multidimensional nature of sport talent is presented. The relevance of this model is highlighted and recommendations for future work provided. It is advocated that talent identification and development programmes should be dynamic and interconnected taking into consideration maturity status and the potential to develop rather than to exclude children at an early age. Finally, more representative real-world tasks should be developed and employed in a multidimensional design to increase the efficacy of talent identification and development programmes.", "title": "" }, { "docid": "f1bd4f301583725c492dcea6f1870d76", "text": "ISSN: 1750-984X (Print) 1750-9858 (Online) Journal homepage: http://www.tandfonline.com/loi/rirs20 20 years later: deliberate practice and the development of expertise in sport Joseph Baker & Bradley Young To cite this article: Joseph Baker & Bradley Young (2014) 20 years later: deliberate practice and the development of expertise in sport, International Review of Sport and Exercise Psychology, 7:1, 135-157, DOI: 10.1080/1750984X.2014.896024 To link to this article: http://dx.doi.org/10.1080/1750984X.2014.896024 Published online: 01 Apr 2014.", "title": "" }, { "docid": "c71ab03cdfd8b6a3c62b18103f449764", "text": "BACKGROUND\nHealth worker shortage in rural areas is one of the biggest problems of the health sector in Ghana and many developing countries. This may be due to fewer incentives and support systems available to attract and retain health workers at the rural level. This study explored the willingness of community health officers (CHOs) to accept and hold rural and community job postings in Ghana.\n\n\nMETHODS\nA discrete choice experiment was used to estimate the motivation and incentive preferences of CHOs in Ghana. All CHOs working in three Health and Demographic Surveillance System sites in Ghana, 200 in total, were interviewed between December 2012 and January 2013. Respondents were asked to choose from choice sets of job preferences. Four mixed logit models were used for the estimation. The first model considered (a) only the main effect. The other models included interaction terms for (b) gender, (c) number of children under 5 in the household, and (d) years worked at the same community. Moreover, a choice probability simulation was performed.\n\n\nRESULTS\nMixed logit analyses of the data project a shorter time frame before study leave as the most important motivation for most CHOs (β 2.03; 95 % CI 1.69 to 2.36). This is also confirmed by the largest simulated choice probability (29.1 %). 
The interaction effect of the number of children was significant for education allowance for children (β 0.58; 95 % CI 0.24 to 0.93), salary increase (β 0.35; 95 % CI 0.03 to 0.67), and housing provision (β 0.16; 95 % CI -0.02 to 0.60). Male CHOs had a high affinity for early opportunity to go on study leave (β 0.78; 95 % CI -0.06 to 1.62). CHOs who had worked at the same place for a long time greatly valued salary increase (β 0.28; 95 % CI 0.09 to 0.47).\n\n\nCONCLUSIONS\nTo reduce health worker shortage in rural settings, policymakers could provide \"needs-specific\" motivational packages. They should include career development opportunities such as shorter period of work before study leave and financial policy in the form of salary increase to recruit and retain them.", "title": "" }, { "docid": "5798d93d03b9ab2b10b5bea7ccbb58ce", "text": "A wealth of information is available only in web pages, patents, publications etc. Extracting information from such sources is challenging, both due to the typically complex language processing steps required and to the potentially large number of texts that need to be analyzed. Furthermore, integrating extracted data with other sources of knowledge often is mandatory for subsequent analysis. In this demo, we present the AliBaba system for scalable information extraction from biomedical documents. Unlike many other systems, AliBaba performs both entity extraction and relationship extraction and graphically visualizes the resulting network of inter-connected objects. It leverages the PubMed search engine for selection of relevant documents. The technical novelty of AliBaba is twofold: (a) its ability to automatically learn language patterns for relationship extraction without an annotated corpus, and (b) its high performance pattern matching algorithm. We show that a simple yet effective pattern filtering technique improves the runtime of the system drastically without harming its extraction effectiveness. Although AliBaba has been implemented for biomedical texts, its underlying principles should also be applicable in any other domain.", "title": "" }, { "docid": "e5aed574fbe4560a794cf8b77fb84192", "text": "Warping is one of the basic image processing techniques. Directly applying existing monocular image warping techniques to stereoscopic images is problematic as it often introduces vertical disparities and damages the original disparity distribution. In this paper, we show that these problems can be solved by appropriately warping both the disparity map and the two images of a stereoscopic image. We accordingly develop a technique for extending existing image warping algorithms to stereoscopic images. This technique divides stereoscopic image warping into three steps. Our method first applies the user-specified warping to one of the two images. Our method then computes the target disparity map according to the user specified warping. The target disparity map is optimized to preserve the perceived 3D shape of image content after image warping. Our method finally warps the other image using a spatially-varying warping method guided by the target disparity map. 
Our experiments show that our technique enables existing warping methods to be effectively applied to stereoscopic images, ranging from parametric global warping to non-parametric spatially-varying warping.", "title": "" }, { "docid": "c1981c3b0ccd26d4c8f02c2aa5e71c7a", "text": "Functional genomics studies have led to the discovery of a large amount of non-coding RNAs from the human genome; among them are long non-coding RNAs (lncRNAs). Emerging evidence indicates that lncRNAs could have a critical role in the regulation of cellular processes such as cell growth and apoptosis as well as cancer progression and metastasis. As master gene regulators, lncRNAs are capable of forming lncRNA–protein (ribonucleoprotein) complexes to regulate a large number of genes. For example, lincRNA-RoR suppresses p53 in response to DNA damage through interaction with heterogeneous nuclear ribonucleoprotein I (hnRNP I). The present study demonstrates that hnRNP I can also form a functional ribonucleoprotein complex with lncRNA urothelial carcinoma-associated 1 (UCA1) and increase the UCA1 stability. Of interest, the phosphorylated form of hnRNP I, predominantly in the cytoplasm, is responsible for the interaction with UCA1. Moreover, although hnRNP I enhances the translation of p27 (Kip1) through interaction with the 5′-untranslated region (5′-UTR) of p27 mRNAs, the interaction of UCA1 with hnRNP I suppresses the p27 protein level by competitive inhibition. In support of this finding, UCA1 has an oncogenic role in breast cancer both in vitro and in vivo. Finally, we show a negative correlation between p27 and UCA in the breast tumor cancer tissue microarray. Together, our results suggest an important role of UCA1 in breast cancer.", "title": "" }, { "docid": "79811b3cfec543470941e9529dc0ab24", "text": "We present a novel method for learning and predicting the affordances of an object based on its physical and visual attributes. Affordance prediction is a key task in autonomous robot learning, as it allows a robot to reason about the actions it can perform in order to accomplish its goals. Previous approaches to affordance prediction have either learned direct mappings from visual features to affordances, or have introduced object categories as an intermediate representation. In this paper, we argue that physical and visual attributes provide a more appropriate mid-level representation for affordance prediction, because they support informationsharing between affordances and objects, resulting in superior generalization performance. In particular, affordances are more likely to be correlated with the attributes of an object than they are with its visual appearance or a linguistically-derived object category. We provide preliminary validation of our method experimentally, and present empirical comparisons to both the direct and category-based approaches of affordance prediction. Our encouraging results suggest the promise of the attributebased approach to affordance prediction.", "title": "" }, { "docid": "c380f89ac91ce532b9f0250ce487fe5e", "text": "Starting in the seventies, face recognition has become one of the most researched topics in computer vision and biometrics. Traditional methods based on hand-crafted features and traditional machine learning techniques have recently been superseded by deep neural networks trained with very large datasets. 
In this paper we provide a comprehensive and up-to-date literature review of popular face recognition methods including both traditional (geometry-based, holistic, feature-based and hybrid methods) and deep learning methods.", "title": "" } ]
scidocsrr
f2c9ea56e9dd7f0f4eb93cfcc7bf50e2
Feasibility Investigation of Low Cost Substrate Integrated Waveguide (SIW) Directional Couplers
[ { "docid": "39e332a58625a12ef3e14c1a547a8cad", "text": "This paper presents an overview of the recent achievements in the held of substrate integrated waveguides (SIW) technology, with particular emphasis on the modeling strategy and design considerations of millimeter-wave integrated circuits as well as the physical interpretation of the operation principles and loss mechanisms of these structures. The most common numerical methods for modeling both SIW interconnects and circuits are presented. Some considerations and guidelines for designing SIW structures, interconnects and circuits are discussed, along with the physical interpretation of the major issues related to radiation leakage and losses. Examples of SIW circuits and components operating in the microwave and millimeter wave bands are also reported, with numerical and experimental results.", "title": "" } ]
[ { "docid": "c25ed65511cb0a22301896bbf4ebd84d", "text": "This paper surveys the field of machine vision from a computer science perspective. It is written to act as an introduction to the field and presents the reader with references to specific implementations. Machine vision is a complex and developing field that can be broken into the three stages: stereo correspondence, scene reconstruction, and object recognition. We present the techniques and general approaches to each of these stages and summarize the future direction of research.", "title": "" }, { "docid": "89b8f3b7efa011065cf28647b9984f4d", "text": "Due to the abundance of 2D product images from the internet, developing efficient and scalable algorithms to recover the missing depth information is central to many applications. Recent works have addressed the single-view depth estimation problem by utilizing convolutional neural networks. In this paper, we show that exploring symmetry information, which is ubiquitous in man made objects, can significantly boost the quality of such depth predictions. Specifically, we propose a new convolutional neural network architecture to first estimate dense symmetric correspondences in a product image and then propose an optimization which utilizes this information explicitly to significantly improve the quality of single-view depth estimations. We have evaluated our approach extensively, and experimental results show that this approach outperforms state-of-the-art depth estimation techniques.", "title": "" }, { "docid": "6871d514bca855a9f948939a3e8a02f7", "text": "The problem of tracking targets in the presence of reflections from sea or ground is addressed. Both types of reflections (specular and diffuse) are considered. Specular reflection causes large peak errors followed by an approximately constant bias in the monopulse ratio, while diffuse reflection has random variations which on the average generate a bias in the monopulse ratio. Expressions for the average error (bias) in the monopulse ratio due to specular and diffuse reflections and the corresponding variance in the presence of noise in the radar channels are derived. A maximum maneuver-based filter and a multiple model estimator are used for tracking. Simulation results for five scenarios, typical of sea skimmers, with Swerling III fluctuating radar cross sections (RCSs) indicate the significance and efficiency of the technique developed in this paper-a 65% reduction of the rms error in the target height estimate.", "title": "" }, { "docid": "1389323613225897330d250e9349867b", "text": "Description: The field of data mining lies at the confluence of predictive analytics, statistical analysis, and business intelligence. Due to the ever–increasing complexity and size of data sets and the wide range of applications in computer science, business, and health care, the process of discovering knowledge in data is more relevant than ever before. This book provides the tools needed to thrive in today s big data world. The author demonstrates how to leverage a company s existing databases to increase profits and market share, and carefully explains the most current data science methods and techniques. The reader will learn data mining by doing data mining . 
By adding chapters on data modelling preparation, imputation of missing data, and multivariate statistical analysis, Discovering Knowledge in Data, Second Edition remains the eminent reference on data mining .", "title": "" }, { "docid": "4c8ff8cf19292475b724d7036ed8b75c", "text": "The purpose of this study was to examine intratester reliability of a test designed to measure the standing pelvic-tilt angle, active posterior and anterior pelvic-tilt angles and ranges of motion, and the total pelvic-tilt range of motion (ROM). After an instruction session, the pelvic-tilt angles of the right side of 20 men were calculated using trigonometric functions. Ranges of motion were determined from the pelvic-tilt angles. Intratester reliability coefficients (Pearson r) for test and retest measurements were .88 for the standing pelvic-tilt angle, .88 for the posterior pelvic-tilt angle, .92 for the anterior pelvic-tilt angle, .62 for the posterior pelvic-tilt ROM, .92 for the anterior pelvic-tilt ROM, and .87 for the total ROM. We discuss the factors that may have influenced the reliability of the measurements and the clinical implications and limitations of the test. We suggest additional research to examine intratester reliability of measuring the posterior pelvic-tilt ROM, intertester reliability of measuring all angles and ROM, and the pelvic tilt of many types of subjects.", "title": "" }, { "docid": "962831a1fa8771c68feb894dc2c63943", "text": "San-Francisco in the US and Natal in Brazil are two coastal cities which are known rather for its tech scene and natural beauty than for its criminal activities. We analyze characteristics of the urban environment in these two cities, deploying a machine learning model to detect categories and hotspots of criminal activities. We propose an extensive set of spatio-temporal & urban features which can significantly improve the accuracy of machine learning models for these tasks, one of which achieved Top 1% performance on a Crime Classification Competition by kaggle.com. Extensive evaluation on several years of crime records from both cities show how some features — such as the street network — carry important information about criminal activities.", "title": "" }, { "docid": "5bc8c2bc2a0ac668c256ad802f191288", "text": "Although the widespread use of gaming for leisure purposes has been well documented, the use of games to support cultural heritage purposes, such as historical teaching and learning, or for enhancing museum visits, has been less well considered. The state-of-the-art in serious game technology is identical to that of the state-of-the-art in entertainment games technology. As a result, the field of serious heritage games concerns itself with recent advances in computer games, real-time computer graphics, virtual and augmented reality and artificial intelligence. On the other hand, the main strengths of serious gaming applications may be generalised as being in the areas of communication, visual expression of information, collaboration mechanisms, interactivity and entertainment. In this report, we will focus on the state-of-the-art with respect to the theories, methods and technologies used in serious heritage games. We provide an overview of existing literature of relevance to the domain, discuss the strengths and weaknesses of the described methods and point out unsolved problems and challenges. 
In addition, several case studies illustrating the application of methods and technologies used in cultural heritage are presented.", "title": "" }, { "docid": "b845aaa999c1ed9d99cb9e75dff11429", "text": "We present a new space-efficient approach, (SparseDTW ), to compute the Dynamic Time Warping (DTW ) distance between two time series that always yields the optimal result. This is in contrast to other known approaches which typically sacrifice optimality to attain space efficiency. The main idea behind our approach is to dynamically exploit the existence of similarity and/or correlation between the time series. The more the similarity between the time series the less space required to compute the DTW between them. To the best of our knowledge, all other techniques to speedup DTW, impose apriori constraints and do not exploit similarity characteristics that may be present in the data. We conduct experiments and demonstrate that SparseDTW outperforms previous approaches.", "title": "" }, { "docid": "5deed8c53f2b28f23d8f06cdc446209a", "text": "Natural Language Inference (NLI) is a fundamentally important task in natural language processing that has many applications. It is concerned with classifying the logical relation between two sentences. In this paper, we propose attention memory networks (AMNs) to recognize entailment and contradiction between two sentences. In our model, an attention memory neural network (AMNN) has a variable sized encoding memory and supports semantic compositionality. AMNN captures sentence level semantics and reasons relation between the sentence pairs; then we use a Sparsemax layer over the output of the generated matching vectors (sentences) for classification. Our experiments on the Stanford Natural Language Inference (SNLI) Corpus show that our model outperforms the state of the art, achieving an accuracy of 87.4% on the test data.", "title": "" }, { "docid": "77bb711327befd3f4169b4548cc5a85d", "text": "We present a new technique for learning visual-semantic embeddings for cross-modal retrieval. Inspired by hard negative mining, the use of hard negatives in structured prediction, and ranking loss functions, we introduce a simple change to common loss functions used for multi-modal embeddings. That, combined with fine-tuning and use of augmented data, yields significant gains in retrieval performance. We showcase our approach, VSE++, on MS-COCO and Flickr30K datasets, using ablation studies and comparisons with existing methods. On MS-COCO our approach outperforms state-ofthe-art methods by 8.8% in caption retrieval and 11.3% in image retrieval (at R@1).", "title": "" }, { "docid": "dd40063dd10027f827a65976261c8683", "text": "Many software process methods and tools presuppose the existence of a formal model of a process. Unfortunately, developing a formal model for an on-going, complex process can be difficult, costly, and error prone. This presents a practical barrier to the adoption of process technologies, which would be lowered by automated assistance in creating formal models. To this end, we have developed a data analysis technique that we term process discovery. Under this technique, data describing process events are first captured from an on-going process and then used to generate a formal model of the behavior of that process. In this article we describe a Markov method that we developed specifically for process discovery, as well as describe two additional methods that we adopted from other domains and augmented for our purposes. 
The three methods range from the purely algorithmic to the purely statistical. We compare the methods and discuss their application in an industrial case study.", "title": "" }, { "docid": "df0e13e1322a95046a91fb7c867d968a", "text": "Taking into consideration both external (i.e. technology acceptance factors, website service quality) as well as internal factors (i.e. specific holdup cost) , this research explores how the customers’ satisfaction and loyalty, when shopping and purchasing on the internet , can be associated with each other and how they are affected by the above dynamics. This research adopts the Structural Equation Model (SEM) as the main analytical tool. It investigates those who used to have shopping experiences in major shopping websites of Taiwan. The research results point out the following: First, customer satisfaction will positively influence customer loyalty directly; second, technology acceptance factors will positively influence customer satisfaction and loyalty directly; third, website service quality can positively influence customer satisfaction and loyalty directly; and fourth, specific holdup cost can positively influence customer loyalty directly, but cannot positively influence customer satisfaction directly. This paper draws on the research results for implications of managerial practice, and then suggests some empirical tactics in order to help enhancing management performance for the website shopping industry.", "title": "" }, { "docid": "cb61f83a28b87a4974bc53c92bb72cfc", "text": "The main issue of a billet heater using induction heating is to avoid billets that were not heated at a desired temperature. In order to improve the induction heating system, it is necessary to clarify the heating property of an object due to eddy current loss and to investigate the temperature distribution in an object by the magneto-thermal coupled analysis. In this paper, the eddy current and temperature distribution of a billet heater is analyzed considering the heat emission, heat conduction, and temperature dependence of magnetic characteristics of the billet. It is shown that the calculated values of temperature in the center and surface of a billet are in good agreement with measured values. The precise analysis is possible by considering the temperature dependence of magnetic characteristics, heat conductivity, etc. The detailed behavior of the heat generation in the billet is clarified. The skin depth is increased because the resistivity of the billet is increased and the permeability is decreased at high temperature. As a result, the flux in the billet is reduced, and then the power (eddy current loss) in the billet is decreased.", "title": "" }, { "docid": "ce21a811ea260699c18421d99221a9f2", "text": "Medical image processing is the most challenging and emerging field now a day’s processing of MRI images is one of the parts of this field. The quantitative analysis of MRI brain tumor allows obtaining useful key indicators of disease progression. This is a computer aided diagnosis systems for detecting malignant texture in biological study. This paper presents an approach in computer-aided diagnosis for early prediction of brain cancer using Texture features and neuro classification logic. This paper describes the proposed strategy for detection; extraction and classification of brain tumour from MRI scan images of brain; which incorporates segmentation and morphological functions which are the basic functions of image processing. 
Here we detect the tumour, segment the tumour and we calculate the area of the tumour. Severity of the disease can be known, through classes of brain tumour which is done through neuro fuzzy classifier and creating a user friendly environment using GUI in MATLAB. In this paper cases of 10 patients is taken and severity of disease is shown and different features of images are calculated.", "title": "" }, { "docid": "4ac8435b96c020231c775c4625b5ff0a", "text": "This article addresses the issue of student writing in higher education. It draws on the findings of an Economic and Social Research Council funded project which examined the contrasting expectations and interpretations of academic staff and students regarding undergraduate students' written assignments. It is suggested that the implicit models that have generally been used to understand student writing do not adequately take account of the importance of issues of identity and the institutional relationships of power and authority that surround, and are embedded within, diverse student writing practices across the university. A contrasting and therefore complementary perspective is used to present debates about 'good' and `poor' student writing. The article outlines an 'academic literacies' framework which can take account of the conflicting and contested nature of writing practices, and may therefore be more valuable for understanding student writing in today's higher education than traditional models and approaches.", "title": "" }, { "docid": "04d9f96fcd218e61f41412518c18cf31", "text": "Squeak is an open, highly-portable Smalltalk implementation whose virtual machine is written entirely in Smalltalk, making it easy to. debug, analyze, and change. To achieve practical performance, a translator produces an equivalent C program whose performance is comparable to commercial Smalltalks.Other noteworthy aspects of Squeak include: a compact object format that typically requires only a single word of overhead per object; a simple yet efficient incremental garbage collector for 32-bit direct pointers; efficient bulk-mutation of objects; extensions of BitBlt to handle color of any depth and anti-aliased image rotation and scaling; and real-time sound and music synthesis written entirely in Smalltalk.", "title": "" }, { "docid": "de2294753031935ca4729a729ac23283", "text": "We propose a novel system TEXplorer that integrates keyword-based object ranking with the aggregation and exploration power of OLAP in a text database with rich structured attributes available, e.g., a product review database. TEXplorer can be implemented within a multi-dimensional text database, where each row is associated with structural dimensions (attributes) and text data (e.g., a document). The system utilizes the text cube data model, where a cell aggregates a set of documents with matching values in a subset of dimensions. Cells in a text cube capture different levels of summarization of the documents, and can represent objects at different conceptual levels.\n Users query the system by submitting a set of keywords. Instead of returning a ranked list of all the cells, we propose a keyword-based interactive exploration framework that could offer flexible OLAP navigational guides and help users identify the levels and objects they are interested in. A novel significance measure of dimensions is proposed based on the distribution of IR relevance of cells. 
During each interaction stage, dimensions are ranked according to their significance scores to guide drilling down; and cells in the same cuboids are ranked according to their relevance to guide exploration. We propose efficient algorithms and materialization strategies for ranking top-k dimensions and cells. Finally, extensive experiments on real datasets demonstrate the efficiency and effectiveness of our approach.", "title": "" }, { "docid": "b55a314aea8914db8705cd3974c862bb", "text": "This study examines the mediating effect of perceived usefulness on the relationship between tax service quality (correctness, response time, system support) and continuance usage intention of e-filing system in Malaysia. A total of 116 data was analysed using Partial Least Squared Method (PLS). The result showed that Perceived Usefulness has a partial mediating effect on the relationship between tax service quality (Correctness, Response Time) with the continuance usage intention and tax service quality (correctness) has significant positive relationship with continuance usage intention. Perceived usefulness was found to be the most important predictor of continuance usage intention.", "title": "" }, { "docid": "030d09fd465d76f96cea06ff4f4ed24e", "text": "Several large technology companies including Apple, Google, and Samsung are entering the expanding market of population health with the introduction of wearable devices. This technology, worn in clothing or accessories, is part of a larger movement often referred to as the “quantified self.” The notion is that by recording and reporting information about behaviors such as physical activity or sleep patterns, these devices can educate and motivate individuals toward better habits and better health. The gap between recording information and changing behavior is substantial, however, and while these devices are increasing in popularity, little evidence suggests that they are bridging that gap. Only 1% to 2% of individuals in the United States have used a wearable device, but annual sales are projected to increase to more than $50 billion by 2018.1 Some of these devices aim at individuals already motivated to change their health behaviors. Others are being considered by health care organizations, employers, insurers, and clinicians who see promise in using these devices to better engage less motivated individuals. Some of these devices may justify that promise, but less because of their technology and more because of the behavioral change strategies that can be designed around them. Most health-related behaviors such as eating well and exercising regularly could lead to meaningful improvements in population health only if they are sustained. If wearable devices are to be part of the solution, they either need to create enduring new habits, turning external motivations into internal ones (which is difficult), or they need to sustain their external motivation (which is also difficult). This requirement of sustained behavior change is a major challenge, but many mobile health applications have not yet leveraged principles from theories of health behavior.2 Feedback loops could be better designed around wearable devices to sustain engagement by using concepts from behavioral economics.3 Individuals are often motivated by the experience of past rewards and the prospect of future rewards. 
Lottery-based designs leverage the fact that individuals tend to assign undue weight to small probabilities and are more engaged by intermittent variable rewards than with constant reinforcement. Anticipated regret, an individual’s concern or anxiety over the reward he or she might not win, can have a significant effect on decision making. Feedback could be designed to use this concept by informing individuals what they would have won had they been adherent to the new behavior. Building new habits may be best facilitated by presenting frequent feedback with appropriate framing and by using a trigger that captures the individual’s attention at those moments when he or she is most likely to take action. Identifying and Addressing the Gaps Using wearable devices to effectively promote health behavior change is a complex, multistep process. First, a person must be motivated enough to want a device and be able to afford it; this is a challenge, because some devices can cost hundreds of dollars. Perhaps for these reasons, wearable devices seem to appeal to groups that might need them least. In a survey of wearable device users, 75% described themselves as “early adopters of technology,” 48% were younger than 35 years, and 29% reportedly earn more than $100 000 annually.4 The individuals who might have the most to gain from these devices are likely to be older and less affluent. To better engage these individuals, wearable devices must be more affordable, or new funding mechanisms are needed. For example, employers and insurers might pay for a device that helps individuals better adhere to their medications, potentially yielding significant downstream health care savings. Or, devices that demonstrate effectiveness could be financed in a manner similar to that for prescription drugs. Second, once a device is acquired, a person needs to remember to wear it and occasionally recharge it— additional behaviors required from individuals who may have a difficult time already. Many wearable devices require data to be sent to a phone or computer, adding additional steps and more equipment. According to one survey (n = 6223), more than half of individuals who purchased a wearable device stop using it and, of these, onethird did so before 6 months.5 One potential solution might be to better leverage smartphones; most people with these phones carry them often. Ideally, using a smartphone does not require any effort beyond setup— like an app that gets its power from the phone that people are already accustomed to regularly charging. Because data can be transmitted passively via a cellular connection, there is no need for individuals to actively update their data. Although smartphones are expensive, many people already have them, and the reach of these devices is increasing. Third, the device must be able to accurately track its targeted behavior. Accelerometers, commonly found within wearable devices, have been well studied for tracking step counts. However, newer technologies, such as those that measure heart rate or sleep patterns, have not been well validated. Similar to mobile health applications, the increase in the availability and types of wearable devices has not been matched by appropriate testing or oversight to make sure they work.6 Wearable devices are unlikely to have the same capabilities as home devices that measure blood pressure or track medication adherence. 
However, a smartwatch may facilitate feedback from these devices, forming a better", "title": "" }, { "docid": "8415585161d51b500f99aa36650a67d9", "text": "A brain-computer interface (BCI) is a communication system that can help users interact with the outside environment by translating brain signals into machine commands. The use of electroencephalographic (EEG) signals has become the most common approach for a BCI because of their usability and strong reliability. Many EEG-based BCI devices have been developed with traditional wet- or micro-electro-mechanical-system (MEMS)-type EEG sensors. However, those traditional sensors have uncomfortable disadvantage and require conductive gel and skin preparation on the part of the user. Therefore, acquiring the EEG signals in a comfortable and convenient manner is an important factor that should be incorporated into a novel BCI device. In the present study, a wearable, wireless and portable EEG-based BCI device with dry foam-based EEG sensors was developed and was demonstrated using a gaming control application. The dry EEG sensors operated without conductive gel; however, they were able to provide good conductivity and were able to acquire EEG signals effectively by adapting to irregular skin surfaces and by maintaining proper skin-sensor impedance on the forehead site. We have also demonstrated a real-time cognitive stage detection application of gaming control using the proposed portable device. The results of the present study indicate that using this portable EEG-based BCI device to conveniently and effectively control the outside world provides an approach for researching rehabilitation engineering.", "title": "" } ]
scidocsrr
47bffba44fe4f14cc440205f9f574c1b
AckSeer: a repository and search engine for automatically extracted acknowledgments from digital libraries
[ { "docid": "2210176bcb0f139e3f7f7716447f3920", "text": "Automatic metadata generation provides scalability and usability for digital libraries and their collections. Machine learning methods offer robust and adaptable automatic metadata extraction. We describe a Support Vector Machine classification-based method for metadata extraction from header part of research papers and show that it outperforms other machine learning methods on the same task. The method first classifies each line of the header into one or more of 15 classes. An iterative convergence procedure is then used to improve the line classification by using the predicted class labels of its neighbor lines in the previous round. Further metadata extraction is done by seeking the best chunk boundaries of each line. We found that discovery and use of the structural patterns of the data and domain based word clustering can improve the metadata extraction performance. An appropriate feature normalization also greatly improves the classification performance. Our metadata extraction method was originally designed to improve the metadata extraction quality of the digital libraries Citeseer [17] and EbizSearch[24]. We believe it can be generalized to other digital libraries.", "title": "" }, { "docid": "4eaf40cdef12d0d2be1d3c6a96c94841", "text": "Acknowledgements in research publications, like citations, indicate influential contributions to scientific work; however, large-scale acknowledgement analyses have traditionally been impractical due to the high cost of manual information extraction. In this paper we describe a mixture method for automatically mining acknowledgements from research documents using a combination of a Support Vector Machine and regular expressions. The algorithm has been implemented as a plug-in to the CiteSeer Digital Library and the extraction results have been integrated with the traditional metadata and citation index of the CiteSeer system. As a demonstration, we use CiteSeer's autonomous citation indexing (ACI) feature to measure the relative impact of acknowledged entities, and present the top twenty acknowledged entities within the archive.", "title": "" } ]
[ { "docid": "cd811b8c1324ca0fef6a25e1ca5c4ce9", "text": "This commentary discusses why most IS academic research today lacks relevance to practice and suggests tactics, procedures, and guidelines that the IS academic community might follow in their research efforts and articles to introduce relevance to practitioners. The commentary begins by defining what is meant by relevancy in the context of academic research. It then explains why there is a lack of attention to relevance within the IS scholarly literature. Next, actions that can be taken to make relevance a more central aspect of IS research and to communicate implications of IS research more effectively to IS professionals are suggested.", "title": "" }, { "docid": "3b4ad43c44d824749da5487b34f31291", "text": "Recent terrorist attacks carried out on behalf of ISIS on American and European soil by lone wolf attackers or sleeper cells remind us of the importance of understanding the dynamics of radicalization mediated by social media communication channels. In this paper, we shed light on the social media activity of a group of twenty-five thousand users whose association with ISIS online radical propaganda has been manually verified. By using a computational tool known as dynamical activity-connectivity maps, based on network and temporal activity patterns, we investigate the dynamics of social influence within ISIS supporters. We finally quantify the effectiveness of ISIS propaganda by determining the adoption of extremist content in the general population and draw a parallel between radical propaganda and epidemics spreading, highlighting that information broadcasters and influential ISIS supporters generate highly-infectious cascades of information contagion. Our findings will help generate effective countermeasures to combat the group and other forms of online extremism.", "title": "" }, { "docid": "9a7e491e4d4490f630b55a94703a6f00", "text": "Learning generic and robust feature representations with data from multiple domains for the same problem is of great value, especially for the problems that have multiple datasets but none of them are large enough to provide abundant data variations. In this work, we present a pipeline for learning deep feature representations from multiple domains with Convolutional Neural Networks (CNNs). When training a CNN with data from all the domains, some neurons learn representations shared across several domains, while some others are effective only for a specific one. Based on this important observation, we propose a Domain Guided Dropout algorithm to improve the feature learning procedure. Experiments show the effectiveness of our pipeline and the proposed algorithm. Our methods on the person re-identification problem outperform stateof-the-art methods on multiple datasets by large margins.", "title": "" }, { "docid": "db9f0c0ab08b07ac3b05d97e580c4aae", "text": "Our objective is to identify requirements (i.e., quality attributes and functional requirements) for software visualization tools. We especially focus on requirements for research tools that target the domains of visualization for software maintenance, reengineering, and reverse engineering. The requirements are identified with a comprehensive literature survey based on relevant publications in journals, conference proceedings, and theses. 
The literature survey has identified seven quality attributes (i.e., rendering scalability, information scalability, interoperability, customizability, interactivity, usability, and adoptability) and seven functional requirements (i.e., views, abstraction, search, filters, code proximity, automatic layouts, and undo/history). The identified requirements are useful for researchers in the software visualization field to build and evaluate tools, and to reason about the domain of software visualization.", "title": "" }, { "docid": "295decfc6cbfe44ee20455fd551c0a45", "text": "Ultraviolet (UV) photodetectors have drawn extensive attention owing to their applications in industrial, environmental and even biological fields. Compared to UV-enhanced Si photodetectors, a new generation of wide bandgap semiconductors, such as (Al, In) GaN, diamond, and SiC, have the advantages of high responsivity, high thermal stability, robust radiation hardness and high response speed. On the other hand, one-dimensional (1D) nanostructure semiconductors with a wide bandgap, such as β-Ga2O3, GaN, ZnO, or other metal-oxide nanostructures, also show their potential for high-efficiency UV photodetection. In some cases such as flame detection, high-temperature thermally stable detectors with high performance are required. This article provides a comprehensive review on the state-of-the-art research activities in the UV photodetection field, including not only semiconductor thin films, but also 1D nanostructured materials, which are attracting more and more attention in the detection field. A special focus is given on the thermal stability of the developed devices, which is one of the key characteristics for the real applications.", "title": "" }, { "docid": "65580dfc9bdf73ef72b6a133ab19ccdd", "text": "A rotary piezoelectric motor design with simple structural components and the potential for miniaturization using a pretwisted beam stator is demonstrated in this paper. The beam acts as a vibration converter to transform axial vibration input from a piezoelectric element into combined axial-torsional vibration. The axial vibration of the stator modulates the torsional friction forces transmitted to the rotor. Prototype stators measuring 6.5 times 6.5 times 67.5 mm were constructed using aluminum (2024-T6) twisted beams with rectangular cross-section and multilayer piezoelectric actuators. The stall torque and no-load speed attained for a rectangular beam with an aspect ratio of 1.44 and pretwist helix angle of 17.7deg were 0.17 mNm and 840 rpm with inputs of 184.4 kHz and 149 mW, respectively. Operation in both clockwise and counterclockwise directions was obtained by choosing either 70.37 or 184.4 kHz for the operating frequency. The effects of rotor preload and power input on motor performance were investigated experimentally. The results suggest that motor efficiency is higher at low power input, and that efficiency increases with preload to a maximum beyond which it begins to drop.", "title": "" }, { "docid": "035b2296835a9c4a7805ba446760071e", "text": "Intrusion detection is the process of monitoring the events occurring in a computer system or network and analyzing them for signs of intrusions, defined as attempts to compromise the confidentiality, integrity, availability, or to bypass the security mechanisms of a computer or network. This paper proposes the development of an Intrusion Detection Program (IDP) which could detect known attack patterns. 
An IDP does not eliminate the use of any preventive mechanism but it works as the last defensive mechanism in securing the system. Three variants of genetic programming techniques, namely Linear Genetic Programming (LGP), Multi-Expression Programming (MEP) and Gene Expression Programming (GEP), were evaluated to design IDP. Several indices are used for comparisons and a detailed analysis of the MEP technique is provided. Empirical results reveal that genetic programming technique could play a major role in developing IDP, which are light weight and accurate when compared to some of the conventional intrusion detection systems based on machine learning paradigms.", "title": "" }, { "docid": "ae8e043f980d313499433d49aa90467c", "text": "During the last few years, Convolutional Neural Networks are slowly but surely becoming the default method to solve many computer vision related problems. This is mainly due to the continuous success that they have achieved when applied to certain tasks such as image, speech, or object recognition. Despite all the efforts, object class recognition methods based on deep learning techniques still have room for improvement. Most of the current approaches do not fully exploit 3D information, which has been proven to effectively improve the performance of other traditional object recognition methods. In this work, we propose PointNet, a new approach inspired by VoxNet and 3D ShapeNets, as an improvement over the existing methods by using density occupancy grids representations for the input data, and integrating them into a supervised Convolutional Neural Network architecture. An extensive experimentation was carried out, using ModelNet - a large-scale 3D CAD models dataset - to train and test the system, to prove that our approach is on par with state-of-the-art methods in terms of accuracy while being able to perform recognition under real-time constraints.", "title": "" }, { "docid": "78d7c61f7ca169a05e9ae1393712cd69", "text": "Designing an automatic solver for math word problems has been considered as a crucial step towards general AI, with the ability of natural language understanding and logical inference. The state-of-the-art performance was achieved by enumerating all the possible expressions from the quantities in the text and customizing a scoring function to identify the one with the maximum probability. However, it incurs exponential search space with the number of quantities and beam search has to be applied to trade accuracy for efficiency. In this paper, we make the first attempt of applying deep reinforcement learning to solve arithmetic word problems. The motivation is that deep Q-network has witnessed success in solving various problems with big search space and achieves promising performance in terms of both accuracy and running time. To fit the math problem scenario, we propose our MathDQN that is customized from the general deep reinforcement learning framework. Technically, we design the states, actions, reward function, together with a feed-forward neural network as the deep Q-network. Extensive experimental results validate our superiority over state-of-the-art methods. Our MathDQN yields remarkable improvement on most of the datasets and boosts the average precision among all the benchmark datasets by 15%.", "title": "" }, { "docid": "0e2d5444d16f7c710039f6145473131c", "text": "In this paper, a novel design approach for the development of robot hands is presented. 
This approach, which can be considered an alternative to the “classical” one, takes into consideration compliant structures instead of rigid ones. Compliance effects, which were considered in the past as a “defect” to be mechanically eliminated, can, vice versa, be regarded as desired features and can be properly controlled in order to achieve desired properties from the robotic device. In particular, this is true for robot hands, where the mechanical complexity of “classical” design solutions has always originated complicated structures, often with low reliability and high costs. In this paper, an alternative solution to the design of a dexterous robot hand is illustrated, considering a “mechatronic approach” for the integration of the mechanical structure, the sensory and electronic system, the control and the actuation part. Moreover, the preliminary experimental activity on a first prototype is reported and discussed. The results obtained so far, considering also reliability, costs and development time, are very encouraging, and allow us to foresee a wider diffusion of dexterous hands for robotic applications.", "title": "" }, { "docid": "21ca1c1fce82a764e9dc7b31e11cb0fa", "text": "We describe an approach to learning from long-tailed, imbalanced datasets that are prevalent in real-world settings. Here, the challenge is to learn accurate “few-shot” models for classes in the tail of the class distribution, for which little data is available. We cast this problem as transfer learning, where knowledge from the data-rich classes in the head of the distribution is transferred to the data-poor classes in the tail. Our key insights are as follows. First, we propose to transfer meta-knowledge about learning-to-learn from the head classes. This knowledge is encoded with a meta-network that operates on the space of model parameters, that is trained to predict many-shot model parameters from few-shot model parameters. Second, we transfer this meta-knowledge in a progressive manner, from classes in the head to the “body”, and from the “body” to the tail. That is, we transfer knowledge in a gradual fashion, regularizing meta-networks for few-shot regression with those trained with more training data. This allows our final network to capture a notion of model dynamics, that predicts how model parameters are likely to change as more training data is gradually added. We demonstrate results on image classification datasets (SUN, Places, and ImageNet) tuned for the long-tailed setting, that significantly outperform common heuristics, such as data resampling or reweighting.", "title": "" }, { "docid": "b740f07b95041e764bfe8cb5a59b14a8", "text": "We present in this paper a statistical model for language-independent bi-directional conversion between spelling and pronunciation, based on joint grapheme/phoneme units extracted from automatically aligned data. The model is evaluated on spelling-to-pronunciation and pronunciation-to-spelling conversion on the NetTalk database and the CMU dictionary. We also study the effect of including lexical stress in the pronunciation. 
Although a direct comparison is difficult to make, our model’s performance appears to be as good or better than that of other data-driven approaches that have been applied to the same tasks.", "title": "" }, { "docid": "26787002ed12cc73a3920f2851449c5e", "text": "This article brings together three current themes in organizational behavior: (1) a renewed interest in assessing person-situation interactional constructs, (2) the quantitative assessment of organizational culture, and (3) the application of \"Q-sort,\" or template-matching, approaches to assessing person-situation interactions. Using longitudinal data from accountants and M.B.A. students and cross-sectional data from employees of government agencies and public accounting firms, we developed and validated an instrument for assessing personorganization fit, the Organizational Culture Profile (OCP). Results suggest that the dimensionality of individual preferences for organizational cultures and the existence of these cultures are interpretable. Further, person-organization fit predicts job satisfaction and organizational commitment a year after fit was measured and actual turnover after two years. This evidence attests to the importance of understanding the fit between individuals' preferences and organizational cultures.", "title": "" }, { "docid": "60d90ae1407c86559af63f20536202dc", "text": "TCP Westwood (TCPW) is a sender-side modification of the TCP congestion window algorithm that improves upon the performance of TCP Reno in wired as well as wireless networks. The improvement is most significant in wireless networks with lossy links. In fact, TCPW performance is not very sensitive to random errors, while TCP Reno is equally sensitive to random loss and congestion loss and cannot discriminate between them. Hence, the tendency of TCP Reno to overreact to errors. An important distinguishing feature of TCP Westwood with respect to previous wireless TCP “extensions” is that it does not require inspection and/or interception of TCP packets at intermediate (proxy) nodes. Rather, TCPW fully complies with the end-to-end TCP design principle. The key innovative idea is to continuously measure at the TCP sender side the bandwidth used by the connection via monitoring the rate of returning ACKs. The estimate is then used to compute congestion window and slow start threshold after a congestion episode, that is, after three duplicate acknowledgments or after a timeout. The rationale of this strategy is simple: in contrast with TCP Reno which “blindly” halves the congestion window after three duplicate ACKs, TCP Westwood attempts to select a slow start threshold and a congestion window which are consistent with the effective bandwidth used at the time congestion is experienced. We call this mechanism faster recovery. The proposed mechanism is particularly effective over wireless links where sporadic losses due to radio channel problems are often misinterpreted as a symptom of congestion by current TCP schemes and thus lead to an unnecessary window reduction. Experimental studies reveal improvements in throughput performance, as well as in fairness. In addition, friendliness with TCP Reno was observed in a set of experiments showing that TCP Reno connections are not starved by TCPW connections. Most importantly, TCPW is extremely effective in mixed wired and wireless networks where throughput improvements of up to 550% are observed. 
Finally, TCPW performs almost as well as localized link layer approaches such as the popular Snoop scheme, without incurring the overhead of a specialized link layer protocol.", "title": "" }, { "docid": "b8f23ec8e704ee1cf9dbe6063a384b09", "text": "The Dirichlet distribution and its compound variant, the Dirichlet-multinomial, are two of the most basic models for proportional data, such as the mix of vocabulary words in a text document. Yet the maximum-likelihood estimate of these distributions is not available in closed-form. This paper describes simple and efficient iterative schemes for obtaining parameter estimates in these models. In each case, a fixed-point iteration and a Newton-Raphson (or generalized Newton-Raphson) iteration is provided. The Dirichlet distribution is a model of how proportions vary. Let p denote a random vector whose elements sum to 1, so that pk represents the proportion of item k. Under the Dirichlet model with parameter vector α, the probability density at p is p(p) ∼ D(α1, ..., αK) = [Γ(∑k αk) / ∏k Γ(αk)] ∏k pk^(αk − 1), where pk > 0.", "title": "" }, { "docid": "4bfb6e5b039dd434e0c8aed461536acf", "text": "In many applications transactions between the elements of an information hierarchy occur over time. For example, the product offers of a department store can be organized into product groups and subgroups to form an information hierarchy. A market basket consisting of the products bought by a customer forms a transaction. Market baskets of one or more customers can be ordered by time into a sequence of transactions. Each item in a transaction is associated with a measure, for example, the amount paid for a product.\n In this paper we present a novel method for visualizing sequences of these kinds of transactions in information hierarchies. It uses a tree layout to draw the hierarchy and a timeline to represent progression of transactions in the hierarchy. We have developed several interaction techniques that allow the users to explore the data. Smooth animations help them to track the transitions between views. The usefulness of the approach is illustrated by examples from several very different application domains.", "title": "" }, { "docid": "7a3573bfb32dc1e081d43fe9eb35a23b", "text": "Collections of relational paraphrases have been automatically constructed from large text corpora, as a WordNet counterpart for the realm of binary predicates and their surface forms. However, these resources fall short in their coverage of hypernymy links (subsumptions) among the synsets of phrases. This paper closes this gap by computing a high-quality alignment between the relational phrases of the Patty taxonomy, one of the largest collections of this kind, and the verb senses of WordNet. To this end, we devise judicious features and develop a graph-based alignment algorithm by adapting and extending the SimRank random-walk method. The resulting taxonomy of relational phrases and verb senses, coined HARPY, contains 20,812 synsets organized into a Directed Acyclic Graph (DAG) with 616,792 hypernymy links. Our empirical assessment indicates that the alignment links between Patty and WordNet have high accuracy, with Mean Reciprocal Rank (MRR) score 0.7 and Normalized Discounted Cumulative Gain (NDCG) score 0.73. 
As an additional extrinsic value, HARPY provides fine-grained lexical types for the arguments of verb senses in WordNet.", "title": "" }, { "docid": "43831e29e62c574a93b6029409690bfe", "text": "We present a convolutional network that is equivariant to rigid body motions. The model uses scalar-, vector-, and tensor fields over 3D Euclidean space to represent data, and equivariant convolutions to map between such representations. These SE(3)-equivariant convolutions utilize kernels which are parameterized as a linear combination of a complete steerable kernel basis, which is derived analytically in this paper. We prove that equivariant convolutions are the most general equivariant linear maps between fields over R³. Our experimental results confirm the effectiveness of 3D Steerable CNNs for the problem of amino acid propensity prediction and protein structure classification, both of which have inherent SE(3) symmetry.", "title": "" }, { "docid": "fb1724b8baf76ceec32647fc6e5f2039", "text": "The formation of informal settlements in and around urban complexes has largely been ignored in the context of procedural city modeling. However, many cities in South Africa and globally can attest to the presence of such settlements. This paper analyses the phenomenon of informal settlements from a procedural modeling perspective. Aerial photography from two South African urban complexes, namely Johannesburg and Cape Town, is used as a basis for the extraction of various features that distinguish different types of settlements. In particular, the road patterns which have formed within such settlements are analysed, and various procedural techniques proposed (including Voronoi diagrams, subdivision and L-systems) to replicate the identified features. A qualitative assessment of the procedural techniques is provided, and the most suitable combination of techniques identified for unstructured and structured settlements. In particular it is found that a combination of Voronoi diagrams and subdivision provides the closest match to unstructured informal settlements. A combination of L-systems, Voronoi diagrams and subdivision is found to produce the closest pattern to a structured informal settlement.", "title": "" }, { "docid": "d4d24bee47b97e1bf4aadad0f3993e78", "text": "An aircraft landed safely is the result of a huge organizational effort required to cope with a complex system made up of humans, technology and the environment. The aviation safety record has improved dramatically over the years to reach an unprecedented low in terms of accidents per million take-offs, without ever achieving the “zero accident” target. The introduction of automation on board airplanes must be acknowledged as one of the driving forces behind the decline in the accident rate down to the current level.", "title": "" } ]
scidocsrr
b61352b48264876b641fe9f23310e6df
Terrorism Event Classification Using Fuzzy Inference Systems
[ { "docid": "08634303d285ec95873e003eeac701eb", "text": "This paper describes the application of adaptive neuro-fuzzy inference system (ANFIS) model for classification of electroencephalogram (EEG) signals. Decision making was performed in two stages: feature extraction using the wavelet transform (WT) and the ANFIS trained with the backpropagation gradient descent method in combination with the least squares method. Five types of EEG signals were used as input patterns of the five ANFIS classifiers. To improve diagnostic accuracy, the sixth ANFIS classifier (combining ANFIS) was trained using the outputs of the five ANFIS classifiers as input data. The proposed ANFIS model combined the neural network adaptive capabilities and the fuzzy logic qualitative approach. Some conclusions concerning the saliency of features on classification of the EEG signals were obtained through analysis of the ANFIS. The performance of the ANFIS model was evaluated in terms of training performance and classification accuracies and the results confirmed that the proposed ANFIS model has potential in classifying the EEG signals.", "title": "" } ]
[ { "docid": "ac5c015aa485084431b8dba640f294b5", "text": "In human sentence processing, cognitive load can be defined many ways. This report considers a definition of cognitive load in terms of the total probability of structural options that have been disconfirmed at some point in a sentence: the surprisal of word wi given its prefix w0...i−1 on a phrase-structural language model. These loads can be efficiently calculated using a probabilistic Earley parser (Stolcke, 1995) which is interpreted as generating predictions about reading time on a word-by-word basis. Under grammatical assumptions supported by corpusfrequency data, the operation of Stolcke’s probabilistic Earley parser correctly predicts processing phenomena associated with garden path structural ambiguity and with the subject/object relative asymmetry.", "title": "" }, { "docid": "fb426b89d1a65c597d190582393254eb", "text": "The amount of data of all kinds available electronically has increased dramatically in recent years. The data resides in di erent forms, ranging from unstructured data in le systems to highly structured in relational database systems. Data is accessible through a variety of interfaces including Web browsers, database query languages, application-speci c interfaces, or data exchange formats. Some of this data is raw data, e.g., images or sound. Some of it has structure even if the structure is often implicit, and not as rigid or regular as that found in standard database systems. Sometimes the structure exists but has to be extracted from the data. Sometimes also it exists but we prefer to ignore it for certain purposes such as browsing. We call here semi-structured data this data that is (from a particular viewpoint) neither raw data nor strictly typed, i.e., not table-oriented as in a relational model or sorted-graph as in object databases. As will seen later when the notion of semi-structured data is more precisely de ned, the need for semi-structured data arises naturally in the context of data integration, even when the data sources are themselves well-structured. Although data integration is an old topic, the need to integrate a wider variety of dataformats (e.g., SGML or ASN.1 data) and data found on the Web has brought the topic of semi-structured data to the forefront of research. The main purpose of the paper is to isolate the essential aspects of semistructured data. We also survey some proposals of models and query languages for semi-structured data. In particular, we consider recent works at Stanford U. and U. Penn on semi-structured data. In both cases, the motivation is found in the integration of heterogeneous data. The \\lightweight\" data models they use (based on labelled graphs) are very similar. As we shall see, the topic of semi-structured data has no precise boundary. Furthermore, a theory of semi-structured data is still missing. We will try to highlight some important issues in this context. The paper is organized as follows. In Section 2, we discuss the particularities of semi-structured data. In Section 3, we consider the issue of the data structure and in Section 4, the issue of the query language.", "title": "" }, { "docid": "e8a144ec1c58f8fa07b518a754d97fc7", "text": "Smart Cities appeared in literature in late ‘90s and various approaches have been developed so far. 
Until today, smart city does not describe a city with particular attributes but it is used to describe different cases in urban spaces: web portals that virtualize cities or city guides; knowledge bases that address local needs; agglomerations with Information and Communication Technology (ICT) infrastructure that attract business relocation; metropolitan-wide ICT infrastructures that deliver e-services to the citizens; ubiquitous environments; and recently ICT infrastructure for ecological use. Researchers, practicians, businessmen and policy makers consider smart city from different perspectives and most of them agree on a model that measures urban economy, mobility, environment, living, people and governance. On the other hand, ICT and construction industries stress to capitalize smart city and a new market seems to be generated in this domain. This chapter aims to perform a literature review, discover and classify the particular schools of thought, universities and research centres as well as companies that deal with smart city domain and discover alternative approaches, models, architecture and frameworks with this regard.", "title": "" }, { "docid": "4597ab07ac630eb5e256f57530e2828e", "text": "This paper presents novel QoS extensions to distributed control plane architectures for multimedia delivery over large-scale, multi-operator Software Defined Networks (SDNs). We foresee that large-scale SDNs shall be managed by a distributed control plane consisting of multiple controllers, where each controller performs optimal QoS routing within its domain and shares summarized (aggregated) QoS routing information with other domain controllers to enable inter-domain QoS routing with reduced problem dimensionality. To this effect, this paper proposes (i) topology aggregation and link summarization methods to efficiently acquire network topology and state information, (ii) a general optimization framework for flow-based end-to-end QoS provision over multi-domain networks, and (iii) two distributed control plane designs by addressing the messaging between controllers for scalable and secure inter-domain QoS routing. We apply these extensions to streaming of layered videos and compare the performance of different control planes in terms of received video quality, communication cost and memory overhead. Our experimental results show that the proposed distributed solution closely approaches the global optimum (with full network state information) and nicely scales to large networks.", "title": "" }, { "docid": "894eac11da60a5d81c437b3953d16408", "text": "ion Levels 3 Behavior (Function) Structure (Netlist) Physical (Layout) Logic Circuit Processor System", "title": "" }, { "docid": "5a573ae9fad163c6dfe225f59b246b7f", "text": "The sharp increase of plastic wastes results in great social and environmental pressures, and recycling, as an effective way currently available to reduce the negative impacts of plastic wastes, represents one of the most dynamic areas in the plastics industry today. Froth flotation is a promising method to solve the key problem of recycling process, namely separation of plastic mixtures. This review surveys recent literature on plastics flotation, focusing on specific features compared to ores flotation, strategies, methods and principles, flotation equipments, and current challenges. 
In terms of separation methods, plastics flotation is divided into gamma flotation, adsorption of reagents, surface modification and physical regulation.", "title": "" }, { "docid": "a0fc4982c5d63191ab1b15deff4e65d6", "text": "Sentiment classification is an important subject in text mining research, which concerns the application of automatic methods for predicting the orientation of sentiment present on text documents, with many applications on a number of areas including recommender and advertising systems, customer intelligence and information retrieval. In this paper, we provide a survey and comparative study of existing techniques for opinion mining including machine learning and lexicon-based approaches, together with evaluation metrics. Also cross-domain and cross-lingual approaches are explored. Experimental results show that supervised machine learning methods, such as SVM and naive Bayes, have higher precision, while lexicon-based methods are also very competitive because they require few effort in human-labeled document and isn't sensitive to the quantity and quality of the training dataset.", "title": "" }, { "docid": "7eeb2bf2aaca786299ebc8507482e109", "text": "In this paper we argue that questionanswering (QA) over technical domains is distinctly different from TREC-based QA or Web-based QA and it cannot benefit from data-intensive approaches. Technical questions arise in situations where concrete problems require specific answers and explanations. Finding a justification of the answer in the context of the document is essential if we have to solve a real-world problem. We show that NLP techniques can be used successfully in technical domains for high-precision access to information stored in documents. We present ExtrAns, an answer extraction system over technical domains, its architecture, its use of logical forms for answer extractions and how terminology extraction becomes an important part of the system.", "title": "" }, { "docid": "47dcffdb6d8543034784bebabf3a17a9", "text": "This research tends to explore relationship between brand equity as a whole construct comprising (brand association & brand awareness, perceived service quality and service loyalty) with purchase intention. Questionnaire has been designed from previous research settings and modified according to Pakistani context in order to ensure validity and reliability of the developed instrument. Convenience sampling comprising a sample size of 150 (non-student) has been taken in this research. Research type is causal correlational and cross sectional in nature. In order to accept or reject hypothesis correlation and regression techniques were applied. Results indicated significant and positive relationship between brand equity and purchase intention, while partial mediation has been proved for brand performance. Only three dimensions of brand equity (perceived service quality, brand association & awareness and service loyalty) have been measured. Other dimensions as brand personality have been ignored. English not being the primary language may have hampered the response rate. As far as the practical implications are concerned practitioners can get benefits from this research as the contribution of brand equity has more than 50% towards purchase intention.", "title": "" }, { "docid": "62309d3434c39ea5f9f901f8eb635539", "text": "The flap design according Karaca et al., used during surgery for removal of impacted third molars prevents complications related to 2 molar periodontal status [125]. Suarez et al. 
believe that this design influences healing primary [122]. This prevents wound dehiscence and evaluated the suture technique to achieve this closure to Sanchis et al. [124], believe that primary closure avoids draining the socket and worse postoperative inflammation and pain, choose to place drains, obtaining a less postoperative painful [127].", "title": "" }, { "docid": "2f5d428b8da4d5b5009729fc1794e53d", "text": "The resolution of a synthetic aperture radar (SAR) image, in range and azimuth, is determined by the transmitted bandwidth and the synthetic aperture length, respectively. Various superresolution techniques for improving resolution have been proposed, and we have proposed an algorithm that we call polarimetric bandwidth extrapolation (PBWE). To apply PBWE to a radar image, one needs to first apply PBWE in the range direction and then in the azimuth direction, or vice versa . In this paper, PBWE is further extended to the 2-D case. This extended case (2D-PBWE) utilizes a 2-D polarimetric linear prediction model and expands the spatial frequency bandwidth in range and azimuth directions simultaneously. The performance of the 2D-PBWE is shown through a simulated radar image and a real polarimetric SAR image", "title": "" }, { "docid": "59616ff3673ecfab0ff6e8224bb87f9c", "text": "The tremendous growth in wireless Internet use is showing no signs of slowing down. Existing cellular networks are starting to be insufficient in meeting this demand, in part due to their inflexible and expensive equipment as well as complex and non-agile control plane. Software-defined networking is emerging as a natural solution for next generation cellular networks as it enables further network function virtualization opportunities and network programmability. In this article, we advocate an all-SDN network architecture with hierarchical network control capabilities to allow for different grades of performance and complexity in offering core network services and provide service differentiation for 5G systems. As a showcase of this architecture, we introduce a unified approach to mobility, handoff, and routing management and offer connectivity management as a service (CMaaS). CMaaS is offered to application developers and over-the-top service providers to provide a range of options in protecting their flows against subscriber mobility at different price levels.", "title": "" }, { "docid": "cf02044b2f0c02fff666282a6e1bf68e", "text": "A rapid method for the measurement of serum and/or plasma, lipid-associated sialic acid levels has been developed. This test has been applied to 850 human sera of which 670 came from patients with nine categories of malignant disease, 80 from persons with benign disorders, and 100 from normal individuals. Lipid-associated sialic acid concentrations were found to be significantly increased (p less than 0.001) in all groups of cancer patients as compared to both those with benign diseases and normal controls. Test sensitivity in the detection of cancer ranged from 77 to 97%. Specificity was, respectively, 81 and 93% for the benign and normal groups. In small samples of patients, no association between test values and tumor burden was found. This test compares favorably with the most widely used tumor marker test, that for carcinoembryonic antigen.", "title": "" }, { "docid": "a671c6eff981b5e3a0466e53f22c4521", "text": "This paper investigates recently proposed approaches for defending against adversarial examples and evaluating adversarial robustness. 
We motivate adversarial risk as an objective for achieving models robust to worst-case inputs. We then frame commonly used attacks and evaluation metrics as defining a tractable surrogate objective to the true adversarial risk. This suggests that models may optimize this surrogate rather than the true adversarial risk. We formalize this notion as obscurity to an adversary, and develop tools and heuristics for identifying obscured models and designing transparent models. We demonstrate that this is a significant problem in practice by repurposing gradient-free optimization techniques into adversarial attacks, which we use to decrease the accuracy of several recently proposed defenses to near zero. Our hope is that our formulations and results will help researchers to develop more powerful defenses.", "title": "" }, { "docid": "07c817e8c2e2d195d56621d7031850ac", "text": "Traditionally, a full-mouth rehabilitation based on full-crown coverage has been recommended treatment for patients affected by severe dental erosion. Nowadays, thanks to improved adhesive techniques, the indications for crowns have decreased and a more conservative approach may be proposed. Even though adhesive treatments simplify both the clinical and laboratory procedures, restoring such patients still remains a challenge due to the great amount of tooth destruction. To facilitate the clinician's task during the planning and execution of a full-mouth adhesive rehabilitation, an innovative concept has been developed: the three-step technique. Three laboratory steps are alternated with three clinical steps, allowing the clinician and the laboratory technician to constantly interact to achieve the most predictable esthetic and functional outcome. During the first step, an esthetic evaluation is performed to establish the position of the plane of occlusion. In the second step, the patient's posterior quadrants are restored at an increased vertical dimension. Finally, the third step reestablishes the anterior guidance. Using the three-step technique, the clinician can transform a full-mouth rehabilitation into a rehabilitation for individual quadrants. The present article focuses on the second step, explaining all the laboratory and clinical steps necessary to restore the posterior quadrants with a defined occlusal scheme at an increased vertical dimension. A brief summary of the first step is also included.", "title": "" }, { "docid": "0a05cfa04d520fcf1db6c4aafb9b65b6", "text": "Motor learning can be defined as changing performance so as to optimize some function of the task, such as accuracy. The measure of accuracy that is optimized is called a loss function and specifies how the CNS rates the relative success or cost of a particular movement outcome. Models of pointing in sensorimotor control and learning usually assume a quadratic loss function in which the mean squared error is minimized. Here we develop a technique for measuring the loss associated with errors. Subjects were required to perform a task while we experimentally controlled the skewness of the distribution of errors they experienced. Based on the change in the subjects' average performance, we infer the loss function. We show that people use a loss function in which the cost increases approximately quadratically with error for small errors and significantly less than quadratically for large errors. The system is thus robust to outliers. 
This suggests that models of sensorimotor control and learning that have assumed minimizing squared error are a good approximation but tend to penalize large errors excessively.", "title": "" }, { "docid": "efd2843175ad0b860ad1607f337addc5", "text": "We demonstrate the usefulness of the uniform resource locator (URL) alone in performing web page classification. This approach is faster than typical web page classification, as the pages do not have to be fetched and analyzed. Our approach segments the URL into meaningful chunks and adds component, sequential and orthographic features to model salient patterns. The resulting features are used in supervised maximum entropy modeling. We analyze our approach's effectiveness on two standardized domains. Our results show that in certain scenarios, URL-based methods approach the performance of current state-of-the-art full-text and link-based methods.", "title": "" }, { "docid": "32f49e1ec3ac3cdd435111e7cfa146bd", "text": "Semantic lexicons such as WordNet and PPDB have been used to improve the vector-based semantic representations of words by adjusting the word vectors. However, such lexicons lack semantic intensity information, inhibiting adjustment of vector spaces to better represent semantic intensity scales. In this work, we adjust word vectors using the semantic intensity information in addition to synonyms and antonyms from WordNet and PPDB, and show improved performance on judging semantic intensity orders of adjective pairs on three different human annotated datasets.", "title": "" }, { "docid": "3e0a731c76324ad0cea438a1d9907b68", "text": "Due in large measure to the prodigious research efforts of Rhoades and his colleagues at the George E. Brown, Jr., Salinity Laboratory over the past two decades, soil electrical conductivity (EC), measured using electrical resistivity and electromagnetic induction (EM), is among the most useful and easily obtained spatial properties of soil that influences crop productivity. As a result, soil EC has become one of the most frequently used measurements to characterize field variability for application to precision agriculture. The value of spatial measurements of soil EC to precision agriculture is widely acknowledged, but soil EC is still often misunderstood and misinterpreted. To help clarify misconceptions, a general overview of the application of soil EC to precision agriculture is presented. The following areas are discussed with particular emphasis on spatial EC measurements: a brief history of the measurement of soil salinity with EC, the basic theories and principles of the soil EC measurement and what it actually measures, an overview of the measurement of soil salinity with various EC measurement techniques and equipment (specifically, electrical resistivity with the Wenner array and EM), examples of spatial EC surveys and their interpretation, applications and value of spatial measurements of soil EC to precision agriculture, and current and future developments. Precision agriculture is an outgrowth of technological developments, such as the soil EC measurement, which facilitate a spatial understanding of soil–water–plant relationships. The future of precision agriculture rests on the reliability, reproducibility, and understanding of these technologies.
The predominant mechanism causing the salt accumulation in irrigated agricultural soils is evapotranspiration. The salt contained in the irrigation water is left behind in the soil as the pure water passes back to the atmosphere through the processes of evaporation and plant transpiration. The effects of salinity are manifested in loss of stand, reduced rates of plant growth, reduced yields, and in severe cases, total crop failure (Rhoades and Loveday, 1990). Salinity limits water uptake by plants by reducing the osmotic potential and thus the total soil water potential. Salinity may also cause specific ion toxicity or upset the nutritional balance. In addition, the salt composition of the soil water influences the composition of cations on the exchange complex of soil particles, which influences soil permeability and tilth, depending on salinity level and exchangeable cation composition. Aside from decreasing crop yield and impacting soil hydraulics, salinity can detrimentally impact ground water, and in areas where tile drainage occurs, drainage water can become a disposal problem as demonstrated in the southern San Joaquin Valley of central California. From a global perspective, irrigated agriculture makes an essential contribution to the food needs of the world. While only 15% of the world’s farmland is irrigated, roughly 35 to 40% of the total supply of food and fiber comes from irrigated agriculture (Rhoades and Loveday, 1990). However, vast areas of irrigated land are threatened by salinization. Although accurate worldwide data are not available, it is estimated that roughly half of all existing irrigation systems (totaling about 250 million ha) are affected by salinity and waterlogging (Rhoades and Loveday, 1990). Salinity within irrigated soils clearly limits productivity in vast areas of the USA and other parts of the world. It is generally accepted that the extent of salt-affected soil is increasing. In spite of the fact that salinity buildup on irrigated lands is responsible for the declining resource base for agriculture, we do not know the exact extent to which soils in our country are salinized, the degree to which productivity is being reduced by salinity, the increasing or decreasing trend in soil salinity development, and the location of contributory sources of salt loading to ground and drainage waters. Suitable soil inventories do not exist and until recently, neither did practical techniques to monitor salinity or assess the", "title": "" }, { "docid": "436900539406faa9ff34c1af12b6348d", "text": "The accomplishments to date on the development of automatic vehicle control (AVC) technology in the Program on Advanced Technology for the Highway (PATH) at the University of California, Berkeley, are summarized. 
The basic principles and assumptions underlying the PATH work are identified, followed by explanations of the work on automating vehicle lateral (steering) and longitudinal (spacing and speed) control. For both lateral and longitudinal control, the modeling of plant dynamics is described first, followed by development of the additional subsystems needed (communications, reference/sensor systems) and the derivation of the control laws. Plans for testing on vehicles in both the near and long term are then discussed.", "title": "" } ]
scidocsrr
a510d90536db787fcd5133959a390a74
Text Summarization within the Latent Semantic Analysis Framework: Comparative Study
[ { "docid": "64fc1433249bb7aba59e0a9092aeee5e", "text": "In this paper, we propose two generic text summarization methods that create text summaries by ranking and extracting sentences from the original documents. The first method uses standard IR methods to rank sentence relevances, while the second method uses the latent semantic analysis technique to identify semantically important sentences, for summary creations. Both methods strive to select sentences that are highly ranked and different from each other. This is an attempt to create a summary with a wider coverage of the document's main content and less redundancy. Performance evaluations on the two summarization methods are conducted by comparing their summarization outputs with the manual summaries generated by three independent human evaluators. The evaluations also study the influence of different VSM weighting schemes on the text summarization performances. Finally, the causes of the large disparities in the evaluators' manual summarization results are investigated, and discussions on human text summarization patterns are presented.", "title": "" }, { "docid": "f4380a5acaba5b534d13e1a4f09afe4f", "text": "Several approaches to automatic speech summarization are discussed below, using the ICSI Meetings corpus. We contrast feature-based approaches using prosodic and lexical features with maximal marginal relevance and latent semantic analysis approaches to summarization. While the latter two techniques are borrowed directly from the field of text summarization, feature-based approaches using prosodic information are able to utilize characteristics unique to speech data. We also investigate how the summarization results might deteriorate when carried out on ASR output as opposed to manual transcripts. All of the summaries are of an extractive variety, and are compared using the software ROUGE.", "title": "" }, { "docid": "9747be055df9acedfdfe817eb7e1e06e", "text": "Text summarization solves the problem of extracting important information from huge amount of text data. There are various methods in the literature that aim to find out well-formed summaries. One of the most commonly used methods is the Latent Semantic Analysis (LSA). In this paper, different LSA based summarization algorithms are explained and two new LSA based summarization algorithms are proposed. The algorithms are evaluated on Turkish documents, and their performances are compared using their ROUGE-L scores. One of our algorithms produces the best scores.", "title": "" } ]
[ { "docid": "d52efc862c68ec09a5ae3395464996ed", "text": "The growth of digital video has given rise to a need for computational methods for evaluating the visual quality of digital video. We have developed a new digital video quality metric, which we call DVQ (Digital Video Quality). Here we provide a brief description of the metric, and give a preliminary report on its performance. DVQ accepts a pair of digital video sequences, and computes a measure of the magnitude of the visible difference between them. The metric is based on the Discrete Cosine Transform. It incorporates aspects of early visual processing, including light adaptation, luminance and chromatic channels, spatial and temporal filtering, spatial frequency channels, contrast masking, and probability summation. It also includes primitive dynamics of light adaptation and contrast masking. We have applied the metric to digital video sequences corrupted by various typical compression artifacts, and compared the results to quality ratings made by human observers.", "title": "" }, { "docid": "c0ec2818c7f34359b089acc1df5478c6", "text": "Methods We searched Medline from Jan 1, 2009, to Nov 19, 2013, limiting searches to phase 3, randomised trials of patients with atrial fi brillation who were randomised to receive new oral anticoagulants or warfarin, and trials in which both effi cacy and safety outcomes were reported. We did a prespecifi ed meta-analysis of all 71 683 participants included in the RE-LY, ROCKET AF, ARISTOTLE, and ENGAGE AF–TIMI 48 trials. The main outcomes were stroke and systemic embolic events, ischaemic stroke, haemorrhagic stroke, all-cause mortality, myocardial infarction, major bleeding, intracranial haemorrhage, and gastrointestinal bleeding. We calculated relative risks (RRs) and 95% CIs for each outcome. We did subgroup analyses to assess whether diff erences in patient and trial characteristics aff ected outcomes. We used a random-eff ects model to compare pooled outcomes and tested for heterogeneity.", "title": "" }, { "docid": "7cc94fa6dbad97f11b2da591936a73ee", "text": "\n Crew resource management (CRM) programs were developed to address team and leadership aspects of piloting modern airplanes. The goal is to reduce errors through team work. Human factors research and social, cognitive, and organizational psychology are used to develop programs tailored for individual airlines. Flight crews study accident case histories, group dynamics, and human error. Simulators provide pilots with the opportunity to solve complex flight problems. CRM in the simulator is called line-oriented flight training (LOFT). In automated cockpits CRM promotes the idea of automation as a crew member. Cultural aspects of aviation include professional, business, and national culture. The aviation CRM model has been adapted for training surgeons and operating room staff in human factors.\n", "title": "" }, { "docid": "e912abc2da4eb1158c6a6c84245d13f8", "text": "Social media hype has created a lot of speculation among educators on how these media can be used to support learning, but there have been rather few studies so far. Our explorative interview study contributes by critically exploring how campus students perceive using social media to support their studies and the perceived benefits and limitations compared with other means. Although the vast majority of the respondents use social media frequently, a “digital dissonance” can be noted, because few of them feel that they use such media to support their studies. 
The interviewees mainly put forth e-mail and instant messaging, which are used among students to ask questions, coordinate group work and share files. Some of them mention using Wikipedia and YouTube for retrieving content and Facebook to initiate contact with course peers. Students regard social media as one of three key means of the educational experience, alongside face-to-face meetings and using the learning management systems, and are mainly used for brief questions and answers, and to coordinate group work. In conclusion, we argue that teaching strategy plays a key role in supporting students in moving from using social media to support coordination and information retrieval to also using such media for collaborative learning, when appropriate.", "title": "" }, { "docid": "c6b32d5182842b1bd933de186a47d326", "text": "Grouping of strokes into semantically meaningful diagram elements is a difficult problem. Yet such grouping is needed if truly natural sketching is to be supported in intelligent sketch tools. Using a machine learning approach, we propose a number of new paired-stroke features for grouping and evaluate the suitability of a range of algorithms. Our evaluation shows the new features and algorithms produce promising results that are statistically better than the existing machine learning grouper.", "title": "" }, { "docid": "a06c9d681bb8a8b89a8ee64a53e3b344", "text": "This paper introduces CIEL, a universal execution engine for distributed data-flow programs. Like previous execution engines, CIEL masks the complexity of distributed programming. Unlike those systems, a CIEL job can make data-dependent control-flow decisions, which enables it to compute iterative and recursive algorithms. We have also developed Skywriting, a Turingcomplete scripting language that runs directly on CIEL. The execution engine provides transparent fault tolerance and distribution to Skywriting scripts and highperformance code written in other programming languages. We have deployed CIEL on a cloud computing platform, and demonstrate that it achieves scalable performance for both iterative and non-iterative algorithms.", "title": "" }, { "docid": "ba4637dd5033fa39d1cb09edb42481ec", "text": "In this paper we introduce a framework for best first search of minimax trees. Existing best first algorithms like SSS* and DUAL* are formulated as instances of this framework. The framework is built around the Alpha-Beta procedure. Its instances are highly practical, and readily implementable. Our reformulations of SSS* and DUAL* solve the perceived drawbacks of these algorithms. We prove their suitability for practical use by presenting test results with a tournament level chess program. In addition to reformulating old best first algorithms, we introduce an improved instance of the framework: MTD(ƒ). This new algorithm outperforms NegaScout, the current algorithm of choice of most chess programs. Again, these are not simulation results, but results of tests with an actual chess program, Phoenix.", "title": "" }, { "docid": "5527521d567290192ea26faeb6e7908c", "text": "With the rapid development of spectral imaging techniques, classification of hyperspectral images (HSIs) has attracted great attention in various applications such as land survey and resource monitoring in the field of remote sensing. A key challenge in HSI classification is how to explore effective approaches to fully use the spatial–spectral information provided by the data cube. 
Multiple kernel learning (MKL) has been successfully applied to HSI classification due to its capacity to handle heterogeneous fusion of both spectral and spatial features. This approach can generate an adaptive kernel as an optimally weighted sum of a few fixed kernels to model a nonlinear data structure. In this way, the difficulty of kernel selection and the limitation of a fixed kernel can be alleviated. Various MKL algorithms have been developed in recent years, such as the general MKL, the subspace MKL, the nonlinear MKL, the sparse MKL, and the ensemble MKL. The goal of this paper is to provide a systematic review of MKL methods, which have been applied to HSI classification. We also analyze and evaluate different MKL algorithms and their respective characteristics in different cases of HSI classification cases. Finally, we discuss the future direction and trends of research in this area.", "title": "" }, { "docid": "3dd36e800bc9135c59f04dfa1d1e5f42", "text": "A gamma radiation-resistant, Gram reaction-positive, aerobic and chemoorganotrophic actinobacterium, initially designated Geodermatophilus obscurus subsp. dictyosporus G-5T, was not validly named at the time of initial publication (1968). G-5T formed black-colored colonies on GYM agar. The optimal growth range was 25–35 °C, at pH 6.5–9.5 and in the absence of NaCl. Chemotaxonomic and molecular characteristics of the isolate matched those described for members of the genus Geodermatophilus. The DNA G + C content of the strain was 75.3 mol  %. The peptidoglycan contained meso-diaminopimelic acid as diagnostic diamino acid. The main polar lipids were phosphatidylcholine, diphosphatidylglycerol, phosphatidylinositol, phosphatidylethanolamine and one unspecified glycolipid; MK-9(H4) was the dominant menaquinone and galactose was detected as a diagnostic sugar. The major cellular fatty acids were branched-chain saturated acids, iso-C16:0 and iso-C15:0. The 16S rRNA gene showed 94.8–98.4 % sequence identity with the members of the genus Geodermatophilus. Based on phenotypic results and 16S rRNA gene sequence analysis, strain G-5T is proposed to represent a novel species, Geodermatophilus dictyosporus and the type strain is G-5T (=DSM 43161T = CCUG 62970T = MTCC 11558T = ATCC 25080T = CBS 234.69T = IFO 13317T = KCC A-0154T = NBRC 13317T). The INSDC accession number is HF970584.", "title": "" }, { "docid": "81fa6a7931b8d5f15d55316a6ed1d854", "text": "The objective of the study is to compare skeletal and dental changes in class II patients treated with fixed functional appliances (FFA) that pursue different biomechanical concepts: (1) FMA (Functional Mandibular Advancer) from first maxillary molar to first mandibular molar through inclined planes and (2) Herbst appliance from first maxillary molar to lower first bicuspid through a rod-and-tube mechanism. Forty-two equally distributed patients were treated with FMA (21) and Herbst appliance (21), following a single-step advancement protocol. Lateral cephalograms were available before treatment and immediately after removal of the FFA. The lateral cephalograms were analyzed with customized linear measurements. The actual therapeutic effect was then calculated through comparison with data from a growth survey. Additionally, the ratio of skeletal and dental contributions to molar and overjet correction for both FFA was calculated. Data was analyzed by means of one-sample Student’s t tests and independent Student’s t tests. Statistical significance was set at p < 0.05. 
Although differences between FMA and Herbst appliance were found, intergroup comparisons showed no statistically significant differences. Almost all measurements resulted in comparable changes for both appliances. Statistically significant dental changes occurred with both appliances. Dentoalveolar contribution to the treatment effect was ≥70%, thus always resulting in ≤30% for skeletal alterations. FMA and Herbst appliance usage results in comparable skeletal and dental treatment effects despite different biomechanical approaches. Treatment leads to overjet and molar relationship correction that is mainly caused by significant dentoalveolar changes.", "title": "" }, { "docid": "ed0f4616a36a2dffb6120bccd7539d0c", "text": "Many accounts of decision making and reinforcement learning posit the existence of two distinct systems that control choice: a fast, automatic system and a slow, deliberative system. Recent research formalizes this distinction by mapping these systems to \"model-free\" and \"model-based\" strategies in reinforcement learning. Model-free strategies are computationally cheap, but sometimes inaccurate, because action values can be accessed by inspecting a look-up table constructed through trial-and-error. In contrast, model-based strategies compute action values through planning in a causal model of the environment, which is more accurate but also more cognitively demanding. It is assumed that this trade-off between accuracy and computational demand plays an important role in the arbitration between the two strategies, but we show that the hallmark task for dissociating model-free and model-based strategies, as well as several related variants, do not embody such a trade-off. We describe five factors that reduce the effectiveness of the model-based strategy on these tasks by reducing its accuracy in estimating reward outcomes and decreasing the importance of its choices. Based on these observations, we describe a version of the task that formally and empirically obtains an accuracy-demand trade-off between model-free and model-based strategies. Moreover, we show that human participants spontaneously increase their reliance on model-based control on this task, compared to the original paradigm. Our novel task and our computational analyses may prove important in subsequent empirical investigations of how humans balance accuracy and demand.", "title": "" }, { "docid": "51db8011d3dfd60b7808abc6868f7354", "text": "Security issue in cloud environment is one of the major obstacle in cloud implementation. Network attacks make use of the vulnerability in the network and the protocol to damage the data and application. Cloud follows distributed technology; hence it is vulnerable for intrusions by malicious entities. Intrusion detection systems (IDS) has become a basic component in network protection infrastructure and a necessary method to defend systems from various attacks. Distributed denial of service (DDoS) attacks are a great problem for a user of computers linked to the Internet. Data mining techniques are widely used in IDS to identify attacks using the network traffic. This paper presents and evaluates a Radial basis function neural network (RBF-NN) detector to identify DDoS attacks. Many of the training algorithms for RBF-NNs start with a predetermined structure of the network that is selected either by means of a priori knowledge or depending on prior experience. 
The resultant network is frequently inadequate or needlessly intricate and a suitable network structure could be configured only by trial and error method. This paper proposes Bat algorithm (BA) to configure RBF-NN automatically. Simulation results demonstrate the effectiveness of the proposed method.", "title": "" }, { "docid": "99e1ae882a1b74ffcbe5e021eb577e49", "text": "This paper studies the problem of recognizing gender from full body images. This problem has not been addressed before, partly because of the variant nature of human bodies and clothing that can bring tough difficulties. However, gender recognition has high application potentials, e.g. security surveillance and customer statistics collection in restaurants, supermarkets, and even building entrances. In this paper, we build a system of recognizing gender from full body images, taken from frontal or back views. Our contributions are three-fold. First, to handle the variety of human body characteristics, we represent each image by a collection of patch features, which model different body parts and provide a set of clues for gender recognition. To combine the clues, we build an ensemble learning algorithm from those body parts to recognize gender from fixed view body images (frontal or back). Second, we relax the fixed view constraint and show the possibility to train a flexible classifier for mixed view images with the almost same accuracy as the fixed view case. At last, our approach is shown to be robust to small alignment errors, which is preferred in many applications.", "title": "" }, { "docid": "0738367dec2b7f1c5687ce1a15c8ac28", "text": "There is a high demand for qualified information and communication technology (ICT) practitioners in the European labour market, but the problem at many universities is a high dropout rate among ICT students, especially during the first study year. The solution might be to focus more on improving students’ computational thinking (CT) before starting university studies. Therefore, research is needed to find the best methods for learning CT already at comprehensive school level to raise the interest in and awareness of studying computer science. Doing so requires a clear understanding of CT and a model to improve it at comprehensive schools. Through the analysis of the articles found in EBSCO Discovery Search tool, this study gives an overview of the definition of CT and presents three models of CT. The models are analysed to find out their similarities and differences in order to gather together the core elements of CT and form a revised model of learning CT in comprehensive school ICT lessons or integrating CT in other subjects.", "title": "" }, { "docid": "f2521fbfd566fcf31b5810695e748ba0", "text": "A facile approach for coating red fluoride phosphors with a moisture-resistant alkyl phosphate layer with a thickness of 50-100 nm is reported. K2 SiF6 :Mn(4+) particles were prepared by co-precipitation and then coated by esterification of P2 O5 with alcohols (methanol, ethanol, and isopropanol). This route was adopted to encapsulate the prepared phosphors using transition-metal ions as cross-linkers between the alkyl phosphate moieties. The coated phosphor particles exhibited a high water tolerance and retained approximately 87 % of their initial external quantum efficiency after aging under high-humidity (85 %) and high-temperature (85 °C) conditions for one month. 
Warm white-light-emitting diodes that consisted of blue InGaN chips, the prepared K2 SiF6 :Mn(4+) phosphors, and either yellow Y3 Al5 O12 :Ce(3+) phosphors or green β-SiAlON: Eu(2+) phosphors showed excellent color rendition.", "title": "" }, { "docid": "5deae44a9c14600b1a2460836ed9572d", "text": "Grasping an object in a cluttered, unorganized environment is challenging because of unavoidable contacts and interactions between the robot and multiple immovable (static) and movable (dynamic) obstacles in the environment. Planning an approach trajectory for grasping in such situations can benefit from physics-based simulations that describe the dynamics of the interaction between the robot manipulator and the environment. In this work, we present a physics-based trajectory optimization approach for planning grasp approach trajectories. We present novel cost objectives and identify failure modes relevant to grasping in cluttered environments. Our approach uses rollouts of physics-based simulations to compute the gradient of the objective and of the dynamics. Our approach naturally generates behaviors such as choosing to push objects that are less likely to topple over, recognizing and avoiding situations which might cause a cascade of objects to fall over, and adjusting the manipulator trajectory to push objects aside in a direction orthogonal to the grasping direction. We present results in simulation for grasping in a variety of cluttered environments with varying levels of density of obstacles in the environment. Our experiments in simulation indicate that our approach outperforms a baseline approach that considers multiple straight-line trajectories modified to account for static obstacles by an aggregate success rate of 14% with varying degrees of object clutter.", "title": "" }, { "docid": "90faa9a8dc3fd87614a61bfbdf24cab6", "text": "The methods proposed recently for specializing word embeddings according to a particular perspective generally rely on external knowledge. In this article, we propose Pseudofit, a new method for specializing word embeddings according to semantic similarity without any external knowledge. Pseudofit exploits the notion of pseudo-sense for building several representations for each word and uses these representations for making the initial embeddings more generic. We illustrate the interest of Pseudofit for acquiring synonyms and study several variants of Pseudofit according to this perspective.", "title": "" }, { "docid": "24b5c8aee05ac9be61d9217a49e3d3b0", "text": "People have different intents in using online platforms. They may be trying to accomplish specific, short-term goals, or less well-defined, longer-term goals. While understanding user intent is fundamental to the design and personalization of online platforms, little is known about how intent varies across individuals, or how it relates to their behavior. Here, we develop a framework for understanding intent in terms of goal specificity and temporal range. Our methodology combines survey-based methodology with an observational analysis of user activity. Applying this framework to Pinterest, we surveyed nearly 6000 users to quantify their intent, and then studied their subsequent behavior on the web site. We find that goal specificity is bimodal – users tend to be either strongly goal-specific or goalnonspecific. Goal-specific users search more and consume less content in greater detail than goal-nonspecific users: they spend more time using Pinterest, but are less likely to return in the near future. 
Users with short-term goals are also more focused and more likely to refer to past saved content than users with long-term goals, but less likely to save content for the future. Further, intent can vary by demographic, and with the topic of interest. Last, we show that user’s intent and activity are intimately related by building a model that can predict a user’s intent for using Pinterest after observing their activity for only two minutes. Altogether, this work shows how intent can be predicted from user behavior.", "title": "" }, { "docid": "b5c15cbfdf35aabe7d8f2f237ecd4de6", "text": "In this correspondence, we investigate the physical layer security for cooperative nonorthogonal multiple access (NOMA) systems, where both amplify-and-forward (AF) and decode-and-forward (DF) protocols are considered. More specifically, some analytical expressions are derived for secrecy outage probability (SOP) and strictly positive secrecy capacity. Results show that AF and DF almost achieve the same secrecy performance. Moreover, asymptotic results demonstrate that the SOP tends to a constant at high signal-to-noise ratio. Finally, our results show that the secrecy performance of considered NOMA systems is independent of the channel conditions between the relay and the poor user.", "title": "" }, { "docid": "c6c9643816533237a29dd93fd420018f", "text": "We present an algorithm for finding a meaningful vertex-to-vertex correspondence between two 3D shapes given as triangle meshes. Our algorithm operates on embeddings of the two shapes in the spectral domain so as to normalize them with respect to uniform scaling and rigid-body transformation. Invariance to shape bending is achieved by relying on geodesic point proximities on a mesh to capture its shape. To deal with stretching, we propose to use non-rigid alignment via thin-plate splines in the spectral domain. This is combined with a refinement step based on the geodesic proximities to improve dense correspondence. We show empirically that our algorithm outperforms previous spectral methods, as well as schemes that compute correspondence in the spatial domain via non-rigid iterative closest points or the use of local shape descriptors, e.g., 3D shape context", "title": "" } ]
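The spectral correspondence passage above embeds two shapes in a spectral domain and matches vertices by their embedded positions. As a rough illustration of that general idea only, the sketch below builds a k-nearest-neighbour graph Laplacian for each point set and matches points by nearest neighbour in the leading eigenvectors; the geodesic proximities, stretching-aware thin-plate-spline alignment, and refinement step described in the passage are omitted, and every function name and parameter value here is an illustrative assumption rather than the authors' implementation.

```python
# Minimal sketch: spectral embedding + nearest-neighbour matching of two point sets.
# Illustrates the general spectral-correspondence idea only; the paper uses geodesic
# affinities and thin-plate-spline alignment, which are not reproduced here.
import numpy as np
from scipy.spatial import cKDTree

def spectral_embedding(points, k_neighbors=8, dim=5):
    """Embed points using the first non-trivial eigenvectors of a k-NN graph Laplacian."""
    n = len(points)
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k_neighbors + 1)   # first column is the point itself
    sigma = np.mean(dists[:, 1:]) + 1e-12                 # Gaussian kernel bandwidth
    W = np.zeros((n, n))
    for i in range(n):
        for j, d in zip(idx[i, 1:], dists[i, 1:]):
            w = np.exp(-(d / sigma) ** 2)
            W[i, j] = max(W[i, j], w)
            W[j, i] = W[i, j]                             # keep the affinity symmetric
    L = np.diag(W.sum(axis=1)) - W                        # unnormalized graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvecs[:, 1:dim + 1]                          # skip the constant eigenvector

def match_vertices(points_a, points_b):
    """For every point in A, return the index of its nearest neighbour in B's embedding."""
    emb_a = spectral_embedding(points_a)
    emb_b = spectral_embedding(points_b)
    # Eigenvector signs are arbitrary; flip each dimension of B to best align with A.
    # (This simple check assumes both sets have the same number of points, as below.)
    for d in range(emb_a.shape[1]):
        if np.linalg.norm(emb_a[:, d] - emb_b[:, d]) > np.linalg.norm(emb_a[:, d] + emb_b[:, d]):
            emb_b[:, d] *= -1
    return cKDTree(emb_b).query(emb_a, k=1)[1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.normal(size=(200, 3))
    b = a + rng.normal(scale=0.01, size=a.shape)          # slightly perturbed copy of A
    print(match_vertices(a, b)[:10])
```

On this toy example the two point sets are near-identical copies, so most indices map back to themselves; real meshes with bending and stretching would need the fuller pipeline the passage describes.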
scidocsrr
e2323ba98c3bcdcf7b8d3e403af205da
What makes great teaching? Review of the underpinning research
[ { "docid": "983ec9cdd75d0860c96f89f3c9b2f752", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at . http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "698dca642840f47081b1e9a54775c5cc", "text": "Background: Many popular educational programmes claim to be ‘brain-based’, despite pleas from the neuroscience community that these neuromyths do not have a basis in scientific evidence about the brain. Purpose: The main aim of this paper is to examine several of the most popular neuromyths in the light of the relevant neuroscientific and educational evidence. Examples of neuromyths include: 10% brain usage, leftand right-brained thinking, VAK learning styles and multiple intelligences Sources of evidence: The basis for the argument put forward includes a literature review of relevant cognitive neuroscientific studies, often involving neuroimaging, together with several comprehensive education reviews of the brain-based approaches under scrutiny. Main argument: The main elements of the argument are as follows. We use most of our brains most of the time, not some restricted 10% brain usage. This is because our brains are densely interconnected, and we exploit this interconnectivity to enable our primitively evolved primate brains to live in our complex modern human world. Although brain imaging delineates areas of higher (and lower) activation in response to particular tasks, thinking involves coordinated interconnectivity from both sides of the brain, not separate leftand right-brained thinking. High intelligence requires higher levels of inter-hemispheric and other connected activity. The brain’s interconnectivity includes the senses, especially vision and hearing. We do not learn by one sense alone, hence VAK learning styles do not reflect how our brains actually learn, nor the individual differences we observe in classrooms. Neuroimaging studies do not support multiple intelligences; in fact, the opposite is true. Through the activity of its frontal cortices, among other areas, the human brain seems to operate with general intelligence, applied to multiple areas of endeavour. Studies of educational effectiveness of applying any of these ideas in the classroom have failed to find any educational benefits. Conclusions: The main conclusions arising from the argument are that teachers should seek independent scientific validation before adopting brain-based products in their classrooms. A more sceptical approach to educational panaceas could contribute to an enhanced professionalism of the field.", "title": "" }, { "docid": "6c4a7a6d21c85f3f2f392fbb1621cc51", "text": "The International Academy of Education (IAE) is a not-for-profit scientific association that promotes educational research, and its dissemination and implementation. Founded in 1986, the Academy is dedicated to strengthening the contributions of research, solving critical educational problems throughout the world, and providing better communication among policy makers, researchers, and practitioners. The general aim of the IAE is to foster scholarly excellence in all fields of education. 
Towards this end, the Academy provides timely syntheses of research-based evidence of international importance. The Academy also provides critiques of research and of its evidentiary basis and its application to policy. This booklet about teacher professional learning and development has been prepared for inclusion in the Educational Practices Series developed by the International Academy of Education and distributed by the International Bureau of Education and the Academy. As part of its mission, the Academy provides timely syntheses of research on educational topics of international importance. This is the eighteenth in a series of booklets on educational practices that generally improve learning. This particular booklet is based on a synthesis of research evidence produced for the New Zealand Ministry of Education's Iterative Best Evidence Synthesis (BES) Programme, which is designed to be a catalyst for systemic improvement and sustainable development in education. This synthesis, and others in the series, are available electronically at www.educationcounts.govt.nz/themes/BES. All BESs are written using a collaborative approach that involves the writers, teacher unions, principal groups, teacher educators, academics, researchers, policy advisers, and other interested parties. To ensure its rigour and usefulness, each BES follows national guidelines developed by the Ministry of Education. Professor Helen Timperley was lead writer for the Teacher Professional Learning and Development: Best Evidence Synthesis Iteration [BES], assisted by teacher educators Aaron Wilson and Heather Barrar and research assistant Irene Fung, all of the University of Auckland. The BES is an analysis of 97 studies of professional development that led to improved outcomes for the students of the participating teachers. Most of these studies came from the United States, New Zealand, the Netherlands, the United Kingdom, Canada, and Israel. Dr Lorna Earl provided formative quality assurance for the synthesis; Professor John Hattie and Dr Gavin Brown oversaw the analysis of effect sizes. Helen Timperley is Professor of Education at the University of Auckland. The primary focus of her research is promotion of professional and organizational learning in schools for the purpose of improving student learning. She has …", "title": "" } ]
[ { "docid": "1a0d2b5a7421bcca3ee44885b2940a19", "text": "Genome editing is a powerful technique for genome modification in molecular research and crop breeding, and has the great advantage of imparting novel desired traits to genetic resources. However, the genome editing of fruit tree plantlets remains to be established. In this study, we describe induction of a targeted gene mutation in the endogenous apple phytoene desaturase (PDS) gene using the CRISPR/Cas9 system. Four guide RNAs (gRNAs) were designed and stably transformed with Cas9 separately in apple. Clear and partial albino phenotypes were observed in 31.8% of regenerated plantlets for one gRNA, and bi-allelic mutations in apple PDS were confirmed by DNA sequencing. In addition, an 18-bp gRNA also induced a targeted mutation. These CRIPSR/Cas9 induced-mutations in the apple genome suggest activation of the NHEJ pathway, but with some involvement also of the HR pathway. Our results demonstrate that genome editing can be practically applied to modify the apple genome.", "title": "" }, { "docid": "306a833c0130678e1b2ece7e8b824d5e", "text": "In many natural languages, there are clear syntactic and/or intonational differences between declarative sentences, which are primarily used to provide information, and interrogative sentences, which are primarily used to request information. Most logical frameworks restrict their attention to the former. Those that are concerned with both usually assume a logical language that makes a clear syntactic distinction between declaratives and interrogatives, and usually assign different types of semantic values to these two types of sentences. A different approach has been taken in recent work on inquisitive semantics. This approach does not take the basic syntactic distinction between declaratives and interrogatives as its starting point, but rather a new notion of meaning that captures both informative and inquisitive content in an integrated way. The standard way to treat the logical connectives in this approach is to associate them with the basic algebraic operations on these new types of meanings. For instance, conjunction and disjunction are treated as meet and join operators, just as in classical logic. This gives rise to a hybrid system, where sentences can be both informative and inquisitive at the same time, and there is no clearcut division between declaratives and interrogatives. It may seem that these two general approaches in the existing literature are quite incompatible. The main aim of this paper is to show that this is not the case. We develop an inquisitive semantics for a logical language that has a clearcut division between declaratives and interrogatives. We show that this language coincides in expressive power with the hybrid language that is standardly assumed in inquisitive semantics, we establish a sound and complete axiomatization for the associated logic, and we consider a natural enrichment of the system with presuppositional interrogatives.", "title": "" }, { "docid": "33285813f1b3f2c13c711447199ed75d", "text": "This paper describes the dotplot data visualization technique and its potential for contributingto the identificationof design patterns. Pattern languages have been used in architectural design and urban planning to codify related rules-of-thumb for constructing vernacular buildings and towns. When applied to software design, pattern languages promote reuse while allowing novice designers to learn from the insights of experts. 
Dotplots have been used in biology to study similarity in genetic sequences. When applied to software, dotplots identify patterns that range in abstraction from the syntax of programming languages to the organizational uniformity of large, multi-component systems. Dotplots are useful for design by successive abstraction—replacing duplicated code with macros, subroutines, or classes. Dotplots reveal a pervasive design pattern for simplifying algorithms by increasing the complexity of initializations. Dotplots also reveal patterns of wordiness in languages—one example inspired a design pattern for a new programming language. In addition, dotplots of data associated with programs identify dynamic usage patterns—one example identifies a design pattern used in the construction of a UNIX(tm) file system.", "title": "" }, { "docid": "064c86deca4955f09e12d9b9d0afc4e8", "text": "This paper presents a new classification framework for brain-computer interface (BCI) based on motor imagery. This framework involves the concept of Riemannian geometry in the manifold of covariance matrices. The main idea is to use spatial covariance matrices as EEG signal descriptors and to rely on Riemannian geometry to directly classify these matrices using the topology of the manifold of symmetric and positive definite (SPD) matrices. This framework allows to extract the spatial information contained in EEG signals without using spatial filtering. Two methods are proposed and compared with a reference method [multiclass Common Spatial Pattern (CSP) and Linear Discriminant Analysis (LDA)] on the multiclass dataset IIa from the BCI Competition IV. The first method, named minimum distance to Riemannian mean (MDRM), is an implementation of the minimum distance to mean (MDM) classification algorithm using Riemannian distance and Riemannian mean. This simple method shows comparable results with the reference method. The second method, named tangent space LDA (TSLDA), maps the covariance matrices onto the Riemannian tangent space where matrices can be vectorized and treated as Euclidean objects. Then, a variable selection procedure is applied in order to decrease dimensionality and a classification by LDA is performed. This latter method outperforms the reference method increasing the mean classification accuracy from 65.1% to 70.2%.", "title": "" }, { "docid": "8bde43670fd9c68bbb359531938f9b55", "text": "An 8b 1GS/s ADC is presented that interleaves two 2b/cycle SARs. To enhance speed and save power, the prototype utilizes segmentation switching and custom-designed DAC array with high density in a low parasitic layout structure. It operates at 1GS/s from 1V supply without interleaving calibration and consumes 3.8mW of power, exhibiting a FoM of 24fJ/conversion step. The ADC occupies an active area of 0.013mm2 in 65nm CMOS including on-chip offset calibration.", "title": "" }, { "docid": "d1ba8ad56a6227f771f9cef8139e9f15", "text": "We study sentiment analysis beyond the typical granularity of polarity and instead use Plutchik’s wheel of emotions model. We introduce RBEM-Emo as an extension to the Rule-Based Emission Model algorithm to deduce such emotions from human-written messages. We evaluate our approach on two different datasets and compare its performance with the current state-of-the-art techniques for emotion detection, including a recursive autoencoder. 
The results of the experimental study suggest that RBEM-Emo is a promising approach advancing the current state-of-the-art in emotion detection.", "title": "" }, { "docid": "3e9f98a1aa56e626e47a93b7973f999a", "text": "This paper presents a sociocultural knowledge ontology (OntoSOC) modeling approach. The OntoSOC modeling approach is based on Engeström's Human Activity Theory (HAT). That theory allowed us to identify fundamental concepts and relationships between them. A top-down process has been used to define different sub-concepts. The modeled vocabulary permits us to organise data and to facilitate information retrieval by introducing a semantic layer in the social web platform architecture we plan to implement. This platform can be considered as a « collective memory » and a Participative and Distributed Information System (PDIS) which will allow Cameroonian communities to share and co-construct knowledge on permanent organized activities.", "title": "" }, { "docid": "fb588b5df4e8167153f3f45be5cf4b6c", "text": "This paper is a study of consumer resistance among active abstainers of the Facebook social network site. I analyze the discourses invoked by individuals who consciously choose to abstain from participation on the ubiquitous Facebook platform. This discourse analysis draws from approximately 100 web and print publications from 2006 to early 2012, as well as personal interviews conducted with 20 Facebook abstainers. I conceptualize Facebook abstention as a performative mode of resistance, which must be understood within the context of a neoliberal consumer culture, in which subjects are empowered to act through consumption choices – or in this case non-consumption choices – and through the public display of those choices. I argue that such public displays are always at risk of misinterpretation due to the dominant discursive frameworks through which abstention is given meaning. This paper gives particular attention to the ways in which connotations of taste and distinction are invoked by refusers through their conspicuous displays of non-consumption. This has the effect of framing refusal as a performance of elitism, which may work against observers interpreting conscientious refusal as a persuasive and emulable practice of critique. The implication of this is that refusal is a limited tactic of political engagement where media platforms are concerned.", "title": "" }, { "docid": "3a1d66cdc06338857fc685a2bdc8b068", "text": "UNLABELLED\nThe WARM study is a longitudinal cohort study following infants of mothers with schizophrenia, bipolar disorder, depression and control from pregnancy to infant 1 year of age.\n\n\nBACKGROUND\nChildren of parents diagnosed with complex mental health problems including schizophrenia, bipolar disorder and depression, are at increased risk of developing mental health problems compared to the general population. Little is known regarding the early developmental trajectories of infants who are at ultra-high risk and in particular the balance of risk and protective factors expressed in the quality of early caregiver-interaction.\n\n\nMETHODS/DESIGN\nWe are establishing a cohort of pregnant women with a lifetime diagnosis of schizophrenia, bipolar disorder, major depressive disorder and a non-psychiatric control group. 
Factors in the parents, the infant and the social environment will be evaluated at 1, 4, 16 and 52 weeks in terms of evolution of very early indicators of developmental risk and resilience focusing on three possible environmental transmission mechanisms: stress, maternal caregiver representation, and caregiver-infant interaction.\n\n\nDISCUSSION\nThe study will provide data on very early risk developmental status and associated psychosocial risk factors, which will be important for developing targeted preventive interventions for infants of parents with severe mental disorder.\n\n\nTRIAL REGISTRATION\nNCT02306551, date of registration November 12, 2014.", "title": "" }, { "docid": "fd48614d255b7c7bc7054b4d5de69a15", "text": "Article history: Received 31 December 2007 Received in revised form 12 December 2008 Accepted 3 January 2009", "title": "" }, { "docid": "8c729366391133065c3d7a9b2b22fe23", "text": "Mobile devices generate massive amounts of data that is used to get an insight into the user behavior by enterprise systems. Data privacy is a concern in such systems as users have little control over the data that is generated by them. Blockchain systems offer ways to ensure privacy and security of the user data with the implementation of an access control mechanism. In this demonstration, we present ChainMOB, a mobility analytics application that is built on top of blockchain and addresses the fundamental privacy and security concerns in enterprise systems. Further, the extent of data sharing along with the intended audience is also controlled by the user. Another exciting feature is that user is part of the business model and is incentivized for sharing the personal mobility data. The system also supports queries that can be used in a variety of application domains.", "title": "" }, { "docid": "455e3f0c6f755d78ecafcdff14c46014", "text": "BACKGROUND\nIn neonatal and early childhood surgeries such as meningomyelocele repairs, closing deep wounds and oncological treatment, tensor fasciae lata (TFL) flaps are used. However, there are not enough data about structural properties of TFL in foetuses, which can be considered as the closest to neonates in terms of sampling. This study's main objective is to gather data about morphological structures of TFL in human foetuses to be used in newborn surgery.\n\n\nMATERIALS AND METHODS\nFifty formalin-fixed foetuses (24 male, 26 female) with gestational age ranging from 18 to 30 weeks (mean 22.94 ± 3.23 weeks) were included in the study. TFL samples were obtained by bilateral dissection and then surface area, width and length parameters were recorded. Digital callipers were used for length and width measurements whereas surface area was calculated using digital image analysis software.\n\n\nRESULTS\nNo statistically significant differences were found in terms of numerical value of parameters between sides and sexes (p > 0.05). Linear functions for TFL surface area, width, anterior and posterior margin lengths were calculated as y = -225.652 + 14.417 × age (weeks), y = -5.571 + 0.595 × age (weeks), y = -4.276 + 0.909 × age (weeks), and y = -4.468 + 0.779 × age (weeks), respectively.\n\n\nCONCLUSIONS\nLinear functions for TFL surface area, width and lengths can be used in designing TFL flap dimensions in newborn surgery. 
In addition, using those described linear functions can also be beneficial in prediction of TFL flap dimensions in autopsy studies.", "title": "" }, { "docid": "5959decfd357faa3ea76fe72e6197344", "text": "Deep Learning architectures, such as deep neural networks, are currently the hottest emerging areas of data science, especially in Big Data. Deep Learning could be effectively exploited to address some major issues of Big Data, including withdrawing complex patterns from huge volumes of data, fast information retrieval, data classification, semantic indexing and so on. In this work, we designed and implemented a framework to train deep neural networks using Spark, fast and general data flow engine for large scale data processing. The design is similar to Google software framework called DistBelief which can utilize computing clusters with thousands of machines to train large scale deep networks. Training Deep Learning models requires extensive data and computation. Our proposed framework can accelerate the training time by distributing the model replicas, via stochastic gradient descent, among cluster nodes for data resided on HDFS.", "title": "" }, { "docid": "b4fd6a1a3424e983928be16e76262913", "text": "In this paper, a common grounded Z-source dc-dc converter with high voltage gain is proposed for photovoltaic (PV) applications, which require a relatively high output-input voltage conversion ratio. Compared with the traditional Z-source dc-dc converter, the proposed converter, which employs a conventional Z-source network, can obtain higher voltage gain and provide the common ground for the input and output without any additional components, which results in low cost and small size. Moreover, the proposed converter features low voltage stresses of the switch and diodes. Therefore, the efficiency and reliability of the proposed converter can be improved. The operating principle, parameters design, and comparison with other converters are analyzed. Simulation and experimental results are given to verify the aforementioned characteristics and theoretical analysis of the proposed converter.", "title": "" }, { "docid": "8cf02bf19145df237e77273e70babc1d", "text": "Micro-facial expressions are spontaneous, involuntary movements of the face when a person experiences an emotion but attempts to hide their facial expression, most likely in a high-stakes environment. Recently, research in this field has grown in popularity, however publicly available datasets of micro-expressions have limitations due to the difficulty of naturally inducing spontaneous micro-expressions. Other issues include lighting, low resolution and low participant diversity. We present a newly developed spontaneous micro-facial movement dataset with diverse participants and coded using the Facial Action Coding System. The experimental protocol addresses the limitations of previous datasets, including eliciting emotional responses from stimuli tailored to each participant. Dataset evaluation was completed by running preliminary experiments to classify micro-movements from non-movements. Results were obtained using a selection of spatio-temporal descriptors and machine learning. We further evaluate the dataset on emerging methods of feature difference analysis and propose an Adaptive Baseline Threshold that uses individualised neutral expression to improve the performance of micro-movement detection. In contrast to machine learning approaches, we outperform the state of the art with a recall of 0.91. 
The outcomes show the dataset can become a new standard for micro-movement data, with future work expanding on data representation and analysis.", "title": "" }, { "docid": "476bd671b982450d6d1f6c8d7936bcb5", "text": "Walter Thiel developed the method that enables preservation of the body with natural colors in 1992. It consists of the application of an intravascular injection formula, and maintaining the corpse submerged for a determinate period of time in the immersion solution in the pool. After immersion, it is possible to maintain the corpse in a hermetically sealed container, thus avoiding dehydration outside the pool. The aim of this work was to review the Thiel method, searching all scientific articles describing this technique from its development point of view, and application in anatomy and morphology teaching, as well as in clinical and surgical practice. Most of these studies were carried out in Europe. We used PubMed, Ebsco and Embase databases with the terms “Thiel cadaver”, “Thiel embalming”, “Thiel embalming method” and we searched for papers that cited Thiel's work. In comparison with methods commonly used with high concentrations of formaldehyde, this method lacks the emanation of noxious or irritating gases; gives the corpse important passive joint mobility without stiffness; and maintains color, flexibility and tissue plasticity at a level equivalent to that of a living body. Furthermore, it allows vascular repletion at the capillary level. All this makes for great advantage over the formalin-fixed and fresh material. Its multiple uses are applicable in anatomy teaching and research; teaching for undergraduates (prosection and dissection) and for training in surgical techniques for graduates and specialists (laparoscopies, arthroscopies, endoscopies).", "title": "" }, { "docid": "516ef94fad7f7e5801bf1ef637ffb136", "text": "With parallelizable attention networks, the neural Transformer is very fast to train. However, due to the auto-regressive architecture and self-attention in the decoder, the decoding procedure becomes slow. To alleviate this issue, we propose an average attention network as an alternative to the self-attention network in the decoder of the neural Transformer. The average attention network consists of two layers, with an average layer that models dependencies on previous positions and a gating layer that is stacked over the average layer to enhance the expressiveness of the proposed attention network. We apply this network on the decoder part of the neural Transformer to replace the original target-side self-attention model. With masking tricks and dynamic programming, our model enables the neural Transformer to decode sentences over four times faster than its original version with almost no loss in training time and translation performance. We conduct a series of experiments on WMT17 translation tasks, where on 6 different language pairs, we obtain robust and consistent speed-ups in decoding.", "title": "" }, { "docid": "c26f06abb768c7b6d1a22172078aaf00", "text": "In complex conversation tasks, people react to their interlocutor’s state, such as uncertainty and engagement, to improve conversation effectiveness [2]. If a conversational system reacts to a user’s state, would that lead to a better conversation experience? To test this hypothesis, we designed and implemented a dialog system that tracks and reacts to a user’s state, such as engagement, in real time. 
We designed and implemented a conversational job interview task based on the proposed framework. The system acts as an interviewer and reacts to user’s disengagement in real-time with positive feedback strategies designed to re-engage the user in the job interview process. Experiments suggest that users speak more while interacting with the engagement-coordinated version of the system as compared to a noncoordinated version. Users also reported the former system as being more engaging and providing a better user experience.", "title": "" }, { "docid": "fb05091e8badfc8e60f69441da1eb60d", "text": "Learning-based methods have demonstrated clear advantages in controlling robot tasks, such as the information fusion abilities, strong robustness, and high accuracy. Meanwhile, the on-board systems of robots have limited computation and energy resources, which are contradictory with state-of-the-art learning approaches. They are either too lightweight to solve complex problems or too heavyweight to be used for mobile applications. On the other hand, training spiking neural networks (SNNs) with biological plausibility has great potentials of performing fast computation and energy efficiency. However, the lack of effective learning rules for SNNs impedes their wide usage in mobile robot applications. This paper addresses the problem by introducing an end to end learning approach of spiking neural networks for a lane keeping vehicle. We consider the reward-modulated spike-timing-dependent-plasticity (R-STDP) as a promising solution in training SNNs, since it combines the advantages of both reinforcement learning and the well-known STDP. We test our approach in three scenarios that a Pioneer robot is controlled to keep lanes based on an SNN. Specifically, the lane information is encoded by the event data from a neuromorphic vision sensor. The SNN is constructed using R-STDP synapses in an all-to-all fashion. We demonstrate the advantages of our approach in terms of the lateral localization accuracy by comparing with other state-of-the-art learning algorithms based on SNNs.", "title": "" } ]
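The lane-keeping passage above trains its spiking network with reward-modulated STDP (R-STDP). The toy sketch below shows only the core update rule the term usually refers to, namely a spike-timing eligibility trace gated by a scalar reward; the time constants, learning rate, and spike trains are made-up values and are not taken from that paper's controller.

```python
# Toy illustration of reward-modulated STDP (R-STDP) on a single synapse.
# Pre-before-post pairings add to an eligibility trace, post-before-pre pairings
# subtract from it, and a scalar reward gates how much of the trace becomes an
# actual weight change. All constants are illustrative assumptions.
import numpy as np

def r_stdp(pre_spikes, post_spikes, rewards, dt=1.0,
           a_plus=0.01, a_minus=0.012, tau_stdp=20.0, tau_elig=100.0, lr=0.5, w0=0.5):
    pre_trace = post_trace = elig = 0.0
    w = w0
    for t in range(len(pre_spikes)):
        # Decay the short STDP traces and the slower eligibility trace.
        pre_trace *= np.exp(-dt / tau_stdp)
        post_trace *= np.exp(-dt / tau_stdp)
        elig *= np.exp(-dt / tau_elig)
        if pre_spikes[t]:
            pre_trace += 1.0
            elig -= a_minus * post_trace      # post fired recently -> depressing pairing
        if post_spikes[t]:
            post_trace += 1.0
            elig += a_plus * pre_trace        # pre fired recently -> potentiating pairing
        # The reward signal gates the eligibility trace into a weight update.
        w += lr * rewards[t] * elig
        w = float(np.clip(w, 0.0, 1.0))
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pre = rng.random(1000) < 0.05             # Poisson-like pre-synaptic spike train
    post = np.roll(pre, 2)                    # post tends to fire just after pre
    reward = np.full(1000, 0.1)               # constant positive reward for the demo
    print("final weight:", r_stdp(pre, post, reward))
```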
scidocsrr
1737341bfdc3a0973a3443b95f779552
Observation-Level Interaction with Clustering and Dimension Reduction Algorithms
[ { "docid": "f6266e5c4adb4fa24cc353dccccaf6db", "text": "Clustering plays an important role in many large-scale data analyses providing users with an overall understanding of their data. Nonetheless, clustering is not an easy task due to noisy features and outliers existing in the data, and thus the clustering results obtained from automatic algorithms often do not make clear sense. To remedy this problem, automatic clustering should be complemented with interactive visualization strategies. This paper proposes an interactive visual analytics system for document clustering, called iVisClustering, based on a widelyused topic modeling method, latent Dirichlet allocation (LDA). iVisClustering provides a summary of each cluster in terms of its most representative keywords and visualizes soft clustering results in parallel coordinates. The main view of the system provides a 2D plot that visualizes cluster similarities and the relation among data items with a graph-based representation. iVisClustering provides several other views, which contain useful interaction methods. With help of these visualization modules, we can interactively refine the clustering results in various ways.", "title": "" }, { "docid": "cff44da2e1038c8e5707cdde37bc5461", "text": "Visual analytics emphasizes sensemaking of large, complex datasets through interactively exploring visualizations generated by statistical models. For example, dimensionality reduction methods use various similarity metrics to visualize textual document collections in a spatial metaphor, where similarities between documents are approximately represented through their relative spatial distances to each other in a 2D layout. This metaphor is designed to mimic analysts' mental models of the document collection and support their analytic processes, such as clustering similar documents together. However, in current methods, users must interact with such visualizations using controls external to the visual metaphor, such as sliders, menus, or text fields, to directly control underlying model parameters that they do not understand and that do not relate to their analytic process occurring within the visual metaphor. In this paper, we present the opportunity for a new design space for visual analytic interaction, called semantic interaction, which seeks to enable analysts to spatially interact with such models directly within the visual metaphor using interactions that derive from their analytic process, such as searching, highlighting, annotating, and repositioning documents. Further, we demonstrate how semantic interactions can be implemented using machine learning techniques in a visual analytic tool, called ForceSPIRE, for interactive analysis of textual data within a spatial visualization. Analysts can express their expert domain knowledge about the documents by simply moving them, which guides the underlying model to improve the overall layout, taking the user's feedback into account.", "title": "" }, { "docid": "0ee744ad3c75f7bb9695c47165d87043", "text": "Clustering is a critical component of many data analysis tasks, but is exceedingly difficult to fully automate. To better incorporate domain knowledge, researchers in machine learning, human-computer interaction, visualization, and statistics have independently introduced various computational tools to engage users through interactive clustering. 
In this work-in-progress paper, we present a cross-disciplinary literature survey, and find that existing techniques often do not meet the needs of real-world data analysis. Semi-supervised machine learning algorithms often impose prohibitive user interaction costs or fail to account for external analysis requirements. Human-centered approaches and user interface designs often fall short because of their insufficient statistical modeling capabilities. Drawing on effective approaches from each field, we identify five characteristics necessary to support effective human-in-the-loop interactive clustering: iterative, multi-objective, local updates that can operate on any initial clustering and a dynamic set of features. We outline key aspects of our technique currently under development, and share our initial evidence suggesting that all five design considerations can be incorporated into a single algorithm. We plan to demonstrate our technique on three data analysis tasks: feature engineering for classification, exploratory analysis of biomedical data, and multi-document summarization.", "title": "" } ]
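The passages above describe observation-level (semantic) interaction, in which the analyst's direct manipulation of points feeds back into the underlying model. A minimal sketch of one way such feedback can be wired up is given below: when a user drags two observations together, the features they share are up-weighted and the data are re-clustered under the new weighted metric. This illustrates the general idea only, not the actual ForceSPIRE or iVisClustering update rules; the boost factor, helper names, and toy data are assumptions.

```python
# Sketch of observation-level interaction: the analyst drags two observations
# together, the system up-weights the features they share, and the data are
# re-clustered under the resulting weighted Euclidean metric.
import numpy as np
from sklearn.cluster import KMeans

def upweight_shared_features(X, weights, i, j, boost=0.5):
    """Increase the weights of features that observations i and j have in common."""
    shared = (X[i] > 0) & (X[j] > 0)
    weights = weights.copy()
    weights[shared] *= (1.0 + boost)
    return weights / weights.sum() * len(weights)   # keep the average weight at 1

def recluster(X, weights, n_clusters=3, seed=0):
    """Cluster in the re-weighted space implied by the user's interaction."""
    Xw = X * np.sqrt(weights)                        # feature scaling == weighted Euclidean
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(Xw)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = (rng.random((60, 20)) < 0.2).astype(float)   # toy bag-of-words style matrix
    w = np.ones(X.shape[1])
    labels_before = recluster(X, w)
    # The analyst drags observations 3 and 7 together in the layout:
    w = upweight_shared_features(X, w, 3, 7)
    labels_after = recluster(X, w)
    print("same cluster before:", labels_before[3] == labels_before[7])
    print("same cluster after: ", labels_after[3] == labels_after[7])
```

The key design choice mirrored here is that the user never touches model parameters directly; the interaction within the spatial metaphor is translated into a metric update behind the scenes.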
[ { "docid": "948ac7d5527cfcb978087f1465a918e6", "text": "We investigate automatic analysis of teachers' instructional strategies from audio recordings collected in live classrooms. We collected a data set of teacher audio and human-coded instructional activities (e.g., lecture, question and answer, group work) in 76 middle school literature, language arts, and civics classes from eleven teachers across six schools. We automatically segment teacher audio to analyze speech vs. rest patterns, generate automatic transcripts of the teachers' speech to extract natural language features, and compute low-level acoustic features. We train supervised machine learning models to identify occurrences of five key instructional segments (Question & Answer, Procedures and Directions, Supervised Seatwork, Small Group Work, and Lecture) that collectively comprise 76% of the data. Models are validated independently of teacher in order to increase generalizability to new teachers from the same sample. We were able to identify the five instructional segments above chance levels with F1 scores ranging from 0.64 to 0.78. We discuss key findings in the context of teacher modeling for formative assessment and professional development.", "title": "" }, { "docid": "51d950dfb9f71b9c8948198c147b9884", "text": "Collaborative filtering is the most popular approach to build recommender systems and has been successfully employed in many applications. However, it cannot make recommendations for so-called cold start users that have rated only a very small number of items. In addition, these methods do not know how confident they are in their recommendations. Trust-based recommendation methods assume the additional knowledge of a trust network among users and can better deal with cold start users, since users only need to be simply connected to the trust network. On the other hand, the sparsity of the user item ratings forces the trust-based approach to consider ratings of indirect neighbors that are only weakly trusted, which may decrease its precision. In order to find a good trade-off, we propose a random walk model combining the trust-based and the collaborative filtering approach for recommendation. The random walk model allows us to define and to measure the confidence of a recommendation. We performed an evaluation on the Epinions dataset and compared our model with existing trust-based and collaborative filtering methods.", "title": "" }, { "docid": "ddb70c486a7974f7ba1dc3e5ca623fc0", "text": "Activity recognition from on-body sensors is affected by sensor degradation, interconnections failures, and jitter in sensor placement and orientation. We investigate how this may be balanced by exploiting redundant sensors distributed on the body. We recognize activities by a meta-classifier that fuses the information of simple classifiers operating on individual sensors. We investigate the robustness to faults and sensor scalability which follows from classifier fusion. We compare a reference majority voting and a naive Bayesian fusion scheme. We validate this approach by recognizing a set of 10 activities carried out by workers in the quality assurance checkpoint of a car assembly line. Results show that classification accuracy greatly increases with additional sensors (50% with 1 sensor, 80% and 98% with 3 and 57 sensors), and that sensor fusion implicitly allows to compensate for typical faults up to high fault rates. 
These results highlight the benefit of large on-body sensor networks rather than a minimum set of sensors for activity recognition and prompt further investigation.", "title": "" }, { "docid": "8a6e062d17ee175e00288dd875603a9c", "text": "Code summarization, aiming to generate succinct natural language description of source code, is extremely useful for code search and code comprehension. It has played an important role in software maintenance and evolution. Previous approaches generate summaries by retrieving summaries from similar code snippets. However, these approaches heavily rely on whether similar code snippets can be retrieved, how similar the snippets are, and fail to capture the API knowledge in the source code, which carries vital information about the functionality of the source code. In this paper, we propose a novel approach, named TL-CodeSum, which successfully uses API knowledge learned in a different but related task to code summarization. Experiments on large-scale real-world industry Java projects indicate that our approach is effective and outperforms the state-of-the-art in code summarization.", "title": "" }, { "docid": "edfcb2f1f2afcfd2656c2985898867df", "text": "AJAX is a web development technique for building responsive web applications. The paper gives an overview of the AJAX technique and explores ideas for teaching this technique in modules related to Internet technologies and web development. Appropriate examples for use in lab sessions are also suggested.", "title": "" }, { "docid": "2c56891c1c9f128553bab35d061049b8", "text": "RISC vs. CISC wars raged in the 1980s when chip area and processor design complexity were the primary constraints and desktops and servers exclusively dominated the computing landscape. Today, energy and power are the primary design constraints and the computing landscape is significantly different: growth in tablets and smartphones running ARM (a RISC ISA) is surpassing that of desktops and laptops running x86 (a CISC ISA). Further, the traditionally low-power ARM ISA is entering the high-performance server market, while the traditionally high-performance x86 ISA is entering the mobile low-power device market. Thus, the question of whether ISA plays an intrinsic role in performance or energy efficiency is becoming important, and we seek to answer this question through a detailed measurement-based study on real hardware running real applications. We analyze measurements on the ARM Cortex-A8 and Cortex-A9 and Intel Atom and Sandybridge i7 microprocessors over workloads spanning mobile, desktop, and server computing. Our methodical investigation demonstrates the role of ISA in modern microprocessors' performance and energy efficiency. We find that ARM and x86 processors are simply engineering design points optimized for different levels of performance, and there is nothing fundamentally more energy efficient in one ISA class or the other. The ISA being RISC or CISC seems irrelevant.", "title": "" }, { "docid": "878cd4545931099ead5df71076afc731", "text": "Pioneering deep neural networks (DNNs) have become deeper or wider to improve their accuracy in various applications of artificial intelligence. However, DNNs are often too heavy to deploy in practice, and it is often required to control their architectures dynamically given a computing resource budget, i.e., anytime prediction. While most existing approaches have focused on training multiple shallow sub-networks jointly, we study training thin sub-networks instead. 
To this end, we first build many inclusive thin sub-networks (of the same depth) under a minor modification of existing multi-branch DNNs, and find that they can significantly outperform the state-of-the-art dense architecture for anytime prediction. This is remarkable due to their simplicity and effectiveness, but training many thin sub-networks jointly faces a new challenge in training complexity. To address the issue, we also propose a novel DNN architecture by forcing a certain sparsity pattern on multi-branch network parameters, making them train efficiently for the purpose of anytime prediction. In our experiments on the ImageNet dataset, its sub-networks have up to 43.3% smaller sizes (FLOPs) compared to those of the state-of-the-art anytime model with respect to the same accuracy. Finally, we also propose an alternative task under the proposed architecture using a hierarchical taxonomy, which brings a new angle for anytime prediction.", "title": "" }, { "docid": "1212637c91d8c57299c922b6bde91ce8", "text": "BACKGROUND\nIn the late 1980s, occupational science was introduced as a basic discipline that would provide a foundation for occupational therapy. As occupational science grows and develops, some question its relationship to occupational therapy and criticize the direction and extent of its growth and development.\n\n\nPURPOSE\nThis study was designed to describe and critically analyze the growth and development of occupational science and characterize how this has shaped its current status and relationship to occupational therapy.\n\n\nMETHOD\nUsing a mixed methods design, 54 occupational science documents published in the years 1990 and 2000 were critically analyzed to describe changes in the discipline between two points in time. Data describing a range of variables related to authorship, publication source, stated goals for occupational science and type of research were collected.\n\n\nRESULTS\nDescriptive statistics, themes and future directions are presented and discussed.\n\n\nPRACTICE IMPLICATIONS\nThrough the support of a discipline that is dedicated to the pursuit of a full understanding of occupation, occupational therapy will help to create a new and complex body of knowledge concerning occupation. However, occupational therapy must continue to make decisions about how knowledge produced within occupational science and other disciplines can be best used in practice.", "title": "" }, { "docid": "ae7405600f7cf3c7654cc2db73a22340", "text": "The usual approach for automatic summarization is sentence extraction, where key sentences from the input documents are selected based on a suite of features. While word frequency often is used as a feature in summarization, its impact on system performance has not been isolated. In this paper, we study the contribution to summarization of three factors related to frequency: content word frequency, composition functions for estimating sentence importance from word frequency, and adjustment of frequency weights based on context. We carry out our analysis using datasets from the Document Understanding Conferences, studying not only the impact of these features on automatic summarizers, but also their role in human summarization. 
Our research shows that a frequency based summarizer can achieve performance comparable to that of state-of-the-art systems, but only with a good composition function; context sensitivity improves performance and significantly reduces repetition.", "title": "" }, { "docid": "aad3945a69f57049c052bcb222f1b772", "text": "The chapter 1 on Social Media and Social Computing has documented the nature and characteristics of social networks and community detection. The explanation about the emerging of social networks and their properties constitute this chapter followed by a discussion on social community. The nodes, ties and influence in the social networks are the core of the discussion in the second chapter. Centrality is the core discussion here and the degree of centrality and its measure is explained. Understanding network topology is required for social networks concepts.", "title": "" }, { "docid": "1623cdb614ad63675d982e8396e4ff01", "text": "Recognizing textual entailment is a fundamental task in a variety of text mining or natural language processing applications. This paper proposes a simple neural model for RTE problem. It first matches each word in the hypothesis with its most-similar word in the premise, producing an augmented representation of the hypothesis conditioned on the premise as a sequence of word pairs. The LSTM model is then used to model this augmented sequence, and the final output from the LSTM is fed into a softmax layer to make the prediction. Besides the base model, in order to enhance its performance, we also proposed three techniques: the integration of multiple word-embedding library, bi-way integration, and ensemble based on model averaging. Experimental results on the SNLI dataset have shown that the three techniques are effective in boosting the predicative accuracy and that our method outperforms several state-of-the-state ones.", "title": "" }, { "docid": "64fbffe75209359b540617fac4930c44", "text": "Recent developments in information technology have enabled collection and processing of vast amounts of personal data, such as criminal records, shopping habits, credit and medical history, and driving records. This information is undoubtedly very useful in many areas, including medical research, law enforcement and national security. However, there is an increasing public concern about the individuals' privacy. Privacy is commonly seen as the right of individuals to control information about themselves. The appearance of technology for Knowledge Discovery and Data Mining (KDDM) has revitalized concern about the following general privacy issues: • secondary use of the personal information, • handling misinformation, and • granulated access to personal information. They demonstrate that existing privacy laws and policies are well behind the developments in technology, and no longer offer adequate protection. We also discuss new privacy threats posed KDDM, which includes massive data collection, data warehouses, statistical analysis and deductive learning techniques. KDDM uses vast amounts of data to generate hypotheses and discover general patterns. KDDM poses the following new challenges to privacy.", "title": "" }, { "docid": "b0b11a794a35bec71f88cc1ef8405dc4", "text": "In this work, we present a novel method for capturing human body shape from a single scaled silhouette. We combine deep correlated features capturing different 2D views, and embedding spaces based on 3D cues in a novel convolutional neural network (CNN) based architecture. 
We first train a CNN to find a richer body shape representation space from pose invariant 3D human shape descriptors. Then, we learn a mapping from silhouettes to this representation space, with the help of a novel architecture that exploits correlation of multi-view data during training time, to improve prediction at test time. We extensively validate our results on synthetic and real data, demonstrating significant improvements in accuracy as compared to the state-of-the-art, and providing a practical system for detailed human body measurements from a single image.", "title": "" }, { "docid": "9a9dc194e0ca7d1bb825e8aed5c9b4fe", "text": "In this paper we show how to divide data D into n pieces in such a way that D is easily reconstructable from any k pieces, but even complete knowledge of k - 1 pieces reveals absolutely no information about D. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.", "title": "" }, { "docid": "cf015ef9181bf2fcf39eb41f7fa9196e", "text": "Channel estimation is useful in millimeter wave (mmWave) MIMO communication systems. Channel state information allows optimized designs of precoders and combiners under different metrics such as mutual information or signal-to-interference-noise (SINR) ratio. At mmWave, MIMO precoders and combiners are usually hybrid, since this architecture provides a means to trade-off power consumption and achievable rate. Channel estimation is challenging when using these architectures, however, since there is no direct access to the outputs of the different antenna elements in the array. The MIMO channel can only be observed through the analog combining network, which acts as a compression stage of the received signal. Most of prior work on channel estimation for hybrid architectures assumes a frequency-flat mmWave channel model. In this paper, we consider a frequency-selective mmWave channel and propose compressed-sensing-based strategies to estimate the channel in the frequency domain. We evaluate different algorithms and compute their complexity to expose trade-offs in complexity-overhead-performance as compared to those of previous approaches. This work was partially funded by the Agencia Estatal de Investigación (Spain) and the European Regional Development Fund (ERDF) under project MYRADA (TEC2016-75103-C2-2-R), the U.S. Department of Transportation through the Data-Supported Transportation Operations and Planning (D-STOP) Tier 1 University Transportation Center, by the Texas Department of Transportation under Project 0-6877 entitled Communications and Radar-Supported Transportation Operations and Planning (CAR-STOP) and by the National Science Foundation under Grant NSF-CCF-1319556 and NSF-CCF-1527079.", "title": "" }, { "docid": "f9be959b4c2392f7fc1dff2a1bde4dae", "text": "This paper presents a new Web-based system, Mooshak, to handle programming contests. The system acts as a full contest manager as well as an automatic judge for programming contests. 
Mooshak innovates in a number of aspects: it has a scalable architecture that can be used from small single server contests to complex multi-site contests with simultaneous public online contests and redundancy; it has a robust data management system favoring simple procedures for storing, replicating, backing up data and failure recovery using persistent objects; it has automatic judging capabilities to assist human judges in the evaluation of programs; it has built-in safety measures to prevent users from interfering with the normal progress of contests. Mooshak is an open system implemented on the Linux operating system using the Apache HTTP server and the Tcl scripting language. This paper starts by describing the main features of the system and its architecture with reference to the automated judging, data management based on the replication of persistent objects over a network. Finally, we describe our experience using this system for managing two official programming contests. Copyright c © 2003 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "31f838fb0c7db7e8b58fb1788d5554c8", "text": "Today’s smartphones operate independently of each other, using only local computing, sensing, networking, and storage capabilities and functions provided by remote Internet services. It is generally difficult or expensive for one smartphone to share data and computing resources with another. Data is shared through centralized services, requiring expensive uploads and downloads that strain wireless data networks. Collaborative computing is only achieved using ad hoc approaches. Coordinating smartphone data and computing would allow mobile applications to utilize the capabilities of an entire smartphone cloud while avoiding global network bottlenecks. In many cases, processing mobile data in-place and transferring it directly between smartphones would be more efficient and less susceptible to network limitations than offloading data and processing to remote servers. We have developed Hyrax, a platform derived from Hadoop that supports cloud computing on Android smartphones. Hyrax allows client applications to conveniently utilize data and execute computing jobs on networks of smartphones and heterogeneous networks of phones and servers. By scaling with the number of devices and tolerating node departure, Hyrax allows applications to use distributed resources abstractly, oblivious to the physical nature of the cloud. The design and implementation of Hyrax is described, including experiences in porting Hadoop to the Android platform and the design of mobilespecific customizations. The scalability of Hyrax is evaluated experimentally and compared to that of Hadoop. Although the performance of Hyrax is poor for CPU-bound tasks, it is shown to tolerate node-departure and offer reasonable performance in data sharing. A distributed multimedia search and sharing application is implemented to qualitatively evaluate Hyrax from an application development perspective.", "title": "" }, { "docid": "7adffc2dd1d6412b4bb01b38ced51c24", "text": "With the popularity of the Internet and mobile intelligent terminals, the number of mobile applications is exploding. Mobile intelligent terminals trend to be the mainstream way of people's work and daily life online in place of PC terminals. Mobile application system brings some security problems inevitably while it provides convenience for people, and becomes a main target of hackers. Therefore, it is imminent to strengthen the security detection of mobile applications. 
This paper divides mobile application security detection into client security detection and server security detection. We propose a combining static and dynamic security detection method to detect client-side. We provide a method to get network information of server by capturing and analyzing mobile application traffic, and propose a fuzzy testing method based on HTTP protocol to detect server-side security vulnerabilities. Finally, on the basis of this, an automated platform for security detection of mobile application system is developed. Experiments show that the platform can detect the vulnerabilities of mobile application client and server effectively, and realize the automation of mobile application security detection. It can also reduce the cost of mobile security detection and enhance the security of mobile applications.", "title": "" }, { "docid": "26d5237c912977223e0ba45c0f949e3d", "text": "Generally speaking, ‘Education’ is utilized in three senses: Knowledge, Subject and a Process. When a person achieves degree up to certain level we do not call it education .As for example if a person has secured Masters degree then we utilize education it a very narrower sense and call that the person has achieved education up to Masters Level. In the second sense, education is utilized in a sense of discipline. As for example if a person had taken education as a paper or as a discipline during his study in any institution then we utilize education as a subject. In the third sense, education is utilized as a process. In fact when we talk of education, we talk in the third sense i.e. education as a process. Thus, we talk what is education as a process? What are their importances etc.? The following debate on education will discuss education in this sense and we will talk education as a process.", "title": "" }, { "docid": "7e75bbbf5e86edc396aaa9d9db02c509", "text": "Background: In recent years, blockchain technology has attracted considerable attention. It records cryptographic transactions in a public ledger that is difficult to alter and compromise because of the distributed consensus. As a result, blockchain is believed to resist fraud and hacking. Results: This work explores the types of fraud and malicious activities that can be prevented by blockchain technology and identifies attacks to which blockchain remains vulnerable. Conclusions: This study recommends appropriate defensive measures and calls for further research into the techniques for fighting malicious activities related to blockchains.", "title": "" } ]
scidocsrr
51f87aa79aabc176871d505e427c0ded
Intent Classification of Short-Text on Social Media
[ { "docid": "ac46e6176377612544bb74c064feed67", "text": "The existence and use of standard test collections in information retrieval experimentation allows results to be compared between research groups and over time. Such comparisons, however, are rarely made. Most researchers only report results from their own experiments, a practice that allows lack of overall improvement to go unnoticed. In this paper, we analyze results achieved on the TREC Ad-Hoc, Web, Terabyte, and Robust collections as reported in SIGIR (1998–2008) and CIKM (2004–2008). Dozens of individual published experiments report effectiveness improvements, and often claim statistical significance. However, there is little evidence of improvement in ad-hoc retrieval technology over the past decade. Baselines are generally weak, often being below the median original TREC system. And in only a handful of experiments is the score of the best TREC automatic run exceeded. Given this finding, we question the value of achieving even a statistically significant result over a weak baseline. We propose that the community adopt a practice of regular longitudinal comparison to ensure measurable progress, or at least prevent the lack of it from going unnoticed. We describe an online database of retrieval runs that facilitates such a practice.", "title": "" }, { "docid": "7ec88ea12923d546416fbc6a72e6ff5d", "text": "This paper proposes to study the problem of identifying intention posts in online discussion forums. For example, in a discussion forum, a user wrote “I plan to buy a camera,” which indicates a buying intention. This intention can be easily exploited by advertisers. To the best of our knowledge, there is still no reported study of this problem. Our research found that this problem is particularly suited to transfer learning because in different domains, people express the same intention in similar ways. We then propose a new transfer learning method which, unlike a general transfer learning algorithm, exploits several special characteristics of the problem. Experimental results show that the proposed method outperforms several strong baselines, including supervised learning in the target domain and a recent transfer learning method.", "title": "" }, { "docid": "3b8f29b9cc930200e079df0ea2f30f68", "text": "In this paper, we propose to study the problem of identifying and classifying tweets into intent categories. For example, a tweet “I wanna buy a new car” indicates the user’s intent for buying a car. Identifying such intent tweets will have great commercial value among others. In particular, it is important that we can distinguish different types of intent tweets. We propose to classify intent tweets into six categories, namely Food & Drink, Travel, Career & Education, Goods & Services, Event & Activities and Trifle. We propose a semi-supervised learning approach to categorizing intent tweets into the six categories. We construct a test collection by using a bootstrap method. Our experimental results show that our approach is effective in inferring intent categories for tweets.", "title": "" }, { "docid": "841f2ab48d111a6b70b2a3171c155f44", "text": "In this paper we present SPADE, a new algorithm for fast discovery of Sequential Patterns. The existing solutions to this problem make repeated database scans, and use complex hash structures which have poor locality. 
SPADE utilizes combinatorial properties to decompose the original problem into smaller sub-problems that can be independently solved in main-memory using efficient lattice search techniques, and using simple join operations. All sequences are discovered in only three database scans. Experiments show that SPADE outperforms the best previous algorithm by a factor of two, and by an order of magnitude with some pre-processed data. It also has linear scalability with respect to the number of input-sequences, and a number of other database parameters. Finally, we discuss how the results of sequence mining can be applied in a real application domain.", "title": "" } ]
[ { "docid": "441633276271b94dc1bd3e5e28a1014d", "text": "While a large number of consumers in the US and Europe frequently shop on the Internet, research on what drives consumers to shop online has typically been fragmented. This paper therefore proposes a framework to increase researchers’ understanding of consumers’ attitudes toward online shopping and their intention to shop on the Internet. The framework uses the constructs of the Technology Acceptance Model (TAM) as a basis, extended by exogenous factors and applies it to the online shopping context. The review shows that attitudes toward online shopping and intention to shop online are not only affected by ease of use, usefulness, and enjoyment, but also by exogenous factors like consumer traits, situational factors, product characteristics, previous online shopping experiences, and trust in online shopping.", "title": "" }, { "docid": "d2f5f5b42d732a5d27310e4f2d76116a", "text": "This paper reports on a cluster analysis of pervasive games through a bottom-up approach based upon 120 game examples. The basis for the clustering algorithm relies on the identification of pervasive gameplay design patterns for each game from a set of 75 possible patterns. The resulting hierarchy presents a view of the design space of pervasive games, and details of clusters and novel gameplay features are described. The paper concludes with a view over how the clusters relate to existing genres and models of pervasive games.", "title": "" }, { "docid": "91726dd6fb83be434766a05bdaba7a7a", "text": "Bargaining with reading habit is no need. Reading is not kind of something sold that you can take or not. It is a thing that will change your life to life better. It is the thing that will give you many things around the world and this universe, in the real world and here after. As what will be given by this practical digital libraries books bytes and bucks, how can you bargain with the thing that has many benefits for you?", "title": "" }, { "docid": "c2aed51127b8753e4b71da3b331527cd", "text": "In this paper, we present the theory and design of interval type-2 fuzzy logic systems (FLSs). We propose an efficient and simplified method to compute the input and antecedent operations for interval type-2 FLSs; one that is based on a general inference formula for them. We introduce the concept of upper and lower membership functions (MFs) and illustrate our efficient inference method for the case of Gaussian primary MFs. We also propose a method for designing an interval type-2 FLS in which we tune its parameters. Finally, we design type-2 FLSs to perform time-series forecasting when a nonstationary time-series is corrupted by additive noise where SNR is uncertain and demonstrate improved performance over type-1 FLSs.", "title": "" }, { "docid": "b6160256dd6877fea4cec96b74ebc03a", "text": "A cascaded long short-term memory (LSTM) architecture with discriminant feature learning is proposed for the task of question answering on real world images. The proposed LSTM architecture jointly learns visual features and parts of speech (POS) tags of question words or tokens. Also, dimensionality of deep visual features is reduced by applying Principal Component Analysis (PCA) technique. In this manner, the proposed question answering model captures the generic pattern of question for a given context of image which is just not constricted within the training dataset. Empirical outcome shows that this kind of approach significantly improves the accuracy. 
It is believed that this kind of generic learning is a step towards a real-world visual question answering (VQA) system which will perform well for all possible forms of open-ended natural language queries.", "title": "" }, { "docid": "f2d2979ca63d47ba33fffb89c16b9499", "text": "Shor and Grover demonstrated that a quantum computer can outperform any classical computer in factoring numbers and in searching a database by exploiting the parallelism of quantum mechanics. Whereas Shor's algorithm requires both superposition and entanglement of a many-particle system, the superposition of single-particle quantum states is sufficient for Grover's algorithm. Recently, the latter has been successfully implemented using Rydberg atoms. Here we propose an implementation of Grover's algorithm that uses molecular magnets, which are solid-state systems with a large spin; their spin eigenstates make them natural candidates for single-particle systems. We show theoretically that molecular magnets can be used to build dense and efficient memory devices based on the Grover algorithm. In particular, one single crystal can serve as a storage unit of a dynamic random access memory device. Fast electron spin resonance pulses can be used to decode and read out stored numbers of up to 10^5, with access times as short as 10^-10 seconds. We show that our proposal should be feasible using the molecular magnets Fe8 and Mn12.", "title": "" }, { "docid": "a55eed627afaf39ee308cc9e0e10a698", "text": "Perspective-taking is a complex cognitive process involved in social cognition. This positron emission tomography (PET) study investigated by means of a factorial design the interaction between the emotional and the perspective factors. Participants were asked to adopt either their own (first person) perspective or the (third person) perspective of their mothers in response to situations involving social emotions or to neutral situations. The main effect of third-person versus first-person perspective resulted in hemodynamic increase in the medial part of the superior frontal gyrus, the left superior temporal sulcus, the left temporal pole, the posterior cingulate gyrus, and the right inferior parietal lobe. A cluster in the postcentral gyrus was detected in the reverse comparison. The amygdala was selectively activated when subjects were processing social emotions, both related to self and other. Interaction effects were identified in the left temporal pole and in the right postcentral gyrus. These results support our prediction that the frontopolar, the somatosensory cortex, and the right inferior parietal lobe are crucial in the process of self/other distinction. In addition, this study provides important building blocks in our understanding of social emotion processing and human empathy.", "title": "" }, { "docid": "b939227b7de6ef57c2d236fcb01b7bfc", "text": "We propose a speed estimation method with human body accelerations measured on the chest by a tri-axial accelerometer. To estimate the speed we segmented the acceleration signal into strides measuring stride time, and applied two neural networks into the patterns parameterized from each stride calculating stride length. The first neural network determines whether the subject walks or runs, and the second neural network with different node interactions according to the subject's status estimates stride length. Walking or running speed is calculated with the estimated stride length divided by the measured stride time. 
The neural networks were trained by patterns obtained from 15 subjects and then validated by 2 untrained subjects' patterns. The result shows good agreement between actual and estimated speeds presenting the linear correlation coefficient r = 0.9874. We also applied the method to the real field and track data.", "title": "" }, { "docid": "5ceb6e39c8f826c0a7fd0e5086090a5f", "text": "Mobile botnet phenomenon is gaining popularity among malware writers in order to exploit vulnerabilities in smartphones. In particular, mobile botnets enable illegal access to a victim’s smartphone, can compromise critical user data and launch a DDoS attack through Command and Control (C&C). In this article, we propose a static analysis approach, DeDroid, to investigate botnet-specific properties that can be used to detect mobile applications with botnet intensions. Initially, we identify critical features by observing code behavior of the few known malware binaries having C&C features. Then, we compare the identified features with the malicious and benign applications of Drebin dataset. The results show against the comparative analysis that, Drebin dataset has 35% malicious applications which qualify as botnets. Upon closer examination, 90% of the potential botnets are confirmed as botnets. Similarly, for comparative analysis against benign applications having C&C features, DeDroid has achieved adequate detection accuracy. In addition, DeDroid has achieved high accuracy with negligible false positive rate while making decision for state-of-the-art malicious applications.", "title": "" }, { "docid": "550ac6565bf42f42ec35d63f8c3b1e01", "text": "A fully planar ultrawideband phased array with wide scan and low cross-polarization performance is introduced. The array is based on Munk's implementation of the current sheet concept, but it employs a novel feeding scheme for the tightly coupled horizontal dipoles that enables simple PCB fabrication. This feeding eliminates the need for “cable organizers” and external baluns, and when combined with dual-offset dual-polarized lattice arrangements the array can be implemented in a modular, tile-based fashion. Simple physical explanations and circuit models are derived to explain the array's operation and guide the design process. The theory and insights are subsequently used to design an exemplary dual-polarized infinite array with 5:1 bandwidth and VSWR <; 2.1 at broadside, and cross-polarization ≈ -15 dB out to θ = 45° in the D- plane.", "title": "" }, { "docid": "04c0a4613ab0ec7fd77ac5216a17bd1d", "text": "Many contemporary biomedical applications such as physiological monitoring, imaging, and sequencing produce large amounts of data that require new data processing and visualization algorithms. Algorithms such as principal component analysis (PCA), singular value decomposition and random projections (RP) have been proposed for dimensionality reduction. In this paper we propose a new random projection version of the fuzzy c-means (FCM) clustering algorithm denoted as RPFCM that has a different ensemble aggregation strategy than the one previously proposed, denoted as ensemble FCM (EFCM). RPFCM is more suitable than EFCM for big data sets (large number of points, n). 
We evaluate our method and compare it to EFCM on synthetic and real datasets.", "title": "" }, { "docid": "69d826aa8309678cf04e2870c23a99dd", "text": "Contemporary analyses of cell metabolism have called out three metabolites: ATP, NADH, and acetyl-CoA, as sentinel molecules whose accumulation represent much of the purpose of the catabolic arms of metabolism and then drive many anabolic pathways. Such analyses largely leave out how and why ATP, NADH, and acetyl-CoA (Figure 1 ) at the molecular level play such central roles. Yet, without those insights into why cells accumulate them and how the enabling properties of these key metabolites power much of cell metabolism, the underlying molecular logic remains mysterious. Four other metabolites, S-adenosylmethionine, carbamoyl phosphate, UDP-glucose, and Δ2-isopentenyl-PP play similar roles in using group transfer chemistry to drive otherwise unfavorable biosynthetic equilibria. This review provides the underlying chemical logic to remind how these seven key molecules function as mobile packets of cellular currencies for phosphoryl transfers (ATP), acyl transfers (acetyl-CoA, carbamoyl-P), methyl transfers (SAM), prenyl transfers (IPP), glucosyl transfers (UDP-glucose), and electron and ADP-ribosyl transfers (NAD(P)H/NAD(P)+) to drive metabolic transformations in and across most primary pathways. The eighth key metabolite is molecular oxygen (O2), thermodynamically activated for reduction by one electron path, leaving it kinetically stable to the vast majority of organic cellular metabolites.", "title": "" }, { "docid": "df35b679204e0729266a1076685600a1", "text": "A new innovations state space modeling framework, incorporating Box-Cox transformations, Fourier series with time varying coefficients and ARMA error correction, is introduced for forecasting complex seasonal time series that cannot be handled using existing forecasting models. Such complex time series include time series with multiple seasonal periods, high frequency seasonality, non-integer seasonality and dual-calendar effects. Our new modelling framework provides an alternative to existing exponential smoothing models, and is shown to have many advantages. The methods for initialization and estimation, including likelihood evaluation, are presented, and analytical expressions for point forecasts and interval predictions under the assumption of Gaussian errors are derived, leading to a simple, comprehensible approach to forecasting complex seasonal time series. Our trigonometric formulation is also presented as a means of decomposing complex seasonal time series, which cannot be decomposed using any of the existing decomposition methods. The approach is useful in a broad range of applications, and we illustrate its versatility in three empirical studies where it demonstrates excellent forecasting performance over a range of prediction horizons. In addition, we show that our trigonometric decomposition leads to the identification and extraction of seasonal components, which are otherwise not apparent in the time series plot itself.", "title": "" }, { "docid": "89875f4c0d70e655dd1ff9ffef7c04c2", "text": "Flexible electronics incorporate all the functional attributes of conventional rigid electronics in formats that have been altered to survive mechanical deformations. Understanding the evolution of device performance during bending, stretching, or other mechanical cycling is, therefore, fundamental to research efforts in this area. 
Here, we review the various classes of flexible electronic devices (including power sources, sensors, circuits and individual components) and describe the basic principles of device mechanics. We then review techniques to characterize the deformation tolerance and durability of these flexible devices, and we catalogue geometric designs that are intended to optimize electronic systems for maximum flexibility.", "title": "" }, { "docid": "d89ba95eb3bd7aca4a7acb17be973c06", "text": "An UWB elliptical slot antenna embedded with open-end slit on the tuning stub or parasitic strip on the aperture for achieving the band-notch characteristics has been proposed in this conference. Experimental results have also confirmed band-rejection capability for the proposed antenna at the desired band, while nearly omni-directional radiation features are still preserved. Finally, how to shrink the geometry dimensions of the UWB antenna will be investigated in the future.", "title": "" }, { "docid": "c8f9d10de0d961e4ee14b6b118b5f89a", "text": "Deep learning is having a transformative effect on how sensor data are processed and interpreted. As a result, it is becoming increasingly feasible to build sensor-based computational models that are much more robust to real-world noise and complexity than previously possible. It is paramount that these innovations reach mobile and embedded devices that often rely on understanding and reacting to sensor data. However, deep models conventionally demand a level of system resources (e.g., memory and computation) that makes them problematic to run directly on constrained devices. In this work, we present the DeepX toolkit (DXTK); an open-source collection of software components for simplifying the execution of deep models on resource-sensitive platforms. DXTK contains a number of pre-trained low-resource deep models that users can quickly adopt and integrate for their particular application needs. It also offers a range of runtime options for executing deep models on a range of devices including both Android and Linux variants. But the heart of DXTK is a series of optimization techniques (viz. weight/sparse factorization, convolution separation, precision scaling, and parameter cleaning). Each technique offers a complementary approach to shaping system resource requirements, and is compatible with deep and convolutional neural networks. We hope that DXTK proves to be a valuable resource for the community, and accelerates the adoption and study of resource-constrained deep learning.", "title": "" }, { "docid": "996f1743ca60efa05f5113a4459f8b61", "text": "This paper presents a method for movie genre categorization of movie trailers, based on scene categorization. We view our approach as a step forward from using only low-level visual feature cues, towards the eventual goal of high-level semantic understanding of feature films. Our approach decomposes each trailer into a collection of keyframes through shot boundary analysis. From these keyframes, we use state-of-the-art scene detectors and descriptors to extract features, which are then used for shot categorization via unsupervised learning. This allows us to represent trailers using a bag-of-visual-words (bovw) model with shot classes as vocabularies. We approach the genre classification task by mapping bovw temporally structured trailer features to four high-level movie genres: action, comedy, drama or horror films. We have conducted experiments on 1239 annotated trailers. 
Our experimental results demonstrate that exploiting scene structures improves film genre classification compared to using only low-level visual features.", "title": "" }, { "docid": "6b4e1e45ef1b91b7694c62bd5d3cd9fc", "text": "Recently, academia and law enforcement alike have shown a strong demand for data that is collected from online social networks. In this work, we present a novel method for harvesting such data from social networking websites. Our approach uses a hybrid system that is based on a custom add-on for social networks in combination with a web crawling component. The datasets that our tool collects contain profile information (user data, private messages, photos, etc.) and associated meta-data (internal timestamps and unique identifiers). These social snapshots are significant for security research and in the field of digital forensics. We implemented a prototype for Facebook and evaluated our system on a number of human volunteers. We show the feasibility and efficiency of our approach and its advantages in contrast to traditional techniques that rely on application-specific web crawling and parsing. Furthermore, we investigate different use-cases of our tool that include consensual application and the use of sniffed authentication cookies. Finally, we contribute to the research community by publishing our implementation as an open-source project.", "title": "" }, { "docid": "22ecb164fb7a8bf4968dd7f5e018c736", "text": "Unsupervised learning techniques in computer vision often require learning latent representations, such as low-dimensional linear and non-linear subspaces. Noise and outliers in the data can frustrate these approaches by obscuring the latent spaces. Our main goal is deeper understanding and new development of robust approaches for representation learning. We provide a new interpretation for existing robust approaches and present two specific contributions: a new robust PCA approach, which can separate foreground features from dynamic background, and a novel robust spectral clustering method, that can cluster facial images with high accuracy. Both contributions show superior performance to standard methods on real-world test sets.", "title": "" } ]
scidocsrr
53e06b416a4f5369636047cea17f5d6d
Is emotional contagion special? An fMRI study on neural systems for affective and cognitive empathy
[ { "docid": "0704c17b0e0d6df371dd94c4fbcf7817", "text": "Our ability to explain and predict other people's behaviour by attributing to them independent mental states, such as beliefs and desires, is known as having a 'theory of mind'. Interest in this very human ability has engendered a growing body of evidence concerning its evolution and development and the biological basis of the mechanisms underpinning it. Functional imaging has played a key role in seeking to isolate brain regions specific to this ability. Three areas are consistently activated in association with theory of mind. These are the anterior paracingulate cortex, the superior temporal sulci and the temporal poles bilaterally. This review discusses the functional significance of each of these areas within a social cognitive network.", "title": "" } ]
[ { "docid": "eb888ba37e7e97db36c330548569508d", "text": "Since the first online demonstration of Neural Machine Translation (NMT) by LISA (Bahdanau et al., 2014), NMT development has recently moved from laboratory to production systems as demonstrated by several entities announcing rollout of NMT engines to replace their existing technologies. NMT systems have a large number of training configurations and the training process of such systems is usually very long, often a few weeks, so role of experimentation is critical and important to share. In this work, we present our approach to production-ready systems simultaneously with release of online demonstrators covering a large variety of languages ( 12 languages, for32 language pairs). We explore different practical choices: an efficient and evolutive open-source framework; data preparation; network architecture; additional implemented features; tuning for production; etc. We discuss about evaluation methodology, present our first findings and we finally outline further work. Our ultimate goal is to share our expertise to build competitive production systems for ”generic” translation. We aim at contributing to set up a collaborative framework to speed-up adoption of the technology, foster further research efforts and enable the delivery and adoption to/by industry of use-case specific engines integrated in real production workflows. Mastering of the technology would allow us to build translation engines suited for particular needs, outperforming current simplest/uniform systems.", "title": "" }, { "docid": "604619dd5f23569eaff40eabc8e94f52", "text": "Understanding the causes and effects of species invasions is a priority in ecology and conservation biology. One of the crucial steps in evaluating the impact of invasive species is to map changes in their actual and potential distribution and relative abundance across a wide region over an appropriate time span. While direct and indirect remote sensing approaches have long been used to assess the invasion of plant species, the distribution of invasive animals is mainly based on indirect methods that rely on environmental proxies of conditions suitable for colonization by a particular species. The aim of this article is to review recent efforts in the predictive modelling of the spread of both plant and animal invasive species using remote sensing, and to stimulate debate on the potential use of remote sensing in biological invasion monitoring and forecasting. Specifically, the challenges and drawbacks of remote sensing techniques are discussed in relation to: i) developing species distribution models, and ii) studying life cycle changes and phenological variations. Finally, the paper addresses the open challenges and pitfalls of remote sensing for biological invasion studies including sensor characteristics, upscaling and downscaling in species distribution models, and uncertainty of results.", "title": "" }, { "docid": "ce9421a7f8c1ae3a6b3983d7e0ff66c0", "text": "Supporting Hebb's 1949 hypothesis of use-induced plasticity of the nervous system, our group found in the 1960s that training or differential experience induced neurochemical changes in cerebral cortex of the rat and regional changes in weight of cortex. Further studies revealed changes in cortical thickness, size of synaptic contacts, number of dendritic spines, and dendritic branching. 
Similar effects were found whether rats were assigned to differential experience at weaning (25 days of age), as young adults (105 days) or as adults (285 days). Enriched early experience improved performance on several tests of learning. Cerebral results of experience in an enriched environment are similar to results of formal training. Enriched experience and training appear to evoke the same cascade of neurochemical events in causing plastic changes in brain. Sufficiently rich experience may be necessary for full growth of species-specific brain characteristics and behavioral potential. Clayton and Krebs found in 1994 that birds that normally store food have larger hippocampi than related species that do not store. This difference develops only in birds given the opportunity to store and recover food. Research on use-induced plasticity is being applied to promote child development, successful aging, and recovery from brain damage; it is also being applied to benefit animals in laboratories, zoos and farms.", "title": "" }, { "docid": "f9b11e55be907175d969cd7e76803caf", "text": "In this paper, we consider the multivariate Bernoulli distribution as a model to estimate the structure of graphs with binary nodes. This distribution is discussed in the framework of the exponential family, and its statistical properties regarding independence of the nodes are demonstrated. Importantly the model can estimate not only the main effects and pairwise interactions among the nodes but also is capable of modeling higher order interactions, allowing for the existence of complex clique effects. We compare the multivariate Bernoulli model with existing graphical inference models – the Ising model and the multivariate Gaussian model, where only the pairwise interactions are considered. On the other hand, the multivariate Bernoulli distribution has an interesting property in that independence and uncorrelatedness of the component random variables are equivalent. Both the marginal and conditional distributions of a subset of variables in the multivariate Bernoulli distribution still follow the multivariate Bernoulli distribution. Furthermore, the multivariate Bernoulli logistic model is developed under generalized linear model theory by utilizing the canonical link function in order to include covariate information on the nodes, edges and cliques. We also consider variable selection techniques such as LASSO in the logistic model to impose sparsity structure on the graph. Finally, we discuss extending the smoothing spline ANOVA approach to the multivariate Bernoulli logistic model to enable estimation of non-linear effects of the predictor variables.", "title": "" }, { "docid": "fa89dd854c37fe87d7164e43826fac7c", "text": "Deployment of public wireless access points (also known as public hotspots) and the prevalence of portable computing devices has made it more convenient for people on travel to access the Internet. On the other hand, it also generates large privacy concerns due to the open environment. However, most users are neglecting the privacy threats because currently there is no way for them to know to what extent their privacy is revealed. In this paper, we examine the privacy leakage in public hotspots from activities such as domain name querying, web browsing, search engine querying and online advertising. We discover that, from these activities multiple categories of user privacy can be leaked, such as identity privacy, location privacy, financial privacy, social privacy and personal privacy. 
We have collected real data from 20 airport datasets in four countries and discover that the privacy leakage can be up to 68%, which means two thirds of users on travel leak their private information while accessing the Internet at airports. Our results indicate that users are not fully aware of the privacy leakage they can encounter in the wireless environment, especially in public WiFi networks. This fact can urge network service providers and website designers to improve their service by developing better privacy preserving mechanisms.", "title": "" }, { "docid": "0800bfff6569d6d4f3eb00fae0ea1c11", "text": "An 8-layer, 75 nm half-pitch, 3D stacked vertical-gate (VG) TFT BE-SONOS NAND Flash array is fabricated and characterized. We propose a buried-channel (n-type well) device to improve the read current of TFT NAND, and it also allows the junction-free structure which is particularly important for 3D stackable devices. Large self-boosting disturb-free memory window (6V) can be obtained in our device, and for the first time the “Z-interference” between adjacent vertical layers is studied. The proposed buried-channel VG NAND allows better X, Y pitch scaling and is a very attractive candidate for ultra high-density 3D stackable NAND Flash.", "title": "" }, { "docid": "ef011f601c37f0d08c2567fe7e231324", "text": "We live in a world were data are generated from a myriad of sources, and it is really cheap to collect and storage such data. However, the real benefit is not related to the data itself, but with the algorithms that are capable of processing such data in a tolerable elapse time, and to extract valuable knowledge from it. Therefore, the use of Big Data Analytics tools provide very significant advantages to both industry and academia. The MapReduce programming framework can be stressed as the main paradigm related with such tools. It is mainly identified by carrying out a distributed execution for the sake of providing a high degree of scalability, together with a fault-", "title": "" }, { "docid": "393f3e89c038b10feebb5ccb4fa80d07", "text": "Photo-excitation of certain semiconductors can lead to the production of reactive oxygen species that can inactivate microorganisms. The mechanisms involved are reviewed, along with two important applications. The first is the use of photocatalysis to enhance the solar disinfection of water. It is estimated that 750 million people do not have accessed to an improved source for drinking and many more rely on sources that are not safe. If one can utilize photocatalysis to enhance the solar disinfection of water and provide an inexpensive, simple method of water disinfection, then it could help reduce the risk of waterborne disease. The second application is the use of photocatalytic coatings to combat healthcare associated infections. Two challenges are considered, i.e., the use of photocatalytic coatings to give \"self-disinfecting\" surfaces to reduce the risk of transmission of infection via environmental surfaces, and the use of photocatalytic coatings for the decontamination and disinfection of medical devices. 
In the final section, the development of novel photocatalytic materials for use in disinfection applications is reviewed, taking account of materials developed for other photocatalytic applications which may be transferable for disinfection purposes.", "title": "" }, { "docid": "b59d728b6b2cc63ccff242730571db09", "text": "Throughout the latter half of the past century cinema has played a significant role in the shaping of the core narratives of Australia. Films express and implicitly shape national images and symbolic representations of cultural fictions in which ideas about Indigenous identity have been embedded. In this paper, exclusionary practices in Australian narratives are analysed through examples of films representing Aboriginal identity. Through these filmic narratives the articulation, interrogation, and contestation of views about filmic representations of Aboriginal identity in Australia is illuminated. The various themes in the filmic narratives are examined in order to compare and contrast the ways in which the films display the operation of narrative closure and dualisms within the film texts.", "title": "" }, { "docid": "37ef43a6ed0dcf0817510b84224d9941", "text": "Contrast enhancement is one of the most important issues of image processing, pattern recognition and computer vision. The commonly used techniques for contrast enhancement fall into two categories: (1) indirect methods of contrast enhancement and (2) direct methods of contrast enhancement. Indirect approaches mainly modify the histogram by assigning new values to the original intensity levels. Histogram specification and histogram equalization are two popular indirect contrast enhancement methods. However, the histogram modification technique only stretches the global distribution of the intensity. The basic idea of direct contrast enhancement methods is to establish a criterion of contrast measurement and to enhance the image by improving the contrast measure. The contrast can be measured globally and locally. It is more reasonable to define a local contrast when an image contains textual information. Fuzzy logic has found many applications in image processing, pattern recognition, etc. Fuzzy set theory is a useful tool for handling the uncertainty in the images associated with vagueness and/or imprecision. In this paper, we propose a novel adaptive direct fuzzy contrast enhancement method based on the fuzzy entropy principle and fuzzy set theory. We have conducted experiments on many images. The experimental results demonstrate that the proposed algorithm is very effective in contrast enhancement as well as in preventing over-enhancement. © 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "f81dd0c86a7b45e743e4be117b4030c2", "text": "Stock market prediction is of great importance for financial analysis. Traditionally, many studies only use the news or numerical data for the stock market prediction. In recent years, in order to explore their complementarity, some studies have been conducted to equally treat dual sources of information. However, numerical data often play a much more important role compared with the news. In addition, the existing simple combination cannot exploit their complementarity. In this paper, we propose a numerical-based attention (NBA) method for dual sources stock market prediction. Our major contributions are summarized as follows. 
First, we propose an attention-based method to effectively exploit the complementarity between news and numerical data in predicting the stock prices. The stock trend information hidden in the news is transformed into the importance distribution of numerical data. Consequently, the news is encoded to guide the selection of numerical data. Our method can effectively filter the noise and make full use of the trend information in news. Then, in order to evaluate our NBA model, we collect news corpus and numerical data to build three datasets from two sources: the China Security Index 300 (CSI300) and the Standard & Poor’s 500 (S&P500). Extensive experiments are conducted, showing that our NBA is superior to previous models in dual sources stock price prediction.", "title": "" }, { "docid": "bd6ba64d14c8234e5ec2d07762a1165f", "text": "Since their introduction in the early years of this century, Variable Stiffness Actuators (VSA) witnessed a sustain ed growth of interest in the research community, as shown by the growing number of publications. While many consider VSA very interesting for applications, one of the factors hindering their further diffusion is the relatively new conceptual structure of this technology. In choosing a VSA for his/her application, the educated practitioner, used to choosing robot actuators based on standardized procedures and uniformly presented data, would be confronted with an inhomogeneous and rather disorganized mass of information coming mostly from scientific publications. In this paper, the authors consider how the design procedures and data presentation of a generic VS actuator could be organized so as to minimize the engineer’s effort in choosing the actuator type and size that would best fit the application needs. The reader is led through the list of the most important parameters that will determine the ultimate performance of his/her VSA robot, and influence both the mechanical design and the controller shape. This set of parameters extends the description of a traditional electric actuator with quantities describing the capability of the VSA to change its output stiffness. As an instrument for the end-user, the VSA datasheet is intended to be a compact, self-contained description of an actuator that summarizes all the salient characteristics that the user must be aware of when choosing a device for his/her application. At the end some example of compiled VSA datasheets are reported, as well as a few examples of actuator selection procedures.", "title": "" }, { "docid": "058db5e1a8c58a9dc4b68f6f16847abc", "text": "Insurance companies must manage millions of claims per year. While most of these claims are non-fraudulent, fraud detection is core for insurance companies. The ultimate goal is a predictive model to single out the fraudulent claims and pay out the non-fraudulent ones immediately. Modern machine learning methods are well suited for this kind of problem. Health care claims often have a data structure that is hierarchical and of variable length. We propose one model based on piecewise feed forward neural networks (deep learning) and another model based on self-attention neural networks for the task of claim management. We show that the proposed methods outperform bagof-words based models, hand designed features, and models based on convolutional neural networks, on a data set of two million health care claims. 
The proposed self-attention method performs the best.", "title": "" }, { "docid": "004a9fcd8a447f8601b901cff338f133", "text": "Hybrid precoding has been recently proposed as a cost-effective transceiver solution for millimeter wave systems. While the number of radio frequency chains has been effectively reduced in existing works, a large number of high-precision phase shifters are still needed. Practical phase shifters are with coarsely quantized phases, and their number should be reduced to a minimum due to cost and power consideration. In this paper, we propose a novel hardware-efficient implementation for hybrid precoding, called the fixed phase shifter (FPS) implementation. It only requires a small number of phase shifters with quantized and fixed phases. To enhance the spectral efficiency, a switch network is put forward to provide dynamic connections from phase shifters to antennas, which is adaptive to the channel states. An effective alternating minimization algorithm is developed with closed-form solutions in each iteration to determine the hybrid precoder and the states of switches. Moreover, to further reduce the hardware complexity, a group-connected mapping strategy is proposed to reduce the number of switches. Simulation results show that the FPS fully-connected hybrid precoder achieves higher hardware efficiency with much fewer phase shifters than existing proposals. Furthermore, the group-connected mapping achieves a good balance between spectral efficiency and hardware complexity.", "title": "" }, { "docid": "a423435c1dc21c33b93a262fa175f5c5", "text": "The study investigated several teacher characteristics, with a focus on two measures of teaching experience, and their association with second grade student achievement gains in low performing, high poverty schools in a Mid-Atlantic state. Value-added models using three-level hierarchical linear modeling were used to analyze the data from 1,544 students, 154 teachers, and 53 schools. Results indicated that traditional teacher qualification characteristics such as licensing status and educational attainment were not statistically significant in producing student achievement gains. Total years of teaching experience was also not a significant predictor but a more specific measure, years of teaching experience at a particular grade level, was significantly associated with increased student reading achievement. We caution researchers and policymakers when interpreting results from studies that have used only a general measure of teacher experience as effects are possibly underestimated. Policy implications are discussed.", "title": "" }, { "docid": "e53de7a588d61f513a77573b7b27f514", "text": "In the past, there have been dozens of studies on automatic authorship classification, and many of these studies concluded that the writing style is one of the best indicators for original authorship. From among the hundreds of features which were developed, syntactic features were best able to reflect an author's writing style. However, due to the high computational complexity for extracting and computing syntactic features, only simple variations of basic syntactic features such as function words, POS(Part of Speech) tags, and rewrite rules were considered. In this paper, we propose a new feature set of k-embedded-edge subtree patterns that holds more syntactic information than previous feature sets. We also propose a novel approach to directly mining them from a given set of syntactic trees. 
We show that this approach reduces the computational burden of using complex syntactic structures as the feature set. Comprehensive experiments on real-world datasets demonstrate that our approach is reliable and more accurate than previous studies.", "title": "" }, { "docid": "34472a26bc08f7763a1e5f64b5205fe4", "text": "We propose Sentence Level Recurrent Topic Model (SLRTM), a new topic model that assumes the generation of each word within a sentence to depend on both the topic of the sentence and the whole history of its preceding words in the sentence. Different from conventional topic models that largely ignore the sequential order of words or their topic coherence, SLRTM gives full characterization to them by using a Recurrent Neural Networks (RNN) based framework. Experimental results have shown that SLRTM outperforms several strong baselines on various tasks. Furthermore, SLRTM can automatically generate sentences given a topic (i.e., topics to sentences), which is a key technology for real world applications such as personalized short text conversation.", "title": "" }, { "docid": "94e386866e9e934d53405921963e483a", "text": "Population pharmacokinetics is the study of pharmacokinetics at the population level, in which data from all individuals in a population are evaluated simultaneously using a nonlinear mixedeffects model. “Nonlinear” refers to the fact that the dependent variable (e.g., concentration) is nonlinearly related to the model parameters and independent variable(s). “Mixed-effects” refers to the parameterization: parameters that do not vary across individuals are referred to as “fixed effects,” parameters that vary across individuals are called “random effects.” There are five major aspects to developing a population pharmacokinetic model: (i) data, (ii) structural model, (iii) statistical model, (iv) covariate models, and (v) modeling software. Structural models describe the typical concentration time course within the population. Statistical models account for “unexplainable” (random) variability in concentration within the population (e.g., betweensubject, between-occasion, residual, etc.). Covariate models explain variability predicted by subject characteristics (covariates). Nonlinear mixed effects modeling software brings data and models together, implementing an estimation method for finding parameters for the structural, statistical, and covariate models that describe the data.1 A primary goal of most population pharmacokinetic modeling evaluations is finding population pharmacokinetic parameters and sources of variability in a population. Other goals include relating observed concentrations to administered doses through identification of predictive covariates in a target population. Population pharmacokinetics does not require “rich” data (many observations/subject), as required for analysis of single-subject data, nor is there a need for structured sampling time schedules. “Sparse” data (few observations/ subject), or a combination, can be used. We examine the fundamentals of five key aspects of population pharmacokinetic modeling together with methods for comparing and evaluating population pharmacokinetic models. 
DATA CONSIDERATIONS", "title": "" }, { "docid": "e0f56e20d509234a45b0a91f8d6b91cb", "text": "This paper describes recent research findings on resource sharing between trees and crops in the semiarid tropics and attempts to reconcile this information with current knowledge of the interactions between savannah trees and understorey vegetation by examining agroforestry systems from the perspective of succession. In general, productivity of natural vegetation under savannah trees increases as rainfall decreases, while the opposite occurs in agroforestry. One explanation is that in the savannah, the beneficial effects of microclimatic improvements (e.g. lower temperatures and evaporation losses) are greater in more xeric environments. Mature savannah trees have a high proportion of woody above-ground structure compared to foliage, so that the amount of water 'saved' (largely by reduction in soil evaporation) is greater than water 'lost' through transpiration by trees. By contrast, in agroforestry practices such as alley cropping where tree density is high, any beneficial effects of the trees on microclimate are negated by reductions in soil moisture due to increasing interception losses and tree transpiration. While investment in woody structure can improve the water economy beneath agroforestry trees, it inevitably reduces the growth rate of the trees and thus increases the time required for improved understorey productivity. Therefore, agroforesters prefer trees with more direct and immediate benefits to farmers. The greatest opportunity for simultaneous agroforestry practices is therefore to fill niches within the landscape where resources are currently under-utilised by crops. In this way, agroforestry can mimic the large scale patch dynamics and successional progression of a natural ecosystem.", "title": "" } ]
scidocsrr
df90844da0f4c9240cd051235a7ce7d4
Small-Signal Model-Based Control Strategy for Balancing Individual DC Capacitor Voltages in Cascade Multilevel Inverter-Based STATCOM
[ { "docid": "bd2041c4fa88cbdc73c68cf2586df849", "text": "This paper presents a three-phase transformerless cascade pulsewidth-modulation (PWM) static synchronous compensator (STATCOM) intended for installation on industrial and utility power distribution systems. It proposes a control algorithm that devotes itself not only to meeting the demand of reactive power but also to voltage balancing of multiple galvanically isolated and floating dc capacitors. The control algorithm based on a phase-shifted carrier modulation strategy is prominent in having no restriction on the cascade number. Experimental waveforms verify that a 200-V 10-kVA cascade PWM STATCOM with star configuration has the capability of inductive to capacitive (or capacitive to inductive) operation at the rated reactive power of 10 kVA within 20 ms while keeping the nine dc mean voltages controlled and balanced even during the transient state.", "title": "" }, { "docid": "264fef3aa71df1f661f2b94461f9634c", "text": "This paper presents a new control method for cascaded connected H-bridge converter-based static compensators. These converters have classically been commutated at fundamental line frequencies, but the evolution of power semiconductors has allowed the increase of switching frequencies and power ratings of these devices, permitting the use of pulsewidth modulation techniques. This paper mainly focuses on dc-bus voltage balancing problems and proposes a new control technique (individual voltage balancing strategy), which solves these balancing problems, maintaining the delivered reactive power equally distributed among all the H-bridges of the converter.", "title": "" } ]
[ { "docid": "aa60d0d73efdf21adcc95c6ad7a7dbc3", "text": "While hardware obfuscation has been used in industry for many years, very few scientific papers discuss layout-level obfuscation. The main aim of this paper is to start a discussion about hardware obfuscation in the academic community and point out open research problems. In particular, we introduce a very flexible layout-level obfuscation tool that we use as a case study for hardware obfuscation. In this obfuscation tool, a small custom-made obfuscell is used in conjunction with a standard cell to build a new obfuscated standard cell library called Obfusgates. This standard cell library can be used to synthesize any HDL code with standard synthesis tools, e.g. Synopsis Design Compiler. However, only obfuscating the functionality of individual gates is not enough. Not only the functionality of individual gates, but also their connectivity, leaks important important information about the design. In our tool we therefore designed the obfuscation gates to include a large number of \"dummy wires\". Due to these dummy wires, the connectivity of the gates in addition to their logic functionality is obfuscated. We argue that this aspect of obfuscation is of great importance in practice and that there are many interesting open research questions related to this.", "title": "" }, { "docid": "4fb93d604733837782085ecb19b49621", "text": "Text in many domains involves a significant amount of named entities. Predicting the entity names is often challenging for a language model as they appear less frequent on the training corpus. In this paper, we propose a novel and effective approach to building a discriminative language model which can learn the entity names by leveraging their entity type information. We also introduce two benchmark datasets based on recipes and Java programming codes, on which we evaluate the proposed model. Experimental results show that our model achieves 52.2% better perplexity in recipe generation and 22.06% on code generation than the stateof-the-art language models.", "title": "" }, { "docid": "ddb36948e400c970309bd0886bfcfccb", "text": "1 Introduction \"S pace\" and \"place\" are familiar words denoting common \"Sexperiences. We live in space. There is no space for an-< • / other building on the lot. The Great Plains look spacious. Place is security, space is freedom: we are attached to the one and long for the other. There is no place like home. What is home? It is the old homestead, the old neighborhood, home-town, or motherland. Geographers study places. Planners would like to evoke \"a sense of place.\" These are unexceptional ways of speaking. Space and place are basic components of the lived world; we take them for granted. When we think about them, however, they may assume unexpected meanings and raise questions we have not thought to ask. What is space? Let an episode in the life of the theologian Paul Tillich focus the question so that it bears on the meaning of space in experience. Tillich was born and brought up in a small town in eastern Germany before the turn of the century. The town was medieval in character. Surrounded by a wall and administered from a medieval town hall, it gave the impression of a small, protected, and self-contained world. To an imaginative child it felt narrow and restrictive. Every year, however young Tillich was able to escape with his family to the Baltic Sea. The flight to the limitless horizon and unrestricted space 3 4 Introduction of the seashore was a great event. 
Much later Tillich chose a place on the Atlantic Ocean for his days of retirement, a decision that undoubtedly owed much to those early experiences. As a boy Tillich was also able to escape from the narrowness of small-town life by making trips to Berlin. Visits to the big city curiously reminded him of the sea. Berlin, too, gave Tillich a feeling of openness, infinity, unrestricted space. 1 Experiences of this kind make us ponder anew the meaning of a word like \"space\" or \"spaciousness\" that we think we know well. What is a place? What gives a place its identity, its aura? These questions occurred to the physicists Niels Bohr and Werner Heisenberg when they visited Kronberg Castle in Denmark. Bohr said to Heisenberg: Isn't it strange how this castle changes as soon as one imagines that Hamlet lived here? As scientists we believe that a castle consists only of stones, and admire the way the …", "title": "" }, { "docid": "541440dc7497e14876642e837c2207c7", "text": "We propose several simple approaches to training deep neural networks on data with noisy labels. We introduce an extra noise layer into the network which adapts the network outputs to match the noisy label distribution. The parameters of this noise layer can be estimated as part of the training process and involve simple modifications to current training infrastructures for deep networks. We demonstrate the approaches on several datasets, including large scale experiments on the ImageNet classification benchmark, showing how additional noisy data can improve state-of-the-art recognition models. 1 Introduction In recent years, deep learning methods have shown impressive results on image classification tasks. However, this achievement is only possible because of large amount of labeled images. Labeling images by hand is a laborious task and takes a lot of time and money. An alternative approach is to generate labels automatically. This includes user tags from social web sites and keywords from image search engines. Considering the abundance of such noisy labels, it is important to find a way to utilize them in deep learning. Unfortunately, those labels are very noisy and unlikely to help training deep networks without additional tricks. Our goal is to study the effect label noise on deep networks, and explore simple ways of improvement. We focus on the robustness of deep networks instead of data cleaning methods, which are well studied and can be used together with robust models directly. Although many noise robust classifiers are proposed so far, there are not many works on training deep networks on noisy labeled data, especially on large scale datasets. Our contribution in this paper is a novel way of modifying deep learning models so they can be effectively trained on data with high level of label noise. The modification is simply done by adding a linear layer on top of the softmax layer, which makes it easy to implement. This additional layer changes the output from the network to give better match to the noisy labels. Also, it is possible to learn the noise distribution directly from the noisy data. Using real-world image classification tasks, we demonstrate that the model actually works very well in practice. We even show that random images without labels (complete noise) can improve the classification performance. 2 Related Work In any classification model, degradation of performance is inevitable when there is noise in training labels [13, 15]. 
A simple approach to handle noisy labels is a data preprocessing stage, where labels suspected to be incorrect are removed or corrected [1, 3]. However, a weakness of this approach is the difficulty of distinguishing informative hard samples from harmful mislabeled ones [6]. Instead, in this paper, we focus on models robust to the presence of label noise. The effect of label noise is well studied in common classifiers (e.g., SVMs, kNN, logistic regression), and their label noise robust variants have been proposed. See [5] for a comprehensive review. A more recent work [2] proposed a generic unbiased estimator for binary classification with noisy labels. They employ a surrogate cost function that can be expressed by a weighted sum of the original cost functions, and gave theoretical bounds on the performance. In this paper, we will also consider this idea and extend it to multiclass classification. A cost function similar to ours is proposed in [2] to make logistic regression robust to label noise. They also proposed a learning algorithm for noise parameters. However, we consider deep networks, a more powerful and complex classifier than logistic regression, and propose a different learning algorithm for noise parameters that is more suited for back-propagation training. Considering the recent success of deep learning [8, 17, 16], there are very few works about deep learning from noisy labels. In [11, 9], noise modeling is incorporated into a neural network in the same way as in our proposed model. However, only binary classification is considered in [11], and [9] assumed symmetric label noise (noise is independent of the true label). Therefore, there is only a single noise parameter, which can be tuned by cross-validation. In this paper, we consider multiclass classification and assume more realistic asymmetric label noise, which makes it impossible to use cross-validation to adjust noise parameters (there can be a million parameters). 3 Approach In this paper, we consider two approaches to make an existing classification model, which we call the base model, robust against noisy labels: bottom-up and top-down noise models. In the bottom-up model, we add an additional layer to the model that changes the label probabilities output by the base model so they better match the noisy labels. The top-down model, on the other hand, changes the given noisy labels before feeding them to the base model. Both models require a noise model for training, so we will give an easy way to estimate noise levels using clean data. Also, it is possible to learn the noise distribution from noisy data in the bottom-up model. Although only deep neural networks are used in our experiments, both approaches can be applied to any classification model with a cross entropy cost. 3.1 Bottom-up Noise Model We assume that label noise is random conditioned on the true class, but independent of the input x (see [10] for more detail about this type of noise). Based on this assumption, we add an additional layer to a deep network (see Figure 1) that changes its output so it better matches the noisy labels. The weights of this layer correspond to the probabilities of a certain class being mislabeled as another class. Because those probabilities are often unknown, we will show how to estimate them from additional clean data, or from the noisy data itself. Let D be the true data distribution generating correctly labeled samples (x, y∗), where x is an input vector and y∗ is the corresponding label.
However, we only observe noisy labeled samples (x, ỹ) that are generated from some noisy distribution D̃. We assume that the label noise is random conditioned on the true labels. Then, the noise distribution can be parameterized by a matrix Q = {q_ji}: q_ji := p(ỹ = j|y∗ = i). Q is a probability matrix because its elements are positive and each column sums to one. The probability of input x being labeled as j in D̃ is given by p(ỹ = j|x, θ) = ∑_i p(ỹ = j|y∗ = i) p(y∗ = i|x, θ) = ∑_i q_ji p(y∗ = i|x, θ), (1) where p(y∗ = i|x, θ) is the probabilistic output of the base model with parameters θ. If the true noise distribution is known, we can modify this for noisy labeled data. During training, Q will act as an adapter that transforms the model's output to better match the noisy labels.", "title": "" }, { "docid": "c460ac78bb06e7b5381506f54200a328", "text": "Efficient virtual machine (VM) management can dramatically reduce energy consumption in data centers. Existing VM management algorithms fall into two categories based on whether the VMs' resource demands are assumed to be static or dynamic. The former category fails to maximize the resource utilization as they cannot adapt to the dynamic nature of VMs' resource demands. Most approaches in the latter category are heuristical and lack theoretical performance guarantees. In this work, we formulate dynamic VM management as a large-scale Markov Decision Process (MDP) problem and derive an optimal solution. Our analysis of real-world data traces supports our choice of the modeling approach. However, solving the large-scale MDP problem suffers from the curse of dimensionality. Therefore, we further exploit the special structure of the problem and propose an approximate MDP-based dynamic VM management method, called MadVM. We prove the convergence of MadVM and analyze the bound of its approximation error. Moreover, MadVM can be implemented in a distributed system, which should suit the needs of real data centers. Extensive simulations based on two real-world workload traces show that MadVM achieves significant performance gains over two existing baseline approaches in power consumption, resource shortage and the number of VM migrations. Specifically, the more intensely the resource demands fluctuate, the more MadVM outperforms.", "title": "" }, { "docid": "ec11d0b10af5507c18d918edb42a9ab8", "text": "The traditional way of manual meter reading was not only a waste of human and material resources, but also very inconvenient. Especially with the emergence of a large number of high-rise residential buildings in recent years, this traditional way of water management has become obviously inefficient. A cable-based automatic meter reading system is very vulnerable and requires a heavy wiring workload during construction. In this paper, based on a study of existing water meters, a design scheme for a wireless smart water meter is introduced. In the system, the main communication method is based on ZigBee technology. This kind of design scheme is appropriate for modern water management and can improve efficiency.", "title": "" }, { "docid": "9aa91978651f42157b42a55b936a9bc0", "text": "Suicide, the eighth leading cause of death in the United States, accounts for more than 30 000 deaths per year.
The total number of suicides has changed little over time. For example, 27 596 U.S. suicides occurred in 1981, and 30 575 occurred in 1998. Between 1981 and 1998, the age-adjusted suicide rate decreased by 9.3%from 11.49 suicides per 100 000 persons to 10.42 suicides per 100 000 persons (www.cdc.gov/ncipc/wisqars). The suicide rate in men (18.7 suicides per 100 000 men in 1998) is more than four times that in women (4.4 suicides per 100 000 women in 1998). In females, suicide rates remain relatively constant beginning in the midteens. In males, suicide rates are stable from the late teenage years until the late 70s, when the rate increases substantially (to 41 suicides per 100 000 persons annually in men 75 to 84 years of age). White men have a twofold higher risk for suicide compared with African-American men (20.2 vs. 10.9 suicides, respectively, each year per 100 000 men). The risk in white women is double that of women in U.S. nonwhite ethnic/racial minority groups (4.9 vs. 2.4 per 100 000 women each year). In countries other than the United States, the most recently reported rates of suicide vary widely, ranging from less than 1 in 100 000 persons per year in Syria, Egypt, and Lebanon to more than 40 in 100 000 persons per year in many former Soviet republics (www.who.int/whosis). Over the past century, Hungary had the world's highest reported rate of suicide; the reason is unknown. Of note, the reported rates of suicide in first-generation immigrants to Australia tend to be more similar to rates in their native country than to rates in their country of current residence (1, 2); these figures indicate the influence of culture and ethnicity on suicide rates. Suicide is the third leading cause of death in persons 15 to 34 years of age. The U.S. suicide rate in all youths decreased by 18% from 1990 to 1998 (www.cdc.gov/ncipc/wisqars) despite a 3.6-fold increase from 1992 to 1995 in white men, a 4.7-fold increase in African-American men, and a 2.1-fold increase in African-American women. Worldwide, from 1950 to 1995 in persons of all ages, suicide rates increased by approximately 35% in men and only approximately 10% in women (www.who.int/whosis). The reasons for the differences in rates among age, sex, and ethnic groups and the change in rates since the 1950s are unknown. Suicide is generally a complication of a psychiatric disorder. More than 90% of suicide victims have a diagnosable psychiatric illness (3-7), and most persons who attempt suicide have a psychiatric disorder. The most common psychiatric conditions associated with suicide or serious suicide attempt are mood disorders (3-8). Investigators have proposed many models to explain or predict suicide (9). One such explanatory and predictive model is the stress-diathesis model (10). One stressor is almost invariably the onset or acute worsening of a psychiatric disorder, but other types of stressors, such as a psychosocial crisis, can also contribute. The diathesis for suicidal behavior includes a combination of factors, such as sex, religion, familial and genetic components, childhood experiences, psychosocial support system, availability of highly lethal suicide methods, and various other factors, including cholesterol level. In this review, I describe the neurobiological correlates of the stressors and the diathesis. Literature for this review came from searches of the MEDLINE database (1996 to the present) and from literature cited in review articles. 
The factors that determined inclusion in this review were superiority of research design (use of psychiatric controls, quality of psychometrics, diagnostic information on the study sample, and definition of suicidal behavior; prospective studies were favored), adequate representation of major points of view, and pivotal reviews of key subjects. What is Suicidal Behavior? Suicidal behavior refers to the most clear-cut and unambiguous act of completed suicide but also includes a heterogeneous spectrum of suicide attempts that range from highly lethal attempts (in which survival is the result of good fortune) to low-lethality attempts that occur in the context of a social crisis and contain a strong element of an appeal for help (11). Suicidal ideation without action is more common than suicidal behavior (11). In most countries, men have a higher reported rate of completed suicide, whereas women have a higher rate of attempted suicide (12). Men tend to use means that are more lethal, plan the suicide attempt more carefully, and avoid detection. In contrast, women tend to use less lethal means of suicide, which carry a higher chance of survival, and they more commonly express an appeal for help by conducting the attempt in a manner that favors discovery and rescue (13, 14). Thus, suicidal behavior has two dimensions (13). The first dimension is the degree of medical lethality or damage resulting from the suicide attempt. The second dimension relates to suicidal intent and measures the degree of preparation, the desire to die versus the desire to live, and the chances of discovery. Intent and lethality are correlated with each other and with biological abnormalities associated with suicide risk (13, 15, 16). The clinical profiles of suicide attempts and completions overlap (17). Suicide attempters who survive very lethal attempts, which are known as failed suicides, have the same clinical and psychosocial profile as suicide completers (11, 17). The study and prevention of failed suicides are probably most relevant to completed suicides. Somewhat related to suicide attempters are patients with serious medical illnesses who do not adhere to treatment regimensfor example, diabetic patients who do not take prescribed medications to control blood sugar levels or persons who engage in high-risk behaviors, such as sky diving or mountaineering. These groups warrant further study to determine whether they have psychopathology that overlaps with the psychopathology of suicide attempters. Intent and lethality are also related to the risk for future completed suicide (13). Subsequent suicide attempts may involve a greater degree of intent and lethality (18), and a previous suicide attempt is an important predictor of future suicide (19, 20) or suicide attempt (21). Careful inquiry about past suicide attempts is an essential part of risk assessment in psychiatric patients. Because more than two thirds of suicides occur with the first attempt, history of a suicide attempt is insufficient to predict most suicides; additional risk factors must be considered. Clinical Correlates of Suicidal Behavior Psychological autopsy studies involve review of all available medical records and interviews with family members and friends of the deceased. This method generates valid psychiatric diagnoses (22), and most studies have found that more than 90% of suicide victims had a psychiatric disorder at the time of suicide (3-6, 23). 
That percentage may be underestimated because accurate data depend on finding informants who knew the victim's state of mind in the weeks before death. Approximately 60% of all suicides occur in persons with a mood disorder (3, 4, 6, 7), and the rest occur in persons with various other psychiatric conditions, including schizophrenia; alcoholism (24); substance abuse (5, 25, 26); and personality disorders (27), such as borderline or antisocial personality disorder (23, 28-30). Lifetime mortality from suicide in discharged hospital populations is approximately 20% in persons with bipolar disorder (manic depression), 15% in persons with unipolar depression, 10% in persons with schizophrenia, 18% in persons with alcoholism, and 5% to 10% in persons with both borderline and antisocial personality disorders (29-33). These personality disorders are characterized by emotional liability, aggression, and impulsivity. The lifetime mortality due to suicide is lower in general psychiatric populations (34, 35). Although suicide is generally a complication of a psychiatric disorder, most persons with a psychiatric disorder never attempt suicide. Even the higher-risk groups, such as persons with unipolar or bipolar mood disorders, have a lifetime suicide attempt rate less than 50%. Thus, persons with these psychiatric disorders who die by suicide differ from those who never attempt suicide. To understand those differences, investigators have compared persons who have attempted suicide and those who have not by matching psychiatric diagnosis and comparable objective severity of illness (10). Suicide attempters differ in two important ways from nonattempters with the same psychiatric disorder. First, they experience more subjective depression and hopelessness and, in particular, have more severe suicidal ideation. They also perceive fewer reasons for living despite having the same objective severity of psychiatric illness and a similar number of adverse life events. One possible explanation for the greater sense of hopelessness and greater number of suicidal ideations is a predisposition for such feelings in the face of illness or other life stressor. The pressure of greater lifetime aggressivity and impulsivity suggests a second diathesis element in suicidal patients. These individuals not only are more aggressive toward others and their environment but are more impulsive in other ways that involve, for example, relationships or personal decisions about a job or purchases. A propensity for more severe suicidal ideation and a greater likelihood of acting on powerful feelings combine to place some patients at greater risk for suicide attempts than others. For clinicians, important indicators of such a diathesis are a history of a suicide attempt, which indicates the presence of a diathesis for suicidal behavior, and a family history of suicidal behavior. Suicidal behavior is known to be transmitted within families, ", "title": "" }, { "docid": "333b21433d17a9d271868e203c8a9481", "text": "The aim of stock prediction is to effectively predict future stock market trends (or stock prices), which can lead to increased profit. One major stock analysis method is the use of candlestick charts. However, candlestick chart analysis has usually been based on the utilization of numerical formulas. There has been no work taking advantage of an image processing technique to directly analyze the visual content of the candlestick charts for stock prediction. 
Therefore, in this study we apply the concept of image retrieval to extract seven different wavelet-based texture features from candlestick charts. Then, similar historical candlestick charts are retrieved based on different texture features related to the query chart, and the “future” stock movements of the retrieved charts are used for stock prediction. To assess the applicability of this approach to stock prediction, two datasets are used, containing 5-year and 10-year training and testing sets, collected from the Dow Jones Industrial Average Index (INDU) for the period between 1990 and 2009. Moreover, two datasets (2010 and 2011) are used to further validate the proposed approach. The experimental results show that visual content extraction and similarity matching of candlestick charts is a new and useful analytical method for stock prediction. More specifically, we found that the extracted feature vectors of 30, 90, and 120, the number of textual features extracted from the candlestick charts in the BMP format, are more suitable for predicting stock movements, while the 90 feature vector offers the best performance for predicting short- and medium-term stock movements. That is, using the 90 feature vector provides the lowest MAPE (3.031%) and Theil’s U (1.988%) rates in the twenty-year dataset, and the best MAPE (2.625%, 2.945%) and Theil’s U (1.622%, 1.972%) rates in the two validation datasets (2010 and 2011).", "title": "" }, { "docid": "9030887c9d95a80ac59e645f19b7e848", "text": "The notion of a neuron that responds selectively to the image of a particular complex object has been controversial ever since Gross and his colleagues reported neurons in the temporal cortex of monkeys that were selective for the sight of a monkey's hand (Gross, Rocha-Miranda, & Bender, 1972). Since that time, evidence has mounted for neurons in the temporal lobe that respond selectively to faces. The present paper presents a critical analysis of the evidence for face neurons and discusses the implications of these neurons for models of object recognition. The paper also presents some possible reasons for the evolution of face neurons and suggests some analogies with the development of language in humans.", "title": "" }, { "docid": "2afbf85020a40b7e1476d19419e7a2bd", "text": "Coronary artery disease is the leading global cause of mortality. Long recognized to be heritable, recent advances have started to unravel the genetic architecture of the disease. Common variant association studies have linked approximately 60 genetic loci to coronary risk. Large-scale gene sequencing efforts and functional studies have facilitated a better understanding of causal risk factors, elucidated underlying biology and informed the development of new therapeutics. Moving forwards, genetic testing could enable precision medicine approaches by identifying subgroups of patients at increased risk of coronary artery disease or those with a specific driving pathophysiology in whom a therapeutic or preventive approach would be most useful.", "title": "" }, { "docid": "ff8d55e7b997a9888fafade0366c3ce2", "text": "OBJECTIVE\nTumors within Meckel's cave are challenging and often require complex approaches. 
In this report, an expanded endoscopic endonasal approach is reported as a substitute for or complement to other surgical options for the treatment of various tumors within this region.\n\n\nMETHODS\nA database of more than 900 patients who underwent the expanded endoscopic endonasal approach at the University of Pittsburgh Medical Center from 1998 to March of 2008 were reviewed. From these, only patients who had an endoscopic endonasal approach to Meckel's cave were considered. The technique uses the maxillary sinus and the pterygopalatine fossa as part of the working corridor. Infraorbital/V2 and the vidian neurovascular bundles are used as surgical landmarks. The quadrangular space is opened, which is bound by the internal carotid artery medially and inferiorly, V2 laterally, and the abducens nerve superiorly. This offers direct access to the anteroinferomedial segment of Meckel's cave, which can be extended through the petrous bone to reach the cerebellopontine angle.\n\n\nRESULTS\nForty patients underwent an endoscopic endonasal approach to Meckel's cave. The most frequent abnormalities encountered were adenoid cystic carcinoma, meningioma, and schwannomas. Meckel's cave and surrounding structures were accessed adequately in all patients. Five patients developed a new facial numbness in at least 1 segment of the trigeminal nerve, but the deficit was permanent in only 2. Two patients had a transient VIth cranial nerve palsy. Nine patients (30%) showed improvement of preoperative deficits on Cranial Nerves III to VI.\n\n\nCONCLUSION\nIn selected patients, the expanded endoscopic endonasal approach to the quadrangular space provides adequate exposure of Meckel's cave and its vicinity, with low morbidity.", "title": "" }, { "docid": "5213aa65c5a291f0839046607dcf5f6c", "text": "The distribution and mobility of chromium in the soils and sludge surrounding a tannery waste dumping area was investigated to evaluate its vertical and lateral movement of operational speciation which was determined in six steps to fractionate the material in the soil and sludge into (i) water soluble, (ii) exchangeable, (iii) carbonate bound, (iv) reducible, (v) oxidizable, and (vi) residual phases. The present study shows that about 63.7% of total chromium is mobilisable, and 36.3% of total chromium is nonbioavailable in soil, whereas about 30.2% of total chromium is mobilisable, and 69.8% of total chromium is non-bioavailable in sludge. In contaminated sites the concentration of chromium was found to be higher in the reducible phase in soils (31.3%) and oxidisable phases in sludge (56.3%) which act as the scavenger of chromium in polluted soils. These results also indicate that iron and manganese rich soil can hold chromium that will be bioavailable to plants and biota. Thus, results of this study can indicate the status of bioavailable of chromium in this area, using sequential extraction technique. So a suitable and proper management of handling tannery sludge in the said area will be urgently needed to the surrounding environment as well as ecosystems.", "title": "" }, { "docid": "8a83060c0a454a5f7a13114846bbe9c5", "text": "Evolutionary Algorithms (EAs) are a fascinating branch of computational intelligence with much potential for use in many application areas. The fundamental principle of EAs is to use ideas inspired by the biological mechanisms observed in nature, such as selection and genetic changes, to find the best solution for a given optimization problem. 
Generally, EAs use iterative processes, by growing a population of solutions selected in a guided random search and using parallel processing, in order to achieve a desired result. Such population based approaches, for example particle swarm and ant colony optimization (inspired from biology), are among the most popular metaheuristic methods being used in machine learning, along with others such as the simulated annealing (inspired from thermodynamics). In this paper, we provide a short survey on the state-of-the-art of EAs, beginning with some background on the theory of evolution and contrasting the original ideas of Darwin and Lamarck; we then continue with a discussion on the analogy between biological and computational sciences, and briefly describe some fundamentals of EAs, including the Genetic Algorithms, Genetic Programming, Evolution Strategies, Swarm Intelligence Algorithms (i.e., Particle Swarm Optimization, Ant Colony Optimization, Bacteria Foraging Algorithms, Bees Algorithm, Invasive Weed Optimization), Memetic Search, Differential Evolution Search, Artificial Immune Systems, Gravitational Search Algorithm, Intelligent Water Drops Algorithm. We conclude with a short description of the usefulness of EAs for Knowledge Discovery and Data Mining tasks and present some open problems and challenges to further stimulate research.", "title": "" }, { "docid": "55b2465349e4965a35b4c894c5545afb", "text": "Context-awareness is a key concept in ubiquitous computing. But to avoid developing dedicated context-awareness sub-systems for specific application areas there is a need for more generic programming frameworks. Such frameworks can help the programmer to develop and deploy context-aware applications faster. This paper describes the Java Context-Awareness Framework – JCAF, which is a Java-based context-awareness infrastructure and programming API for creating context-aware computer applications. The paper presents the design principles behind JCAF, its runtime architecture, and its programming API. The paper presents some applications of using JCAF in three different applications and discusses lessons learned from using JCAF.", "title": "" }, { "docid": "75a1832a5fdd9c48f565eb17e8477b4b", "text": "We introduce a new interactive system: a game that is fun and can be used to create valuable output. When people play the game they help determine the contents of images by providing meaningful labels for them. If the game is played as much as popular online games, we estimate that most images on the Web can be labeled in a few months. Having proper labels associated with each image on the Web would allow for more accurate image search, improve the accessibility of sites (by providing descriptions of images to visually impaired individuals), and help users block inappropriate images. Our system makes a significant contribution because of its valuable output and because of the way it addresses the image-labeling problem. Rather than using computer vision techniques, which don't work well enough, we encourage people to do the work by taking advantage of their desire to be entertained.", "title": "" }, { "docid": "87dd019430e4345026b8de22f696c6e2", "text": "Although consumer research began focusing on emotional response to advertising during the 1980s (Goodstein, Edell, and Chapman Moore. 1990; Burke and Edell, 1989; Aaker, Stayman, and Vezina, 1988; Holbrook and Batra, 1988), perhaps one of the most practical measures of affective response has only recently emerged. 
Part of the difficulty in developing measures of emotional response stems from the complexity of emotion itself (Plummer and Leckenby, 1985). Researchers have explored several different measurement formats including: verbal self-reports (adjective checklists), physiological techniques, photodecks, and dial-turning instruments.", "title": "" }, { "docid": "5ffe358766049379b0910ac1181100af", "text": "A novel one-section bandstop filter (BSF), which possesses the characteristics of compact size, wide bandwidth, and low insertion loss is proposed and fabricated. This bandstop filter was constructed by using single quarter-wavelength resonator with one section of anti-coupled lines with short circuits at one end. The attenuation-pole characteristics of this type of bandstop filters are investigated through TEM transmission-line model. Design procedures are clearly presented. The 3-dB bandwidth of the first stopband and insertion loss of the first passband of this BSF is from 2.3 GHz to 9.5 GHz and below 0.3 dB, respectively. There is good agreement between the simulated and experimental results.", "title": "" }, { "docid": "ce2d4247b1072b3c593e73fe9d67cf63", "text": "OBJECTIVE\nTo improve walking and other aspects of physical function with a progressive 6-month exercise program in patients with multiple sclerosis (MS).\n\n\nMETHODS\nMS patients with mild to moderate disability (Expanded Disability Status Scale scores 1.0 to 5.5) were randomly assigned to an exercise or control group. The intervention consisted of strength and aerobic training initiated during 3-week inpatient rehabilitation and continued for 23 weeks at home. The groups were evaluated at baseline and at 6 months. The primary outcome was walking speed, measured by 7.62 m and 500 m walk tests. Secondary outcomes included lower extremity strength, upper extremity endurance and dexterity, peak oxygen uptake, and static balance. An intention-to-treat analysis was used.\n\n\nRESULTS\nNinety-one (96%) of the 95 patients entering the study completed it. Change between groups was significant in the 7.62 m (p = 0.04) and 500 m walk tests (p = 0.01). In the 7.62 m walk test, 22% of the exercising patients showed clinically meaningful improvements. The exercise group also showed increased upper extremity endurance as compared to controls. No other noteworthy exercise-induced changes were observed. Exercise adherence varied considerably among the exercisers.\n\n\nCONCLUSIONS\nWalking speed improved in this randomized study. The results confirm that exercise is safe for multiple sclerosis patients and should be recommended for those with mild to moderate disability.", "title": "" }, { "docid": "d57533a410ea82ed6355eddf4eb72874", "text": "The aim of this paper is twofold: (i) to introduce the framework of update semantics and to explain what kind of phenomena may successfully be analysed in it; (ii) to give a detailed analysis of one such phenomenon: default reasoning.", "title": "" }, { "docid": "e46c6e50325d2603a5ae31080f7bfeb5", "text": "End-to-end learning machines enable a direct mapping from the raw input data to the desired outputs, eliminating the need for handcrafted features. Despite less engineering effort than the hand-crafted counterparts, these learning machines achieve extremely good results for many computer vision and medical image analysis tasks. Two dominant classes of end-to-end learning machines are massive-training artificial neural networks (MTANNs) and convolutional neural networks (CNNs). 
Although MTANNs have been actively used for a number of medical image analysis tasks over the past two decades, CNNs have recently gained popularity in the field of medical imaging. In this study, we have compared these two successful learning machines both experimentally and theoretically. For that purpose, we considered two well-studied topics in the field of medical image analysis: detection of lung nodules and distinction between benign and malignant lung nodules in computed tomography (CT). For a thorough analysis, we used 2 optimized MTANN architectures and 4 distinct CNN architectures that have different depths. Our experiments demonstrated that the performance of MTANNs was substantially higher than that of CNN when using only limited training data. With a larger training dataset, the performance gap became less evident even though the margin was still significant. Specifically, for nodule detection, MTANNs generated 2.7 false positives per patient at 100% sensitivity, which was significantly (p<.05) lower than the best performing CNN model with 22.7 false positives per patient at the same level of sensitivity. For nodule classification, MTANNs yielded an area under the receiver-operating-characteristic curve (AUC) of 0.8806 (95% CI: 0.8389 to 0.9223), which was significantly (p<.05) greater than the best performing CNN model with an AUC of 0.7755 (95% CI: 0.7120 to 0.8270). Thus, with limited training data, MTANNs would be a suitable end-to-end machine-learning model for detection and classification of focal lesions that do not require high-level semantic features.", "title": "" } ]
scidocsrr
0854889eec567aae60cd300f94181f11
On the Synergy of Network Science and Artificial Intelligence
[ { "docid": "ba2029c92fc1e9277e38edff0072ac82", "text": "Estimation, recognition, and near-future prediction of 3D trajectories based on their two dimensional projections available from one camera source is an exceptionally difficult problem due to uncertainty in the trajectories and environment, high dimensionality of the specific trajectory states, lack of enough labeled data and so on. In this article, we propose a solution to solve this problem based on a novel deep learning model dubbed disjunctive factored four-way conditional restricted Boltzmann machine (DFFW-CRBM). Our method improves state-of-the-art deep learning techniques for high dimensional time-series modeling by introducing a novel tensor factorization capable of driving forth order Boltzmann machines to considerably lower energy levels, at no computational costs. DFFW-CRBMs are capable of accurately estimating, recognizing, and performing near-future prediction of three-dimensional trajectories from their 2D projections while requiring limited amount of labeled data. We evaluate our method on both simulated and real-world data, showing its effectiveness in predicting and classifying complex ball trajectories and human activities.", "title": "" } ]
[ { "docid": "13d9b338b83a5fcf75f74607bf7428a7", "text": "We extend the neural Turing machine (NTM) model into a dynamic neural Turing machine (D-NTM) by introducing trainable address vectors. This addressing scheme maintains for each memory cell two separate vectors, content and address vectors. This allows the D-NTM to learn a wide variety of location-based addressing strategies, including both linear and nonlinear ones. We implement the D-NTM with both continuous and discrete read and write mechanisms. We investigate the mechanisms and effects of learning to read and write into a memory through experiments on Facebook bAbI tasks using both a feedforward and GRU controller. We provide extensive analysis of our model and compare different variations of neural Turing machines on this task. We show that our model outperforms long short-term memory and NTM variants. We provide further experimental results on the sequential MNIST, Stanford Natural Language Inference, associative recall, and copy tasks.", "title": "" }, { "docid": "2a914d703108f165aecbb7ad1a2dde2c", "text": "The general objective of our work is to investigate the area and power-delay performances of low-voltage full adder cells in different CMOS logic styles for the predominating tree structured arithmetic circuits. A new hybrid style full adder circuit is also presented. The sum and carry generation circuits of the proposed full adder are designed with hybrid logic styles. To operate at ultra-low supply voltage, the pass logic circuit that cogenerates the intermediate XOR and XNOR outputs has been improved to overcome the switching delay problem. As full adders are frequently employed in a tree structured configuration for high-performance arithmetic circuits, a cascaded simulation structure is introduced to evaluate the full adders in a realistic application environment. A systematic and elegant procedure to scale the transistor for minimal power-delay product is proposed. The circuits being studied are optimized for energy efficiency at 0.18-/spl mu/m CMOS process technology. With the proposed simulation environment, it is shown that some survival cells in stand alone operation at low voltage may fail when cascaded in a larger circuit, either due to the lack of drivability or unsatisfactory speed of operation. The proposed hybrid full adder exhibits not only the full swing logic and balanced outputs but also strong output drivability. The increase in the transistor count of its complementary CMOS output stage is compensated by its area efficient layout. Therefore, it remains one of the best contenders for designing large tree structured arithmetic circuits with reduced energy consumption while keeping the increase in area to a minimum.", "title": "" }, { "docid": "2f44362f2c294580240d99a8cc402d1f", "text": "Research problem: The study explored think-aloud methods usage within usability testing by examining the following questions: How, and why is the think-aloud method used? What is the gap between theory and practice? Where does this gap occur? Literature review: The review informed the survey design. Usability research based on field studies and empirical tests indicates that variations in think-aloud procedures may reduce test reliability. The guidance offered on think-aloud procedures within a number of handbooks on usability testing is also mixed. This indicates potential variability in practice, but how much and for what reasons is unknown. 
Methodology: An exploratory, qualitative survey was conducted using a web-based questionnaire (during November-December 2010). Usability evaluators were sought via emails (sent to personal contacts, usability companies, conference attendees, and special interest groups) to be cascaded to the international community. As a result we received 207 full responses. Descriptive statistics and thematic coding were used to analyze the data sets. Results: Respondents found the concurrent technique particularly suited usability testing as it was fast, easy for users to relate to, and requires limited resources. Divergent practice was reported in terms of think-aloud instructions, practice, interventions, and the use of demonstrations. A range of interventions was used to better understand participant actions and verbalizations, however, respondents were aware of potential threats to test reliability, and took steps to reduce this impact. Implications: The reliability considerations underpinning the classic think-aloud approach are pragmatically balanced against the need to capture useful data in the time available. A limitation of the study is the focus on the concurrent method; other methods were explored but the differences in application were not considered. Future work is needed to explore the impact of divergent use of think-aloud instructions, practice tasks, and the use of demonstrations on test reliability.", "title": "" }, { "docid": "b8fa50df3c76c2192c67cda7ae4d05f5", "text": "Task parallelism has increasingly become a trend with programming models such as OpenMP 3.0, Cilk, Java Concurrency, X10, Chapel and Habanero-Java (HJ) to address the requirements of multicore programmers. While task parallelism increases productivity by allowing the programmer to express multiple levels of parallelism, it can also lead to performance degradation due to increased overheads. In this article, we introduce a transformation framework for optimizing task-parallel programs with a focus on task creation and task termination operations. These operations can appear explicitly in constructs such as async, finish in X10 and HJ, task, taskwait in OpenMP 3.0, and spawn, sync in Cilk, or implicitly in composite code statements such as foreach and ateach loops in X10, forall and foreach loops in HJ, and parallel loop in OpenMP.\n Our framework includes a definition of data dependence in task-parallel programs, a happens-before analysis algorithm, and a range of program transformations for optimizing task parallelism. Broadly, our transformations cover three different but interrelated optimizations: (1) finish-elimination, (2) forall-coarsening, and (3) loop-chunking. Finish-elimination removes redundant task termination operations, forall-coarsening replaces expensive task creation and termination operations with more efficient synchronization operations, and loop-chunking extracts useful parallelism from ideal parallelism. All three optimizations are specified in an iterative transformation framework that applies a sequence of relevant transformations until a fixed point is reached. Further, we discuss the impact of exception semantics on the specified transformations, and extend them to handle task-parallel programs with precise exception semantics. Experimental results were obtained for a collection of task-parallel benchmarks on three multicore platforms: a dual-socket 128-thread (16-core) Niagara T2 system, a quad-socket 16-core Intel Xeon SMP, and a quad-socket 32-core Power7 SMP. 
We have observed that the proposed optimizations interact with each other in a synergistic way, and result in an overall geometric average performance improvement between 6.28× and 10.30×, measured across all three platforms for the benchmarks studied.", "title": "" }, { "docid": "42eca5d49ef3e27c76b65f8feccd8499", "text": "Convolutional Neural Networks (CNNs) have shown to yield very strong results in several Computer Vision tasks. Their application to language has received much less attention, and it has mainly focused on static classification tasks, such as sentence classification for Sentiment Analysis or relation extraction. In this work, we study the application of CNNs to language modeling, a dynamic, sequential prediction task that needs models to capture local as well as long-range dependency information. Our contribution is twofold. First, we show that CNNs achieve 11-26% better absolute performance than feed-forward neural language models, demonstrating their potential for language representation even in sequential tasks. As for recurrent models, our model outperforms RNNs but is below state of the art LSTM models. Second, we gain some understanding of the behavior of the model, showing that CNNs in language act as feature detectors at a high level of abstraction, like in Computer Vision, and that the model can profitably use information from as far as 16 words before the target.", "title": "" }, { "docid": "5c4f313482543223306be014cff0cc2e", "text": "Transformer inrush currents are high-magnitude, harmonic rich currents generated when transformer cores are driven into saturation during energization. These currents have undesirable effects, including potential damage or loss-of-life of transformer, protective relay miss operation, and reduced power quality on the system. This paper explores the theoretical explanations of inrush currents and explores different factors that have influences on the shape and magnitude of those inrush currents. PSCAD/EMTDC is used to investigate inrush currents phenomena by modeling a practical power system circuit for single phase transformer", "title": "" }, { "docid": "54d54094acea1900e183144d32b1910f", "text": "A large body of work has been devoted to address corporate-scale privacy concerns related to social networks. Most of this work focuses on how to share social networks owned by organizations without revealing the identities or the sensitive relationships of the users involved. Not much attention has been given to the privacy risk of users posed by their daily information-sharing activities.\n In this article, we approach the privacy issues raised in online social networks from the individual users’ viewpoint: we propose a framework to compute the privacy score of a user. This score indicates the user’s potential risk caused by his or her participation in the network. Our definition of privacy score satisfies the following intuitive properties: the more sensitive information a user discloses, the higher his or her privacy risk. Also, the more visible the disclosed information becomes in the network, the higher the privacy risk. We develop mathematical models to estimate both sensitivity and visibility of the information. 
We apply our methods to synthetic and real-world data and demonstrate their efficacy and practical utility.", "title": "" }, { "docid": "35225f6ca92daf5b17bdd2a5395b83ca", "text": "A neural network with a single layer of hidden units of gaussian type is proved to be a universal approximator for real-valued maps defined on convex, compact sets of Rn.", "title": "" }, { "docid": "eb836852ea301e07dcc6c022f89fd8a8", "text": "This paper proposes a practical circuit-based model for Li-ion cells, which can be directly connected to a model of a complete electric vehicle (EV) system. The goal of this paper is to provide EV system designers with a tool in simulation programs such as Matlab/Simulink to model the behaviour of Li-ion cells under various operating conditions in EV or other applications. The current direction, state of charge (SoC), temperature and C-rate dependency are represented by empirical equations obtained from measurements on LiFePO4 cells. Tradeoffs between model complexity and accuracy have been made based on practical considerations in EV applications. Depending on the required accuracy and operating conditions, the EV system designer can choose the influences to be included in the system simulation.", "title": "" }, { "docid": "a4922f728f50fa06a63b826ed84c9f24", "text": "Simulations are attractive environments for training agents as they provide an abundant source of data and alleviate certain safety concerns during the training process. But the behaviours developed by agents in simulation are often specific to the characteristics of the simulator. Due to modeling error, strategies that are successful in simulation may not transfer to their real world counterparts. In this paper, we demonstrate a simple method to bridge this “reality gap”. By randomizing the dynamics of the simulator during training, we are able to develop policies that are capable of adapting to very different dynamics, including ones that differ significantly from the dynamics on which the policies were trained. This adaptivity enables the policies to generalize to the dynamics of the real world without any training on the physical system. Our approach is demonstrated on an object pushing task using a robotic arm. Despite being trained exclusively in simulation, our policies are able to maintain a similar level of performance when deployed on a real robot, reliably moving an object to a desired location from random initial configurations. We explore the impact of various design decisions and show that the resulting policies are robust to significant calibration error.", "title": "" }, { "docid": "5bef975924d427c3ae186d92a93d4f74", "text": "The Voronoi diagram of a set of sites partitions space into regions, one per site; the region for a site s consists of all points closer to s than to any other site. The dual of the Voronoi diagram, the Delaunay triangulation, is the unique triangulation such that the circumsphere of every simplex contains no sites in its interior. Voronoi diagrams and Delaunay triangulations have been rediscovered or applied in many areas of mathematics and the natural sciences; they are central topics in computational geometry, with hundreds of papers discussing algorithms and extensions. Section 27.1 discusses the definition and basic properties in the usual case of point sites in R with the Euclidean metric, while Section 27.2 gives basic algorithms. Some of the many extensions obtained by varying metric, sites, environment, and constraints are discussed in Section 27.3. 
Section 27.4 finishes with some interesting and nonobvious structural properties of Voronoi diagrams and Delaunay triangulations.", "title": "" }, { "docid": "e78d82c45dcb5297244f98ef0d26c10e", "text": "The current study examines changes over time in a commonly used measure of dispositional empathy. A cross-temporal meta-analysis was conducted on 72 samples of American college students who completed at least one of the four subscales (Empathic Concern, Perspective Taking, Fantasy, and Personal Distress) of the Interpersonal Reactivity Index (IRI) between 1979 and 2009 (total N = 13,737). Overall, the authors found changes in the most prototypically empathic subscales of the IRI: Empathic Concern was most sharply dropping, followed by Perspective Taking. The IRI Fantasy and Personal Distress subscales exhibited no changes over time. Additional analyses found that the declines in Perspective Taking and Empathic Concern are relatively recent phenomena and are most pronounced in samples from after 2000.", "title": "" }, { "docid": "c75b7ad0faf841b7ec4ae7f91d236259", "text": "People have been shown to project lifelike attributes onto robots and to display behavior indicative of empathy in human-robot interaction. Our work explores the role of empathy by examining how humans respond to a simple robotic object when asked to strike it. We measure the effects of lifelike movement and stories on people's hesitation to strike the robot, and we evaluate the relationship between hesitation and people's trait empathy. Our results show that people with a certain type of high trait empathy (empathic concern) hesitate to strike the robots. We also find that high empathic concern and hesitation are more strongly related for robots with stories. This suggests that high trait empathy increases people's hesitation to strike a robot, and that stories may positively influence their empathic responses.", "title": "" }, { "docid": "fd0c32b1b4e52f397d0adee5de7e381c", "text": "Context. Electroencephalography (EEG) is a complex signal and can require several years of training, as well as advanced signal processing and feature extraction methodologies to be correctly interpreted. Recently, deep learning (DL) has shown great promise in helping make sense of EEG signals due to its capacity to learn good feature representations from raw data. Whether DL truly presents advantages as compared to more traditional EEG processing approaches, however, remains an open question. Objective. In this work, we review 156 papers that apply DL to EEG, published between January 2010 and July 2018, and spanning different application domains such as epilepsy, sleep, braincomputer interfacing, and cognitive and affective monitoring. We extract trends and highlight interesting approaches from this large body of literature in order to inform future research and formulate recommendations. Methods. Major databases spanning the fields of science and engineering were queried to identify relevant studies published in scientific journals, conferences, and electronic preprint repositories. Various data items were extracted for each study pertaining to 1) the data, 2) the preprocessing methodology, 3) the DL design choices, 4) the results, and 5) the reproducibility of the experiments. These items were then analyzed one by one to uncover trends. Results. 
Our analysis reveals that the amount of EEG data used across studies varies from less than ten minutes to thousands of hours, while the number of samples seen during training by a network varies from a few dozens to several millions, depending on how epochs are extracted. Interestingly, we saw that more than half the studies used publicly available data and that there has also been a clear shift from intra-subject to inter-subject approaches over the last few years. About 40% of the studies used convolutional neural networks (CNNs), while 14% used recurrent neural networks (RNNs), most often with a total of 3 to 10 layers. Moreover, almost one-half of the studies trained their models on raw or preprocessed EEG time series. Finally, the median gain in accuracy of DL approaches over traditional baselines was 5.4% across all relevant studies. More importantly, however, we noticed studies often suffer from poor reproducibility: a majority of papers would be hard or impossible to reproduce given the unavailability of their data and code. ∗The first two authors contributed equally to this work. Significance. To help the community progress and share work more effectively, we provide a list of recommendations for future studies. We also make our summary table of DL and EEG papers available and invite authors of published work to contribute to it directly.", "title": "" }, { "docid": "3910a3317ea9ff4ea6c621e562b1accc", "text": "Compaction of agricultural soils is a concern for many agricultural soil scientists and farmers since soil compaction, due to heavy field traffic, has resulted in yield reduction of most agronomic crops throughout the world. Soil compaction is a physical form of soil degradation that alters soil structure, limits water and air infiltration, and reduces root penetration in the soil. Consequences of soil compaction are still underestimated. A complete understanding of processes involved in soil compaction is necessary to meet the future global challenge of food security. We review here the advances in understanding, quantification, and prediction of the effects of soil compaction. We found the following major points: (1) When a soil is exposed to a vehicular traffic load, soil water contents, soil texture and structure, and soil organic matter are the three main factors which determine the degree of compactness in that soil. (2) Soil compaction has direct effects on soil physical properties such as bulk density, strength, and porosity; therefore, these parameters can be used to quantify the soil compactness. (3) Modified soil physical properties due to soil compaction can alter elements mobility and change nitrogen and carbon cycles in favour of more emissions of greenhouse gases under wet conditions. (4) Severe soil compaction induces root deformation, stunted shoot growth, late germination, low germination rate, and high mortality rate. (5) Soil compaction decreases soil biodiversity by decreasing microbial biomass, enzymatic activity, soil fauna, and ground flora. (6) Boussinesq equations and finite element method models, that predict the effects of the soil compaction, are restricted to elastic domain and do not consider existence of preferential paths of stress propagation and localization of deformation in compacted soils. 
(7) Recent advances in physics of granular media and soil mechanics relevant to soil compaction should be used to progress in modelling soil compaction.", "title": "" }, { "docid": "5116079b69aeb1858177429fabd10f80", "text": "Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations at present lack geometric invariance, which limits their robustness for tasks such as classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (or MOP-CNN for short). This approach works by extracting CNN activations for local patches at multiple scales, followed by orderless VLAD pooling of these activations at each scale level and concatenating the result. This feature representation decisively outperforms global CNN activations and achieves state-of-the-art performance for scene classification on such challenging benchmarks as SUN397, MIT Indoor Scenes, and ILSVRC2012, as well as for instance-level retrieval on the Holidays dataset.", "title": "" }, { "docid": "05cd3cd38b699c0dea7fd2ba771ed770", "text": "Background: Electric vehicles have been identified as being a key technology in reducing future emissions and energy consumption in the mobility sector. The focus of this article is to review and assess the energy efficiency and the environmental impact of battery electric cars (BEV), which is the only technical alternative on the market available today to vehicles with internal combustion engine (ICEV). Electricity onboard a car can be provided either by a battery or a fuel cell (FCV). The technical structure of BEV is described, clarifying that it is relatively simple compared to ICEV. Following that, ICEV can be ‘e-converted’ by experienced personnel. Such an e-conversion project generated reality-close data reported here. Results: Practicability of today's BEV is discussed, revealing that particularly small-size BEVs are useful. This article reports on an e-conversion of a used Smart. Measurements on this car, prior and after conversion, confirmed a fourfold energy efficiency advantage of BEV over ICEV, as supposed in literature. Preliminary energy efficiency data of FCV are reviewed being only slightly lower compared to BEV. However, well-to-wheel efficiency suffers from 47% to 63% energy loss during hydrogen production. With respect to energy efficiency, BEVs are found to represent the only alternative to ICEV. This, however, is only true if the electricity is provided by very efficient power plants or better by renewable energy production. Literature data on energy consumption and greenhouse gas (GHG) emission by ICEV compared to BEV suffer from a 25% underestimation of ICEV-standardized driving cycle numbers in relation to street conditions so far. Literature data available for BEV, on the other hand, were mostly modeled and based on relatively heavy BEV as well as driving conditions, which do not represent the most useful field of BEV operation. Literature data have been compared with measurements based on the converted Smart, revealing a distinct GHG emissions advantage due to the German electricity net conditions, which can be considerably extended by charging electricity from renewable sources. Life cycle carbon footprint of BEV is reviewed based on literature data with emphasis on lithium-ion batteries. 
Battery life cycle assessment (LCA) data available in literature, so far, vary significantly by a factor of up to 5.6 depending on LCA methodology approach, but also with respect to the battery chemistry. Carbon footprint over 100,000 km calculated for the converted 10-year-old Smart exhibits a possible reduction of over 80% in comparison to the Smart with internal combustion engine. Conclusion: Findings of the article confirm that the electric car can serve as a suitable instrument towards a much more sustainable future in mobility. This is particularly true for small-size BEV, which is underrepresented in LCA literature data so far. While CO2-LCA of BEV seems to be relatively well known apart from the battery, life cycle impact of BEV in categories other than the global warming potential reveals a complex and still incomplete picture. Since technology of the electric car is of limited complexity with the exception of the battery, used cars can also be converted from combustion to electric. This way, it seems possible to reduce CO2-equivalent emissions by 80% (factor 5 efficiency improvement).", "title": "" }, { "docid": "7efc7056f11b61eb9c0d35c57e81a7f7", "text": "Action Language is a specification language for reactive software systems. In this paper, we present the syntax and the semantics of the Action Language and we also present an infinite-state symbolic model checker called Action Language Verifier (ALV) that verifies (or falsifies) CTL properties of Action Language specifications. ALV is built on top of the Composite Symbolic Library, which is a symbolic manipulator that combines multiple symbolic representations. ALV is a polymorphic model checker that can use different combinations of the symbolic representations implemented in the Composite Symbolic Library. We describe the heuristics implemented in ALV for computing fixpoints using the composite symbolic representation. Since Action Language specifications allow declaration of unbounded integer variables and parameterized integer constants, verification of Action Language specifications is undecidable. ALV uses several heuristics to conservatively approximate the fixpoint computations. ALV also implements an automated abstraction technique that enables parameterized verification of a concurrent system with an arbitrary number of identical processes.", "title": "" }, { "docid": "54b4726650b3afcddafb120ff99c9951", "text": "Online harassment has been a problem to a greater or lesser extent since the early days of the internet. Previous work has applied anti-spam techniques like machine-learning based text classification (Reynolds, 2011) to detecting harassing messages. However, existing public datasets are limited in size, with labels of varying quality. 
The #HackHarassment initiative (an alliance of tech companies and NGOs devoted to fighting bullying on the internet) has begun to address this issue by creating a new dataset superior to its predecessors in terms of both size and quality. As we (#HackHarassment) complete further rounds of labelling, later iterations of this dataset will increase the available samples by at least an order of magnitude, enabling corresponding improvements in the quality of machine learning models for harassment detection. In this paper, we introduce the first models built on the #HackHarassment dataset v1.0 (a new open dataset, which we are delighted to share with any interested researchers) as a benchmark for future research.", "title": "" }, { "docid": "7a417c3fe0a93656f5628463d9c425e7", "text": "Given a finite range space Σ = (X, R), with N = |X| + |R|, we present two simple algorithms, based on the multiplicative-weight method, for computing a small-size hitting set or set cover of Σ. The first algorithm is a simpler variant of the Brönnimann-Goodrich algorithm but more efficient to implement, and the second algorithm can be viewed as solving a two-player zero-sum game. These algorithms, in conjunction with some standard geometric data structures, lead to near-linear algorithms for computing a small-size hitting set or set cover for a number of geometric range spaces. For example, they lead to O(N polylog(N)) expected-time randomized O(1)-approximation algorithms for both hitting set and set cover if X is a set of points and ℜ a set of disks in R2.", "title": "" } ]
scidocsrr
f67544bde50fcb5a22cea405184aaa65
Overview of the improvement of the ring-stage survival assay-a novel phenotypic assay for the detection of artemisinin-resistant Plasmodium falciparum
[ { "docid": "5995a2775a6a10cf4f2bd74a2959935d", "text": "Artemisinin-based combination therapy is recommended to treat Plasmodium falciparum worldwide, but observations of longer artemisinin (ART) parasite clearance times (PCTs) in Southeast Asia are widely interpreted as a sign of potential ART resistance. In search of an in vitro correlate of in vivo PCT after ART treatment, a ring-stage survival assay (RSA) of 0–3 h parasites was developed and linked to polymorphisms in the Kelch propeller protein (K13). However, RSA remains a laborious process, involving heparin, Percoll gradient, and sorbitol treatments to obtain rings in the 0–3 h window. Here two alternative RSA protocols are presented and compared to the standard Percoll-based method, one highly stage-specific and one streamlined for laboratory application. For all protocols, P. falciparum cultures were synchronized with 5 % sorbitol treatment twice over two intra-erythrocytic cycles. For a filtration-based RSA, late-stage schizonts were passed through a 1.2 μm filter to isolate merozoites, which were incubated with uninfected erythrocytes for 45 min. The erythrocytes were then washed to remove lysis products and further incubated until 3 h post-filtration. Parasites were pulsed with either 0.1 % dimethyl sulfoxide (DMSO) or 700 nM dihydroartemisinin in 0.1 % DMSO for 6 h, washed twice in drug-free media, and incubated for 66–90 h, when survival was assessed by microscopy. For a sorbitol-only RSA, synchronized young (0–3 h) rings were treated with 5 % sorbitol once more prior to the assay and adjusted to 1 % parasitaemia. The drug pulse, incubation, and survival assessment were as described above. Ring-stage survival of P. falciparum parasites containing either the K13 C580 or C580Y polymorphism (associated with low and high RSA survival, respectively) were assessed by the described filtration and sorbitol-only methods and produced comparable results to the reported Percoll gradient RSA. Advantages of both new methods include: fewer reagents, decreased time investment, and fewer procedural steps, with enhanced stage-specificity conferred by the filtration method. Assessing P. falciparum ART sensitivity in vitro via RSA can be streamlined and accurately evaluated in the laboratory by filtration or sorbitol synchronization methods, thus increasing the accessibility of the assay to research groups.", "title": "" } ]
[ { "docid": "b9538c45fc55caff8b423f6ecc1fe416", "text": " Summary. The Probabilistic I/O Automaton model of [31] is used as the basis for a formal presentation and proof of the randomized consensus algorithm of Aspnes and Herlihy. The algorithm guarantees termination within expected polynomial time. The Aspnes-Herlihy algorithm is a rather complex algorithm. Processes move through a succession of asynchronous rounds, attempting to agree at each round. At each round, the agreement attempt involves a distributed random walk. The algorithm is hard to analyze because of its use of nontrivial results of probability theory (specifically, random walk theory which is based on infinitely many coin flips rather than on finitely many coin flips), because of its complex setting, including asynchrony and both nondeterministic and probabilistic choice, and because of the interplay among several different sub-protocols. We formalize the Aspnes-Herlihy algorithm using probabilistic I/O automata. In doing so, we decompose it formally into three subprotocols: one to carry out the agreement attempts, one to conduct the random walks, and one to implement a shared counter needed by the random walks. Properties of all three subprotocols are proved separately, and combined using general results about automaton composition. It turns out that most of the work involves proving non-probabilistic properties (invariants, simulation mappings, non-probabilistic progress properties, etc.). The probabilistic reasoning is isolated to a few small sections of the proof. The task of carrying out this proof has led us to develop several general proof techniques for probabilistic I/O automata. These include ways to combine expectations for different complexity measures, to compose expected complexity properties, to convert probabilistic claims to deterministic claims, to use abstraction mappings to prove probabilistic properties, and to apply random walk theory in a distributed computational setting. We apply all of these techniques to analyze the expected complexity of the algorithm.", "title": "" }, { "docid": "3e4a2d4564e9904b3d3b0457860da5cf", "text": "Model-based, torque-level control can offer precision and speed advantages over velocity-level or position-level robot control. However, the dynamic parameters of the robot must be identified accurately. Several steps are involved in dynamic parameter identification, including modeling the system dynamics, joint position/torque data acquisition and filtering, experimental design, dynamic parameters estimation and validation. In this paper, we propose a novel, computationally efficient and intuitive optimality criterion to design the excitation trajectory for the robot to follow. Experiments are carried out for a 6 degree of freedom (DOF) Staubli TX-90 robot. We validate the dynamics parameters using torque prediction accuracy and compare to existing methods. The RMS errors of the prediction were small, and the computation time for the new, optimal objective function is an order of magnitude less than for existing approaches. & 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "6b1e67c1768f9ec7a6ab95a9369b92d1", "text": "Autoregressive sequence models based on deep neural networks, such as RNNs, Wavenet and the Transformer attain state-of-the-art results on many tasks. However, they are difficult to parallelize and are thus slow at processing long sequences. 
RNNs lack parallelism both during training and decoding, while architectures like WaveNet and Transformer are much more parallelizable during training, yet still operate sequentially during decoding. We present a method to extend sequence models using discrete latent variables that makes decoding much more parallelizable. We first autoencode the target sequence into a shorter sequence of discrete latent variables, which at inference time is generated autoregressively, and finally decode the output sequence from this shorter latent sequence in parallel. To this end, we introduce a novel method for constructing a sequence of discrete latent variables and compare it with previously introduced methods. Finally, we evaluate our model end-to-end on the task of neural machine translation, where it is an order of magnitude faster at decoding than comparable autoregressive models. While lower in BLEU than purely autoregressive models, our model achieves higher scores than previously proposed non-autoregressive translation models.", "title": "" }, { "docid": "60999276e84cbd46d778c62439014598", "text": "Graph comprehension is constrained by the goals of the cognitive system that processes the graph and by the context in which the graph appears. In this paper we report the results of a study using a sentence-graph verification paradigm. We recorded participants’ reaction times to indicate whether the information contained in a simple bar graph matched a written description of the graph. Aside from the consistency of visual and verbal information, we manipulated whether the graph was ascending or descending, the relational term in the verbal description, and the labels of the bars of the graph. Our results showed that the biggest source of variance in people’s reaction times is whether the order in which the referents appear in the graph is the same as the order in which they appear in the sentence. The implications of this finding for contemporary theories of graph comprehension are discussed.", "title": "" }, { "docid": "7ad4c2f0b66a11891bd19d175becf5c2", "text": "The presence of noise represent a relevant issue in image feature extraction and classification. In deep learning, representation is learned directly from the data and, therefore, the classification model is influenced by the quality of the input. However, the ability of deep convolutional neural networks to deal with images that have a different quality when compare to those used to train the network is still to be fully understood. In this paper, we evaluate the generalization of models learned by different networks using noisy images. Our results show that noise cause the classification problem to become harder. However, when image quality is prone to variations after deployment, it might be advantageous to employ models learned using noisy data.", "title": "" }, { "docid": "672c11254309961fe02bc48827f8949e", "text": "HIV-1 integration into the host genome favors actively transcribed genes. Prior work indicated that the nuclear periphery provides the architectural basis for integration site selection, with viral capsid-binding host cofactor CPSF6 and viral integrase-binding cofactor LEDGF/p75 contributing to selection of individual sites. Here, by investigating the early phase of infection, we determine that HIV-1 traffics throughout the nucleus for integration. CPSF6-capsid interactions allow the virus to bypass peripheral heterochromatin and penetrate the nuclear structure for integration. 
Loss of interaction with CPSF6 dramatically alters virus localization toward the nuclear periphery and integration into transcriptionally repressed lamina-associated heterochromatin, while loss of LEDGF/p75 does not significantly affect intranuclear HIV-1 localization. Thus, CPSF6 serves as a master regulator of HIV-1 intranuclear localization by trafficking viral preintegration complexes away from heterochromatin at the periphery toward gene-dense chromosomal regions within the nuclear interior.", "title": "" }, { "docid": "f262c85e241e0c6dd6eb472841284345", "text": "BACKGROUND\nWe evaluated the feasibility and tolerability of triple- versus double-drug chemotherapy in elderly patients with oesophagogastric cancer.\n\n\nMETHODS\nPatients aged 65 years or older with locally advanced or metastatic oesophagogastric cancer were stratified and randomised to infusional 5-FU, leucovorin and oxaliplatin without (FLO) or with docetaxel 50 mg/m(2) (FLOT) every 2 weeks. The study is registered at ClinicalTrials.gov, identifier NCT00737373.\n\n\nFINDINGS\nOne hundred and forty three (FLO, 71; FLOT, 72) patients with a median age of 70 years were enrolled. The triple combination was associated with more treatment-related National Cancer Institute Common Toxicity Criteria (NCI-CTC) grade 3/4 adverse events (FLOT, 81.9%; FLO, 38.6%; P<.001) and more patients experiencing a ≥10-points deterioration of European Organization for Research and Treatment of Cancer Quality of Life (EORTC QoL) global health status scores (FLOT, 47.5%; FLO 20.5%; p=.011). The triple combination was associated with more alopecia (P<.001), neutropenia (P<.001), leukopenia (P<.001), diarrhoea (P=.006) and nausea (P=.029).). No differences were observed in treatment duration and discontinuation due to toxicity, cumulative doses or toxic deaths between arms. The triple combination improved response rates and progression-free survival in the locally advanced subgroup and in the subgroup of patients aged between 65 and 70 years but not in the metastatic group or in patients aged 70 years and older.\n\n\nINTERPRETATION\nThe triple-drug chemotherapy was feasible in elderly patients with oesophagogastric cancer. However, toxicity was significantly increased and QoL deteriorated in a relevant proportion of patients.\n\n\nFUNDING\nThe study was partially funded by Sanofi-Aventis.", "title": "" }, { "docid": "a90fe1117e587d5b48a056278f48b01d", "text": "The concept of a medical parallel robot applicable to chest compression in the process of cardiopulmonary resuscitation (CPR) is proposed in this paper. According to the requirement of CPR action, a three-prismatic-universal-universal (3-PUU) translational parallel manipulator (TPM) is designed and developed for such applications, and a detailed analysis has been performed for the 3-PUU TPM involving the issues of kinematics, dynamics, and control. In view of the physical constraints imposed by mechanical joints, both the robot-reachable workspace and the maximum inscribed cylinder-usable workspace are determined. Moreover, the singularity analysis is carried out via the screw theory, and the robot architecture is optimized to obtain a large well-conditioning usable workspace. Based on the principle of virtual work with a simplifying hypothesis adopted, the dynamic model is established, and dynamic control utilizing computed torque method is implemented. At last, the experimental results made for the prototype illustrate the performance of the control algorithm well. 
This research will lay a good foundation for the development of a medical robot to assist in CPR operation.", "title": "" }, { "docid": "75cb5c4c9c122d6e80419a3ceb99fd67", "text": "Indonesian clove cigarettes (kreteks), typically have the appearance of a conventional domestic cigarette. The unique aspects of kreteks are that in addition to tobacco they contain dried clove buds (15-40%, by wt.), and are flavored with a proprietary \"sauce\". Whereas the clove buds contribute to generating high levels of eugenol in the smoke, the \"sauce\" may also contribute other potentially harmful constituents in addition to those associated with tobacco use. We measured levels of eugenol, trans-anethole (anethole), and coumarin in smoke from 33 brands of clove-flavored cigarettes (filtered and unfiltered) from five kretek manufacturers. In order to provide information for evaluating the delivery of these compounds under standard smoking conditions, a quantification method was developed for their measurement in mainstream cigarette smoke. The method allowed collection of mainstream cigarette smoke particulate matter on a Cambridge filter pad, extraction with methanol, sampling by automated headspace solid-phase microextraction, and subsequent analysis using gas chromatography/mass spectrometry. The presence of these compounds was confirmed in the smoke of kreteks using mass spectral library matching, high-resolution mass spectrometry (+/-0.0002 amu), and agreement with a relative retention time index, and native standards. We found that when kreteks were smoked according to standardized machine smoke parameters as specified by the International Standards Organization, all 33 clove brands contained levels of eugenol ranging from 2,490 to 37,900 microg/cigarette (microg/cig). Anethole was detected in smoke from 13 brands at levels of 22.8-1,030 microg/cig, and coumarin was detected in 19 brands at levels ranging from 9.2 to 215 microg/cig. These detected levels are significantly higher than the levels found in commercial cigarette brands available in the United States.", "title": "" }, { "docid": "2a8c3676233cf1ae61fe91a7af3873d9", "text": "Rumination has attracted increasing theoretical and empirical interest in the past 15 years. Previous research has demonstrated significant relationships between rumination, depression, and metacognition. Two studies were conducted to further investigate these relationships and test the fit of a clinical metacognitive model of rumination and depression in samples of both depressed and nondepressed participants. In these studies, we collected cross-sectional data of rumination, depression, and metacognition. The relationships among variables were examined by testing the fit of structural equation models. In the study on depressed participants, a good model fit was obtained consistent with predictions. There were similarities and differences between the depressed and nondepressed samples in terms of relationships among metacognition, rumination, and depression. In each case, theoretically consistent paths between positive metacognitive beliefs, rumination, negative metacognitive beliefs, and depression were evident. The conceptual and clinical implications of these data are discussed.", "title": "" }, { "docid": "77be4363f9080eb8a3b73c9237becca4", "text": "Aim: The purpose of this paper is to present findings of an integrative literature review related to employees’ motivational practices in organizations. 
Method: A broad search of computerized databases focusing on articles published in English during 1999– 2010 was completed. Extensive screening sought to determine current literature themes and empirical research evidence completed in employees’ focused specifically on motivation in organization. Results: 40 articles are included in this integrative literature review. The literature focuses on how job characteristics, employee characteristic, management practices and broader environmental factors influence employees’ motivation. Research that links employee’s motivation is both based on qualitative and quantitative studies. Conclusion: This literature reveals widespread support of motivation concepts in organizations. Theoretical and editorial literature confirms motivation concepts are central to employees. Job characteristics, management practices, employee characteristics and broader environmental factors are the key variables influence employees’ motivation in organization.", "title": "" }, { "docid": "89875f4c0d70e655dd1ff9ffef7c04c2", "text": "Flexible electronics incorporate all the functional attributes of conventional rigid electronics in formats that have been altered to survive mechanical deformations. Understanding the evolution of device performance during bending, stretching, or other mechanical cycling is, therefore, fundamental to research efforts in this area. Here, we review the various classes of flexible electronic devices (including power sources, sensors, circuits and individual components) and describe the basic principles of device mechanics. We then review techniques to characterize the deformation tolerance and durability of these flexible devices, and we catalogue and geometric designs that are intended to optimize electronic systems for maximum flexibility.", "title": "" }, { "docid": "87b7b05c6af2fddb00f7b1d3a60413c1", "text": "Mobile crowdsensing (MCS) is a human-driven Internet of Things service empowering citizens to observe the phenomena of individual, community, or even societal value by sharing sensor data about their environment while on the move. Typical MCS service implementations utilize cloud-based centralized architectures, which consume a lot of computational resources and generate significant network traffic, both in mobile networks and toward cloud-based MCS services. Mobile edge computing (MEC) is a natural choice to distribute MCS solutions by moving computation to network edge, since an MEC-based architecture enables significant performance improvements due to the partitioning of problem space based on location, where real-time data processing and aggregation is performed close to data sources. This in turn reduces the associated traffic in mobile core and will facilitate MCS deployments of massive scale. This paper proposes an edge computing architecture adequate for massive scale MCS services by placing key MCS features within the reference MEC architecture. In addition to improved performance, the proposed architecture decreases privacy threats and permits citizens to control the flow of contributed sensor data. It is adequate for both data analytics and real-time MCS scenarios, in line with the 5G vision to integrate a huge number of devices and enable innovative applications requiring low network latency. 
Our analysis of service overhead introduced by distributed architecture and service reconfiguration at network edge performed on real user traces shows that this overhead is controllable and small compared with the aforementioned benefits. When enhanced by interoperability concepts, the proposed architecture creates an environment for the establishment of an MCS marketplace for bartering and trading of both raw sensor data and aggregated/processed information.", "title": "" }, { "docid": "8e16b62676e5ef36324c738ffd5f737d", "text": "Virtualization technology has shown immense popularity within embedded systems due to its direct relationship with cost reduction, better resource utilization, and higher performance measures. Efficient hypervisors are required to achieve such high performance measures in virtualized environments, while taking into consideration the low memory footprints as well as the stringent timing constraints of embedded systems. Although there are a number of open-source hypervisors available such as Xen, Linux KVM and OKL4 Micro visor, this is the first paper to present the open-source embedded hypervisor Extensible Versatile hyper Visor (Xvisor) and compare it against two of the commonly used hypervisors KVM and Xen in-terms of comparison factors that affect the whole system performance. Experimental results on ARM architecture prove Xvisor's lower CPU overhead, higher memory bandwidth, lower lock synchronization latency and lower virtual timer interrupt overhead and thus overall enhanced virtualized embedded system performance.", "title": "" }, { "docid": "59c2e1dcf41843d859287124cc655b05", "text": "Atherosclerotic cardiovascular disease (ASCVD) is the most common cause of death in most Western countries. Nutrition factors contribute importantly to this high risk for ASCVD. Favourable alterations in diet can reduce six of the nine major risk factors for ASCVD, i.e. high serum LDL-cholesterol levels, high fasting serum triacylglycerol levels, low HDL-cholesterol levels, hypertension, diabetes and obesity. Wholegrain foods may be one the healthiest choices individuals can make to lower the risk for ASCVD. Epidemiological studies indicate that individuals with higher levels (in the highest quintile) of whole-grain intake have a 29 % lower risk for ASCVD than individuals with lower levels (lowest quintile) of whole-grain intake. It is of interest that neither the highest levels of cereal fibre nor the highest levels of refined cereals provide appreciable protection against ASCVD. Generous intake of whole grains also provides protection from development of diabetes and obesity. Diets rich in wholegrain foods tend to decrease serum LDL-cholesterol and triacylglycerol levels as well as blood pressure while increasing serum HDL-cholesterol levels. Whole-grain intake may also favourably alter antioxidant status, serum homocysteine levels, vascular reactivity and the inflammatory state. Whole-grain components that appear to make major contributions to these protective effects are: dietary fibre; vitamins; minerals; antioxidants; phytosterols; other phytochemicals. Three servings of whole grains daily are recommended to provide these health benefits.", "title": "" }, { "docid": "f10724859d8982be426891e0d5c44629", "text": "This paper empirically examines how capital affects a bank’s performance (survival and market share) and how this effect varies across banking crises, market crises, and normal times that occurred in the US over the past quarter century. We have two main results. 
First, capital helps small banks to increase their probability of survival and market share at all times (during banking crises, market crises, and normal times). Second, capital enhances the performance of medium and large banks primarily during banking crises. Additional tests explore channels through which capital generates these effects. Numerous robustness checks and additional tests are performed. & 2013 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "ac8a0b4ad3f2905bc4e37fa4b0fcbe0a", "text": "In this work we present a NIDS cluster as a scalable solution for realizing high-performance, stateful network intrusion detection on commodity hardware. The design addresses three challenges: (i) distributing traffic evenly across an extensible set of analysis nodes in a fashion that minimizes the communication required for coordination, (ii) adapting the NIDS’s operation to support coordinating its low-level analysis rather than just aggregating alerts; and (iii) validating that the cluster produces sound results. Prototypes of our NIDS cluster now operate at the Lawrence Berkeley National Laboratory and the University of California at Berkeley. In both environments the clusters greatly enhance the power of the network security monitoring.", "title": "" }, { "docid": "fbc3afe22ed7c2cc6d60be5fcb906b90", "text": "The thud of a bouncing ball, the onset of speech as lips open — when visual and audio events occur together, it suggests that there might be a common, underlying event that produced both signals. In this paper, we argue that the visual and audio components of a video signal should be modeled jointly using a fused multisensory representation. We propose to learn such a representation in a self-supervised way, by training a neural network to predict whether video frames and audio are temporally aligned. We use this learned representation for three applications: (a) sound source localization, i.e. visualizing the source of sound in a video; (b) audio-visual action recognition; and (c) on/offscreen audio source separation, e.g. removing the off-screen translator’s voice from a foreign official’s speech. Code, models, and video results are available on our webpage: http://andrewowens.com/multisensory.", "title": "" }, { "docid": "97691304930a85066a15086877473857", "text": "In the context of modern cryptosystems, a common theme is the creation of distributed trust networks. In most of these designs, permanent storage of a contract is required. However, permanent storage can become a major performance and cost bottleneck. As a result, good code compression schemes are a key factor in scaling these contract based cryptosystems. For this project, we formalize and implement a data structure called the Merkelized Abstract Syntax Tree (MAST) to address both data integrity and compression. MASTs can be used to compactly represent contractual programs that will be executed remotely, and by using some of the properties of Merkle trees, they can also be used to verify the integrity of the code being executed. A concept by the same name has been discussed in the Bitcoin community for a while, the terminology originates from the work of Russel O’Connor and Pieter Wuille, however this discussion was limited to private correspondences. We present a formalization of it and provide an implementation.The project idea was developed with Bitcoin applications in mind, and the experiment we set up uses MASTs in a crypto currency network simulator. 
Using MASTs in the Bitcoin protocol [2] would increase the complexity (length) of contracts permitted on the network, while simultaneously maintaining the security of broadcasted data. Additionally, contracts may contain privileged, secret branches of execution.", "title": "" }, { "docid": "ae43cf8140bbaf7aa8bc04eceb130fda", "text": "Network virtualization has become increasingly prominent in recent years. It enables the creation of network infrastructures that are specifically tailored to the needs of distinct network applications and supports the instantiation of favorable environments for the development and evaluation of new architectures and protocols. Despite the wide applicability of network virtualization, the shared use of routing devices and communication channels leads to a series of security-related concerns. It is necessary to provide protection to virtual network infrastructures in order to enable their use in real, large scale environments. In this paper, we present an overview of the state of the art concerning virtual network security. We discuss the main challenges related to this kind of environment, some of the major threats, as well as solutions proposed in the literature that aim to deal with different security aspects.", "title": "" } ]
scidocsrr
9024ad2b909493bd511fc45ef0308be2
An image-warping VR-architecture: design, implementation and applications
[ { "docid": "8745e21073db143341e376bad1f0afd7", "text": "The Virtual Reality (VR) user interface style allows natural hand and body motions to manipulate virtual objects in 3D environments using one or more 3D input devices. This style is best suited to application areas where traditional two-dimensional styles fall short, such as scienti c visualization, architectural visualization, and remote manipulation. Currently, the programming e ort required to produce a VR application is too large, and many pitfalls must be avoided in the creation of successful VR programs. In this paper we describe the Decoupled Simulation Model for creating successful VR applications, and a software system that embodies this model. The MR Toolkit simpli es the development of VR applications by providing standard facilities required by a wide range of VR user interfaces. These facilities include support for distributed computing, head-mounted displays, room geometry management, performance monitoring, hand input devices, and sound feedback. The MR Toolkit encourages programmers to structure their applications to take advantage of the distributed computing capabilities of workstation networks improving the application's performance. In this paper, the motivations and the architecture of the toolkit are outlined, the programmer's view is described, and a simple application is brie y described. CR", "title": "" } ]
[ { "docid": "f8d0929721ba18b2412ca516ac356004", "text": "Because of the fact that vehicle crash tests are complex and complicated experiments it is advisable to establish their mathematical models. This paper contains an overview of the kinematic and dynamic relationships of a vehicle in a collision. There is also presented basic mathematical model representing a collision together with its analysis. The main part of this paper is devoted to methods of establishing parameters of the vehicle crash model and to real crash data investigation i.e. – creation of a Kelvin model for a real experiment, its analysis and validation. After model’s parameters extraction a quick assessment of an occupant crash severity is done. Key-Words: Modeling, vehicle crash, Kelvin model, data processing.", "title": "" }, { "docid": "6e07a006d4e34f35330c74116762a611", "text": "Human replicas may elicit unintended cold, eerie feelings in viewers, an effect known as the uncanny valley. Masahiro Mori, who proposed the effect in 1970, attributed it to inconsistencies in the replica's realism with some of its features perceived as human and others as nonhuman. This study aims to determine whether reducing realism consistency in visual features increases the uncanny valley effect. In three rounds of experiments, 548 participants categorized and rated humans, animals, and objects that varied from computer animated to real. Two sets of features were manipulated to reduce realism consistency. (For humans, the sets were eyes-eyelashes-mouth and skin-nose-eyebrows.) Reducing realism consistency caused humans and animals, but not objects, to appear eerier and colder. However, the predictions of a competing theory, proposed by Ernst Jentsch in 1906, were not supported: The most ambiguous representations-those eliciting the greatest category uncertainty-were neither the eeriest nor the coldest.", "title": "" }, { "docid": "a5ac7aa3606ebb683d4d9de5dcd89856", "text": "Advanced persistent threats (APTs) pose a significant risk to nearly every infrastructure. Due to the sophistication of these attacks, they are able to bypass existing security systems and largely infiltrate the target network. The prevention and detection of APT campaigns is also challenging, because of the fact that the attackers constantly change and evolve their advanced techniques and methods to stay undetected. In this paper we analyze 22 different APT reports and give an overview of the used techniques and methods. The analysis is focused on the three main phases of APT campaigns that allow to identify the relevant characteristics of such attacks. For each phase we describe the most commonly used techniques and methods. Through this analysis we could reveal different relevant characteristics of APT campaigns, for example that the usage of 0-day exploit is not common for APT attacks. Furthermore, the analysis shows that the dumping of credentials is a relevant step in the lateral movement phase for most APT campaigns. Based on the identified characteristics, we also propose concrete prevention and detection approaches that make it possible to identify crucial malicious activities that are performed during APT campaigns.", "title": "" }, { "docid": "27ef8bac566dbba418870036ed555b1a", "text": "Seemingly unrelated regression (SUR) models are useful in studying the interactions among different variables. 
In a high dimensional setting or when applied to large panels of time series, these models require a large number of parameters to be estimated and suffer from inferential problems. To avoid overparametrization and overfitting issues, we propose a hierarchical Dirichlet process prior for SUR models, which allows shrinkage of SUR coefficients toward multiple locations and identification of groups of coefficients. We propose a two-stage hierarchical prior distribution, where the first stage of the hierarchy consists of a Lasso conditionally independent prior distribution of the Normal-Gamma family for the SUR coefficients. The second stage is given by a random mixture distribution for the Normal-Gamma hyperparameters, which allows for parameter parsimony through two components: the first one is a random Dirac point-mass distribution, which induces sparsity in the SUR coefficients; the second is a Dirichlet process prior, which allows for clustering of the SUR coefficients. Our sparse SUR model with multiple locations, scales and shapes includes the Vector autoregressive models (VAR) and dynamic panel models as special cases. We consider an international business cycle application to show the effectiveness of our model and inference approach. Our new multiple shrinkage prior model allows us to better understand shock transmission phenomena, to extract coloured networks and to classify the strength of the linkages. The empirical results represent a different point of view on international business cycles, providing interesting new findings in the relationship between core and periphery countries.", "title": "" }, { "docid": "5d40cae84395cc94d68bd4352383d66b", "text": "Scalable High Efficiency Video Coding (SHVC) is the extension of the High Efficiency Video Coding (HEVC). This standard is developed to ameliorate the coding efficiency for the spatial and quality scalability. In this paper, we present a survey of the SHVC extension. We also describe its types and explain the different additional coding tools that further improve the Enhancement Layer (EL) coding efficiency. Furthermore, we assess through experimental results the performance of the SHVC for different coding configurations. The effectiveness of the SHVC was demonstrated, using two layers, by comparing its coding adequacy to simulcast configuration and HEVC for the enhancement layer using HM16 for several test sequences and coding conditions.", "title": "" }, { "docid": "a5f9b7b7b25ccc397acde105c39c3d9d", "text": "Processors with multiple cores and complex cache coherence protocols are widely employed to improve the overall performance. It is a major challenge to verify the correctness of a cache coherence protocol since the number of reachable states grows exponentially with the number of cores. In this paper, we propose an efficient test generation technique, which can be used to achieve full state and transition coverage in simulation based verification for a wide variety of cache coherence protocols. Based on effective analysis of the state space structure, our method can generate more efficient test sequences (50% shorter) compared with tests generated by breadth first search. Moreover, our proposed approach can generate tests on-the-fly due to its space efficient design.", "title": "" }, { "docid": "590ad5ce089e824d5e9ec43c54fa3098", "text": "The abstraction of a shared memory is of growing importance in distributed computing systems. 
Traditional memory consistency ensures that all processes agree on a common order of all operations on memory. Unfortunately, providing these guarantees entails access latencies that prevent scaling to large systems. This paper weakens such guarantees by definingcausal memory, an abstraction that ensures that processes in a system agree on the relative ordering of operations that arecausally related. Because causal memory isweakly consistent, it admits more executions, and hence more concurrency, than either atomic or sequentially consistent memories. This paper provides a formal definition of causal memory and gives an implementation for message-passing systems. In addition, it describes a practical class of programs that, if developed for a strongly consistent memory, run correctly with causal memory.", "title": "" }, { "docid": "e46943cc1c73a56093d4194330d52d52", "text": "This paper deals with the compact modeling of an emerging technology: the carbon nanotube field-effect transistor (CNTFET). The paper proposed two design-oriented compact models, the first one for CNTFET with a classical behavior (MOSFET-like CNTFET), and the second one for CNTFET with an ambipolar behavior (Schottky-barrier CNTFET). Both models have been compared with exact numerical simulations and then implemented in VHDL-AMS", "title": "" }, { "docid": "30e15e8a3e6eaf424b2f994d2631ac37", "text": "This paper presents a volumetric stereo and silhouette fusion algorithm for acquiring high quality models from multiple calibrated photographs. Our method is based on computing and merging depth maps. Different from previous methods of this category, the silhouette information is also applied in our algorithm to recover the shape information on the textureless and occluded areas. The proposed algorithm starts by computing visual hull using a volumetric method in which a novel projection test method is proposed for visual hull octree construction. Then, the depth map of each image is estimated by an expansion-based approach that returns a 3D point cloud with outliers and redundant information. After generating an oriented point cloud from stereo by rejecting outlier, reducing scale, and estimating surface normal for the depth maps, another oriented point cloud from silhouette is added by carving the visual hull octree structure using the point cloud from stereo to restore the textureless and occluded surfaces. Finally, Poisson Surface Reconstruction approach is applied to convert the oriented point cloud both from stereo and silhouette into a complete and accurate triangulated mesh model. The proposed approach has been implemented and the performance of the approach is demonstrated on several real data sets, along with qualitative comparisons with the state-of-the-art image-based modeling techniques according to the Middlebury benchmark.", "title": "" }, { "docid": "1fcd6f0c91522a91fa05b0d969f8eec1", "text": "Nonnegative matrix factorization (NMF) is a popular method for multivariate analysis of nonnegative data, the goal of which is to decompose a data matrix into a product of two factor matrices with all entries in factor matrices restricted to be nonnegative. NMF was shown to be useful in a task of clustering (especially document clustering), but in some cases NMF produces the results inappropriate to the clustering problems. In this paper, we present an algorithm for orthogonal nonnegative matrix factorization, where an orthogonality constraint is imposed on the nonnegative decomposition of a term-document matrix. 
The result of orthogonal NMF can be clearly interpreted for the clustering problems, and also the performance of clustering is usually better than that of the NMF. We develop multiplicative updates directly from true gradient on Stiefel manifold, whereas existing algorithms consider additive orthogonality constraints. Experiments on several different document data sets show our orthogonal NMF algorithms perform better in a task of clustering, compared to the standard NMF and an existing orthogonal NMF.", "title": "" }, { "docid": "e0f0ccb0e1c2f006c5932f6b373fb081", "text": "This paper proposes a methodology to be used in the segmentation of infrared thermography images for the detection of bearing faults in induction motors. The proposed methodology can be a helpful tool for preventive and predictive maintenance of the induction motor. This methodology is based on manual threshold image processing to obtain a segmentation of an infrared thermal image, which is used for the detection of critical points known as hot spots on the system under test. From these hot spots, the parameters of interest that describe the thermal behavior of the induction motor were obtained. With the segmented image, it is possible to compare and analyze the thermal conditions of the system.", "title": "" }, { "docid": "4f296caa2ee4621a8e0858bfba701a3b", "text": "This paper considers the problem of assessing visual aesthetic quality with semantic information. We cast the assessment problem as the main task among a multi-task deep model, and argue that semantic recognition offers the key to addressing this problem. Based on convolutional neural networks, we propose a general multi-task framework with four different structures. In each structure, aesthetic quality assessment task and semantic recognition task are leveraged, and different features are explored to improve the quality assessment. Moreover, an effective strategy of keeping a balanced effect between the semantic task and aesthetic task is developed to optimize the parameters of our framework. The correlation analysis among the tasks validates the importance of the semantic recognition in aesthetic quality assessment. Extensive experiments verify the effectiveness of the proposed multi-task framework, and further corroborate the", "title": "" }, { "docid": "bf85db5489a61b5fca8d121de198be97", "text": "In this paper, we propose a novel recursive recurrent neural network (R2NN) to model the end-to-end decoding process for statistical machine translation. R2NN is a combination of recursive neural network and recurrent neural network, and in turn integrates their respective capabilities: (1) new information can be used to generate the next hidden state, like recurrent neural networks, so that language model and translation model can be integrated naturally; (2) a tree structure can be built, as recursive neural networks, so as to generate the translation candidates in a bottom up manner. A semi-supervised training approach is proposed to train the parameters, and the phrase pair embedding is explored to model translation confidence directly. 
Experiments on a Chinese to English translation task show that our proposed R2NN can outperform the stateof-the-art baseline by about 1.5 points in BLEU.", "title": "" }, { "docid": "8af844944f6edee4c271d73a552dc073", "text": "Many important email-related tasks, such as email classification or search, highly rely on building quality document representations (e.g., bag-of-words or key phrases) to assist matching and understanding. Despite prior success on representing textual messages, creating quality user representations from emails was overlooked. In this paper, we propose to represent users using embeddings that are trained to reflect the email communication network. Our experiments on Enron dataset suggest that the resulting embeddings capture the semantic distance between users. To assess the quality of embeddings in a real-world application, we carry out auto-foldering task where the lexical representation of an email is enriched with user embedding features. Our results show that folder prediction accuracy is improved when embedding features are present across multiple settings.", "title": "" }, { "docid": "3194a0dd979b668bb25afb10260c30d2", "text": "An octa-band antenna for 5.7-in mobile phones with the size of 80 mm <inline-formula> <tex-math notation=\"LaTeX\">$\\times6$ </tex-math></inline-formula> mm <inline-formula> <tex-math notation=\"LaTeX\">$\\times5.8$ </tex-math></inline-formula> mm is proposed and studied. The proposed antenna is composed of a coupled line, a monopole branch, and a ground branch. By using the 0.25-, 0.5-, and 0.75-wavelength modes, the lower band (704–960 MHz) and the higher band (1710–2690 MHz) are covered. The working mechanism is analyzed based on the S-parameters and the surface current distributions. The attractive merits of the proposed antenna are that the nonground portion height is only 6 mm and any lumped element is not used. A prototype of the proposed antenna is fabricated and measured. The measured −6 dB impedance bandwidths are 350 MHz (0.67–1.02 GHz) and 1.27 GHz (1.65–2.92 GHz) at the lower and higher bands, respectively, which can cover the LTE700, GSM850, GSM900, GSM1800, GSM1900, UMTS, LTE2300, and LTE2500 bands. The measured patterns, gains, and efficiencies are presented.", "title": "" }, { "docid": "38f6aaf5844ddb6e4ed0665559b7f813", "text": "A novel dual-broadband multiple-input-multiple-output (MIMO) antenna system is developed. The MIMO antenna system consists of two dual-broadband antenna elements, each of which comprises two opened loops: an outer loop and an inner loop. The opened outer loop acts as a half-wave dipole and is excited by electromagnetic coupling from the inner loop, leading to a broadband performance for the lower band. The opened inner loop serves as two monopoles. A combination of the two monopoles and the higher modes from the outer loop results in a broadband performance for the upper band. The bandwidths (return loss >;10 dB) achieved for the dual-broadband antenna element are 1.5-2.8 GHz (~ 60%) for the lower band and 4.7-8.5 GHz (~ 58\\%) for the upper band. Two U-shaped slots are introduced to reduce the coupling between the two dual-broadband antenna elements. The isolation achieved is higher than 15 dB in the lower band and 20 dB in the upper band, leading to an envelope correlation coefficient of less than 0.01. 
The dual-broadband MIMO antenna system has a compact volume of 50×17×0.8 mm3, suitable for GSM/UMTS/LTE and WLAN communication handsets.", "title": "" }, { "docid": "5dec9852efc32d0a9b93cd173573abf0", "text": "Magnitudes and timings of kinematic variables have often been used to investigate technique. Where large inter-participant differences exist, as in basketball, analysis of intra-participant variability may provide an alternative indicator of good technique. The aim of the present study was to investigate the joint kinematics and coordination-variability between missed and successful (swishes) free throw attempts. Collegiate level basketball players performed 20 free throws, during which ball release parameters and player kinematics were recorded. For each participant, three misses and three swishes were randomly selected and analysed. Margins of error were calculated based on the optimal-minimum-speed principle. Differences in outcome were distinguished by ball release speeds statistically lower than the optimal speed (misses -0.12 +/- 0.10m s(-1); swishes -0.02 +/- 0.07m s(-1); P < 0.05). No differences in wrist linear velocity were detected, but as the elbow influences the wrist through velocity-dependent-torques, elbow-wrist angle-angle coordination-variability was quantified using vector-coding and found to increase in misses during the last 0.01 s before ball release (P < 0.05). As the margin of error on release parameters is small, the coordination-variability is small, but the increased coordination-variability just before ball release for misses is proposed to arise from players perceiving the technique to be inappropriate and trying to correct the shot. The synergy or coupling relationship between the elbow and wrist angles to generate the appropriate ball speed is proposed as the mechanism determining success of free-throw shots in experienced players.", "title": "" }, { "docid": "dd5c0dc27c0b195b1b8f2c6e6a5cea88", "text": "The increasing dependence on information networks for business operations has focused managerial attention on managing risks posed by failure of these networks. In this paper, we develop models to assess the risk of failure on the availability of an information network due to attacks that exploit software vulnerabilities. Software vulnerabilities arise from software installed on the nodes of the network. When the same software stack is installed on multiple nodes on the network, software vulnerabilities are shared among them. These shared vulnerabilities can result in correlated failure of multiple nodes resulting in longer repair times and greater loss of availability of the network. Considering positive network effects (e.g., compatibility) alone without taking the risks of correlated failure and the resulting downtime into account would lead to overinvestment in homogeneous software deployment. Exploiting characteristics unique to information networks, we present a queuing model that allows us to quantify downtime loss faced by a rm as a function of (1) investment in security technologies to avert attacks, (2) software diversification to limit the risk of correlated failure under attacks, and (3) investment in IT resources to repair failures due to attacks. The novelty of this method is that we endogenize the failure distribution and the node correlation distribution, and show how the diversification strategy and other security measures/investments may impact these two distributions, which in turn determine the security loss faced by the firm. 
We analyze and discuss the effectiveness of diversification strategy under different operating conditions and in the presence of changing vulnerabilities. We also take into account the benefits and costs of a diversification strategy. Our analysis provides conditions under which diversification strategy is advantageous.", "title": "" }, { "docid": "af5a2ad28ab61015c0344bf2e29fe6a7", "text": "Recent years have shown that more than ever governments and intelligence agencies try to control and bypass the cryptographic means used for the protection of data. Backdooring encryption algorithms is considered as the best way to enforce cryptographic control. Until now, only implementation backdoors (at the protocol/implementation/management level) are generally considered. In this paper we propose to address the most critical issue of backdoors: mathematical backdoors or by-design backdoors, which are put directly at the mathematical design of the encryption algorithm. While the algorithm may be totally public, proving that there is a backdoor, identifying it and exploiting it, may be an intractable problem. We intend to explain that it is probably possible to design and put such backdoors. Considering a particular family (among all the possible ones), we present BEA-1, a block cipher algorithm which is similar to the AES and which contains a mathematical backdoor enabling an operational and effective cryptanalysis. The BEA-1 algorithm (80-bit block size, 120-bit key, 11 rounds) is designed to resist to linear and differential cryptanalyses. A challenge will be proposed to the cryptography community soon. Its aim is to assess whether our backdoor is easily detectable and exploitable or not.", "title": "" } ]
scidocsrr
41b4bd5410ae9034056f7a4453a51680
Amulet: An Energy-Efficient, Multi-Application Wearable Platform
[ { "docid": "1f95cc7adafe07ad9254359ab405a980", "text": "Event-driven programming is a popular model for writing programs for tiny embedded systems and sensor network nodes. While event-driven programming can keep the memory overhead down, it enforces a state machine programming style which makes many programs difficult to write, maintain, and debug. We present a novel programming abstraction called protothreads that makes it possible to write event-driven programs in a thread-like style, with a memory overhead of only two bytes per protothread. We show that protothreads significantly reduce the complexity of a number of widely used programs previously written with event-driven state machines. For the examined programs the majority of the state machines could be entirely removed. In the other cases the number of states and transitions was drastically decreased. With protothreads the number of lines of code was reduced by one third. The execution time overhead of protothreads is on the order of a few processor cycles.", "title": "" }, { "docid": "5fd6462e402e3a3ab1e390243d80f737", "text": "We present TinyOS, a flexible, application-specific operating system for sensor networks. Sensor networks consist of (potentially) thousands of tiny, low-power nodes, each of which execute concurrent, reactive programs that must operate with severe memory and power constraints. The sensor network challenges of limited resources, event-centric concurrent applications, and low-power operation drive the design of TinyOS. Our solution combines flexible, fine-grain components with an execution model that supports complex yet safe concurrent operations. TinyOS meets these challenges well and has become the platform of choice for sensor network research; it is in use by over a hundred groups worldwide, and supports a broad range of applications and research topics. We provide a qualitative and quantitative evaluation of the system, showing that it supports complex, concurrent programs with very low memory requirements (many applications fit within 16KB of memory, and the core OS is 400 bytes) and efficient, low-power operation. We present our experiences with TinyOS as a platform for sensor network innovation and applications.", "title": "" }, { "docid": "9bcc81095c32ea39de23217983d33ddc", "text": "The Internet of Things (IoT) is characterized by heterogeneous devices. They range from very lightweight sensors powered by 8-bit microcontrollers (MCUs) to devices equipped with more powerful, but energy-efficient 32-bit processors. Neither a traditional operating system (OS) currently running on Internet hosts, nor typical OS for sensor networks are capable to fulfill the diverse requirements of such a wide range of devices. To leverage the IoT, redundant development should be avoided and maintenance costs should be reduced. In this paper we revisit the requirements for an OS in the IoT. We introduce RIOT OS, an OS that explicitly considers devices with minimal resources but eases development across a wide range of devices. RIOT OS allows for standard C and C++ programming, provides multi-threading as well as real-time capabilities, and needs only a minimum of 1.5 kB of RAM.", "title": "" } ]
[ { "docid": "7e1e475f5447894a6c246e7d47586c4b", "text": "Between 1983 and 2003 forty accidental autoerotic deaths (all males, 13-79 years old) have been investigated at the Institute of Legal Medicine in Hamburg. Three cases with a rather unusual scenery are described in detail: (1) a 28-year-old fireworker was found hanging under a bridge in a peculiar bound belt system. The autopsy and the reconstruction revealed signs of asphyxiation, feminine underwear, and several layers of plastic clothing. (2) A 16-year-old pupil dressed with feminine plastic and rubber utensils fixed and strangulated himself with an electric wire. (3) A 28-year-old handicapped man suffered from progressive muscular dystrophy and was nearly unable to move. His bizarre sexual fantasies were exaggerating: he induced a nurse to draw plastic bags over his body, close his mouth with plastic strips, and put him in a rubbish container where he died from suffocation.", "title": "" }, { "docid": "77f3dfeba56c3731fda1870ce48e1aca", "text": "The organicist view of society is updated by incorporating concepts from cybernetics, evolutionary theory, and complex adaptive systems. Global society can be seen as an autopoietic network of self-producing components, and therefore as a living system or ‘superorganism’. Miller's living systems theory suggests a list of functional components for society's metabolism and nervous system. Powers' perceptual control theory suggests a model for a distributed control system implemented through the market mechanism. An analysis of the evolution of complex, networked systems points to the general trends of increasing efficiency, differentiation and integration. In society these trends are realized as increasing productivity, decreasing friction, increasing division of labor and outsourcing, and increasing cooperativity, transnational mergers and global institutions. This is accompanied by increasing functional autonomy of individuals and organisations and the decline of hierarchies. The increasing complexity of interactions and instability of certain processes caused by reduced friction necessitate a strengthening of society's capacity for information processing and control, i.e. its nervous system. This is realized by the creation of an intelligent global computer network, capable of sensing, interpreting, learning, thinking, deciding and initiating actions: the ‘global brain’. Individuals are being integrated ever more tightly into this collective intelligence. Although this image may raise worries about a totalitarian system that restricts individual initiaSocial Evolution & History / March 2007 58 tive, the superorganism model points in the opposite direction, towards increasing freedom and diversity. The model further suggests some specific futurological predictions for the coming decades, such as the emergence of an automated distribution network, a computer immune system, and a global consensus about values and standards.", "title": "" }, { "docid": "43bab96fad8afab1ea350e327a8f7aec", "text": "The traditional databases are not capable of handling unstructured data and high volumes of real-time datasets. Diverse datasets are unstructured lead to big data, and it is laborious to store, manage, process, analyze, visualize, and extract the useful insights from these datasets using traditional database approaches. However, many technical aspects exist in refining large heterogeneous datasets in the trend of big data. 
This paper aims to present a generalized view of complete big data system which includes several stages and key components of each stage in processing the big data. In particular, we compare and contrast various distributed file systems and MapReduce-supported NoSQL databases concerning certain parameters in data management process. Further, we present distinct distributed/cloud-based machine learning (ML) tools that play a key role to design, develop and deploy data models. The paper investigates case studies on distributed ML tools such as Mahout, Spark MLlib, and FlinkML. Further, we classify analytics based on the type of data, domain, and application. We distinguish various visualization tools pertaining three parameters: functionality, analysis capabilities, and supported development environment. Furthermore, we systematically investigate big data tools and technologies (Hadoop 3.0, Spark 2.3) including distributed/cloud-based stream processing tools in a comparative approach. Moreover, we discuss functionalities of several SQL Query tools on Hadoop based on 10 parameters. Finally, We present some critical points relevant to research directions and opportunities according to the current trend of big data. Investigating infrastructure tools for big data with recent developments provides a better understanding that how different tools and technologies apply to solve real-life applications.", "title": "" }, { "docid": "c6aa0e5f93d02fdd07e55dfa62aac6bc", "text": "While CNNs naturally lend themselves to densely sampled data, and sophisticated implementations are available, they lack the ability to efficiently process sparse data. In this work we introduce a suite of tools that exploit sparsity in both the feature maps and the filter weights, and thereby allow for significantly lower memory footprints and computation times than the conventional dense framework when processing data with a high degree of sparsity. Our scheme provides (i) an efficient GPU implementation of a convolution layer based on direct, sparse convolution; (ii) a filter step within the convolution layer, which we call attention, that prevents fill-in, i.e., the tendency of convolution to rapidly decrease sparsity, and guarantees an upper bound on the computational resources; and (iii) an adaptation of the backpropagation algorithm, which makes it possible to combine our approach with standard learning frameworks, while still exploiting sparsity in the data and the model.", "title": "" }, { "docid": "894e945c9bb27f5464d1b8f119139afc", "text": "Motion analysis is used in computer vision to understand the behaviour of moving objects in sequences of images. Optimising the interpretation of dynamic biological systems requires accurate and precise motion tracking as well as efficient representations of high-dimensional motion trajectories so that these can be used for prediction tasks. Here we use image sequences of the heart, acquired using cardiac magnetic resonance imaging, to create time-resolved three-dimensional segmentations using a fully convolutional network trained on anatomical shape priors. This dense motion model formed the input to a supervised denoising autoencoder (4Dsurvival), which is a hybrid network consisting of an autoencoder that learns a task-specific latent code representation trained on observed outcome data, yielding a latent representation optimised for survival prediction. To handle right-censored survival outcomes, our network used a Cox partial likelihood loss function. 
In a study of 302 patients the predictive accuracy (quantified by Harrell's C-index) was significantly higher (p = .0012) for our model C=0.75 (95% CI: 0.70 - 0.79) than the human benchmark of C=0.59 (95% CI: 0.53 - 0.65). This work demonstrates how a complex computer vision task using high-dimensional medical image data can efficiently predict human survival.", "title": "" }, { "docid": "0e6bdfbfb3d47042a3a4f38c0260180c", "text": "Named Entity Recognition is an important task but is still relatively new for Vietnamese. It is partly due to the lack of a large annotated corpus. In this paper, we present a systematic approach in building a named entity annotated corpus while at the same time building rules to recognize Vietnamese named entities. The resulting open source system achieves an F-measure of 83%, which is better compared to existing Vietnamese NER systems. © 2010 Springer-Verlag Berlin Heidelberg. Index", "title": "" }, { "docid": "c898f6186ff15dff41dcb7b3376b975d", "text": "The future grid is evolving into a smart distribution network that integrates multiple distributed energy resources ensuring at the same time reliable operation and increased power quality. In recent years, many research papers have addressed the voltage violation problems that arise from the high penetration of distributed generation. In view of the transition to active network management and the increase in the quantity of collected data, distributed control schemes have been proposed that use pervasive communications to deal with the complexity of smart grid. This paper reviews the recent publications on distributed and decentralized voltage control of smart distribution networks, summarizes their control models, and classifies the solution methodologies. Moreover, it comments on issues that should be addressed in the future and the perspectives of industry applications.", "title": "" }, { "docid": "caae1bbaf151f876f102a1e3e6bd5266", "text": "It is well-known that information and communication technologies enable many tasks in the context of precision agriculture. In fact, more and more farmers and food and agriculture companies are using precision agriculture-based systems to enhance not only their products themselves, but also their means of production. Consequently, problems arising from large amounts of data management and processing are arising. It would be very useful to have an infrastructure that allows information and agricultural tasks to be efficiently shared and handled. The cloud computing paradigm offers a solution. In this study, a cloud-based software architecture is proposed with the aim of enabling a complete crop management system to be deployed and validated. Such architecture includes modules developed by using Google App Engine, which allows the information to be easily retrieved and processed and agricultural tasks to be properly defined and planned. Additionally, Google’s Datastore (which ensures a high scalability degree), hosts both information that describes such agricultural tasks and agronomic data. The architecture has been validated in a system that comprises a wireless sensor network with fixed nodes and a mobile node on an unmanned aerial vehicle (UAV), deployed in an agricultural farm in the Region of Murcia (Spain). Such a network allows soil water and plant status to be monitored. The UAV (capable of executing missions defined by an administrator) is useful for acquiring visual information in an autonomous manner (under operator supervision, if needed). 
The system performance has been analysed and results that demonstrate the benefits of using the proposed architecture are detailed.", "title": "" }, { "docid": "414f3647551a4cadeb05143d30230dec", "text": "Future cellular networks are faced with the challenge of coping with significant traffic growth without increasing operating costs. Network virtualization and Software Defined Networking (SDN) are emerging solutions for fine-grained control and management of networks. In this article, we present a new dynamic tunnel switching technique for SDN-based cellular core networks. The technique introduces a virtualized Evolved Packet Core (EPC) gateway with the capability to select and dynamically switch the user plane processing element for each user. Dynamic GPRS Tunneling Protocol (GTP) termination enables switching the mobility anchor of an active session between a cloud environment, where general purpose hardware is in use, and a fast path implemented with dedicated hardware. We describe a prototype implementation of the technique based on an OpenStack cloud, an OpenFlow controller with GTP tunnel switching, and a dedicated fast path element.", "title": "" }, { "docid": "cec9f586803ffc8dc5868f6950967a1f", "text": "This report aims to summarize the field of technological forecasting (TF), its techniques and applications by considering the following questions: • What are the purposes of TF? • Which techniques are used for TF? • What are the strengths and weaknesses of these techniques / how do we evaluate their quality? • Do we need different TF techniques for different purposes/technologies? We also present a brief analysis of how TF is used in practice. We analyze how corporate decisions, such as investing millions of dollars to a new technology like solar energy, are being made and we explore if funding allocation decisions are based on “objective, repeatable, and quantifiable” decision parameters. Throughout the analysis, we compare the bibliometric and semantic-enabled approach of the MIT/MIST Collaborative research project “Technological Forecasting using Data Mining and Semantics” (TFDMS) with the existing studies / practices of TF and where TFDMS fits in and how it will contribute to the general TF field.", "title": "" }, { "docid": "033d7d924481a9429c03bb4bcc7b12fc", "text": "BACKGROUND\nThis study investigates the variations of Heart Rate Variability (HRV) due to a real-life stressor and proposes a classifier based on nonlinear features of HRV for automatic stress detection.\n\n\nMETHODS\n42 students volunteered to participate to the study about HRV and stress. For each student, two recordings were performed: one during an on-going university examination, assumed as a real-life stressor, and one after holidays. Nonlinear analysis of HRV was performed by using Poincaré Plot, Approximate Entropy, Correlation dimension, Detrended Fluctuation Analysis, Recurrence Plot. For statistical comparison, we adopted the Wilcoxon Signed Rank test and for development of a classifier we adopted the Linear Discriminant Analysis (LDA).\n\n\nRESULTS\nAlmost all HRV features measuring heart rate complexity were significantly decreased in the stress session. 
LDA generated a simple classifier based on the two Poincaré Plot parameters and Approximate Entropy, which enables stress detection with a total classification accuracy, a sensitivity and a specificity rate of 90%, 86%, and 95% respectively.\n\n\nCONCLUSIONS\nThe results of the current study suggest that nonlinear HRV analysis using short term ECG recording could be effective in automatically detecting real-life stress condition, such as a university examination.", "title": "" }, { "docid": "e1a1faf5d2121a3d5cd993d0f9c257a5", "text": "This paper is the product of an area-exam study. It intends to explain the concept of ontology in the context of knowledge engineering research, which is a sub-area of artiicial intelligence research. It introduces the state of the art on methodologies and tools for building ontologies. It also tries to point out some possible future directions for ontology research.", "title": "" }, { "docid": "ec97d6daf87e79dfc059a022d38e4ff2", "text": "There are numerous passive contrast sensing autofocus algorithms that are well documented in literature, but some aspects of their comparative performance have not been widely researched. This study explores the relative merits of a set of autofocus algorithms via examining them against a variety of scene conditions. We create a statistics engine that considers a scene taken through a range of focal values and then computes the best focal position using each autofocus algorithm. The process is repeated across a survey of test scenes containing different representative conditions. The results are assessed against focal positions which are determined by manually focusing the scenes. Through examining these results, we then derive conclusions about the relative merits of each autofocus algorithm with respect to the criteria accuracy and unimodality. Our study concludes that the basic 2D spatial gradient measurement approaches yield the best autofocus results in terms of accuracy and unimodality.", "title": "" }, { "docid": "c63ce594f3e940783ae24494a6cb1aa9", "text": "In this paper, a new deep reinforcement learning based augmented general sequence tagging system is proposed. The new system contains two parts: a deep neural network (DNN) based sequence tagging model and a deep reinforcement learning (DRL) based augmented tagger. The augmented tagger helps improve system performance by modeling the data with minority tags. The new system is evaluated on SLU and NLU sequence tagging tasks using ATIS and CoNLL2003 benchmark datasets, to demonstrate the new system’s outstanding performance on general tagging tasks. Evaluated by F1 scores, it shows that the new system outperforms the current state-of-the-art model on ATIS dataset by 1.9 % and that on CoNLL-2003 dataset by 1.4 %.", "title": "" }, { "docid": "64c06bffe4aeff54fbae9d87370e552c", "text": "Social networking sites occupy increasing fields of daily life and act as important communication channels today. But recent research also discusses the dark side of these sites, which expresses in form of stress, envy, addiction or even depression. Nevertheless, there must be a reason why people use social networking sites, even though they face related risks. One reason is human curiosity that tempts users to behave like this. The research on hand presents the impact of curiosity on user acceptance of social networking sites, which is theorized and empirically evaluated by using the technology acceptance model and a quantitative study among Facebook users. 
It further reveals that especially two types of human curiosity, epistemic and interpersonal curiosity, influence perceived usefulness and perceived enjoyment, and with it technology acceptance.", "title": "" },
{ "docid": "5846c9761ec90040feaf71656401d6dd", "text": "Internet of Things (IoT) is an emergent technology that provides a promising opportunity to improve industrial systems by the smart use of physical objects, systems, platforms and applications that contain embedded technology to communicate and share intelligence with each other. In recent years, a great range of industrial IoT applications have been developed and deployed. Among these applications, the Water and Oil & Gas Distribution System is tremendously important considering the huge amount of fluid loss caused by leakages and other possible hydraulic failures. Accordingly, to design an accurate Fluid Distribution Monitoring System (FDMS) represents a critical task that imposes a serious study and an adequate planning. This paper reviews the current state-of-the-art of IoT, major IoT applications in industries and focuses more on the Industrial IoT FDMS (IIoT FDMS).", "title": "" },
{ "docid": "5edc36b296a14950b366e0b3c4ba570c", "text": "The efficient management of data is an important prerequisite for realising the potential of the Internet of Things (IoT). Two issues, given the large volume of structured time-series IoT data, are addressing the difficulties of data integration between heterogeneous Things and improving ingestion and query performance across databases on both resource-constrained Things and in the cloud. In this paper, we examine the structure of public IoT data and discover that the majority exhibit unique flat, wide and numerical characteristics with a mix of evenly and unevenly-spaced time-series. We investigate the advances in time-series databases for telemetry data and combine these findings with microbenchmarks to determine the best compression techniques and storage data structures to inform the design of a novel solution optimised for IoT data. A query translation method with low overhead even on resource-constrained Things allows us to utilise rich data models like the Resource Description Framework (RDF) for interoperability and data integration on top of the optimised storage. Our solution, TritanDB, shows an order of magnitude performance improvement across both Things and cloud hardware on many state-of-the-art databases within IoT scenarios. Finally, we describe how TritanDB supports various analyses of IoT time-series data like forecasting.", "title": "" },
{ "docid": "1d3318884ffe201e50312b68bf51956a", "text": "This paper explores alternate algorithms, reward functions and feature sets for performing multi-document summarization using reinforcement learning with a high focus on reproducibility. We show that ROUGE results can be improved using a unigram and bigram similarity metric when training a learner to select sentences for summarization. Learners are trained to summarize document clusters based on various algorithms and reward functions and then evaluated using ROUGE. Our experiments show a statistically significant improvement of 1.33%, 1.58%, and 2.25% for ROUGE-1, ROUGE-2 and ROUGE-L scores, respectively, when compared with the performance of the state of the art in automatic summarization with reinforcement learning on the DUC2004 dataset.
Furthermore query focused extensions of our approach show an improvement of 1.37% and 2.31% for ROUGE-2 and ROUGE-SU4 respectively over query focused extensions of the state of the art with reinforcement learning on the DUC2006 dataset.", "title": "" }, { "docid": "bc42c1e0bc130ea41af09db0d3ec0c8d", "text": "In Western societies, the population grows old, and we must think about solutions to help them to stay at home in a secure environment. By providing a specific analysis of people behavior, computer vision offers a good solution for healthcare systems, and particularly for fall detection. This demo will show the results of a new method to detect falls using a monocular camera. The main characteristic of this method is the use of head 3D trajectories for fall detection.", "title": "" }, { "docid": "ec37e61fcac2639fa6e605b362f2a08d", "text": "Keyphrases that efficiently summarize a document’s content are used in various document processing and retrieval tasks. Current state-of-the-art techniques for keyphrase extraction operate at a phrase-level and involve scoring candidate phrases based on features of their component words. In this paper, we learn keyphrase taggers for research papers using token-based features incorporating linguistic, surfaceform, and document-structure information through sequence labeling. We experimentally illustrate that using withindocument features alone, our tagger trained with Conditional Random Fields performs on-par with existing state-of-the-art systems that rely on information from Wikipedia and citation networks. In addition, we are also able to harness recent work on feature labeling to seamlessly incorporate expert knowledge and predictions from existing systems to enhance the extraction performance further. We highlight the modeling advantages of our keyphrase taggers and show significant performance improvements on two recently-compiled datasets of keyphrases from Computer Science research papers.", "title": "" } ]
scidocsrr
33115c38cc10bfa1cf19b6d28490f4bb
Word Sense Induction by Community Detection
[ { "docid": "b9cf32ef9364f55c5f59b4c6a9626656", "text": "Graph-based methods have gained attention in many areas of Natural Language Processing (NLP) including Word Sense Disambiguation (WSD), text summarization, keyword extraction and others. Most of the work in these areas formulate their problem in a graph-based setting and apply unsupervised graph clustering to obtain a set of clusters. Recent studies suggest that graphs often exhibit a hierarchical structure that goes beyond simple flat clustering. This paper presents an unsupervised method for inferring the hierarchical grouping of the senses of a polysemous word. The inferred hierarchical structures are applied to the problem of word sense disambiguation, where we show that our method performs significantly better than traditional graph-based methods and agglomerative clustering yielding improvements over state-of-the-art WSD systems based on sense induction.", "title": "" }, { "docid": "f3b176b37ccdd616eace518f9cf3af63", "text": "Word Sense Induction (WSI) is the task of identifying the different senses (uses) of a target word in a given text. Traditional graph-based approaches create and then cluster a graph, in which each vertex corresponds to a word that co-occurs with the target word, and edges between vertices are weighted based on the co-occurrence frequency of their associated words. In contrast, in our approach each vertex corresponds to a collocation that co-occurs with the target word, and edges between vertices are weighted based on the co-occurrence frequency of their associated collocations. A smoothing technique is applied to identify more edges between vertices and the resulting graph is then clustered. Our evaluation under the framework of SemEval-2007 WSI task shows the following: (a) our approach produces less sense-conflating clusters than those produced by traditional graph-based approaches, (b) our approach outperforms the existing state-of-the-art results.", "title": "" } ]
[ { "docid": "41e188c681516862a69fe8e90c58a618", "text": "This paper explores the use of Information-Centric Networking (ICN) to support management operations in IoT deployments, presenting the design of a flexible architecture that allows the appropriate operation of IoT devices within a delimited ICN network domain. Our architecture has been designed with special consideration to naming, interoperation, security and energy-efficiency requirements. We theoretically assess the communication overhead introduced by the security procedures of our solution, both at IoT devices and clients. Additionally, we show the potential of our architecture to accommodate enhanced management applications, focusing on a specific use case, i.e. an information freshness service level agreement application. Finally, we present a proof-of-concept implementation of our architecture over an Arduino board, and we use it to carry out a set of experiments that validate the feasibility of our solution. & 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "3120b862a5957b0deeec5345376b74d0", "text": "This paper deals with automatic cartoon colorization. This is a hard issue, since it is an ill-posed problem that usually requires user intervention to achieve high quality. Motivated by the recent successes in natural image colorization based on deep learning techniques, we investigate the colorization problem at the cartoon domain using Convolutional Neural Network. To our best knowledge, no existing papers or research studies address this problem using deep learning techniques. Here we investigate a deep Convolutional Neural Network based automatic color filling method for cartoons.", "title": "" }, { "docid": "1d162153d7bbaf63991f79aa92eeae6e", "text": "We describe a contextual parser for the Robot Commands Treebank, a new crowdsourced resource. In contrast to previous semantic parsers that select the most-probable parse, we consider the different problem of parsing using additional situational context to disambiguate between different readings of a sentence. We show that multiple semantic analyses can be searched using dynamic programming via interaction with a spatial planner, to guide the parsing process. We are able to parse sentences in near linear-time by ruling out analyses early on that are incompatible with spatial context. We report a 34% upper bound on accuracy, as our planner correctly processes spatial context for 3,394 out of 10,000 sentences. However, our parser achieves a 96.53% exactmatch score for parsing within the subset of sentences recognized by the planner, compared to 82.14% for a non-contextual parser.", "title": "" }, { "docid": "d4a893a151ce4a3dee0e5fde0ba11b7b", "text": "Software-Defined Radio (SDR) technology has already cleared up passive radar applications. Nevertheless, until now, no work has pointed how this flexible radio could fully and directly exploit pulsed radar signals. This paper aims at introducing this field of study presenting not only an SDR-based radar-detector but also how it could be conceived on a low power consumption device as a tablet, which would make convenient a passive network to identify and localize aircraft as a redundancy to the conventional air traffic control in adverse situations. After a brief approach of the main features of the equipment, as well as of the developed processing script, indoor experiments took place. 
Their results demonstrate that the processing of pulsed radar signal allows emitters to be identified when a local database is confronted. All this commitment has contributed to a greater proposal of an Electronic Intelligence (ELINT) or Electronic Support Measures (ESM) system embedded on a tablet, presenting characteristics of portability and furtiveness. This study is suggested for the areas of Software-Defined Radio, Electronic Warfare, Electromagnetic Devices and Radar Signal Processing.", "title": "" }, { "docid": "9245a5a3daad7fbce9416b1dedb9e9ab", "text": "BACKGROUND\nDespite the growing epidemic of heart failure with preserved ejection fraction (HFpEF), no valid measure of patients' health status (symptoms, function, and quality of life) exists. We evaluated the Kansas City Cardiomyopathy Questionnaire (KCCQ), a validated measure of HF with reduced EF, in patients with HFpEF.\n\n\nMETHODS AND RESULTS\nUsing a prospective HF registry, we dichotomized patients into HF with reduced EF (EF≤ 40) and HFpEF (EF≥50). The associations between New York Heart Association class, a commonly used criterion standard, and KCCQ Overall Summary and Total Symptom domains were evaluated using Spearman correlations and 2-way ANOVA with differences between patients with HF with reduced EF and HFpEF tested with interaction terms. Predictive validity of the KCCQ Overall Summary scores was assessed with Kaplan-Meier curves for death and all-cause hospitalization. Covariate adjustment was made using Cox proportional hazards models. Internal reliability was assessed with Cronbach's α. Among 849 patients, 200 (24%) had HFpEF. KCCQ summary scores were strongly associated with New York Heart Association class in both patients with HFpEF (r=-0.62; P<0.001) and HF with reduced EF (r=-0.55; P=0.27 for interaction). One-year event-free rates by KCCQ category among patients with HFpEF were 0 to 25=13.8%, 26 to 50=59.1%, 51 to 75=73.8%, and 76 to 100=77.8% (log rank P<0.001), with no significant interaction by EF (P=0.37). The KCCQ domains demonstrated high internal consistency among patients with HFpEF (Cronbach's α=0.96 for overall summary and ≥0.69 in all subdomains).\n\n\nCONCLUSIONS\nAmong patients with HFpEF, the KCCQ seems to be a valid and reliable measure of health status and offers excellent prognostic ability. Future studies should extend and replicate our findings, including the establishment of its responsiveness to clinical change.", "title": "" }, { "docid": "2603c07864b92c6723b40c83d3c216b9", "text": "Background: A study was undertaken to record exacerbations and health resource use in patients with COPD during 6 months of treatment with tiotropium, salmeterol, or matching placebos. Methods: Patients with COPD were enrolled in two 6-month randomised, placebo controlled, double blind, double dummy studies of tiotropium 18 μg once daily via HandiHaler or salmeterol 50 μg twice daily via a metered dose inhaler. The two trials were combined for analysis of heath outcomes consisting of exacerbations, health resource use, dyspnoea (assessed by the transitional dyspnoea index, TDI), health related quality of life (assessed by St George’s Respiratory Questionnaire, SGRQ), and spirometry. Results: 1207 patients participated in the study (tiotropium 402, salmeterol 405, placebo 400). Compared with placebo, tiotropium but not salmeterol was associated with a significant delay in the time to onset of the first exacerbation. 
Fewer COPD exacerbations/patient year occurred in the tiotropium group (1.07) than in the placebo group (1.49, p<0.05); the salmeterol group (1.23 events/year) did not differ from placebo. The tiotropium group had 0.10 hospital admissions per patient year for COPD exacerbations compared with 0.17 for salmeterol and 0.15 for placebo (not statistically different). For all causes (respiratory and non-respiratory) tiotropium, but not salmeterol, was associated with fewer hospital admissions while both groups had fewer days in hospital than the placebo group. The number of days during which patients were unable to perform their usual daily activities was lowest in the tiotropium group (tiotropium 8.3 (0.8), salmeterol 11.1 (0.8), placebo 10.9 (0.8), p<0.05). SGRQ total score improved by 4.2 (0.7), 2.8 (0.7) and 1.5 (0.7) units during the 6 month trial for the tiotropium, salmeterol and placebo groups, respectively (p<0.01 tiotropium v placebo). Compared with placebo, TDI focal score improved in both the tiotropium group (1.1 (0.3) units, p<0.001) and the salmeterol group (0.7 (0.3) units, p<0.05). Evaluation of morning pre-dose FEV1, peak FEV1 and mean FEV1 (0–3 hours) showed that tiotropium was superior to salmeterol while both active drugs were more effective than placebo. Conclusions: Exacerbations of COPD and health resource usage were positively affected by daily treatment with tiotropium. With the exception of the number of hospital days associated with all causes, salmeterol twice daily resulted in no significant changes compared with placebo. Tiotropium also improved health related quality of life, dyspnoea, and lung function in patients with COPD.", "title": "" },
{ "docid": "4a9da1575b954990f98e6807deae469e", "text": "Recently, there has been considerable debate concerning key sizes for public key based cryptographic methods. Included in the debate have been considerations about equivalent key sizes for different methods and considerations about the minimum required key size for different methods. In this paper we propose a method of analyzing key sizes based upon the value of the data being protected and the cost of breaking keys. I. Introduction. A. Why is key size important? In order to keep transactions based upon public key cryptography secure, one must ensure that the underlying keys are sufficiently large as to render the best possible attack infeasible. However, this really just begs the question as one is now left with the task of defining 'infeasible'. Does this mean infeasible given access to (say) most of the Internet to do the computations? Does it mean infeasible to a large adversary with a large (but unspecified) budget to buy the hardware for an attack? Does it mean infeasible with what hardware might be obtained in practice by utilizing the Internet? Is it reasonable to assume that if utilizing the entire Internet in a key breaking effort makes a key vulnerable that such an attack might actually be conducted? If a public effort involving a substantial fraction of the Internet breaks a single key, does this mean that similar sized keys are unsafe? Does one need to be concerned about such public efforts or does one only need to be concerned about possible private, surreptitious efforts? After all, if a public attack is known on a particular key, it is easy to change that key. We shall attempt to address these issues within this paper.
Number 13, April 2000. Bulletin: News and Advice from RSA Laboratories. I. Introduction. II. Methods of Attack. III. Historical Results and the RSA Challenge. IV. Security Estimates.", "title": "" },
{ "docid": "956cf3bf67aa60391b7c96162a5013bd", "text": "Transferring artistic styles onto everyday photographs has become an extremely popular task in both academia and industry. Recently, offline training has replaced online iterative optimization, enabling nearly real-time stylization. When those stylization networks are applied directly to high-resolution images, however, the style of localized regions often appears less similar to the desired artistic style. This is because the transfer process fails to capture small, intricate textures and maintain correct texture scales of the artworks. Here we propose a multimodal convolutional neural network that takes into consideration faithful representations of both color and luminance channels, and performs stylization hierarchically with multiple losses of increasing scales. Compared to state-of-the-art networks, our network can also perform style transfer in nearly real-time by performing much more sophisticated training offline. By properly handling style and texture cues at multiple scales using several modalities, we can transfer not just large-scale, obvious style cues but also subtle, exquisite ones. That is, our scheme can generate results that are visually pleasing and more similar to multiple desired artistic styles with color and texture cues at multiple scales.", "title": "" },
{ "docid": "3105a48f0b8e45857e8d48e26b258e04", "text": "Dominated by the behavioral science approach for a long time, information systems research increasingly acknowledges design science as a complementary approach. While primarily information systems instantiations, but also constructs and models have been discussed quite comprehensively, the design of methods is addressed rarely. But methods appear to be of utmost importance particularly for organizational engineering. This paper justifies method construction as a core approach to organizational engineering. Based on a discussion of fundamental scientific positions in general and approaches to information systems research in particular, appropriate conceptualizations of 'method' and 'method construction' are presented. These conceptualizations are then discussed regarding their capability of supporting organizational engineering. Our analysis is located on a meta level: Method construction is conceptualized and integrated from a large number of references. Method instantiations or method engineering approaches however are only referenced and not described in detail.", "title": "" },
{ "docid": "b39904ccd087e59794cf2cc02e5d2644", "text": "In this paper, we propose a novel walking method for torque controlled robots. The method is able to produce a wide range of speeds without requiring off-line optimizations and re-tuning of parameters. We use a quadratic whole-body optimization method running online which generates joint torques, given desired Cartesian accelerations of center of mass and feet. Using a dynamics model of the robot inside this optimizer, we ensure both compliance and tracking, required for fast locomotion. We have designed a foot-step planner that uses a linear inverted pendulum as simplified robot internal model. This planner is formulated as a quadratic convex problem which optimizes future steps of the robot.
Fast libraries help us performing these calculations online. With very few parameters to tune and no perception, our method shows notable robustness against strong external pushes, relatively large terrain variations, internal noises, model errors and also delayed communication.", "title": "" }, { "docid": "d3b0a831715bd2f2de9d94811bdd47e7", "text": "Aspect Term Extraction (ATE) identifies opinionated aspect terms in texts and is one of the tasks in the SemEval Aspect Based Sentiment Analysis (ABSA) contest. The small amount of available datasets for supervised ATE and the costly human annotation for aspect term labelling give rise to the need for unsupervised ATE. In this paper, we introduce an architecture that achieves top-ranking performance for supervised ATE. Moreover, it can be used efficiently as feature extractor and classifier for unsupervised ATE. Our second contribution is a method to automatically construct datasets for ATE. We train a classifier on our automatically labelled datasets and evaluate it on the human annotated SemEval ABSA test sets. Compared to a strong rule-based baseline, we obtain a dramatically higher F-score and attain precision values above 80%. Our unsupervised method beats the supervised ABSA baseline from SemEval, while preserving high precision scores.", "title": "" }, { "docid": "eab86ab18bd47e883b184dcd85f366cd", "text": "We study corporate bond default rates using an extensive new data set spanning the 1866–2008 period. We find that the corporate bond market has repeatedly suffered clustered default events much worse than those experienced during the Great Depression. For example, during the railroad crisis of 1873–1875, total defaults amounted to 36% of the par value of the entire corporate bond market. Using a regime-switching model, we examine the extent to which default rates can be forecast by financial and macroeconomic variables. We find that stock returns, stock return volatility, and changes in GDP are strong predictors of default rates. Surprisingly, however, credit spreads are not. Over the long term, credit spreads are roughly twice as large as default losses, resulting in an average credit risk premium of about 80 basis points. We also find that credit spreads do not adjust in response to realized default rates. & 2011 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "897962874a43ee19e3f50f431d4c449e", "text": "According to Dennett, the same system may be described using a ‘physical’ (mechanical) explanatory stance, or using an ‘intentional’ (beliefand goalbased) explanatory stance. Humans tend to find the physical stance more helpful for certain systems, such as planets orbiting a star, and the intentional stance for others, such as living animals. We define a formal counterpart of physical and intentional stances within computational theory: a description of a system as either a device, or an agent, with the key difference being that ‘devices’ are directly described in terms of an input-output mapping, while ‘agents’ are described in terms of the function they optimise. Bayes’ rule can then be applied to calculate the subjective probability of a system being a device or an agent, based only on its behaviour. We illustrate this using the trajectories of an object in a toy grid-world domain.", "title": "" }, { "docid": "a1d9c897f926fa4cc45ebc6209deb6bc", "text": "This paper addresses the relationship between the ego, id, and internal objects. 
While ego psychology views the ego as autonomous of the drives, a less well-known alternative position views the ego as constituted by the drives. Based on Freud's ego-instinct account, this position has developed into a school of thought which postulates that the drives act as knowers. Given that there are multiple drives, this position proposes that personality is constituted by multiple knowers. Following on from Freud, the ego is viewed as a composite sub-set of the instinctual drives (ego-drives), whereas those drives cut off from expression form the id. The nature of the \"self\" is developed in terms of identification and the possibility of multiple personalities is also established. This account is then extended to object-relations and the explanatory value of the ego-drive account is discussed in terms of the addressing the nature of ego-structures and the dynamic nature of internal objects. Finally, the impact of psychological conflict and the significance of repression for understanding the nature of splits within the psyche are also discussed.", "title": "" }, { "docid": "ee4288bcddc046ae5e9bcc330264dc4f", "text": "Emerging recognition of two fundamental errors underpinning past polices for natural resource issues heralds awareness of the need for a worldwide fundamental change in thinking and in practice of environmental management. The first error has been an implicit assumption that ecosystem responses to human use are linear, predictable and controllable. The second has been an assumption that human and natural systems can be treated independently. However, evidence that has been accumulating in diverse regions all over the world suggests that natural and social systems behave in nonlinear ways, exhibit marked thresholds in their dynamics, and that social-ecological systems act as strongly coupled, complex and evolving integrated systems. This article is a summary of a report prepared on behalf of the Environmental Advisory Council to the Swedish Government, as input to the process of the World Summit on Sustainable Development (WSSD) in Johannesburg, South Africa in 26 August 4 September 2002. We use the concept of resilience--the capacity to buffer change, learn and develop--as a framework for understanding how to sustain and enhance adaptive capacity in a complex world of rapid transformations. Two useful tools for resilience-building in social-ecological systems are structured scenarios and active adaptive management. These tools require and facilitate a social context with flexible and open institutions and multi-level governance systems that allow for learning and increase adaptive capacity without foreclosing future development options.", "title": "" }, { "docid": "e1485bddbab0c3fa952d045697ff2112", "text": "The diversity of an ensemble of classifiers is known to be an important factor in determining its generalization error. We present a new method for generating ensembles, Decorate (Diverse Ensemble Creation by Oppositional Relabeling of Artificial Training Examples), that directly constructs diverse hypotheses using additional artificially-constructed training examples. The technique is a simple, general meta-learner that can use any strong learner as a base classifier to build diverse committees. Experimental results using decision-tree induction as a base learner demonstrate that this approach consistently achieves higher predictive accuracy than the base classifier, Bagging and Random Forests. 
Decorate also obtains higher accuracy than Boosting on small training sets, and achieves comparable performance on larger training sets.", "title": "" }, { "docid": "b84c233a32dfe8fd004ad33a6565df9c", "text": "Graph databases with a custom non-relational backend promote themselves to outperform relational databases in answering queries on large graphs. Recent empirical studies show that this claim is not always true. However, these studies focus only on pattern matching queries and neglect analytical queries used in practice such as shortest path, diameter, degree centrality or closeness centrality. In addition, there is no distinction between different types of pattern matching queries. In this paper, we introduce a set of analytical and pattern matching queries, and evaluate them in Neo4j and a market-leading commercial relational database system. We show that the relational database system outperforms Neo4j for our analytical queries and that Neo4j is faster for queries that do not filter on specific edge types.", "title": "" }, { "docid": "4bf78f78c76f65bbdc856e1290311cd1", "text": "The capacity to rectify DNA double-strand breaks (DSBs) is crucial for the survival of all species. DSBs can be repaired either by homologous recombination (HR) or non-homologous end joining (NHEJ). The long-standing notion that bacteria rely solely on HR for DSB repair has been overturned by evidence that mycobacteria and other genera have an NHEJ system that depends on a dedicated DNA ligase, LigD, and the DNA-end-binding protein Ku. Recent studies have illuminated the role of NHEJ in protecting the bacterial chromosome against DSBs and other clastogenic stresses. There is also emerging evidence of functional crosstalk between bacterial NHEJ proteins and components of other DNA-repair pathways. Although still a young field, bacterial NHEJ promises to teach us a great deal about the nexus of DNA repair and bacterial pathogenesis.", "title": "" }, { "docid": "340f64ed182a54ef617d7aa2ffeac138", "text": "Compared with animals, plants generally possess a high degree of developmental plasticity and display various types of tissue or organ regeneration. This regenerative capacity can be enhanced by exogenously supplied plant hormones in vitro, wherein the balance between auxin and cytokinin determines the developmental fate of regenerating organs. Accumulating evidence suggests that some forms of plant regeneration involve reprogramming of differentiated somatic cells, whereas others are induced through the activation of relatively undifferentiated cells in somatic tissues. We summarize the current understanding of how plants control various types of regeneration and discuss how developmental and environmental constraints influence these regulatory mechanisms.", "title": "" }, { "docid": "3fe2f080342154fca61b3c1bb4ee8aba", "text": "In this paper, we apply imitation learning to develop drivers for The Open Racing Car Simulator (TORCS). Our approach can be classified as a direct method in that it applies supervised learning to learn car racing behaviors from the data collected from other drivers. In the literature, this approach is known to have led to extremely poor performance with drivers capable of completing only very small parts of a track. 
In this paper we show that, by using high-level information about the track ahead of the car and by predicting high-level actions, it is possible to develop drivers whose performance is in some cases only 15% lower than that of the fastest driver available in TORCS. Our experimental results suggest that our approach can be effective in developing drivers with good performance on non-trivial tracks using a very limited amount of data and computational resources. We analyze the driving behavior of the controllers developed with our approach and identify perceptual aliasing as one of the factors that can limit its performance.", "title": "" } ]
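The TORCS imitation-learning passage above describes behaviour cloning from another driver's logs: high-level track features in, high-level actions out. The sketch below is a minimal illustration of that idea with a ridge-regularised linear regressor; the feature layout, the action encoding and all numbers are invented for the example and are not taken from the paper.

```python
import numpy as np

def fit_driver(features, actions, ridge=1e-3):
    """Fit a linear behaviour-cloning policy by regularised least squares."""
    X = np.hstack([features, np.ones((len(features), 1))])  # add a bias column
    A = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ actions)

def act(weights, feature_row):
    """Predict a high-level action (target speed, steering) for one observation."""
    return np.append(feature_row, 1.0) @ weights

# Toy demonstration logs standing in for data collected from another driver:
# five hypothetical look-ahead range readings plus the current speed.
rng = np.random.default_rng(0)
feats = rng.uniform(0.0, 100.0, size=(500, 6))
acts = np.column_stack([
    0.8 * feats[:, 2],                   # drive faster when the road ahead is open
    0.01 * (feats[:, 0] - feats[:, 4]),  # steer toward the more open side
])
weights = fit_driver(feats, acts)
print(act(weights, feats[0]), acts[0])
```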
scidocsrr
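The device/agent passage in the block above ends with a Bayes-rule step for the subjective probability that a system is an agent rather than a device, given only its behaviour. A minimal numerical sketch of that step follows; the priors and likelihoods are made-up placeholders, not values from the paper.

```python
# Prior over the two stances and the likelihood of the observed behaviour
# under each stance -- all four numbers are assumptions for illustration.
p_agent, p_device = 0.5, 0.5
lik_agent, lik_device = 0.02, 0.0005   # P(observed trajectory | stance)

evidence = p_agent * lik_agent + p_device * lik_device
posterior_agent = p_agent * lik_agent / evidence
print(f"P(agent | behaviour) = {posterior_agent:.3f}")   # ~0.976 with these numbers
```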
51baa8f8d538dcfe131ffe1cad8a7cfe
Research on Combining Scrum with CMMI in Small and Medium Organizations
[ { "docid": "0cf1f63fd39c8c74465fad866958dac6", "text": "Software development organizations that have been employing capability maturity models, such as SW-CMM or CMMI for improving their processes are now increasingly interested in the possibility of adopting agile development methods. In the context of project management, what can we say about Scrum’s alignment with CMMI? The aim of our paper is to present the mapping between CMMI and the agile method Scrum, showing major gaps between them and identifying how organizations are adopting complementary practices in their projects to make these two approaches more compliant. This is useful for organizations that have a plan-driven process based on the CMMI model and are planning to improve the agility of processes or to help organizations to define a new project management framework based on both CMMI and Scrum practices.", "title": "" } ]
[ { "docid": "76e8496e4ce5ce940673e01ff04f088d", "text": "A fundamental fact about polynomial interpolation is that k evaluations of a degree-(k-1) polynomial f are sufficient to determine f. This is also necessary in a strong sense: given k-1 evaluations, we learn nothing about the value of f on any k'th point. In this paper, we study a variant of the polynomial interpolation problem. Instead of querying entire evaluations of f (which are elements of a large field F), we are allowed to query partial evaluations; that is, each evaluation delivers a few elements from a small subfield of F, rather than a single element from F. We show that in this model, one can do significantly better than in the traditional setting, in terms of the amount of information required to determine the missing evaluation. More precisely, we show that only O(k) bits are necessary to recover a missing evaluation. In contrast, the traditional method of looking at k evaluations requires Omega(k log(k)) bits. We also show that our result is optimal for linear methods, even up to the leading constants. Our motivation comes from the use of Reed-Solomon (RS) codes for distributed storage systems, in particular for the exact repair problem. The traditional use of RS codes in this setting is analogous to the traditional interpolation problem. Each node in a system stores an evaluation of f, and if one node fails we can recover it by reading k other nodes. However, each node is free to send less information, leading to the modified problem above. The quickly-developing field of regenerating codes has yielded several codes which take advantage of this freedom. However, these codes are not RS codes, and RS codes are still often used in practice; in 2011, Dimakis et al. asked how well RS codes could perform in this setting. Our results imply that RS codes can also take advantage of this freedom to download partial symbols. In some parameter regimes---those with small levels of sub-packetization---our scheme for RS codes outperforms all known regenerating codes. Even with a high degree of sub-packetization, our methods give non-trivial schemes, and we give an improved repair scheme for a specific (14,10)-RS code used in the Facebook Hadoop Analytics cluster.", "title": "" }, { "docid": "f0c415dfb22032064e8cdb0ec76403b7", "text": "In this paper, an impedance control scheme for aerial robotic manipulators is proposed, with the aim of reducing the end-effector interaction forces with the environment. The proposed control has a multi-level architecture, in detail the outer loop is composed by a trajectory generator and an impedance filter that modifies the trajectory to achieve a complaint behaviour in the end-effector space; a middle loop is used to generate the joint space variables through an inverse kinematic algorithm; finally the inner loop is aimed at ensuring the motion tracking. The proposed control architecture has been experimentally tested.", "title": "" }, { "docid": "cc70efd881626a16ab23b9305e67adce", "text": "Many different sciences have developed many different tests to describe and characterise spatial point data. For example, all the trees in a given area may be mapped such that their x, y co-ordinates and other variables, or ‘marks’, (e.g. species, size) might be recorded. Statistical techniques can be used to explore interactions between events at different length scales and interactions between different types of events in the same area. 
SpPack is a menu-driven add-in for Excel written in Visual Basic for Applications (VBA) that provides a range of statistical analyses for spatial point data. These include simple nearest-neighbour-derived tests and more sophisticated second-order statistics such as Ripley’s K-function and the neighbourhood density function (NDF). Some simple grid or quadrat-based statistics are also calculated. The application of the SpPack add-in is demonstrated for artificially generated event sets with known properties and for a multi-type ecological event set.  2003 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "72196b0a2eed5e9747d90593cdd0684d", "text": "Advanced silicon (Si) node technology development is moving to 10/7nm technology and pursuing die size reduction, efficiency enhancement and lower power consumption for mobile applications in the semiconductor industry. The flip chip chip scale package (fcCSP) has been viewed as an attractive solution to achieve the miniaturization of die size, finer bump pitch, finer line width and spacing (LW/LS) substrate requirements, and is widely adopted in mobile devices to satisfy the increasing demands of higher performance, higher bandwidth, and lower power consumption as well as multiple functions. The utilization of mass reflow (MR) chip attach process in a fcCSP with copper (Cu) pillar bumps, embedded trace substrate (ETS) technology and molded underfill (MUF) is usually viewed as the cost-efficient solution. However, when finer bump pitch and LW/LS with an escaped trace are designed in flip chip MR process, a higher risk of a bump to trace short can occur. In order to reduce the risk of bump to trace short as well as extremely low-k (ELK) damage in a fcCSP with advanced Si node, the thermo-compression bonding (TCB) and TCB with non-conductive paste (TCNCP) have been adopted, although both methodologies will cause a higher assembly cost due to the lower units per hour (UPH) assembly process. For the purpose of delivering a cost-effective chip attach process as compared to TCB/TCNCP methodologies as well as reducing the risk of bump to trace as compared to the MR process, laser assisted bonding (LAB) chip attach methodology was studied in a 15x15mm fcCSP with 10nm backend process daisy-chain die for this paper. Using LAB chip attach technology can increase the UPH by more than 2-times over TCB and increase the UPH 5-times compared to TCNCP. To realize the ELK performance of a 10nm fcCSP with fine bump pitch of $60 \\mu \\mathrm{m}$ and $90 \\mu \\mathrm{m}$ as well as 2-layer ETS with two escaped traces design, the quick temperature cycling (QTC) test was performed after the LAB chip attach process. The comparison of polyimide (PI) layer Cu pillar bumps to non-PI Cu pillar bumps (without a PI layer) will be discussed to estimate the 10nm ELK performance. The evaluated result shows that the utilization of LAB can not only achieve a bump pitch reduction with a finer LW/LS substrate with escaped traces in the design, but it also validates ELK performance and Si node reduction. Therefore, the illustrated LAB chip attach processes examined here can guarantee the assembly yield with less ELK damage risk in a 10nm fcCSP with finer bump pitch and substrate finer LW/LS design in the future.", "title": "" }, { "docid": "154528ab93e89abe965b6abd93af6a13", "text": "We investigate the geometry of that function in the plane or 3-space, which associates to each point the square of the shortest distance to a given curve or surface. 
Particular emphasis is put on second order Taylor approximants and other local quadratic approximants. Their key role in a variety of geometric optimization algorithms is illustrated at hand of registration in Computer Vision and surface approximation.", "title": "" }, { "docid": "e2ea233e4baaf3c76337c779060531cf", "text": "OBJECTIVES\nAnticoagulant and antiplatelet medications are known to increase the risk and severity of traumatic intracranial hemorrhage (tICH), even with minor head trauma. Most studies on bleeding propensity with head trauma are retrospective, are based on trauma registries, or include heterogeneous mechanisms of injury. The goal of this study was to determine the rate of tICH from only a common low-acuity mechanism of injury, that of a ground-level fall, in patients taking one or more of the following antiplatelet or anticoagulant medications: aspirin, warfarin, prasugrel, ticagrelor, dabigatran, rivaroxaban, apixaban, or enoxaparin.\n\n\nMETHODS\nThis was a prospective cohort study conducted at a Level I tertiary care trauma center of consecutive patients meeting the inclusion criteria of a ground-level fall with head trauma as affirmed by the treating clinician, a computed tomography (CT) head obtained, and taking and one of the above antiplatelet or anticoagulants. Patients were identified prospectively through electronic screening with confirmatory chart review. Emergency department charts were abstracted without subsequent knowledge of the hospital course. Patients transferred with a known abnormal CT head were excluded. Primary outcome was rate of tICH on initial CT head. Rates with 95% confidence intervals (CIs) were compared.\n\n\nRESULTS\nOver 30 months, we enrolled 939 subjects. The mean ± SD age was 78.3 ± 11.9 years and 44.6% were male. There were a total of 33 patients with tICH (3.5%, 95% CI = 2.5%-4.9%). Antiplatelets had a rate of tICH of 4.3% (95% CI = 3.0%-6.2%) compared to anticoagulants with a rate of 1.7% (95% CI = 0.4%-4.5%). Aspirin without other agents had an tICH rate of 4.6% (95% CI = 3.2%-6.6%); of these, 81.5% were taking low-dose 81 mg aspirin. Two patients received a craniotomy (one taking aspirin, one taking warfarin). There were four deaths (three taking aspirin, one taking warfarin). Most (72.7%) subjects with tICH were discharged home or to a rehabilitation facility. There were no tICH in 31 subjects taking a direct oral anticoagulant. CIs were overlapping for the groups.\n\n\nCONCLUSION\nThere is a low incidence of clinically significant tICH with a ground-level fall in head trauma in patients taking an anticoagulant or antiplatelet medication. There was no statistical difference in rate of tICH between antiplatelet and anticoagulants, which is unanticipated and counterintuitive as most literature and teaching suggests a higher rate with anticoagulants. A larger data set is needed to determine if small differences between the groups exist.", "title": "" }, { "docid": "aa4887f5671a23580a5c48b8f0508f74", "text": "Thrombocytopenia–absent radius syndrome is a rare autosomal recessive disorder characterized by megakaryocytic thrombocytopenia and longitudinal limb deficiencies mostly affecting the radial ray. Most patients are compound heterozygotes for a 200 kb interstitial microdeletion in 1q21.1 and a hypomorphic allele in RBM8A, mapping in the deleted segment. 
At the moment, the complete molecular characterization of thrombocytopenia–absent radius syndrome is limited to a handful of patients mostly ascertained in the pediatric age We report on a fetus with bilateral upper limb deficiency found at standard prenatal ultrasound examination. The fetus had bilateral radial agenesis and humeral hypo/aplasia with intact thumbs, micrognathia and urinary anomalies, indicating thrombocytopenia–absent radius syndrome. Molecular studies demonstrated compound heterozygosity for the 1q21.1 microdeletion and the RBM8A rs139428292 variant at the hemizygous state, inherited from the mother and father, respectively The molecular information allowed prenatal diagnosis in the following pregnancy resulting in the birth of a healthy carrier female. A review was carried out with the attempt to the trace the fetal ultrasound presentation of thrombocytopenia–absent radius syndrome and discussing opportunities for second-tier molecular studies within a multidisciplinary setting.", "title": "" }, { "docid": "7fb075251c846b7521abaa32a82b9918", "text": "Keystroke dynamics-the analysis of typing rhythms to discriminate among users-has been proposed for detecting impostors (i.e., both insiders and external attackers). Since many anomaly-detection algorithms have been proposed for this task, it is natural to ask which are the top performers (e.g., to identify promising research directions). Unfortunately, we cannot conduct a sound comparison of detectors using the results in the literature because evaluation conditions are inconsistent across studies. Our objective is to collect a keystroke-dynamics data set, to develop a repeatable evaluation procedure, and to measure the performance of a range of detectors so that the results can be compared soundly. We collected data from 51 subjects typing 400 passwords each, and we implemented and evaluated 14 detectors from the keystroke-dynamics and pattern-recognition literature. The three top-performing detectors achieve equal-error rates between 9.6% and 10.2%. The results-along with the shared data and evaluation methodology-constitute a benchmark for comparing detectors and measuring progress.", "title": "" }, { "docid": "b10b42c8fbe13ad8d1d04aec9df12a00", "text": "As an alternative strategy to antibiotic use in aquatic disease management, probiotics have recently attracted extensive attention in aquaculture. However, the use of terrestrial bacterial species as probiotics for aquaculture has had limited success, as bacterial strain characteristics are dependent upon the environment in which they thrive. Therefore, isolating potential probiotic bacteria from the marine environment in which they grow optimally is a better approach. Bacteria that have been used successfully as probiotics belong to the genus Vibrio and Bacillus, and the species Thalassobacter utilis. Most researchers have isolated these probiotic strains from shrimp culture water, or from the intestine of different penaeid species. The use of probiotic bacteria, based on the principle of competitive exclusion, and the use of immunostimulants are two of the most promising preventive methods developed in the fight against diseases during the last few years. It also noticed that probiotic bacteria could produce some digestive enzymes, which might improve the digestion of shrimp, thus enhancing the ability of stress resistance and health of the shrimp. 
However, the probiotics in aquatic environment remain to be a controversial concept, as there was no authentic evidence / real environment demonstrations on the successful use of probiotics and their mechanisms of action in vivo. The present review highlights the potential sources of probiotics, mechanism of action, diversity of probiotic microbes and challenges of probiotic usage in shrimp aquaculture.", "title": "" }, { "docid": "713d446993c0f7937eaa60ef0f8a34b8", "text": "Abiotic stress induces several changes in plants at physiological and molecular level. Plants have evolved regulatory mechanisms guided towards establishment of stress tolerance in which epigenetic modifications play a pivotal role. We provide examples of gene expression changes that are brought about by conversion of active chromatin to silent heterochromatin and vice versa. Methylation of CG sites and specific modification of histone tail determine whether a particular locus is transcriptionally active or silent. We present a lucid review of epigenetic machinery and epigenetic alterations involving DNA methylation, histone tail modifications, chromatin remodeling, and RNA directed epigenetic changes.", "title": "" }, { "docid": "99c088268633c19a8c4789c58c4c9aca", "text": "Executing agile quadrotor maneuvers with cablesuspended payloads is a challenging problem and complications induced by the dynamics typically require trajectory optimization. State-of-the-art approaches often need significant computation time and complex parameter tuning. We present a novel dynamical model and a fast trajectory optimization algorithm for quadrotors with a cable-suspended payload. Our first contribution is a new formulation of the suspended payload behavior, modeled as a link attached to the quadrotor with a combination of two revolute joints and a prismatic joint, all being passive. Differently from state of the art, we do not require the use of hybrid modes depending on the cable tension. Our second contribution is a fast trajectory optimization technique for the aforementioned system. Our model enables us to pose the trajectory optimization problem as a Mathematical Program with Complementarity Constraints (MPCC). Desired behaviors of the system (e.g., obstacle avoidance) can easily be formulated within this framework. We show that our approach outperforms the state of the art in terms of computation speed and guarantees feasibility of the trajectory with respect to both the system dynamics and control input saturation, while utilizing far fewer tuning parameters. We experimentally validate our approach on a real quadrotor showing that our method generalizes to a variety of tasks, such as flying through desired waypoints while avoiding obstacles, or throwing the payload toward a desired target. To the best of our knowledge, this is the first time that three-dimensional, agile maneuvers exploiting the system dynamics have been achieved on quadrotors with a cable-suspended payload. SUPPLEMENTARY MATERIAL This paper is accompanied by a video showcasing the experiments: https://youtu.be/s9zb5MRXiHA", "title": "" }, { "docid": "7a398cae0109297a19195691505b8caf", "text": "There is a growing interest in models that can learn from unlabelled speech paired with visual context. This setting is relevant for low-resource speech processing, robotics, and human language acquisition research. Here, we study how a visually grounded speech model, trained on images of scenes paired with spoken captions, captures aspects of semantics. 
We use an external image tagger to generate soft text labels from images, which serve as targets for a neural model that maps untranscribed speech to semantic keyword labels. We introduce a newly collected data set of human semantic relevance judgements and an associated task, semantic speech retrieval, where the goal is to search for spoken utterances that are semantically relevant to a given text query. Without seeing any text, the model trained on parallel speech and images achieves a precision of almost 60% on its top ten semantic retrievals. Compared to a supervised model trained on transcriptions, our model matches human judgements better by some measures, especially in retrieving non-verbatim semantic matches. We perform an extensive analysis of the model and its resulting representations.", "title": "" }, { "docid": "31c2dc8045f43c7bf1aa045e0eb3b9ad", "text": "This paper addresses the task of functional annotation of genes from biomedical literature. We view this task as a hierarchical text categorization problem with Gene Ontology as a class hierarchy. We present a novel global hierarchical learning approach that takes into account the semantics of a class hierarchy. This algorithm with AdaBoost as the underlying learning procedure significantly outperforms the corresponding “flat” approach, i.e. the approach that does not consider any hierarchical information. In addition, we propose a novel hierarchical evaluation measure that gives credit to partially correct classification and discriminates errors by both distance and depth in a class hierarchy.", "title": "" }, { "docid": "5016ab74ebd9c1359e8dec80ee220bcf", "text": "The possibility of communication between plants was proposed nearly 20 years ago, although previous demonstrations have suffered from methodological problems and have not been widely accepted. Here we report the first rigorous, experimental evidence demonstrating that undamaged plants respond to cues released by neighbors to induce higher levels of resistance against herbivores in nature. Sagebrush plants that were clipped in the field released a pulse of an epimer of methyl jasmonate that has been shown to be a volatile signal capable of inducing resistance in wild tobacco. Wild tobacco plants with clipped sagebrush neighbors had increased levels of the putative defensive oxidative enzyme, polyphenol oxidase, relative to control tobacco plants with unclipped sagebrush neighbors. Tobacco plants near clipped sagebrush experienced greatly reduced levels of leaf damage by grasshoppers and cutworms during three field seasons compared to unclipped controls. This result was not caused by an altered light regime experienced by tobacco near clipped neighbors. Barriers to soil contact between tobacco and sagebrush did not reduce the difference in leaf damage although barriers that blocked air contact negated the effect.", "title": "" }, { "docid": "3207403c7f748bd7935469a74aa1c38f", "text": "This article briefly reviews the rise of Critical Discourse Analysis and teases out a detailed analysis of the various critiques that have been levelled at CDA and its practitioners over the last twenty years, both by scholars working within the “critical” paradigm and by other critics. A range of criticisms are discussed which target the underlying premises, the analytical methodology and the disputed areas of reader response and the integration of contextual factors. 
Controversial issues such as the predominantly negative focus of much CDA scholarship, and the status of CDA as an emergent “intellectual orthodoxy”, are also reviewed. The conclusions offer a summary of the principal criticisms that emerge from this overview, and suggest some ways in which these problems could be attenuated.", "title": "" }, { "docid": "d952d54231f1093129fe23f051fc858d", "text": "As part of the Face Recognition Technology (FERET) program, the U.S. Army Research Laboratory (ARL) conducted supervised government tests and evaluations of automatic face recognition algorithms. The goal of the tests was to provide an independent method of evaluating algorithms and assessing the state of the art in automatic face recognition. This report describes the design and presents the results of the August 1994 and March 1995 FERET tests. Results for FERET tests administered by ARL between August 1994 and August 1996 are reported.", "title": "" }, { "docid": "3ce021aa52dac518e1437d397c63bf68", "text": "Malaria is a common and sometimes fatal disease caused by infection with Plasmodium parasites. Cerebral malaria (CM) is a most severe complication of infection with Plasmodium falciparum parasites which features a complex immunopathology that includes a prominent neuroinflammation. The experimental mouse model of cerebral malaria (ECM) induced by infection with Plasmodium berghei ANKA has been used abundantly to study the role of single genes, proteins and pathways in the pathogenesis of CM, including a possible contribution to neuroinflammation. In this review, we discuss the Plasmodium berghei ANKA infection model to study human CM, and we provide a summary of all host genetic effects (mapped loci, single genes) whose role in CM pathogenesis has been assessed in this model. Taken together, the reviewed studies document the many aspects of the immune system that are required for pathological inflammation in ECM, but also identify novel avenues for potential therapeutic intervention in CM and in diseases which feature neuroinflammation.", "title": "" }, { "docid": "314722d112f5520f601ed6917f519466", "text": "In this work we propose an online multi person pose tracking approach which works on two consecutive frames It−1 and It . The general formulation of our temporal network allows to rely on any multi person pose estimation approach as spatial network. From the spatial network we extract image features and pose features for both frames. These features serve as input for our temporal model that predicts Temporal Flow Fields (TFF). These TFF are vector fields which indicate the direction in which each body joint is going to move from frame It−1 to frame It . This novel representation allows to formulate a similarity measure of detected joints. These similarities are used as binary potentials in a bipartite graph optimization problem in order to perform tracking of multiple poses. We show that these TFF can be learned by a relative small CNN network whilst achieving state-of-the-art multi person pose tracking results.", "title": "" }, { "docid": "947fdb3233e57b5df8ce92df31f2a0be", "text": "Recent work by Cohen et al. [1] has achieved state-of-the-art results for learning spherical images in a rotation invariant way by using ideas from group representation theory and noncommutative harmonic analysis. In this paper we propose a generalization of this work that generally exhibits improved performace, but from an implementation point of view is actually simpler. 
An unusual feature of the proposed architecture is that it uses the Clebsch–Gordan transform as its only source of nonlinearity, thus avoiding repeated forward and backward Fourier transforms. The underlying ideas of the paper generalize to constructing neural networks that are invariant to the action of other compact groups.", "title": "" }, { "docid": "25238b85534ee95d70e581145fa28c07", "text": "Advances in sequencing and high-throughput techniques have provided an unprecedented opportunity to interrogate human diseases on a genome-wide scale. The list of disease-causing mutations is expanding rapidly, and mutations affecting mRNA translation are no exception. Translation (protein synthesis) is one of the most complex processes in the cell. The orchestrated action of ribosomes, tRNAs and numerous translation factors decodes the information contained in mRNA into a polypeptide chain. The intricate nature of this process renders it susceptible to deregulation at multiple levels. In this Review, we summarize current evidence of translation deregulation in human diseases other than cancer. We discuss translation-related diseases on the basis of the molecular aberration that underpins their pathogenesis (including tRNA dysfunction, ribosomopathies, deregulation of the integrated stress response and deregulation of the mTOR pathway) and describe how deregulation of translation generates the phenotypic variability observed in these disorders. Translation deregulation causes many human diseases, which can be broadly categorized into tRNA or ribosomal dysfunction, and deregulation of the integrated stress response or the mTOR pathway. The complexity of the translation process and its cellular contexts could explain the phenotypic variability of these disorders.", "title": "" } ]
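One passage in the block above starts from the fact that k evaluations determine a degree-(k-1) polynomial before moving on to partial evaluations and Reed-Solomon repair. The snippet below only illustrates that classical interpolation fact over the reals, via Lagrange's formula; the sub-packetized repair scheme discussed in the passage is not reproduced here.

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the unique degree-(k-1) polynomial through k points at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# k = 4 evaluations pin down a degree-3 polynomial f(x) = 2x^3 - x + 5 exactly.
f = lambda x: 2 * x**3 - x + 5
xs = [0.0, 1.0, 2.0, 3.0]
ys = [f(x) for x in xs]
print(lagrange_eval(xs, ys, 1.5), f(1.5))   # the "missing" evaluation is recovered
```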
scidocsrr
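The keystroke-dynamics passage in the block above benchmarks fourteen anomaly detectors on password-typing timings. The sketch below shows one simple detector of the kind such benchmarks compare: a per-feature profile (mean and mean absolute deviation) scored with a scaled Manhattan distance. The timing data here is synthetic; the feature layout and any decision threshold would have to come from a real dataset.

```python
import numpy as np

def train(timings):
    """Fit a per-feature profile: mean and mean absolute deviation."""
    mu = timings.mean(axis=0)
    mad = np.abs(timings - mu).mean(axis=0) + 1e-9
    return mu, mad

def anomaly_score(profile, sample):
    """Scaled Manhattan distance of one typing sample from the profile."""
    mu, mad = profile
    return float(np.sum(np.abs(sample - mu) / mad))

rng = np.random.default_rng(1)
genuine = rng.normal(0.25, 0.03, size=(200, 20))   # hold/latency times in seconds
impostor = rng.normal(0.32, 0.05, size=20)
profile = train(genuine)
print(anomaly_score(profile, genuine[0]), anomaly_score(profile, impostor))
```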
bd3fba0b990bdb15fac6dc7062496162
Visual SLAM with Line and Corner Features
[ { "docid": "a88b2916f73dedabceda574f10a93672", "text": "A key component of a mobile robot system is the ability to localize itself accurately and, simultaneously, to build a map of the environment. Most of the existing algorithms are based on laser range finders, sonar sensors or artificial landmarks. In this paper, we describe a vision-based mobile robot localization and mapping algorithm, which uses scale-invariant image features as natural landmarks in unmodified environments. The invariance of these features to image translation, scaling and rotation makes them suitable landmarks for mobile robot localization and map building. With our Triclops stereo vision system, these landmarks are localized and robot ego-motion is estimated by least-squares minimization of the matched landmarks. Feature viewpoint variation and occlusion are taken into account by maintaining a view direction for each landmark. Experiments show that these visual landmarks are robustly matched, robot pose is estimated and a consistent three-dimensional map is built. As image features are not noise-free, we carry out error analysis for the landmark positions and the robot pose. We use Kalman filters to track these landmarks in a dynamic environment, resulting in a database map with landmark positional uncertainty. KEY WORDS—localization, mapping, visual landmarks, mobile robot", "title": "" }, { "docid": "a53065d1cfb1fe898182d540d65d394b", "text": "This paper presents a novel approach for detecting affine invariant interest points. Our method can deal with significant affine transformations including large scale changes. Such transformations introduce significant changes in the point location as well as in the scale and the shape of the neighbourhood of an interest point. Our approach allows to solve for these problems simultaneously. It is based on three key ideas : 1) The second moment matrix computed in a point can be used to normalize a region in an affine invariant way (skew and stretch). 2) The scale of the local structure is indicated by local extrema of normalized derivatives over scale. 3) An affine-adapted Harris detector determines the location of interest points. A multi-scale version of this detector is used for initialization. An iterative algorithm then modifies location, scale and neighbourhood of each point and converges to affine invariant points. For matching and recognition, the image is characterized by a set of affine invariant points ; the affine transformation associated with each point allows the computation of an affine invariant descriptor which is also invariant to affine illumination changes. A quantitative comparison of our detector with existing ones shows a significant improvement in the presence of large affine deformations. Experimental results for wide baseline matching show an excellent performance in the presence of large perspective transformations including significant scale changes. Results for recognition are very good for a database with more than 5000 images.", "title": "" } ]
[ { "docid": "71da47c6837022a80dccabb0a1f5c00e", "text": "The treatment of obesity and cardiovascular diseases is one of the most difficult and important challenges nowadays. Weight loss is frequently offered as a therapy and is aimed at improving some of the components of the metabolic syndrome. Among various diets, ketogenic diets, which are very low in carbohydrates and usually high in fats and/or proteins, have gained in popularity. Results regarding the impact of such diets on cardiovascular risk factors are controversial, both in animals and humans, but some improvements notably in obesity and type 2 diabetes have been described. Unfortunately, these effects seem to be limited in time. Moreover, these diets are not totally safe and can be associated with some adverse events. Notably, in rodents, development of nonalcoholic fatty liver disease (NAFLD) and insulin resistance have been described. The aim of this review is to discuss the role of ketogenic diets on different cardiovascular risk factors in both animals and humans based on available evidence.", "title": "" }, { "docid": "af2a1083436450b9147eb7b51be5c761", "text": "Over the past century, various value models have been proposed. To determine which value model best predicts prosocial behavior, mental health, and pro-environmental behavior, we subjected seven value models to a hierarchical regression analysis. A sample of University students (N = 271) completed the Portrait Value Questionnaire (Schwartz et al., 2012), the Basic Value Survey (Gouveia et al., 2008), and the Social Value Orientation scale (Van Lange et al., 1997). Additionally, they completed the Values Survey Module (Hofstede and Minkov, 2013), Inglehart's (1977) materialism-postmaterialism items, the Study of Values, fourth edition (Allport et al., 1960; Kopelman et al., 2003), and the Rokeach (1973) Value Survey. However, because the reliability of the latter measures was low, only the PVQ-RR, the BVS, and the SVO where entered into our analysis. Our results provide empirical evidence that the PVQ-RR is the strongest predictor of all three outcome variables, explaining variance above and beyond the other two instruments in almost all cases. The BVS significantly predicted prosocial and pro-environmental behavior, while the SVO only explained variance in pro-environmental behavior.", "title": "" }, { "docid": "3c103640a41779e8069219b9c4849ba7", "text": "Electronic banking is becoming more popular every day. Financial institutions have accepted the transformation to provide electronic banking facilities to their customers in order to remain relevant and thrive in an environment that is competitive. A contributing factor to the customer retention rate is the frequent use of multiple online functionality however despite all the benefits of electronic banking, some are still hesitant to use it because of security concerns. The perception is that gender, age, education level, salary, culture and profession all have an impact on electronic banking usage. This study reports on how the Knowledge Discovery and Data Mining (KDDM) process was used to determine characteristics and electronic banking behavior of high net worth individuals at a South African bank. Findings JIBC December 2017, Vol. 22, No.3 2 indicate that product range and age had the biggest impact on electronic banking behavior. 
The value of user segmentation is that the financial institution can provide a more accurate service to their users based on their preferences and online banking behavior.", "title": "" }, { "docid": "cceb05e100fe8c9f9dab9f6525d435db", "text": "Conventional feedback control methods can solve various types of robot control problems very efficiently by capturing the structure with explicit models, such as rigid body equations of motion. However, many control problems in modern manufacturing deal with contacts and friction, which are difficult to capture with first-order physical modeling. Hence, applying control design methodologies to these kinds of problems often results in brittle and inaccurate controllers, which have to be manually tuned for deployment. Reinforcement learning (RL) methods have been demonstrated to be capable of learning continuous robot controllers from interactions with the environment, even for problems that include friction and contacts. In this paper, we study how we can solve difficult control problems in the real world by decomposing them into a part that is solved efficiently by conventional feedback control methods, and the residual which is solved with RL. The final control policy is a superposition of both control signals. We demonstrate our approach by training an agent to successfully perform a real-world block assembly task involving contacts and unstable objects.", "title": "" }, { "docid": "82aab8fe60da7c23eef945d7a1ec00fe", "text": "A novel broadband dual-polarized crossed-dipole antenna element with parasitic branches is designed for 2G/3G/LTE base stations. The proposed antenna mainly comprises a curved reflector, two crossed-dipoles, a pair of feeding strips, and two couples of balanced-unbalanced (BALUN) transformers. Compared to the traditional square-loop radiator dipole, the impedance bandwidth of the proposed antenna can be greatly improved after employing two parasitic metal stubs and two pairs of parasitic metal branches, and a better radiation performance of the proposed antenna can be obtained by optimizing the angle of the reflector. Simulation results show that the proposed antenna element can operate from 1.7 to 2.7 GHz with has an impedance bandwidth of VSWR <; 1.5, the port isolation of more than 30 dB, a stable radiation pattern with half-power beamwidth 65.2°-5.6° at H-plane and V-plane, and a relatively stable dipole antenna gain of 8.5 ± 0.4 (dBi). Furthermore, measured results have a good agreement with simulated ones.", "title": "" }, { "docid": "eb781b72664c6ed36c5aa87e8f456bd4", "text": "We suggest that planning for automated earthmoving operations such as digging a foundation or leveling a mound of soil, be treated at multiple levels. In a system that we have developed, a coarse-level planner is used to tessellate the volume to be excavated into smaller pieces that are sequenced in order to complete the task efficiently. Each of the smaller volumes is treated with a refined planner that selects digging actions based on constraint optimization over the space of prototypical digging actions. We discuss planners and the associated representations for two types of earthmoving machines: an excavator backhoe and a wheel loader. 
Experimental results from a full-scale automated excavator and simulated wheel loader are presented .", "title": "" }, { "docid": "c527d891bb7baeabad43cba148a0fcf9", "text": "As a framework for extractive summarization, sentence regression has achieved state-of-the-art performance in several widely-used practical systems. The most challenging task within the sentence regression framework is to identify discriminative features to encode a sentence into a feature vector. So far, sentence regression approaches have neglected to use features that capture contextual relations among sentences.\n We propose a neural network model, Contextual Relation-based Summarization (CRSum), to take advantage of contextual relations among sentences so as to improve the performance of sentence regression. Specifically, we first use sentence relations with a word-level attentive pooling convolutional neural network to construct sentence representations. Then, we use contextual relations with a sentence-level attentive pooling recurrent neural network to construct context representations. Finally, CRSum automatically learns useful contextual features by jointly learning representations of sentences and similarity scores between a sentence and sentences in its context. Using a two-level attention mechanism, CRSum is able to pay attention to important content, i.e., words and sentences, in the surrounding context of a given sentence.\n We carry out extensive experiments on six benchmark datasets. CRSum alone can achieve comparable performance with state-of-the-art approaches; when combined with a few basic surface features, it significantly outperforms the state-of-the-art in terms of multiple ROUGE metrics.", "title": "" }, { "docid": "6c4a7a6d21c85f3f2f392fbb1621cc51", "text": "The International Academy of Education (IAE) is a not-for-profit scientific association that promotes educational research, and its dissemination and implementation. Founded in 1986, the Academy is dedicated to strengthening the contributions of research, solving critical educational problems throughout the world, and providing better communication among policy makers, researchers, and practitioners. The general aim of the IAE is to foster scholarly excellence in all fields of education. Towards this end, the Academy provides timely syntheses of research-based evidence of international importance. The Academy also provides critiques of research and of its evidentiary basis and its application to policy. This booklet about teacher professional learning and development has been prepared for inclusion in the Educational Practices Series developed by the International Academy of Education and distributed by the International Bureau of Education and the Academy. As part of its mission, the Academy provides timely syntheses of research on educational topics of international importance. This is the eighteenth in a series of booklets on educational practices that generally improve learning. This particular booklet is based on a synthesis of research evidence produced for the New Zealand Ministry of Education's Iterative Best Evidence Synthesis (BES) Programme, which is designed to be a catalyst for systemic improvement and sustainable development in education. This synthesis, and others in the series, are available electronically at www.educationcounts.govt.nz/themes/BES. 
All BESs are written using a collaborative approach that involves the writers, teacher unions, principal groups, teacher educators, academics, researchers, policy advisers, and other interested parties. To ensure its rigour and usefulness, each BES follows national guidelines developed by the Ministry of Education. Professor Helen Timperley was lead writer for the Teacher Professional Learning and Development: Best Evidence Synthesis Iteration [BES], assisted by teacher educators Aaron Wilson and Heather Barrar and research assistant Irene Fung, all of the University of Auckland. The BES is an analysis of 97 studies of professional development that led to improved outcomes for the students of the participating teachers. Most of these studies came from the United States, New Zealand, the Netherlands, the United Kingdom, Canada, and Israel. Dr Lorna Earl provided formative quality assurance for the synthesis; Professor John Hattie and Dr Gavin Brown oversaw the analysis of effect sizes. Helen Timperley is Professor of Education at the University of Auckland. The primary focus of her research is promotion of professional and organizational learning in schools for the purpose of improving student learning. She has …", "title": "" }, { "docid": "c2df8cc7775bd4ec2bfdf4498d136c9f", "text": "Particle Swarm Optimization is a popular heuristic search algorithm which is inspired by the social learning of birds or fishes. It is a swarm intelligence technique for optimization developed by Eberhart and Kennedy [1] in 1995. Inertia weight is an important parameter in PSO, which significantly affects the convergence and exploration-exploitation trade-off in PSO process. Since inception of Inertia Weight in PSO, a large number of variations of Inertia Weight strategy have been proposed. In order to propose one or more than one Inertia Weight strategies which are efficient than others, this paper studies 15 relatively recent and popular Inertia Weight strategies and compares their performance on 05 optimization test problems.", "title": "" }, { "docid": "dfb5a6dbd1b8788cda6cb41ba741006d", "text": "The notion of ‘user satisfaction’ plays a prominent role in HCI, yet it remains evasive. This exploratory study reports three experiments from an ongoing research program. In this program we aim to uncover (1) what user satisfaction is, (2) whether it is primarily determined by user expectations or by the interactive experience, (3) how user satisfaction may be related to perceived usability, and (4) the extent to which satisfaction rating scales capture the same interface qualities as uncovered in self-reports of interactive experiences. In all three experiments reported here user satisfaction was found to be a complex construct comprising several concepts, the distribution of which varied with the nature of the experience. Expectations were found to play an important role in the way users approached a browsing task. Satisfaction and perceived usability was assessed using two methods: scores derived from unstructured interviews and from the Web site Analysis MeasureMent Inventory (WAMMI) rating scales. Scores on these two instruments were somewhat similar, but conclusions drawn across all three experiments differed in terms of satisfaction ratings, suggesting that rating scales and interview statements may tap different interface qualities. 
Recent research suggests that ‘beauty’, or ‘appeal’ is linked to perceived usability so that what is ‘beautiful’ is also perceived to be usable [Interacting with Computers 13 (2000) 127]. This was true in one experiment here using a web site high in perceived usability and appeal. However, using a site with high appeal but very low in perceived usability yielded very high satisfaction, but low perceived usability scores, suggesting that what is ‘beautiful’ need not also be perceived to be usable. The results suggest that web designers may need to pay attention to both visual appeal and usability. q 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "405cd4bacbcfddc9b4254aee166ee394", "text": "A fundamental problem for the visual perception of 3D shape is that patterns of optical stimulation are inherently ambiguous. Recent mathematical analyses have shown, however, that these ambiguities can be highly constrained, so that many aspects of 3D structure are uniquely specified even though others might be underdetermined. Empirical results with human observers reveal a similar pattern of performance. Judgments about 3D shape are often systematically distorted relative to the actual structure of an observed scene, but these distortions are typically constrained to a limited class of transformations. These findings suggest that the perceptual representation of 3D shape involves a relatively abstract data structure that is based primarily on qualitative properties that can be reliably determined from visual information.", "title": "" }, { "docid": "758978c4b8f3bdd0a57fe9865892fbc3", "text": "The foundation of a process model lies in its structural specifications. Using a generic process modeling language for workflows, we show how a structural specification may contain deadlock and lack of synchronization conflicts that could compromise the correct execution of workflows. In general, identification of such conflicts is a computationally complex problem and requires development of effective algorithms specific for the target modeling language. We present a visual verification approach and algorithm that employs a set of graph reduction rules to identify structural conflicts in process models for the given workflow modeling language. We also provide insights into the correctness and complexity of the reduction process. Finally, we show how the reduction algorithm may be used to count possible instance subgraphs of a correct process model. The main contribution of the paper is a new technique for satisfying well-defined correctness criteria in process models.", "title": "" }, { "docid": "7bf64a2dbfa14b52d0ee46d0c61bf8d2", "text": "Mobility prediction allows estimating the stability of paths in a mobile wireless Ad Hoc networks. Identifying stable paths helps to improve routing by reducing the overhead and the number of connection interruptions. In this paper, we introduce a neural network based method for mobility prediction in Ad Hoc networks. This method consists of a multi-layer and recurrent neural network using back propagation through time algorithm for training.", "title": "" }, { "docid": "a4790fdc5f6469b45fa4a22a871f3501", "text": "NSGA ( [5]) is a popular non-domination based genetic algorithm for multiobjective optimization. It is a very effective algorithm but has been generally criticized for its computational complexity, lack of elitism and for choosing the optimal parameter value for sharing parameter σshare. 
A modified version, NSGAII ( [3]) was developed, which has a better sorting algorithm , incorporates elitism and no sharing parameter needs to be chosen a priori. NSGA-II is discussed in detail in this.", "title": "" }, { "docid": "d2f64c21d0a3a54b4a2b75b7dd7df029", "text": "Library of Congress Cataloging in Publication Data EB. Boston studies in the philosophy of science.The concept of autopoiesis is due to Maturana and Varela 8, 9. The aim of this article is to revisit the concepts of autopoiesis and cognition in the hope of.Amazon.com: Autopoiesis and Cognition: The Realization of the Living Boston Studies in the Philosophy of Science, Vol. 42 9789027710161: H.R. Maturana.Autopoiesis, The Santiago School of Cognition, and. In their early work together Maturana and Varela developed the idea of autopoiesis.Autopoiesis and Cognition: The Realization of the Living Dordecht.", "title": "" }, { "docid": "74c86a2ff975d8298b356f0243e82ab0", "text": "Building intelligent agents that can communicate with and learn from humans in natural language is of great value. Supervised language learning is limited by the ability of capturing mainly the statistics of training data, and is hardly adaptive to new scenarios or flexible for acquiring new knowledge without inefficient retraining or catastrophic forgetting. We highlight the perspective that conversational interaction serves as a natural interface both for language learning and for novel knowledge acquisition and propose a joint imitation and reinforcement approach for grounded language learning through an interactive conversational game. The agent trained with this approach is able to actively acquire information by asking questions about novel objects and use the justlearned knowledge in subsequent conversations in a one-shot fashion. Results compared with other methods verified the effectiveness of the proposed approach.", "title": "" }, { "docid": "cbc9437811bff9a1d96dd5d5f886c598", "text": "Weakly supervised learning for object detection has been gaining significant attention in the recent past. Visually similar objects are extracted automatically from weakly labelled videos hence bypassing the tedious process of manually annotating training data. However, the problem as applied to small or medium sized objects is still largely unexplored. Our observation is that weakly labelled information can be derived from videos involving human-object interactions. Since the object is characterized neither by its appearance nor its motion in such videos, we propose a robust framework that taps valuable human context and models similarity of objects based on appearance and functionality. Furthermore, the framework is designed such that it maximizes the utility of the data by detecting possibly multiple instances of an object from each video. We show that object models trained in this fashion perform between 86% and 92% of their fully supervised counterparts on three challenging RGB and RGB-D datasets.", "title": "" }, { "docid": "8f3b18f410188ae4f7b09435ce92639e", "text": "Biogenic amines are important nitrogen compounds of biological importance in vegetable, microbial and animal cells. They can be detected in both raw and processed foods. In food microbiology they have sometimes been related to spoilage and fermentation processes. Some toxicological characteristics and outbreaks of food poisoning are associated with histamine and tyramine. Secondary amines may undergo nitrosation and form nitrosamines. 
A better knowledge of the factors controlling their formation is necessary in order to improve the quality and safety of food.", "title": "" }, { "docid": "a47d001dc8305885e42a44171c9a94b2", "text": "Community detection in complex networks has become a vital step in understanding the structure and dynamics of networks in various fields. However, traditional node clustering and the relatively new link clustering methods have inherent drawbacks in discovering overlapping communities. Node clustering is inadequate to capture the pervasive overlaps, while link clustering is often criticized for its high computational cost and ambiguous definition of communities. Overlapping community detection therefore remains a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified using a node clustering technique. The network decomposition helps reduce computation time, and the elimination of noise links improves the quality of the obtained communities. Moreover, we employ a node clustering technique rather than a link similarity measure to discover link communities, so NDOCD avoids an ambiguous definition of community and is less time-consuming. We test our approach on both synthetic and real-world networks. Results demonstrate the superior performance of our approach in both computation time and accuracy compared to state-of-the-art algorithms.", "title": "" }, { "docid": "fb60eb0a7334ce5c5d3c62b812b9f4f8", "text": "The structure and culture of an organization affect the implementation of projects. In this paper we try to identify organizational factors that could affect the implementation efforts of an Integrated Financial Management Information System (IFMIS). The information system in question has taken an overly long time and is not complete yet. We set out to find out whether organizational issues are at play in this particular project. The project under study is a large-scale integrated information system which aims at strengthening and further developing Financial Management Information in the wider public service in Kenya. We borrow concepts from Structuration Theory (ST) as applied in sociology to understand the organizational perspective in the project. We use the theory to help explain some of the meanings, norms and issues of power experienced during the implementation of the IFMIS. Without ruling out problems of a technological nature, the findings suggest that many of the problems in the IFMIS implementation may be attributed to organizational factors, and that certain issues are related to the existing organizational culture within government.", "title": "" } ]
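One passage in the block above studies the inertia weight parameter in particle swarm optimization. The sketch below is a basic PSO loop using one common strategy from that kind of comparison, a linearly decreasing inertia weight; the objective function, bounds and all coefficients are just illustrative defaults, not settings from the paper.

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, c1=2.0, c2=2.0,
        w_start=0.9, w_end=0.4, seed=0):
    """PSO minimisation with a linearly decreasing inertia weight."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for t in range(iters):
        w = w_start - (w_start - w_end) * t / (iters - 1)   # inertia weight schedule
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

sphere = lambda p: float(np.sum(p ** 2))
best, best_f = pso(sphere, (np.array([-5.0] * 3), np.array([5.0] * 3)))
print(best, best_f)
```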
scidocsrr
4c2f0475c875d7d0d8fe1db66f329323
Learning to Drive using Inverse Reinforcement Learning and Deep Q-Networks
[ { "docid": "e0f5f73eb496b77cddc5820fb6306f4b", "text": "Safe handling of dynamic highway and inner city scenarios with autonomous vehicles involves the problem of generating traffic-adapted trajectories. In order to account for the practical requirements of the holistic autonomous system, we propose a semi-reactive trajectory generation method, which can be tightly integrated into the behavioral layer. The method realizes long-term objectives such as velocity keeping, merging, following, stopping, in combination with a reactive collision avoidance by means of optimal-control strategies within the Frenét-Frame [12] of the street. The capabilities of this approach are demonstrated in the simulation of a typical high-speed highway scenario.", "title": "" } ]
[ { "docid": "723f2257daace86d9cd72d26b59c211d", "text": "Instead of simply using two-dimensional User × Item features, advanced recommender systems rely on more additional dimensions (e.g. time, location, social network) in order to provide better recommendation services. In the first part of this paper, we will survey a variety of dimension features and show how they are integrated into the recommendation process. When the service providers collect more and more personal information, it brings great privacy concerns to the public. On another side, the service providers could also suffer from attacks launched by malicious users who want to bias the recommendations. In the second part of this paper, we will survey attacks from and against recommender service providers, and existing solutions.", "title": "" }, { "docid": "329cf5a87b554a3eb233bd8227bc78a1", "text": "Anomaly detection refers to methods that provide warnings of unusual behaviors which may compromise the security and performance of communication networks. In this paper it is proposed a novel model for network anomaly detection combining baseline, K-means clustering and particle swarm optimization (PSO). The baseline consists of network traffic normal behavior profiles, generated by the application of Baseline for Automatic Backbone Management (BLGBA) model in SNMP historical network data set, while K-means is a supervised learning clustering algorithm used to recognize patterns or features in data sets. In order to escape from local optima problem, the K-means is associated to PSO, which is a meta-heuristic whose main characteristics include low computational complexity and small number of input parameters dependence. The proposed anomaly detection approach classifies data clusters from baseline and real traffic using the K-means combined with PSO. Anomalous behaviors can be identified by comparing the distance between real traffic and cluster centroids. Tests were performed in the network of State University of Londrina and the obtained detection and false alarm rates are promising.", "title": "" }, { "docid": "033fb4c857f79fc593bd9a7e12269b49", "text": "Within any Supply Chain Risk Management (SCRM) approach, the concept “Risk” occupies a central interest. Numerous frameworks which differ by the provided definitions and relationships between supply chain risk dimensions and metrics are available. This article provides an outline of the most common SCRM methodologies, in order to suggest an “integrated conceptual model”. The objective of such an integrated model is not to describe yet another conceptual model of Risk, but rather to offer a concrete structure incorporating the characteristics of the supply chain in the risk management process. The proposed alignment allows a better understanding of the dynamic of risk management strategies. Firstly, the model was analyzed through its positioning and its contributions compared to existing tools and models in the literature. This comparison highlights the critical points overlooked in the past. Secondly, the model was applied on case studies of major supply chain crisis.", "title": "" }, { "docid": "4e50dff9307dcbe43ef8bee9df1f0d1b", "text": "Research advancements allow computational systems to automatically caption social media images. Often, these captions are evaluated with sighted humans using the image as a reference. Here, we explore how blind and visually impaired people experience these captions in two studies about social media images. 
Using a contextual inquiry approach (n=6 blind/visually impaired), we found that blind people place a lot of trust in automatically generated captions, filling in details to resolve differences between an image's context and an incongruent caption. We built on this in-person study with a second, larger online experiment (n=100 blind/visually impaired) to investigate the role of phrasing in encouraging trust or skepticism in captions. We found that captions emphasizing the probability of error, rather than correctness, encouraged people to attribute incongruence to an incorrect caption, rather than missing details. Where existing research has focused on encouraging trust in intelligent systems, we conclude by challenging this assumption and consider the benefits of encouraging appropriate skepticism.", "title": "" }, { "docid": "a10aa780d9f1a65461ad0874173d8f56", "text": "OS fingerprinting tries to identify the type and version of a system based on gathered information of a target host. It is an essential step for many subsequent penetration attempts and attacks. Traditional OS fingerprinting depends on banner grabbing schemes or network traffic analysis results to identify the system. These interactive procedures can be detected by intrusion detection systems (IDS) or fooled by fake network packets. In this paper, we propose a new OS fingerprinting mechanism in virtual machine hypervisors that adopt the memory de-duplication technique. Specifically, when multiple memory pages with the same contents occupy only one physical page, their reading and writing access delay will demonstrate some special properties. We use the accumulated access delay to the memory pages that are unique to some specific OS images to derive out whether or not our VM instance and the target VM are using the same OS. The experiment results on VMware ESXi hypervisor with both Windows and Ubuntu Linux OS images show the practicability of the attack. We also discuss the mechanisms to defend against such attacks by the hypervisors and VMs.", "title": "" }, { "docid": "c4dbf075f91d1a23dda421261911a536", "text": "In cultures of the Litopenaeus vannamei with biofloc, the concentrations of nitrate rise during the culture period, which may cause a reduction in growth and mortality of the shrimps. Therefore, the aim of this study was to determine the effect of the concentration of nitrate on the growth and survival of shrimp in systems using bioflocs. The experiment consisted of four treatments with three replicates each: The concentrations of nitrate that were tested were 75 (control), 150, 300, and 600 mg NO3 −-N/L. To achieve levels above 75 mg NO3 −-N/L, different dosages of sodium nitrate (PA) were added. For this purpose, twelve experimental units with a useful volume of 45 L were stocked with 15 juvenile L. vannamei (1.30 ± 0.31 g), corresponding to a stocking density of 333 shrimps/m3, that were reared for an experimental period of 42 days. Regarding the water quality parameters measured throughout the study, no significant differences were detected (p > 0.05). Concerning zootechnical performance, a significant difference (p < 0.05) was verified with the 75 (control) and 150 treatments presenting the best performance indexes, while the 300 and 600 treatments led to significantly poorer results (p < 0.05). 
Histopathological damage was observed in the gills and hepatopancreas of the shrimps exposed to concentrations ≥300 mg NO3−-N/L for 42 days, and poorer zootechnical performance and lower survival were observed in the shrimps reared at concentrations ≥300 mg NO3−-N/L under a salinity of 23. The results obtained in this study show that concentrations of nitrate up to 177 mg/L are acceptable for the rearing of L. vannamei in systems with bioflocs, without renewal of water, at a salinity of 23.", "title": "" }, { "docid": "799517016245ffa33a06795b26e308cc", "text": "The goal of this “proyecto fin de carrera” was to produce a review of the face detection and face recognition literature as comprehensive as possible. Face detection was included as an unavoidable preprocessing step for face recognition, and as an issue by itself, because it presents its own difficulties and challenges, sometimes quite different from face recognition. We soon recognized that the amount of published information is unmanageable for a short-term effort, such as required of a PFC, so in agreement with the supervisor we stopped at a reasonable time, having reviewed most conventional face detection and face recognition approaches, leaving advanced issues, such as video face recognition or expression invariances, for future work in the framework of doctoral research. I have tried to gather much of the mathematical foundations of the approaches reviewed, aiming for a self-contained work, which is, of course, rather difficult to produce. My supervisor encouraged me to follow formalism as closely as possible, preparing this PFC report more like an academic report than an engineering project report.", "title": "" }, { "docid": "094dbd57522cb7b9b134b14852bea78b", "text": "When encountering qualitative research for the first time, one is confronted with both the number of methods and the difficulty of collecting, analysing and presenting large amounts of data. In quantitative research, it is possible to make a clear distinction between gathering and analysing data. However, this distinction is not clear-cut in qualitative research. The objective of this paper is to provide insight for the novice researcher and the experienced researcher coming to grounded theory for the first time. For those who already have experience in the use of the method, the paper provides further much-needed discussion arising out of the method's adoption in the IS field. In this paper the authors present a practical application and illustrate how the grounded theory method was applied to interpretive case study research. The paper discusses the grounded theory method and provides guidance for the use of the method in interpretive studies.", "title": "" }, { "docid": "4706560ae6318724e6eb487d23804a76", "text": "Schizophrenia is a complex neurodevelopmental disorder characterized by cognitive deficits. These deficits in cognitive functioning have been shown to relate to a variety of functional and treatment outcomes. Cognitive adaptation training (CAT) is a home-based, manual-driven treatment that utilizes environmental supports and compensatory strategies to bypass cognitive deficits and improve target behaviors and functional outcomes in individuals with schizophrenia. Unlike traditional case management, CAT provides environmental supports and compensatory strategies tailored to meet the behavioral style and neurocognitive deficits of each individual patient. The case of Ms. L.
is presented to illustrate CAT treatment.", "title": "" }, { "docid": "e565780704f5c68c985af856d0a53ce0", "text": "Establishing trust relationships between routing nodes represents a vital security requirement to establish reliable routing processes that exclude infected or selfish nodes. In this paper, we propose a new security scheme for the Internet of things and mainly for the RPL (Routing Protocol for Low-power and Lossy Networks) called: Metric-based RPL Trustworthiness Scheme (MRTS). The primary aim is to enhance RPL security and deal with the trust inference problem. MRTS addresses trust issue during the construction and maintenance of routing paths from each node to the BR (Border Router). To handle this issue, we extend DIO (DODAG Information Object) message by introducing a new trust-based metric ERNT (Extended RPL Node Trustworthiness) and a new Objective Function TOF (Trust Objective Function). In fact, ERNT represents the trust values for each node within the network, and TOF demonstrates how ERNT is mapped to path cost. In MRTS all nodes collaborate to calculate ERNT by taking into account nodes' behavior including selfishness, energy, and honesty components. We implemented our scheme by extending the distributed Bellman-Ford algorithm. Evaluation results demonstrated that the new scheme improves the security of RPL.", "title": "" }, { "docid": "ab7a69accb17ff99642ab225facec95d", "text": "It is challenging to adopt computing-intensive and parameter-rich Convolutional Neural Networks (CNNs) in mobile devices due to limited hardware resources and low power budgets. To support multiple concurrently running applications, one mobile device needs to perform multiple CNN tests simultaneously in real-time. Previous solutions cannot guarantee a high enough frame rate when serving multiple applications with reasonable hardware and power cost. In this paper, we present a novel process-in-memory architecture to process emerging binary CNN tests in Wide-IO2 DRAMs. Compared to state-of-the-art accelerators, our design improves CNN test performance by 4× ∼ 11× with small hardware and power overhead.", "title": "" }, { "docid": "d9f8858915ea3881763a0f8064102998", "text": "Digital signatures are one of the fundamental security primitives in Vehicular Ad-Hoc Networks (VANETs) because they provide authenticity and non-repudiation in broadcast communication. However, the current broadcast authentication standard in VANETs is vulnerable to signature flooding: excessive signature verification requests that exhaust the computational resources of victims. In this paper, we propose two efficient broadcast authentication schemes, Fast Authentication (FastAuth) and Selective Authentication (SelAuth), as two countermeasures to signature flooding. FastAuth secures periodic single-hop beacon messages. By exploiting the sender's ability to predict its own future beacons, FastAuth enables 50 times faster verification than previous mechanisms using the Elliptic Curve Digital Signature Algorithm. SelAuth secures multi-hop applications in which a bogus signature may spread out quickly and impact a significant number of vehicles. SelAuth pro- vides fast isolation of malicious senders, even under a dynamic topology, while consuming only 15%--30% of the computational resources compared to other schemes. We provide both analytical and experimental evaluations based on real traffic traces and NS-2 simulations. 
With the near-term deployment plans of VANET on all vehicles, our approaches can make VANETs practical.", "title": "" }, { "docid": "9aae377bf3ebb202b13fab2cbd85f1ce", "text": "The paper describes a rule-based information extraction (IE) system developed for Polish medical texts. We present two applications designed to select data from medical documentation in Polish: mammography reports and hospital records of diabetic patients. First, we have designed a special ontology that subsequently had its concepts translated into two separate models, represented as typed feature structure (TFS) hierarchies, complying with the format required by the IE platform we adopted. Then, we used dedicated IE grammars to process documents and fill in templates provided by the models. In particular, in the grammars, we addressed such linguistic issues as: ambiguous keywords, negation, coordination or anaphoric expressions. Resolving some of these problems has been deferred to a post-processing phase where the extracted information is further grouped and structured into more complex templates. To this end, we defined special heuristic algorithms on the basis of sample data. The evaluation of the implemented procedures shows their usability for clinical data extraction tasks. For most of the evaluated templates, precision and recall well above 80% were obtained.", "title": "" }, { "docid": "e4c27a97a355543cf113a16bcd28ca50", "text": "A metamaterial-based broadband low-profile grid-slotted patch antenna is presented. By slotting the radiating patch, a periodic array of series capacitor loaded metamaterial patch cells is formed, and excited through the coupling aperture in a ground plane right underneath and parallel to the slot at the center of the patch. By exciting two adjacent resonant modes simultaneously, broadband impedance matching and consistent radiation are achieved. The dispersion relation of the capacitor-loaded patch cell is applied in the mode analysis. The proposed grid-slotted patch antenna with a low profile of 0.06 λ0 (λ0 is the center operating wavelength in free space) achieves a measured bandwidth of 28% for the |S11| less than -10 dB and maximum gain of 9.8 dBi.", "title": "" }, { "docid": "47da8530df2160ee29ff05aee4ab0342", "text": "The objective of this review was to update Sobal and Stunkard's exhaustive review of the literature on the relation between socioeconomic status (SES) and obesity (Psychol Bull 1989;105:260-75). Diverse research databases (including CINAHL, ERIC, MEDLINE, and Social Science Abstracts) were comprehensively searched during the years 1988-2004 inclusive, using \"obesity,\" \"socioeconomic status,\" and synonyms as search terms. A total of 333 published studies, representing 1,914 primarily cross-sectional associations, were included in the review. The overall pattern of results, for both men and women, was of an increasing proportion of positive associations and a decreasing proportion of negative associations as one moved from countries with high levels of socioeconomic development to countries with medium and low levels of development. Findings varied by SES indicator; for example, negative associations (lower SES associated with larger body size) for women in highly developed countries were most common with education and occupation, while positive associations for women in medium- and low-development countries were most common with income and material possessions. 
Patterns for women in higher- versus lower-development countries were generally less striking than those observed by Sobal and Stunkard; this finding is interpreted in light of trends related to globalization. Results underscore a view of obesity as a social phenomenon, for which appropriate action includes targeting both economic and sociocultural factors.", "title": "" }, { "docid": "4d331769ca3f02e9ec96e172d98f3fab", "text": "This review focuses on the most recent applications of zinc oxide (ZnO) nanostructures for tissue engineering. ZnO is one of the most investigated metal oxides, thanks to its multifunctional properties coupled with the ease of preparing various morphologies, such as nanowires, nanorods, and nanoparticles. Most ZnO applications are based on its semiconducting, catalytic and piezoelectric properties. However, several works have highlighted that ZnO nanostructures may successfully promote the growth, proliferation and differentiation of several cell lines, in combination with the rise of promising antibacterial activities. In particular, osteogenesis and angiogenesis have been effectively demonstrated in numerous cases. Such peculiarities have been observed both for pure nanostructured ZnO scaffolds as well as for three-dimensional ZnO-based hybrid composite scaffolds, fabricated by additive manufacturing technologies. Therefore, all these findings suggest that ZnO nanostructures represent a powerful tool in promoting the acceleration of diverse biological processes, finally leading to the formation of new living tissue useful for organ repair.", "title": "" }, { "docid": "69a6cfb649c3ccb22f7a4467f24520f3", "text": "We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-tosequence question-generation model with a copy mechanism. Empirically, our keyphrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This twostage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.", "title": "" }, { "docid": "104d16c298c8790ca8da0df4d7e34a4b", "text": "musical structure of a culture or genre” (Bharucha 1984, p. 421). So, unlike tonal hierarchies that refer to cognitive representations of the structure of music across different pieces of music in the style, event hierarchies refer to a particular piece of music and the place of each event in that piece. The two hierarchies occupy complementary roles. In listening to music or music-like experimental materials (melodies and harmonic progressions), the listener responds both to the structure provided by the tonal hierarchy and the structure provided by the event hierarchy. Musical activity involves dynamic patterns of stability and instability to which both the tonal and event hierarchies contribute. Understanding the relations between them and their interaction in processing musical structure is a central issue, not yet extensively studied empirically. 
3.3 Empirical Research: The Basic Studies This section outlines the classic findings that illustrate tonal relationships and the methodologies used to establish these findings. 3.3.1 The Probe Tone Method Quantification is the first step in empirical studies because it makes possible the kinds of analytic techniques needed to understand complex human behaviors. An experimental method that has been used to quantify the tonal hierarchy is called the probe-tone method (Krumhansl and Shepard 1979). It was based on the observation that if you hear the incomplete ascending C major scale, C-D-E-F-G-A-B, you strongly expect that the next tone will be the high C. It is the next logical tone in the series, proximal to the last tone of the context, B, and it is the tonic of the key. When, in the experiment, incomplete ascending and descending scale contexts were followed by the tone C (the probe tone), listeners rated it highly as to how well it completed the scale (1 = very badly, 7 = very well). Other probe tones, however, also received fairly high ratings, and they were not necessarily those that are close in pitch to the last tone of the context. For example, the more musically trained listeners also gave high ratings to the dominant, G, and the mediant, E, which together with the C form the tonic triad. The tones of the scale received higher ratings than the nonscale tones, C# D# F# G# and A#. Less musically trained listeners were more influenced by how close the probe tone was to the tone sounded most recently at the end of the context, although their ratings also contained some of the tonal hierarchy pattern. A subsequent study used this method with a variety of contexts at the beginning of the trials (Krumhansl and Kessler 1982). Contexts were chosen because they are clear indicators of a key. They included the scale, the tonic triad chord, and chord 56 C.L. Krumhansl and L.L. Cuddy sequences strongly defining major and minor keys. These contexts were followed by all possible probe tones in the 12-tone chromatic scale, which musically trained listeners were instructed to judge in terms of how well they fit with the preceding context in a musical sense. The results for contexts of the same mode (major or minor) were similar when transposed to a common tonic. Also, the results were largely independent of which particular type of context was used (e.g., chord versus chord cadence). Consequently, the rating data were transposed to a common tonic and averaged over the context types. The resulting values are termed standardized key profiles. The values for the major key profile are 6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88, where the first number corresponds to the mean rating for the tonic of the key, the second to the next of the 12 tones in the chromatic scale, and so on. The values for the minor key context are 6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17. These are plotted in Fig. 3.1, in which C is assumed to be the tonic. Both major and minor contexts produce clear and musically interpretable hierarchies in the sense that tones are ordered or ranked according to music-theoretic descriptions. The results of these initial studies suggested that it is possible to obtain quantitative judgments of the degree to which different tones are perceived as stable reference tones in musical contexts. The task appeared to be accessible to listeners who differed considerably in their music training. 
This was important for further investigations of the responses of listeners without knowledge of specialized vocabularies for describing music, or who were unfamiliar with the musical style. Finally, the results in these and many subsequent studies were quite consistent over a variety of task instructions and musical contexts used to induce a sense of key. [Fig. 3.1: (a) probe tone ratings for a C major context; (b) probe tone ratings for a C minor context; values from Krumhansl and Kessler (1982).] Quantification of the tonal hierarchies is an important first step in empirical research but, as seen later, a great deal of research has studied it from a variety of different perspectives. 3.3.2 Converging Evidence To substantiate any theoretical construct, such as the tonal hierarchy, it is important to have evidence from experiments using different methods. This strategy is known as “converging operations” (Garner et al. 1956). This section describes a number of other experimental measures that show influences of the tonal hierarchy. It has an effect on the degree to which tones are perceived as similar to one another (Krumhansl 1979), such that tones high in the hierarchy are perceived as relatively similar to one another. For example, in the key of C major, C and G are perceived as highly related, whereas C# and G# are perceived as distantly related, even though they are just as far apart objectively (in semitones). In addition, a pair of tones is heard as more related when the second is more stable in the tonal hierarchy than the first (compared to the reverse order). For example, the tones F#-G are perceived as more related to one another than are G-F# because G is higher in the tonal hierarchy than F#. Similar temporal-order asymmetries also appear in memory studies. For example, F# is more often confused with G than G is confused with F# (Krumhansl 1979). These data reflect the proposition that each tone is drawn toward, or expected to resolve to, a tone of greater stability in the tonal hierarchy. Janata and Reisberg (1988) showed that the tonal hierarchy also influenced reaction time measures in tasks requiring a categorical judgment about a tone’s key membership. For both scale and chord contexts, faster reaction times (in-key/out-of-key) were obtained for tones higher in the hierarchy. In addition, a recency effect was found for the scale context as for the nonmusicians in the original probe tone study (Krumhansl and Shepard 1979). Miyazaki (1989) found that listeners with absolute pitch named tones highest in the tonal hierarchy of C major faster and more accurately than other tones. This is remarkable because it suggests that musical training has a very specific effect on the acquisition of absolute pitch. Most of the early piano repertoire is written in the key of C major and closely related keys. All of these listeners began piano lessons as young as 3–5 years of age, and were believed to have acquired absolute pitch through exposure to piano tones. The tonal hierarchy also appears in judgments of what tone constitutes a good phrase ending (Palmer and Krumhansl 1987a, b; Boltz 1989a, b). A number of studies show that the tonal hierarchy is one of the factors that influences expectations for melodic continuations (Schmuckler 1989; Krumhansl 1991, 1995b; Cuddy and Lunney 1995; Krumhansl et al. 1999, 2000). Other factors include pitch proximity, interval size, and melodic direction.
The influence of the tonal hierarchy has also been demonstrated in a study of expressive piano performance (Thompson and Cuddy 1997). Expression refers to 58 C.L. Krumhansl and L.L. Cuddy the changes in duration and dynamics (loudness) that performers add beyond the notated music. For the harmonized sequences used in their study, the performance was influenced by the tonal hierarchy. Tones that were tonally stable within a key (higher in the tonal hierarchy) tended to be played for longer duration in the melody than those less stable (lower in the tonal hierarchy). A method used more recently (Aarden 2003, described in Huron 2006) is a reaction-time task in which listeners had to judge whether unfamiliar melodies went up, down, or stayed the same (a tone was repeated). The underlying idea is that reaction times should be faster when the tone conforms to listeners’ expectations. His results confirmed this hypothesis, namely, that reaction times were faster for tones higher in the hierarchy. As described later, his data conformed to a very large statistical analysis he did of melodies in major and minor keys. Finally, tonal expectations result in event-related potentials (ERPs), changes in electrical potentials measured on the surface of the head (Besson and Faïta 1995; Besson et al. 1998). A larger P300 component, a positive change approximately 300 ms after the final tone, was found when a melody ended with a tone out of the scale of its key than a tone in the scale. This finding was especially true for musicians and familiar melodies, suggesting that learning plays some role in producing the effect; however, the effect was also present in nonmusicians, only to a lesser degree. This section has cited only a small proportion of the studies that have been conducted on tonal hierarchies. A closely related issue that has also been studied extensively is the existence of, and the effects of, a hierarchy of chords. The choice of the experiments reviewed here was to illustrate the variety of approaches that have been taken. Across the studies, consistent effects were found with many different kinds of experimental", "title": "" }, { "docid": "1a7dd0fb317a9640ee6e90036d6036fa", "text": "A genome-wide association study was performed to identify genetic factors involved in susceptibility to psoriasis (PS) and psoriatic arthritis (PSA), inflammatory diseases of the skin and joints in humans. 223 PS cases (including 91 with PSA) were genotyped with 311,398 single nucleotide polymorphisms (SNPs), and results were compared with those from 519 Northern European controls. Replications were performed with an independent cohort of 577 PS cases and 737 controls from the U.S., and 576 PSA patients and 480 controls from the U.K.. Strongest associations were with the class I region of the major histocompatibility complex (MHC). The most highly associated SNP was rs10484554, which lies 34.7 kb upstream from HLA-C (P = 7.8x10(-11), GWA scan; P = 1.8x10(-30), replication; P = 1.8x10(-39), combined; U.K. PSA: P = 6.9x10(-11)). However, rs2395029 encoding the G2V polymorphism within the class I gene HCP5 (combined P = 2.13x10(-26) in U.S. cases) yielded the highest ORs with both PS and PSA (4.1 and 3.2 respectively). This variant is associated with low viral set point following HIV infection and its effect is independent of rs10484554. We replicated the previously reported association with interleukin 23 receptor and interleukin 12B (IL12B) polymorphisms in PS and PSA cohorts (IL23R: rs11209026, U.S. 
PS, P = 1.4x10(-4); U.K. PSA: P = 8.0x10(-4); IL12B:rs6887695, U.S. PS, P = 5x10(-5) and U.K. PSA, P = 1.3x10(-3)) and detected an independent association in the IL23R region with a SNP 4 kb upstream from IL12RB2 (P = 0.001). Novel associations replicated in the U.S. PS cohort included the region harboring lipoma HMGIC fusion partner (LHFP) and conserved oligomeric golgi complex component 6 (COG6) genes on chromosome 13q13 (combined P = 2x10(-6) for rs7993214; OR = 0.71), the late cornified envelope gene cluster (LCE) from the Epidermal Differentiation Complex (PSORS4) (combined P = 6.2x10(-5) for rs6701216; OR 1.45) and a region of LD at 15q21 (combined P = 2.9x10(-5) for rs3803369; OR = 1.43). This region is of interest because it harbors ubiquitin-specific protease-8 whose processed pseudogene lies upstream from HLA-C. This region of 15q21 also harbors the gene for SPPL2A (signal peptide peptidase like 2a) which activates tumor necrosis factor alpha by cleavage, triggering the expression of IL12 in human dendritic cells. We also identified a novel PSA (and potentially PS) locus on chromosome 4q27. This region harbors the interleukin 2 (IL2) and interleukin 21 (IL21) genes and was recently shown to be associated with four autoimmune diseases (Celiac disease, Type 1 diabetes, Grave's disease and Rheumatoid Arthritis).", "title": "" }, { "docid": "40d8c7f1d24ef74fa34be7e557dca920", "text": "the rapid changing Internet environment has formed a competitive business setting, which provides opportunities for conducting businesses online. Availability of online transaction systems enable users to buy and make payment for products and services using the Internet platform. Thus, customers’ involvements in online purchasing have become an important trend. However, since the market is comprised of many different people and cultures, with diverse viewpoints, e-commerce businesses are being challenged by the reality of complex behavior of consumers. Therefore, it is vital to identify the factors that affect consumers purchasing decision through e-commerce in respective cultures and societies. In response to this claim, the purpose of this study is to explore the factors affecting customers’ purchasing decision through e-commerce (online shopping). Several factors such as trust, satisfaction, return policy, cash on delivery, after sale service, cash back warranty, business reputation, social and individual attitude, are considered. At this stage, the factors mentioned above, which are commonly considered influencing purchasing decision through online shopping in literature, are hypothesized to measure the causal relationship within the framework.", "title": "" } ]
scidocsrr
34f55fae069b6bbe6f1ca9a850542add
A Deep Learning Driven Active Framework for Segmentation of Large 3D Shape Collections
[ { "docid": "e8eaeb8a2bb6fa71997aa97306bf1bb0", "text": "Article history: Available online 18 February 2016", "title": "" }, { "docid": "87af466921c1c6a48518859e09e88fa8", "text": "Ensembles of neural networks are known to be much more robust and accurate than individual networks. However, training multiple deep networks for model averaging is computationally expensive. In this paper, we propose a method to obtain the seemingly contradictory goal of ensembling multiple neural networks at no additional training cost. We achieve this goal by training a single neural network, converging to several local minima along its optimization path and saving the model parameters. To obtain repeated rapid convergence, we leverage recent work on cyclic learning rate schedules. The resulting technique, which we refer to as Snapshot Ensembling, is simple, yet surprisingly effective. We show in a series of experiments that our approach is compatible with diverse network architectures and learning tasks. It consistently yields lower error rates than state-of-the-art single models at no additional training cost, and compares favorably with traditional network ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain error rates of 3.4% and 17.4% respectively.", "title": "" } ]
[ { "docid": "8093101949a96d27082712ce086bf11f", "text": "Transition-based dependency parsers often need sequences of local shift and reduce operations to produce certain attachments. Correct individual decisions hence require global information about the sentence context and mistakes cause error propagation. This paper proposes a novel transition system, arc-swift, that enables direct attachments between tokens farther apart with a single transition. This allows the parser to leverage lexical information more directly in transition decisions. Hence, arc-swift can achieve significantly better performance with a very small beam size. Our parsers reduce error by 3.7–7.6% relative to those using existing transition systems on the Penn Treebank dependency parsing task and English Universal Dependencies.", "title": "" }, { "docid": "d43e38bca0289c612c429f497171713c", "text": "Due to the unprecedented growth of unedited videos, finding highlights relevant to a text query in a set of unedited videos has become increasingly important. We refer this task as semantic highlight retrieval and propose a query-dependent video representation for retrieving a variety of highlights. Our method consists of two parts: 1) “viralets”, a mid-level representation bridging between semantic [Fig. 1(a)] and visual [Fig. 1(c)] spaces and 2) a novel Semantic-MODulation (SMOD) procedure to make viralets query-dependent (referred to as SMOD viralets). Given SMOD viralets, we train a single highlight ranker to predict the highlightness of clips with respect to a variety of queries (two examples in Fig. 1), whereas existing approaches can be applied only in a few predefined domains. Other than semantic highlight retrieval, viralets can also be used to associate relevant terms to each video. We utilize this property and propose a simple term prediction method based on nearest neighbor search. To conduct experiments, we collect a viral video dataset1 including users’ comments, highlights, and/or original videos. Among a testing database with 1189 clips (13% highlights and 87% non-highlights), our highlight ranker achieves 41.2% recall at top-10 retrieved clips. It is significantly higher than the state-of-the-art domain-specific highlight ranker and its extension. Similarly, our method also outperforms all baseline methods on the publicly available video highlight dataset. Finally, our simple term prediction method utilizing viralets outperforms the state-of-the-art matrix factorization method (adapted from Kalayeh et al.). 1 Viral videos refer to popular online videos. We focus on user-generated viral videos, which typically contain short highlight marked by users.", "title": "" }, { "docid": "29f17b7d7239a2845d513976e4981d6a", "text": "Agriculture is the backbone of the Indian economy. As all know that demand of agricultural products are increasing day by day as the population is ever increasing, so there is a need to minimize labor, limit the use of water and increase the production of crops. So there is a need to switch from traditional agriculture to the modern agriculture. The introduction of internet of things into agriculture modernization will help solve these problems. This paper presents the IOT based agriculture production system which will monitor or analyze the crop environment like temperature humidity and moisture content in soil. This paper uses the integration of RFID technology and sensors. 
As both have different objectives (sensors are for sensing and RFID technology is for identification), this will effectively solve the farmer's problems, increase the yield, and save time, power and money.", "title": "" }, { "docid": "47fccbf00b2caaad529d660073b7e9a0", "text": "The rapidly increasing popularity of community-based Question Answering (cQA) services, e.g., Yahoo! Answers and Baidu Zhidao, has attracted great attention from both academia and industry. Besides the basic problems, like question searching and answer finding, it should be noted that the low participation rate of users in cQA services is the crucial problem which limits their development potential. In this paper, we focus on addressing this problem by recommending answer providers, in which a question is given as a query and a ranked list of users is returned according to the likelihood of answering the question. Based on the intuitive idea for recommendation, we try to introduce a topic-level model to improve heuristic term-level methods, which are treated as the baselines. The proposed approach consists of two steps: (1) discovering latent topics in the content of questions and answers as well as latent interests of users to build user profiles; (2) recommending question answerers for newly arrived questions based on latent topics and the term-level model. Specifically, we develop a general generative model for questions and answers in cQA, which is then altered to obtain a novel computationally tractable Bayesian network model. Experiments are carried out on real-world data crawled from Yahoo! Answers from Jun 12 2007 to Aug 04 2007, which consists of 118510 questions, 772962 answers and 150324 users. The experimental results reveal significant improvements over the baseline methods and validate the positive influence of topic-level information.", "title": "" }, { "docid": "30eb03eca06dcc006a28b5e00431d9ed", "text": "We present for the first time a μW-power convolutional neural network for seizure detection running on a low-power microcontroller. On a dataset of 22 patients a median sensitivity of 100% is achieved. With a false positive rate of 20.7 fp/h and a short detection delay of 3.4 s it is suitable for application in an implantable closed-loop device.", "title": "" }, { "docid": "88a052d1e6e5d6776711b58e0711869d", "text": "We are in the midst of a revolution in military affairs (RMA) unlike any seen since the Napoleonic Age, when France transformed warfare with the concept of levée en masse. Chief of Naval Operations Admiral Jay Johnson has called it \"a fundamental shift from what we call platform-centric warfare to something we call network-centric warfare,\" and it will prove to be the most important RMA in the past 200 years.", "title": "" }, { "docid": "2f20bca0134eb1bd9d65c4791f94ddcc", "text": "We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training.
We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.", "title": "" }, { "docid": "b2b4e5162b3d7d99a482f9b82820d59e", "text": "Modern Internet-enabled smart lights promise energy efficiency and many additional capabilities over traditional lamps. However, these connected lights create a new attack surface, which can be maliciously used to violate users’ privacy and security. In this paper, we design and evaluate novel attacks that take advantage of light emitted by modern smart bulbs in order to infer users’ private data and preferences. The first two attacks are designed to infer users’ audio and video playback by a systematic observation and analysis of the multimediavisualization functionality of smart light bulbs. The third attack utilizes the infrared capabilities of such smart light bulbs to create a covert-channel, which can be used as a gateway to exfiltrate user’s private data out of their secured home or office network. A comprehensive evaluation of these attacks in various real-life settings confirms their feasibility and affirms the need for new privacy protection mechanisms.", "title": "" }, { "docid": "d5c4e44514186fa1d82545a107e87c94", "text": "Recent research in computer vision has increasingly focused on building systems for observing humans and understanding their look, activities, and behavior providing advanced interfaces for interacting with humans, and creating sensible models of humans for various purposes. This paper presents a new algorithm for detecting moving objects from a static background scene based on frame difference. Firstly, the first frame is captured through the static camera and after that sequence of frames is captured at regular intervals. Secondly, the absolute difference is calculated between the consecutive frames and the difference image is stored in the system. Thirdly, the difference image is converted into gray image and then translated into binary image. Finally, morphological filtering is done to remove noise.", "title": "" }, { "docid": "d1e9eb1357381310c4540a6dcbe8973a", "text": "We introduce a method for learning Bayesian networks that handles the discretization of continuous variables as an integral part of the learning process. The main ingredient in this method is a new metric based on the Minimal Description Length principle for choosing the threshold values for the discretization while learning the Bayesian network structure. This score balances the complexity of the learned discretization and the learned network structure against how well they model the training data. This ensures that the discretization of each variable introduces just enough intervals to capture its interaction with adjacent variables in the network. We formally derive the new metric, study its main properties, and propose an iterative algorithm for learning a discretization policy. Finally, we illustrate its behavior in applications to supervised learning.", "title": "" }, { "docid": "896500db22d621abf1a0fd88cedc8483", "text": "The motion analysis of human skeletons is crucial for human action recognition, which is one of the most active topics in computer vision. 
In this paper, we propose a fully end-to-end action-attending graphic neural network (A2GNN) for skeleton-based action recognition, in which each irregular skeleton is structured as an undirected attribute graph. To extract high-level semantic representation from skeletons, we perform the local spectral graph filtering on the constructed attribute graphs like the standard image convolution operation. Considering not all joints are informative for action analysis, we design an action-attending layer to detect those salient action units by adaptively weighting skeletal joints. Herein, the filtering responses are parameterized into a weighting function irrelevant to the order of input nodes. To further encode continuous motion variations, the deep features learnt from skeletal graphs are gathered along consecutive temporal slices and then fed into a recurrent gated network. Finally, the spectral graph filtering, action-attending, and recurrent temporal encoding are integrated together to jointly train for the sake of robust action recognition as well as the intelligibility of human actions. To evaluate our A2GNN, we conduct extensive experiments on four benchmark skeleton-based action datasets, including the large-scale challenging NTU RGB+D dataset. The experimental results demonstrate that our network achieves the state-of-the-art performances.", "title": "" }, { "docid": "7c10a44e5fa0f9e01951e89336c4b4d6", "text": "Previous studies have examined the online research behaviors of graduate students in terms of how they seek and retrieve research-related information on the Web across diverse disciplines. However, few have focused on graduate students’ searching activities, and particularly for their research tasks. Drawing on Kuiper, Volman, and Terwel’s (2008) three aspects of web literacy skills (searching, reading, and evaluating), this qualitative study aims to better understand a group of graduate engineering students’ searching, reading, and evaluating processes for research purposes. Through in-depth interviews and the think-aloud protocol, we compared the strategies employed by 22 Taiwanese graduate engineering students. The results showed that the students’ online research behaviors included seeking and obtaining, reading and interpreting, and assessing and evaluating sources. The findings suggest that specialized training for preparing novice researchers to critically evaluate relevant information or scholarly work to fulfill their research purposes is needed. Implications for enhancing the information literacy of engineering students are discussed.", "title": "" }, { "docid": "b59b5bfb0758a07a72c6bbd7f90212e0", "text": "The ease with which digital images can be manipulated without severe degradation of quality makes it necessary to be able to verify the authenticity of digital images. One way to establish the image authenticity is by computing a hash sequence from an image. This hash sequence must be robust against non content-altering manipulations, but must be able to show if the content of the image has been tampered with. Furthermore, the hash has to have enough differentiating power such that the hash sequences from two different images are not similar. This paper presents an image hashing system based on local Histogram of Oriented Gradients. 
The system is shown to have good differentiating power, robust against non content-altering manipulations such as filtering and JPEG compression and is sensitive to content-altering attacks.", "title": "" }, { "docid": "4b6da0b9c88f4d94abfbbcb08bb0fc43", "text": "In this paper we show how word embeddings can be used to increase the effectiveness of a state-of-the art Locality Sensitive Hashing (LSH) based first story detection (FSD) system over a standard tweet corpus. Vocabulary mismatch, in which related tweets use different words, is a serious hindrance to the effectiveness of a modern FSD system. In this case, a tweet could be flagged as a first story even if a related tweet, which uses different but synonymous words, was already returned as a first story. In this work, we propose a novel approach to mitigate this problem of lexical variation, based on tweet expansion. In particular, we propose to expand tweets with semantically related paraphrases identified via automatically mined word embeddings over a background tweet corpus. Through experimentation on a large data stream comprised of 50 million tweets, we show that FSD effectiveness can be improved by 9.5% over a state-of-the-art FSD system.", "title": "" }, { "docid": "14077e87744089bb731085590be99a75", "text": "The Vehicle Routing Problem (VRP) is an important problem occurring in many logistics systems. The objective of VRP is to serve a set of customers at minimum cost, such that every node is visited by exactly one vehicle only once. In this paper, we consider the Dynamic Vehicle Routing Problem (DVRP) which new customer demands are received along the day. Hence, they must be serviced at their locations by a set of vehicles in real time minimizing the total travel distance. The main goal of this research is to find a solution of DVRP using genetic algorithm. However we used some heuristics in addition during generation of the initial population and crossover for tuning the system to obtain better result. The computational experiments were applied to 22 benchmarks instances with up to 385 customers and the effectiveness of the proposed approach is validated by comparing the computational results with those previously presented in the literature.", "title": "" }, { "docid": "e4a3dfe53a66d0affd73234761e7e0e2", "text": "BACKGROUND\nWhether cannabis can cause psychotic or affective symptoms that persist beyond transient intoxication is unclear. We systematically reviewed the evidence pertaining to cannabis use and occurrence of psychotic or affective mental health outcomes.\n\n\nMETHODS\nWe searched Medline, Embase, CINAHL, PsycINFO, ISI Web of Knowledge, ISI Proceedings, ZETOC, BIOSIS, LILACS, and MEDCARIB from their inception to September, 2006, searched reference lists of studies selected for inclusion, and contacted experts. Studies were included if longitudinal and population based. 35 studies from 4804 references were included. Data extraction and quality assessment were done independently and in duplicate.\n\n\nFINDINGS\nThere was an increased risk of any psychotic outcome in individuals who had ever used cannabis (pooled adjusted odds ratio=1.41, 95% CI 1.20-1.65). Findings were consistent with a dose-response effect, with greater risk in people who used cannabis most frequently (2.09, 1.54-2.84). Results of analyses restricted to studies of more clinically relevant psychotic disorders were similar. Depression, suicidal thoughts, and anxiety outcomes were examined separately. 
Findings for these outcomes were less consistent, and fewer attempts were made to address non-causal explanations, than for psychosis. A substantial confounding effect was present for both psychotic and affective outcomes.\n\n\nINTERPRETATION\nThe evidence is consistent with the view that cannabis increases risk of psychotic outcomes independently of confounding and transient intoxication effects, although evidence for affective outcomes is less strong. The uncertainty about whether cannabis causes psychosis is unlikely to be resolved by further longitudinal studies such as those reviewed here. However, we conclude that there is now sufficient evidence to warn young people that using cannabis could increase their risk of developing a psychotic illness later in life.", "title": "" }, { "docid": "d03dbec2a7361aaa41097703654e6a5d", "text": "1Department of Computer Science Electrical Engineering, University of Missouri-Kansas City, Kansas City, MO 64110, USA 2Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi 62102, Taiwan 3Key Laboratory of Network Security and Cryptology, Fujian Normal University, Fujian 350007, P. R. China 4Department of Information Engineering and Computer Science, Feng Chia University, No. 100, Wenhwa Rd., Xitun Dist., Taichung 40724, Taiwan ∗Corresponding author: ccc@cs.ccu.edu.tw", "title": "" }, { "docid": "a2a63b4e7864a6a7aa057d2addf50065", "text": "Research in automatic analysis of sign language has largely focused on recognizing the lexical (or citation) form of sign gestures as they appear in continuous signing, and developing algorithms that scale well to large vocabularies. However, successful recognition of lexical signs is not sufficient for a full understanding of sign language communication. Nonmanual signals and grammatical processes which result in systematic variations in sign appearance are integral aspects of this communication but have received comparatively little attention in the literature. In this survey, we examine data acquisition, feature extraction and classification methods employed for the analysis of sign language gestures. These are discussed with respect to issues such as modeling transitions between signs in continuous signing, modeling inflectional processes, signer independence, and adaptation. We further examine works that attempt to analyze nonmanual signals and discuss issues related to integrating these with (hand) sign gestures. We also discuss the overall progress toward a true test of sign recognition systems--dealing with natural signing by native signers. We suggest some future directions for this research and also point to contributions it can make to other fields of research. Web-based supplemental materials (appendicies) which contain several illustrative examples and videos of signing can be found at www.computer.org/publications/dlib.", "title": "" }, { "docid": "48b88774957a6d30ae9d0a97b9643647", "text": "The defect detection on manufactures is extremely important in the optimization of industrial processes; particularly, the visual inspection plays a fundamental role. The visual inspection is often carried out by a human expert. However, new technology features have made this inspection unreliable. For this reason, many researchers have been engaged to develop automatic analysis processes of manufactures and automatic optical inspections in the industrial production of printed circuit boards. 
Among the defects that could arise in this industrial process, those of the solder joints are very important, because they can lead to an incorrect functioning of the board; moreover, the amount of the solder paste can give some information on the quality of the industrial process. In this paper, a neural network-based automatic optical inspection system for the diagnosis of solder joint defects on printed circuit boards assembled in surface mounting technology is presented. The diagnosis is handled as a pattern recognition problem with a neural network approach. Five types of solder joints have been classified in respect to the amount of solder paste in order to perform the diagnosis with a high recognition rate and a detailed classification able to give information on the quality of the manufacturing process. The images of the boards under test are acquired and then preprocessed to extract the region of interest for the diagnosis. Three types of feature vectors are evaluated from each region of interest, which are the images of the solder joints under test, by exploiting the properties of the wavelet transform and the geometrical characteristics of the preprocessed images. The performances of three different classifiers which are a multilayer perceptron, a linear vector quantization, and a K-nearest neighbor classifier are compared. The n-fold cross-validation has been exploited to select the best architecture for the neural classifiers, while a number of experiments have been devoted to estimating the best value of K in the K-NN. The results have proved that the MLP network fed with the GW-features has the best recognition rate. This approach allows to carry out the diagnosis burden on image processing, feature extraction, and classification algorithms, reducing the cost and the complexity of the acquisition system. In fact, the experimental results suggest that the reason for the high recognition rate in the solder joint classification is due to the proper preprocessing steps followed as well as to the information contents of the features", "title": "" } ]
scidocsrr
4416ba7d54f47b41f654c4358ef2d632
Non-rigid Object Tracking via Deformable Patches Using Shape-Preserved KCF and Level Sets
[ { "docid": "198311a68ad3b9ee8020b91d0b029a3c", "text": "Online multi-object tracking aims at producing complete tracks of multiple objects using the information accumulated up to the present moment. It still remains a difficult problem in complex scenes, because of frequent occlusion by clutter or other objects, similar appearances of different objects, and other factors. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first propose the tracklet confidence using the detectability and continuity of a tracklet, and formulate a multi-object tracking problem based on the tracklet confidence. The multi-object tracking problem is then solved by associating tracklets in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive associations. Here, for reliable association between tracklets and detections, we also propose a novel online learning method using an incremental linear discriminant analysis for discriminating the appearances of objects. By exploiting the proposed learning method, tracklet association can be successfully achieved even under severe occlusion. Experiments with challenging public datasets show distinct performance improvement over other batch and online tracking methods.", "title": "" }, { "docid": "85a076e58f4d117a37dfe6b3d68f5933", "text": "We propose a new model for active contours to detect objects in a given image, based on techniques of curve evolution, Mumford-Shah (1989) functional for segmentation and level sets. Our model can detect objects whose boundaries are not necessarily defined by the gradient. We minimize an energy which can be seen as a particular case of the minimal partition problem. In the level set formulation, the problem becomes a \"mean-curvature flow\"-like evolving the active contour, which will stop on the desired boundary. However, the stopping term does not depend on the gradient of the image, as in the classical active contour models, but is instead related to a particular segmentation of the image. We give a numerical algorithm using finite differences. Finally, we present various experimental results and in particular some examples for which the classical snakes methods based on the gradient are not applicable. Also, the initial curve can be anywhere in the image, and interior contours are automatically detected.", "title": "" } ]
[ { "docid": "eb766409144157d20fd0c709b3d92035", "text": "Primary human lymphedema (Milroy's disease), characterized by a chronic and disfiguring swelling of the extremities, is associated with heterozygous inactivating missense mutations of the gene encoding vascular endothelial growth factor C/D receptor (VEGFR-3). Here, we describe a mouse model and a possible treatment for primary lymphedema. Like the human patients, the lymphedema (Chy) mice have an inactivating Vegfr3 mutation in their germ line, and swelling of the limbs because of hypoplastic cutaneous, but not visceral, lymphatic vessels. Neuropilin (NRP)-2 bound VEGF-C and was expressed in the visceral, but not in the cutaneous, lymphatic endothelia, suggesting that it may participate in the pathogenesis of lymphedema. By using virus-mediated VEGF-C gene therapy, we were able to generate functional lymphatic vessels in the lymphedema mice. Our results suggest that growth factor gene therapy is applicable to human lymphedema and provide a paradigm for other diseases associated with mutant receptors.", "title": "" }, { "docid": "76ce7807d5afcb5fb5e1d4bf65d01489", "text": "Tile antiradical activities of various antioxidants were determined using the free radical, 2.2-Diphenyl-l-pict3,1hydrazyl (DPPI-I°). In its radical form, DPPI-I ° has an absorption band at 515 nm which disappears upon reduction by an antiradical compound. Twenty compounds were reacted with the DPPI-I ° and shown to follow one of three possible reaction kinetic types. Ascorbie acid, isoascorbic acid and isoeugenol reacted quickly with the DPPI-I ° reaching a steady state immediately. Rosmarinic acid and 6-tocopherol reacted a little slower and reached a steady state within 30 rain. The remaining compounds reacted more progressively with the DPPH ° reaching a steady state from I to 6 h. Caffeic acid, gentisic acid and gallic acid showed the highest antiradical activities with a stoichiometo, of 4 to 6 reduced DPPH ° molecules pet\" molecule of antioxidant. Vanillin, phenol, y-resort3'lic acid and vanillic acid were found to be poor antiradical compounds. The stoichiometry, for the other 13 phenolic compounds varied from one to three reduced DPPH ° molecules pet\" molecule of antioxidant. Possible mechanisms are proposed to explain the e.werimental results.", "title": "" }, { "docid": "834af0b828702aae0482a2e31e3f8a40", "text": "We routinely hear vendors claim that their systems are “secure.” However, without knowing what assumptions are made by the vendor, it is hard to justify such a claim. Prior to claiming the security of a system, it is important to identify the threats to the system in question. Enumerating the threats to a system helps system architects develop realistic and meaningful security requirements. In this paper, we investigate how threat modeling can be used as foundations for the specification of security requirements. Although numerous works have been published on threat modeling, there is a lack of integrated, systematic approach toward threat modeling for complex systems. We examine the differences between modeling software products and complex systems, and outline our approach for identifying threats of networked systems. 
We also present three case studies of threat modeling: Software-Defined Radio, a network traffic monitoring tool (VisFlowConnect), and a cluster security monitoring tool (NVisionCC).", "title": "" }, { "docid": "56ec3abe17259cae868e17dc2163fc0e", "text": "This paper reports a case study about lessons learned and usability issues encountered in a usability inspection of a digital library system called the Networked Computer Science Technical Reference Library (NCSTRL). Using a co-discovery technique with a team of three expert usability inspectors (the authors), we performed a usability inspection driven by a broad set of anticipated user tasks. We found many good design features in NCSTRL, but the primary result of a usability inspection is a list of usability problems as candidates for fixing. The resulting problems are organized by usability problem type and by system functionality, with emphasis on the details of problems specific to digital library functions. The resulting usability problem list was used to illustrate a cost/importance analysis technique that trades off importance to fix against cost to fix. The problems are sorted by the ratio of importance to cost, producing a priority ranking for resolution.", "title": "" }, { "docid": "f649db3b6fa6a929ac0434b12ddeea54", "text": "The rapid growth of e-Commerce amongst private sectors and Internet usage amongst citizens has vastly stimulated e-Government initiatives from many countries. The Thailand e-Government initiative is based on the government's long-term strategic policy that aims to reform and overhaul the Thai bureaucracy. This study attempted to identify the e-Excise success factors by employing the IS success model. The study focused on finding the factors that may contribute to the success of the e-Excise initiative. The Delphi Technique was used to investigate the determinant factors for the success of the e-Excise initiative. Three-rounds of data collection were conducted with 77 active users from various industries. The results suggest that by increasing Trust in the e-Government website, Perceptions of Information Quality, Perceptions of System Quality, and Perceptions of Service Quality will influence System Usage and User Satisfaction, and will ultimately have consequences for the Perceived Net Benefits.", "title": "" }, { "docid": "51f4b288d0c902e083a0eede6f342ba2", "text": "Transactional memory (TM) is a promising synchronization mechanism for the next generation of multicore processors. Best-effort Hardware Transactional Memory (HTM) designs, such as Sun's prototype Rock processor and AMD's proposed Advanced Synchronization Facility (ASF), can efficiently execute many transactions, but abort in some cases due to various limitations. Hybrid TM systems can use a compatible software TM (STM) in such cases.\n We introduce a family of hybrid TMs built using the recent NOrec STM algorithm that, unlike existing hybrid approaches, provide both low overhead on hardware transactions and concurrent execution of hardware and software transactions. We evaluate implementations for Rock and ASF, exploring how the differing HTM designs affect optimization choices. Our investigation yields valuable input for designers of future best-effort HTMs.", "title": "" }, { "docid": "365b95202095942c4b2b43a5e6f6e04e", "text": "Abstract. In this paper we use the contraction mapping theorem to obtain asymptotic stability results of the zero solution of a nonlinear neutral Volterra integro-differential equation with variable delays. 
Some conditions which allow the coefficient functions to change sign and do not ask the boundedness of delays are given. An asymptotic stability theorem with a necessary and sufficient condition is proved, which improve and extend the results in the literature. Two examples are also given to illustrate this work.", "title": "" }, { "docid": "8d9d2bc18bede24fede2e3d14b0e7f87", "text": "Artificial Neural Network (ANN) forms a useful tool in pattern recognition tasks. Collection of five, eight or more cards in a cards game are normally called poker hands. There are various poker variations, each with different poker hands ranking. In the present paper, an attempt is made to solve poker hand classification problem using different learning paradigms and architectures of artificial neural network: multi-layer feed-forward Backpropagation (supervised) and self-organizing map (un-supervised). Poker data set is touted to be a difficult dataset for classification algorithms. Experimental results are presented to demonstrate the performance of the proposed system. The paper also aims to suggest about training algorithms and training parameters that must be chosen in order to solve poker hand classification problem using neural network model. As neural networks are the most convenient tools for handling complicated data sets with real values, one of the most important objectives of the paper is to explain how a neural network can also be used successfully for classification kind of problems involving categorical attributes. The proposed model succeeded in classification of poker hands with 94% classification accuracy.", "title": "" }, { "docid": "b163fb3faa31f6db35599d32d7946523", "text": "Humans learn how to behave directly through environmental experience and indirectly through rules and instructions. Behavior analytic research has shown that instructions can control behavior, even when such behavior leads to sub-optimal outcomes (Hayes, S. (Ed.). 1989. Rule-governed behavior: cognition, contingencies, and instructional control. Plenum Press.). Here we examine the control of behavior through instructions in a reinforcement learning task known to depend on striatal dopaminergic function. Participants selected between probabilistically reinforced stimuli, and were (incorrectly) told that a specific stimulus had the highest (or lowest) reinforcement probability. Despite experience to the contrary, instructions drove choice behavior. We present neural network simulations that capture the interactions between instruction-driven and reinforcement-driven behavior via two potential neural circuits: one in which the striatum is inaccurately trained by instruction representations coming from prefrontal cortex/hippocampus (PFC/HC), and another in which the striatum learns the environmentally based reinforcement contingencies, but is \"overridden\" at decision output. Both models capture the core behavioral phenomena but, because they differ fundamentally on what is learned, make distinct predictions for subsequent behavioral and neuroimaging experiments. Finally, we attempt to distinguish between the proposed computational mechanisms governing instructed behavior by fitting a series of abstract \"Q-learning\" and Bayesian models to subject data. 
The best-fitting model supports one of the neural models, suggesting the existence of a \"confirmation bias\" in which the PFC/HC system trains the reinforcement system by amplifying outcomes that are consistent with instructions while diminishing inconsistent outcomes.", "title": "" }, { "docid": "84d8ff8724df86ce100ddfbb150e7446", "text": "Adaptive Gaussian mixtures have been used for modeling nonstationary temporal distributions of pixels in video surveillance applications. However, a common problem for this approach is balancing between model convergence speed and stability. This paper proposes an effective scheme to improve the convergence rate without compromising model stability. This is achieved by replacing the global, static retention factor with an adaptive learning rate calculated for each Gaussian at every frame. Significant improvements are shown on both synthetic and real video data. Incorporating this algorithm into a statistical framework for background subtraction leads to an improved segmentation performance compared to a standard method.", "title": "" }, { "docid": "d92e0e7ff8d0dabcac5b773d361a26a3", "text": "Several studies on brain Magnetic Resonance Images (MRI) show relations between neuroanatomical abnormalities of brain structures and neurological disorders, such as Attention Defficit Hyperactivity Disorder (ADHD) and Alzheimer. These abnormalities seem to be correlated with the size and shape of these structures, and there is an active field of research trying to find accurate methods for automatic MRI segmentation. In this project, we study the automatic segmentation of structures from the Basal Ganglia and we propose a new methodology based on Stacked Sparse Autoencoders (SSAE). SSAE is a strategy that belongs to the family of Deep Machine Learning and consists on a supervised learning method based on an unsupervisely pretrained Feed-forward Neural Network. Moreover, we present two approaches based on 2D and 3D features of the images. We compare the results obtained on the different regions of interest with those achieved by other machine learning techniques such as Neural Networks and Support Vector Machines. We observed that in most cases SSAE improves those other methods. We demonstrate that the 3D features do not report better results than the 2D ones as could be thought. Furthermore, we show that SSAE provides state-of-the-art Dice Coefficient results (left, right): Caudate (90.63±1.4, 90.31±1.7), Putamen (91.03±1.4, 90.82±1.4), Pallidus (85.11±1.8, 83.47±2.2), Accumbens (74.26±4.4, 74.46±4.6).", "title": "" }, { "docid": "7d7e4ddaa9c582c28e9186036fc0a375", "text": "It has become common to distribute software in forms that are isomorphic to the original source code. An important example is Java bytecode. Since such codes are easy to decompile, they increase the risk of malicious reverse engineering attacks.In this paper we describe the design of a Java code obfuscator, a tool which - through the application of code transformations - converts a Java program into an equivalent one that is more difficult to reverse engineer.We describe a number of transformations which obfuscate control-flow. 
Transformations are evaluated with respect to potency (To what degree is a human reader confused?), resilience (How well are automatic deobfuscation attacks resisted?), cost (How much time/space overhead is added?), and stealth (How well does obfuscated code blend in with the original code?).The resilience of many control-altering transformations rely on the resilience of opaque predicates. These are boolean valued expressions whose values are known to the obfuscator but difficult to determine for an automatic deobfuscator. We show how to construct resilient, cheap, and stealthy opaque predicates based on the intractability of certain static analysis problems such as alias analysis.", "title": "" }, { "docid": "43abb5eadd40c7e5e5d13c7ff33da9d7", "text": "Roll-to-Roll (R2R) production of thin film based display components (e.g., active matrix TFT backplanes and touch screens) combine the advantages of the use > of inexpensive, lightweight, and flexible substrates with high throughput production. Significant cost reduction opportunities can also be found in terms of processing tool capital cost, utilized substrate area, and process gas flow when compared with batch processing systems. Applied Materials has developed a variety of different web handling and coating technologies/platforms to enable high volume R2R manufacture of thin film silicon solar cells, TFT active matrix backplanes, touch screen devices, and ultra-high barriers for organic electronics. The work presented in this chapter therefore describes the latest advances in R2R PVD processing and principal challenges inherent in moving from lab and pilot scale manufacturing to high volume manufacturing of flexible display devices using CVD for the deposition of active semiconductors layers, gate insulators, and high performance barrier/passivation layers. This chapter also includes brief description of the process and cost advantage of the use of rotatable PVD source technologies (primarily for use in flexible touch panel manufacture) and a summary of the current performance levels obtained for R2R processed amorphous silicon and IGZO TFT backplanes. Results will also be presented for barrier film for final device/frontplane encapsulation for display applications.", "title": "" }, { "docid": "e2fb4ed617cffabba2f28b95b80a30b3", "text": "The importance of information security education, information security training, and information security awareness in organisations cannot be overemphasised. This paper presents working definitions for information security education, information security training and information security awareness. An investigation to determine if any differences exist between information security education, information security training and information security awareness was conducted. This was done to help institutions understand when they need to train or educate employees and when to introduce information security awareness programmes. A conceptual analysis based on the existing literature was used for proposing working definitions, which can be used as a reference point for future information security researchers. Three important attributes (namely focus, purpose and method) were identified as the distinguishing characteristics of information security education, information security training and information security awareness. 
It was found that these information security concepts are different in terms of their focus, purpose and methods of delivery.", "title": "" }, { "docid": "24e0fb7247644ba6324de9c86fdfeb12", "text": "There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide insights into their behavior and thought processes. XAI allows users and parts of the internal system to be more transparent, providing explanations of their decisions in some level of detail. These explanations are important to ensure algorithmic fairness, identify potential bias/problems in the training data, and to ensure that the algorithms perform as expected. However, explanations produced by these systems is neither standardized nor systematically assessed. In an effort to create best practices and identify open challenges, we provide our definition of explainability and show how it can be used to classify existing literature. We discuss why current approaches to explanatory methods especially for deep neural networks are insufficient. Finally, based on our survey, we conclude with suggested future research directions for explanatory artificial intelligence.", "title": "" }, { "docid": "cf6138d5af2946363188a3696cc2b7c0", "text": "The Rational Unified Process® (RUP®) is a software engineering process framework. It captures many of the best practices in modern software development in a form that is suitable for a wide range of projects and organizations. It embeds object-oriented techniques and uses the UML as the principal notation for the several models that are built during the development. The RUP is also an open process framework that allows software organizations to tailor the process to their specific need, and to capture their own specific process know-how in the form of process components. Many process components are now developed by various organizations to cover different domains, technologies, tools, or type of development, and these components can be assembled to rapidly compose a suitable process. This tutorial will introduce the basic concepts and principles, which lie under the RUP framework, and show concrete examples of its usage.", "title": "" }, { "docid": "c2a3344c607cf06c24ed8d2664243284", "text": "It is common for cloud users to require clusters of inter-connected virtual machines (VMs) in a geo-distributed IaaS cloud, to run their services. Compared to isolated VMs, key challenges on dynamic virtual cluster (VC) provisioning (computation + communication resources) lie in two folds: (1) optimal placement of VCs and inter-VM traffic routing involve NP-hard problems, which are non-trivial to solve offline, not to mention if an online efficient algorithm is sought; (2) an efficient pricing mechanism is missing, which charges a market-driven price for each VC as a whole upon request, while maximizing system efficiency or provider revenue over the entire span. This paper proposes efficient online auction mechanisms to address the above challenges. 
We first design SWMOA, a novel online algorithm for dynamic VC provisioning and pricing, achieving truthfulness, individual rationality, computation efficiency, and <inline-formula><tex-math notation=\"LaTeX\">$(1+2\\log \\mu)$</tex-math><alternatives> <inline-graphic xlink:href=\"wu-ieq1-2601905.gif\"/></alternatives></inline-formula>-competitiveness in social welfare, where <inline-formula><tex-math notation=\"LaTeX\">$\\mu$</tex-math><alternatives> <inline-graphic xlink:href=\"wu-ieq2-2601905.gif\"/></alternatives></inline-formula> is related to the problem size. Next, applying a randomized reduction technique, we convert the social welfare maximizing auction into a revenue maximizing online auction, PRMOA, achieving <inline-formula><tex-math notation=\"LaTeX\">$O(\\log \\mu)$ </tex-math><alternatives><inline-graphic xlink:href=\"wu-ieq3-2601905.gif\"/></alternatives></inline-formula> -competitiveness in provider revenue, as well as truthfulness, individual rationality and computation efficiency. We investigate auction design in different cases of resource cost functions in the system. We validate the efficacy of the mechanisms through solid theoretical analysis and trace-driven simulations.", "title": "" }, { "docid": "f3e382102c57e9d8f5349e374d1e6907", "text": "In SCARA robots, which are often used in industrial applications, all joint axes are parallel, covering three degrees of freedom in translation and one degree of freedom in rotation. Therefore, conventional approaches for the handeye calibration of articulated robots cannot be used for SCARA robots. In this paper, we present a new linear method that is based on dual quaternions and extends the work of [1] for SCARA robots. To improve the accuracy, a subsequent nonlinear optimization is proposed. We address several practical implementation issues and show the effectiveness of the method by evaluating it on synthetic and real data.", "title": "" }, { "docid": "bbd378407abb1c2a9a5016afee40c385", "text": "One approach to the generation of natural-sounding synthesized speech waveforms is to select and concatenate units from a large speech database. Units (in the current work, phonemes) are selected to produce a natural realisation of a target phoneme sequence predicted from text which is annotated with prosodic and phonetic context information. We propose that the units in a synthesis database can be considered as a state transition network in which the state occupancy cost is the distance between a database unit and a target, and the transition cost is an estimate of the quality of concatenation of two consecutive units. This framework has many similarities to HMM-based speech recognition. A pruned Viterbi search is used to select the best units for synthesis from the database. This approach to waveform synthesis permits training from natural speech: two methods for training from speech are presented which provide weights which produce more natural speech than can be obtained by hand-tuning.", "title": "" }, { "docid": "1e7721225d84896a72f2ea790570ecbd", "text": "We have developed a Blumlein line pulse generator which utilizes the superposition of electrical pulses launched from two individually switched pulse forming lines. By using a fast power MOSFET as a switch on each end of the Blumlein line, we were able to generate pulses with amplitudes of 1 kV across a 100-Omega load. Pulse duration and polarity can be controlled by the temporal delay in the triggering of the two switches. 
In addition, the use of identical switches allows us to overcome pulse distortions arising from the use of non-ideal switches in the traditional Blumlein configuration. With this pulse generator, pulses with durations between 8 and 300 ns were applied to Jurkat cells (a leukemia cell line) to investigate the pulse dependent increase in calcium levels. The development of the calcium levels in individual cells was studied by spinning-disc confocal fluorescent microscopy with the calcium indicator, fluo-4. With this fast imaging system, fluorescence changes, representing calcium mobilization, could be resolved with an exposure of 5 ms every 18 ms. For a 60-ns pulse duration, each rise in intracellular calcium was greater as the electric field strength was increased from 25 kV/cm to 100 kV/cm. Only for the highest electric field strength is the response dependent on the presence of extracellular calcium. The results complement ion-exchange mechanisms previously observed during the charging of cellular membranes, which were suggested by observations of membrane potential changes during exposure.", "title": "" } ]
scidocsrr
90d757d40e80bc376b4bcaef82a8a6e3
Neural Discourse Modeling of Conversations
[ { "docid": "64330f538b3d8914cbfe37565ab0d648", "text": "The compositionality of meaning extends beyond the single sentence. Just as words combine to form the meaning of sentences, so do sentences combine to form the meaning of paragraphs, dialogues and general discourse. We introduce both a sentence model and a discourse model corresponding to the two levels of compositionality. The sentence model adopts convolution as the central operation for composing semantic vectors and is based on a novel hierarchical convolutional neural network. The discourse model extends the sentence model and is based on a recurrent neural network that is conditioned in a novel way both on the current sentence and on the current speaker. The discourse model is able to capture both the sequentiality of sentences and the interaction between different speakers. Without feature engineering or pretraining and with simple greedy decoding, the discourse model coupled to the sentence model obtains state of the art performance on a dialogue act classification experiment.", "title": "" } ]
[ { "docid": "60f31d60213abe65faec3eb69edb1eea", "text": "In this paper, a novel multi-layer four-way out-of-phase power divider based on substrate integrated waveguide (SIW) is proposed. The four-way power division is realized by 3-D mode coupling; vertical partitioning of a SIW followed by lateral coupling to two half-mode SIW. The measurement results show the excellent insertion loss (S<inf>21</inf>, S<inf>31</inf>, S<inf>41</inf>, S<inf>51</inf>: −7.0 ± 0.5 dB) and input return loss (S<inf>11</inf>: −10 dB) in X-band (7.63 GHz ∼ 11.12 GHz). We expect that the proposed power divider play an important role for the integration of compact multi-way SIW circuits.", "title": "" }, { "docid": "f79f807b8b3f6516a14eaea37f9de82c", "text": "Negative consumer opinion poses a potential barrier to the application of nutrigenomic intervention. The present study has aimed to determine attitudes toward genetic testing and personalised nutrition among the European public. An omnibus opinion survey of a representative sample aged 14-55+ years (n 5967) took place in France, Italy, Great Britain, Portugal, Poland and Germany during June 2005 as part of the Lipgene project. A majority of respondents (66 %) reported that they would be willing to undergo genetic testing and 27 % to follow a personalised diet. Individuals who indicated a willingness to have a genetic test for the personalising of their diets were more likely to report a history of high blood cholesterol levels, central obesity and/or high levels of stress than those who would have a test only for general interest. Those who indicated that they would not have a genetic test were more likely to be male and less likely to report having central obesity. Individuals with a history of high blood cholesterol were less likely than those who did not to worry if intervention foods contained GM ingredients. Individuals who were aware that they had health problems associated with the metabolic syndrome appeared particularly favourable toward nutrigenomic intervention. These findings are encouraging for the future application of personalised nutrition provided that policies are put in place to address public concern about how genetic information is used and held.", "title": "" }, { "docid": "41098050e76786afbb892d4cd1ffaad2", "text": "Human grasps, especially whole-hand grasps, are difficult to animate because of the high number of degrees of freedom of the hand and the need for the hand to conform naturally to the object surface. Captured human motion data provides us with a rich source of examples of natural grasps. However, for each new object, we are faced with the problem of selecting the best grasp from the database and adapting it to that object. This paper presents a data-driven approach to grasp synthesis. We begin with a database of captured human grasps. To identify candidate grasps for a new object, we introduce a novel shape matching algorithm that matches hand shape to object shape by identifying collections of features having similar relative placements and surface normals. This step returns many grasp candidates, which are clustered and pruned by choosing the grasp best suited for the intended task. For pruning undesirable grasps, we develop an anatomically-based grasp quality measure specific to the human hand. Examples of grasp synthesis are shown for a variety of objects not present in the original database. 
This algorithm should be useful both as an animator tool for posing the hand and for automatic grasp synthesis in virtual environments.", "title": "" }, { "docid": "c9750e95b3bd422f0f5e73cf6c465b35", "text": "Lingual nerve damage complicating oral surgery would sometimes require electrographic exploration. Nevertheless, direct recording of conduction in lingual nerve requires its puncture at the foramen ovale. This method is too dangerous to be practiced routinely in these diagnostic indications. The aim of our study was to assess spatial relationships between lingual nerve and mandibular ramus in the infratemporal fossa using an original technique. Therefore, ten lingual nerves were dissected on five fresh cadavers. All the nerves were catheterized with a 3/0 wire. After meticulous repositioning of the nerve and medial pterygoid muscle reinsertion, CT-scan examinations were performed with planar acquisitions and three-dimensional reconstructions. Localization of lingual nerve in the infratemporal fossa was assessed successively at the level of the sigmoid notch of the mandible, lingula and third molar. At the level of the lingula, lingual nerve was far from the maxillary vessels; mean distance between the nerve and the anterior border of the ramus was 19.6 mm. The posteriorly opened angle between the medial side of the ramus and the line joining the lingual nerve and the anterior border of the ramus measured 17°. According to these findings, we suggest that the lingual nerve might be reached through the intra-oral puncture at the intermaxillary commissure; therefore, we modify the inferior alveolar nerve block technique to propose a safe and reproducible protocol likely to be performed routinely as electrographic exploration of the lingual nerve. What is more, this original study protocol provided interesting educational materials and could be developed for the conception of realistic 3D virtual anatomy supports.", "title": "" }, { "docid": "5ed98c54020d4e5c7b0fa8b66436f3e1", "text": "An important issue in the design of a mobile computing system is how to manage the location information of mobile clients. In the existing commercial cellular mobile computing systems, a twotier architecture is adopted (Mouly and Pautet, 1992). However, the two-tier architecture is not scalable. In the literatures (Pitoura and Samaras, 2001; Pitoura and Fudos, 1998), a hierarchical database structure is proposed in which the location information of mobile clients within a cell is managed by the location database responsible for the cell. The location databases of different cells are organized into a tree-like structure to facilitate the search of mobile clients. Although this architecture can distribute the updates and the searching workload amongst the location databases in the system, location update overheads can be very expensive when the mobility of clients is high. In this paper, we study the issues on how to generate location updates under the distance-based method for systems using hierarchical location databases. A cost-based method is proposed for calculating the optimal distance threshold with the objective to minimize the total location management cost. Furthermore, under the existing hierarchical location database scheme, the tree structure of the location databases is static. It cannot adapt to the changes in mobility patterns of mobile clients. This will affect the total location management cost in the system. 
In the second part of the paper, we present a re-organization strategy to re-structure the hierarchical tree of location databases according to the mobility patterns of the clients with the objective to minimize the location management cost. Extensive simulation experiments have been performed to investigate the re-organization strategy when our location update generation method is applied.", "title": "" }, { "docid": "5d43586ebd66c6fc09683558536b89e9", "text": "In this paper, we present an overview of UHF RFID tag performance characterization. We review the link budget of RFID system, explain different tag performance characteristics, and describe various testing methods. We also review state-of-the art test systems present on the market today.", "title": "" }, { "docid": "41b1a0c362c7bdb77b7dbcc20adcd532", "text": "Augmented reality involves the use of models and their associated renderings to supplement information in a real scene. In order for this information to be relevant or meaningful, the models must be positioned and displayed in such a way that they align with their corresponding real objects. For practical reasons this alignment cannot be known a priori, and cannot be hard-wired into a system. Instead a simple, reliable alignment or calibration process is performed so that computer models can be accurately registered with their real-life counterparts. We describe the design and implementation of such a process and we show how it can be used to create convincing interactions between real and virtual objects.", "title": "" }, { "docid": "b50d85f1993525c01370b9d90063e135", "text": "This paper is aimed at analyzing the behavior of a packed bed latent heat thermal energy storage system. The packed bed is composed of spherical capsules filled with paraffin wax as PCM usable with a solar water heating system. The model developed in this study uses the fundamental equations similar to those of Schumann, except that the phase change phenomena of PCM inside the capsules are analyzed by using enthalpy method. The equations are numerically solved, and the results obtained are used for the thermal performance analysis of both charging and discharging processes. The effects of the inlet heat transfer fluid temperature (Stefan number), mass flow rate and phase change temperature range on the thermal performance of the capsules of various radii have been investigated. The results indicate that for the proper modeling of performance of the system the phase change temperature range of the PCM must be accurately known, and should be taken into account. 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "4eb37f87312ce521c30858f6a97edd59", "text": "We propose an automatic framework for quality assessment of a photograph as well as analysis of its aesthetic attributes. In contrast to the previous methods that rely on manually designed features to account for photo aesthetics, our method automatically extracts such features using a pretrained deep convolutional neural network (DCNN). To make the DCNN-extracted features more suited to our target tasks of photo quality assessment and aesthetic attribute analysis, we propose a novel feature encoding scheme, which supports vector machines-driven sparse restricted Boltzmann machines, which enhances sparseness of features and discrimination between target classes. 
Experimental results show that our method outperforms the current state-of-the-art methods in automatic photo quality assessment, and gives aesthetic attribute ratings that can be used for photo editing. We demonstrate that our feature encoding scheme can also be applied to general object classification task to achieve performance gains.", "title": "" }, { "docid": "f1325dd1350acf612dc1817db693a3d6", "text": "Software for the measurement of genetic diversity (SMOGD) is a web-based application for the calculation of the recently proposed genetic diversity indices G'(ST) and D(est) . SMOGD includes bootstrapping functionality for estimating the variance, standard error and confidence intervals of estimated parameters, and SMOGD also generates genetic distance matrices from pairwise comparisons between populations. SMOGD accepts standard, multilocus Genepop and Arlequin formatted input files and produces HTML and tab-delimited output. This allows easy data submission, quick visualization, and rapid import of results into spreadsheet or database programs.", "title": "" }, { "docid": "f054e4464f2ef68ad9127afe00108b9a", "text": "RFID systems often use near-field magnetic coupling to implement communication channels. The advertised operational range of these channels is less than 10 cm and therefore several implemented systems assume that the communication channel is location limited and therefore relatively secure. Nevertheless, there have been repeated questions raised about the vulnerability of these near-field systems against eavesdropping and skimming attacks. In this paper we revisit the topic of RFID eavesdropping and skimming attacks, surveying previous work and explaining why the feasibility of practical attacks is still a relevant and novel research topic. We present a brief overview of the radio characteristics for popular HF RFID standards and present some practical results for eavesdropping experiments against tokens adhering to the ISO 14443 and ISO 15693 standards. We also discuss how an attacker could construct a low-cost eavesdropping device using easy to obtain parts and reference designs. Finally, we present results for skimming experiments against ISO 14443 tokens.", "title": "" }, { "docid": "c300043b5546f8ca75070aa66d05a1d3", "text": "In recent years high-speed electromagnetic repulsion mechanism (ERM), which is produced based on the eddy-current effect, has been widely applied to vacuum circuit breakers. One of the challenges for the design of ERM is to improve the speed of ERM by optimization design. In this paper, a novel co-simulation model is proposed. The related electromagnetic field, mechanical field and structural field of ERM have been analyzed through co-simulation. Results show that achieving separation in a few milliseconds is possible. Besides, considering the metal plate, as a driver of frequent operation, is possible to reach its mechanic limit, the stress and the strain of it are analyzed. In addition, according to the parametric analysis, the relationship between improving speed and optimizing parameters, e.g., the turns of the coil, the structural size, the storing form of energy and the initial gap are investigated.", "title": "" }, { "docid": "cd25829b5e42a77485ceefd18b682410", "text": "Members of the Fleischner Society compiled a glossary of terms for thoracic imaging that replaces previous glossaries published in 1984 and 1996 for thoracic radiography and computed tomography (CT), respectively. 
The need to update the previous versions came from the recognition that new words have emerged, others have become obsolete, and the meaning of some terms has changed. Brief descriptions of some diseases are included, and pictorial examples (chest radiographs and CT scans) are provided for the majority of terms.", "title": "" }, { "docid": "b89259a915856b309a02e6e7aa6c957f", "text": "The paper proposes a comprehensive information security maturity model (ISMM) that addresses both technical and socio/non-technical security aspects. The model is intended for securing e-government services (implementation and service delivery) in an emerging and increasing security risk environment. The paper utilizes extensive literature review and survey study approaches. A total of eight existing ISMMs were selected and critically analyzed. Models were then categorized into security awareness, evaluation and management orientations. Based on the model’s strengths – three models were selected to undergo further analyses and then synthesized. Each of the three selected models was either from the security awareness, evaluation or management orientations category. To affirm the findings – a survey study was conducted into six government organizations located in Tanzania. The study was structured to a large extent by the security controls adopted from the Security By Consensus (SBC) model. Finally, an ISMM with five critical maturity levels was proposed. The maturity levels were: undefined, defined, managed, controlled and optimized. The papers main contribution is the proposed model that addresses both technical and non-technical security services within the critical maturity levels. Additionally, the paper enhances awareness and understanding on the needs for security in e-government services to stakeholders.", "title": "" }, { "docid": "409d104fa3e992ac72c65b004beaa963", "text": "The 19-item Body-Image Questionnaire, developed by our team and first published in this journal in 1987 by Bruchon-Schweitzer, was administered to 1,222 male and female French subjects. A principal component analysis of their responses yielded an axis we interpreted as a general Body Satisfaction dimension. The four-factor structure observed in 1987 was not replicated. Body Satisfaction was associated with sex, health, and with current and future emotional adjustment.", "title": "" }, { "docid": "78835f284b953e50c55c31d49695701f", "text": "The Named-Data Networking (NDN) has emerged as a clean-slate Internet proposal on the wave of Information-Centric Networking. Although the NDN's data-plane seems to offer many advantages, e.g., native support for multicast communications and flow balance, it also makes the network infrastructure vulnerable to a specific DDoS attack, the Interest Flooding Attack (IFA). In IFAs, a botnet issuing unsatisfiable content requests can be set up effortlessly to exhaust routers' resources and cause a severe performance drop to legitimate users. So far several countermeasures have addressed this security threat, however, their efficacy was proved by means of simplistic assumptions on the attack model. Therefore, we propose a more complete attack model and design an advanced IFA. We show the efficiency of our novel attack scheme by extensively assessing some of the state-of-the-art countermeasures. 
Further, we release the software to perform this attack as open source tool to help design future more robust defense mechanisms.", "title": "" }, { "docid": "ccbed79c4f8504594f37303beb6e9e0b", "text": "Recent attacks on Bitcoin’s peer-to-peer (P2P) network demonstrated that its transaction-flooding protocols, which are used to ensure network consistency, may enable user deanonymization—the linkage of a user’s IP address with her pseudonym in the Bitcoin network. In 2015, the Bitcoin community responded to these attacks by changing the network’s flooding mechanism to a different protocol, known as diffusion. However, it is unclear if diffusion actually improves the system’s anonymity. In this paper, we model the Bitcoin networking stack and analyze its anonymity properties, both preand post-2015. The core problem is one of epidemic source inference over graphs, where the observational model and spreading mechanisms are informed by Bitcoin’s implementation; notably, these models have not been studied in the epidemic source detection literature before. We identify and analyze near-optimal source estimators. This analysis suggests that Bitcoin’s networking protocols (both preand post-2015) offer poor anonymity properties on networks with a regular-tree topology. We confirm this claim in simulation on a 2015 snapshot of the real Bitcoin P2P network topology.", "title": "" }, { "docid": "39ed08e9a08b7d71a4c177afe8f0056a", "text": "This paper proposes an anticipation model of potential customers’ purchasing behavior. This model is inferred from past purchasing behavior of loyal customers and the web server log files of loyal and potential customers by means of clustering analysis and association rules analysis. Clustering analysis collects key characteristics of loyal customers’ personal information; these are used to locate other potential customers. Association rules analysis extracts knowledge of loyal customers’ purchasing behavior, which is used to detect potential customers’ near-future interest in a star product. Despite using offline analysis to filter out potential customers based on loyal customers’ personal information and generate rules of loyal customers’ click streams based on loyal customers’ web log data, an online analysis which observes potential customers’ web logs and compares it with loyal customers’ click stream rules can more readily target potential customers who may be interested in the star products in the near future. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "61d02f77b270a24756b2ab2164ece5d0", "text": "The transdifferentiation of epithelial cells into motile mesenchymal cells, a process known as epithelial–mesenchymal transition (EMT), is integral in development, wound healing and stem cell behaviour, and contributes pathologically to fibrosis and cancer progression. This switch in cell differentiation and behaviour is mediated by key transcription factors, including SNAIL, zinc-finger E-box-binding (ZEB) and basic helix–loop–helix transcription factors, the functions of which are finely regulated at the transcriptional, translational and post-translational levels. The reprogramming of gene expression during EMT, as well as non-transcriptional changes, are initiated and controlled by signalling pathways that respond to extracellular cues. 
Among these, transforming growth factor-β (TGFβ) family signalling has a predominant role; however, the convergence of signalling pathways is essential for EMT.", "title": "" }, { "docid": "5b131fbca259f07bd1d84d4f61761903", "text": "We aimed to identify a blood flow restriction (BFR) endurance exercise protocol that would both maximize cardiopulmonary and metabolic strain, and minimize the perception of effort. Twelve healthy males (23 ± 2 years, 75 ± 7 kg) performed five different exercise protocols in randomized order: HI, high-intensity exercise starting at 105% of the incremental peak power (P peak); I-BFR30, intermittent BFR at 30% P peak; C-BFR30, continuous BFR at 30% P peak; CON30, control exercise without BFR at 30% P peak; I-BFR0, intermittent BFR during unloaded exercise. Cardiopulmonary, gastrocnemius oxygenation (StO2), capillary lactate ([La]), and perceived exertion (RPE) were measured. V̇O2, ventilation (V̇ E), heart rate (HR), [La] and RPE were greater in HI than all other protocols. However, muscle StO2 was not different between HI (set1—57.8 ± 5.8; set2—58.1 ± 7.2%) and I-BFR30 (set1—59.4 ± 4.1; set2—60.5 ± 6.6%, p < 0.05). While physiologic responses were mostly similar between I-BFR30 and C-BFR30, [La] was greater in I-BFR30 (4.2 ± 1.1 vs. 2.6 ± 1.1 mmol L−1, p = 0.014) and RPE was less (5.6 ± 2.1 and 7.4 ± 2.6; p = 0.014). I-BFR30 showed similar reduced muscle StO2 compared with HI, and increased blood lactate compared to C-BFR30 exercise. Therefore, this study demonstrates that endurance cycling with intermittent BFR promotes muscle deoxygenation and metabolic strain, which may translate into increased endurance training adaptations while minimizing power output and RPE.", "title": "" } ]
scidocsrr
bd404c364c2400990168678acf70ae6f
Change-Point Detection in Time-Series Data Based on Subspace Identification
[ { "docid": "dca74df16e3a90726d51b3222483ac94", "text": "We are concerned with the issue of detecting outliers and change points from time series. In the area of data mining, there have been increased interest in these issues since outlier detection is related to fraud detection, rare event discovery, etc., while change-point detection is related to event/trend change detection, activity monitoring, etc. Although, in most previous work, outlier detection and change point detection have not been related explicitly, this paper presents a unifying framework for dealing with both of them. In this framework, a probabilistic model of time series is incrementally learned using an online discounting learning algorithm, which can track a drifting data source adaptively by forgetting out-of-date statistics gradually. A score for any given data is calculated in terms of its deviation from the learned model, with a higher score indicating a high possibility of being an outlier. By taking an average of the scores over a window of a fixed length and sliding the window, we may obtain a new time series consisting of moving-averaged scores. Change point detection is then reduced to the issue of detecting outliers in that time series. We compare the performance of our framework with those of conventional methods to demonstrate its validity through simulation and experimental applications to incidents detection in network security.", "title": "" }, { "docid": "0d41a6d4cf8c42ccf58bccd232a46543", "text": "Novelty detection is the ident ification of new or unknown data or signal that a machine learning system is not aware of during training. In this paper we focus on neural network based approaches for novelty detection. Statistical approaches are covered in part-I paper.", "title": "" } ]
[ { "docid": "3dcb93232121be1ff8a2d96ecb25bbdd", "text": "We describe the approach that won the preliminary phase of the German traffic sign recognition benchmark with a better-than-human recognition rate of 98.98%.We obtain an even better recognition rate of 99.15% by further training the nets. Our fast, fully parameterizable GPU implementation of a Convolutional Neural Network does not require careful design of pre-wired feature extractors, which are rather learned in a supervised way. A CNN/MLP committee further boosts recognition performance.", "title": "" }, { "docid": "c8ba829a6b0e158d1945bbb0ed68045b", "text": "Specific pieces of music can elicit strong emotions in listeners and, possibly in connection with these emotions, can be remembered even years later. However, episodic memory for emotional music compared with less emotional music has not yet been examined. We investigated whether emotional music is remembered better than less emotional music. Also, we examined the influence of musical structure on memory performance. Recognition of 40 musical excerpts was investigated as a function of arousal, valence, and emotional intensity ratings of the music. In the first session the participants judged valence and arousal of the musical pieces. One week later, participants listened to the 40 old and 40 new musical excerpts randomly interspersed and were asked to make an old/new decision as well as to indicate arousal and valence of the pieces. Musical pieces that were rated as very positive were recognized significantly better. Musical excerpts rated as very positive are remembered better. Valence seems to be an important modulator of episodic long-term memory for music. Evidently, strong emotions related to the musical experience facilitate memory formation and retrieval.", "title": "" }, { "docid": "391cce3ac9ab87e31203637d89a8a082", "text": "MicroRNAs (miRNAs) are small conserved non-coding RNA molecules that post-transcriptionally regulate gene expression by targeting the 3' untranslated region (UTR) of specific messenger RNAs (mRNAs) for degradation or translational repression. miRNA-mediated gene regulation is critical for normal cellular functions such as the cell cycle, differentiation, and apoptosis, and as much as one-third of human mRNAs may be miRNA targets. Emerging evidence has demonstrated that miRNAs play a vital role in the regulation of immunological functions and the prevention of autoimmunity. Here we review the many newly discovered roles of miRNA regulation in immune functions and in the development of autoimmunity and autoimmune disease. Specifically, we discuss the involvement of miRNA regulation in innate and adaptive immune responses, immune cell development, T regulatory cell stability and function, and differential miRNA expression in rheumatoid arthritis and systemic lupus erythematosus.", "title": "" }, { "docid": "802d66fda1701252d1addbd6d23f6b4c", "text": "Powered wheelchair users often struggle to drive safely and effectively and, in more critical cases, can only get around when accompanied by an assistant. To address these issues, we propose a collaborative control mechanism that assists users as and when they require help. The system uses a multiple-hypothesis method to predict the driver's intentions and, if necessary, adjusts the control signals to achieve the desired goal safely. 
The main emphasis of this paper is on a comprehensive evaluation, where we not only look at the system performance but also, perhaps more importantly, characterize the user performance in an experiment that combines eye tracking with a secondary task. Without assistance, participants experienced multiple collisions while driving around the predefined route. Conversely, when they were assisted by the collaborative controller, not only did they drive more safely but also they were able to pay less attention to their driving, resulting in a reduced cognitive workload. We discuss the importance of these results and their implications for other applications of shared control, such as brain-machine interfaces, where it could be used to compensate for both the low frequency and the low resolution of the user input.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "8cd701723c72b16dfe7d321cb657ee31", "text": "A coupled-inductor double-boost inverter (CIDBI) is proposed for microinverter photovoltaic (PV) module system, and the control strategy applied to it is analyzed. Also, the operation principle of the proposed inverter is discussed and the gain from dc to ac is deduced in detail. The main attribute of the CIDBI topology is the fact that it generates an ac output voltage larger than the dc input one, depending on the instantaneous duty cycle and turns ratio of the coupled inductor as well. This paper points out that the gain is proportional to the duty cycle approximately when the duty cycle is around 0.5 and the synchronized pulsewidth modulation can be applicable to this novel inverter. Finally, the proposed inverter servers as a grid inverter in the grid-connected PV system and the experimental results show that the CIDBI can implement the single-stage PV-grid-connected power generation competently and be of small volume and high efficiency by leaving out the transformer or the additional dc-dc converter.", "title": "" }, { "docid": "f3811a34b2abd34d20e24e90ab9fe046", "text": "Recently, the development of neural machine translation (NMT) has significantly improved the translation quality of automatic machine translation. While most sentences are more accurate and fluent than translations by statistical machine translation (SMT)-based systems, in some cases, the NMT system produces translations that have a completely different meaning. This is especially the case when rare words occur. When using statistical machine translation, it has already been shown that significant gains can be achieved by simplifying the input in a preprocessing step. A commonly used example is the pre-reordering approach. In this work, we used phrase-based machine translation to pre-translate the input into the target language. Then a neural machine translation system generates the final hypothesis using the pre-translation. Thereby, we use either only the output of the phrase-based machine translation (PBMT) system or a combination of the PBMT output and the source sentence. 
We evaluate the technique on the English to German translation task. Using this approach we are able to outperform the PBMT system as well as the baseline neural MT system by up to 2 BLEU points. We analyzed the influence of the quality of the initial system on the final result.", "title": "" }, { "docid": "6485211d35cef2766675d78311864ff0", "text": "In this paper, we investigate architectural and practical issues related to the setup of a broadband home network solution. Our experience led us to the consideration of a hybrid, wireless and wired, Mesh-Network to enable high data rate service delivery everywhere in the home. We demonstrate the effectiveness of our proposal using a real experimental testbed. This latter consists of a multi-hop mesh network composed of a home gateway and \"extenders\" supporting several types of physical connectivity including PLC, WiFi, and Ethernet. The solution also includes a layer 2 implementation of the OLSR protocol for path selection. We developed an extension of this protocol for QoS assurance and to enable the proper execution of existing services. We have also implemented a fast WiFi handover algorithm to ensure service continuity in case of user mobility among the extenders inside the home.", "title": "" }, { "docid": "4ab3db4b0c338dbe8d5bb9e1f49f2a5c", "text": "BACKGROUND\nSub-Saharan African (SSA) countries are currently experiencing one of the most rapid epidemiological transitions characterized by increasing urbanization and changing lifestyle factors. This has resulted in an increase in the incidence of non-communicable diseases, especially cardiovascular disease (CVD). This double burden of communicable and chronic non-communicable diseases has long-term public health impact as it undermines healthcare systems.\n\n\nPURPOSE\nThe purpose of this paper is to explore the socio-cultural context of CVD risk prevention and treatment in sub-Saharan Africa. We discuss risk factors specific to the SSA context, including poverty, urbanization, developing healthcare systems, traditional healing, lifestyle and socio-cultural factors.\n\n\nMETHODOLOGY\nWe conducted a search on African Journals On-Line, Medline, PubMed, and PsycINFO databases using combinations of the key country/geographic terms, disease and risk factor specific terms such as \"diabetes and Congo\" and \"hypertension and Nigeria\". Research articles on clinical trials were excluded from this overview. Contrarily, articles that reported prevalence and incidence data on CVD risk and/or articles that report on CVD risk-related beliefs and behaviors were included. Both qualitative and quantitative articles were included.\n\n\nRESULTS\nThe epidemic of CVD in SSA is driven by multiple factors working collectively. Lifestyle factors such as diet, exercise and smoking contribute to the increasing rates of CVD in SSA. Some lifestyle factors are considered gendered in that some are salient for women and others for men. For instance, obesity is a predominant risk factor for women compared to men, but smoking still remains mostly a risk factor for men. 
Additionally, structural and system level issues such as lack of infrastructure for healthcare, urbanization, poverty and lack of government programs also drive this epidemic and hampers proper prevention, surveillance and treatment efforts.\n\n\nCONCLUSION\nUsing an African-centered cultural framework, the PEN3 model, we explore future directions and efforts to address the epidemic of CVD risk in SSA.", "title": "" }, { "docid": "10b0ab2570a7bba1ac1f575a0555eb4a", "text": "It is well known that ozone concentration depends on air/oxygen input flow rate and power consumed by the ozone chamber. For every chamber, there exists a unique optimum flow rate that results in maximum ozone concentration. If the flow rate is increased (beyond) or decreased (below) from this optimum value, the ozone concentration drops. This paper proposes a technique whereby the concentration can be maintained even if the flow rate increases. The idea is to connect n number of ozone chambers in parallel, with each chamber designed to operate at its optimum point. Aside from delivering high ozone concentration at high flow rate, the proposed system requires only one power supply to drive all these (multiple) chambers simultaneously. In addition, due to its modularity, the system is very flexible, i.e., the number of chambers can be added or removed as demanded by the (output) ozone requirements. This paper outlines the chamber design using mica as dielectric and the determination of its parameters. To verify the concept, three chambers are connected in parallel and driven by a single transformer-less LCL resonant power supply. Moreover, a closed-loop feedback controller is implemented to ensure that the voltage gain remains at the designated value even if the number of chambers is changed or there is a variation in the components. It is shown that the flow rate can be increased linearly with the number of chambers while maintaining a constant ozone concentration.", "title": "" }, { "docid": "e0382c9d739281b4bc78f4a69827ac37", "text": "Of numerous proposals to improve the accuracy of naive Bayes by weakening its attribute independence assumption, both LBR and Super-Parent TAN have demonstrated remarkable error performance. However, both techniques obtain this outcome at a considerable computational cost. We present a new approach to weakening the attribute independence assumption by averaging all of a constrained class of classifiers. In extensive experiments this technique delivers comparable prediction accuracy to LBR and Super-Parent TAN with substantially improved computational efficiency at test time relative to the former and at training time relative to the latter. The new algorithm is shown to have low variance and is suited to incremental learning.", "title": "" }, { "docid": "81d07b747f12f10066571c784e212991", "text": "This work presents a bi-arm rolled monopole for ultrawide-band (UWB) applications. The roll monopole is constructed by wrapping a planar monopole. The impedance and radiation characteristics of the proposed roll monopole are experimentally compared with a rectangular planar monopole and strip monopole. Furthermore, the transfer responses of transmit-receive antenna systems comprising two identical monopoles are examined across the UWB band. The characteristics of the monopoles are investigated in both time and frequency domains for UWB single-band and multiple-band schemes. 
The study shows that the proposed bi-arm rolled monopole is capable of achieving broadband and omnidirectional radiation characteristics within 3.1-10.6 GHz for UWB wireless communications.", "title": "" }, { "docid": "a2d851b76d6abcb3d9377c566b8bf6d9", "text": "Many fabrication processes for polymeric objects include melt extrusion, in which the molten polymer is conveyed by a ram or a screw and the melt is then forced through a shaping die in continuous processing or into a mold for the manufacture of discrete molded parts. The properties of the fabricated solid object, including morphology developed during cooling and solidification, depend in part on the stresses and orientation induced during the melt shaping. Most polymers used for commercial processing are of sufficiently high molecular weight that the polymer chains are highly entangled in the melt, resulting in flow behavior that differs qualitatively from that of low-molecular-weight liquids. Obvious manifestations of the differences from classical Newtonian fluids are a strongly shear-dependent viscosity and finite stresses normal to the direction of shear in rectilinear flow, transients of the order of seconds for the buildup or relaxation of stresses following a change in shear rate, a finite phase angle between stress and shear rate in oscillatory shear, ratios of extensional to shear viscosities that are considerably greater than 3, and substantial extrudate swell on extrusion from a capillary or slit. These rheological characteristics of molten polymers have been reviewed in textbooks (e.g. Larson 1999, Macosko 1994); the recent research emphasis in rheology has been to establish meaningful constitutive models that incorporate chain behavior at a molecular level. All polymer melts and concentrated solutions exhibit instabilities during extrusion when the stresses to which they are subjected become sufficiently high. The first manifestation of extrusion instability is usually the appearance of distortions on the extrudate surface, sometimes accompanied by oscillating flow. Gross distortion of the extrudate usually follows. The sequence of extrudate distortions", "title": "" }, { "docid": "5a0cf2582fab28fe07d215435632b610", "text": "5G radio access networks are expected to provide very high capacity, ultra-reliability and low latency, seamless mobility, and ubiquitous end-user experience anywhere and anytime. Driven by such stringent service requirements coupled with the expected dense deployments and diverse use case scenarios, the architecture of 5G New Radio (NR) wireless access has further evolved from the traditionally cell-centric radio access to a more flexible beam-based user-centric radio access. This article provides an overview of the NR system multi-beam operation in terms of initial access procedures and mechanisms associated with synchronization, system information, and random access. We further discuss inter-cell mobility handling in NR and its reliance on new downlink-based measurements to compensate for a lack of always-on reference signals in NR. Furthermore, we describe some of the user-centric coordinated transmission mechanisms envisioned in NR in order to realize seamless intra/inter-cell handover between physical transmission and reception points and reduce the interference levels across the network.", "title": "" }, { "docid": "5e840c5649492d5e93ddef2b94432d5f", "text": "Commercially available laser lithography systems have been available for several years. 
One such system manufactured by Heidelberg Instruments can be used to produce masks for lithography or to directly pattern photoresist using either a 3 micron or 1 micron beam. These systems are designed to operate using computer aided design (CAD) mask files, but also have the capability of using images. In image mode, the power of the exposure is based on the intensity of each pixel in the image. This results in individual pixels that are the size of the beam, which establishes the smallest feature that can be patterned. When developed, this produces a range of heights within the photoresist which can then be transferred to the material beneath and used for a variety of applications. Previous research efforts have demonstrated that this process works well overall, but is limited in resolution and feature size due to the pixel approach of the exposure. However, if we modify the method used, much smaller features can be resolved, without the pixilation. This is achieved by utilizing multiple exposures of slightly different CAD type files in sequence. While the smallest beam width is approximately 1 micron, the beam positioning accuracy is much smaller, with 40 nm step changes in beam position based on the machine's servo gearing and optical design. When exposing in CAD mode, the beam travels along lines at constant power, so by automating multiple files in succession, and employing multiple smaller exposures of lower intensity, a similar result can be achieved. With this line exposure approach, pixilation can be greatly reduced. Due to the beam positioning accuracy of this mode, the effective resolution between lines is on the order of 40 nm steps, resulting in unexposed features of much smaller size and higher resolution.", "title": "" }, { "docid": "01ee1036caeb4a64477aa19d0f8a6429", "text": "In recent years, Twitter has become one of the most important microblogging services of the Web 2.0. Among the possible uses it allows, it can be employed for communicating and broadcasting information in real time. The goal of this research is to analyze the task of automatic tweet generation from a text summarization perspective in the context of the journalism genre. To achieve this, different state-of-the-art summarizers are selected and employed for producing multi-lingual tweets in two languages (English and Spanish). A wide experimental framework is proposed, comprising the creation of a new corpus, the generation of the automatic tweets, and their assessment through a quantitative and a qualitative evaluation, where informativeness, indicativeness and interest are key criteria that should be ensured in the proposed context. From the results obtained, it was observed that although the original tweets were considered as model tweets with respect to their informativeness, they were not among the most interesting ones from a human viewpoint. Therefore, relying only on these tweets may not be the ideal way to communicate news through Twitter, especially if a more personalized and catchy way of reporting news wants to be performed. In contrast, we showed that recent text summarization techniques may be more appropriate, reflecting a balance between indicativeness and interest, even if their content was different from the tweets delivered by the news providers.", "title": "" }, { "docid": "7d860b431f44d42572fc0787bf452575", "text": "Time-of-flight (TOF) measurement capability promises to improve PET image quality. 
We characterized the physical and clinical PET performance of the first Biograph mCT TOF PET/CT scanner (Siemens Medical Solutions USA, Inc.) in comparison with its predecessor, the Biograph TruePoint TrueV. In particular, we defined the improvements with TOF. The physical performance was evaluated according to the National Electrical Manufacturers Association (NEMA) NU 2-2007 standard with additional measurements to specifically address the TOF capability. Patient data were analyzed to obtain the clinical performance of the scanner. As expected for the same size crystal detectors, a similar spatial resolution was measured on the mCT as on the TruePoint TrueV. The mCT demonstrated modestly higher sensitivity (increase by 19.7 ± 2.8%) and peak noise equivalent count rate (NECR) (increase by 15.5 ± 5.7%) with similar scatter fractions. The energy, time and spatial resolutions for a varying single count rate of up to 55 Mcps resulted in 11.5 ± 0.2% (FWHM), 527.5 ± 4.9 ps (FWHM) and 4.1 ± 0.0 mm (FWHM), respectively. With the addition of TOF, the mCT also produced substantially higher image contrast recovery and signal-to-noise ratios in a clinically-relevant phantom geometry. The benefits of TOF were clearly demonstrated in representative patient images.", "title": "" }, { "docid": "6392a6c384613f8ed9630c8676f0cad8", "text": "References D. Bruckner, J. Rosen, and E. R. Sparks. deepviz: Visualizing convolutional neural networks for image classification. 2014. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012. Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 9(2579-2605):85, 2008. Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hods Lipson. Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579, 2015. Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Computer vision–ECCV 2014, pages 818–833. Springer, 2014. Network visualization of ReVACNN", "title": "" }, { "docid": "9e638e09b77463e8c232c7960d49a544", "text": "Force feedback coupled with visual display allows people to interact intuitively with complex virtual environments. For this synergy of haptics and graphics to flourish, however, haptic systems must be capable of modeling environments with the same richness, complexity and interactivity that can be found in existing graphic systems. To help meet this challenge, we have developed a haptic rendering system that allows for the efficient tactile display of graphical information. The system uses a common high-level framework to model contact constraints, surface shading, friction and texture. The multilevel control system also helps ensure that the haptic device will remain stable even as the limits of the renderer’s capabilities are reached.", "title": "" } ]
scidocsrr
7f2303e532cd188758f34799820759d4
RUN: Residual U-Net for Computer-Aided Detection of Pulmonary Nodules without Candidate Selection
[ { "docid": "bf85db5489a61b5fca8d121de198be97", "text": "In this paper, we propose a novel recursive recurrent neural network (R2NN) to model the end-to-end decoding process for statistical machine translation. R2NN is a combination of recursive neural network and recurrent neural network, and in turn integrates their respective capabilities: (1) new information can be used to generate the next hidden state, like recurrent neural networks, so that language model and translation model can be integrated naturally; (2) a tree structure can be built, as recursive neural networks, so as to generate the translation candidates in a bottom up manner. A semi-supervised training approach is proposed to train the parameters, and the phrase pair embedding is explored to model translation confidence directly. Experiments on a Chinese to English translation task show that our proposed R2NN can outperform the stateof-the-art baseline by about 1.5 points in BLEU.", "title": "" } ]
[ { "docid": "0ab6ee50661e92fe7935ddd2c447f793", "text": "In this paper, a high-performance single-phase transformerless online uninterruptible power supply (UPS) is proposed. The proposed UPS is composed of a four-leg-type converter, which operates as a rectifier, a battery charger/discharger, and an inverter. The rectifier has the capability of power-factor collection and regulates a constant dc-link voltage. The battery charger/discharger eliminates the need for the transformer and the increase of the number of battery and supplies the power demanded by the load to the dc-link capacitor in the event of the input-power failure or abrupt decrease of the input voltage. The inverter provides a regulated sinusoidal output voltage to the load and limits the output current under an impulsive load. The control of the dc-link voltage enhances the transient response of the output voltage and the utilization of the input power. By utilizing the battery charger/discharger, the overall efficiency of the system is improved, and the size, weight, and cost of the system are significantly reduced. Experimental results obtained with a 3-kVA prototype show a normal efficiency of over 95.6% and an input power factor of over 99.7%.", "title": "" }, { "docid": "84d2e697b2f2107d34516909f22768c6", "text": "PURPOSE\nSchema therapy was first applied to individuals with borderline personality disorder (BPD) over 20 years ago, and more recent work has suggested efficacy across a range of disorders. The present review aimed to systematically synthesize evidence for the efficacy and effectiveness of schema therapy in reducing early maladaptive schema (EMS) and improving symptoms as applied to a range of mental health disorders in adults including BPD, other personality disorders, eating disorders, anxiety disorders, and post-traumatic stress disorder.\n\n\nMETHODS\nStudies were identified through electronic searches (EMBASE, PsycINFO, MEDLINE from 1990 to January 2016).\n\n\nRESULTS\nThe search produced 835 titles, of which 12 studies were found to meet inclusion criteria. A significant number of studies of schema therapy treatment were excluded as they failed to include a measure of schema change. The Clinical Trial Assessment Measure was used to rate the methodological quality of studies. Schema change and disorder-specific symptom change was found in 11 of the 12 studies.\n\n\nCONCLUSIONS\nSchema therapy has demonstrated initial significant results in terms of reducing EMS and improving symptoms for personality disorders, but formal mediation analytical studies are lacking and rigorous evidence for other mental health disorders is currently sparse.\n\n\nPRACTITIONER POINTS\nFirst review to investigate whether schema therapy leads to reduced maladaptive schemas and symptoms across mental health disorders. Limited evidence for schema change with schema therapy in borderline personality disorder (BPD), with only three studies conducting correlational analyses. Evidence for schema and symptom change in other mental health disorders is sparse, and so use of schema therapy for disorders other than BPD should be based on service user/patient preference and clinical expertise and/or that the theoretical underpinnings of schema therapy justify the use of it therapeutically. 
Further work is needed to develop the evidence base for schema therapy for other disorders.", "title": "" }, { "docid": "b93919bbb2dab3a687cccb71ee515793", "text": "The processing and analysis of colour images has become an important area of study and application. The representation of the RGB colour space in 3D-polar coordinates (hue, saturation and brightness) can sometimes simplify this task by revealing characteristics not visible in the rectangular coordinate representation. The literature describes many such spaces (HLS, HSV, etc.), but many of them, having been developed for computer graphics applications, are unsuited to image processing and analysis tasks. We describe the flaws present in these colour spaces, and present three prerequisites for 3D-polar coordinate colour spaces well-suited to image processing and analysis. We then derive 3D-polar coordinate representations which satisfy the prerequisites, namely a space based on the norm which has efficient linear transform functions to and from the RGB space; and an improved HLS (IHLS) space. The most important property of this latter space is a “well-behaved” saturation coordinate which, in contrast to commonly used ones, always has a small numerical value for near-achromatic colours, and is completely independent of the brightness function. Three applications taking advantage of the good properties of the IHLS space are described: the calculation of a saturation-weighted hue mean and of saturation-weighted hue histograms, and feature extraction using mathematical morphology. 1Updated July 16, 2003. 2Jean Serra is with the Centre de Morphologie Mathématique, Ecole des Mines de Paris, 35 rue Saint-Honoré, 77305 Fontainebleau cedex, France.", "title": "" }, { "docid": "3d25100e6a9410c6c08fae14135043d0", "text": "We propose to learn semantic spatio-temporal embeddings for videos to support high-level video analysis. The first step of the proposed embedding employs a deep architecture consisting of two channels of convolutional neural networks (capturing appearance and local motion) followed by their corresponding Gated Recurrent Unit encoders for capturing longer-term temporal structure of the CNN features. The resultant spatio-temporal representation (a vector) is used to learn a mapping via a multilayer perceptron to the word2vec semantic embedding space, leading to a semantic interpretation of the video vector that supports high-level analysis. We demonstrate the usefulness and effectiveness of this new video representation by experiments on action recognition, zero-shot video classification, and “word-to-video” retrieval, using the UCF-101 dataset.", "title": "" }, { "docid": "a4d7596cfcd4a9133c5677a481c88cf0", "text": "The understanding of where humans look in a scene is a problem of great interest in visual perception and computer vision. When eye-tracking devices are not a viable option, models of human attention can be used to predict fixations. In this paper we give two contribution. First, we show a model of visual attention that is simply based on deep convolutional neural networks trained for object classification tasks. A method for visualizing saliency maps is defined which is evaluated in a saliency prediction task. Second, we integrate the information of these maps with a bottom-up differential model of eye-movements to simulate visual attention scanpaths. 
Results on saliency prediction and scores of similarity with human scanpaths demonstrate the effectiveness of this model.", "title": "" }, { "docid": "37e65ab2fc4d0a9ed5b8802f41a1a2a2", "text": "This paper is based on a panel discussion held at the Artificial Intelligence in Medicine Europe (AIME) conference in Amsterdam, The Netherlands, in July 2007. It had been more than 15 years since Edward Shortliffe gave a talk at AIME in which he characterized artificial intelligence (AI) in medicine as being in its \"adolescence\" (Shortliffe EH. The adolescence of AI in medicine: will the field come of age in the '90s? Artificial Intelligence in Medicine 1993;5:93-106). In this article, the discussants reflect on medical AI research during the subsequent years and characterize the maturity and influence that has been achieved to date. Participants focus on their personal areas of expertise, ranging from clinical decision-making, reasoning under uncertainty, and knowledge representation to systems integration, translational bioinformatics, and cognitive issues in both the modeling of expertise and the creation of acceptable systems.", "title": "" }, { "docid": "7263e768247914490f3b91c916587614", "text": "Activity Recognition is an emerging field of research, born from the larger fields of ubiquitous computing, context-aware computing and multimedia. Recently, recognizing everyday life activities becomes one of the challenges for pervasive computing. In our work, we developed a novel wearable system easy to use and comfortable to bring. Our wearable system is based on a new set of 20 computationally efficient features and the Random Forest classifier. We obtain very encouraging results with classification accuracy of human activities recognition of up", "title": "" }, { "docid": "de3789fe0dccb53fe8555e039fde1bc6", "text": "Estimating consumer surplus is challenging because it requires identification of the entire demand curve. We rely on Uber’s “surge” pricing algorithm and the richness of its individual level data to first estimate demand elasticities at several points along the demand curve. We then use these elasticity estimates to estimate consumer surplus. Using almost 50 million individuallevel observations and a regression discontinuity design, we estimate that in 2015 the UberX service generated about $2.9 billion in consumer surplus in the four U.S. cities included in our analysis. For each dollar spent by consumers, about $1.60 of consumer surplus is generated. Back-of-the-envelope calculations suggest that the overall consumer surplus generated by the UberX service in the United States in 2015 was $6.8 billion.", "title": "" }, { "docid": "47e9515f703c840c38ab0c3095f48a3a", "text": "Hnefatafl is an ancient Norse game - an ancestor of chess. In this paper, we report on the development of computer players for this game. In the spirit of Blondie24, we evolve neural networks as board evaluation functions for different versions of the game. An unusual aspect of this game is that there is no general agreement on the rules: it is no longer much played, and game historians attempt to infer the rules from scraps of historical texts, with ambiguities often resolved on gut feeling as to what the rules must have been in order to achieve a balanced game. 
We offer the evolutionary method as a means by which to judge the merits of alternative rule sets", "title": "" }, { "docid": "da63c4d9cc2f3278126490de54c34ce5", "text": "The growth of Web-based social networking and the properties of those networks have created great potential for producing intelligent software that integrates a user's social network and preferences. Our research looks particularly at assigning trust in Web-based social networks and investigates how trust information can be mined and integrated into applications. This article introduces a definition of trust suitable for use in Web-based social networks with a discussion of the properties that will influence its use in computation. We then present two algorithms for inferring trust relationships between individuals that are not directly connected in the network. Both algorithms are shown theoretically and through simulation to produce calculated trust values that are highly accurate.. We then present TrustMail, a prototype email client that uses variations on these algorithms to score email messages in the user's inbox based on the user's participation and ratings in a trust network.", "title": "" }, { "docid": "ac56eb533e3ae40b8300d4269fd2c08f", "text": "We present a recurrent encoder-decoder deep neural network architecture that directly translates speech in one language into text in another. The model does not explicitly transcribe the speech into text in the source language, nor does it require supervision from the ground truth source language transcription during training. We apply a slightly modified sequence-to-sequence with attention architecture that has previously been used for speech recognition and show that it can be repurposed for this more complex task, illustrating the power of attention-based models. A single model trained end-to-end obtains state-of-the-art performance on the Fisher Callhome Spanish-English speech translation task, outperforming a cascade of independently trained sequence-to-sequence speech recognition and machine translation models by 1.8 BLEU points on the Fisher test set. In addition, we find that making use of the training data in both languages by multi-task training sequence-to-sequence speech translation and recognition models with a shared encoder network can improve performance by a further 1.4 BLEU points.", "title": "" }, { "docid": "7f47434e413230faf04849cf43a845fa", "text": "Although surgical resection remains the gold standard for treatment of liver cancer, there is a growing need for alternative therapies. Microwave ablation (MWA) is an experimental procedure that has shown great promise for the treatment of unresectable tumors and exhibits many advantages over other alternatives to resection, such as radiofrequency ablation and cryoablation. However, the antennas used to deliver microwave power largely govern the effectiveness of MWA. Research has focused on coaxial-based interstitial antennas that can be classified as one of three types (dipole, slot, or monopole). Choked versions of these antennas have also been developed, which can produce localized power deposition in tissue and are ideal for the treatment of deepseated hepatic tumors.", "title": "" }, { "docid": "9e439c83f4c29b870b1716ceae5aa1f3", "text": "Suspension system plays an imperative role in retaining the continuous road wheel contact for better road holding. In this paper, fuzzy self-tuning of PID controller is designed to control of active suspension system for quarter car model. 
A fuzzy self-tuning is used to develop the optimal control gain for PID controller (proportional, integral, and derivative gains) to minimize suspension working space of the sprung mass and its change rate to achieve the best comfort of the driver. The results of active suspension system with fuzzy self-tuning PID controller are presented graphically and comparisons with the PID and passive system. It is found that, the effectiveness of using fuzzy self-tuning appears in the ability to tune the gain parameters of PID controller", "title": "" }, { "docid": "e25b5b0f51f9c00515a849f5fd05d39b", "text": "These are exciting times for research into the psychological processes underlying second language acquisition (SLA). In the 1970s, SLA emerged as a field of inquiry in its own right (Brown 1980), and in the 1980s, a number of different approaches to central questions in the field began to develop in parallel and in relative isolation (McLaughlin and Harrington 1990). In the 1990s, however, these different approaches began to confront one another directly. Now we are entering a period reminiscent, in many ways, of the intellectually turbulent times following the Chomskyan revolution (Chomsky 1957; 1965). Now, as then, researchers are debating basic premises of a science of mind, language, and learning. Some might complain, not entirely without reason, that we are still debating the same issues after 30-40 years. However, there are now new conceptual and research tools available to test hypotheses in ways previously thought impossible. Because of this, many psychologists believe there will soon be significant advancement on some SLA issues that have resisted closure for decades. We outline some of these developments and explore where the field may be heading. More than ever, it appears possible that psychological theory and SLA theory are converging on solutions to common issues.", "title": "" }, { "docid": "f57bcea5431a11cc431f76727ba81a26", "text": "We develop a Bayesian procedure for estimation and inference for spatial models of roll call voting. This approach is extremely flexible, applicable to any legislative setting, irrespective of size, the extremism of the legislators’ voting histories, or the number of roll calls available for analysis. The model is easily extended to let other sources of information inform the analysis of roll call data, such as the number and nature of the underlying dimensions, the presence of party whipping, the determinants of legislator preferences, and the evolution of the legislative agenda; this is especially helpful since generally it is inappropriate to use estimates of extant methods (usually generated under assumptions of sincere voting) to test models embodying alternate assumptions (e.g., log-rolling, party discipline). A Bayesian approach also provides a coherent framework for estimation and inference with roll call data that eludes extant methods; moreover, via Bayesian simulation methods, it is straightforward to generate uncertainty assessments or hypothesis tests concerning any auxiliary quantity of interest or to formally compare models. In a series of examples we show how our method is easily extended to accommodate theoretically interesting models of legislative behavior. 
Our goal is to provide a statistical framework for combining the measurement of legislative preferences with tests of models of legislative behavior.", "title": "" }, { "docid": "4592c8f5758ccf20430dbec02644c931", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.", "title": "" }, { "docid": "f2346fffa0297554440145a3165e921e", "text": "The proliferation of knowledge-sharing communities like Wikipedia and the advances in automated information extraction from Web pages enable the construction of large knowledge bases with facts about entities and their relationships. The facts can be represented in the RDF data model, as so-called subject-property-object triples, and can thus be queried by structured query languages like SPARQL. In principle, this allows precise querying in the database spirit. However, RDF data may be highly diverse and queries may return way too many results, so that ranking by informativeness measures is crucial to avoid overwhelming users. Moreover, as facts are extracted from textual contexts or have community-provided annotations, it can be beneficial to consider also keywords for formulating search requests. This paper gives an overview of recent and ongoing work on ranked retrieval of RDF data with keyword-augmented structured queries. The ranking method is based on statistical language models, the state-of-the-art paradigm in information retrieval. The paper develops a novel form of language models for the structured, but schema-less setting of RDF triples and extended SPARQL queries. 1 Motivation and Background Entity-Relationship graphs are receiving great attention for information management outside of mainstream database engines. In particular, the Semantic-Web data model RDF (Resource Description Format) is gaining popularity for applications on scientific data such as biological networks [14], social Web2.0 applications [4], large-scale knowledge bases such as DBpedia [2] or YAGO [13], and more generally, as a light-weight representation for the “Web of data” [5]. An RDF data collection consists of a set of subject-property-object triples, SPO triples for short. In ER terminology, an SPO triple corresponds to a pair of entities connected by a named relationship or to an entity connected to the value of a named attribute. As the object of a triple can in turn be the subject of other triples, we can also view the RDF data as a graph of typed nodes and typed edges where nodes correspond to entities and edges to relationships (viewing attributes as relations as well). Some of the existing RDF collections contain more than a billion triples. 
As a simple example that we will use throughout the paper, consider a Web portal on movies. Table 1 shows a few sample triples. The example illustrates a number of specific requirements that RDF data poses for querying: Copyright 0000 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. Bulletin of the IEEE Computer Society Technical Committee on Data Engineering", "title": "" }, { "docid": "e9c4877bca5f1bfe51f97818cc4714fa", "text": "INTRODUCTION Gamification refers to the application of game dynamics, mechanics, and frameworks into non-game settings. Many educators have attempted, with varying degrees of success, to effectively utilize game dynamics to increase student motivation and achievement in the classroom. In an effort to better understand how gamification can effectively be utilized to this end, presented here is a review of existing literature on the subject as well as a case study on three different applications of gamification in the post-secondary setting. This analysis reveals that the underlying dynamics that make games engaging are largely already recognized and utilized in modern pedagogical practices, although under different designations. This provides some legitimacy to a practice that is sometimes dismissed as superficial, and also provides a way of formulating useful guidelines for those wishing to utilize the power of games to motivate student achievement. RELATED WORK The first step of this study was to review literature related to the use of gamification in education. This was undertaken in order to inform the subsequent case studies. Several works were reviewed with the intention of finding specific game dynamics that were met with a certain degree of success across a number of circumstances. To begin, Jill Laster [10] provides a brief summary of the early findings of Lee Sheldon, an assistant professor at Indiana University at Bloomington and the author of The Multiplayer Classroom: Designing Coursework as a Game [16]. Here, Sheldon reports that the gamification of his class on multiplayer game design at Indiana University at Bloomington in 2010 was a success, with the average grade jumping a full letter grade from the previous year [10]. Sheldon gamified his class by renaming the performance of presentations as 'completing quests', taking tests as 'fighting monsters', writing papers as 'crafting', and receiving letter grades as 'gaining experience points'. In particular, he notes that changing the language around grades celebrates getting things right rather than punishing getting things wrong [10]. Although this is plausible, this example is included here first because it points to the common conception of what gamifying a classroom means: implementing game components by simply trading out the parlance of pedagogy for that of gaming culture. Although its intentions are good, it is this reduction of game design to its surface characteristics that Elizabeth Lawley warns is detrimental to the successful gamification of a classroom [5]. Lawley, a professor of interactive games and media at the Rochester Institute of Technology (RIT), notes that when implemented properly, \"gamification can help enrich educational experiences in a way that students will recognize and respond to\" [5]. 
However, she warns that reducing the complexity of well designed games to their surface elements (i.e. badges and experience points) falls short of engaging students. She continues further, suggesting that beyond failing to engage, limiting the implementation of game dynamics to just the surface characteristics can actually damage existing interest and engagement [5]. Lawley is not suggesting that game elements should be avoided, but rather she is stressing the importance of allowing them to surface as part of a deeper implementation that includes the underlying foundations of good game design. Upon reviewing the available literature, certain underlying dynamics and concepts found in game design are shown to be more consistently successful than others when applied to learning environments, these are: o Freedom to Fail o Rapid Feedback o Progression o Storytelling Freedom to Fail Game design often encourages players to experiment without fear of causing irreversible damage by giving them multiple lives, or allowing them to start again at the most recent 'checkpoint'. Incorporating this 'freedom to fail' into classroom design is noted to be an effective dynamic in increasing student engagement [7,9,11,15]. If students are encouraged to take risks and experiment, the focus is taken away from final results and re-centered on the process of learning instead. The effectiveness of this change in focus is recognized in modern pedagogy as shown in the increased use of formative assessment. Like the game dynamic of having the 'freedom to fail', formative assessment focuses on the process of learning rather than the end result by using assessment to inform subsequent lessons and separating assessment from grades whenever possible [17]. This can mean that the student is using ongoing self assessment, or that the teacher is using", "title": "" }, { "docid": "019375c14bc0377acbf259ef423fa46f", "text": "Original approval signatures are on file with the University of Oregon Graduate School.", "title": "" }, { "docid": "78ced4f3e99c5abc1a3f5e81fbc63106", "text": "This paper presents a high performance vision-based system with a single static camera for traffic surveillance, for moving vehicle detection with occlusion handling, tracking, counting, and One Class Support Vector Machine (OC-SVM) classification. In this approach, moving objects are first segmented from the background using the adaptive Gaussian Mixture Model (GMM). After that, several geometric features are extracted, such as vehicle area, height, width, centroid, and bounding box. As occlusion is present, an algorithm was implemented to reduce it. The tracking is performed with adaptive Kalman filter. Finally, the selected geometric features: estimated area, height, and width are used by different classifiers in order to sort vehicles into three classes: small, midsize, and large. Extensive experimental results in eight real traffic videos with more than 4000 ground truth vehicles have shown that the improved system can run in real time under an occlusion index of 0.312 and classify vehicles with a global detection rate or recall, precision, and F-measure of up to 98.190%, and an F-measure of up to 99.051% for midsize vehicles.", "title": "" } ]
scidocsrr
f28662555a0c4bea946168cb47ac0b27
High-Performance Neural Networks for Visual Object Classification
[ { "docid": "27ad413fa5833094fb2e557308fa761d", "text": "A common practice to gain invariant features in object recognition models is to aggregate multiple low-level features over a small neighborhood. However, the differences between those models makes a comparison of the properties of different aggregation functions hard. Our aim is to gain insight into different functions by directly comparing them on a fixed architecture for several common object recognition tasks. Empirical results show that a maximum pooling operation significantly outperforms subsampling operations. Despite their shift-invariant properties, overlapping pooling windows are no significant improvement over non-overlapping pooling windows. By applying this knowledge, we achieve state-of-the-art error rates of 4.57% on the NORB normalized-uniform dataset and 5.6% on the NORB jittered-cluttered dataset.", "title": "" }, { "docid": "0a3f5ff37c49840ec8e59cbc56d31be2", "text": "Convolutional neural networks (CNNs) are well known for producing state-of-the-art recognizers for document processing [1]. However, they can be difficult to implement and are usually slower than traditional multi-layer perceptrons (MLPs). We present three novel approaches to speeding up CNNs: a) unrolling convolution, b) using BLAS (basic linear algebra subroutines), and c) using GPUs (graphic processing units). Unrolled convolution converts the processing in each convolutional layer (both forward-propagation and back-propagation) into a matrix-matrix product. The matrix-matrix product representation of CNNs makes their implementation as easy as MLPs. BLAS is used to efficiently compute matrix products on the CPU. We also present a pixel shader based GPU implementation of CNNs. Results on character recognition problems indicate that unrolled convolution with BLAS produces a dramatic 2.4X−3.0X speedup. The GPU implementation is even faster and produces a 3.1X−4.1X speedup.", "title": "" } ]
[ { "docid": "fbb6c8566fbe79bf8f78af0dc2dedc7b", "text": "Automatic essay evaluation (AEE) systems are designed to assist a teacher in the task of classroom assessment in order to alleviate the demands of manual subject evaluation. However, although numerous AEE systems are available, most of these systems do not use elaborate domain knowledge for evaluation, which limits their ability to give informative feedback to students and also their ability to constructively grade a student based on a particular domain of study. This paper is aimed at improving on the achievements of previous studies by providing a subject-focussed evaluation system that considers the domain knowledge while scoring and provides informative feedback to its user. The study employs a combination of techniques such as system design and modelling using Unified Modelling Language (UML), information extraction, ontology development, data management, and semantic matching in order to develop a prototype subject-focussed AEE system. The developed system was evaluated to determine its level of performance and usability. The result of the usability evaluation showed that the system has an overall mean rating of 4.17 out of maximum of 5, which indicates ‘good usability’. In terms of performance, the assessment done by the system was also found to have sufficiently high correlation with those done by domain experts, in addition to providing appropriate feedback to the user.", "title": "" }, { "docid": "da1d1e9ddb5215041b9565044b9feecb", "text": "As multiprocessors with large numbers of processors become more prevalent, we face the task of developing scheduling algorithms for the multiprogrammed use of such machines. The scheduling decisions must take into account the number of processors available, the overall system load, and the ability of each application awaiting activation to make use of a given number of processors.\nThe parallelism within an application can be characterized at a number of different levels of detail. At the highest level, it might be characterized by a single parameter (such as the proportion of the application that is sequential, or the average number of processors the application would use if an unlimited number of processors were available). At the lowest level, representing all the parallelism in the application requires the full data dependency graph (which is more information than is practically manageable).\nIn this paper, we examine the quality of processor allocation decisions under multiprogramming that can be made with several different high-level characterizations of application parallelism. We demonstrate that decisions based on parallelism characterizations with two to four parameters are superior to those based on single-parameter characterizations (such as fraction sequential or average parallelism). The results are based predominantly on simulation, with some guidance from a simple analytic model.", "title": "" }, { "docid": "460238e247fc60b0ca300ba9caafdc97", "text": "Time-resolved optical spectroscopy is widely used to study vibrational and electronic dynamics by monitoring transient changes in excited state populations on a femtosecond timescale. Yet the fundamental cause of electronic and vibrational dynamics—the coupling between the different energy levels involved—is usually inferred only indirectly. 
Two-dimensional femtosecond infrared spectroscopy based on the heterodyne detection of three-pulse photon echoes has recently allowed the direct mapping of vibrational couplings, yielding transient structural information. Here we extend the approach to the visible range and directly measure electronic couplings in a molecular complex, the Fenna–Matthews–Olson photosynthetic light-harvesting protein. As in all photosynthetic systems, the conversion of light into chemical energy is driven by electronic couplings that ensure the efficient transport of energy from light-capturing antenna pigments to the reaction centre. We monitor this process as a function of time and frequency and show that excitation energy does not simply cascade stepwise down the energy ladder. We find instead distinct energy transport pathways that depend sensitively on the detailed spatial properties of the delocalized excited-state wavefunctions of the whole pigment–protein complex.", "title": "" }, { "docid": "486dae23f5a7b19cf8c20fab60de6b0f", "text": "Histopathological alterations induced by paraquat in the digestive gland of the freshwater snail Lymnaea luteola were investigated. Samples were collected from the Kondakarla lake (Visakhapatnam, Andhra Pradesh, India), where agricultural activities are widespread. Acute toxicity of a series of concentrations of paraquat to Lymnaea luteola was determined by recording snail mortality after 24, 48, 72 and 96 hrs of exposure. The LC50 value based on probit analysis was found to be 0.073 ml/L for 96 hrs of exposure to the herbicide. The results showed that there was no mortality of snails either in the control or in those exposed to 0.0196 ml/L paraquat throughout the 96 hrs, whereas 100% mortality was recorded within 48 hrs of exposure to the 0.790 ppm concentration of the stock solution of paraquat. At various concentrations, paraquat causes significant dose-dependent histopathological changes in the digestive gland of L. luteola. The histopathological examinations revealed the following changes: amebocyte infiltration, shrinkage of the lumen of the digestive gland tubules, degeneration of cells, irregular secretory cells, necrosis of cells, and atrophy in the connective tissue of the digestive gland.", "title": "" }, { "docid": "ab7663ef08505e37be080eab491d2607", "text": "This paper has studied the fatigue and friction of the big-end bearing on an engine connecting rod by combining the multi-body dynamics and hydrodynamic lubrication model. First, the basic equations of multi-body dynamics and their application on the AVL-Excite software platform are described in detail. Then, the hydrodynamic lubrication model is introduced, which is the extended Reynolds equation derived from the Navier-Stokes equation and the equation of continuity. After that, the static calculation of the connecting rod assembly is carried out. At the same time, multi-body dynamics analysis has been performed and the stress history can be obtained by finite element data recovery. Next, the fatigue analysis is executed by combining the static and dynamic stresses, and the safety factor distribution of the connecting rod is obtained as a result. At last, a detailed friction analysis of the big-end bearing has been performed. 
Good agreement was obtained when the simulation results were contrasted with the bearing wear observed in the experiment.", "title": "" }, { "docid": "d390b0e5b1892297af37659fb92c03b5", "text": "Encouraged by recent waves of successful applications of deep learning, some researchers have demonstrated the effectiveness of applying convolutional neural networks (CNN) to time series classification problems. However, CNN and other traditional methods require the input data to be of the same dimension, which prevents their direct application to data of various lengths and multi-channel time series with different sampling rates across channels. Long short-term memory (LSTM), another tool in the deep learning arsenal and with its design nature, is more appropriate for problems involving time series such as speech recognition and language translation. In this paper, we propose a novel model incorporating a sequence-to-sequence model that consists of two LSTMs, one encoder and one decoder. The encoder LSTM accepts input time series of arbitrary lengths, extracts information from the raw data and based on which the decoder LSTM constructs fixed length sequences that can be regarded as discriminatory features. For better utilization of the raw data, we also introduce the attention mechanism into our model so that the feature generation process can peek at the raw data and focus its attention on the part of the raw data that is most relevant to the feature under construction. We call our model S2SwA, short for Sequence-to-Sequence with Attention. We test S2SwA on both uni-channel and multi-channel time series datasets and show that our model is competitive with the state-of-the-art in real world tasks such as human activity recognition.", "title": "" }, { "docid": "7374e16190e680669f76fc7972dc3975", "text": "Open-plan office layout is commonly assumed to facilitate communication and interaction between co-workers, promoting workplace satisfaction and team-work effectiveness. On the other hand, open-plan layouts are widely acknowledged to be more disruptive due to uncontrollable noise and loss of privacy. Based on the occupant survey database from Center for the Built Environment (CBE), empirical analyses indicated that occupants assessed Indoor Environmental Quality (IEQ) issues in different ways depending on the spatial configuration (classified by the degree of enclosure) of their workspace. Enclosed private offices clearly outperformed open-plan layouts in most aspects of IEQ, particularly in acoustics, privacy and the proxemics issues. Benefits of enhanced ‘ease of interaction’ were smaller than the penalties of increased noise level and decreased privacy resulting from open-plan office configuration.", "title": "" }, { "docid": "61309b5f8943f3728f714cd40f260731", "text": "Article history: Received 4 January 2011 Received in revised form 1 August 2011 Accepted 13 August 2011 Available online 15 September 2011 Advertising media are a means of communication that creates different marketing and communication results among consumers. Over the years, newspaper, magazine, TV, and radio have provided a one-way media where information is broadcast and communicated. Due to the widespread application of the Internet, advertising has entered into an interactive communications mode. 
In the advent of 3G broadband mobile communication systems and smartphone devices, consumers' preferences can be pre-identified and advertising messages can therefore be delivered to consumers in a multimedia format at the right time and at the right place with the right message. In light of this new advertisement possibility, designing personalized mobile advertising to meet consumers' needs becomes an important issue. This research uses the fuzzy Delphi method to identify the key personalized attributes in a personalized mobile advertising message for different products. Results of the study identify six important design attributes for personalized advertisements: price, preference, promotion, interest, brand, and type of mobile device. As personalized mobile advertising becomes more integrated in people's daily activities, its pros and cons and social impact are also discussed. The research result can serve as a guideline for the key parties in mobile marketing industry to facilitate the development of the industry and ensure that advertising resources are properly used. © 2011 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "e4a1200b7f8143b1322c8a66d625d842", "text": "This paper examines the spatial patterns of unemployment in Chicago between 1980 and 1990. We study unemployment clustering with respect to different social and economic distance metrics that reflect the structure of agents' social networks. Specifically, we use physical distance, travel time, and differences in ethnic and occupational distribution between locations. Our goal is to determine whether our estimates of spatial dependence are consistent with models in which agents' employment status is affected by information exchanged locally within their social networks. We present non-parametric estimates of correlation across Census tracts as a function of each distance metric as well as pairs of metrics, both for unemployment rate itself and after conditioning on a set of tract characteristics. Our results indicate that there is a strong positive and statistically significant degree of spatial dependence in the distribution of raw unemployment rates, for all our metrics. However, once we condition on a set of covariates, most of the spatial autocorrelation is eliminated, with the exception of physical and occupational distance. Racial and ethnic composition variables are the single most important factor in explaining the observed correlation patterns.", "title": "" }, { "docid": "78e561cfb2578cc9d5634f008a4e6c7e", "text": "The TCP transport layer protocol is designed for connections that traverse a single path between the sender and receiver. However, there are several environments in which multiple paths can be used by a connection simultaneously. In this paper we consider the problem of supporting striped connections that operate over multiple paths. We propose an end-to-end transport layer protocol called pTCP that allows connections to enjoy the aggregate bandwidths offered by the multiple paths, irrespective of the individual characteristics of the paths. We show that pTCP can have a varied range of applications through instantiations in three different environments: (a) bandwidth aggregation on multihomed mobile hosts, (b) service differentiation using purely end-to-end mechanisms, and (c) end-systems based network striping. 
In each of these applications we demonstrate, through simulation results, the applicability of pTCP and how its efficacy compares with existing approaches.", "title": "" }, { "docid": "42d3adba03f835f120404cfe7571a532", "text": "This study investigated the psychometric properties of the Arabic version of the SMAS. The SMAS is a variant of the IAT customized to measure addiction to social media instead of the Internet as a whole. Using a self-report instrument on a cross-sectional sample of undergraduate students, the results revealed the following. First, the exploratory factor analysis showed that a three-factor model fits the data well. Second, concurrent validity analysis showed the SMAS to be a valid measure of social media addiction. However, further studies and data should verify the hypothesized model. Finally, this study showed that the Arabic version of the SMAS is a valid and reliable instrument for use in measuring social media addiction in the Arab world.", "title": "" }, { "docid": "16cac565c6163db83496c41ea98f61f9", "text": "The rapid increase in multimedia data transmission over the Internet necessitates multi-modal summarization (MMS) from collections of text, image, audio and video. In this work, we propose an extractive multi-modal summarization method that can automatically generate a textual summary given a set of documents, images, audios and videos related to a specific topic. The key idea is to bridge the semantic gaps between multi-modal content. For audio information, we design an approach to selectively use its transcription. For visual information, we learn the joint representations of text and images using a neural network. Finally, all of the multimodal aspects are considered to generate the textual summary by maximizing the salience, non-redundancy, readability and coverage through the budgeted optimization of submodular functions. We further introduce an MMS corpus in English and Chinese, which is released to the public. The experimental results obtained on this dataset demonstrate that our method outperforms other competitive baseline methods.", "title": "" }, { "docid": "c01fbc8bd278b06e0476c6fbffca0ad1", "text": "Memristors can be optimally used to implement logic circuits. In this paper, a logic circuit based on Memristor Ratioed Logic (MRL) is proposed. Specifically, a hybrid CMOS-memristive logic family built from a suitable combination of four memristors and a complementary CMOS inverter structure is presented. By providing the AND, OR and XOR outputs of its inputs at the same time, the proposed structure reduces area, interconnections and power consumption, making it appropriate for the implementation of more complex circuits. The circuit design of a single-bit Full Adder is considered as a case study. The proposed Full Adder is implemented using 10 memristors and 4 transistors, compared to 18 memristors and 8 transistors in other related work.", "title": "" }, { "docid": "b44d6d71650fc31c643ac00bd45772cd", "text": "We give in this paper a complete description of the Knuth-Bendix completion algorithm. We prove its correctness in full, carefully isolating the essential abstract notions, so that the proof may be extended to other versions and extensions of the basic algorithm. 
We show that it defines a semidecision algorithm for the validity problem in the equational theories to which it applies, yielding a decision procedure whenever the algorithm terminates.", "title": "" }, { "docid": "45faf47f5520a4f21719f5169334aabb", "text": "Many dynamic-content online services are composed of multiple interacting components and data partitions distributed across server clusters. Understanding the performance of these services is crucial for efficient system management. This paper presents a profile-driven performance model for cluster-based multi-component online services. Our offline-constructed application profiles characterize component resource needs and inter-component communications. With a given component placement strategy, the application profile can be used to predict system throughput and average response time for the online service. Our model differentiates remote invocations from fast-path calls between co-located components, and we measure the network delay caused by blocking inter-component communications. Validation with two J2EE-based online applications shows that our model can predict application performance with small errors (less than 13% for throughput and less than 14% for the average response time). We also explore how this performance model can be used to assist system management functions for multi-component online services, with case examinations on optimized component placement, capacity planning, and cost-effectiveness analysis.", "title": "" }, { "docid": "ea7acc555f2cb2de898a3706c31006db", "text": "Securing the supply chain of integrated circuits is of utmost importance to computer security. In addition to counterfeit microelectronics, the theft or malicious modification of designs in the foundry can result in catastrophic damage to critical systems and large projects. In this letter, we describe a 3-D architecture that splits a design into two separate tiers: one tier that contains critical security functions is manufactured in a trusted foundry; another tier is manufactured in an unsecured foundry. We argue that a split manufacturing approach to hardware trust based on 3-D integration is viable and provides several advantages over other approaches.", "title": "" }, { "docid": "103f4ff03cc1aef7c173b36ccc33e680", "text": "Wireless environments are typically characterized by unpredictable and unreliable channel conditions. In such environments, fragmentation of network-bound data is a commonly adopted technique to improve the probability of successful data transmissions and reduce the energy overheads incurred due to re-transmissions. The overall latencies involved with fragmentation and the consequent re-assembly of fragments are often neglected, although they have significant effects on the real-time guarantees of the participating applications. This work studies the latencies introduced as a result of the fragmentation performed at the link layer (MAC layer in IEEE 802.11) of the source device and their effects on the end-to-end delay constraints of mobile applications (e.g., media streaming). 
Based on the observed effects, this work proposes a feedback-based adaptive approach that chooses an optimal fragment size to (a) satisfy the end-to-end delay requirements of the distributed application and (b) minimize the energy consumption of the source device by increasing the probability of successful transmissions, thereby reducing re-transmissions and their associated costs.", "title": "" }, { "docid": "1cb5a2d9abde060ba4f004fac84ca9ca", "text": "To achieve real-time stereo vision in embedded systems, we propose in this paper the adaptation and optimization of the well-known Disparity Space Image (DSI) approach on a single FPGA (Field Programmable Gate Array), designed for high efficiency when realized in hardware. An initial disparity map was calculated using the DSI structure and then a median filter was applied to smooth the disparity map. Many methods reported in the literature are mainly restricted to implementing the SAD (Sum of Absolute Differences) algorithm on an FPGA. We evaluate our method by comparing its results with those of a very fast and well-known sum of absolute differences algorithm using hardware-based implementations.", "title": "" }, { "docid": "5948f08c1ca41b7024a4f7c0b2a99e5b", "text": "Nowadays, neural networks play an important role in the task of relation classification. By designing different neural architectures, researchers have improved the performance to a large extent, compared with traditional methods. However, existing neural networks for relation classification are usually of shallow architectures (e.g., one-layer convolutional neural networks or recurrent networks). They may fail to explore the potential representation space at different abstraction levels. In this paper, we propose deep recurrent neural networks (DRNNs) to tackle this challenge. Further, we propose a data augmentation method by leveraging the directionality of relations. We evaluate our DRNNs on the SemEval-2010 Task 8 and achieve an F1-score of 85.81%, outperforming state-of-the-art recorded results.", "title": "" }, { "docid": "698dfa061afb89ac4dc768ec7a68ff1a", "text": "Social network sites such as Facebook give off the impression that others are doing better than we are. As a result, the use of these sites may lead to negative social comparison (i.e., feeling like others are doing better than oneself). According to social comparison theory, such negative social comparisons are detrimental to perceptions about the self. The current study therefore investigated the indirect relationship between Facebook use and self-perceptions through negative social comparison. Because happier people process social information differently than unhappier people, we also investigated whether the relationship between Facebook use and social comparison and, as a result, self-perception, differs depending on the degree of happiness of the emerging adult. A survey among 231 emerging adults (age 18–25) showed that Facebook use was related to a greater degree of negative social comparison, which was in turn related negatively to self-perceived social competence and physical attractiveness. The indirect relationship between Facebook use and self-perception through negative social comparison was attenuated among happier individuals, as the relationship between Facebook use and negative social comparison was weaker among happier individuals. 
SNS use was thus negatively related to self-perception through negative social comparison, especially among unhappy individuals. Social network sites (SNSs), such as Facebook, are notorious for giving off the impression that other people are living better lives than we are (Chou & Edge, 2012). People generally present themselves and their lives positively on SNSs (Dorethy, Fiebert, & Warren, 2014) for example by posting pictures in which they look their best (Manago, Graham, Greenfield, & Salimkhan, 2008) and are having a good time with their friends (Zhao, Grasmuck, & Martin, 2008). The vast majority of time spent on SNSs consists of viewing these idealized SNS profiles, pictures, and status updates of others (Pempek, Yermolayeva, & Calvert, 2009). Such information about how others are doing may impact how people see themselves, that is, their self-perceptions because people base their self-perceptions at least partly on how they are doing in comparison to others (Festinger, 1954). These potential effects of SNS use on self-perceptions through social comparison are the focus of the current study. Previous research on the effects of SNSs on self-perceptions has focused predominantly on the implications of social interactions on these websites (e.g., feedback from others) (Valkenburg, Peter, & Schouten, 2006) or due to editing and viewing content about the self …", "title": "" } ]
scidocsrr
1396859e54315d0c571f77cfd2ebec62
HealthyTogether: exploring social incentives for mobile fitness applications
[ { "docid": "42992bd3e26ab8b74dceb7707495d7af", "text": "Though a variety of persuasive health applications have been designed with a preventive standpoint toward diseases in mind, many have been designed largely for a general audience. Designers of these technologies may achieve more success if applications consider an individual’s personality type. Our goal for this research was to explore the relationship between personality and persuasive technologies in the context of health-promoting mobile applications. We conducted an online survey with 240 participants using storyboards depicting eight different persuasive strategies, the Big Five Inventory for personality domains, and questions on perceptions of the persuasive technologies. Our results and analysis revealed a number of significant relationships between personality and the persuasive technologies we evaluated. The findings from this study can guide the development of persuasive technologies that can cater to individual personalities to improve the likelihood of their success.", "title": "" }, { "docid": "16d949f6915cbb958cb68a26c6093b6b", "text": "Overweight and obesity are a global epidemic, with over one billion overweight adults worldwide (300+ million of whom are obese). Obesity is linked to several serious health problems and medical conditions. Medical experts agree that physical activity is critical to maintaining fitness, reducing weight, and improving health, yet many people have difficulty increasing and maintaining physical activity in everyday life. Clinical studies have shown that health benefits can occur from simply increasing the number of steps one takes each day and that social support can motivate people to stay active. In this paper, we describe Houston, a prototype mobile phone application for encouraging activity by sharing step count with friends. We also present four design requirements for technologies that encourage physical activity that we derived from a three-week long in situ pilot study that was conducted with women who wanted to increase their physical activity.", "title": "" }, { "docid": "5e7a06213a32e0265dcb8bc11a5bb3f1", "text": "The global obesity epidemic has prompted our community to explore the potential for technology to play a stronger role in promoting healthier lifestyles. Although there are several examples of successful games based on focused physical interaction, persuasive applications that integrate into everyday life have had more mixed results. This underscores a need for designs that encourage physical activity while addressing fun, sustainability, and behavioral change. This note suggests a new perspective, inspired in part by the social nature of many everyday fitness applications and by the successful encouragement of long term play in massively multiplayer online games. We first examine the game design literature to distill a set of principles for discussing and comparing applications. We then use these principles to analyze an existing application. Finally, we present Kukini, a design for an everyday fitness game.", "title": "" } ]
[ { "docid": "13c7278393988ec2cfa9a396255e6ff3", "text": "Finding good transfer functions for rendering medical volumes is difficult, non-intuitive, and time-consuming. We introduce a clustering-based framework for the automatic generation of transfer functions for volumetric data. The system first applies mean shift clustering to oversegment the volume boundaries according to their low-high (LH) values and their spatial coordinates, and then uses hierarchical clustering to group similar voxels. A transfer function is then automatically generated for each cluster such that the number of occlusions is reduced. The framework also allows for semi-automatic operation, where the user can vary the hierarchical clustering results or the transfer functions generated. The system improves the efficiency and effectiveness of visualizing medical images and is suitable for medical imaging applications.", "title": "" }, { "docid": "fe5b87cacf87c6eab9c252cef41c24d8", "text": "The Filter Bank Common Spatial Pattern (FBCSP) algorithm employs multiple spatial filters to automatically select key temporal-spatial discriminative EEG characteristics and the Naïve Bayesian Parzen Window (NBPW) classifier using offline learning in EEG-based Brain-Computer Interfaces (BCI). However, it has yet to address the non-stationarity inherent in the EEG between the initial calibration session and subsequent online sessions. This paper presents the FBCSP that employs the NBPW classifier using online adaptive learning that augments the training data with available labeled data during online sessions. However, employing semi-supervised learning that simply augments the training data with available data using predicted labels can be detrimental to the classification accuracy. Hence, this paper presents the FBCSP using online semi-supervised learning that augments the training data with available data that matches the probabilistic model captured by the NBPW classifier using predicted labels. The performances of FBCSP using online adaptive and semi-supervised learning are evaluated on the BCI Competition IV datasets IIa and IIb and compared to the FBCSP using offline learning. The results showed that the FBCSP using online semi-supervised learning yielded relatively better session-to-session classification results compared against the FBCSP using offline learning. The FBCSP using online adaptive learning on true labels yielded the best results in both datasets, but the FBCSP using online semi-supervised learning on predicted labels is more practical in BCI applications where the true labels are not available.", "title": "" }, { "docid": "046f2b6ec65903d092f8576cd210d7ee", "text": "Aim\nThe principal study objective was to investigate the pharmacokinetic characteristics and determine the absolute bioavailability and tolerability of a new sublingual (SL) buprenorphine wafer.\n\n\nMethods\nThe study was of open label, two-way randomized crossover design in 14 fasted healthy male and female volunteers. Each participant, under naltrexone block, received either a single intravenous dose of 300 mcg of buprenorphine as a constant infusion over five minutes or a sublingual dose of 800 mcg of buprenorphine in two treatment periods separated by a seven-day washout period. Blood sampling for plasma drug assay was taken on 16 occasions throughout a 48-hour period (predose and at 10, 20, 30, and 45 minutes, 1, 1.5, 2, 2.5, 3, 4, 6, 8, 12, 24 and 48 hours postdose). 
The pharmacokinetic parameters were determined by noncompartmental analyses of the buprenorphine plasma concentration-time profiles. Local tolerability was assessed using modified Likert scales.\n\n\nResults\nThe absolute bioavailability of SL buprenorphine was 45.4% (95% confidence interval = 37.8-54.3%). The median times to peak plasma concentration were 10 minutes and 60 minutes after IV and SL administration, respectively. The peak plasma concentration was 2.65 ng/mL and 0.74 ng/mL after IV and SL administration, respectively. The half-lives were 9.1 hours and 11.2 hours after IV and SL administration, respectively. The wafer had very good local tolerability.\n\n\nConclusions\nThis novel sublingual buprenorphine wafer has high bioavailability and reduced Tmax compared with other SL tablet formulations of buprenorphine. The wafer displayed very good local tolerability. The results suggest that this novel buprenorphine wafer may provide enhanced clinical utility in the management of both acute and chronic pain.\n\n\nBackground\nBuprenorphine is approved for use in pain management and opioid addiction. Sublingual administration of buprenorphine is a simple and noninvasive route of administration and has been available for many years. Improved sublingual formulations may lead to increased utilization of this useful drug for acute and chronic pain management.", "title": "" }, { "docid": "1b0046cbee1afd3e7471f92f115f3d74", "text": "We present an approach to improve statistical machine translation of image descriptions by multimodal pivots defined in visual space. The key idea is to perform image retrieval over a database of images that are captioned in the target language, and use the captions of the most similar images for crosslingual reranking of translation outputs. Our approach does not depend on the availability of large amounts of in-domain parallel data, but only relies on available large datasets of monolingually captioned images, and on state-ofthe-art convolutional neural networks to compute image similarities. Our experimental evaluation shows improvements of 1 BLEU point over strong baselines.", "title": "" }, { "docid": "20e504a115a1448ea366eae408b6391f", "text": "Clustering algorithms have emerged as an alternative powerful meta-learning tool to accurately analyze the massive volume of data generated by modern applications. In particular, their main goal is to categorize data into clusters such that objects are grouped in the same cluster when they are similar according to specific metrics. There is a vast body of knowledge in the area of clustering and there has been attempts to analyze and categorize them for a larger number of applications. However, one of the major issues in using clustering algorithms for big data that causes confusion amongst practitioners is the lack of consensus in the definition of their properties as well as a lack of formal categorization. With the intention of alleviating these problems, this paper introduces concepts and algorithms related to clustering, a concise survey of existing (clustering) algorithms as well as providing a comparison, both from a theoretical and an empirical perspective. From a theoretical perspective, we developed a categorizing framework based on the main properties pointed out in previous studies. Empirically, we conducted extensive experiments where we compared the most representative algorithm from each of the categories using a large number of real (big) data sets. 
The effectiveness of the candidate clustering algorithms is measured through a number of internal and external validity metrics, stability, runtime, and scalability tests. In addition, we highlighted the set of clustering algorithms that are the best performing for big data.", "title": "" }, { "docid": "49168ffff3d4212bc010e8085a3c2e8f", "text": "Recent advances in the solution of nonconvex optimization problems use simulated annealing techniques that are considerably faster than exhaustive global search techniques. This letter presents a simulated annealing technique, which is t/log (t) times faster than conventional simulated annealing, and applies it to a multisensor location and tracking problem.", "title": "" }, { "docid": "88e1d4f4245a4162ddd27503302ce6b4", "text": "Using ethnographic research methods, the authors studied the structure of the needs and priorities of people working in a vineyard to gain a better understanding of the potential for sensor networks in agriculture. We discuss an extended study of vineyard workers and their work practices to assess the potential for sensor network systems to aid work in this environment. The major purpose is to find new directions and new topics that pervasive computing and sensor networks might address in designing technologies to support a broader range of users and activities.", "title": "" }, { "docid": "61c68d03ed5769bf4c061ba78624cc7f", "text": "Extant xenarthrans (armadillos, anteaters and sloths) are among the most derived placental mammals ever evolved. South America was the cradle of their evolutionary history. During the Tertiary, xenarthrans experienced an extraordinary radiation, whereas South America remained isolated from other continents. The 13 living genera are relics of this earlier diversification and represent one of the four major clades of placental mammals. Sequences of the three independent protein-coding nuclear markers alpha2B adrenergic receptor (ADRA2B), breast cancer susceptibility (BRCA1), and von Willebrand Factor (VWF) were determined for 12 of the 13 living xenarthran genera. Comparative evolutionary dynamics of these nuclear exons using a likelihood framework revealed contrasting patterns of molecular evolution. All codon positions of BRCA1 were shown to evolve in a strikingly similar manner, and third codon positions appeared less saturated within placentals than those of ADRA2B and VWF. Maximum likelihood and Bayesian phylogenetic analyses of a 47 placental taxa data set rooted by three marsupial outgroups resolved the phylogeny of Xenarthra with some evidence for two radiation events in armadillos and provided a strongly supported picture of placental interordinal relationships. This topology was fully compatible with recent studies, dividing placentals into the Southern Hemisphere clades Afrotheria and Xenarthra and a monophyletic Northern Hemisphere clade (Boreoeutheria) composed of Laurasiatheria and Euarchontoglires. Partitioned likelihood statistical tests of the position of the root, under different character partition schemes, identified three almost equally likely hypotheses for early placental divergences: a basal Afrotheria, an Afrotheria + Xenarthra clade, or a basal Xenarthra (Epitheria hypothesis). We took advantage of the extensive sampling realized within Xenarthra to assess its impact on the location of the root on the placental tree. 
By resampling taxa within Xenarthra, the conservative Shimodaira-Hasegawa likelihood-based test of alternative topologies was shown to be sensitive to both character and taxon sampling.", "title": "" }, { "docid": "fa7682dc85d868e57527fdb3124b309c", "text": "The seminal 2003 paper by Cosley, Lam, Albert, Konstan, and Riedl demonstrated the susceptibility of recommender systems to rating biases. To facilitate browsing and selection, almost all recommender systems display average ratings before accepting ratings from users, which has been shown to bias ratings. This effect is called Social Influence Bias (SIB): the tendency to conform to the perceived \"norm\" in a community. We propose a methodology to 1) learn, 2) analyze, and 3) mitigate the effect of SIB in recommender systems. In the Learning phase, we build a baseline dataset by allowing users to rate twice: before and after seeing the average rating. In the Analysis phase, we apply a new non-parametric significance test based on the Wilcoxon statistic to test whether the data is consistent with SIB. If significant, we propose a Mitigation phase using polynomial regression and the Bayesian Information Criterion (BIC) to predict unbiased ratings. We evaluate our approach on a dataset of 9390 ratings from the California Report Card (CRC), a rating-based system designed to encourage political engagement. We found statistically significant evidence of SIB. Mitigating models were able to predict changed ratings with a normalized RMSE of 12.8% and reduce bias by 76.3%. The CRC, our data, and experimental code are available at: http://californiareportcard.org/data/", "title": "" }, { "docid": "b46b8dd33cf82d82d41f501ea87ebfc1", "text": "Repetition is a core principle in music. This is especially true for popular songs, generally marked by a noticeable repeating musical structure, over which the singer performs varying lyrics. On this basis, we propose a simple method for separating music and voice, by extraction of the repeating musical structure. First, the period of the repeating structure is found. Then, the spectrogram is segmented at period boundaries and the segments are averaged to create a repeating segment model. Finally, each time-frequency bin in a segment is compared to the model, and the mixture is partitioned using binary time-frequency masking by labeling bins similar to the model as the repeating background. Evaluation on a dataset of 1,000 song clips showed that this method can improve on the performance of an existing music/voice separation method without requiring particular features or complex frameworks.", "title": "" }, { "docid": "36248b57ff386a6e316b7c8273e351d0", "text": "Mental stress has become a social issue and could become a cause of functional disability during routine work. In addition, chronic stress is implicated in several psychophysiological disorders. For example, stress increases the likelihood of depression, stroke, heart attack, and cardiac arrest. The latest neuroscience reveals that the human brain is the primary target of mental stress, because it is the brain's perception that determines whether a situation is threatening and stressful. In this context, an objective measure for identifying the levels of stress while considering the human brain could considerably mitigate the associated harmful effects. Therefore, in this paper, a machine learning (ML) framework involving electroencephalogram (EEG) signal analysis of stressed participants is proposed. 
In the experimental setting, stress was induced by adopting a well-known experimental paradigm based on the montreal imaging stress task. The induction of stress was validated by the task performance and subjective feedback. The proposed ML framework involved EEG feature extraction, feature selection (receiver operating characteristic curve, t-test and the Bhattacharya distance), classification (logistic regression, support vector machine and naïve Bayes classifiers) and tenfold cross validation. The results showed that the proposed framework produced 94.6% accuracy for two-level identification of stress and 83.4% accuracy for multiple level identification. In conclusion, the proposed EEG-based ML framework has the potential to quantify stress objectively into multiple levels. The proposed method could help in developing a computer-aided diagnostic tool for stress detection.", "title": "" }, { "docid": "0286fb17d9ddb18fb25152c7e5b943c4", "text": "Treemaps are a well known method for the visualization of attributed hierarchical data. Previously proposed treemap layout algorithms are limited to rectangular shapes, which cause problems with the aspect ratio of the rectangles as well as with identifying the visualized hierarchical structure. The approach of Voronoi treemaps presented in this paper eliminates these problems through enabling subdivisions of and in polygons. Additionally, this allows for creating treemap visualizations within areas of arbitrary shape, such as triangles and circles, thereby enabling a more flexible adaptation of treemaps for a wider range of applications.", "title": "" }, { "docid": "29479201c12e99eb9802dd05cff60c36", "text": "Exposures to air pollution in the form of particulate matter (PM) can result in excess production of reactive oxygen species (ROS) in the respiratory system, potentially causing both localized cellular injury and triggering a systemic inflammatory response. PM-induced inflammation in the lung is modulated in large part by alveolar macrophages and their biochemical signaling, including production of inflammatory cytokines, the primary mechanism via which inflammation is initiated and sustained. We developed a robust, relevant, and flexible method employing a rat alveolar macrophage cell line (NR8383) which can be applied to routine samples of PM from air quality monitoring sites to gain insight into the drivers of PM toxicity that lead to oxidative stress and inflammation. Method performance was characterized using extracts of ambient and vehicular engine exhaust PM samples. Our results indicate that the reproducibility and the sensitivity of the method are satisfactory and comparisons between PM samples can be made with good precision. The average relative percent difference for all genes detected during 10 different exposures was 17.1%. Our analysis demonstrated that 71% of genes had an average signal to noise ratio (SNR) ≥ 3. Our time course study suggests that 4 h may be an optimal in vitro exposure time for observing short-term effects of PM and capturing the initial steps of inflammatory signaling. The 4 h exposure resulted in the detection of 57 genes (out of 84 total), of which 86% had altered expression. Similarities and conserved gene signaling regulation among the PM samples were demonstrated through hierarchical clustering and other analyses. 
Overlying the core congruent patterns were differentially regulated genes that resulted in distinct sample-specific gene expression \"fingerprints.\" Consistent upregulation of Il1f5 and downregulation of Ccr7 was observed across all samples, while TNFα was upregulated in half of the samples and downregulated in the other half. Overall, this PM-induced cytokine expression assay could be effectively integrated into health studies and air quality monitoring programs to better understand relationships between specific PM components, oxidative stress activity and inflammatory signaling potential.", "title": "" }, { "docid": "b9717a3ce0ed7245621314ba3e1ce251", "text": "Analog beamforming with phased arrays is a promising technique for 5G wireless communication at millimeter wave frequencies. Using a discrete codebook consisting of multiple analog beams, each beam focuses on a certain range of angles of arrival or departure and corresponds to a set of fixed phase shifts across frequency due to practical hardware considerations. However, for sufficiently large bandwidth, the gain provided by the phased array is actually frequency dependent, which is an effect called beam squint, and this effect occurs even if the radiation pattern of the antenna elements is frequency independent. This paper examines the nature of beam squint for a uniform linear array (ULA) and analyzes its impact on codebook design as a function of the number of antennas and system bandwidth normalized by the carrier frequency. The criterion for codebook design is to guarantee that each beam's minimum gain for a range of angles and for all frequencies in the wideband system exceeds a target threshold, for example 3 dB below the array's maximum gain. Analysis and numerical examples suggest that a denser codebook is required to compensate for beam squint. For example, 54% more beams are needed compared to a codebook design that ignores beam squint for a ULA with 32 antennas operating at a carrier frequency of 73 GHz and bandwidth of 2.5 GHz. Furthermore, beam squint with this design criterion limits the bandwidth or the number of antennas of the array if the other one is fixed.", "title": "" }, { "docid": "81487ff9bc7cc46035d88848f9d41419", "text": "This paper proposes a method for classifying the type of lexical-semantic relation between a given pair of words. Given an inventory of target relationships, this task can be seen as a multi-class classification problem. We train a supervised classifier by assuming that a specific type of lexical-semantic relation between a pair of words would be signaled by a carefully designed set of relation-specific similarities between the words. These similarities are computed by exploiting “sense representations” (sense/concept embeddings). The experimental results show that the proposed method clearly outperforms an existing state-of-the-art method that does not utilize sense/concept embeddings, thereby demonstrating the effectiveness of the sense representations.", "title": "" }, { "docid": "2b123076b5d3e848916cd33a9c6321d0", "text": "This paper proposes a new isolated bridgeless AC-DC power factor correction (PFC) converter. The proposed circuit consists of dual flyback converters which provide high power factor (PF). By eliminating input bridge diodes, the number of conducting components is reduced. Therefore, conduction losses are decreased and efficiency can be further improved. Critical conduction mode (CRM) operation decreases the switching losses of switch components. 
Thus, the operational modes of CRM are analyzed, and sensing configurations are also presented to address some of the challenges, such as the zero-crossing detection (ZCD) circuit and the sensing circuits of the bridgeless converter. Using a transformer allows for a more flexible voltage gain design and, thus, a single-stage isolated PFC. The proposed circuit is verified with a 75W (12V/6.4A) experimental prototype in discontinuous conduction mode (DCM) and CRM.", "title": "" }, { "docid": "d1ff3f763fac877350d402402b29323c", "text": "The study of microstrip patch antennas has made great progress in recent years. Compared with conventional antennas, microstrip patch antennas have many advantages and better prospects: they are lighter in weight, lower in volume and cost, lower in profile, smaller in dimension, and easier to fabricate and conform. Moreover, microstrip patch antennas can provide dual and circular polarizations, dual-frequency operation, frequency agility, broad bandwidth, feedline flexibility, beam scanning, and omnidirectional patterning. In this paper we discuss the microstrip antenna, types of microstrip antennas, feeding techniques, and applications of microstrip patch antennas, along with their advantages and disadvantages over conventional microwave antennas.", "title": "" }, { "docid": "08a75a1b6643d0aedcd3419b7ac143b2", "text": "Traditional image coding standards (such as JPEG and JPEG2000) cause the decoded image to suffer from many blocking artifacts or noise owing to the use of large quantization steps. To overcome this problem, we propose an end-to-end compression framework based on two CNNs, as shown in Figure 1, which produce a compact representation for encoding using a third-party coding standard and reconstruct the decoded image, respectively. To make the two CNNs collaborate effectively, we develop a unified end-to-end learning framework to simultaneously learn CrCNN and ReCNN such that the compact representation obtained by CrCNN preserves the structural information of the image, which facilitates accurate reconstruction of the decoded image using ReCNN and also makes the proposed compression framework compatible with existing image coding standards.", "title": "" }, { "docid": "2fe11bee56ecafabeb24c69aae63f8cb", "text": "Enabled by virtualization technologies, various multi-tier applications (such as web applications) are hosted by virtual machines (VMs) in cloud data centers. Live migration of multi-tier applications across geographically distributed data centers is important for load management, power saving, routine server maintenance and quality-of-service. Different from a single-VM migration, VMs in a multi-tier application are closely correlated, which results in a correlated VM migrations problem. Current live migration algorithms for single VMs cause significant application performance degradation because intermediate data exchange between different VMs suffers relatively low bandwidth and high latency across distributed data centers. In this paper, we design and implement a coordination system called VMbuddies for correlated VM migrations in the cloud. Particularly, we propose an adaptive network bandwidth allocation algorithm to minimize the migration cost in terms of migration completion time, network traffic and migration downtime. Experiments using a public benchmark show that VMbuddies significantly reduces the performance degradation and migration cost of multi-tier applications.", "title": "" } ]
scidocsrr
f06e068e74c0adee96c1e7ae44770b30
NBA (network balancing act): a high-performance packet processing framework for heterogeneous processors
[ { "docid": "b088f6f89facb0139f1e6c299ed2e9a3", "text": "Scaling the performance of short TCP connections on multicore systems is fundamentally challenging. Although many proposals have attempted to address various shortcomings, inefficiency of the kernel implementation still persists. For example, even state-of-the-art designs spend 70% to 80% of CPU cycles in handling TCP connections in the kernel, leaving only small room for innovation in the user-level program. This work presents mTCP, a high-performance userlevel TCP stack for multicore systems. mTCP addresses the inefficiencies from the ground up—from packet I/O and TCP connection management to the application interface. In addition to adopting well-known techniques, our design (1) translates multiple expensive system calls into a single shared memory reference, (2) allows efficient flowlevel event aggregation, and (3) performs batched packet I/O for high I/O efficiency. Our evaluations on an 8-core machine showed that mTCP improves the performance of small message transactions by a factor of 25 compared to the latest Linux TCP stack and a factor of 3 compared to the best-performing research system known so far. It also improves the performance of various popular applications by 33% to 320% compared to those on the Linux stack.", "title": "" } ]
[ { "docid": "31c48b4aa8402ad6439ec1acd5cbb889", "text": "Face recognition has been extensively used in a wide variety of security systems for identity authentication for years. However, many security systems are vulnerable to spoofing face attacks (e.g., 2D printed photo, replayed video). Consequently, a number of anti-spoofing approaches have been proposed. In this study, we introduce a new algorithm that addresses the face liveness detection based on the digital focus technique. The proposed algorithm relies on the property of digital focus with various depths of field (DOFs) while shooting. Two features of the blurriness level and the gradient magnitude threshold are computed on the nose and the cheek subimages. The differences of these two features between the nose and the cheek in real face images and spoofing face images are used to facilitate detection. A total of 75 subjects with both real and spoofing face images were used to evaluate the proposed framework. Preliminary experimental results indicated that this new face liveness detection system achieved a high recognition rate of 94.67% and outperformed many state-of-the-art methods. The computation speed of the proposed algorithm was the fastest among the tested methods.", "title": "" }, { "docid": "fae60b86d98a809f876117526106719d", "text": "Big Data security analysis is commonly used for the analysis of large volume security data from an organisational perspective, requiring powerful IT infrastructure and expensive data analysis tools. Therefore, it can be considered to be inaccessible to the vast majority of desktop users and is difficult to apply to their rapidly growing data sets for security analysis. A number of commercial companies offer a desktop-oriented big data security analysis solution; however, most of them are prohibitive to ordinary desktop users with respect to cost and IT processing power. This paper presents an intuitive and inexpensive big data security analysis approach using Computational Intelligence (CI) techniques for Windows desktop users, where the combination of Windows batch programming, EmEditor and R are used for the security analysis. The simulation is performed on a real dataset with more than 10 million observations, which are collected from Windows Firewall logs to demonstrate how a desktop user can gain insight into their abundant and untouched data and extract useful information to prevent their system from current and future security threats. This CI-based big data security analysis approach can also be extended to other types of security logs such as event logs, application logs and web logs.", "title": "" }, { "docid": "5d827a27d9fb1fe4041e21dde3b8ce44", "text": "Cloud storage systems are becoming increasingly popular. A promising technology that keeps their cost down is deduplication, which stores only a single copy of repeating data. Client-side deduplication attempts to identify deduplication opportunities already at the client and save the bandwidth of uploading copies of existing files to the server. In this work we identify attacks that exploit client-side deduplication, allowing an attacker to gain access to arbitrary-size files of other users based on a very small hash signatures of these files. More specifically, an attacker who knows the hash signature of a file can convince the storage service that it owns that file, hence the server lets the attacker download the entire file. 
(In parallel to our work, a subset of these attacks was recently introduced in the wild with respect to the Dropbox file synchronization service.) To overcome such attacks, we introduce the notion of proofs-of-ownership (PoWs), which lets a client efficiently prove to a server that the client holds a file, rather than just some short information about it. We formalize the concept of proof-of-ownership under rigorous security definitions and rigorous efficiency requirements of petabyte-scale storage systems. We then present solutions based on Merkle trees and specific encodings, and analyze their security. We implemented one variant of the scheme. Our performance measurements indicate that the scheme incurs only a small overhead compared to naive client-side deduplication.", "title": "" }, { "docid": "3744970293b3ed4c4543e6f2313fe2e4", "text": "With the proliferation of GPS-enabled smart devices and the increased availability of wireless networks, spatial crowdsourcing (SC) has recently been proposed as a framework to automatically request workers (i.e., smart device carriers) to perform location-sensitive tasks (e.g., taking scenic photos, reporting events). In this paper we study a destination-aware task assignment problem that concerns the optimal strategy of assigning each task to a proper worker such that the total number of completed tasks can be maximized whilst all workers can reach their destinations before their deadlines after performing the assigned tasks. Finding the global optimal assignment turns out to be an intractable problem since it does not imply optimal assignments for individual workers. Observing that the task assignment dependency only exists amongst subsets of workers, we utilize a tree-decomposition technique to separate workers into independent clusters and develop an efficient depth-first search algorithm with progressive bounds to prune non-promising assignments. Our empirical studies demonstrate that the proposed technique is quite effective and settles the problem nicely.", "title": "" }, { "docid": "d97af6f656cba4018a5d367861a07f01", "text": "The traditional Cloud model is not designed to handle latency-sensitive Internet of Things applications. The new trend consists of moving data processing close to where the data is generated. To this end, the Fog Computing paradigm suggests using the compute and storage power of network elements. In such environments, intelligent and scalable orchestration of thousands of heterogeneous devices in complex environments is critical for IoT service providers. In this vision paper, we present a framework, called Foggy, that facilitates dynamic resource provisioning and automated application deployment in Fog Computing architectures. We analyze several applications and identify their requirements that need to be taken into consideration in our design of the Foggy framework. We implemented a proof of concept of continuous deployment of a simple IoT application using Raspberry Pi boards.", "title": "" }, { "docid": "33ee29c4ccab435b8b64058b584e13cd", "text": "In this paper, we present a music recommendation system, which provides a personalized service of music recommendation. The polyphonic music objects of MIDI format are first analyzed to derive information for music grouping. For this purpose, the representative track of each polyphonic music object is first determined, and then six features are extracted from this track for proper music grouping. 
Moreover, the user access histories are analyzed to derive the profiles of user interests and behaviors for user grouping. The content-based, collaborative, and statistics-based recommendation methods are proposed based on the favorite degrees of the users to the music groups, and the user groups they belong to. A series of experiments are carried out to show that our approach performs well.", "title": "" }, { "docid": "af1118d8de62821df883250837def5ad", "text": "Roller compaction is commonly used in the pharmaceutical industry to improve powder flow and compositional uniformity. The process produces ribbons which are milled into granules. The ribbon solid fraction (SF) can affect both the granule size and the tensile strength of downstream tablets. Roll force, which is directly related to the applied stress on the powder in the nip region, is typically the most dominant process parameter controlling the ribbon solid fraction. This work is an extension of a previous study, leveraging mathematical modeling as part of a Quality by Design development strategy (Powder Technology, 2011, 213: 1–13). In this paper, a semi-empirical unified powder compaction model is postulated describing powder solid fraction evolution as a function of applied stress in three geometries: the tapped cylinder (uniaxial strain—part of a standard tapped density measurement), the roller compaction geometry (plane strain deformation), and tablet compression (uniaxial strain). A historical database (CRAVE) containing data from many different formulations was leveraged to evaluate the model. The internally developed CRAVE database contains all aspects of drug product development batch records and was queried to retrieve tablet compression data along with corresponding roller compaction and tap density measurements for the same batch. Tablet compaction data and tap density data were used to calibrate a quadratic relationship between stress and the reciprocal of porosity. The quadratic relationship was used to predict the roll stress and corresponding roll force required to attain the reported ribbon SF. The predicted roll force was found to be consistent with the actual roll force values recorded across 136 different formulations in 136 batch records. In addition, significant correlations were found between the first and the second order constants of the quadratic relationship, suggesting that a single formulation-dependent fitting parameter may be used to define the complete SF versus stress relationship. The fitting parameter could be established by compressing a single tablet and measuring the powder tapped density. It was concluded that characterization of this parameter at a small scale can help define the required process parameters for both roller compactors and tablet presses at a large scale.", "title": "" }, { "docid": "9dfef5bc76b78e7577b9eb377b830a9e", "text": "Patients with Parkinson's disease may have difficulties in speaking because of the reduced coordination of the muscles that control breathing, phonation, articulation and prosody. Symptoms that may occur because of changes are weakening of the volume of the voice, voice monotony, changes in the quality of the voice, speed of speech, uncontrolled repetition of words. The evaluation of some of the disorders mentioned can be achieved through measuring the variation of parameters in an objective manner. It may be done to evaluate the response to the treatments with intra-daily frequency pre / post-treatment, as well as in the long term. 
Software systems also allow these measurements to be made by recording the patient's voice. This makes it possible to carry out a large number of tests, with a larger number of patients and a higher frequency of measurements. The main goal of our work was to design and realize Voxtester, an effective and simple-to-use software system for measuring whether changes in voice emission are sensitive to pharmacologic treatments. Doctors and speech therapists can easily use it without going into the technical details, and we think that, to date, this goal is reached only by Voxtester.", "title": "" }, { "docid": "1c77370d8a69e83f45ddd314b798f1b1", "text": "The use of networks for communications between the Electronic Control Units (ECU) of a vehicle in production cars dates from the beginning of the 90s. The specific requirements of the different car domains have led to the development of a large number of automotive networks such as LIN, CAN, CAN FD, FlexRay, MOST, automotive Ethernet AVB, etc. This report first introduces the context of in-vehicle embedded systems and, in particular, the requirements imposed on the communication systems. Then, a review of the most widely used, as well as the emerging, automotive networks is given. Next, the current efforts of the automotive industry on middleware technologies, which may be of great help in mastering the heterogeneity, are reviewed, with a special focus on the proposals of the AUTOSAR consortium. Finally, we highlight future trends in the development of automotive communication systems. ∗This technical report is an updated version of two earlier review papers on automotive networks: N. Navet, Y.-Q. Song, F. Simonot-Lion, C. Wilwert, \"Trends in Automotive Communication Systems\", Proceedings of the IEEE, special issue on Industrial Communications Systems, vol. 96, no. 6, pp. 1204-1223, June 2005 [66]. An updated version of this IEEE Proceedings then appeared as chapter 4 in The Automotive Embedded Systems Handbook in 2008 [62].", "title": "" }, { "docid": "312751cb91bc62e1db0e137f1b0b6748", "text": "Advertising, long the financial mainstay of the web ecosystem, has become nearly ubiquitous in the world of mobile apps. While ad targeting on the web is fairly well understood, mobile ad targeting is much less studied. In this paper, we use empirical methods to collect a database of over 225,000 ads on 32 simulated devices hosting one of three distinct user profiles. We then analyze how the ads are targeted by correlating ads to potential targeting profiles using Bayes' rule and Pearson's chi-squared test. This enables us to measure the prevalence of different forms of targeting. We find that nearly all ads show the effects of application- and time-based targeting, while we are able to identify location-based targeting in 43% of the ads and user-based targeting in 39%.", "title": "" }, { "docid": "263c7309eb803c91ab15af5708cf039c", "text": "In wave optics, the Wigner distribution and its Fourier dual, the ambiguity function, are important tools in optical system simulation and analysis. The light field fulfills a similar role in the computer graphics community. In this paper, we establish that the light field as it is used in computer graphics is equivalent to a smoothed Wigner distribution and that these are equivalent to the raw Wigner distribution under a geometric optics approximation. 
Using this insight, we then explore two recent contributions: Fourier slice photography in computer graphics and wavefront coding in optics, and we examine the similarity between explanations of them using Wigner distributions and explanations of them using light fields. Understanding this long-suspected equivalence may lead to additional insights and the productive exchange of ideas between the two fields.", "title": "" }, { "docid": "f122373d44be16dadd479c75cca34a2a", "text": "This paper presents the design, fabrication, and evaluation of a novel type of valve that uses an electropermanent magnet [1]. This valve is then used to build actuators for a soft robot. The developed EPM valves require only a brief (5 ms) pulse of current to turn flow on or off for an indefinite period of time. EPM valves are characterized and demonstrated to be well suited for the control of elastomer fluidic actuators. The valves drive the pressurization and depressurization of fluidic channels within soft actuators. Furthermore, the forward locomotion of a soft, multi-actuator rolling robot is driven by EPM valves. The small size and energy-efficiency of EPM valves may make them valuable in soft mobile robot applications.", "title": "" }, { "docid": "75c1fa342d6f30d68b0aba906a54dd69", "text": "The Constrained Application Protocol (CoAP) is a promising candidate for future smart city applications that run on resource-constrained devices. However, additional security means are mandatory to cope with the high security requirements of smart city applications. We present a framework to evaluate lightweight intrusion detection techniques for CoAP applications. This framework combines an OMNeT++ simulation with C/C++ application code that also runs on real hardware. As a result of our work, we used our framework to evaluate intrusion detection techniques for a smart public transport application that uses CoAP. Our first evaluations indicate that a hybrid IDS approach is a favorable choice for smart city applications.", "title": "" }, { "docid": "b5fe13becf36cdc699a083b732dc5d6a", "text": "The stability of two-dimensional, linear, discrete systems is examined using the 2-D matrix Lyapunov equation. While the existence of a positive definite solution pair to the 2-D Lyapunov equation is sufficient for stability, the paper proves that such existence is not necessary for stability, disproving a long-standing conjecture.", "title": "" }, { "docid": "29fa75e49d4179072ec25b8aab6b48e2", "text": "We describe the design, development, and API for two discourse parsers for Rhetorical Structure Theory. The two parsers use the same underlying framework, but one uses features that rely on dependency syntax, produced by a fast shift-reduce parser, whereas the other uses a richer feature space, including both constituent- and dependency-syntax and coreference information, produced by the Stanford CoreNLP toolkit. Both parsers obtain state-of-the-art performance, and use a very simple API consisting of, minimally, two lines of Scala code. We accompany this code with a visualization library that runs the two parsers in parallel, and displays the two generated discourse trees side by side, which provides an intuitive way of comparing the two parsers.", "title": "" }, { "docid": "497d72ce075f9bbcb2464c9ab20e28de", "text": "Eukaryotic organisms radiated in Proterozoic oceans with oxygenated surface waters, but, commonly, anoxia at depth. 
Exceptionally preserved fossils of red algae favor crown group emergence more than 1200 million years ago, but older (up to 1600-1800 million years) microfossils could record stem group eukaryotes. Major eukaryotic diversification ~800 million years ago is documented by the increase in the taxonomic richness of complex, organic-walled microfossils, including simple coenocytic and multicellular forms, as well as widespread tests comparable to those of extant testate amoebae and simple foraminiferans and diverse scales comparable to organic and siliceous scales formed today by protists in several clades. Mid-Neoproterozoic establishment or expansion of eukaryophagy provides a possible mechanism for accelerating eukaryotic diversification long after the origin of the domain. Protists continued to diversify along with animals in the more pervasively oxygenated oceans of the Phanerozoic Eon.", "title": "" }, { "docid": "b7668f382f1857ff034d8088328f866d", "text": "Diverse lines of evidence point to a basic human aversion to physically harming others. First, we demonstrate that unwillingness to endorse harm in a moral dilemma is predicted by individual differences in aversive reactivity, as indexed by peripheral vasoconstriction. Next, we tested the specific factors that elicit the aversive response to harm. Participants performed actions such as discharging a fake gun into the face of the experimenter, fully informed that the actions were pretend and harmless. These simulated harmful actions increased peripheral vasoconstriction significantly more than did witnessing pretend harmful actions or to performing metabolically matched nonharmful actions. This suggests that the aversion to harmful actions extends beyond empathic concern for victim harm. Together, these studies demonstrate a link between the body and moral decision-making processes.", "title": "" }, { "docid": "f1e646a0627a5c61a0f73a41d35ccac7", "text": "Smart cities play an increasingly important role for the sustainable economic development of a determined area. Smart cities are considered a key element for generating wealth, knowledge and diversity, both economically and socially. A Smart City is the engine to reach the sustainability of its infrastructure and facilitate the sustainable development of its industry, buildings and citizens. The first goal to reach that sustainability is reduce the energy consumption and the levels of greenhouse gases (GHG). For that purpose, it is required scalability, extensibility and integration of new resources in order to reach a higher awareness about the energy consumption, distribution and generation, which allows a suitable modeling which can enable new countermeasure and action plans to mitigate the current excessive power consumption effects. Smart Cities should offer efficient support for global communications and access to the services and information. It is required to enable a homogenous and seamless machine to machine (M2M) communication in the different solutions and use cases. This work presents how to reach an interoperable Smart Lighting solution over the emerging M2M protocols such as CoAP built over REST architecture. 
This follows up the guidelines defined by the IP for Smart Objects Alliance (IPSO Alliance) in order to implement an interoperable semantic level for the street lighting, and describes the integration of the communications and logic over the existing street lighting infrastructure.", "title": "" }, { "docid": "dc1bd4603d9673fb4cd0fd9d7b0b6952", "text": "We investigate the contribution of option markets to price discovery, using a modification of Hasbrouck’s (1995) “information share” approach. Based on five years of stock and options data for 60 firms, we estimate the option market’s contribution to price discovery to be about 17 percent on average. Option market price discovery is related to trading volume and spreads in both markets, and stock volatility. Price discovery across option strike prices is related to leverage, trading volume, and spreads. Our results are consistent with theoretical arguments that informed investors trade in both stock and option markets, suggesting an important informational role for options. ∗Chakravarty is from Purdue University; Gulen is from the Pamplin College of Business, Virginia Tech; and Mayhew is from the Terry College of Business, University of Georgia and the U.S. Securities and Exchange Commission. We would like to thank the Institute for Quantitative Research in Finance (the Q-Group) for funding this research. Gulen acknowledges funding from a Virginia Tech summer grant and Mayhew acknowledges funding from the Terry-Sanford Research Grant at the Terry College of Business and from the University of Georgia Research Foundation. We would like to thank the editor, Rick Green; Michael Cliff; Joel Hasbrouck; Raman Kumar; an anonymous referee; and seminar participants at Purdue University, the University of Georgia, Texas Christian University, the University of South Carolina, the Securities and Exchange Commission, the University of Delaware, George Washington University, the Commodity Futures Trading Commission, the Batten Conference at the College of William and Mary, the 2002 Q-Group Conference, and the 2003 INQUIRE conference. The U.S. Securities and Exchange Commission disclaims responsibility for any private publication or statement of any SEC employee or Commissioner. This study expresses the author’s views and does not necessarily reflect those of the Commission, the Commissioners, or other members of the staff.", "title": "" }, { "docid": "0d93bf1b3b891a625daa987652ca1964", "text": "In this paper, we show that a continuous spectrum of randomisation exists, in which most existing tree randomisations are only operating around the two ends of the spectrum. That leaves a huge part of the spectrum largely unexplored. We propose a base learner, VR-Tree, which generates trees with variable randomness. VR-Trees are able to span from the conventional deterministic trees to the complete-random trees using a probabilistic parameter. Using VR-Trees as the base models, we explore the entire spectrum of randomised ensembles, together with Bagging and Random Subspace. We discover that the two halves of the spectrum have their distinct characteristics; an understanding of these allows us to propose a new approach to building better decision tree ensembles. We name this approach Coalescence, which coalesces a number of points in the random half of the spectrum. Coalescence acts as a committee of “experts” to cater for unforeseeable conditions presented in training data.
Coalescence is found to perform better than any single operating point in the spectrum, without the need to tune to a specific level of randomness. In our empirical study, Coalescence ranks top among the benchmarking ensemble methods including Random Forests, Random Subspace and C5 Boosting; and only Coalescence is significantly better than Bagging and Max-Diverse Ensemble among all the methods in the comparison. Although Coalescence is not significantly better than Random Forests, we have identified conditions under which one will perform better than the other.", "title": "" } ]
scidocsrr
c76246a3a23e9bed92c92fd984bd2c88
Race directed random testing of concurrent programs
[ { "docid": "cb1952a4931955856c6479d7054c57e7", "text": "This paper presents a static race detection analysis for multithreaded Java programs. Our analysis is based on a formal type system that is capable of capturing many common synchronization patterns. These patterns include classes with internal synchronization, classes thatrequire client-side synchronization, and thread-local classes. Experience checking over 40,000 lines of Java code with the type system demonstrates that it is an effective approach for eliminating races conditions. On large examples, fewer than 20 additional type annotations per 1000 lines of code were required by the type checker, and we found a number of races in the standard Java libraries and other test programs.", "title": "" } ]
[ { "docid": "b7d585ffa334a5c0f88575e42a8682c4", "text": "Detecting impending failure of hard disks is an important prediction task which might help computer systems to prevent loss of data and performance degradation. Currently most of the hard drive vendors support self-monitoring, analysis and reporting technology (SMART) which are often considered unreliable for such tasks. The problem of finding alternatives to SMART for predicting disk failure is an area of active research. In this paper, we consider events recorded from live disks and show that it is possible to construct decision support systems which can detect such failures. It is desired that any such prediction methodology should have high accuracy and ease of interpretability. Black box models can deliver highly accurate solutions but do not provide an understanding of events which explains the decision given by it. To this end we explore rule based classifiers for predicting hard disk failures from various disk events. We show that it is possible to learn easy to understand rules, from disk events, which have extremely low false alarm rates on real world data.", "title": "" }, { "docid": "ce3d82fc815a965a66be18d20434e80f", "text": "In this paper the three-phase grid connected inverter has been investigated. The inverter’s control strategy is based on the adaptive hysteresis current controller. Inverter connects the DG (distributed generation) source to the grid. The main advantages of this method are constant switching frequency, better current control, easy filter design and less THD (total harmonic distortion). Since a constant and ripple free dc bus voltage is not ensured at the output of alternate energy sources, the main aim of the proposed algorithm is to make the output of the inverter immune to the fluctuations in the dc input voltage This inverter can be used to connect the medium and small-scale wind turbines and solar cells to the grid and compensate local load reactive power. Reactive power compensating improves SUF (system usage factor) from nearly 20% (in photovoltaic systems) to 100%. The simulation results confirm that switching frequency is constant and THD of injected current is low.", "title": "" }, { "docid": "f0958d2c952c7140c998fa13a2bf4374", "text": "OBJECTIVE\nThe objective of this study is to outline explicit criteria for assessing the contribution of qualitative empirical studies in health and medicine, leading to a hierarchy of evidence specific to qualitative methods.\n\n\nSTUDY DESIGN AND SETTING\nThis paper arose from a series of critical appraisal exercises based on recent qualitative research studies in the health literature. We focused on the central methodological procedures of qualitative method (defining a research framework, sampling and data collection, data analysis, and drawing research conclusions) to devise a hierarchy of qualitative research designs, reflecting the reliability of study conclusions for decisions made in health practice and policy.\n\n\nRESULTS\nWe describe four levels of a qualitative hierarchy of evidence-for-practice. The least likely studies to produce good evidence-for-practice are single case studies, followed by descriptive studies that may provide helpful lists of quotations but do not offer detailed analysis. More weight is given to conceptual studies that analyze all data according to conceptual themes but may be limited by a lack of diversity in the sample. 
Generalizable studies using conceptual frameworks to derive an appropriately diversified sample with analysis accounting for all data are considered to provide the best evidence-for-practice. Explicit criteria and illustrative examples are described for each level.\n\n\nCONCLUSION\nA hierarchy of evidence-for-practice specific to qualitative methods provides a useful guide for the critical appraisal of papers using these methods and for defining the strength of evidence as a basis for decision making and policy generation.", "title": "" }, { "docid": "1167ef2f839531bcaca3fae3cd25cf55", "text": "Finger impairment following stroke results in significant deficits in hand manipulation and the performance of everyday tasks. While recent advances in rehabilitation robotics have shown promise for facilitating functional improvement, it remains unclear how best to employ these devices to maximize benefits. Current devices for the hand, however, lack the capacity to fully explore the space of possible training paradigms. Particularly, they cannot provide the independent joint control and levels of velocity and torque required. To fill this need, we have developed a prototype for one digit, the cable actuated finger exoskeleton (CAFE), a three-degree-of-freedom robotic exoskeleton for the index finger. This paper presents the design and development of the CAFE, with performance testing results.", "title": "" }, { "docid": "a71d142df039c6a361e60ec1342a3980", "text": "Intelligent transportation systems (ITS) rely on connected vehicle applications to address real-world problems. Research is currently being conducted to support safety, mobility and environmental applications. This paper presents the DrivingStyles architecture, which adopts data mining techniques and neural networks to analyze and generate a classification of driving styles and fuel consumption based on driver characterization. In particular, we have implemented an algorithm that is able to characterize the degree of aggressiveness of each driver. We have also developed a methodology to calculate, in real-time, the consumption and environmental impact of spark ignition and diesel vehicles from a set of variables obtained from the vehicle’s electronic control unit (ECU). In this paper, we demonstrate the impact of the driving style on fuel consumption, as well as its correlation with the greenhouse gas emissions generated by each vehicle. Overall, our platform is able to assist drivers in correcting their bad driving habits, while offering helpful tips to improve fuel economy and driving safety.", "title": "" }, { "docid": "276ccf2a41d91739f5d0bd884abdedbd", "text": "Evidence of the effects of playing violent video games on subsequent aggression has been mixed. This study examined how playing a violent video game affected levels of aggression displayed in a laboratory. A total of 43 undergraduate students (22 men and 21 women) were randomly assigned to play either a violent (Mortal Kombat) or nonviolent (PGA Tournament Golf) video game for 10 min. Then they competed with a confederate in a reaction time task that allowed for provocation and retaliation. Punishment levels set by participants for their opponents served as the measure of aggression. The results confirmed our hypothesis that playing the violent game would result in more aggression than would playing the nonviolent game. 
In addition, a Game × Sex interaction was also observed.", "title": "" }, { "docid": "4d18ea8816e9e4abf428b3f413c82f9e", "text": "This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria infection in microscope images of thin blood film smears. Existing works interpret the diagnosis problem differently or propose partial solutions to the problem. A critique of these works is furnished. In addition, a general pattern recognition framework to perform diagnosis, which includes image acquisition, pre-processing, segmentation, and pattern classification components, is described. The open problems are addressed and a perspective of the future work for realization of automated microscopy diagnosis of malaria is provided.", "title": "" }, { "docid": "6f2162f883fce56eaa6bd8d0fbcedc0b", "text": "While data from Massive Open Online Courses (MOOCs) offers the potential to gain new insights into the ways in which online communities can contribute to student learning, much of the richness of the data trace is still yet to be mined. In particular, very little work has attempted fine-grained content analyses of the student interactions in MOOCs. Survey research indicates the importance of student goals and intentions in keeping them involved in a MOOC over time. Automated fine-grained content analyses offer the potential to detect and monitor evidence of student engagement and how it relates to other aspects of their behavior. Ultimately these indicators reflect their commitment to remaining in the course. As a methodological contribution, in this paper we investigate using computational linguistic models to measure learner motivation and cognitive engagement from the text of forum posts. We validate our techniques using survival models that evaluate the predictive validity of these variables in connection with attrition over time. We conduct this evaluation in three MOOCs focusing on very different types of learning materials. Prior work demonstrates that participation in the discussion forums at all is a strong indicator of student commitment. Our methodology allows us to differentiate better among these students, and to identify danger signs that a struggling student is in need of support within a population whose interaction with the course offers the opportunity for effective support to be administered. Theoretical and practical implications will be discussed.", "title": "" }, { "docid": "c6a23113b0e88c884eaddfba9cce2667", "text": "Recent research in machine learning has focused on breaking audio spectrograms into separate sources of sound using latent variable decompositions. These methods require that the number of sources be specified in advance, which is not always possible. To address this problem, we develop Gamma Process Nonnegative Matrix Factorization (GaP-NMF), a Bayesian nonparametric approach to decomposing spectrograms. The assumptions behind GaP-NMF are based on research in signal processing regarding the expected distributions of spectrogram data, and GaP-NMF automatically discovers the number of latent sources. We derive a mean-field variational inference algorithm and evaluate GaP-NMF on both synthetic data and recorded music.", "title": "" }, { "docid": "27775805c45a82cbd31fd9a5e93f3df1", "text": "In a dynamic world, mechanisms allowing prediction of future situations can provide a selective advantage.
We suggest that memory systems differ in the degree of flexibility they offer for anticipatory behavior and put forward a corresponding taxonomy of prospection. The adaptive advantage of any memory system can only lie in what it contributes for future survival. The most flexible is episodic memory, which we suggest is part of a more general faculty of mental time travel that allows us not only to go back in time, but also to foresee, plan, and shape virtually any specific future event. We review comparative studies and find that, in spite of increased research in the area, there is as yet no convincing evidence for mental time travel in nonhuman animals. We submit that mental time travel is not an encapsulated cognitive system, but instead comprises several subsidiary mechanisms. A theater metaphor serves as an analogy for the kind of mechanisms required for effective mental time travel. We propose that future research should consider these mechanisms in addition to direct evidence of future-directed action. We maintain that the emergence of mental time travel in evolution was a crucial step towards our current success.", "title": "" }, { "docid": "e3ca898c936009e149d5639a6e72359e", "text": "Tracking bits through block ciphers and optimizing attacks at hand is one of the tedious tasks symmetric cryptanalysts have to deal with. It would be nice if a program could automatically handle them, at least for well-known attack techniques, so that cryptanalysts can focus only on finding new attacks. However, current automatic tools cannot be used as is, either because they are tailored for specific ciphers or because they only recover a specific part of the attacks and cryptographers are still needed to finalize the analysis. In this paper we describe a generic algorithm exhausting the best meet-in-the-middle and impossible differential attacks on a very large class of block ciphers from byte to bit-oriented, SPN, Feistel and Lai-Massey block ciphers. Contrary to previous tools that aim to find the best differential / linear paths in the cipher and leave the cryptanalysts to find the attack using these paths, we automatically find the best attacks by considering the cipher and the key schedule algorithms. The building blocks of our algorithm led to two algorithms designed to find the best simple meet-in-the-middle attacks and the best impossible truncated differential attacks respectively. We recover and improve many attacks on AES, mCRYPTON, SIMON, IDEA, KTANTAN, PRINCE and ZORRO. We show that this tool can be used by designers to improve their analysis.", "title": "" }, { "docid": "fe4c9336db84d7303280b87485f4262f", "text": "The mechanistic target of rapamycin (mTOR) coordinates eukaryotic cell growth and metabolism with environmental inputs, including nutrients and growth factors. Extensive research over the past two decades has established a central role for mTOR in regulating many fundamental cell processes, from protein synthesis to autophagy, and deregulated mTOR signaling is implicated in the progression of cancer and diabetes, as well as the aging process. Here, we review recent advances in our understanding of mTOR function, regulation, and importance in mammalian physiology.
We also highlight how the mTOR signaling network contributes to human disease and discuss the current and future prospects for therapeutically targeting mTOR in the clinic.", "title": "" }, { "docid": "1e5956b0d9d053cd20aad8b53730c969", "text": "The cloud is migrating to the edge of the network, where routers themselves may become the virtualisation infrastructure, in an evolution labelled as \"the fog\". However, many other complementary technologies are reaching a high level of maturity. Their interplay may dramatically shift the information and communication technology landscape in the following years, bringing separate technologies into a common ground. This paper offers a comprehensive definition of the fog, comprehending technologies as diverse as cloud, sensor networks, peer-to-peer networks, network virtualisation functions or configuration management techniques. We highlight the main challenges faced by this potentially breakthrough technology amalgamation.", "title": "" }, { "docid": "56beaf6067d944fc17fe282155c303a0", "text": "The femur is the enlarged and the vigorous bone in the human body, ranging from the hip to the knee. This bone is responsible for the creation of Red Blood Cell in the body. Since this bone is a major part of the body, a method is proposed through this paper to visualize and classify deformities for locating fractures in the femur through image processing techniques. The input image is preprocessed to highlight the domain of interest. In the process, the foreground which is the major domain of interest is figured out by suppressing the background details. The mathematical morphological techniques are used for these operations. With the help of basic morphological operations, the foreground is highlighted and edge detection is used to highlight the objects in the foreground. The processed image is classified using the support vector machine (SVM) to distinguish fractured and unfractured sides of the bone.", "title": "" }, { "docid": "93363db7856de156d314adb747db5c63", "text": "This paper presents a detailed analysis about the power losses and efficiency of multilevel dc-dc converters. The analysis considers different loss mechanisms and gives out quantitative descriptions of the power losses and useful design criteria. The analysis is based on a three-level multilevel dc-dc converter and can be extended to other switched-capacitor converters. The comparison between the theoretical analysis and the experimental results are shown to substantiate the theory", "title": "" }, { "docid": "e3f392ea43d435e08dc8996902fb6349", "text": "In nanopore sequencing devices, electrolytic current signals are sensitive to base modifications, such as 5-methylcytosine (5-mC). Here we quantified the strength of this effect for the Oxford Nanopore Technologies MinION sequencer. By using synthetically methylated DNA, we were able to train a hidden Markov model to distinguish 5-mC from unmethylated cytosine. We applied our method to sequence the methylome of human DNA, without requiring special steps for library preparation.", "title": "" }, { "docid": "5f414e1f03aa2a9c54fc98b05ca65cdb", "text": "Power MOSFETs have become the standard choice as the main switching device for low-voltage (<200 V) switchmode power-supply (SMPS) converter applications. However using manufacturers’ datasheets to choose or size the correct device for a specific circuit topology is becoming increasingly difficult. 
The main criteria for MOSFET selection are the power loss associated with the MOSFET (related to the overall efficiency of the SMPS) and the power-dissipation capability of the MOSFET (related to the maximum junction temperature and thermal performance of the package). This application note focuses on the basic characteristics and understanding of the MOSFET.", "title": "" }, { "docid": "160a866ca769a847138c5afc7f34db38", "text": "STUDY OBJECTIVE\nThe purpose of this article is to review the published literature and perform a systematic review to evaluate the effectiveness and feasibility of the use of a hysteroscope for vaginoscopy or hysteroscopy in diagnosing and establishing therapeutic management of adolescent patients with gynecologic problems.\n\n\nDESIGN\nA systematic review.\n\n\nSETTING\nPubMed, Web of science, and Scopus searches were performed for the period up to September 2013 to identify all the eligible studies. Additional relevant articles were identified using citations within these publications.\n\n\nPARTICIPANTS\nFemale adolescents aged 10 to 18 years.\n\n\nRESULTS\nA total of 19 studies were included in the systematic review. We identified 19 case reports that described the application of a hysteroscope as treatment modality for some gynecologic conditions or diseases in adolescents. No original study was found matching the age of this specific population.\n\n\nCONCLUSIONS\nA hysteroscope is a useful substitute for vaginoscopy or hysteroscopy for the exploration of the immature genital tract and may help in the diagnosis and treatment of gynecologic disorders in adolescent patients with an intact hymen, limited vaginal access, or a narrow vagina.", "title": "" }, { "docid": "570eca9884edb7e4a03ed95763be20aa", "text": "Gene expression is a fundamentally stochastic process, with randomness in transcription and translation leading to cell-to-cell variations in mRNA and protein levels. This variation appears in organisms ranging from microbes to metazoans, and its characteristics depend both on the biophysical parameters governing gene expression and on gene network structure. Stochastic gene expression has important consequences for cellular function, being beneficial in some contexts and harmful in others. These situations include the stress response, metabolism, development, the cell cycle, circadian rhythms, and aging.", "title": "" }, { "docid": "5cc666e8390b0d3cefaee2d55ad7ee38", "text": "The thermal environment surrounding preterm neonates in closed incubators is regulated via air temperature control mode. At present, these control modes do not take account of all the thermal parameters involved in a pattern of incubator such as the thermal parameters of preterm neonates (birth weight < 1000 grams). The objective of this work is to design and validate a generalized predictive control (GPC) that takes into account the closed incubator model as well as the newborn premature model. Then, we implemented this control law on a DRAGER neonatal incubator with and without newborn using microcontroller card. Methods: The design of the predictive control law is based on a prediction model. The developed model allows us to take into account all the thermal exchanges (radioactive, conductive, convective and evaporative) and the various interactions between the environment of the incubator and the premature newborn. 
Results: The predictive control law and the simulation model developed in Matlab/Simulink environment make it possible to evaluate the quality of the mode of control of the air temperature to which newborn must be raised. The results of the simulation and implementation of the air temperature inside the incubator (with newborn and without newborn) prove the feasibility and effectiveness of the proposed GPC controller compared with a proportional–integral–derivative controller (PID controller). Keywords—Incubator; neonatal; model; temperature; Arduino; GPC", "title": "" } ]
scidocsrr
c6800000b91876cb175b1475a62c6584
A Production Oriented Approach for Vandalism Detection in Wikidata - The Buffaloberry Vandalism Detector at WSDM Cup 2017
[ { "docid": "40da1f85f7bdc84537a608ce6bec0e17", "text": "This paper reports on the PAN 2014 evaluation lab which hosts three shared tasks on plagiarism detection, author identification, and author profiling. To improve the reproducibility of shared tasks in general, and PAN’s tasks in particular, the Webis group developed a new web service called TIRA, which facilitates software submissions. Unlike many other labs, PAN asks participants to submit running softwares instead of their run output. To deal with the organizational overhead involved in handling software submissions, the TIRA experimentation platform helps to significantly reduce the workload for both participants and organizers, whereas the submitted softwares are kept in a running state. This year, we addressed the matter of responsibility of successful execution of submitted softwares in order to put participants back in charge of executing their software at our site. In sum, 57 softwares have been submitted to our lab; together with the 58 software submissions of last year, this forms the largest collection of softwares for our three tasks to date, all of which are readily available for further analysis. The report concludes with a brief summary of each task.", "title": "" } ]
[ { "docid": "54c2914107ae5df0a825323211138eb9", "text": "An implicit, but pervasive view in the information science community is that people are perpetual seekers after new public information, incessantly identifying and consuming new information by browsing the Web and accessing public collections. One aim of this review is to move beyond this consumer characterization, which regards information as a public resource containing novel data that we seek out, consume, and then discard. Instead, I want to focus on a very different view: where familiar information is used as a personal resource that we keep, manage, and (sometimes repeatedly) exploit. I call this information curation. I first summarize limitations of the consumer perspective. I then review research on three different information curation processes: keeping, management, and exploitation. I describe existing work detailing how each of these processes is applied to different types of personal data: documents, e-mail messages, photos, and Web pages. The research indicates people tend to keep too much information, with the exception of contacts and Web pages. When managing information, strategies that rely on piles as opposed to files provide surprising benefits. And in spite of the emergence of desktop search, exploitation currently remains reliant on manual methods such as navigation. Several new technologies have the potential to address important", "title": "" }, { "docid": "88602ba9bcb297af04e58ed478664ee5", "text": "Effective and accurate diagnosis of Alzheimer's disease (AD), as well as its prodromal stage (i.e., mild cognitive impairment (MCI)), has attracted more and more attention recently. So far, multiple biomarkers have been shown to be sensitive to the diagnosis of AD and MCI, i.e., structural MR imaging (MRI) for brain atrophy measurement, functional imaging (e.g., FDG-PET) for hypometabolism quantification, and cerebrospinal fluid (CSF) for quantification of specific proteins. However, most existing research focuses on only a single modality of biomarkers for diagnosis of AD and MCI, although recent studies have shown that different biomarkers may provide complementary information for the diagnosis of AD and MCI. In this paper, we propose to combine three modalities of biomarkers, i.e., MRI, FDG-PET, and CSF biomarkers, to discriminate between AD (or MCI) and healthy controls, using a kernel combination method. Specifically, ADNI baseline MRI, FDG-PET, and CSF data from 51AD patients, 99 MCI patients (including 43 MCI converters who had converted to AD within 18 months and 56 MCI non-converters who had not converted to AD within 18 months), and 52 healthy controls are used for development and validation of our proposed multimodal classification method. In particular, for each MR or FDG-PET image, 93 volumetric features are extracted from the 93 regions of interest (ROIs), automatically labeled by an atlas warping algorithm. For CSF biomarkers, their original values are directly used as features. Then, a linear support vector machine (SVM) is adopted to evaluate the classification accuracy, using a 10-fold cross-validation. As a result, for classifying AD from healthy controls, we achieve a classification accuracy of 93.2% (with a sensitivity of 93% and a specificity of 93.3%) when combining all three modalities of biomarkers, and only 86.5% when using even the best individual modality of biomarkers. 
Similarly, for classifying MCI from healthy controls, we achieve a classification accuracy of 76.4% (with a sensitivity of 81.8% and a specificity of 66%) for our combined method, and only 72% even using the best individual modality of biomarkers. Further analysis on MCI sensitivity of our combined method indicates that 91.5% of MCI converters and 73.4% of MCI non-converters are correctly classified. Moreover, we also evaluate the classification performance when employing a feature selection method to select the most discriminative MR and FDG-PET features. Again, our combined method shows considerably better performance, compared to the case of using an individual modality of biomarkers.", "title": "" }, { "docid": "59101ef7f0d3fe1976c4abd364400bc5", "text": "Although conventional Yagi antenna has the advantage of unidirectional radiation patterns, it is not suitable for wideband applications due to its drawback of narrow bandwidth. In this communication, a compact wideband planar printed quasi-Yagi antenna is presented. The proposed quasi-Yagi antenna consists of a microstrip line to slotline transition structure, a driver dipole, and a parasitic strip element. The driver dipole is connected to the slotline through a coplanar stripline (CPS). The proposed antenna uses a stepped connection structure between the CPS and the slotline to improve the impedance matching. Two apertures are symmetrically etched in the ground plane to improve the unidirectional radiation characteristics. Simulation and experimental results show that the unidirectional radiation patterns of the proposed antenna are good. A 92.2% measured bandwidth with from 3.8 to 10.3 GHz is achieved. A moderate gain, which is better than 4 dBi, is also obtained.", "title": "" }, { "docid": "222ab6804b3fe15fe23b27bc7f5ede5f", "text": "Single-image super-resolution (SR) reconstruction via sparse representation has recently attracted broad interest. It is known that a low-resolution (LR) image is susceptible to noise or blur due to the degradation of the observed image, which would lead to a poor SR performance. In this paper, we propose a novel robust edge-preserving smoothing SR (REPS-SR) method in the framework of sparse representation. An EPS regularization term is designed based on gradient-domain-guided filtering to preserve image edges and reduce noise in the reconstructed image. Furthermore, a smoothing-aware factor adaptively determined by the estimation of the noise level of LR images without manual interference is presented to obtain an optimal balance between the data fidelity term and the proposed EPS regularization term. An iterative shrinkage algorithm is used to obtain the SR image results for LR images. The proposed adaptive smoothing-aware scheme makes our method robust to different levels of noise. Experimental results indicate that the proposed method can preserve image edges and reduce noise and outperforms the current state-of-the-art methods for noisy images.", "title": "" }, { "docid": "0b1baa3190abb39284f33b8e73bcad1d", "text": "Despite significant advances in machine learning and perception over the past few decades, perception algorithms can still be unreliable when deployed in challenging time-varying environments. When these systems are used for autonomous decision-making, such as in self-driving vehicles, the impact of their mistakes can be catastrophic. As such, it is important to characterize the performance of the system and predict when and where it may fail in order to take appropriate action. 
While similar in spirit to the idea of introspection, this work introduces a new paradigm for predicting the likely performance of a robot’s perception system based on past experience in the same workspace. In particular, we propose two models that probabilistically predict perception performance from observations gathered over time. While both approaches are place-specific, the second approach additionally considers appearance similarity when incorporating past observations. We evaluate our method in a classical decision-making scenario in which the robot must choose when and where to drive autonomously in 60 km of driving data from an urban environment. Results demonstrate that both approaches lead to fewer false decisions (in terms of incorrectly offering or denying autonomy) for two different detector models, and show that leveraging visual appearance within a state-of-the-art navigation framework increases the accuracy of our performance predictions.", "title": "" }, { "docid": "ae7405600f7cf3c7654cc2db73a22340", "text": "The usual approach for automatic summarization is sentence extraction, where key sentences from the input documents are selected based on a suite of features. While word frequency often is used as a feature in summarization, its impact on system performance has not been isolated. In this paper, we study the contribution to summarization of three factors related to frequency: content word frequency, composition functions for estimating sentence importance from word frequency, and adjustment of frequency weights based on context. We carry out our analysis using datasets from the Document Understanding Conferences, studying not only the impact of these features on automatic summarizers, but also their role in human summarization. Our research shows that a frequency based summarizer can achieve performance comparable to that of state-of-the-art systems, but only with a good composition function; context sensitivity improves performance and significantly reduces repetition.", "title": "" }, { "docid": "35625f248c81ebb5c20151147483f3f6", "text": "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions [3]. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators [1] have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.", "title": "" }, { "docid": "e1b6de27518c1c17965a891a8d14a1e1", "text": "Mobile phones are becoming more and more widely used nowadays, and people do not use the phone only for communication: there is a wide variety of phone applications allowing users to select those that fit their needs.
Aggregated over time, application usage patterns exhibit not only what people are consistently interested in but also the way in which they use their phones, and can help improving phone design and personalized services. This work aims at mining automatically usage patterns from apps data recorded continuously with smartphones. A new probabilistic framework for mining usage patterns is proposed. Our methodology involves the design of a bag-of-apps model that robustly represents level of phone usage over specific times of the day, and the use of a probabilistic topic model that jointly discovers patterns of usage over multiple applications and describes users as mixtures of such patterns. Our framework is evaluated using 230 000+ hours of real-life app phone log data, demonstrates that relevant patterns of usage can be extracted, and is objectively validated on a user retrieval task with competitive performance.", "title": "" }, { "docid": "d417b73715337b661c940b370a96fc7b", "text": "In this paper we introduce a new decentralized digital currency, called NRGcoin. Prosumers in the smart grid trade locally produced renewable energy using NRGcoins, the value of which is determined on an open currency exchange market. Similar to Bitcoins, this currency offers numerous advantages over fiat currency, but unlike Bitcoins it is generated by injecting energy into the grid, rather than spending energy on computational power. In addition, we propose a novel trading paradigm for buying and selling green energy in the smart grid. Our mechanism achieves demand response by providing incentives to prosumers to balance their production and consumption out of their own self-interest. We study the advantages of our proposed currency over traditional money and environmental instruments, and explore its benefits for all parties in the smart grid.", "title": "" }, { "docid": "fd91f09861da433d27d4db3f7d2a38a6", "text": "Herbert Simon’s research endeavor aimed to understand the processes that participate in human decision making. However, despite his effort to investigate this question, his work did not have the impact in the “decision making” community that it had in other fields. His rejection of the assumption of perfect rationality, made in mainstream economics, led him to develop the concept of bounded rationality. Simon’s approach also emphasized the limitations of the cognitive system, the change of processes due to expertise, and the direct empirical study of cognitive processes involved in decision making. In this article, we argue that his subsequent research program in problem solving and expertise offered critical tools for studying decision-making processes that took into account his original notion of bounded rationality. Unfortunately, these tools were ignored by the main research paradigms in decision making, such as Tversky and Kahneman’s biased rationality approach (also known as the heuristics and biases approach) and the ecological approach advanced by Gigerenzer and others. We make a proposal of how to integrate Simon’s approach with the main current approaches to decision making. 
We argue that this would lead to better models of decision making that are more generalizable, have higher ecological validity, include specification of cognitive processes, and provide a better understanding of the interaction between the characteristics of the cognitive system and the contingencies of the environment.", "title": "" }, { "docid": "3d3589a002f8195bb20324dd8a8f5d76", "text": "Vacuum-based end effectors are widely used in industry and are often preferred over parallel-jaw and multifinger grippers due to their ability to lift objects with a single point of contact. Suction grasp planners often target planar surfaces on point clouds near the estimated centroid of an object. In this paper, we propose a compliant suction contact model that computes the quality of the seal between the suction cup and local target surface and a measure of the ability of the suction grasp to resist an external gravity wrench. To characterize grasps, we estimate robustness to perturbations in end-effector and object pose, material properties, and external wrenches. We analyze grasps across 1,500 3D object models to generate Dex-Net 3.0, a dataset of 2.8 million point clouds, suction grasps, and grasp robustness labels. We use Dex-Net 3.0 to train a Grasp Quality Convolutional Neural Network (GQ-CNN) to classify robust suction targets in point clouds containing a single object. We evaluate the resulting system in 350 physical trials on an ABB YuMi fitted with a pneumatic suction gripper. When evaluated on novel objects that we categorize as Basic (prismatic or cylindrical), Typical (more complex geometry), and Adversarial (with few available suction-grasp points) Dex-Net 3.0 achieves success rates of 98%, 82%, and 58% respectively, improving to 81% in the latter case when the training set includes only adversarial objects. Code, datasets, and supplemental material can be found at http://berkeleyautomation.github.io/dex-net.", "title": "" }, { "docid": "9d319b7bfdf43b05aa79f67c990ccb73", "text": "Queries are the foundations of data intensive applications. In model-driven software engineering (MDE), model queries are core technologies of tools and transformations. As software models are rapidly increasing in size and complexity, traditional tools exhibit scalability issues that decrease productivity and increase costs [17]. While scalability is a hot topic in the database community and recent NoSQL efforts have partially addressed many shortcomings, this happened at the cost of sacrificing the ad-hoc query capabilities of SQL. Unfortunately, this is a critical problem for MDE applications due to their inherent workload complexity. In this paper, we aim to address both the scalability and ad-hoc querying challenges by adapting incremental graph search techniques – known from the EMF-IncQuery framework – to a distributed cloud infrastructure. We propose a novel architecture for distributed and incremental queries, and conduct experiments to demonstrate that IncQuery-D, our prototype system, can scale up from a single workstation to a cluster that can handle very large models and complex incremental queries efficiently.", "title": "" }, { "docid": "1364388181335859cabcdcecf73038e8", "text": "In this paper, we propose an image completion algorithm based on dense correspondence between the input image and an exemplar image retrieved from the Internet. 
Contrary to traditional methods which register two images according to sparse correspondence, in this paper, we propose a hierarchical PatchMatch method that progressively estimates a dense correspondence, which is able to capture small deformations between images. The estimated dense correspondence has usually large occlusion areas that correspond to the regions to be completed. A nearest neighbor field (NNF) interpolation algorithm interpolates a smooth and accurate NNF over the occluded region. Given the calculated NNF, the correct image content from the exemplar image is transferred to the input image. Finally, as there could be a color difference between the completed content and the input image, a color correction algorithm is applied to remove the visual artifacts. Numerical results show that our proposed image completion method can achieve photo realistic image completion results.", "title": "" }, { "docid": "a0124ccd8586bd082ef4510389269d5d", "text": "We present a convolutional-neural-network-based system that faithfully colorizes black and white photographic images without direct human assistance. We explore various network architectures, objectives, color spaces, and problem formulations. The final classification-based model we build generates colorized images that are significantly more aesthetically-pleasing than those created by the baseline regression-based model, demonstrating the viability of our methodology and revealing promising avenues for future work.", "title": "" }, { "docid": "c20b774b1e2422cadaf41e60652f7363", "text": "In some situations, utilities may try to “save” the fuse of a circuit following temporary faults by de-energizing the line with the fast operation of an upstream recloser before the fuse is damaged. This fuse-saving practice is accomplished through proper time coordination between a recloser and a fuse. However, the installation of distributed generation (DG) into distribution networks may affect this coordination due to additional fault current contributions from the distributed resources. This phenomenon of recloser-fuse miscoordination is investigated in this paper with the help of a typical network that employs fuse saving. The limitations of a recloser equipped with time and instantaneous overcurrent elements with respect to fuse savings, in the presence of DG, are discussed. An adaptive relaying strategy is proposed to ensure fuse savings in the new scenario even in the worst fault conditions. The simulation results obtained by adaptively changing relay settings in response to changing DG configurations confirm that the settings selected theoretically in accordance with the proposed strategy hold well in operation.", "title": "" }, { "docid": "83e53a09792e434db2bb5bef32c7bf61", "text": "Extractive document summarization aims to conclude given documents by extracting some salient sentences. Often, it faces two challenges: 1) how to model the information redundancy among candidate sentences; 2) how to select the most appropriate sentences. This paper attempts to build a strong summarizer DivSelect+CNNLM by presenting new algorithms to optimize each of them. Concretely, it proposes CNNLM, a novel neural network language model (NNLM) based on convolutional neural network (CNN), to project sentences into dense distributed representations, then models sentence redundancy by cosine similarity. 
Afterwards, it formulates the selection process as an optimization problem, constructing a diversified selection process (DivSelect) with the aim of selecting some sentences which have high prestige, meantime, are dis-similar with each other. Experimental results on DUC2002 and DUC2004 benchmark data sets demonstrate the effectiveness of our approach.", "title": "" }, { "docid": "96669cea810d2918f2d35875f87d45f2", "text": "In this paper, a new probabilistic tagging method is presented which avoids problems that Markov Model based taggers face, when they have to estimate transition probabilities from sparse data. In this tagging method, transition probabilities are estimated using a decision tree. Based on this method, a part-of-speech tagger (called TreeTagger) has been implemented which achieves 96.36 % accuracy on Penn-Treebank data which is better than that of a trigram tagger (96.06 %) on the same data.", "title": "" }, { "docid": "42fa2e99d0c17cf706e6674dafb898a7", "text": "To improve software productivity, when constructing new software systems, developers often reuse existing class libraries or frameworks by invoking their APIs. Those APIs, however, are often complex and not well documented, posing barriers for developers to use them in new client code. To get familiar with how those APIs are used, developers may search the Web using a general search engine to find relevant documents or code examples. Developers can also use a source code search engine to search open source repositories for source files that use the same APIs. Nevertheless, the number of returned source files is often large. It is difficult for developers to learn API usages from a large number of returned results. In order to help developers understand API usages and write API client code more effectively, we have developed an API usage mining framework and its supporting tool called MAPO (for <u>M</u>ining <u>AP</u>I usages from <u>O</u>pen source repositories). Given a query that describes a method, class, or package for an API, MAPO leverages the existing source code search engines to gather relevant source files and conducts data mining. The mining leads to a short list of frequent API usages for developers to inspect. MAPO currently consists of five components: a code search engine, a source code analyzer, a sequence preprocessor, a frequent sequence miner, and a frequent sequence post processor. We have examined the effectiveness of MAPO using a set of various queries. The preliminary results show that the framework is practical for providing informative and succinct API usage patterns.", "title": "" }, { "docid": "188e971e34192af93c36127b69d89064", "text": "1 1 This paper has been revised and extended from the authors' previous work [23][24][25]. ABSTRACT Ontology mapping seeks to find semantic correspondences between similar elements of different ontologies. It is a key challenge to achieve semantic interoperability in building the Semantic Web. This paper proposes a new generic and adaptive ontology mapping approach, called the PRIOR+, based on propagation theory, information retrieval techniques and artificial intelligence. The approach consists of three major modules, i.e., the IR-based similarity generator, the adaptive similarity filter and weighted similarity aggregator, and the neural network based constraint satisfaction solver. 
The approach first measures both linguistic and structural similarity of ontologies in a vector space model, and then aggregates them using an adaptive method based on their harmonies, which is defined as an estimator of performance of similarity. Finally to improve mapping accuracy the interactive activation and competition neural network is activated, if necessary, to search for a solution that can satisfy ontology constraints. The experimental results show that harmony is a good estimator of f-measure; the harmony based adaptive aggregation outperforms other aggregation methods; neural network approach significantly boosts the performance in most cases. Our approach is competitive with top ranked systems on benchmark tests at OAEI campaign 2007, and performs the best on real cases in OAEI benchmark tests.", "title": "" } ]
scidocsrr
290b378cdbef5fbc22e940d194cb0784
Superionic glass-ceramic electrolytes for room-temperature rechargeable sodium batteries.
[ { "docid": "ea96aa3b9f162c69c738be2b190db9e0", "text": "Batteries are currently being developed to power an increasingly diverse range of applications, from cars to microchips. How can scientists achieve the performance that each application demands? How will batteries be able to power the many other portable devices that will no doubt be developed in the coming years? And how can batteries become a sustainable technology for the future? The technological revolution of the past few centuries has been fuelled mainly by variations of the combustion reaction, the fire that marked the dawn of humanity. But this has come at a price: the resulting emissions of carbon dioxide have driven global climate change. For the sake of future generations, we urgently need to reconsider how we use energy in everything from barbecues to jet aeroplanes and power stations. If a new energy economy is to emerge, it must be based on a cheap and sustainable energy supply. One of the most flagrantly wasteful activities is travel, and here battery devices can potentially provide a solution, especially as they can be used to store energy from sustainable sources such as the wind and solar power. Because batteries are inherently simple in concept, it is surprising that their development has progressed much more slowly than other areas of electronics. As a result, they are often seen as being the heaviest, costliest and least-green components of any electronic device. It was the lack of good batteries that slowed down the deployment of electric cars and wireless communication, which date from at least 1899 and 1920, respectively (Fig. 1). The slow progress is due to the lack of suitable electrode materials and electrolytes, together with difficulties in mastering the interfaces between them. All batteries are composed of two electrodes connected by an ionically conductive material called an electrolyte. The two electrodes have different chemical potentials, dictated by the chemistry that occurs at each. When these electrodes are connected by means of an external device, electrons spontaneously flow from the more negative to the more positive potential. Ions are transported through the electrolyte, maintaining the charge balance, and electrical energy can be tapped by the external circuit. In secondary, or rechargeable, batteries, a larger voltage applied in the opposite direction can cause the battery to recharge. The amount of electrical energy per mass or volume that a battery can deliver is a function of the cell's voltage and capacity, which are dependent on the …", "title": "" } ]
[ { "docid": "968ee8726afb8cc82d629ac8afabf3db", "text": "Online communities are increasingly important to organizations and the general public, but there is little theoretically based research on what makes some online communities more successful than others. In this article, we apply theory from the field of social psychology to understand how online communities develop member attachment, an important dimension of community success. We implemented and empirically tested two sets of community features for building member attachment by strengthening either group identity or interpersonal bonds. To increase identity-based attachment, we gave members information about group activities and intergroup competition, and tools for group-level communication. To increase bond-based attachment, we gave members information about the activities of individual members and interpersonal similarity, and tools for interpersonal communication. Results from a six-month field experiment show that participants’ visit frequency and self-reported attachment increased in both conditions. Community features intended to foster identity-based attachment had stronger effects than features intended to foster bond-based attachment. Participants in the identity condition with access to group profiles and repeated exposure to their group’s activities visited their community twice as frequently as participants in other conditions. The new features also had stronger effects on newcomers than on old-timers. This research illustrates how theory from the social science literature can be applied to gain a more systematic understanding of online communities and how theory-inspired features can improve their success. 1", "title": "" }, { "docid": "c3271548bf0c90541153e629dc298d61", "text": "A number of recent studies have shown that a Deep Convolutional Neural Network (DCNN) pretrained on a large dataset can be adopted as a universal image descriptor, and that doing so leads to impressive performance at a range of image classification tasks. Most of these studies, if not all, adopt activations of the fully-connected layer of a DCNN as the image or region representation and it is believed that convolutional layer activations are less discriminative. This paper, however, advocates that if used appropriately, convolutional layer activations constitute a powerful image representation. This is achieved by adopting a new technique proposed in this paper called cross-convolutional-layer pooling. More specifically, it extracts subarrays of feature maps of one convolutional layer as local features, and pools the extracted features with the guidance of the feature maps of the successive convolutional layer. Compared with existing methods that apply DCNNs in the similar local feature setting, the proposed method avoids the input image style mismatching issue which is usually encountered when applying fully connected layer activations to describe local regions. Also, the proposed method is easier to implement since it is codebook free and does not have any tuning parameters. By applying our method to four popular visual classification tasks, it is demonstrated that the proposed method can achieve comparable or in some cases significantly better performance than existing fully-connected layer based image representations.", "title": "" }, { "docid": "92a0fb602276952962762b07e7cd4d2b", "text": "Representation of video is a vital problem in action recognition. 
This paper proposes Stacked Fisher Vectors (SFV), a new representation with multi-layer nested Fisher vector encoding, for action recognition. In the first layer, we densely sample large subvolumes from input videos, extract local features, and encode them using Fisher vectors (FVs). The second layer compresses the FVs of subvolumes obtained in previous layer, and then encodes them again with Fisher vectors. Compared with standard FV, SFV allows refining the representation and abstracting semantic information in a hierarchical way. Compared with recent mid-level based action representations, SFV need not to mine discriminative action parts but can preserve mid-level information through Fisher vector encoding in higher layer. We evaluate the proposed methods on three challenging datasets, namely Youtube, J-HMDB, and HMDB51. Experimental results demonstrate the effectiveness of SFV, and the combination of the traditional FV and SFV outperforms state-of-the-art methods on these datasets with a large margin.", "title": "" }, { "docid": "7f0a721287ed05c67c5ecf1206bab4e6", "text": "This study underlines the value of the brand personality and its influence on consumer’s decision making, through relational variables. An empirical study, in which 380 participants have received an SMS ad, confirms that brand personality does actually influence brand trust, brand attachment and brand commitment. The levels of brand sensitivity and involvement have also an impact on the brand personality and on its related variables.", "title": "" }, { "docid": "274d24f2e061eea92a2030e93c640e27", "text": "Traditional convolutional layers extract features from patches of data by applying a non-linearity on an affine function of the input. We propose a model that enhances this feature extraction process for the case of sequential data, by feeding patches of the data into a recurrent neural network and using the outputs or hidden states of the recurrent units to compute the extracted features. By doing so, we exploit the fact that a window containing a few frames of the sequential data is a sequence itself and this additional structure might encapsulate valuable information. In addition, we allow for more steps of computation in the feature extraction process, which is potentially beneficial as an affine function followed by a non-linearity can result in too simple features. Using our convolutional recurrent layers, we obtain an improvement in performance in two audio classification tasks, compared to traditional convolutional layers.", "title": "" }, { "docid": "3920597ba84564e1928773e1f22cd6d4", "text": "Neuroelectric oscillations reflect rhythmic shifting of neuronal ensembles between high and low excitability states. In natural settings, important stimuli often occur in rhythmic streams, and when oscillations entrain to an input rhythm their high excitability phases coincide with events in the stream, effectively amplifying neuronal input responses. When operating in a 'rhythmic mode', attention can use these differential excitability states as a mechanism of selection by simply enforcing oscillatory entrainment to a task-relevant input stream. When there is no low-frequency rhythm that oscillations can entrain to, attention operates in a 'continuous mode', characterized by extended increase in gamma synchrony. 
We review the evidence for early sensory selection by oscillatory phase-amplitude modulations, its mechanisms and its perceptual and behavioral consequences.", "title": "" }, { "docid": "9a7f9ecf4dafaaaee2a76d49b51c545e", "text": "Given a set of documents from a specific domain (e.g., medical research journals), how do we automatically build a Knowledge Graph (KG) for that domain? Automatic identification of relations and their schemas, i.e., type signature of arguments of relations (e.g., undergo(Patient, Surgery)), is an important first step towards this goal. We refer to this problem as Relation Schema Induction (RSI). In this paper, we propose Schema Induction using Coupled Tensor Factorization (SICTF), a novel tensor factorization method for relation schema induction. SICTF factorizes Open Information Extraction (OpenIE) triples extracted from a domain corpus along with additional side information in a principled way to induce relation schemas. To the best of our knowledge, this is the first application of tensor factorization for the RSI problem. Through extensive experiments on multiple real-world datasets, we find that SICTF is not only more accurate than state-of-the-art baselines, but also significantly faster (about 14x faster).", "title": "" }, { "docid": "69f72b8eadadba733f240fd652ca924e", "text": "We address the problem of finding descriptive explanations of facts stored in a knowledge graph. This is important in high-risk domains such as healthcare, intelligence, etc. where users need additional information for decision making and is especially crucial for applications that rely on automatically constructed knowledge bases where machine learned systems extract facts from an input corpus and working of the extractors is opaque to the end-user. We follow an approach inspired from information retrieval and propose a simple and efficient, yet effective solution that takes into account passage level as well as document level properties to produce a ranked list of passages describing a given input relation. We test our approach using Wikidata as the knowledge base and Wikipedia as the source corpus and report results of user studies conducted to study the effectiveness of our proposed model.", "title": "" }, { "docid": "fda40e94b771e6ac4d0390236fd4eb56", "text": "How does users’ freedom of choice, or the lack thereof, affect interface preferences? The research reported in this article approaches this question from two theoretical perspectives. The first of these argues that an interface with a dominant market share benefits from the absence of competition because users acquire skills that are specific to that particular interface, which in turn reduces the probability that they will switch to a new competitor interface in the future. By contrast, the second perspective proposes that the advantage that a market leader has in being able to install a set of non-transferable skills in its user base is offset by a psychological force that causes humans to react against perceived constraints on their freedom of choice. We test a research model that incorporates the key predictions of these two theoretical perspectives in an experiment involving consequential interface choices. 
We find strong support for the second perspective, which builds upon the theory of psychological reactance.", "title": "" }, { "docid": "ae527d90981c371c4807799802dbc5a8", "text": "We present our efforts to deploy mobile robots in office environments, focusing in particular on the challenge of planning a schedule for a robot to accomplish user-requested actions. We concretely aim to make our CoBot mobile robots available to execute navigational tasks requested by users, such as telepresence, and picking up and delivering messages or objects at different locations. We contribute an efficient web-based approach in which users can request and schedule the execution of specific tasks. The scheduling problem is converted to a mixed integer programming problem. The robot executes the scheduled tasks using a synthetic speech and touch-screen interface to interact with users, while allowing users to follow the task execution online. Our robot uses a robust Kinect-based safe navigation algorithm, moves fully autonomously without the need to be chaperoned by anyone, and is robust to the presence of moving humans, as well as non-trivial obstacles, such as legged chairs and tables. Our robots have already performed 15km of autonomous service tasks. Introduction and Related Work We envision a system in which autonomous mobile robots robustly perform service tasks in indoor environments. The robots perform tasks which are requested by building residents over the web, such as delivering mail, fetching coffee, or guiding visitors. To fulfill the users’ requests, we must plan a schedule of when the robot will execute each task in accordance with the constraints specified by the users. Many efforts have used the web to access robots, including the early examples of the teleoperation of a robotic arm (Goldberg et al. 1995; Taylor and Trevelyan 1995) and interfacing with a mobile robot (e.g., (Simmons et al. 1997; Siegwart and Saucy 1999; Saucy and Mondada 2000; Schulz et al. 2000)), among others. The robot Xavier (Simmons et al. 1997; 2000) allowed users to make requests over the web for the robot to go to specific places, and other mobile robots soon followed (Siegwart and Saucy 1999; Grange, Fong, and Baur 2000; Saucy and Mondada 2000; Schulz et al. 2000). The RoboCup@Home initiative (Visser and Burkhard 2007) provides competition setups for indoor service autonomous robots, with an increasingly wide scope of challenges focusing on robot autonomy and verbal interaction with users. (Figure 1: CoBot-2, an omnidirectional mobile robot for indoor users.) In this work, we present our architecture to effectively make a fully autonomous indoor service robot available to general users. We focus on the problem of planning a schedule for the robot, and present a mixed integer linear programming solution for planning a schedule. We ground our work on the CoBot-2 platform, shown in Figure 1. CoBot-2 autonomously localizes and navigates in a multi-floor office environment while effectively avoiding obstacles (Biswas and Veloso 2010). The robot carries a variety of sensing and computing devices, including a camera, a Kinect depth-camera, a Hokuyo LIDAR, a touch-screen tablet, microphones, speakers, and wireless communication. CoBot-2 executes tasks sent by users over the web, and we have devised a user-friendly web interface that allows users to specify tasks. 
Currently, the robot executes three types of tasks: a GoToRoom task where the robot visits a location, a Telepresence task where the robot goes to a location (CoBot-2 was designed and built by Michael Licitra, mlicitra@cmu.edu, as a scaled-up version of the CMDragons small-size soccer robots, also designed and built by him.)", "title": "" }, { "docid": "64753b3c47e52ff6f1760231dc13cd63", "text": "Theatrical improvisation (impro or improv) is a demanding form of live, collaborative performance. Improv is a humorous and playful artform built on an open-ended narrative structure which simultaneously celebrates effort and failure. It is thus an ideal test bed for the development and deployment of interactive artificial intelligence (AI)-based conversational agents, or artificial improvisors. This case study introduces an improv show experiment featuring human actors and artificial improvisors. We have previously developed a deep-learning-based artificial improvisor, trained on movie subtitles, that can generate plausible, context-based, lines of dialogue suitable for theatre (Mathewson and Mirowski 2017b). In this work, we have employed it to control what a subset of human actors say during an improv performance. We also give human-generated lines to a different subset of performers. All lines are provided to actors with headphones and all performers are wearing headphones. This paper describes a Turing test, or imitation game, taking place in a theatre, with both the audience members and the performers left to guess who is a human and who is a machine. In order to test scientific hypotheses about the perception of humans versus machines we collect anonymous feedback from volunteer performers and audience members. Our results suggest that rehearsal increases proficiency and possibility to control events in the performance. That said, consistency with real world experience is limited by the interface and the mechanisms used to perform the show. We also show that human-generated lines are shorter, more positive, and have less difficult words with more grammar and spelling mistakes than the artificial improvisor generated lines.", "title": "" }, { "docid": "4f7fdd852f520f6928eeb69b3d0d1632", "text": "Hadoop MapReduce is a popular framework for distributed storage and processing of large datasets and is used for big data analytics. It has various configuration parameters which play an important role in deciding the performance i.e., the execution time of a given big data processing job. Default values of these parameters do not result in good performance and therefore it is important to tune them. However, there is inherent difficulty in tuning the parameters due to two important reasons - first, the parameter search space is large and second, there are cross-parameter interactions. Hence, there is a need for a dimensionality-free method which can automatically tune the configuration parameters by taking into account the cross-parameter dependencies. In this paper, we propose a novel Hadoop parameter tuning methodology, based on a noisy gradient algorithm known as the simultaneous perturbation stochastic approximation (SPSA). The SPSA algorithm tunes the selected parameters by directly observing the performance of the Hadoop MapReduce system. The approach followed is independent of parameter dimensions and requires only 2 observations per iteration while tuning. 
We demonstrate the effectiveness of our methodology in achieving good performance on popular Hadoop benchmarks namely Grep, Bigram, Inverted Index, Word Co-occurrence and Terasort. Our method, when tested on a 25 node Hadoop cluster shows 45-66% decrease in execution time of Hadoop jobs on an average, when compared to prior methods. Further, our experiments also indicate that the parameters tuned by our method are resilient to changes in number of cluster nodes, which makes our method suitable to optimize Hadoop when it is provided as a service on the cloud.", "title": "" }, { "docid": "307c8b04c447757f1bbcc5bf9976f423", "text": "BACKGROUND\nChemical and biomedical Named Entity Recognition (NER) is an essential prerequisite task before effective text mining can begin for biochemical-text data. Exploiting unlabeled text data to leverage system performance has been an active and challenging research topic in text mining due to the recent growth in the amount of biomedical literature. We present a semi-supervised learning method that efficiently exploits unlabeled data in order to incorporate domain knowledge into a named entity recognition model and to leverage system performance. The proposed method includes Natural Language Processing (NLP) tasks for text preprocessing, learning word representation features from a large amount of text data for feature extraction, and conditional random fields for token classification. Other than the free text in the domain, the proposed method does not rely on any lexicon nor any dictionary in order to keep the system applicable to other NER tasks in bio-text data.\n\n\nRESULTS\nWe extended BANNER, a biomedical NER system, with the proposed method. This yields an integrated system that can be applied to chemical and drug NER or biomedical NER. We call our branch of the BANNER system BANNER-CHEMDNER, which is scalable over millions of documents, processing about 530 documents per minute, is configurable via XML, and can be plugged into other systems by using the BANNER Unstructured Information Management Architecture (UIMA) interface. BANNER-CHEMDNER achieved an 85.68% and an 86.47% F-measure on the testing sets of CHEMDNER Chemical Entity Mention (CEM) and Chemical Document Indexing (CDI) subtasks, respectively, and achieved an 87.04% F-measure on the official testing set of the BioCreative II gene mention task, showing remarkable performance in both chemical and biomedical NER. BANNER-CHEMDNER system is available at: https://bitbucket.org/tsendeemts/banner-chemdner.", "title": "" }, { "docid": "6cc3476cbb294ba2b6e95b962ff7c5d6", "text": "Recent advances in position localization techniques have fundamentally enhanced social networking services, allowing users to share their locations and location-related content, such as geo-tagged photos and notes. We refer to these social networks as location-based social networks (LBSNs). Location data both bridges the gap between the physical and digital worlds and enables a deeper understanding of user preferences and behavior. This addition of vast geospatial datasets has stimulated research into novel recommender systems that seek to facilitate users’ travels and social interactions. In this paper, we offer a systematic review of this research, summarizing the contributions of individual efforts and exploring their relations. We discuss the new properties and challenges that location brings to recommendation systems for LBSNs. 
We present a comprehensive survey of recommender systems for LBSNs, analyzing 1) the data source used, 2) the methodology employed to generate a recommendation, and 3) the objective of the recommendation. We propose three taxonomies that partition the recommender systems according to the properties listed above. First, we categorize the recommender systems by the objective of the recommendation, which can include locations, users, activities, or social media.Second, we categorize the recommender systems by the methodologies employed, including content-based, link analysis-based, and collaborative filtering-based methodologies. Third, we categorize the systems by the data sources used, including user profiles, user online histories, and user location histories. For each category, we summarize the goals and contributions of each system and highlight one representative research effort. Further, we provide comparative analysis of the recommendation systems within each category. Finally, we discuss methods of evaluation for these recommender systems and point out promising research topics for future work. This article presents a panorama of the recommendation systems in location-based social networks with a balanced depth, facilitating research into this important research theme.", "title": "" }, { "docid": "7804d1c4ec379ed47d45917786946b2f", "text": "Data mining technology has been applied to library management. In this paper, Boustead College Library Information Management System in the history of circulation records, the reader information and collections as a data source, using the Microsoft SQL Server 2005 as a data mining tool, applying data mining algorithm as cluster, association rules and time series to identify characteristics of the reader to borrow in order to achieve individual service.", "title": "" }, { "docid": "742c7ccfc1bc0f5150b47683fbfd455e", "text": "Detailed facial performance geometry can be reconstructed using dense camera and light setups in controlled studios. However, a wide range of important applications cannot employ these approaches, including all movie productions shot from a single principal camera. For post-production, these require dynamic monocular face capture for appearance modification. We present a new method for capturing face geometry from monocular video. Our approach captures detailed, dynamic, spatio-temporally coherent 3D face geometry without the need for markers. It works under uncontrolled lighting, and it successfully reconstructs expressive motion including high-frequency face detail such as folds and laugh lines. After simple manual initialization, the capturing process is fully automatic, which makes it versatile, lightweight and easy-to-deploy. Our approach tracks accurate sparse 2D features between automatically selected key frames to animate a parametric blend shape model, which is further refined in pose, expression and shape by temporally coherent optical flow and photometric stereo. We demonstrate performance capture results for long and complex face sequences captured indoors and outdoors, and we exemplify the relevance of our approach as an enabling technology for model-based face editing in movies and video, such as adding new facial textures, as well as a step towards enabling everyone to do facial performance capture with a single affordable camera.", "title": "" }, { "docid": "4f43cd8225c70c0328ea4a971abc0e2f", "text": "Home security system is needed for convenience and safety. This system invented to keep home safe from intruder. 
In this work, we present the design and implementation of a GSM based wireless home security system. which take a very less power. The system is a wireless home network which contains a GSM modem and magnet with relay which are door security nodes. The system can response rapidly as intruder detect and GSM module will do alert home owner. This security system for alerting a house owner wherever he will. In this system a relay and magnet installed at entry point to a precedence produce a signal through a public telecom network and sends a message or redirect a call that that tells about your home update or predefined message which is embedded in microcontroller. Suspected activities are conveyed to remote user through SMS or Call using GSM technology.", "title": "" }, { "docid": "19700a52f05178ea1c95d576f050f57d", "text": "With the progress of mobile devices and wireless broadband, a new eMarket platform, termed spatial crowdsourcing is emerging, which enables workers (aka crowd) to perform a set of spatial tasks (i.e., tasks related to a geographical location and time) posted by a requester. In this paper, we study a version of the spatial crowd-sourcing problem in which the workers autonomously select their tasks, called the worker selected tasks (WST) mode. Towards this end, given a worker, and a set of tasks each of which is associated with a location and an expiration time, we aim to find a schedule for the worker that maximizes the number of performed tasks. We first prove that this problem is NP-hard. Subsequently, for small number of tasks, we propose two exact algorithms based on dynamic programming and branch-and-bound strategies. Since the exact algorithms cannot scale for large number of tasks and/or limited amount of resources on mobile platforms, we also propose approximation and progressive algorithms. We conducted a thorough experimental evaluation on both real-world and synthetic data to compare the performance and accuracy of our proposed approaches.", "title": "" }, { "docid": "2effb3276d577d961f6c6ad18a1e7b3e", "text": "This paper extends the recovery of structure and motion to image sequences with several independently moving objects. The motion, structure, and camera calibration are all a-priori unknown. The fundamental constraint that we introduce is that multiple motions must share the same camera parameters. Existing work on independent motions has not employed this constraint, and therefore has not gained over independent static-scene reconstructions. We show how this constraint leads to several new results in structure and motion recovery, where Euclidean reconstruction becomes possible in the multibody case, when it was underconstrained for a static scene. We show how to combine motions of high-relief, low-relief and planar objects. Additionally we show that structure and motion can be recovered from just 4 points in the uncalibrated, fixed camera, case. Experiments on real and synthetic imagery demonstrate the validity of the theory and the improvement in accuracy obtained using multibody analysis.", "title": "" }, { "docid": "61ecbc652cf9f57136e8c1cd6fed2fb0", "text": "Recent advancements in digital technology have attracted the interest of educators and researchers to develop technology-assisted inquiry-based learning environments in the domain of school science education. 
Traditionally, school science education has followed deductive and inductive forms of inquiry investigation, while the abductive form of inquiry has previously been sparsely explored in the literature related to computers and education. We have therefore designed a mobile learning application ‘ThinknLearn’, which assists high school students in generating hypotheses during abductive inquiry investigations. The M3 evaluation framework was used to investigate the effectiveness of using ‘ThinknLearn’ to facilitate student learning. The results indicated in this paper showed improvements in the experimental group’s learning performance as compared to a control group in pre-post tests. In addition, the experimental group also maintained this advantage during retention tests as well as developing positive attitudes toward mobile learning. 2012 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
caa8433540b6133b9466e0583701db74
Exploring capturable everyday memory for autobiographical authentication
[ { "docid": "2a0de2b93a6a227380264e7bc6cac094", "text": "The most common computer authentication method is to use alphanumerical usernames and passwords. This method has been shown to have significant drawbacks. For example, users tend to pick passwords that can be easily guessed. On the other hand, if a password is hard to guess, then it is often hard to remember. To address this problem, some researchers have developed authentication methods that use pictures as passwords. In this paper, we conduct a comprehensive survey of the existing graphical password techniques. We classify these techniques into two categories: recognition-based and recall-based approaches. We discuss the strengths and limitations of each method and point out the future research directions in this area. We also try to answer two important questions: \"Are graphical passwords as secure as text-based passwords?\"; \"What are the major design and implementation issues for graphical passwords?\" This survey will be useful for information security researchers and practitioners who are interested in finding an alternative to text-based authentication methods", "title": "" }, { "docid": "46a66d6d3d4ad927deb96d8d15af6669", "text": "Security questions (or challenge questions) are commonly used to authenticate users who have lost their passwords. We examined the password retrieval mechanisms for a number of personal banking websites, and found that many of them rely in part on security questions with serious usability and security weaknesses. We discuss patterns in the security questions we observed. We argue that today's personal security questions owe their strength to the hardness of an information-retrieval problem. However, as personal information becomes ubiquitously available online, the hardness of this problem, and security provided by such questions, will likely diminish over time. We supplement our survey of bank security questions with a small user study that supplies some context for how such questions are used in practice.", "title": "" } ]
[ { "docid": "c4fa73bd2d6b06f4655eeacaddf3b3a7", "text": "In recent years, the robotic research area has become extremely prolific in terms of wearable active exoskeletons for human body motion assistance, with the presentation of many novel devices, for upper limbs, lower limbs, and the hand. The hand shows a complex morphology, a high intersubject variability, and offers limited space for physical interaction with a robot: as a result, hand exoskeletons usually are heavy, cumbersome, and poorly usable. This paper introduces a novel device designed on the basis of human kinematic compatibility, wearability, and portability criteria. This hand exoskeleton, briefly HX, embeds several features as underactuated joints, passive degrees of freedom ensuring adaptability and compliance toward the hand anthropometric variability, and an ad hoc design of self-alignment mechanisms to absorb human/robot joint axes misplacement, and proposes a novel mechanism for the thumb opposition. The HX kinematic design and actuation are discussed together with theoretical and experimental data validating its adaptability performances. Results suggest that HX matches the self-alignment design goal and is then suited for close human-robot interaction.", "title": "" }, { "docid": "67261cd9b1d71b57cb53766b06e157e4", "text": "Automatically recognizing rear light signals of front vehicles can significantly improve driving safety by automatic alarm and taking actions proactively to prevent rear-end collisions and accidents. Much previous research only focuses on detecting brake signals at night. In this paper, we present the design and implementation of a robust hierarchical framework for detecting taillights of vehicles and estimating alert signals (turning and braking) in the daytime. The three-layer structure of the vision-based framework can obviously reduce both false positives and false negatives of taillight detection. Comparing to other existing work addressing nighttime detection, the proposed method is capable of recognizing taillight signals under different illumination circumstances. By carrying out contrast experiments with existing state-of-the-art methods, the results show the high detection rate of the framework in different weather conditions during the daytime.", "title": "" }, { "docid": "77f3dfeba56c3731fda1870ce48e1aca", "text": "The organicist view of society is updated by incorporating concepts from cybernetics, evolutionary theory, and complex adaptive systems. Global society can be seen as an autopoietic network of self-producing components, and therefore as a living system or ‘superorganism’. Miller's living systems theory suggests a list of functional components for society's metabolism and nervous system. Powers' perceptual control theory suggests a model for a distributed control system implemented through the market mechanism. An analysis of the evolution of complex, networked systems points to the general trends of increasing efficiency, differentiation and integration. In society these trends are realized as increasing productivity, decreasing friction, increasing division of labor and outsourcing, and increasing cooperativity, transnational mergers and global institutions. This is accompanied by increasing functional autonomy of individuals and organisations and the decline of hierarchies. The increasing complexity of interactions and instability of certain processes caused by reduced friction necessitate a strengthening of society's capacity for information processing and control, i.e. 
its nervous system. This is realized by the creation of an intelligent global computer network, capable of sensing, interpreting, learning, thinking, deciding and initiating actions: the ‘global brain’. Individuals are being integrated ever more tightly into this collective intelligence. Although this image may raise worries about a totalitarian system that restricts individual initiative, the superorganism model points in the opposite direction, towards increasing freedom and diversity. The model further suggests some specific futurological predictions for the coming decades, such as the emergence of an automated distribution network, a computer immune system, and a global consensus about values and standards.", "title": "" }, { "docid": "4248d4620096f5e4520a8f2d5ace2b63", "text": "With rapid increase in internet traffic over last few years due to the use of variety of internet applications, the area of IP traffic classification becomes very significant from the point of view of various internet service providers and other governmental and private organizations. Now days, traditional IP traffic classification techniques such as port number based and payload based direct packet inspection techniques are seldom used because of use of dynamic port number instead of well-known port number in packet headers and various encryption techniques which inhibit inspection of packet payload. Current trends are use of machine learning (ML) techniques for this classification. In this research paper, real time internet traffic dataset has been developed using packet capturing tool and then using attribute selection algorithms, a reduced feature dataset has been developed. After that, five ML algorithms MLP, RBF, C4.5, Bayes Net and Naïve Bayes are used for IP traffic classification with these datasets. This experimental analysis shows that Bayes Net and C4.5 are effective ML techniques for IP traffic classification with accuracy in the range of 94 %.", "title": "" }, { "docid": "c89d41581dfbb30a12a9d1b7f189d6d8", "text": "Relational phrases (e.g., “got married to”) and their hypernyms (e.g., “is a relative of”) are central for many tasks including question answering, open information extraction, paraphrasing, and entailment detection. This has motivated the development of several linguistic resources (e.g. DIRT, PATTY, and WiseNet) which systematically collect and organize relational phrases. These resources have demonstrable practical benefits, but are each limited due to noise, sparsity, or size. We present a new general-purpose method, RELLY, for constructing a large hypernymy graph of relational phrases with high-quality subsumptions using collective probabilistic programming techniques. Our graph induction approach integrates small high-precision knowledge bases together with large automatically curated resources, and reasons collectively to combine these resources into a consistent graph. Using RELLY, we construct a high-coverage, high-precision hypernymy graph consisting of 20K relational phrases and 35K hypernymy links. Our evaluation indicates a hypernymy link precision of 78%, and demonstrates the value of this resource for a document-relevance ranking task.", "title": "" }, { "docid": "bb853c369f37d2d960d6b312f80cfa98", "text": "The purpose of this platform is to support research and education goals in human-robot interaction and mobile manipulation with applications that require the integration of these abilities. 
In particular, our research aims to develop personal robots that work with people as capable teammates to assist in eldercare, healthcare, domestic chores, and other physical tasks that require robots to serve as competent members of human-robot teams. The robot’s small, agile design is particularly well suited to human-robot interaction and coordination in human living spaces. Our collaborators include the Laboratory for Perceptual Robotics at the University of Massachusetts at Amherst, Xitome Design, Meka Robotics, and digitROBOTICS.", "title": "" }, { "docid": "abe0205896b0edb31e1a527456b33184", "text": "MouseLight is a spatially-aware standalone mobile projector with the form factor of a mouse that can be used in combination with digital pens on paper. By interacting with the projector and the pen bimanually, users can visualize and modify the virtually augmented contents on top of the paper, and seamlessly transition between virtual and physical information. We present a high fidelity hardware prototype of the system and demonstrate a set of novel interactions specifically tailored to the unique properties of MouseLight. MouseLight differentiates itself from related systems such as PenLight in two aspects. First, MouseLight presents a rich set of bimanual interactions inspired by the ToolGlass interaction metaphor, but applied to physical paper. Secondly, our system explores novel displaced interactions, that take advantage of the independent input and output that is spatially aware of the underneath paper. These properties enable users to issue remote commands such as copy and paste or search. We also report on a preliminary evaluation of the system which produced encouraging observations and feedback.", "title": "" }, { "docid": "fef4383a5a06687636ba4001ab0e510c", "text": "In this paper, a depth camera-based novel approach for human activity recognition is presented using robust depth silhouettes context features and advanced Hidden Markov Models (HMMs). During HAR framework, at first, depth maps are processed to identify human silhouettes from noisy background by considering frame differentiation constraints of human body motion and compute depth silhouette area for each activity to track human movements in a scene. From the depth silhouettes context features, temporal frames information are computed for intensity differentiation measurements, depth history features are used to store gradient orientation change in overall activity sequence and motion difference features are extracted for regional motion identification. Then, these features are processed by Principal component analysis for dimension reduction and kmean clustering for code generation to make better activity representation. Finally, we proposed a new way to model, train and recognize different activities using advanced HMM. Each activity has been chosen with the highest likelihood value. Experimental results show superior recognition rate, resulting up to the mean recognition of 57.69% over the state of the art methods for fifteen daily routine activities using IM-Daily Depth Activity dataset. In addition, MSRAction3D dataset also showed some promising results.", "title": "" }, { "docid": "cbbd8c44de7e060779ed60c6edc31e3c", "text": "This letter presents a compact broadband microstrip-line-fed sleeve monopole antenna for application in the DTV system. The design of meandering the monopole into a compact structure is applied for size reduction. 
By properly selecting the length and spacing of the sleeve, the broadband operation for the proposed design can be achieved, and the obtained impedance bandwidth covers the whole DTV (470–862 MHz) band. Most importantly, the matching condition over a wide frequency range can be performed well even when a small ground-plane length is used; meanwhile, a small variation in the impedance bandwidth is observed for the ground-plane length varied in a great range.", "title": "" }, { "docid": "a67df1737ca4e5cb41fe09ccb57c0e88", "text": "Generation of electricity from solar energy has gained worldwide acceptance due to its abundant availability and eco-friendly nature. Even though the power generated from solar looks to be attractive; its availability is subjected to variation owing to many factors such as change in irradiation, temperature, shadow etc. Hence, extraction of maximum power from solar PV using Maximum Power Point Tracking (MPPT) method was the subject of study in the recent past. Among many methods proposed, Hill Climbing and Incremental Conductance MPPT methods were popular in reaching Maximum Power under constant irradiation. However, these methods show large steady state oscillations around MPP and poor dynamic performance when subjected to change in environmental conditions. On the other hand, bioinspired algorithms showed excellent characteristics when dealing with non-linear, non-differentiable and stochastic optimization problems without involving excessive mathematical computations. Hence, in this paper an attempt is made by applying modifications to Particle Swarm Optimization technique, with emphasis on initial value selection, for Maximum Power Point Tracking. The key features of this method include ability to track the global peak power accurately under change in environmental condition with almost zero steady state oscillations, faster dynamic response and easy implementation. Systematic evaluation has been carried out for different partial shading conditions and finally the results obtained are compared with existing methods. In addition, simulations results are validated via built-in hardware prototype. © 2015 Published by Elsevier B.V. 1. Introduction Ever growing energy demand by mankind and the limited availability of resources remain as a major challenge to the power sector industry. The need for renewable energy resources has been augmented in large scale and aroused due to its huge availability and pollution free operation. Among the various renewable energy resources, solar energy has gained worldwide recognition because of its minimal maintenance, zero noise and reliability. Because of the aforementioned advantages; solar energy have been widely used for various applications, but not limited to, such as megawatt scale power plants, water pumping, solar home systems, communication satellites, space vehicles and reverse osmosis plants [1]. However, power generation using solar energy still remain uncertain, despite of all the efforts, due to various factors such as poor
conversion efficiency, high installation cost and reduced power output under varying environmental conditions. Further, the characteristics of solar PV are non-linear in nature imposing constraints on solar power generation. Therefore, to maximize the power output from solar PV and to enhance the operating efficiency of the solar photovoltaic system, Maximum Power Point Tracking (MPPT) algorithms are essential [2]. Various MPPT algorithms [3–5] have been investigated and reported in the literature and the most popular ones are Fractional Open Circuit Voltage [6–8], Fractional Short Circuit Current [9–11], Perturb and Observe (P&O) [12–17], Incremental Conductance (Inc. Cond.) [18–22], and Hill Climbing (HC) algorithm [23–26]. In fractional open circuit voltage, and fractional short circuit current method; its performance depends on an approximate linear correlation between Vmpp, Voc and Impp, Isc values. However, the above relation is not practically valid; hence, exact value of Maximum Power Point (MPP) cannot be assured. Perturb and Observe (P&O) method works with the voltage perturbation based on present and previous operating power values. Regardless of its simple structure, its efficiency principally depends on the tradeoff between the tracking speed and the steady state oscillations in the region of MPP [15]. Incremental Conductance (Inc. Cond.) algorithm works on the principle of comparing ratios of Incremental Conductance with instantaneous conductance and it has the similar disadvantage as that of P&O method [20,21]. HC method works alike P&O but it is based on the perturbation of duty cycle of power converter. All these traditional methods have the following disadvantages in common; reduced efficiency and steady state oscillations around MPP. 
Realizing the above stated drawbacks; various researchers have worked on applying certain Artificial Intelligence (AI) techniques like Neural Network (NN) [27,28] and Fuzzy Logic Control (FLC) [29,30]. However, these techniques require periodic training, enormous volume of data for training, computational complexity and large memory capacity. Application of aforementioned MPPT methods for centralized/string PV system is limited as they fail to track the global peak power under partial shading conditions. In addition, multiple peaks occur in P-V curve under partial shading condition in which the unique peak point i.e., global power peak should be attained. However, when conventional MPPT techniques are used under such conditions, they usually get trapped in any one of the local power peaks; drastically lowering the search efficiency. Hence, to improve MPP tracking efficiency of conventional methods under PS conditions certain modifications have been proposed in Ref. [31]. Some used two stage approach to track the MPP [32]. In the first stage, a wide search is performed which ensures that the operating point is moved closer to the global peak which is further fine-tuned in the second stage to reach the global peak value. Even though tracking efficiency has improved the method still fails to find the global maximum under all conditions. Another interesting approach is improving the Fibonacci search method for global MPP tracking [33]. Alike two stage method, this one also suffers from the same drawback that it does not guarantee accurate MPP tracking under all shaded conditions [34]. Yet another unique formulation combining DIRECT search method with P&O was put forward for global MPP tracking in Ref. [35]. Even though it is rendered effective, it is very complex and increases the computational burden. In the recent past, bio-inspired algorithms like GA, PSO and ACO have drawn considerable researcher’s attention for MPPT application; since they ensure sufficient class of accuracy while dealing with non-linear, non-differentiable and stochastic optimization problems without involving excessive mathematical computations [32,36–38]. Further, these methods offer various advantages such as computational simplicity, easy implementation and faster response. Among those methods, PSO method is largely discussed and widely used for solar MPPT due to the fact that it has simple structure, system independency, high adaptability and lesser number of tuning parameters. Further in PSO method, particles are allowed to move in random directions and the best values are evolved based on pbest and gbest values. This exploration process is very suitable for MPPT application. To improve the search efficiency of the conventional PSO method authors have proposed modifications to the existing algorithm. In Ref. [39], the authors have put forward an additional perception capability for the particles in search space so that best solutions are evolved with higher accuracy than PSO. However, details on implementation under partial shading condition are not discussed. Further, this method is only applicable when the entire module receive uniform insolation cannot be considered. Traditional PSO method is modified in Ref. [40] by introducing equations for velocity update and inertia. 
Even though the method showed better performance, use of extra coefficients in the conventional PSO search limits its advantage and increases the computational burden of the algorithm. Another approach", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "244360e0815243d6a04d64a974da1b89", "text": "The life-history of Haplorchoides mehrai Pande & Shukla, 1976 is elucidated. The cercariae occurred in the thiarid snail Melanoides tuberculatus (Muller) collected from Chilka Lake, Orissa State. Metacercariae were found beneath the scales of Puntius sophore (Hamilton). Several species of catfishes in the lake served as definitive hosts. All stages in the life-cycle were successfully established under experimental conditions in the laboratory. The cercariae are of opisthorchioid type with a large globular and highly granular excretory bladder and seven pairs of pre-vesicular penetration glands. The adult flukes are redescribed to include details of the ventro-genital complex. Only three Indian species of the genus, i.e. H. attenuatus (Srivastava, 1935), H. pearsoni Pande & Shukla, 1976 and H. mehrai Pande & Shukla, 1976, are considered valid, and the remaining Indian species of the genus are considered as species inquirendae. The generic diagnosis of Haplorchoides is amended and the genus is included in the subfamily Haplorchiinae and the family Heterophyidae.", "title": "" }, { "docid": "1d632c181e89e7d019595f2757f7ee66", "text": "This study investigated the process by which employee perceptions of the organizational environment are related to job involvement, effort, and performance. The researchers developed an operational definition of psychological climate that was based on how employees perceive aspects of the organizational environment and interpret them in relation to their own well-being. Perceived psychological climate was then related to job involvement, effort, and performance in a path-analytic framework. Results showed that perceptions of a motivating and involving psychological climate were related to job involvement, which in turn was related to effort. Effort was also related to work performance. Results revealed that a modest but statistically significant effect of job involvement on performance became nonsignificant when effort was inserted into the model, indicating the mediating effect of effort on the relationship. The results cross-validated well across 2 samples of outside salespeople, indicating that relationships are generalizable across these different sales contexts.", "title": "" }, { "docid": "9140faa8bd908e5c8d0d9b326f07e231", "text": "The purpose of this paper is to provide a preliminary report on the first broad-based experimental comparison of modern heuristics for the asymmetric traveling salesman problem (ATSP). 
There are currently three general classes of such heuristics: classical tour construction heuristics such as Nearest Neighbor and the Greedy algorithm, local search algorithms based on re-arranging segments of the tour, as exemplified by the Kanellakis-Papadimitriou algorithm [KP80], and algorithms based on patching together the cycles in a minimum cycle cover, the best of which are variants on an algorithm proposed by Zhang [Zha93]. We test implementations of the main contenders from each class on a variety of instance types, introducing a variety of new random instance generators modeled on real-world applications of the ATSP. Among the many tentative conclusions we reach is that no single algorithm is dominant over all instance classes, although for each class the best tours are found either by Zhang's algorithm or an iterated variant on Kanellakis-Papadimitriou.", "title": "" }, { "docid": "4f58172c8101b67b9cd544b25d09f2e2", "text": "For years, researchers in face recognition area have been representing and recognizing faces based on subspace discriminant analysis or statistical learning. Nevertheless, these approaches are always suffering from the generalizability problem. This paper proposes a novel non-statistics based face representation approach, local Gabor binary pattern histogram sequence (LGBPHS), in which training procedure is unnecessary to construct the face model, so that the generalizability problem is naturally avoided. In this approach, a face image is modeled as a \"histogram sequence\" by concatenating the histograms of all the local regions of all the local Gabor magnitude binary pattern maps. For recognition, histogram intersection is used to measure the similarity of different LGBPHSs and the nearest neighborhood is exploited for final classification. Additionally, we have further proposed to assign different weights for each histogram piece when measuring two LGBPHSes. Our experimental results on AR and FERET face database show the validity of the proposed approach especially for partially occluded face images, and more impressively, we have achieved the best result on FERET face database.", "title": "" }, { "docid": "414871ff942d8be9dbb18e0da05455ad", "text": "We propose a detection and segmentation algorithm for the purposes of fine-grained recognition. The algorithm first detects low-level regions that could potentially belong to the object and then performs a full-object segmentation through propagation. Apart from segmenting the object, we can also 'zoom in' on the object, i.e. center it, normalize it for scale, and thus discount the effects of the background. We then show that combining this with a state-of-the-art classification algorithm leads to significant improvements in performance especially for datasets which are considered particularly hard for recognition, e.g. birds species. The proposed algorithm is much more efficient than other known methods in similar scenarios. Our method is also simpler and we apply it here to different classes of objects, e.g. birds, flowers, cats and dogs. We tested the algorithm on a number of benchmark datasets for fine-grained categorization. It outperforms all the known state-of-the-art methods on these datasets, sometimes by as much as 11%. It improves the performance of our baseline algorithm by 3-4%, consistently on all datasets. 
We also observed more than a 4% improvement in the recognition performance on a challenging large-scale flower dataset, containing 578 species of flowers and 250,000 images.", "title": "" }, { "docid": "458470e18ce2ab134841f76440cfdc2b", "text": "Dependency trees help relation extraction models capture long-range relations between words. However, existing dependency-based models either neglect crucial information (e.g., negation) by pruning the dependency trees too aggressively, or are computationally inefficient because it is difficult to parallelize over different tree structures. We propose an extension of graph convolutional networks that is tailored for relation extraction, which pools information over arbitrary dependency structures efficiently in parallel. To incorporate relevant information while maximally removing irrelevant content, we further apply a novel pruning strategy to the input trees by keeping words immediately around the shortest path between the two entities among which a relation might hold. The resulting model achieves state-of-the-art performance on the large-scale TACRED dataset, outperforming existing sequence and dependency-based neural models. We also show through detailed analysis that this model has complementary strengths to sequence models, and combining them further improves the state of the art.", "title": "" }, { "docid": "6d766690805f74495c5b29b889320908", "text": "With cloud storage services, it is commonplace for data to be not only stored in the cloud, but also shared across multiple users. However, public auditing for such shared data - while preserving identity privacy - remains to be an open challenge. In this paper, we propose the first privacy-preserving mechanism that allows public auditing on shared data stored in the cloud. In particular, we exploit ring signatures to compute the verification information needed to audit the integrity of shared data. With our mechanism, the identity of the signer on each block in shared data is kept private from a third party auditor (TPA), who is still able to verify the integrity of shared data without retrieving the entire file. Our experimental results demonstrate the effectiveness and efficiency of our proposed mechanism when auditing shared data.", "title": "" }, { "docid": "6c3f320eda59626bedb2aad4e527c196", "text": "Though research on the Semantic Web has progressed at a steady pace, its promise has yet to be realized. One major difficulty is that, by its very nature, the Semantic Web is a large, uncensored system to which anyone may contribute. This raises the question of how much credence to give each source. We cannot expect each user to know the trustworthiness of each source, nor would we want to assign top-down or global credibility values due to the subjective nature of trust. We tackle this problem by employing a web of trust, in which each user provides personal trust values for a small number of other users. We compose these trusts to compute the trust a user should place in any other user in the network. A user is not assigned a single trust rank. Instead, different users may have different trust values for the same user. We define properties for combination functions which merge such trusts, and define a class of functions for which merging may be done locally while maintaining these properties. We give examples of specific functions and apply them to data from Epinions and our BibServ bibliography server. 
Experiments confirm that the methods are robust to noise, and do not put unreasonable expectations on users. We hope that these methods will help move the Semantic Web closer to fulfilling its promise.", "title": "" }, { "docid": "678d3dccdd77916d0c653d88785e1300", "text": "BACKGROUND\nFatigue is one of the common complaints of multiple sclerosis (MS) patients, and its treatment is relatively unclear. Ginseng is one of the herbal medicines possessing antifatigue properties, and its administration in MS for such a purpose has been scarcely evaluated. The purpose of this study was to evaluate the efficacy and safety of ginseng in the treatment of fatigue and the quality of life of MS patients.\n\n\nMETHODS\nEligible female MS patients were randomized in a double-blind manner, to receive 250-mg ginseng or placebo twice daily over 3 months. Outcome measures included the Modified Fatigue Impact Scale (MFIS) and the Iranian version of the Multiple Sclerosis Quality Of Life Questionnaire (MSQOL-54). The questionnaires were used after randomization, and again at the end of the study.\n\n\nRESULTS\nOf 60 patients who were enrolled in the study, 52 (86%) subjects completed the trial with good drug tolerance. Statistical analysis showed better effects for ginseng than the placebo as regards MFIS (p = 0.046) and MSQOL (p ≤ 0.0001) after 3 months. No serious adverse events were observed during follow-up.\n\n\nCONCLUSIONS\nThis study indicates that 3-month ginseng treatment can reduce fatigue and has a significant positive effect on quality of life. Ginseng is probably a good candidate for the relief of MS-related fatigue. Further studies are needed to shed light on the efficacy of ginseng in this field.", "title": "" } ]
scidocsrr
ef3f08e17f6ba2cfc17956b583032cf6
Augmented reality in the smart factory: Supporting workers in an industry 4.0. environment
[ { "docid": "d8bf55d8a2aaa1061310f3d976a87c57", "text": "characterized by four distinguishable interface styles, each lasting for many years and optimized to the hardware available at the time. In the first period, the early 1950s and 1960s, computers were used in batch mode with punched-card input and line-printer output; there were essentially no user interfaces because there were no interactive users (although some of us were privileged to be able to do console debugging using switches and lights as our “user interface”). The second period in the evolution of interfaces (early 1960s through early 1980s) was the era of timesharing on mainframes and minicomputers using mechanical or “glass” teletypes (alphanumeric displays), when for the first time users could interact with the computer by typing in commands with parameters. Note that this era persisted even through the age of personal microcomputers with such operating systems as DOS and Unix with their command line shells. During the 1970s, timesharing and manual command lines remained deeply entrenched, but at Xerox PARC the third age of user interfaces dawned. Raster graphics-based networked workstations and “point-and-click” WIMP GUIs (graphical user interfaces based on windows, icons, menus, and a pointing device, typically a mouse) are the legacy of Xerox PARC that we’re still using today. WIMP GUIs were popularized by the Macintosh in 1984 and later copied by Windows on the PC and Motif on Unix workstations. Applications today have much the same look and feel as the early desktop applications (except for the increased “realism” achieved through the use of drop shadows for buttons and other UI widgets); the main advance lies in the shift from monochrome displays to color and in a large set of software-engineering tools for building WIMP interfaces. I find it rather surprising that the third generation of WIMP user interfaces has been so dominant for more than two decades; they are apparently sufficiently good for conventional desktop tasks that the field is stuck comfortably in a rut. I argue in this essay that the status quo does not suffice—that the newer forms of computing and computing devices available today necessitate new thinking.", "title": "" } ]
[ { "docid": "d3f97e0de15ab18296e161e287890e18", "text": "Nosocomial or hospital acquired infections threaten the survival and neurodevelopmental outcomes of infants admitted to the neonatal intensive care unit, and increase cost of care. Premature infants are particularly vulnerable since they often undergo invasive procedures and are dependent on central catheters to deliver nutrition and on ventilators for respiratory support. Prevention of nosocomial infection is a critical patient safety imperative, and invariably requires a multidisciplinary approach. There are no short cuts. Hand hygiene before and after patient contact is the most important measure, and yet, compliance with this simple measure can be unsatisfactory. Alcohol based hand sanitizer is effective against many microorganisms and is efficient, compared to plain or antiseptic containing soaps. The use of maternal breast milk is another inexpensive and simple measure to reduce infection rates. Efforts to replicate the anti-infectious properties of maternal breast milk by the use of probiotics, prebiotics, and synbiotics have met with variable success, and there are ongoing trials of lactoferrin, an iron binding whey protein present in large quantities in colostrum. Attempts to boost the immunoglobulin levels of preterm infants with exogenous immunoglobulins have not been shown to reduce nosocomial infections significantly. Over the last decade, improvements in the incidence of catheter-related infections have been achieved, with meticulous attention to every detail from insertion to maintenance, with some centers reporting zero rates for such infections. Other nosocomial infections like ventilator acquired pneumonia and staphylococcus aureus infection remain problematic, and outbreaks with multidrug resistant organisms continue to have disastrous consequences. Management of infections is based on the profile of microorganisms in the neonatal unit and community and targeted therapy is required to control the disease without leading to the development of more resistant strains.", "title": "" }, { "docid": "2dc24d2ecaf2494543128f5e9e5f4864", "text": "Design of a multiphase hybrid permanent magnet (HPM) generator for series hybrid electric vehicle (SHEV) application is presented in this paper. The proposed hybrid excitation topology together with an integral passive rectifier replaces the permanent magnet (PM) machine and active power electronics converter in hybrid/electric vehicles, facilitating the control over constant PM flux-linkage. The HPM topology includes two rotor elements: a PM and a wound field (WF) rotor with a 30% split ratio, coupled on the same shaft in one machine housing. Both rotors share a nine-phase stator that results in higher output voltage and power density when compared to three-phase design. The HPM generator design is based on a 3-kW benchmark PM machine to ensure the feasibility and validity of design tools and procedures. The WF rotor is designed to realize the same pole shape and number as in the PM section and to obtain the same flux-density in the air-gap while minimizing the WF input energy. Having designed and analyzed the machine using equivalent magnetic circuit and finite element analysis, a laboratory prototype HPM generator is built and tested with the measurements compared to predicted results confirming the designed characteristics and machine performance. 
The paper also presents comprehensive machine loss and mass audits.", "title": "" }, { "docid": "55f95c7b59f17fb210ebae97dbd96d72", "text": "Clustering is a widely studied data mining problem in the text domains. The problem finds numerous applications in customer segmentation, classification, collaborative filtering, visualization, document organization, and indexing. In this chapter, we will provide a detailed survey of the problem of text clustering. We will study the key challenges of the clustering problem, as it applies to the text domain. We will discuss the key methods used for text clustering, and their relative advantages. We will also discuss a number of recent advances in the area in the context of social network and linked data.", "title": "" }, { "docid": "afdc57b5d573e2c99c73deeef3c2fd5f", "text": "The purpose of this article is to consider oral reading fluency as an indicator of overall reading competence. We begin by examining theoretical arguments for supposing that oral reading fluency may reflect overall reading competence. We then summarize several studies substantiating this phenomenon. Next, we provide an historical analysis of the extent to which oral reading fluency has been incorporated into measurement approaches during the past century. We conclude with recommendations about the assessment of oral reading fluency for research and practice.", "title": "" }, { "docid": "d7e794a106f29f5ebe917c2e7b6007eb", "text": "In this paper, several recent theoretical conceptions of technology-mediated education are examined and a study of 2159 online learners is presented. The study validates an instrument designed to measure teaching, social, and cognitive presence indicative of a community of learners within the community of inquiry (CoI) framework [Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2, 1–19; Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education, 15(1), 7–23]. Results indicate that the survey items cohere into interpretable factors that represent the intended constructs. Further it was determined through structural equation modeling that 70% of the variance in the online students’ levels of cognitive presence, a multivariate measure of learning, can be modeled based on their reports of their instructors’ skills in fostering teaching presence and their own abilities to establish a sense of social presence. Additional analysis identifies more details of the relationship between learner understandings of teaching and social presence and its impact on their cognitive presence. Implications for online teaching, policy, and faculty development are discussed. © 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "8392c5faf4c837fd06b6f50d110b6e84", "text": "The pool of knowledge available to mankind depends on the source of learning resources, which can vary from ancient printed documents to present electronic material. The rapid conversion of material available in traditional libraries to digital form needs a significant amount of work if we are to maintain the format and the look of the electronic documents the same as their printed counterparts.
Most of the printed documents contain not only characters and their formatting but also some associated non-text objects such as tables, charts and graphical objects. It is challenging to detect them and to concentrate on the format preservation of the contents while reproducing them. To address this issue, we propose an algorithm using local thresholds for word space and line height to locate and extract all categories of tables from scanned document images. From the experiments performed on 298 documents, we conclude that our algorithm has an overall accuracy of about 75% in detecting tables from the scanned document images. Since the algorithm does not completely depend on rule lines, it can detect all categories of tables in a range of scanned documents with different font types, styles and sizes to extract their formatting features. Moreover, the algorithm can be applied to locate tables in multi-column layouts with small modification in layout analysis. Treating tables with their existing formatting features will tremendously help the reproduction of printed documents for reprinting and updating purposes.", "title": "" }, { "docid": "48a75e28154d630da14fd3dba09d0af8", "text": "Over the years, artificial intelligence (AI) is spreading its roots in different areas by utilizing the concept of making the computers learn and handle complex tasks that previously required substantial laborious tasks by human beings. With better accuracy and speed, AI is helping lawyers to streamline work processing. New legal AI software tools like Catalyst, Ross intelligence, and Matlab along with natural language processing provide effective quarrel resolution, better legal clearness, and superior admittance to justice and fresh challenges to conventional law firms providing legal services using leveraged cohort correlate model. This paper discusses current applications of legal AI and suggests deep learning and machine learning techniques that can be applied in future to simplify the cumbersome legal tasks.", "title": "" }, { "docid": "64c156ee4171b5b84fd4eedb1d922f55", "text": "We introduce a large computational subcategorization lexicon which includes subcategorization frame (SCF) and frequency information for 6,397 English verbs. This extensive lexicon was acquired automatically from five corpora and the Web using the current version of the comprehensive subcategorization acquisition system of Briscoe and Carroll (1997). The lexicon is provided freely for research use, along with a script which can be used to filter and build sub-lexicons suited for different natural language processing (NLP) purposes. Documentation is also provided which explains each sub-lexicon option and evaluates its accuracy.", "title": "" }, { "docid": "fca63f719115e863f5245f15f6b1be50", "text": "Model-based testing (MBT) in hardware-in-the-loop (HIL) platform is a simulation and testing environment for embedded systems, in which test design automation provided by MBT is combined with HIL methodology. A HIL platform is a testing environment in which the embedded system under testing (SUT) is assumed to be operating with real-world inputs and outputs. In this paper, we focus on presenting the novel methodologies and tools that were used to conduct the validation of the MBT in HIL platform. Another novelty of the validation approach is that it aims to provide a comprehensive and many-sided process view to validating MBT and HIL related systems including different component, integration and system level testing activities.
The research is based on the constructive method of the related scientific literature and testing technologies, and the results are derived through testing and validating the implemented MBT in HIL platform. The used testing process indicated that the functionality of the constructed MBT in HIL prototype platform was validated.", "title": "" }, { "docid": "5d13c7c50cb43de80df7b6f02c866dab", "text": "Deep neural networks (DNNs) are vulnerable to adversarial examples, even in the black-box case, where the attacker is limited to solely query access. Existing black-box approaches to generating adversarial examples typically require a significant number of queries, either for training a substitute network or estimating gradients from the output scores. We introduce GenAttack, a gradient-free optimization technique which uses genetic algorithms for synthesizing adversarial examples in the black-box setting. Our experiments on the MNIST, CIFAR-10, and ImageNet datasets show that GenAttack can successfully generate visually imperceptible adversarial examples against state-of-the-art image recognition models with orders of magnitude fewer queries than existing approaches. For example, in our CIFAR-10 experiments, GenAttack required roughly 2,568 times less queries than the current state-of-the-art black-box attack. Furthermore, we show that GenAttack can successfully attack both the state-of-the-art ImageNet defense, ensemble adversarial training, and non-differentiable, randomized input transformation defenses. GenAttack’s success against ensemble adversarial training demonstrates that its query efficiency enables it to exploit the defense’s weakness to direct black-box attacks. GenAttack’s success against non-differentiable input transformations indicates that its gradient-free nature enables it to be applicable against defenses which perform gradient masking/obfuscation to confuse the attacker. Our results suggest that evolutionary algorithms open up a promising area of research into effective gradient-free black-box attacks.", "title": "" }, { "docid": "01a70ee73571e848575ed992c1a3a578", "text": "BACKGROUND\nNursing turnover is a major issue for health care managers, notably during the global nursing workforce shortage. Despite the often hierarchical structure of the data used in nursing studies, few studies have investigated the impact of the work environment on intention to leave using multilevel techniques. Also, differences between intentions to leave the current workplace or to leave the profession entirely have rarely been studied.\n\n\nOBJECTIVE\nThe aim of the current study was to investigate how aspects of the nurse practice environment and satisfaction with work schedule flexibility measured at different organisational levels influenced the intention to leave the profession or the workplace due to dissatisfaction.\n\n\nDESIGN\nMultilevel models were fitted using survey data from the RN4CAST project, which has a multi-country, multilevel, cross-sectional design. The data analysed here are based on a sample of 23,076 registered nurses from 2020 units in 384 hospitals in 10 European countries (overall response rate: 59.4%). Four levels were available for analyses: country, hospital, unit, and individual registered nurse. Practice environment and satisfaction with schedule flexibility were aggregated and studied at the unit level. Gender, experience as registered nurse, full vs. 
part-time work, as well as individual deviance from unit mean in practice environment and satisfaction with work schedule flexibility, were included at the individual level. Both intention to leave the profession and the hospital due to dissatisfaction were studied.\n\n\nRESULTS\nRegarding intention to leave current workplace, there is variability at both country (6.9%) and unit (6.9%) level. However, for intention to leave the profession we found less variability at the country (4.6%) and unit level (3.9%). Intention to leave the workplace was strongly related to unit level variables. Additionally, individual characteristics and deviance from unit mean regarding practice environment and satisfaction with schedule flexibility were related to both outcomes. Major limitations of the study are its cross-sectional design and the fact that only turnover intention due to dissatisfaction was studied.\n\n\nCONCLUSIONS\nWe conclude that measures aiming to improve the practice environment and schedule flexibility would be a promising approach towards increased retention of registered nurses in both their current workplaces and the nursing profession as a whole and thus a way to counteract the nursing shortage across European countries.", "title": "" }, { "docid": "776cba62170ee8936629aabca314fd46", "text": "While the Global Positioning System (GPS) tends to be not useful anymore in terms of precise localization once one gets into a building, Low Energy beacons might come in handy instead. Navigating free of signal reception problems throughout a building when one has never visited that place before is a challenge tackled with indoors localization. Using Bluetooth Low Energy1 (BLE) beacons (either iBeacon or Eddystone formats) is the medium to accomplish that. Indeed, different purpose oriented applications can be designed, developed and shaped towards the needs of any person in the context of a certain building. This work presents a series of post-processing filters to enhance the outcome of the estimated position applying trilateration as the main and straightforward technique to locate someone within a building. A later evaluation tries to give enough evidence around the feasibility of this indoor localization technique. A mobile app should be everything a user would need to have within a building in order to navigate inside.", "title": "" }, { "docid": "b89099e9b01a83368a1ebdb2f4394eba", "text": "Orangutans (Pongo pygmaeus and Pongo abelii) are semisolitary apes and, among the great apes, the most distantly related to humans. Raters assessed 152 orangutans on 48 personality descriptors; 140 of these orangutans were also rated on a subjective well-being questionnaire. Principal-components analysis yielded 5 reliable personality factors: Extraversion, Dominance, Neuroticism, Agreeableness, and Intellect. The authors found no factor analogous to human Conscientiousness. Among the orangutans rated on all 48 personality descriptors and the subjective well-being questionnaire, Extraversion, Agreeableness, and low Neuroticism were related to subjective well-being. These findings suggest that analogues of human, chimpanzee, and orangutan personality domains existed in a common ape ancestor.", "title": "" }, { "docid": "330129cb283fac3dc4df9f0c36b1de48", "text": "Hydrokinetic turbines convert kinetic energy of moving river or tide water into electrical energy. In this work, design considerations of river current turbines are discussed with emphasis on straight bladed Darrieus rotors. 
Fluid dynamic analysis is carried out to predict the performance of the rotor. Discussions on a broad range of physical and operational conditions that may impact the design scenario are also presented. In addition, a systematic design procedure along with supporting information that would aid various decision making steps are outlined and illustrated by a design example. Finally, the scope for further work is highlighted", "title": "" }, { "docid": "b83e537a2c8dcd24b096005ef0cb3897", "text": "We present Deep Speaker, a neural speaker embedding system that maps utterances to a hypersphere where speaker similarity is measured by cosine similarity. The embeddings generated by Deep Speaker can be used for many tasks, including speaker identification, verification, and clustering. We experiment with ResCNN and GRU architectures to extract the acoustic features, then mean pool to produce utterance-level speaker embeddings, and train using triplet loss based on cosine similarity. Experiments on three distinct datasets suggest that Deep Speaker outperforms a DNN-based i-vector baseline. For example, Deep Speaker reduces the verification equal error rate by 50% (relatively) and improves the identification accuracy by 60% (relatively) on a text-independent dataset. We also present results that suggest adapting from a model trained with Mandarin can improve accuracy for English speaker recognition.", "title": "" }, { "docid": "76e7f63fa41d6d457e6e4386ad7b9896", "text": "A growing body of work has highlighted the challenges of identifying the stance that a speaker holds towards a particular topic, a task that involves identifying a holistic subjective disposition. We examine stance classification on a corpus of 4873 posts from the debate website ConvinceMe.net, for 14 topics ranging from the playful to the ideological. We show that ideological debates feature a greater share of rebuttal posts, and that rebuttal posts are significantly harder to classify for stance, for both humans and trained classifiers. We also demonstrate that the number of subjective expressions varies across debates, a fact correlated with the performance of systems sensitive to sentiment-bearing terms. We present results for classifying stance on a per topic basis that range from 60% to 75%, as compared to unigram baselines that vary between 47% and 66%. Our results suggest that features and methods that take into account the dialogic context of such posts improve accuracy.", "title": "" }, { "docid": "0c886080015642aa5b7c103adcd2a81d", "text": "The problem of gauging information credibility on social networks has received considerable attention in recent years. Most previous work has chosen Twitter, the world's largest micro-blogging platform, as the premise of research. In this work, we shift the premise and study the problem of information credibility on Sina Weibo, China's leading micro-blogging service provider. With eight times more users than Twitter, Sina Weibo is more of a Facebook-Twitter hybrid than a pure Twitter clone, and exhibits several important characteristics that distinguish it from Twitter. We collect an extensive set of microblogs which have been confirmed to be false rumors based on information from the official rumor-busting service provided by Sina Weibo. Unlike previous studies on Twitter where the labeling of rumors is done manually by the participants of the experiments, the official nature of this service ensures the high quality of the dataset. 
We then examine an extensive set of features that can be extracted from the microblogs, and train a classifier to automatically detect the rumors from a mixed set of true information and false information. The experiments show that some of the new features we propose are indeed effective in the classification, and even the features considered in previous studies have different implications with Sina Weibo than with Twitter. To the best of our knowledge, this is the first study on rumor analysis and detection on Sina Weibo.", "title": "" }, { "docid": "95db9ce9faaf13e8ff8d5888a6737683", "text": "Measurements of pH, acidity, and alkalinity are commonly used to describe water quality. The three variables are interrelated and can sometimes be confused. The pH of water is an intensity factor, while the acidity and alkalinity of water are capacity factors. More precisely, acidity and alkalinity are defined as a water’s capacity to neutralize strong bases or acids, respectively. The term “acidic” for pH values below 7 does not imply that the water has no alkalinity; likewise, the term “alkaline” for pH values above 7 does not imply that the water has no acidity. Water with a pH value between 4.5 and 8.3 has both total acidity and total alkalinity. The definition of pH, which is based on logarithmic transformation of the hydrogen ion concentration ([H+]), has caused considerable disagreement regarding the appropriate method of describing average pH. The opinion that pH values must be transformed to [H+] values before averaging appears to be based on the concept of mixing solutions of different pH. In practice, however, the averaging of [H+] values will not provide the correct average pH because buffers present in natural waters have a greater effect on final pH than does dilution alone. For nearly all uses of pH in fisheries and aquaculture, pH values may be averaged directly. When pH data sets are transformed to [H+] to estimate average pH, extreme pH values will distort the average pH. Values of pH conform more closely to a normal distribution than do values of [H+], making the pH values more acceptable for use in statistical analysis. Moreover, electrochemical measurements of pH and many biological responses to [H+] are described by the Nernst equation, which states that the measured or observed response is linearly related to 10-fold changes in [H+]. Based on these considerations, pH rather than [H+] is usually the most appropriate variable for use in statistical analysis. *Corresponding author: boydce1@auburn.edu Received November 2, 2010; accepted February 7, 2011 Published online September 27, 2011 Temperature, salinity, hardness, pH, acidity, and alkalinity are fundamental variables that define the quality of water. Although all six variables have precise, unambiguous definitions, the last three variables are often misinterpreted in aquaculture and fisheries studies. In this paper, we explain the concepts of pH, acidity, and alkalinity, and we discuss practical relationships among those variables. We also discuss the concept of pH averaging as an expression of the central tendency of pH measurements. The concept of pH averaging is poorly understood, if not controversial, because many believe that pH values, which are log-transformed numbers, cannot be averaged directly. We argue that direct averaging of pH values is the simplest and most logical approach for most uses and that direct averaging is based on sound practical and statistical principles. 
THE pH CONCEPT The pH is an index of the hydrogen ion concentration ([H+]) in water. The [H+] affects most chemical and biological processes; thus, pH is an important variable in water quality endeavors. Water temperature probably is the only water quality variable that is measured more commonly than pH. The pH concept has its basis in the ionization of water:", "title": "" }, { "docid": "f8cd8b54218350fa18d4d59ca0a58a05", "text": "This study provides conceptual and empirical arguments why an assessment of applicants' procedural knowledge about interpersonal behavior via a video-based situational judgment test might be valid for academic and postacademic success criteria. Four cohorts of medical students (N = 723) were followed from admission to employment. Procedural knowledge about interpersonal behavior at the time of admission was valid for both internship performance (7 years later) and job performance (9 years later) and showed incremental validity over cognitive factors. Mediation analyses supported the conceptual link between procedural knowledge about interpersonal behavior, translating that knowledge into actual interpersonal behavior in internships, and showing that behavior on the job. Implications for theory and practice are discussed.", "title": "" }, { "docid": "019d5deed0ed1e5b50097d5dc9121cb6", "text": "Within interactive narrative research, agency is largely considered in terms of a player's autonomy in a game, defined as theoretical agency. Rather than in terms of whether or not the player feels they have agency, their perceived agency. An effective interactive narrative needs to provide a player a level of agency that satisfies their desires and must do that without compromising its own structure. Researchers frequently turn to techniques for increasing theoretical agency to accomplish this. This paper proposes an approach to categorize and explore techniques in which a player's level of perceived agency is affected without requiring more or less theoretical agency.", "title": "" } ]
scidocsrr
a590e37d84d0ca3bf95f6e43784730bc
A Survey of Modern Questions and Challenges in Feature Extraction
[ { "docid": "f0f47ce0fc361740aedf17d6d2061e03", "text": "In supervised learning scenarios, feature selection has been studied widely in the literature. Selecting features in unsupervised learning scenarios is a much harder problem, due to the absence of class labels that would guide the search for relevant information. And, almost all of previous unsupervised feature selection methods are “wrapper” techniques that require a learning algorithm to evaluate the candidate feature subsets. In this paper, we propose a “filter” method for feature selection which is independent of any learning algorithm. Our method can be performed in either supervised or unsupervised fashion. The proposed method is based on the observation that, in many real world classification problems, data from the same class are often close to each other. The importance of a feature is evaluated by its power of locality preserving, or, Laplacian Score. We compare our method with data variance (unsupervised) and Fisher score (supervised) on two data sets. Experimental results demonstrate the effectiveness and efficiency of our algorithm.", "title": "" } ]
[ { "docid": "f4bb27786cf81892f30a01796fbbdbde", "text": "Kiosks are increasingly being heralded as a technology through which governments, government departments and local authorities or municipalities can engage with citizens. In particular, they have attractions in their potential to bridge the digital divide. There is some evidence to suggest that the citizen uptake of kiosks and indeed other channels for e-government, such as web sites, is slow, although studies on the use of kiosks for health information provision offer some interesting perspectives on user behaviour with kiosk technology. This article argues that the delivery of e-government through kiosks presents a number of strategic challenges, which will need to be negotiated over the next few years in order that kiosk applications are successful in enhancing accessibility to and engagement with egovernment. The article suggests that this involves consideration of: the applications to be delivered through a kiosk; one stop shop service and knowledge architectures; mechanisms for citizen identification; and, the integration of kiosks within the total interface between public bodies and their communities. The article concludes by outlining development and research agendas in each of these areas.", "title": "" }, { "docid": "c4ff647b5962d3d713577c16a7a9cae5", "text": "In this paper we propose the use of an illumination invariant transform to improve many aspects of visual localisation, mapping and scene classification for autonomous road vehicles. The illumination invariant colour space stems from modelling the spectral properties of the camera and scene illumination in conjunction, and requires only a single parameter derived from the image sensor specifications. We present results using a 24-hour dataset collected using an autonomous road vehicle, demonstrating increased consistency of the illumination invariant images in comparison to raw RGB images during daylight hours. We then present three example applications of how illumination invariant imaging can improve performance in the context of vision-based autonomous vehicles: 6-DoF metric localisation using monocular cameras over a 24-hour period, life-long visual localisation and mapping using stereo, and urban scene classification in changing environments. Our ultimate goal is robust and reliable vision-based perception and navigation an attractive proposition for low-cost autonomy for road vehicles.", "title": "" }, { "docid": "1f1a8f5f7612e131ce7b99c13aa4d5db", "text": "Background subtraction can be treated as the binary classification problem of highlighting the foreground region in a video whilst masking the background region, and has been broadly applied in various vision tasks such as video surveillance and traffic monitoring. However, it still remains a challenging task due to complex scenes and for lack of the prior knowledge about the temporal information. In this paper, we propose a novel background subtraction model based on 3D convolutional neural networks (3D CNNs) which combines temporal and spatial information to effectively separate the foreground from all the sequences in an end-to-end manner. Different from conventional models, we view background subtraction as three-class classification problem, i.e., the foreground, the background and the boundary. This design can obtain more reasonable results than existing baseline models. 
Experiments on the Change Detection 2012 dataset verify the potential of our model in both quantity and quality.", "title": "" }, { "docid": "799f9ca9ea641c1893e4900fdc29c8d4", "text": "This paper presents a large scale general purpose image database with human annotated ground truth. Firstly, an all-in-all labeling framework is proposed to group visual knowledge of three levels: scene level (global geometric description), object level (segmentation, sketch representation, hierarchical decomposition), and low-mid level (2.1D layered representation, object boundary attributes, curve completion, etc.). Much of this data has not appeared in previous databases. In addition, And-Or Graph is used to organize visual elements to facilitate top-down labeling. An annotation tool is developed to realize and integrate all tasks. With this tool, we’ve been able to create a database consisting of more than 636,748 annotated images and video frames. Lastly, the data is organized into 13 common subsets to serve as benchmarks for diverse evaluation endeavors.", "title": "" }, { "docid": "ea94a3c561476e88d5ac2640656a3f92", "text": "Point cloud is a basic description of discrete shape information. Parameterization of unorganized points is important for shape analysis and shape reconstruction of natural objects. In this paper we present a new algorithm for global parameterization of an unorganized point cloud and its application to the meshing of the cloud. Our method is guided by principal directions so as to preserve the intrinsic geometric properties. After initial estimation of principal directions, we develop a kNN(k-nearest neighbor) graph-based method to get a smooth direction field. Then the point cloud is cut to be topologically equivalent to a disk. The global parameterization is computed and its gradients align well with the guided direction field. A mixed integer solver is used to guarantee a seamless parameterization across the cut lines. The resultant parameterization can be used to triangulate and quadrangulate the point cloud simultaneously in a fully automatic manner, where the shape of the data is of any genus. & 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "9f9128951d6c842689f61fc19c79f238", "text": "This paper concerns image reconstruction for helical x-ray transmission tomography (CT) with multi-row detectors. We introduce two approximate cone-beam (CB) filtered-backprojection (FBP) algorithms of the Feldkamp type, obtained by extending to three dimensions (3D) two recently proposed exact FBP algorithms for 2D fan-beam reconstruction. The new algorithms are similar to the standard Feldkamp-type FBP for helical CT. In particular, they can reconstruct each transaxial slice from data acquired along an arbitrary segment of helix, thereby efficiently exploiting the available data. In contrast to the standard Feldkamp-type algorithm, however, the redundancy weight is applied after filtering, allowing a more efficient numerical implementation. To partially alleviate the CB artefacts, which increase with increasing values of the helical pitch, a frequency-mixing method is proposed. This method reconstructs the high frequency components of the image using the longest possible segment of helix, whereas the low frequencies are reconstructed using a minimal, short-scan, segment of helix to minimize CB artefacts. 
The performance of the algorithms is illustrated using simulated data.", "title": "" }, { "docid": "4829d8c0dd21f84c3afbe6e1249d6248", "text": "We present an action recognition and detection system from temporally untrimmed videos by combining motion and appearance features. Motion and appearance are two kinds of complementary cues for human action understanding from video. For motion features, we adopt the Fisher vector representation with improved dense trajectories due to its rich descriptive capacity. For appearance feature, we choose the deep convolutional neural network activations due to its recent success in image based tasks. With this fused feature of iDT and CNN, we train a SVM classifier for each action class in the one-vs-all scheme. We report both the recognition and detection results of our system on Thumos 14 Challenge. From the results, we see that our method rank 4 in the action recognition task and 2 in the action detection task.", "title": "" }, { "docid": "1027ce2c8e3a231fe8ab3f469a857f82", "text": "There are two major challenges for a high-performance remote-sensing database. First, it must provide low-latency retrieval of very large volumes of spatio-temporal data. This requires effective declustering and placement of a multidimensional dataset onto a large disk farm. Second, the order of magnitude reduction in data-size due to postprocessing makes it imperative, from a performance perspective, that the postprocessing be done on the machine that holds the data. This requires careful coordination of computation and data retrieval. This paper describes the design, implementation and evaluation of Titan, a parallel shared-nothing database designed for handling remotesensing data. The computational platform for Titan is a 16-processor IBM SP-2 with four fast disks attached to each processor. Titan is currently operational and contains about 24 GB of AVHRR data from the NOAA-7 satellite. The experimental results show that Titan provides good performance for global queries and interactive response times for local queries.", "title": "" }, { "docid": "a70d064af5e8c5842b8ca04abc3fb2d6", "text": "In the current scenario of cloud computing, heterogeneous resources are located in various geographical locations requiring security-aware resource management to handle security threats. However, existing techniques are unable to protect systems from security attacks. To provide a secure cloud service, a security-based resource management technique is required that manages cloud resources automatically and delivers secure cloud services. In this paper, we propose a self-protection approach in cloud resource management called SECURE, which offers self-protection against security attacks and ensures continued availability of services to authorized users. The performance of SECURE has been evaluated using SNORT. The experimental results demonstrate that SECURE performs effectively in terms of both the intrusion detection rate and false positive rate. Further, the impact of security on quality of service (QoS) has been analyzed.", "title": "" }, { "docid": "b50b912cb79368db51825e7cbea2df5d", "text": "Effectively solving the problem of sketch generation, which aims to produce human-drawing-like sketches from real photographs, opens the door for many vision applications such as sketch-based image retrieval and nonphotorealistic rendering. In this paper, we approach automatic sketch generation from a human visual perception perspective. 
Instead of gathering insights from photographs, for the first time, we extract information from a large pool of human sketches. In particular, we study how multiple Gestalt rules can be encapsulated into a unified perceptual grouping framework for sketch generation. We further show that by solving the problem of Gestalt confliction, i.e., encoding the relative importance of each rule, more similar to human-made sketches can be generated. For that, we release a manually labeled sketch dataset of 96 object categories and 7,680 sketches. A novel evaluation framework is proposed to quantify human likeness of machinegenerated sketches by examining how well they can be classified using models trained from human data. Finally, we demonstrate the superiority of our sketches under the practical application of sketch-based image retrieval.", "title": "" }, { "docid": "3432123018be278cb2e85892925ce4e6", "text": "The cellular heterogeneity and complex tissue architecture of most tumor samples is a major obstacle in image analysis on standard hematoxylin and eosin-stained (H&E) tissue sections. A mixture of cancer and normal cells complicates the interpretation of their cytological profiles. Furthermore, spatial arrangement and architectural organization of cells are generally not reflected in cellular characteristics analysis. To address these challenges, first we describe an automatic nuclei segmentation of H&E tissue sections. In the task of deconvoluting cellular heterogeneity, we adopt Landmark based Spectral Clustering (LSC) to group individual nuclei in such a way that nuclei in the same group are more similar. We next devise spatial statistics for analyzing spatial arrangement and organization, which are not detectable by individual cellular characteristics. Our quantitative, spatial statistics analysis could benefit H&E section analysis by refining and complementing cellular characteristics analysis.", "title": "" }, { "docid": "a70bf8d63587aaa719d3171155943153", "text": "Population-based linkage analysis is a new method for analysing genomewide single nucleotide polymorphism (SNP) genotype data in case–control samples, which does not assume a common disease, common variant model. The genome is scanned for extended segments that show increased identity-by-descent sharing within case–case pairs, relative to case–control or control–control pairs. The method is robust to allelic heterogeneity and is suited to mapping genes which contain multiple, rare susceptibility variants of relatively high penetrance. We analysed genomewide SNP datasets for two schizophrenia case–control cohorts, collected in Aberdeen (461 cases, 459 controls) and Munich (429 cases, 428 controls). Population-based linkage testing must be performed within homogeneous samples and it was therefore necessary to analyse the cohorts separately. Each cohort was first subjected to several procedures to improve genetic homogeneity, including identity-by-state outlier detection and multidimensional scaling analysis. When testing only cases who reported a positive family history of major psychiatric disease, consistent with a model of strongly penetrant susceptibility alleles, we saw a distinct peak on chromosome 19q in both cohorts that appeared in meta-analysis (P=0.000016) to surpass the traditional level for genomewide significance for complex trait linkage. 
The linkage signal was also present in a third case–control sample for familial bipolar disorder, such that meta-analysing all three datasets together yielded a linkage P=0.0000026. A model of rare but highly penetrant disease alleles may be more applicable to some instances of major psychiatric diseases than the common disease common variant model, and we therefore suggest that other genome scan datasets are analysed with this new, complementary method.", "title": "" }, { "docid": "e112af9e35690b64acc7242611b39dd2", "text": "Body sensor network systems can help people by providing healthcare services such as medical monitoring, memory enhancement, medical data access, and communication with the healthcare provider in emergency situations through the SMS or GPRS [1,2]. Continuous health monitoring with wearable [3] or clothing-embedded transducers [4] and implantable body sensor networks [5] will increase detection of emergency conditions in at risk patients. Not only the patient, but also their families will benefit from these. Also, these systems provide useful methods to remotely acquire and monitor the physiological signals without the need of interruption of the patient’s normal life, thus improving life quality [6,7].", "title": "" }, { "docid": "81e65e50b96a5b38fbbddeb6ad0acfe4", "text": "Effectively using full syntactic parsing information in Neural Networks (NNs) to solve relational tasks, e.g., question similarity, is still an open problem. In this paper, we propose to inject structural representations in NNs by (i) learning an SVM model using Tree Kernels (TKs) on relatively few pairs of questions (few thousands) as gold standard (GS) training data is typically scarce, (ii) predicting labels on a very large corpus of question pairs, and (iii) pre-training NNs on such large corpus. The results on Quora and SemEval question similarity datasets show that NNs trained with our approach can learn more accurate models, especially after fine tuning on GS.", "title": "" }, { "docid": "f87fea9cd76d1545c34f8e813347146e", "text": "In fault detection and isolation, diagnostic test results are commonly used to compute a set of diagnoses, where each diagnosis points at a set of components which might behave abnormally. In distributed systems consisting of multiple control units, the test results in each unit can be used to compute local diagnoses while all test results in the complete system give the global diagnoses. It is an advantage for both repair and fault-tolerant control to have access to the global diagnoses in each unit since these diagnoses represent all test results in all units. However, when the diagnoses, for example, are to be used to repair a unit, only the components that are used by the unit are of interest. The reason for this is that it is only these components that could have caused the abnormal behavior. However, the global diagnoses might include components from the complete system and therefore often include components that are superfluous for the unit. Motivated by this observation, a new type of diagnosis is proposed, namely, the condensed diagnosis. Each unit has a unique set of condensed diagnoses which represents the global diagnoses. The benefit of the condensed diagnoses is that they only include components used by the unit while still representing the global diagnoses. 
The proposed method is applied to an automotive vehicle, and the results from the application study show the benefit of using condensed diagnoses compared to global diagnoses.", "title": "" }, { "docid": "216efaca84a9df871e7919f0b5215b77", "text": "This paper is concerned with some experimental results and practical evaluations of a three-level phase shifted ZVS-PWM DC-DC converter with neutral point clamping diodes and flying capacitor for a variety of gas metal arc welding machines. This new DC-DC converter suitable for high power applications is implemented by modifying the high-frequency-linked half bridge soft switching PWM DC-DC converter with two active edge resonant cells (AERCs) in high side and low side DC rails, which is previously-developed put into practice by the authors. The operating principle of the three-level phase-shift ZVS-PWM DC-DC converter and its experimental and simulation results including power regulation characteristics vs. phase-shifted angle and power conversion efficiency characteristics in addition to power loss analysis are illustrated and evaluated comparatively from a practical point of view, along with the remarkable advantageous features as compared with previously-developed one.", "title": "" }, { "docid": "827e9045f932b146a8af66224e114be6", "text": "Using a common set of attributes to determine which methodology to use in a particular data warehousing project.", "title": "" }, { "docid": "66fc8b47dd186fa17240ee64aadf7ca7", "text": "Posterior reversible encephalopathy syndrome (PRES) is characterized by variable associations of seizure activity, consciousness impairment, headaches, visual abnormalities, nausea/vomiting, and focal neurological signs. The PRES may occur in diverse situations. The findings on neuroimaging in PRES are often symmetric and predominate edema in the white matter of the brain areas perfused by the posterior brain circulation, which is reversible when the underlying cause is treated. We report the case of PRES in normotensive patient with hyponatremia.", "title": "" }, { "docid": "d449a4d183c2a3e1905935f624d684d3", "text": "This paper introduces the approach CBRDIA (Case-based Reasoning for Document Invoice Analysis) which uses the principles of case-based reasoning to analyze, recognize and interpret invoices. Two CBR cycles are performed sequentially in CBRDIA. The first one consists in checking whether a similar document has already been processed, which makes the interpretation of the current one easy. The second cycle works if the first one fails. It processes the document by analyzing and interpreting its structuring elements (adresses, amounts, tables, etc) one by one. The CBR cycles allow processing documents from both knonwn or unknown classes. Applied on 923 invoices, CBRDIA reaches a recognition rate of 85,22% for documents of known classes and 74,90% for documents of unknown classes.", "title": "" }, { "docid": "741078742178d09f911ef9633befeb9b", "text": "We introduce a novel kernel for comparing two text documents. The kernel is an inner product in the feature space consisting of all subsequences of length k. A subsequence is any ordered sequence of k characters occurring in the text though not necessarily contiguously. The subsequences are weighted by an exponentially decaying factor of their full length in the text, hence emphasising those occurrences which are close to contiguous. 
A direct computation of this feature vector would involve a prohibitive amount of computation even for modest values of k, since the dimension of the feature space grows exponentially with k. The paper describes how despite this fact the inner product can be efficiently evaluated by a dynamic programming technique. A preliminary experimental comparison of the performance of the kernel compared with a standard word feature space kernel [4] is made showing encouraging results.", "title": "" } ]
scidocsrr
1152ca1e52211fee8c089a8119edc5e5
Charge equalization converter with parallel primary winding for series connected Lithium-Ion battery strings in HEV
[ { "docid": "90c3543eca7a689188725e610e106ce9", "text": "Lithium-based battery technology offers performance advantages over traditional battery technologies at the cost of increased monitoring and controls overhead. Multiple-cell Lead-Acid battery packs can be equalized by a controlled overcharge, eliminating the need to periodically adjust individual cells to match the rest of the pack. Lithium-based based batteries cannot be equalized by an overcharge, so alternative methods are required. This paper discusses several cell-balancing methodologies. Active cell balancing methods remove charge from one or more high cells and deliver the charge to one or more low cells. Dissipative techniques find the high cells in the pack, and remove excess energy through a resistive element until their charges match the low cells. This paper presents the theory of charge balancing techniques and the advantages and disadvantages of the presented methods. INTRODUCTION Lithium Ion and Lithium Polymer battery chemistries cannot be overcharged without damaging active materials [1-5]. The electrolyte breakdown voltage is precariously close to the fully charged terminal voltage, typically in the range of 4.1 to 4.3 volts/cell. Therefore, careful monitoring and controls must be implemented to avoid any single cell from experiencing an overvoltage due to excessive charging. Single lithium-based cells require monitoring so that cell voltage does not exceed predefined limits of the chemistry. Series connected lithium cells pose a more complex problem: each cell in the string must be monitored and controlled. Even though the pack voltage may appear to be within acceptable limits, one cell of the series string may be experiencing damaging voltage due to cell-to-cell imbalances. Traditionally, cell-to-cell imbalances in lead-acid batteries have been solved by controlled overcharging [6,7]. Leadacid batteries can be brought into overcharge conditions without permanent cell damage, as the excess energy is released by gassing. This gassing mechanism is the natural method for balancing a series string of lead acid battery cells. Other chemistries, such as NiMH, exhibit similar natural cell-to-cell balancing mechanisms [8]. Because a Lithium battery cannot be overcharged, there is no natural mechanism for cell equalization. Therefore, an alternative method must be employed. This paper discusses three categories of cell balancing methodologies: charging methods, active methods, and passive methods. Cell balancing is necessary for highly transient lithium battery applications, especially those applications where charging occurs frequently, such as regenerative braking in electric vehicle (EV) or hybrid electric vehicle (HEV) applications. Regenerative braking can cause problems for Lithium Ion batteries because the instantaneous regenerative braking current inrush can cause battery voltage to increase suddenly, possibly over the electrolyte breakdown threshold voltage. Deviations in cell behaviors generally occur because of two phenomenon: changes in internal impedance or cell capacity reduction due to aging. In either case, if one cell in a battery pack experiences deviant cell behavior, that cell becomes a likely candidate to overvoltage during high power charging events. Cells with reduced capacity or high internal impedance tend to have large voltage swings when charging and discharging. For HEV applications, it is necessary to cell balance lithium chemistry because of this overvoltage potential. 
For EV applications, cell balancing is desirable to obtain maximum usable capacity from the battery pack. During charging, an out-of-balance cell may prematurely approach the end-of-charge voltage (typically 4.1 to 4.3 volts/cell) and trigger the charger to turn off. Cell balancing is useful to control the higher voltage cells until the rest of the cells can catch up. In this way, the charger is not turned off until the cells simultaneously reach the end-of-charge voltage. END-OF-CHARGE CELL BALANCING METHODS Typically, cell-balancing methods employed during and at end-of-charging are useful only for electric vehicle purposes. This is because electric vehicle batteries are generally fully charged between each use cycle. Hybrid electric vehicle batteries may or may not be maintained fully charged, resulting in unpredictable end-of-charge conditions to enact the balancing mechanism. Hybrid vehicle batteries also require both high power charge (regenerative braking) and discharge (launch assist or boost) capabilities. For this reason, their batteries are usually maintained at a SOC that can discharge the required power but still have enough headroom to accept the necessary regenerative power. To fully charge the HEV battery for cell balancing would diminish charge acceptance capability (regenerative braking). CHARGE SHUNTING The charge-shunting cell balancing method selectively shunts the charging current around each cell as they become fully charged (Figure 1). This method is most efficiently employed on systems with known charge rates. The shunt resistor R is sized to shunt exactly the charging current I when the fully charged cell voltage V is reached. If the charging current decreases, resistor R will discharge the shunted cell. To avoid extremely large power dissipations due to R, this method is best used with stepped-current chargers with a small end-of-charge current.", "title": "" } ]
[ { "docid": "690f65505dd936f834a3bb8042147564", "text": "Forging new memories for facts and events, holding critical details in mind on a moment-to-moment basis, and retrieving knowledge in the service of current goals all depend on a complex interplay between neural ensembles throughout the brain. Over the past decade, researchers have increasingly utilized powerful analytical tools (e.g., multivoxel pattern analysis) to decode the information represented within distributed functional magnetic resonance imaging activity patterns. In this review, we discuss how these methods can sensitively index neural representations of perceptual and semantic content and how leverage on the engagement of distributed representations provides unique insights into distinct aspects of memory-guided behavior. We emphasize that, in addition to characterizing the contents of memories, analyses of distributed patterns shed light on the processes that influence how information is encoded, maintained, or retrieved, and thus inform memory theory. We conclude by highlighting open questions about memory that can be addressed through distributed pattern analyses.", "title": "" }, { "docid": "ca0f2b3565b6479c5c3b883325bf3296", "text": "We present a simple, robust generation system which performs content selection and surface realization in a unified, domain-independent framework. In our approach, we break up the end-to-end generation process into a sequence of local decisions, arranged hierarchically and each trained discriminatively. We deployed our system in three different domains—Robocup sportscasting, technical weather forecasts, and common weather forecasts, obtaining results comparable to state-ofthe-art domain-specific systems both in terms of BLEU scores and human evaluation.", "title": "" }, { "docid": "df0be45b6db0de70acb6bbf44e7898aa", "text": "The paper focuses on conservation agriculture (CA), defined as minimal soil disturbance (no-till, NT) and permanent soil cover (mulch) combined with rotations, as a more sustainable cultivation system for the future. Cultivation and tillage play an important role in agriculture. The benefits of tillage in agriculture are explored before introducing conservation tillage (CT), a practice that was borne out of the American dust bowl of the 1930s. The paper then describes the benefits of CA, a suggested improvement on CT, where NT, mulch and rotations significantly improve soil properties and other biotic factors. The paper concludes that CA is a more sustainable and environmentally friendly management system for cultivating crops. Case studies from the rice-wheat areas of the Indo-Gangetic Plains of South Asia and the irrigated maize-wheat systems of Northwest Mexico are used to describe how CA practices have been used in these two environments to raise production sustainably and profitably. Benefits in terms of greenhouse gas emissions and their effect on global warming are also discussed. The paper concludes that agriculture in the next decade will have to sustainably produce more food from less land through more efficient use of natural resources and with minimal impact on the environment in order to meet growing population demands. 
Promoting and adopting CA management systems can help meet this goal.", "title": "" }, { "docid": "ccaa01441d7de9009dea10951a3ea2f3", "text": "for Natural Language A First Course in Computational Semantics Volume II Working with Discourse Representation Structures Patrick Blackburn & Johan Bos September 3, 1999", "title": "" }, { "docid": "5ca14c0581484f5618dd806a6f994a03", "text": "Many of existing criteria for evaluating Web sites quality require methods such as heuristic evaluations, or/and empirical usability tests. This paper aims at defining a quality model and a set of characteristics relating internal and external quality factors and giving clues about potential problems, which can be measured by automated tools. The first step in the quality assessment process is an automatic check of the source code, followed by manual evaluation, possibly supported by an appropriate user panel. As many existing tools can check sites (mainly considering accessibility issues), the general architecture will be based upon a conceptual model of the site/page, and the tools will export their output to a Quality Data Base, which is the basis for subsequent actions (checking, reporting test results, etc.).", "title": "" }, { "docid": "738f60fbfe177eec52057c8e5ab43e55", "text": "From social science to biology, numerous applications often rely on graphlets for intuitive and meaningful characterization of networks at both the global macro-level as well as the local micro-level. While graphlets have witnessed a tremendous success and impact in a variety of domains, there has yet to be a fast and efficient approach for computing the frequencies of these subgraph patterns. However, existing methods are not scalable to large networks with millions of nodes and edges, which impedes the application of graphlets to new problems that require large-scale network analysis. To address these problems, we propose a fast, efficient, and parallel algorithm for counting graphlets of size k={3,4}-nodes that take only a fraction of the time to compute when compared with the current methods used. The proposed graphlet counting algorithms leverages a number of proven combinatorial arguments for different graphlets. For each edge, we count a few graphlets, and with these counts along with the combinatorial arguments, we obtain the exact counts of others in constant time. On a large collection of 300+ networks from a variety of domains, our graphlet counting strategies are on average 460x faster than current methods. This brings new opportunities to investigate the use of graphlets on much larger networks and newer applications as we show in the experiments. To the best of our knowledge, this paper provides the largest graphlet computations to date as well as the largest systematic investigation on over 300+ networks from a variety of domains.", "title": "" }, { "docid": "25a1ff583d944075593615777ec4c3be", "text": "Diagnostic blood samples collected by phlebotomy are the most common type of biological specimens drawn and sent to laboratory medicine facilities for being analyzed, thus supporting caring physicians in patient diagnosis, follow-up and/or therapeutic monitoring. Phlebotomy, a relatively invasive medical procedure, is indeed critical for the downstream procedures accomplished either in the analytical phase made in the laboratory or in the interpretive process done by the physicians. Diagnosis, management, treatment of patients and ultimately patient safety itself can be compromised by poor phlebotomy quality. 
We have read with interest a recent article where the authors addressed important aspects of venous blood collection for laboratory medicine analysis. The authors conducted a phlebotomy survey based on the Clinical and Laboratory Standard Institute (CLSI) H03-A6 document (presently replaced by the GP41-A6 document) in three government hospitals in Ethiopia to evaluate 120 professionals (101 non-laboratory professionals vs. 19 laboratory professionals) as regards the venous blood collection practice. The aim of this mini (non-systematic) review is to both take a cue from the above article and from current practices we had already observed in other laboratory settings, and discuss four questionable activities performed by health care professionals during venous blood collection. We refer to: i) diet restriction assessment; ii) puncture site cleansing; iii) timing of tourniquet removal and; iv) mixing specimen with additives.", "title": "" }, { "docid": "98388ecea031b70916cabda20edf3496", "text": "Rim-driven thrusters have received much attention concerning the potential benefits in vibration and hydrodynamic characteristics, which are of great importance in marine transportation systems. In this sense, the rim-driven permanent magnet, brushless dc, and induction motors have been recently suggested to be employed as marine propulsion motors. On the other hand, high-temperature superconducting (HTS) synchronous motors are becoming much fascinating, particularly in transport applications, regarding some considerable advantages such as low loss, high efficiency, and compactness. However, the HTS-type rim-driven synchronous motor has not been studied yet. Therefore, this paper is devoted to a design practice of rim-driven synchronous motors with HTS field winding. A detailed design procedure is developed for the HTS rim-driven motors, and the design algorithm is validated applying the finite element (FE) method. The FE model of a three-phase 2.5-MW HTS rim-driven synchronous motor is utilized, and the electromagnetic characteristics of the motor are then evaluated. The goal is to design an HTS machine fitted in a thin duct to minimize the hydrodynamic drag force. The design problem exhibits some difficulties while considering various constraints.", "title": "" }, { "docid": "41f5e010cf81fd0152a806853f4d7e93", "text": "Consider revising medium temperature used LM35 temperature sensor, what is an economic and feasible method. This study mainly researches the applicability of LM35 temperature sensor in soil temperature testing field. Selected the sensor, and based on the theoretical equation between the sensor output voltage and Celsius temperature; introduced correction coefficient, carried through the calibration experiment of the sensor; further more, it is applied to the potted rice's soil temperature detection. The calibration results show that, each sensor correction coefficient is different from others, but these numerical are close to 1, the linear relationship was very significant between tested medium temperature and sensor output voltage. In the key trial period of rice potted, used LM35DZ type temperature sensor to measure the soil temperature. The analysis result show that, the changing trends are basically equal both soil temperature and air temperature, and the characteristics of soil temperatures are lag. 
The variance analysis shows that, the difference was not significant paper film covered and without covered on soil temperature.", "title": "" }, { "docid": "ae70b9ef5eeb6316b5b022662191cc4f", "text": "The total harmonic distortion (THD) is an important performance criterion for almost any communication device. In most cases, the THD of a periodic signal, which has been processed in some way, is either measured directly or roughly estimated numerically, while analytic methods are employed only in a limited number of simple cases. However, the knowledge of the theoretical THD may be quite important for the conception and design of the communication equipment (e.g. transmitters, power amplifiers). The aim of this paper is to present a general theoretic approach, which permits to obtain an analytic closed-form expression for the THD. It is also shown that in some cases, an approximate analytic method, having good precision and being less sophisticated, may be developed. Finally, the mathematical technique, on which the proposed method is based, is described in the appendix.", "title": "" }, { "docid": "1419e2f53412b4ce2d6944bad163f13d", "text": "Determining the emotion of a song that best characterizes the affective content of the song is a challenging issue due to the difficulty of collecting reliable ground truth data and the semantic gap between human's perception and the music signal of the song. To address this issue, we represent an emotion as a point in the Cartesian space with valence and arousal as the dimensions and determine the coordinates of a song by the relative emotion of the song with respect to other songs. We also develop an RBF-ListNet algorithm to optimize the ranking-based objective function of our approach. The cognitive load of annotation, the accuracy of emotion recognition, and the subjective quality of the proposed approach are extensively evaluated. Experimental results show that this ranking-based approach simplifies emotion annotation and enhances the reliability of the ground truth. The performance of our algorithm for valence recognition reaches 0.326 in Gamma statistic.", "title": "" }, { "docid": "07b362c7f6e941513cfbafce1ba87db1", "text": "ResearchGate is increasingly used by scholars to upload the full-text of their articles and make them freely available for everyone. This study aims to investigate the extent to which ResearchGate members as authors of journal articles comply with publishers’ copyright policies when they self-archive full-text of their articles on ResearchGate. A random sample of 500 English journal articles available as full-text on ResearchGate were investigated. 108 articles (21.6%) were open access (OA) published in OA journals or hybrid journals. Of the remaining 392 articles, 61 (15.6%) were preprint, 24 (6.1%) were post-print and 307 (78.3%) were published (publisher) PDF. The key finding was that 201 (51.3%) out of 392 non-OA articles infringed the copyright and were non-compliant with publishers’ policy. While 88.3% of journals allowed some form of self-archiving (SHERPA/RoMEO green, blue or yellow journals), the majority of non-compliant cases (97.5%) occurred when authors self-archived publishers’ PDF files (final published version). 
This indicates that authors infringe copyright most of the time not because they are not allowed to self-archive, but because they use the wrong version, which might imply their lack of understanding of copyright policies and/or complexity and diversity of policies.", "title": "" }, { "docid": "8be48d08aec21ecdf8a124fa3fef8d48", "text": "Topic modeling has become a widely used tool for document management. However, there are few topic models distinguishing the importance of documents on different topics. In this paper, we propose a framework LIMTopic to incorporate link based importance into topic modeling. To instantiate the framework, RankTopic and HITSTopic are proposed by incorporating topical pagerank and topical HITS into topic modeling respectively. Specifically, ranking methods are first used to compute the topical importance of documents. Then, a generalized relation is built between link importance and topic modeling. We empirically show that LIMTopic converges after a small number of iterations in most experimental settings. The necessity of incorporating link importance into topic modeling is justified based on KL-Divergences between topic distributions converted from topical link importance and those computed by basic topic models. To investigate the document network summarization performance of topic models, we propose a novel measure called log-likelihood of ranking-integrated document-word matrix. Extensive experimental results show that LIMTopic performs better than baseline models in generalization performance, document clustering and classification, topic interpretability and document network summarization performance. Moreover, RankTopic has comparable performance with relational topic model (RTM) and HITSTopic performs much better than baseline models in document clustering and classification.", "title": "" }, { "docid": "dcada3c12fb14b454964b97b8541b69d", "text": "nce ch; n ple iray r. In hue 003 Abstract. We present a comparison between two color equalization algorithms: Retinex, the famous model due to Land and McCann, and Automatic Color Equalization (ACE), a new algorithm recently presented by the authors. These two algorithms share a common approach to color equalization, but different computational models. We introduce the two models focusing on differences and common points. An analysis of their computational characteristics illustrates the way the Retinex approach has influenced ACE structure, and which aspects of the first algorithm have been modified in the second one and how. Their interesting equalization properties, like lightness and color constancy, image dynamic stretching, global and local filtering, and data driven dequantization, are qualitatively and quantitatively presented and compared, together with their ability to mimic the human visual system. © 2004 SPIE and IS&T. [DOI: 10.1117/1.1635366]", "title": "" }, { "docid": "2f8a6dcaeea91ef5034908b5bab6d8d3", "text": "Web-based social systems enable new community-based opportunities for participants to engage, share, and interact. This community value and related services like search and advertising are threatened by spammers, content polluters, and malware disseminators. In an effort to preserve community value and ensure longterm success, we propose and evaluate a honeypot-based approach for uncovering social spammers in online social systems. 
Two of the key components of the proposed approach are: (1) The deployment of social honeypots for harvesting deceptive spam profiles from social networking communities; and (2) Statistical analysis of the properties of these spam profiles for creating spam classifiers to actively filter out existing and new spammers. We describe the conceptual framework and design considerations of the proposed approach, and we present concrete observations from the deployment of social honeypots in MySpace and Twitter. We find that the deployed social honeypots identify social spammers with low false positive rates and that the harvested spam data contains signals that are strongly correlated with observable profile features (e.g., content, friend information, posting patterns, etc.). Based on these profile features, we develop machine learning based classifiers for identifying previously unknown spammers with high precision and a low rate of false positives.", "title": "" }, { "docid": "60de343325a305b08dfa46336f2617b5", "text": "On Friday, May 12, 2017 a large cyber-attack was launched using WannaCry (or WannaCrypt). In a few days, this ransomware virus targeting Microsoft Windows systems infected more than 230,000 computers in 150 countries. Once activated, the virus demanded ransom payments in order to unlock the infected system. The widespread attack affected endless sectors – energy, transportation, shipping, telecommunications, and of course health care. Britain’s National Health Service (NHS) reported that computers, MRI scanners, blood-storage refrigerators and operating room equipment may have all been impacted. Patient care was reportedly hindered and at the height of the attack, NHS was unable to care for non-critical emergencies and resorted to diversion of care from impacted facilities. While daunting to recover from, the entire situation was entirely preventable. A Bcritical^ patch had been released by Microsoft on March 14, 2017. Once applied, this patch removed any vulnerability to the virus. However, hundreds of organizations running thousands of systems had failed to apply the patch in the first 59 days it had been released. This entire situation highlights a critical need to reexamine how we maintain our health information systems. Equally important is a need to rethink how organizations sunset older, unsupported operating systems, to ensure that security risks are minimized. For example, in 2016, the NHS was reported to have thousands of computers still running Windows XP – a version no longer supported or maintained by Microsoft. There is no question that this will happen again. However, health organizations can mitigate future risk by ensuring best security practices are adhered to.", "title": "" }, { "docid": "c6f3d4b2a379f452054f4220f4488309", "text": "3D Morphable Models (3DMMs) are powerful statistical models of 3D facial shape and texture, and among the state-of-the-art methods for reconstructing facial shape from single images. With the advent of new 3D sensors, many 3D facial datasets have been collected containing both neutral as well as expressive faces. However, all datasets are captured under controlled conditions. Thus, even though powerful 3D facial shape models can be learnt from such data, it is difficult to build statistical texture models that are sufficient to reconstruct faces captured in unconstrained conditions (in-the-wild). 
In this paper, we propose the first, to the best of our knowledge, in-the-wild 3DMM by combining a powerful statistical model of facial shape, which describes both identity and expression, with an in-the-wild texture model. We show that the employment of such an in-the-wild texture model greatly simplifies the fitting procedure, because there is no need to optimise with regards to the illumination parameters. Furthermore, we propose a new fast algorithm for fitting the 3DMM in arbitrary images. Finally, we have captured the first 3D facial database with relatively unconstrained conditions and report quantitative evaluations with state-of-the-art performance. Complementary qualitative reconstruction results are demonstrated on standard in-the-wild facial databases.", "title": "" }, { "docid": "0e514c165e362de91764f3ddd2a09e15", "text": "The authors examined how networks of teams integrate their efforts to succeed collectively. They proposed that integration processes used to align efforts among multiple teams are important predictors of multiteam performance. The authors used a multiteam system (MTS) simulation to assess how both cross-team and within-team processes relate to MTS performance over multiple performance episodes that differed in terms of required interdependence levels. They found that cross-team processes predicted MTS performance beyond that accounted for by within-team processes. Further, cross-team processes were more important for MTS effectiveness when there were high cross-team interdependence demands as compared with situations in which teams could work more independently. Results are discussed in terms of extending theory and applications from teams to multiteam systems.", "title": "" }, { "docid": "033553066cafa5c777bfab564a957c17", "text": "BACKGROUND\nBased on evidence that psychologic distress often goes unrecognized although it is common among cancer patients, clinical practice guidelines recommend routine screening for distress. For this study, the authors sought to determine whether the single-item Distress Thermometer (DT) compared favorably with longer measures currently used to screen for distress.\n\n\nMETHODS\nPatients (n = 380) who were recruited from 5 sites completed the DT and identified the presence or absence of 34 problems using a standardized list. Participants also completed the 14-item Hospital Anxiety and Depression Scale (HADS) and an 18-item version of the Brief Symptom Inventory (BSI-18), both of which have established cutoff scores for identifying clinically significant distress.\n\n\nRESULTS\nReceiver operating characteristic (ROC) curve analyses of DT scores yielded area under the curve estimates relative to the HADS cutoff score (0.80) and the BSI-18 cutoff scores (0.78) indicative of good overall accuracy. ROC analyses also showed that a DT cutoff score of 4 had optimal sensitivity and specificity relative to both the HADS and BSI-18 cutoff scores. Additional analyses indicated that, compared with patients who had DT scores < 4, patients who had DT scores > or = 4 were more likely to be women, have a poorer performance status, and report practical, family, emotional, and physical problems (P < or = 0.05).\n\n\nCONCLUSIONS\nFindings confirm that the single-item DT compares favorably with longer measures used to screen for distress. A DT cutoff score of 4 yielded optimal sensitivity and specificity in a general cancer population relative to established cutoff scores on longer measures. 
The use of this cutoff score identified patients with a range of problems that were likely to reflect psychologic distress.", "title": "" }, { "docid": "8f0805ba67919e349f2cd506378a5171", "text": "Cycloastragenol (CAG) is an aglycone of astragaloside IV. It was first identified when screening Astragalus membranaceus extracts for active ingredients with antiaging properties. The present study demonstrates that CAG stimulates telomerase activity and cell proliferation in human neonatal keratinocytes. In particular, CAG promotes scratch wound closure of human neonatal keratinocyte monolayers in vitro. The distinct telomerase-activating property of CAG prompted evaluation of its potential application in the treatment of neurological disorders. Accordingly, CAG induced telomerase activity and cAMP response element binding (CREB) activation in PC12 cells and primary neurons. Blockade of CREB expression in neuronal cells by RNA interference reduced basal telomerase activity, and CAG was no longer efficacious in increasing telomerase activity. CAG treatment not only induced the expression of bcl2, a CREB-regulated gene, but also the expression of telomerase reverse transcriptase in primary cortical neurons. Interestingly, oral administration of CAG for 7 days attenuated depression-like behavior in experimental mice. In conclusion, CAG stimulates telomerase activity in human neonatal keratinocytes and rat neuronal cells, and induces CREB activation followed by tert and bcl2 expression. Furthermore, CAG may have a novel therapeutic role in depression.", "title": "" } ]
scidocsrr
75e6108558b653a0b4dfb0a5bd0a4272
Exhausted Parents: Development and Preliminary Validation of the Parental Burnout Inventory
[ { "docid": "f84f279b6ef3b112a0411f5cba82e1b0", "text": "PHILADELPHIA The difficulties inherent in obtaining consistent and adequate diagnoses for the purposes of research and therapy have been pointed out by a number of authors. Pasamanick12 in a recent article viewed the low interclinician agreement on diagnosis as an indictment of the present state of psychiatry and called for \"the development of objective, measurable and verifiable criteria of classification based not on personal or parochial considerations, buton behavioral and other objectively measurable manifestations.\" Attempts by other investigators to subject clinical observations and judgments to objective measurement have resulted in a wide variety of psychiatric rating ~ c a l e s . ~ J ~ These have been well summarized in a review article by Lorr l1 on \"Rating Scales and Check Lists for the E v a 1 u a t i o n of Psychopathology.\" In the area of psychological testing, a variety of paper-andpencil tests have been devised for the purpose of measuring specific personality traits; for example, the Depression-Elation Test, devised by Jasper in 1930. This report describes the development of an instrument designed to measure the behavioral manifestations of depression. In the planning of the research design of a project aimed at testing certain psychoanalytic formulations of depression, the necessity for establishing an appropriate system for identifying depression was recognized. Because of the reports on the low degree of interclinician agreement on diagnosis,13 we could not depend on the clinical diagnosis, but had to formulate a method of defining depression that would be reliable and valid. The available instruments were not considered adequate for our purposes. The Minnesota Multiphasic Personality Inventory, for example, was not specifically designed", "title": "" }, { "docid": "c90ab409ea2a9726f6ddded45e0fdea9", "text": "About a decade ago, the Adult Attachment Interview (AAI; C. George, N. Kaplan, & M. Main, 1985) was developed to explore parents' mental representations of attachment as manifested in language during discourse of childhood experiences. The AAI was intended to predict the quality of the infant-parent attachment relationship, as observed in the Ainsworth Strange Situation, and to predict parents' responsiveness to their infants' attachment signals. The current meta-analysis examined the available evidence with respect to these predictive validity issues. In regard to the 1st issue, the 18 available samples (N = 854) showed a combined effect size of 1.06 in the expected direction for the secure vs. insecure split. For a portion of the studies, the percentage of correspondence between parents' mental representation of attachment and infants' attachment security could be computed (the resulting percentage was 75%; kappa = .49, n = 661). Concerning the 2nd issue, the 10 samples (N = 389) that were retrieved showed a combined effect size of .72 in the expected direction. According to conventional criteria, the effect sizes are large. It was concluded that although the predictive validity of the AAI is a replicated fact, there is only partial knowledge of how attachment representations are transmitted (the transmission gap).", "title": "" }, { "docid": "feafd64c9f81b07f7f616d2e36e15e0c", "text": "Burnout is a prolonged response to chronic emotional and interpersonal stressors on the job, and is defined by the three dimensions of exhaustion, cynicism, and inefficacy. 
The past 25 years of research has established the complexity of the construct, and places the individual stress experience within a larger organizational context of people's relation to their work. Recently, the work on burnout has expanded internationally and has led to new conceptual models. The focus on engagement, the positive antithesis of burnout, promises to yield new perspectives on interventions to alleviate burnout. The social focus of burnout, the solid research basis concerning the syndrome, and its specific ties to the work domain make a distinct and valuable contribution to people's health and well-being.", "title": "" } ]
[ { "docid": "c5122000c9d8736cecb4d24e6f56aab8", "text": "New credit cards containing Europay, MasterCard and Visa (EMV) chips for enhanced security used in-store purchases rather than online purchases have been adopted considerably. EMV supposedly protects the payment cards in such a way that the computer chip in a card referred to as chip-and-pin cards generate a unique one time code each time the card is used. The one time code is designed such that if it is copied or stolen from the merchant system or from the system terminal cannot be used to create a counterfeit copy of that card or counterfeit chip of the transaction. However, in spite of this design, EMV technology is not entirely foolproof from failure. In this paper we discuss the issues, failures and fraudulent cases associated with EMV Chip-And-Card technology.", "title": "" }, { "docid": "ba7cb71cf07765f915d548f2a01e7b98", "text": "Existing data storage systems offer a wide range of functionalities to accommodate an equally diverse range of applications. However, new classes of applications have emerged, e.g., blockchain and collaborative analytics, featuring data versioning, fork semantics, tamper-evidence or any combination thereof. They present new opportunities for storage systems to efficiently support such applications by embedding the above requirements into the storage. In this paper, we present ForkBase, a storage engine designed for blockchain and forkable applications. By integrating core application properties into the storage, ForkBase not only delivers high performance but also reduces development effort. The storage manages multiversion data and supports two variants of fork semantics which enable different fork worklflows. ForkBase is fast and space efficient, due to a novel index class that supports efficient queries as well as effective detection of duplicate content across data objects, branches and versions. We demonstrate ForkBase’s performance using three applications: a blockchain platform, a wiki engine and a collaborative analytics application. We conduct extensive experimental evaluation against respective state-of-the-art solutions. The results show that ForkBase achieves superior performance while significantly lowering the development effort. PVLDB Reference Format: Sheng Wang, Tien Tuan Anh Dinh, Qian Lin, Zhongle Xie, Meihui Zhang, Qingchao Cai, Gang Chen, Beng Chin Ooi, Pingcheng Ruan. ForkBase: An Efficient Storage Engine for Blockchain and Forkable Applications. PVLDB, 11(10): 1137-1150, 2018. DOI: https://doi.org/10.14778/3231751.3231762", "title": "" }, { "docid": "4bce4bc5fde90ed5448ee6361a9534ff", "text": "Much of human dialogue occurs in semicooperative settings, where agents with different goals attempt to agree on common decisions. Negotiations require complex communication and reasoning skills, but success is easy to measure, making this an interesting task for AI. We gather a large dataset of human-human negotiations on a multi-issue bargaining task, where agents who cannot observe each other’s reward functions must reach an agreement (or a deal) via natural language dialogue. For the first time, we show it is possible to train end-to-end models for negotiation, which must learn both linguistic and reasoning skills with no annotated dialogue states. We also introduce dialogue rollouts, in which the model plans ahead by simulating possible complete continuations of the conversation, and find that this technique dramatically improves performance. 
Our code and dataset are publicly available.1", "title": "" }, { "docid": "debb7f6f8e00b536dd823c4b513f5950", "text": "It is known that in the Tower of Hanoi graphs there are at most two different shortest paths between any fixed pair of vertices. A formula is given that counts, for a given vertex v, the number of vertices u such that there are two shortest u, v-paths. The formula is expressed in terms of Stern’s diatomic sequence b(n) (n ≥ 0) and implies that only for vertices of degree two this number is zero. Plane embeddings of the Tower of Hanoi graphs are also presented that provide an explicit description of b(n) as the number of elements of the sets of vertices of the Tower of Hanoi graphs intersected by certain lines in the plane. © 2004 Elsevier Ltd. All rights reserved. MSC (2000):05A15; 05C12; 11B83; 51M15", "title": "" }, { "docid": "f01545609634b5aab7e3c1406f93046c", "text": "We present a neural network architecture based on bidirectional LSTMs to compute representations of words in the sentential contexts. These context-sensitive word representations are suitable for, e.g., distinguishing different word senses and other context-modulated variations in meaning. To learn the parameters of our model, we use cross-lingual supervision, hypothesizing that a good representation of a word in context will be one that is sufficient for selecting the correct translation into a second language. We evaluate the quality of our representations as features in three downstream tasks: prediction of semantic supersenses (which assign nouns and verbs into a few dozen semantic classes), low resource machine translation, and a lexical substitution task, and obtain state-of-the-art results on all of these.", "title": "" }, { "docid": "1285bd50bb6462b9864d61a59e77435e", "text": "Precision Agriculture is advancing but not as fast as predicted 5 years ago. The development of proper decision-support systems for implementing precision decisions remains a major stumbling block to adoption. Other critical research issues are discussed, namely, insufficient recognition of temporal variation, lack of whole-farm focus, crop quality assessment methods, product tracking and environmental auditing. A generic research programme for precision agriculture is presented. A typology of agriculture countries is introduced and the potential of each type for precision agriculture discussed.", "title": "" }, { "docid": "e7b1d82b6716434da8bbeeeec895dac4", "text": "Grapevine is one of the most important fruit species in the world. Comparative genome sequencing of grape cultivars is very important for the interpretation of the grape genome and understanding its evolution. The genomes of four Georgian grape cultivars—Chkhaveri, Saperavi, Meskhetian green, and Rkatsiteli, belonging to different haplogroups, were resequenced. The shotgun genomic libraries of grape cultivars were sequenced on an Illumina HiSeq. Pinot Noir nuclear, mitochondrial, and chloroplast DNA were used as reference. Mitochondrial DNA of Chkhaveri closely matches that of the reference Pinot noir mitochondrial DNA, with the exception of 16 SNPs found in the Chkhaveri mitochondrial DNA. The number of SNPs in mitochondrial DNA from Saperavi, Meskhetian green, and Rkatsiteli was 764, 702, and 822, respectively. Nuclear DNA differs from the reference by 1,800,675 nt in Chkhaveri, 1,063,063 nt in Meskhetian green, 2,174,995 in Saperavi, and 5,011,513 in Rkatsiteli. 
Unlike mtDNA Pinot noir, chromosomal DNA is closer to the Meskhetian green than to other cultivars. Substantial differences in the number of SNPs in mitochondrial and nuclear DNA of Chkhaveri and Pinot noir cultivars are explained by backcrossing or introgression of their wild predecessors before or during the process of domestication. Annotation of chromosomal DNA of Georgian grape cultivars by MEGANTE, a web-based annotation system, shows 66,745 predicted genes (Chkhaveri—17,409; Saperavi—17,021; Meskhetian green—18,355; and Rkatsiteli—13,960). Among them, 106 predicted genes and 43 pseudogenes of terpene synthase genes were found in chromosomes 12, 18 random (18R), and 19. Four novel TPS genes not present in reference Pinot noir DNA were detected. Two of them—germacrene A synthase (Chromosome 18R) and (−) germacrene D synthase (Chromosome 19) can be identified as putatively full-length proteins. This work performs the first attempt of the comparative whole genome analysis of different haplogroups of Vitis vinifera cultivars. Based on complete nuclear and mitochondrial DNA sequence analysis, hypothetical phylogeny scheme of formation of grape cultivars is presented.", "title": "" }, { "docid": "2b6b8098ea397f85554113a42876f368", "text": "Teacher efficacy has proved to be powerfully related to many meaningful educational outcomes such as teachers’ persistence, enthusiasm, commitment and instructional behavior, as well as student outcomes such as achievement, motivation, and self-efficacy beliefs. However, persistent measurement problems have plagued those who have sought to study teacher efficacy. We review many of the major measures that have been used to capture the construct, noting problems that have arisen with each. We then propose a promising new measure of teacher efficacy along with validity and reliability data from three separate studies. Finally, new directions for research made possible by this instrument are explored. r 2001 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "e592ccd706b039b12cc4e724a7b217cd", "text": "In fully distributed machine learning, privacy and security are important issues. These issues are often dealt with using secure multiparty computation (MPC). However, in our application domain, known MPC algorithms are not scalable or not robust enough. We propose a light-weight protocol to quickly and securely compute the sum of the inputs of a subset of participants assuming a semi-honest adversary. During the computation the participants learn no individual values. We apply this protocol to efficiently calculate the sum of gradients as part of a fully distributed mini-batch stochastic gradient descent algorithm. The protocol achieves scalability and robustness by exploiting the fact that in this application domain a “quick and dirty” sum computation is acceptable. In other words, speed and robustness takes precedence over precision. We analyze the protocol theoretically as well as experimentally based on churn statistics from a real smartphone trace. We derive a sufficient condition for preventing the leakage of an individual value, and we demonstrate the feasibility of the overhead of the protocol.", "title": "" }, { "docid": "6a4437fa8a5a764d99ed5471401f5ce4", "text": "There is disagreement in the literature about the exact nature of the phenomenon of empathy. There are emotional, cognitive, and conditioning views, applying in varying degrees across species. 
An adequate description of the ultimate and proximate mechanism can integrate these views. Proximately, the perception of an object's state activates the subject's corresponding representations, which in turn activate somatic and autonomic responses. This mechanism supports basic behaviors (e.g., alarm, social facilitation, vicariousness of emotions, mother-infant responsiveness, and the modeling of competitors and predators) that are crucial for the reproductive success of animals living in groups. The Perception-Action Model (PAM), together with an understanding of how representations change with experience, can explain the major empirical effects in the literature (similarity, familiarity, past experience, explicit teaching, and salience). It can also predict a variety of empathy disorders. The interaction between the PAM and prefrontal functioning can also explain different levels of empathy across species and age groups. This view can advance our evolutionary understanding of empathy beyond inclusive fitness and reciprocal altruism and can explain different levels of empathy across individuals, species, stages of development, and situations.", "title": "" }, { "docid": "73128099f3ddd19e4f88d10cdafbd506", "text": "BACKGROUND\nRecently, there has been an increased interest in the effects of essential oils on athletic performances and other physiological effects. This study aimed to assess the effects of Citrus sinensis flower and Mentha spicata leaves essential oils inhalation in two different groups of athlete male students on their exercise performance and lung function.\n\n\nMETHODS\nTwenty physical education students volunteered to participate in the study. The subjects were randomly assigned into two groups: Mentha spicata and Citrus sinensis (ten participants each). One group was nebulized by Citrus sinensis flower oil and the other by Mentha spicata leaves oil in a concentration of (0.02 ml/kg of body mass) which was mixed with 2 ml of normal saline for 5 min before a 1500 m running tests. Lung function tests were measured using a spirometer for each student pre and post nebulization giving the same running distance pre and post oils inhalation.\n\n\nRESULTS\nA lung function tests showed an improvement on the lung status for the students after inhaling of the oils. Interestingly, there was a significant increase in Forced Expiratory Volume in the first second and Forced Vital Capacity after inhalation for the both oils. Moreover significant reductions in the means of the running time were observed among these two groups. The normal spirometry results were 50 %, while after inhalation with M. spicata oil the ratio were 60 %.\n\n\nCONCLUSION\nOur findings support the effectiveness of M. spicata and C. sinensis essential oils on the exercise performance and respiratory function parameters. However, our conclusion and generalisability of our results should be interpreted with caution due to small sample size and lack of control groups, randomization or masking. We recommend further investigations to explain the mechanism of actions for these two essential oils on exercise performance and respiratory parameters.\n\n\nTRIAL REGISTRATION\nISRCTN10133422, Registered: May 3, 2016.", "title": "" }, { "docid": "4a26afba58270d7ce1a0eb50bd659eae", "text": "Recommendation can be reduced to a sub-problem of link prediction, with specific nodes (users and items) and links (similar relations among users/items, and interactions between users and items). 
However, the previous link prediction algorithms need to be modified to suit the recommendation cases since they do not consider the separation of these two fundamental relations: similar or dissimilar and like or dislike. In this paper, we propose a novel and unified way to solve this problem, which models the relation duality using complex number. Under this representation, the previous works can directly reuse. In experiments with the Movie Lens dataset and the Android software website AppChina.com, the presented approach achieves significant performance improvement comparing with other popular recommendation algorithms both in accuracy and coverage. Besides, our results revealed some new findings. First, it is observed that the performance is improved when the user and item popularities are taken into account. Second, the item popularity plays a more important role than the user popularity does in final recommendation. Since its notable performance, we are working to apply it in a commercial setting, AppChina.com website, for application recommendation.", "title": "" }, { "docid": "749cfda68d5d7f09c0861dc723563db9", "text": "BACKGROUND\nOnline social networking use has been integrated into adolescents' daily life and the intensity of online social networking use may have important consequences on adolescents' well-being. However, there are few validated instruments to measure social networking use intensity. The present study aims to develop the Social Networking Activity Intensity Scale (SNAIS) and validate it among junior middle school students in China.\n\n\nMETHODS\nA total of 910 students who were social networking users were recruited from two junior middle schools in Guangzhou, and 114 students were retested after two weeks to examine the test-retest reliability. The psychometrics of the SNAIS were estimated using appropriate statistical methods.\n\n\nRESULTS\nTwo factors, Social Function Use Intensity (SFUI) and Entertainment Function Use Intensity (EFUI), were clearly identified by both exploratory and confirmatory factor analyses. No ceiling or floor effects were observed for the SNAIS and its two subscales. The SNAIS and its two subscales exhibited acceptable reliability (Cronbach's alpha = 0.89, 0.90 and 0.60, and test-retest Intra-class Correlation Coefficient = 0.85, 0.87 and 0.67 for Overall scale, SFUI and EFUI subscale, respectively, p<0.001). As expected, the SNAIS and its subscale scores were correlated significantly with emotional connection to social networking, social networking addiction, Internet addiction, and characteristics related to social networking use.\n\n\nCONCLUSIONS\nThe SNAIS is an easily self-administered scale with good psychometric properties. It would facilitate more research in this field worldwide and specifically in the Chinese population.", "title": "" }, { "docid": "772193675598233ba1ab60936b3091d4", "text": "The proposed quasiresonant control scheme can be widely used in a dc-dc flyback converter because it can achieve high efficiency with minimized external components. The proposed dynamic frequency selector improves conversion efficiency especially at light loads to meet the requirement of green power since the converter automatically switches to the discontinuous conduction mode for reducing the switching frequency and the switching power loss. Furthermore, low quiescent current can be guaranteed by the constant current startup circuit to further reduce power loss after the startup procedure. 
The test chip fabricated in VIS 0.5 μm 500 V UHV process occupies an active silicon area of 3.6 mm 2. The peak efficiency can achieve 92% at load of 80 W and 85% efficiency at light load of 5 W.", "title": "" }, { "docid": "903d67c31159d95921f160700d876cf2", "text": "Second Life (SL) is currently the most mature and popular multi-user virtual world platform being used in education. Through an in-depth examination of SL, this article explores its potential and the barriers that multi-user virtual environments present to educators wanting to use immersive 3-D spaces in their teaching. The context is set by tracing the history of virtual worlds back to early multi-user online computer gaming environments and describing the current trends in the development of 3-D immersive spaces. A typology for virtual worlds is developed and the key features that have made unstructured 3-D spaces so attractive to educators are described. The popularity in use of SL is examined through three critical components of the virtual environment experience: technical, immersive and social. From here, the paper discusses the affordances that SL offers for educational activities and the types of teaching approaches that are being explored by institutions. The work concludes with a critical analysis of the barriers to successful implementation of SL as an educational tool and maps a number of developments that are underway to address these issues across virtual worlds more broadly. Introduction The story of virtual worlds is one that cannot be separated from technological change. As we witness increasing maturity and convergence in broadband, wireless computing, British Journal of Educational Technology Vol 40 No 3 2009 414–426 doi:10.1111/j.1467-8535.2009.00952.x © 2009 The Author. Journal compilation © 2009 Becta. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. video and audio technologies, we see virtual immersive environments becoming more practical and useable. In this article, I review the present socio-technical environment of virtual worlds, and draw on an analysis of Second Life (SL) to outline the potential for and the barriers to successful implementation of 3-D immersive spaces in education. Virtual worlds have existed in some form since the early 1980s, but their absolute definition remains contested. This reflects the general nature of a term that draws on multiple writings of the virtual and the difficulties in attempting to fix descriptions in an area that is undergoing persistent technological development. The numerous contextual descriptions that have appeared, from the perspectives of writers, academics, industry professionals and the media, have further complicated agreement on a common understanding of virtual worlds. Bell (2008) has approached this problem by suggesting a combined definition based on the work of Bartle (2004), Castronova (2004) and Koster (2004), drawing the work together using key terms that relate to: synchronicity, persistence, network of people, avatar representation and facilitation of the experience by networked computers. 
But perhaps the most satisfying and simplest insight comes from Schroeder (1996, 2008) who has consistently argued that virtual environments and virtual reality technologies should be defined as: A computer-generated display that allows or compels the user (or users) to have a sense of being present in an environment other than the one they are actually in, and to interact with that environment (Schroeder, 1996, p. 25) In other words, a virtual world provides an experience set within a technological environment that gives the user a strong sense of being there. The multi-user virtual environments (MUVEs) of today share common features that reflect their roots in the gaming worlds of multi-user dungeons and massively multiplayer online games (MMOs), made more popular in recent times through titles such as NeverWinter Nights and World of Warcraft, both based on the Dungeons and Dragons genre of role-playing game. Virtual worlds may appear in different forms yet they possess a number of recurrent features that include: • persistence of the in-world environment • a shared space allowing multiple users to participate simultaneously • virtual embodiment in the form of an avatar (a personisable 3-D representation of the self) • interactions that occur between users and objects in a 3-D environment • an immediacy of action such that interactions occur in real time • similarities to the real world such as topography, movement and physics that provide the illusion of being there. (Smart, Cascio & Paffendof, 2007) These are features compelling enough to attract more than 300 million registered users to spend part of their time within commercial social and gaming virtual worlds (Hays, 2008). From MMOs and MUVEs to SL What marks a significant difference between MUVEs and MMOs is the lack of a predetermined narrative or plot-driven storyline. In the worlds exemplified by SL, there is no natural purpose unless one is created or built. Here, social interaction exists not as a precursor to goal-oriented action but rather, it occurs within an open-ended system that offers a number of freedoms to the player, such as: the creation and ownership of objects; the creation of interpersonal networks; and monetary transactions that occur within a tangible economic structure (Castronova, 2004; Ludlow & Wallace, 2007). It is primarily this open-endedness, combined with the ability to create content and shape the virtual environment in an almost infinite number of ways, which has attracted educators to the possibilities afforded by immersive 3-D spaces. A typology of virtual worlds Within the broad panorama of virtual environments, we can find offerings from both open source projects and proprietary vendors. These include the worlds of OpenSim, Croquet Consortium, ActiveWorlds, Project Wonderland, There, Olive and Twinity. We can identify a number of approaches to platform development and delivery each defined by their perceived target audience. For example, Olive specifically markets itself towards large institutions and enterprise-level productivity. MUVEs, therefore, can be categorised in a number of ways. In the typology shown in Table 1, a number of extant 3-D virtual worlds are grouped by their narrative approach and 3-D representational system. There are several alternative categorisations that have been suggested. 
Messinger, Stroulia and Lyons (2008) builds their typology on Porter’s (2004) original typology of virtual communities where the five key elements of purpose, place, platform, population and profit models are identified. Messenger uses this alternative typology productively to help identify the historic antecedents of virtual worlds, their future applications and topics for future research. What both these typologies demonstrate is that there is a range of virtual worlds, which offer distinctly different settings in which to site educational interventions. Within the typology outlined in Table 1, concrete educational activity can be identified in all four of the virtual world categories listed. The boundaries between these categories are soft and reflect the flexibility of some virtual worlds to provide more than one form of use. This is particularly true of SL, and has contributed to this platform’s high profile in comparison to other contemporary MUVEs. SL as the platform of choice for education SL represents the most mature of the social virtual world platforms, and the high usage figures compared with other competing platforms reflects this dominance within the educational world. The regular Eduserv virtual worlds survey conducted among UK tertiary educators has identified SL as the most popular educational MUVE: Table 1: A typology of 3-D virtual worlds (adapted from McKeown, 2007). Column headings: Flexible narrative; Social world; Simulation; Workspace. Row entries: Games (MMPORGs) and serious games; Social platforms, 3-D chatrooms and virtual world generators; Simulations or reflections of the ‘real’; 3-D realisation of CSCWs; World of Warcraft; NeverWinter Nights", "title": "" }, { "docid": "9c637dff0539c6a80ecceb8e9fa9d567", "text": "Learning the stress patterns of English words presents a challenge for L1 speakers from syllable-timed and/or tone languages. Realization of stress contrasts in previous studies has been measured in a variety of ways. This study adapts and extends Pairwise Variability Index (PVI), a method generally used to measure duration as a property of speech rhythm, to compare F0 and amplitude contrasts across L1 and L2 production of stressed and unstressed syllables in English multisyllabic words. L1 North American English and L1 Taiwan-Mandarin English speech data were extracted from the AESOP-ILAS corpus. Results of acoustic analysis show that overall, stress contrasts were realized most robustly by L1 English speakers. A general pattern of contrast underdifferentiation was found in L2 speakers with respect to F0, duration and intensity, with the most striking difference found in F0. These results corroborate our earlier findings on L1 Mandarin speakers’ production of on-focus/post-focus contrasts in their realization of English narrow focus. 
Taken together, these results demonstrate that underdifferentiation of prosodic contrasts at both the lexical and phrase levels is a major prosodic feature of Taiwan English; future research will determine whether it can also be found in the L2 English of other syllable-timed or tone language speakers.", "title": "" }, { "docid": "d658b95cc9dc81d0dbb3918795ccab50", "text": "A brain–computer interface (BCI) is a communication channel which does not depend on the brain’s normal output pathways of peripheral nerves and muscles [1–3]. It supplies paralyzed patients with a new approach to communicate with the environment. Among various brain monitoring methods employed in current BCI research, electroencephalogram (EEG) is the main interest due to its advantages of low cost, convenient operation and non-invasiveness. In present-day EEG-based BCIs, the following signals have been paid much attention: visual evoked potential (VEP), sensorimotor mu/beta rhythms, P300 evoked potential, slow cortical potential (SCP), and movement-related cortical potential (MRCP). Details about these signals can be found in chapter “Brain Signals for Brain–Computer Interfaces”. These systems offer some practical solutions (e.g., cursor movement and word processing) for patients with motor disabilities. In this chapter, practical designs of several BCIs developed in Tsinghua University will be introduced. First of all, we will propose the paradigm of BCIs based on the modulation of EEG rhythms and challenges confronting practical system designs. In Sect. 2, modulation and demodulation methods of EEG rhythms will be further explained. Furthermore, practical designs of a VEP-based BCI and a motor imagery based BCI will be described in Sect. 3. Finally, Sect. 4 will present some real-life application demos using these practical BCI systems.", "title": "" }, { "docid": "e0e00fdfecc4a23994315579938f740e", "text": "Budget allocation in online advertising deals with distributing the campaign (insertion order) level budgets to different sub-campaigns which employ different targeting criteria and may perform differently in terms of return-on-investment (ROI). In this paper, we present the efforts at Turn on how to best allocate campaign budget so that the advertiser or campaign-level ROI is maximized. To do this, it is crucial to be able to correctly determine the performance of sub-campaigns. This determination is highly related to the action-attribution problem, i.e. to be able to find out the set of ads, and hence the sub-campaigns that provided them to a user, that an action should be attributed to. For this purpose, we employ both last-touch (last ad gets all credit) and multi-touch (many ads share the credit) attribution methodologies. We present the algorithms deployed at Turn for the attribution problem, as well as their parallel implementation on the large advertiser performance datasets. We conclude the paper with our empirical comparison of last-touch and multi-touch attribution-based budget allocation in a real online advertising setting.", "title": "" }, { "docid": "f53d13eeccff0048fc96e532a52a2154", "text": "The physical principles underlying some current biomedical applications of magnetic nanoparticles are reviewed. Starting from well-known basic concepts, and drawing on examples from biology and biomedicine, the relevant physics of magnetic materials and their responses to applied magnetic fields are surveyed. 
The way these properties are controlled and used is illustrated with reference to (i) magnetic separation of labelled cells and other biological entities; (ii) therapeutic drug, gene and radionuclide delivery; (iii) radio frequency methods for the catabolism of tumours via hyperthermia; and (iv) contrast enhancement agents for magnetic resonance imaging applications. Future prospects are also discussed.", "title": "" }, { "docid": "10959ca4eaa8d8a44629255e98e104da", "text": "Millimeter-wave (mm-wave) wireless local area networks (WLANs) are expected to provide multi-Gbps connectivity by exploiting the large amount of unoccupied spectrum in e.g. the unlicensed 60 GHz band. However, to overcome the high path loss inherent at these high frequencies, mm-wave networks must employ highly directional beamforming antennas, which makes link establishment and maintenance much more challenging than in traditional omnidirectional networks. In particular, maintaining connectivity under node mobility necessitates frequent re-steering of the transmit and receive antenna beams to re-establish a directional mm-wave link. A simple exhaustive sequential scanning to search for new feasible antenna sector pairs may introduce excessive delay, potentially disrupting communication and lowering the QoS. In this paper, we propose a smart beam steering algorithm for fast 60 GHz link re-establishment under node mobility, which uses knowledge of previous feasible sector pairs to narrow the sector search space, thereby reducing the associated latency overhead. We evaluate the performance of our algorithm in several representative indoor scenarios, based on detailed simulations of signal propagation in a 60 GHz WLAN in WinProp with realistic building materials. We study the effect of indoor layout, antenna sector beamwidth, node mobility pattern, and device orientation awareness. Our results show that the smart beam steering algorithm achieves a 7-fold reduction of the sector search space on average, which directly translates into lower 60 GHz link re-establishment latency. Our results also show that our fast search algorithm selects the near-optimal antenna sector pair for link re-establishment.", "title": "" } ]
scidocsrr
2573b2c0bf97517508e4e11f6a91a414
DeSTNet: Densely Fused Spatial Transformer Networks
[ { "docid": "b16992ec2416b420b2115037c78cfd4b", "text": "Dictionary learning algorithms or supervised deep convolution networks have considerably improved the efficiency of predefined feature representations such as SIFT. We introduce a deep scattering convolution network, with complex wavelet filters over spatial and angular variables. This representation brings an important improvement to results previously obtained with predefined features over object image databases such as Caltech and CIFAR. The resulting accuracy is comparable to results obtained with unsupervised deep learning and dictionary based representations. This shows that refining image representations by using geometric priors is a promising direction to improve image classification and its understanding.", "title": "" } ]
[ { "docid": "1a7c72a1353e7983c5b55c82be70488d", "text": "education Ph.D. candidate, EECS, University of California, Berkeley, Spring 2019 (Expected). Advised by Prof. Benjamin Recht. S.M., EECS, Massachusetts Institute of Technology, Spring 2014. Advised by Prof. Samuel Madden. Thesis: Fast Transactions for Multicore In-Memory Databases. B.A., Computer Science, University of California, Berkeley, Fall 2010. B.S., Mechanical Engineering, University of California, Berkeley, Fall 2010.", "title": "" }, { "docid": "71e275e9bb796bda3279820bfdd1dafb", "text": "Alex M. Brooks Doctor of Philosophy The University of Sydney January 2007 Parametric POMDPs for Planning in Continuous State Spaces This thesis is concerned with planning and acting under uncertainty in partially-observable continuous domains. In particular, it focusses on the problem of mobile robot navigation given a known map. The dominant paradigm for robot localisation is to use Bayesian estimation to maintain a probability distribution over possible robot poses. In contrast, control algorithms often base their decisions on the assumption that a single state, such as the mode of this distribution, is correct. In scenarios involving significant uncertainty, this can lead to serious control errors. It is generally agreed that the reliability of navigation in uncertain environments would be greatly improved by the ability to consider the entire distribution when acting, rather than the single most likely state. The framework adopted in this thesis for modelling navigation problems mathematically is the Partially Observable Markov Decision Process (POMDP). An exact solution to a POMDP problem provides the optimal balance between reward-seeking behaviour and information-seeking behaviour, in the presence of sensor and actuation noise. Unfortunately, previous exact and approximate solution methods have had difficulty scaling to real applications. The contribution of this thesis is the formulation of an approach to planning in the space of continuous parameterised approximations to probability distributions. Theoretical and practical results are presented which show that, when compared with similar methods from the literature, this approach is capable of scaling to larger and more realistic problems. In order to apply the solution algorithm to real-world problems, a number of novel improvements are proposed. Specifically, Monte Carlo methods are employed to estimate distributions over future parameterised beliefs, improving planning accuracy without a loss of efficiency. Conditional independence assumptions are exploited to simplify the problem, reducing computational requirements. Scalability is further increased by focussing computation on likely beliefs, using metric indexing structures for efficient function approximation. Local online planning is incorporated to assist global offline planning, allowing the precision of the latter to be decreased without adversely affecting solution quality. Finally, the algorithm is implemented and demonstrated during real-time control of a mobile robot in a challenging navigation task. We argue that this task is substantially more challenging and realistic than previous problems to which POMDP solution methods have been applied. 
Results show that POMDP planning, which considers the evolution of the entire probability distribution over robot poses, produces significantly more robust behaviour when compared with a heuristic planner which considers only the most likely states and outcomes.", "title": "" }, { "docid": "0b7f00dcdfdd1fe002b2363097914bba", "text": "A new field of research, visual analytics, has been introduced. This has been defined as \"the science of analytical reasoning facilitated by interactive visual interfaces\" (Thomas and Cook, 2005). Visual analytic environments, therefore, support analytical reasoning using visual representations and interactions, with data representations and transformation capabilities, to support production, presentation, and dissemination. As researchers begin to develop visual analytic environments, it is advantageous to develop metrics and methodologies to help researchers measure the progress of their work and understand the impact their work has on the users who work in such environments. This paper presents five areas or aspects of visual analytic environments that should be considered as metrics and methodologies for evaluation are developed. Evaluation aspects need to include usability, but it is necessary to go beyond basic usability. The areas of situation awareness, collaboration, interaction, creativity, and utility are proposed as the five evaluation areas for initial consideration. The steps that need to be undertaken to develop systematic evaluation methodologies and metrics for visual analytic environments are outlined", "title": "" }, { "docid": "60f6e3345aae1f91acb187ba698f073b", "text": "A Cube-Satellite (CubeSat) is a small satellite weighing no more than one kilogram. CubeSats are used for space research, but their low-rate communication capability limits functionality. As greater payload and instrumentation functions are sought, increased data rate is needed. Since most CubeSats currently transmit at a 437 MHz frequency, several directional antenna types were studied for a 2.45 GHz, larger bandwidth transmission. This higher frequency provides the bandwidth needed for increasing the data rate. A deployable antenna mechanism maybe needed because most directional antennas are bigger than the CubeSat size constraints. From the study, a deployable hemispherical helical antenna prototype was built. Transmission between two prototype antenna equipped transceivers at varying distances tested the helical performance. When comparing the prototype antenna's maximum transmission distance to the other commercial antennas, the prototype outperformed all commercial antennas, except the patch antenna. The root cause was due to the helical antenna's narrow beam width. Future work can be done in attaining a more accurate alignment with the satellite's directional antenna to downlink with a terrestrial ground station.", "title": "" }, { "docid": "14b6b544144d6c14cb283fd0ac8308d8", "text": "Disrupted daily or circadian rhythms of lung function and inflammatory responses are common features of chronic airway diseases. At the molecular level these circadian rhythms depend on the activity of an autoregulatory feedback loop oscillator of clock gene transcription factors, including the BMAL1:CLOCK activator complex and the repressors PERIOD and CRYPTOCHROME. The key nuclear receptors and transcription factors REV-ERBα and RORα regulate Bmal1 expression and provide stability to the oscillator. 
Circadian clock dysfunction is implicated in both immune and inflammatory responses to environmental, inflammatory, and infectious agents. Molecular clock function is altered by exposomes, tobacco smoke, lipopolysaccharide, hyperoxia, allergens, bleomycin, as well as bacterial and viral infections. The deacetylase Sirtuin 1 (SIRT1) regulates the timing of the clock through acetylation of BMAL1 and PER2 and controls the clock-dependent functions, which can also be affected by environmental stressors. Environmental agents and redox modulation may alter the levels of REV-ERBα and RORα in lung tissue in association with a heightened DNA damage response, cellular senescence, and inflammation. A reciprocal relationship exists between the molecular clock and immune/inflammatory responses in the lungs. Molecular clock function in lung cells may be used as a biomarker of disease severity and exacerbations or for assessing the efficacy of chronotherapy for disease management. Here, we provide a comprehensive overview of clock-controlled cellular and molecular functions in the lungs and highlight the repercussions of clock disruption on the pathophysiology of chronic airway diseases and their exacerbations. Furthermore, we highlight the potential for the molecular clock as a novel chronopharmacological target for the management of lung pathophysiology.", "title": "" }, { "docid": "067eca04f9a60ae7cc4b77faa478ab22", "text": "The E. coli cytosine deaminase (CD) provides a negative selection system for suicide gene therapy as CD transfectants are eliminated following 5-fluorocytosine (5FC) treatment. Here we report a positive selection system for the CD gene using 5-fluorouracil (5FU) and cytosine in selection medium to screen for CD-positive transfectants. It is based on the relief of 5FU toxicity by uracil which is converted from cytosine via CD catalysis, as uracil competes with the toxic 5FU in subsequent pyrimidine metabolism. Hence, a retroviral vector containing the CD gene may pro- vide both positive and negative selections after gene transfer. The CD transfectants selected with the positive selection system showed susceptibility to 5FC in subsequent negative selection in vitro and in vivo. Therefore, this dual selection system is useful not only for combination therapy with transgene and CD gene, but can also act to eliminate selectively transduced cells after the transgene has furnished its effects or upon undesired conditions if 5FC is applied for negative selection in vivo.", "title": "" }, { "docid": "5e0fe6fb7c9b088540d571cea266d61e", "text": "With the rapid prevalence of smart mobile devices, the number of mobile Apps available has exploded over the past few years. To facilitate the choice of mobile Apps, existing mobile App recommender systems typically recommend popular mobile Apps to mobile users. However, mobile Apps are highly varied and often poorly understood, particularly for their activities and functions related to privacy and security. Therefore, more and more mobile users are reluctant to adopt mobile Apps due to the risk of privacy invasion and other security concerns. To fill this crucial void, in this paper, we propose to develop a mobile App recommender system with privacy and security awareness. The design goal is to equip the recommender system with the functionality which allows to automatically detect and evaluate the security risk of mobile Apps. 
Then, the recommender system can provide App recommendations by considering both the Apps' popularity and the users' security preferences. Specifically, a mobile App can lead to security risk because insecure data access permissions have been implemented in this App. Therefore, we first develop the techniques to automatically detect the potential security risk for each mobile App by exploiting the requested permissions. Then, we propose a flexible approach based on modern portfolio theory for recommending Apps by striking a balance between the Apps' popularity and the users' security concerns, and build an App hash tree to efficiently recommend Apps. Finally, we evaluate our approach with extensive experiments on a large-scale data set collected from Google Play. The experimental results clearly validate the effectiveness of our approach.", "title": "" }, { "docid": "2d151d6dcefa227e6ea90637c3f220dd", "text": "A wide range of approximate methods has been historically proposed for performance-based assessment of frame buildings in the aftermath of an earthquake. Most of these methods typically require a detailed analytical model representation of the respective building in order to assess its seismic vulnerability and post-earthquake functionality. This paper proposes an approximate method for estimating story-based engineering demand parameters (EDPs) such as peak story drift ratios, peak floor absolute accelerations, and residual story drift ratios in steel frame buildings with steel moment-resisting frames (MRFs). The proposed method is based on concepts from structural health monitoring, which does not require the use of detailed analytical models for structural and non-structural damage diagnosis. The proposed method is able to compute story-based EDPs in steel frame buildings with MRFs with reasonable accuracy. Such EDPs can facilitate damage assessment/control as well as building-specific seismic loss assessment. The proposed method is utilized to assess the extent of structural damage in an instrumented steel frame building that experienced the 1994 Northridge earthquake.", "title": "" }, { "docid": "f53d8be1ec89cb8a323388496d45dcd0", "text": "While Processing-in-Memory has been investigated for decades, it has not been embraced commercially. A number of emerging technologies have renewed interest in this topic. In particular, the emergence of 3D stacking and the imminent release of Micron's Hybrid Memory Cube device have made it more practical to move computation near memory. However, the literature is missing a detailed analysis of a killer application that can leverage a Near Data Computing (NDC) architecture. This paper focuses on in-memory MapReduce workloads that are commercially important and are especially suitable for NDC because of their embarrassing parallelism and largely localized memory accesses. The NDC architecture incorporates several simple processing cores on a separate, non-memory die in a 3D-stacked memory package; these cores can perform Map operations with efficient memory access and without hitting the bandwidth wall. This paper describes and evaluates a number of key elements necessary in realizing efficient NDC operation: (i) low-EPI cores, (ii) long daisy chains of memory devices, (iii) the dynamic activation of cores and SerDes links. 
Compared to a baseline that is heavily optimized for MapReduce execution, the NDC design yields up to 15X reduction in execution time and 18X reduction in system energy.", "title": "" }, { "docid": "69039983940e885fb261107d78edc258", "text": "Using generic interpolation machinery based on solving Poisson equations, a variety of novel tools are introduced for seamless editing of image regions. The first set of tools permits the seamless importation of both opaque and transparent source image regions into a destination region. The second set is based on similar mathematical ideas and allows the user to modify the appearance of the image seamlessly, within a selected region. These changes can be arranged to affect the texture, the illumination, and the color of objects lying in the region, or to make tileable a rectangular selection.", "title": "" }, { "docid": "b1b2a83d67456c0f0bf54092cbb06e65", "text": "The transmission of voice communications as datagram packets over IP networks, commonly known as voice-over-IP (VoIP) telephony, is rapidly gaining wide acceptance. With private phone conversations being conducted on insecure public networks, security of VoIP communications is increasingly important. We present a structured security analysis of the VoIP protocol stack, which consists of signaling (SIP), session description (SDP), key establishment (SDES, MIKEY, and ZRTP) and secure media transport (SRTP) protocols. Using a combination of manual and tool-supported formal analysis, we uncover several design flaws and attacks, most of which are caused by subtle inconsistencies between the assumptions that protocols at different layers of the VoIP stack make about each other. The most serious attack is a replay attack on SDES, which causes SRTP to repeat the keystream used for media encryption, thus completely breaking transport-layer security. We also demonstrate a man-in-the-middle attack on ZRTP, which allows the attacker to convince the communicating parties that they have lost their shared secret. If they are using VoIP devices without displays and thus cannot execute the \"human authentication\" procedure, they are forced to communicate insecurely, or not communicate at all, i.e., this becomes a denial of service attack. Finally, we show that the key derivation process used in MIKEY cannot be used to prove security of the derived key in the standard cryptographic model for secure key exchange.", "title": "" }, { "docid": "46004ee1f126c8a5b76166c5dc081bc8", "text": "In this study, an energy harvesting chip was developed to scavenge energy from artificial light to charge a wireless sensor node. The chip core is a miniature transformer with a nano-ferrofluid magnetic core. The chip embedded transformer can convert harvested energy from its solar cell to variable voltage output for driving multiple loads. This chip system yields a simple, small, and more importantly, a battery-less power supply solution. The sensor node is equipped with multiple sensors that can be enabled by the energy harvesting power supply to collect information about the human body comfort degree. Compared with lab instruments, the nodes with temperature, humidity and photosensors driven by harvested energy had variation coefficient measurement precision of less than 6% deviation under low environmental light of 240 lux. The thermal comfort was affected by the air speed. A flow sensor equipped on the sensor node was used to detect airflow speed. 
Due to its high power consumption, this sensor node provided 15% less accuracy than the instruments, but it still can meet the requirement of analysis for predicted mean votes (PMV) measurement. The energy harvesting wireless sensor network (WSN) was deployed in a 24-hour convenience store to detect thermal comfort degree from the air conditioning control. During one year operation, the sensor network powered by the energy harvesting chip retained normal functions to collect the PMV index of the store. According to the one month statistics of communication status, the packet loss rate (PLR) is 2.3%, which is as good as the presented results of those WSNs powered by battery. Referring to the electric power records, almost 54% energy can be saved by the feedback control of an energy harvesting sensor network. These results illustrate that, scavenging energy not only creates a reliable power source for electronic devices, such as wireless sensor nodes, but can also be an energy source by building an energy efficient program.", "title": "" }, { "docid": "3a29bbe76a53c8284123019eba7e0342", "text": "Although von Ammon' first used the term blepharphimosis in 1841, it was Vignes2 in 1889 who first associated blepharophimosis with ptosis and epicanthus inversus. In 1921, Dimitry3 reported a family in which there were 21 affected subjects in five generations. He described them as having ptosis alone and did not specify any other features, although photographs in the report show that they probably had the full syndrome. Dimitry's pedigree was updated by Owens et a/ in 1960. The syndrome appeared in both sexes and was transmitted as a Mendelian dominant. In 1935, Usher5 reviewed the reported cases. By then, 26 pedigrees had been published with a total of 175 affected persons with transmission mainly through affected males. There was no consanguinity in any pedigree. In three pedigrees, parents who obviously carried the gene were unaffected. Well over 150 families have now been reported and there is no doubt about the autosomal dominant pattern of inheritance. However, like Usher,5 several authors have noted that transmission is mainly through affected males and less commonly through affected females.4 6 Reports by Moraine et al7 and Townes and Muechler8 have described families where all affected females were either infertile with primary or secondary amenorrhoea or had menstrual irregularity. Zlotogora et a/9 described one family and analysed 38 families reported previously. They proposed the existence of two types: type I, the more common type, in which the syndrome is transmitted by males only and affected females are infertile, and type II, which is transmitted by both affected females and males. There is male to male transmission in both types and both are inherited as an autosomal dominant trait. They found complete penetrance in type I and slightly reduced penetrance in type II.", "title": "" }, { "docid": "3b5dcd12c1074100ffede33c8b3a680c", "text": "This paper proposes a two-stream flow-guided convolutional attention networks for action recognition in videos. The central idea is that optical flows, when properly compensated for the camera motion, can be used to guide attention to the human foreground. We thus develop crosslink layers from the temporal network (trained on flows) to the spatial network (trained on RGB frames). These crosslink layers guide the spatial-stream to pay more attention to the human foreground areas and be less affected by background clutter. 
We obtain promising performances with our approach on the UCF101, HMDB51 and Hollywood2 datasets.", "title": "" }, { "docid": "747f56b1b03fdb77042597f2f44730d6", "text": "We introduce KBGAN, an adversarial learning framework to improve the performances of a wide range of existing knowledge graph embedding models. Because knowledge graphs typically only contain positive facts, sampling useful negative training examples is a nontrivial task. Replacing the head or tail entity of a fact with a uniformly randomly selected entity is a conventional method for generating negative facts, but the majority of the generated negative facts can be easily discriminated from positive facts, and will contribute little towards the training. Inspired by generative adversarial networks (GANs), we use one knowledge graph embedding model as a negative sample generator to assist the training of our desired model, which acts as the discriminator in GANs. This framework is independent of the concrete form of generator and discriminator, and therefore can utilize a wide variety of knowledge graph embedding models as its building blocks. In experiments, we adversarially train two translation-based models, TRANSE and TRANSD, each with assistance from one of the two probability-based models, DISTMULT and COMPLEX. We evaluate the performances of KBGAN on the link prediction task, using three knowledge base completion datasets: FB15k-237, WN18 and WN18RR. Experimental results show that adversarial training substantially improves the performances of target embedding models under various settings.", "title": "" }, { "docid": "87aef15dc90a8981bda3fcc5b8045d7c", "text": "Human groups show structured levels of genetic similarity as a consequence of factors such as geographical subdivision and genetic drift. Surveying this structure gives us a scientific perspective on human origins, sheds light on evolutionary processes that shape both human adaptation and disease, and is integral to effectively carrying out the mission of global medical genetics and personalized medicine. Surveys of population structure have been ongoing for decades, but in the past three years, single-nucleotide-polymorphism (SNP) array technology has provided unprecedented detail on human population structure at global and regional scales. These studies have confirmed well-known relationships between distantly related populations and uncovered previously unresolvable relationships among closely related human groups. SNPs represent the first dense genome-wide markers, and as such, their analysis has raised many challenges and insights relevant to the study of population genetics with whole-genome sequences. Here we draw on the lessons from these studies to anticipate the directions that will be most fruitful to pursue during the emerging whole-genome sequencing era.", "title": "" }, { "docid": "c4a2b13eb9d8d9840ff246e02b02f85f", "text": "In this paper, we study the problem of designing efficient convolutional neural network architectures with the interest in eliminating the redundancy in convolution kernels. In addition to structured sparse kernels, low-rank kernels and the product of low-rank kernels, the product of structured sparse kernels, which is a framework for interpreting the recently-developed interleaved group convolutions (IGC) and its variants (e.g., Xception), has been attracting increasing interests. 
Motivated by the observation that the convolutions contained in a group convolution in IGC can be further decomposed in the same manner, we present a modularized building block, IGC-V2: interleaved structured sparse convolutions. It generalizes interleaved group convolutions, which is composed of two structured sparse kernels, to the product of more structured sparse kernels, further eliminating the redundancy. We present the complementary condition and the balance condition to guide the design of structured sparse kernels, obtaining a balance among three aspects: model size, computation complexity and classification accuracy. Experimental results demonstrate the advantage on the balance among these three aspects compared to interleaved group convolutions and Xception, and competitive performance compared to other state-of-the-art architecture design methods.", "title": "" }, { "docid": "23d9479a38afa6e8061fe431047bed4e", "text": "We introduce cMix, a new approach to anonymous communications. Through a precomputation, the core cMix protocol eliminates all expensive realtime public-key operations—at the senders, recipients and mixnodes—thereby decreasing real-time cryptographic latency and lowering computational costs for clients. The core real-time phase performs only a few fast modular multiplications. In these times of surveillance and extensive profiling there is a great need for an anonymous communication system that resists global attackers. One widely recognized solution to the challenge of traffic analysis is a mixnet, which anonymizes a batch of messages by sending the batch through a fixed cascade of mixnodes. Mixnets can offer excellent privacy guarantees, including unlinkability of sender and receiver, and resistance to many traffic-analysis attacks that undermine many other approaches including onion routing. Existing mixnet designs, however, suffer from high latency in part because of the need for real-time public-key operations. Precomputation greatly improves the real-time performance of cMix, while its fixed cascade of mixnodes yields the strong anonymity guarantees of mixnets. cMix is unique in not requiring any real-time public-key operations by users. Consequently, cMix is the first mixing suitable for low latency chat for lightweight devices. Our presentation includes a specification of cMix, security arguments, anonymity analysis, and a performance comparison with selected other approaches. We also give benchmarks from our prototype.", "title": "" }, { "docid": "ef5cfd6c5eaf48805e39a9eb454aa7b9", "text": "Neural networks are artificial learning systems. For more than two decades, they have help for detecting hostile behaviors in a computer system. This review describes those systems and theirs limits. It defines and gives neural networks characteristics. It also itemizes neural networks which are used in intrusion detection systems. The state of the art on IDS made from neural networks is reviewed. In this paper, we also make a taxonomy and a comparison of neural networks intrusion detection systems. We end this review with a set of remarks and future works that can be done in order to improve the systems that have been presented. This work is the result of a meticulous scan of the literature.", "title": "" }, { "docid": "c346ddfd1247d335c1a45d094ae2bb60", "text": "In this paper we introduce a novel approach for stereoscopic rendering of virtual environments with a wide Field-of-View (FoV) up to 360°. 
Handling such a wide FoV implies the use of non-planar projections and generates specific problems such as for rasterization and clipping of primitives. We propose a novel pre-clip stage specifically adapted to geometric approaches for which problems occur with polygons spanning across the projection discontinuities. Our approach integrates seamlessly with immersive virtual reality systems as it is compatible with stereoscopy, head-tracking, and multi-surface projections. The benchmarking of our approach with different hardware setups could show that it is well compliant with real-time constraint, and capable of displaying a wide range of FoVs. Thus, our geometric approach could be used in various VR applications in which the user needs to extend the FoV and apprehend more visual information.", "title": "" } ]
scidocsrr
eade12f9ddcc19e505e9e3b57398c1a0
Curiosity and exploration: facilitating positive subjective experiences and personal growth opportunities.
[ { "docid": "971e39e4b99695f249ec1d367b5044f0", "text": "Research on curiosity has undergone 2 waves of intense activity. The 1st, in the 1960s, focused mainly on curiosity's psychological underpinnings. The 2nd, in the 1970s and 1980s, was characterized by attempts to measure curiosity and assess its dimensionality. This article reviews these contributions with a concentration on the 1st wave. It is argued that theoretical accounts of curiosity proposed during the 1st period fell short in 2 areas: They did not offer an adequate explanation for why people voluntarily seek out curiosity, and they failed to delineate situational determinants of curiosity. Furthermore, these accounts did not draw attention to, and thus did not explain, certain salient characteristics of curiosity: its intensity, transience, association with impulsivity, and tendency to disappoint when satisfied. A new account of curiosity is offered that attempts to address these shortcomings. The new account interprets curiosity as a form of cognitively induced deprivation that arises from the perception of a gap in knowledge or understanding.", "title": "" } ]
[ { "docid": "f0958d2c952c7140c998fa13a2bf4374", "text": "OBJECTIVE\nThe objective of this study is to outline explicit criteria for assessing the contribution of qualitative empirical studies in health and medicine, leading to a hierarchy of evidence specific to qualitative methods.\n\n\nSTUDY DESIGN AND SETTING\nThis paper arose from a series of critical appraisal exercises based on recent qualitative research studies in the health literature. We focused on the central methodological procedures of qualitative method (defining a research framework, sampling and data collection, data analysis, and drawing research conclusions) to devise a hierarchy of qualitative research designs, reflecting the reliability of study conclusions for decisions made in health practice and policy.\n\n\nRESULTS\nWe describe four levels of a qualitative hierarchy of evidence-for-practice. The least likely studies to produce good evidence-for-practice are single case studies, followed by descriptive studies that may provide helpful lists of quotations but do not offer detailed analysis. More weight is given to conceptual studies that analyze all data according to conceptual themes but may be limited by a lack of diversity in the sample. Generalizable studies using conceptual frameworks to derive an appropriately diversified sample with analysis accounting for all data are considered to provide the best evidence-for-practice. Explicit criteria and illustrative examples are described for each level.\n\n\nCONCLUSION\nA hierarchy of evidence-for-practice specific to qualitative methods provides a useful guide for the critical appraisal of papers using these methods and for defining the strength of evidence as a basis for decision making and policy generation.", "title": "" }, { "docid": "ea6cb11966919ff9ef331766974aa4c7", "text": "Verifiable secret sharing is an important primitive in distributed cryptography. With the growing interest in the deployment of threshold cryptosystems in practice, the traditional assumption of a synchronous network has to be reconsidered and generalized to an asynchronous model. This paper proposes the first practical verifiable secret sharing protocol for asynchronous networks. The protocol creates a discrete logarithm-based sharing and uses only a quadratic number of messages in the number of participating servers. It yields the first asynchronous Byzantine agreement protocol in the standard model whose efficiency makes it suitable for use in practice. Proactive cryptosystems are another important application of verifiable secret sharing. The second part of this paper introduces proactive cryptosystems in asynchronous networks and presents an efficient protocol for refreshing the shares of a secret key for discrete logarithm-based sharings.", "title": "" }, { "docid": "5fde0c312b9ab2aada7a04a5dffc1d76", "text": "A security metric measures or assesses the extent to which a system meets its security objectives. Since meaningful quantitative security metrics are largely unavailable, the security community primarily uses qualitative metrics for security. In this paper, we present a novel quantitative metric for the security of computer networks that is based on an analysis of attack graphs. The metric measures the security strength of a network in terms of the strength of the weakest adversary who can successfully penetrate the network. 
We present an algorithm that computes the minimal sets of required initial attributes for the weakest adversary to possess in order to successfully compromise a network; given a specific network configuration, set of known exploits, a specific goal state, and an attacker class (represented by a set of all initial attacker attributes). We also demonstrate, by example, that diverse network configurations are not always beneficial for network security in terms of penetrability.", "title": "" }, { "docid": "8fda8068ce2cc06b3bcdf06b7e761ca0", "text": "Image forensics has attracted wide attention during the past decade. However, most existing works aim at detecting a certain operation, which means that their proposed features usually depend on the investigated image operation and they consider only binary classification. This usually leads to misleading results if irrelevant features and/or classifiers are used. For instance, a JPEG decompressed image would be classified as an original or median filtered image if it was fed into a median filtering detector. Hence, it is important to develop forensic methods and universal features that can simultaneously identify multiple image operations. Based on extensive experiments and analysis, we find that any image operation, including existing anti-forensics operations, will inevitably modify a large number of pixel values in the original images. Thus, some common inherent statistics such as the correlations among adjacent pixels cannot be preserved well. To detect such modifications, we try to analyze the properties of local pixels within the image in the residual domain rather than the spatial domain considering the complexity of the image contents. Inspired by image steganalytic methods, we propose a very compact universal feature set and then design a multiclass classification scheme for identifying many common image operations. In our experiments, we tested the proposed features as well as several existing features on 11 typical image processing operations and four kinds of anti-forensic methods. The experimental results show that the proposed strategy significantly outperforms the existing forensic methods in terms of both effectiveness and universality.", "title": "" }, { "docid": "0632f4a3119246ee9cd7b858dc0c3ed4", "text": "AIM\nIn order to improve the patients' comfort and well-being during and after a stay in the intensive care unit (ICU), the patients' perspective on the intensive care experience in terms of memories is essential. The aim of this study was to describe unpleasant and pleasant memories of the ICU stay in adult mechanically ventilated patients.\n\n\nMETHOD\nMechanically ventilated adults admitted for more than 24hours from two Swedish general ICUs were included and interviewed 5 days after ICU discharge using two open-ended questions. The data were analysed exploring the manifest content.\n\n\nFINDINGS\nOf the 250 patients interviewed, 81% remembered the ICU stay, 71% described unpleasant memories and 59% pleasant. Ten categories emerged from the content analyses (five from unpleasant and five from pleasant memories), contrasting with each other: physical distress and relief of physical distress, emotional distress and emotional well-being, perceptual distress and perceptual well-being, environmental distress and environmental comfort, and stress-inducing care and caring service.\n\n\nCONCLUSION\nMost critical care patients have both unpleasant and pleasant memories of their ICU stay. 
Pleasant memories such as support and caring service are important to relief the stress and may balance the impact of the distressing memories of the ICU stay.", "title": "" }, { "docid": "cec046aa647ece5f9449c470c6c6edcf", "text": "In this article we survey ambient intelligence (AmI), including its applications, some of the technologies it uses, and its social and ethical implications. The applications include AmI at home, care of the elderly, healthcare, commerce, and business, recommender systems, museums and tourist scenarios, and group decision making. Among technologies, we focus on ambient data management and artificial intelligence; for example planning, learning, event-condition-action rules, temporal reasoning, and agent-oriented technologies. The survey is not intended to be exhaustive, but to convey a broad range of applications, technologies, and technical, social, and ethical challenges.", "title": "" }, { "docid": "2282af5c9f4de5e0de2aae14c0a47840", "text": "The penetration of smart devices such as mobile phones, tabs has significantly changed the way people communicate. This has led to the growth of usage of social media tools such as twitter, facebook chats for communication. This has led to development of new challenges and perspectives in the language technologies research. Automatic processing of such texts requires us to develop new methodologies. Thus there is great need to develop various automatic systems such as information extraction, retrieval and summarization. Entity recognition is a very important sub task of Information extraction and finds its applications in information retrieval, machine translation and other higher Natural Language Processing (NLP) applications such as co-reference resolution. Some of the main issues in handling of such social media texts are i) Spelling errors ii) Abbreviated new language vocabulary such as “gr8” for great iii) use of symbols such as emoticons/emojis iv) use of meta tags and hash tags v) Code mixing. Entity recognition and extraction has gained increased attention in Indian research community. However there is no benchmark data available where all these systems could be compared on same data for respective languages in this new generation user generated text. Towards this we have organized the Code Mix Entity Extraction in social media text track for Indian languages (CMEE-IL) in the Forum for Information Retrieval Evaluation (FIRE). We present the overview of CMEE-IL 2016 track. This paper describes the corpus created for Hindi-English and Tamil-English. Here we also present overview of the approaches used by the participants. CCS Concepts • Computing methodologies ~ Artificial intelligence • Computing methodologies ~ Natural language processing • Information systems ~ Information extraction", "title": "" }, { "docid": "8961d0bd4ba45849bd8fa5c53c0cfb1d", "text": "SUMMARY\nThe program MODELTEST uses log likelihood scores to establish the model of DNA evolution that best fits the data.\n\n\nAVAILABILITY\nThe MODELTEST package, including the source code and some documentation is available at http://bioag.byu. edu/zoology/crandall_lab/modeltest.html.", "title": "" }, { "docid": "9bf698b09e48aa25e1c9bf1fa7885641", "text": "This paper presents a review of methods and techniques that have been proposed for the segmentation of magnetic resonance (MR) images of the brain, with a special emphasis on the segmentation of white matter lesions. 
First, artifacts affecting MR images (noise, partial volume effect, and shading artifact) are reviewed and methods that have been proposed to correct for these artifacts are discussed. Next, a taxonomy of generic segmentation algorithms is presented, categorized as region-based, edge-based, and classification algorithms. For each category, the applications proposed in the literature are subdivided into 2-D, 3-D, or multimodal approaches. In each case, tables listing authors, bibliographic references, and methods used have been compiled and are presented. This description of segmentation algorithms is followed by a section on techniques proposed specifically for the analysis of white matter lesions. Finally, a section is dedicated to a review and a comparison of validation methods proposed to assess the accuracy and the reliability of the results obtained with various segmentation algorithms.", "title": "" }, { "docid": "f3a8bb3fdda39554dfd98b639eeba335", "text": "Communication between auditory and vocal motor nuclei is essential for vocal learning. In songbirds, the nucleus interfacialis of the nidopallium (NIf) is part of a sensorimotor loop, along with auditory nucleus avalanche (Av) and song system nucleus HVC, that links the auditory and song systems. Most of the auditory information comes through this sensorimotor loop, with the projection from NIf to HVC representing the largest single source of auditory information to the song system. In addition to providing the majority of HVC's auditory input, NIf is also the primary driver of spontaneous activity and premotor-like bursting during sleep in HVC. Like HVC and RA, two nuclei critical for song learning and production, NIf exhibits behavioral-state dependent auditory responses and strong motor bursts that precede song output. NIf also exhibits extended periods of fast gamma oscillations following vocal production. Based on the converging evidence from studies of physiology and functional connectivity it would be reasonable to expect NIf to play an important role in the learning, maintenance, and production of song. Surprisingly, however, lesions of NIf in adult zebra finches have no effect on song production or maintenance. Only the plastic song produced by juvenile zebra finches during the sensorimotor phase of song learning is affected by NIf lesions. In this review, we carefully examine what is known about NIf at the anatomical, physiological, and behavioral levels. We reexamine conclusions drawn from previous studies in the light of our current understanding of the song system, and establish what can be said with certainty about NIf's involvement in song learning, maintenance, and production. Finally, we review recent theories of song learning integrating possible roles for NIf within these frameworks and suggest possible parallels between NIf and sensorimotor areas that form part of the neural circuitry for speech processing in humans.", "title": "" }, { "docid": "5d91cf986b61bf095c04b68da2bb83d3", "text": "The adeno-associated virus (AAV) vector has been used in preclinical and clinical trials of gene therapy for central nervous system (CNS) diseases. One of the biggest challenges of effectively delivering AAV to the brain is to surmount the blood-brain barrier (BBB). Herein, we identified several potential BBB shuttle peptides that significantly enhanced AAV8 transduction in the brain after a systemic administration, the best of which was the THR peptide. 
The enhancement of AAV8 brain transduction by THR is dose-dependent, and neurons are the primary THR targets. Mechanism studies revealed that THR directly bound to the AAV8 virion, increasing its ability to cross the endothelial cell barrier. Further experiments showed that binding of THR to the AAV virion did not interfere with AAV8 infection biology, and that THR competitively blocked transferrin from binding to AAV8. Taken together, our results demonstrate, for the first time, that BBB shuttle peptides are able to directly interact with AAV and increase the ability of the AAV vectors to cross the BBB for transduction enhancement in the brain. These results will shed important light on the potential applications of BBB shuttle peptides for enhancing brain transduction with systemic administration of AAV vectors.", "title": "" }, { "docid": "a33d982b4dde7c22ffc3c26214b35966", "text": "Background: In most cases, bug resolution is a collaborative activity among developers in software development where each developer contributes his or her ideas on how to resolve the bug. Although only one developer is recorded as the actual fixer for the bug, the contribution of the developers who participated in the collaboration cannot be neglected.\n Aims: This paper proposes a new approach, called DRETOM (Developer REcommendation based on TOpic Models), to recommending developers for bug resolution in collaborative behavior.\n Method: The proposed approach models developers' interest in and expertise on bug resolving activities based on topic models that are built from their historical bug resolving records. Given a new bug report, DRETOM recommends a ranked list of developers who are potential to participate in and contribute to resolving the new bug according to these developers' interest in and expertise on resolving it.\n Results: Experimental results on Eclipse JDT and Mozilla Firefox projects show that DRETOM can achieve high recall up to 82% and 50% with top 5 and top 7 recommendations respectively.\n Conclusion: Developers' interest in bug resolving activities should be taken into consideration. On condition that the parameter θ of DRETOM is set properly with trials, the proposed approach is practically useful in terms of recall.", "title": "" }, { "docid": "b02dcd4d78f87d8ac53414f0afd8604b", "text": "This paper presents an ultra-low-power event-driven analog-to-digital converter (ADC) with real-time QRS detection for wearable electrocardiogram (ECG) sensors in wireless body sensor network (WBSN) applications. Two QRS detection algorithms, pulse-triggered (PUT) and time-assisted PUT (t-PUT), are proposed based on the level-crossing events generated from the ADC. The PUT detector achieves 97.63% sensitivity and 97.33% positive prediction in simulation on the MIT-BIH Arrhythmia Database. The t-PUT improves the sensitivity and positive prediction to 97.76% and 98.59% respectively. Fabricated in 0.13 μm CMOS technology, the ADC with QRS detector consumes only 220 nW measured under 300 mV power supply, making it the first nanoWatt compact analog-to-information (A2I) converter with embedded QRS detector.", "title": "" }, { "docid": "fd59754c40f05710496d3b9738f97e47", "text": "The extent to which mental health consumers encounter stigma in their daily lives is a matter of substantial importance for their recovery and quality of life. This article summarizes the results of a nationwide survey of 1,301 mental health consumers concerning their experience of stigma and discrimination. 
Survey results and followup interviews with 100 respondents revealed experience of stigma from a variety of sources, including communities, families, churches, coworkers, and mental health caregivers. The majority of respondents tended to try to conceal their disorders and worried a great deal that others would find out about their psychiatric status and treat them unfavorably. They reported discouragement, hurt, anger, and lowered self-esteem as results of their experiences, and they urged public education as a means for reducing stigma. Some reported that involvement in advocacy and speaking out when stigma and discrimination were encountered helped them to cope with stigma. Limitations to generalization of results include the self-selection, relatively high functioning of participants, and respondent connections to a specific advocacy organization-the National Alliance for the Mentally Ill.", "title": "" }, { "docid": "bf22279451c635543b583015d3681b7e", "text": "A simple and compact microstrip-fed ultra wideband (UWB) printed monopole antenna with band-notched performance is proposed in this paper. The antenna is composed of a cirque ring with a small strip bar, so that the antenna occupies about 8.29 GHz bandwidth covering 3.18-11.47 GHz with expected band rejection of 5.09 GHz to 5.88 GHz. A quasi-omnidirectional and quasi-symmetrical radiation pattern is also obtained. This kind of band-notched UWB antenna requires no external filters and thus greatly simplifies the system design of UWB wireless communication.", "title": "" }, { "docid": "4c03c0fc33f8941a7769644b5dfb62ef", "text": "A multiband MIMO antenna for a 4G mobile terminal is proposed. The antenna structure consists of a multiband main antenna element, a printed inverted-L subantenna element operating in the higher 2.5 GHz bands, and a wideband loop sub-antenna element working in lower 0.9 GHz band. In order to improve the isolation and ECC characteristics of the proposed MIMO antenna, each element is located at a different corner of the ground plane. In addition, the inductive coils are employed to reduce the antenna volume and realize the wideband property of the loop sub-antenna element. Finally, the proposed antenna covers LTE band 7/8, PCS, WiMAX, and WLAN service, simultaneously. The MIMO antenna has ECC lower than 0.15 and isolation higher than 12 dB in both lower and higher frequency bands.", "title": "" }, { "docid": "fca521f5e0b48d27d68f07dfc1641edb", "text": "To compare cryo-EM images and 3D reconstructions with atomic structures in a quantitative way it is essential to model the electron scattering by solvent (water or ice) that surrounds protein assemblies. The most rigorous method for determining the density of solvating water atoms for this purpose has been to perform molecular-dynamics (MD) simulations of the protein-water system. In this paper we adapt the ideas of bulk-water modeling that are used in the refinement of X-ray crystal structures to the cryo-EM solvent-modeling problem. We present a continuum model for solvent density which matches MD-based results to within sampling errors. However, we also find that the simple binary-mask model of Jiang and Brünger (1994) performs nearly as well as the new model. 
We conclude that several methods are now available for rapid and accurate modeling of cryo-EM images and maps of solvated proteins.", "title": "" }, { "docid": "8d581aef7779713f3cb9f236fb83d7ff", "text": "Sandro Botticelli was one of the most esteemed painters and draughtsmen among Renaissance artists. Under the patronage of the De' Medici family, he was active in Florence during the flourishing of the Renaissance trend towards the reclamation of lost medical and anatomical knowledge of ancient times through the dissection of corpses. Combining the typical attributes of the elegant courtly style with hallmarks derived from the investigation and analysis of classical templates, he left us immortal masterpieces, the excellence of which incomprehensibly waned and was rediscovered only in the 1890s. Few know that it has already been reported that Botticelli concealed the image of a pair of lungs in his masterpiece, The Primavera. The present investigation provides evidence that Botticelli embedded anatomic imagery of the lung in another of his major paintings, namely, The Birth of Venus. Both canvases were most probably influenced and enlightened by the neoplatonic philosophy of the humanist teachings in the De' Medici's circle, and they represent an allegorical celebration of the cycle of life originally generated by the Divine Wind or Breath. This paper supports the theory that because of the anatomical knowledge to which he was exposed, Botticelli aimed to enhance the iconographical meaning of both the masterpieces by concealing images of the lung anatomy within them.", "title": "" }, { "docid": "c0d2fcd6daeb433a5729a412828372f8", "text": "Most 3D reconstruction approaches passively optimise over all data, exhaustively matching pairs, rather than actively selecting data to process. This is costly both in terms of time and computer resources, and quickly becomes intractable for large datasets. This work proposes an approach to intelligently filter large amounts of data for 3D reconstructions of unknown scenes using monocular cameras. Our contributions are twofold: First, we present a novel approach to efficiently optimise the Next-Best View (NBV) in terms of accuracy and coverage using partial scene geometry. Second, we extend this to intelligently selecting stereo pairs by jointly optimising the baseline and vergence to find the NBV’s best stereo pair to perform reconstruction. Both contributions are extremely efficient, taking 0.8ms and 0.3ms per pose, respectively. Experimental evaluation shows that the proposed method allows efficient selection of stereo pairs for reconstruction, such that a dense model can be obtained with only a small number of images. Once a complete model has been obtained, the remaining computational budget is used to intelligently refine areas of uncertainty, achieving results comparable to state-of-the-art batch approaches on the Middlebury dataset, using as little as 3.8% of the views.", "title": "" }, { "docid": "770d48a87dd718d20ea00c16ba0ac530", "text": "The purpose of this article is to describe emotion regulation, and how emotion regulation may be compromised in patients with autism spectrum disorder (ASD). This information may be useful for clinicians working with children with ASD who exhibit behavioral problems. Suggestions for practice are provided.", "title": "" } ]
scidocsrr
3fa91b18b304566a526737057d5b115b
Attentional convolutional neural networks for object tracking
[ { "docid": "d349cf385434027b4532080819d5745f", "text": "Although not commonly used, correlation filters can track complex objects through rotations, occlusions and other distractions at over 20 times the rate of current state-of-the-art techniques. The oldest and simplest correlation filters use simple templates and generally fail when applied to tracking. More modern approaches such as ASEF and UMACE perform better, but their training needs are poorly suited to tracking. Visual tracking requires robust filters to be trained from a single frame and dynamically adapted as the appearance of the target object changes. This paper presents a new type of correlation filter, a Minimum Output Sum of Squared Error (MOSSE) filter, which produces stable correlation filters when initialized using a single frame. A tracker based upon MOSSE filters is robust to variations in lighting, scale, pose, and nonrigid deformations while operating at 669 frames per second. Occlusion is detected based upon the peak-to-sidelobe ratio, which enables the tracker to pause and resume where it left off when the object reappears.", "title": "" }, { "docid": "dacebd3415ec50ca6c74e28048fe6fc8", "text": "The problem of arbitrary object tracking has traditionally been tackled by learning a model of the object’s appearance exclusively online, using as sole training data the video itself. Despite the success of these methods, their online-only approach inherently limits the richness of the model they can learn. Recently, several attempts have been made to exploit the expressive power of deep convolutional networks. However, when the object to track is not known beforehand, it is necessary to perform Stochastic Gradient Descent online to adapt the weights of the network, severely compromising the speed of the system. In this paper we equip a basic tracking algorithm with a novel fully-convolutional Siamese network trained end-to-end on the ILSVRC15 dataset for object detection in video. Our tracker operates at frame-rates beyond real-time and, despite its extreme simplicity, achieves state-of-the-art performance in multiple benchmarks.", "title": "" }, { "docid": "83f1830c3a9a92eb3492f9157adaa504", "text": "We propose a novel tracking framework called visual tracker sampler that tracks a target robustly by searching for the appropriate trackers in each frame. Since the real-world tracking environment varies severely over time, the trackers should be adapted or newly constructed depending on the current situation. To do this, our method obtains several samples of not only the states of the target but also the trackers themselves during the sampling process. The trackers are efficiently sampled using the Markov Chain Monte Carlo method from the predefined tracker space by proposing new appearance models, motion models, state representation types, and observation types, which are the basic important components of visual trackers. Then, the sampled trackers run in parallel and interact with each other while covering various target variations efficiently. The experiment demonstrates that our method tracks targets accurately and robustly in the real-world tracking environments and outperforms the state-of-the-art tracking methods.", "title": "" } ]
[ { "docid": "17ac85242f7ee4bc4991e54403e827c4", "text": "Over the last two decades, an impressive progress has been made in the identification of novel factors in the translocation machineries of the mitochondrial protein import and their possible roles. The role of lipids and possible protein-lipids interactions remains a relatively unexplored territory. Investigating the role of potential lipid-binding regions in the sub-units of the mitochondrial motor might help to shed some more light in our understanding of protein-lipid interactions mechanistically. Bioinformatics results seem to indicate multiple potential lipid-binding regions in each of the sub-units. The subsequent characterization of some of those regions in silico provides insight into the mechanistic functioning of this intriguing and essential part of the protein translocation machinery. Details about the way the regions interact with phospholipids were found by the use of Monte Carlo simulations. For example, Pam18 contains one possible transmembrane region and two tilted surface bound conformations upon interaction with phospholipids. The results demonstrate that the presented bioinformatics approach might be useful in an attempt to expand the knowledge of the possible role of protein-lipid interactions in the mitochondrial protein translocation process.", "title": "" }, { "docid": "4f069eeff7cf99679fb6f31e2eea55f0", "text": "The present study aims to design, develop, operate and evaluate a social media GIS (Geographical Information Systems) specially tailored to mash-up the information that local residents and governments provide to support information utilization from normal times to disaster outbreak times in order to promote disaster reduction. The conclusions of the present study are summarized in the following three points. (1) Social media GIS, an information system which integrates a Web-GIS, an SNS and Twitter in addition to an information classification function, a button function and a ranking function into a single system, was developed. This made it propose an information utilization system based on the assumption of disaster outbreak times when information overload happens as well as normal times. (2) The social media GIS was operated for fifty local residents who are more than 18 years old for ten weeks in Mitaka City of Tokyo metropolis. Although about 32% of the users were in their forties, about 30% were aged fifties, and more than 10% of the users were in their twenties, thirties and sixties or more. (3) The access survey showed that 260 pieces of disaster information were distributed throughout the whole city of Mitaka. Among the disaster information, danger-related information occupied 20%, safety-related information occupied 68%, and other information occupied 12%. Keywords—Social Media GIS; Web-GIS; SNS; Twitter; Disaster Information; Disaster Reduction; Support for Information Utilization", "title": "" }, { "docid": "8fffe94d662d46b977e0312dc790f4a4", "text": "Airline companies have increasingly employed electronic commerce (eCommerce) for strategic purposes, most notably in order to achieve long-term competitive advantage and global competitiveness by enhancing customer satisfaction as well as marketing efficacy and managerial efficiency. eCommerce has now emerged as possibly the most representative distribution channel in the airline industry. 
In this study, we describe an extended technology acceptance model (TAM), which integrates subjective norms and electronic trust (eTrust) into the model, in order to determine their relevance to the acceptance of airline business-to-customer (B2C) eCommerce websites (AB2CEWS). The proposed research model was tested empirically using data collected from a survey of customers who had utilized B2C eCommerce websites of two representative airline companies in South Korea (i.e., KAL and ASIANA) for the purpose of purchasing air tickets. Path analysis was employed in order to assess the significance and strength of the hypothesized causal relationships between subjective norms, eTrust, perceived ease of use, perceived usefulness, attitude toward use, and intention to reuse. Our results provide general support for an extended TAM, and also confirmed its robustness in predicting customers’ intention to reuse AB2CEWS. Valuable information was found from our results regarding the management of AB2CEWS in the formulation of airlines’ Internet marketing strategies. © 2008 Published by Elsevier Ltd.", "title": "" }, { "docid": "3dd8c177ae928f7ccad2aa980bd8c747", "text": "The quality and nature of knowledge that can be found by an automated knowledge-extraction system depends on its inputs. For systems that learn by reading text, the Web offers a breadth of topics and currency, but it also presents the problems of dealing with casual, unedited writing, non-textual inputs, and the mingling of languages. The results of extraction using the KNEXT system on two Web corpora – Wikipedia and a collection of weblog entries – indicate that, with automatic filtering of the output, even ungrammatical writing on arbitrary topics can yield an extensive knowledge base, which human judges find to be of good quality, with propositions receiving an average score across both corpora of 2.34 (where the range is 1 to 5 and lower is better) versus 3.00 for unfiltered output from the same sources.", "title": "" }, { "docid": "ab5e3f7ad73d8143ae4dc4db40ebfade", "text": "Knowledge is an essential organizational resource that provides a sustainable competitive advantage in a highly competitive and dynamic economy. SMEs must therefore consider how to promote the sharing of knowledge and expertise between experts who possess it and novices who need to know. Thus, they need to emphasize and more effectively exploit knowledge-based resources that already exist within the firm. A key issue for the failure of any KM initiative to facilitate knowledge sharing is the lack of consideration of how the organizational and interpersonal context as well as individual characteristics influence knowledge sharing behaviors. Due to the potential benefits that could be realized from knowledge sharing, this study focused on knowledge sharing as one fundamental knowledge-centered activity. Based on the review of previous literature regarding knowledge sharing within and across firms, this study infers that knowledge sharing in a workplace can be influenced by the organizational, individual-level and technological factors. This study proposes a conceptual model of knowledge sharing within a broad KM framework as an indispensable tool for SMEs' internationalization. The model was assessed by using data gathered from employees and managers of twenty-five (25) different SMEs in Norway. The proposed model of knowledge sharing argues that knowledge sharing is influenced by the organizational, individual-level and technological factors.
The study also found mediated effects between the organizational factors and knowledge sharing behavior, as well as between the technological factor and knowledge sharing behavior (i.e., both being mediated by the individual-level factors). The test results were statistically significant. The organizational factors were acknowledged to have a highly significant role in ensuring that knowledge sharing takes place in the workplace, although the remaining factors play a critical role in the knowledge sharing process. For instance, the technological factor may effectively help in creating, storing and distributing explicit knowledge in an accessible and expeditious manner. The implications of the empirical findings are also provided in this study.", "title": "" }, { "docid": "bcf0156fdc95f431c550e0554cddbcbc", "text": "This paper deals with incremental classification and its particular application to invoice classification. An improved version of an already existing incremental neural network called IGNG (incremental growing neural gas) is used for this purpose. This neural network tries to cover the space of data by adding or deleting neurons as data is fed to the system. The improved version of the IGNG, called I2GNG, used local thresholds in order to create or delete neurons. Applied to invoice documents represented with graphs, I2GNG shows a recognition rate of 97.63%.", "title": "" }, { "docid": "363381fbd6a5a19242a432ca80051bba", "text": "Multimedia data on social websites contain rich semantics and are often accompanied with user-defined tags. To enhance Web media semantic concept retrieval, the fusion of tag-based and content-based models can be used, though it is very challenging. In this article, a novel semantic concept retrieval framework that incorporates tag removal and model fusion is proposed to tackle such a challenge. Tags with useful information can facilitate media search, but they are often imprecise, which makes it important to apply noisy tag removal (by deleting uncorrelated tags) to improve the performance of semantic concept retrieval. Therefore, a multiple correspondence analysis (MCA)-based tag removal algorithm is proposed, which utilizes MCA's ability to capture the relationships among nominal features and identify representative and discriminative tags holding strong correlations with the target semantic concepts. To further improve the retrieval performance, a novel model fusion method is also proposed to combine ranking scores from both tag-based and content-based models, where the adjustment of ranking scores, the reliability of models, and the correlations between the intervals divided on the ranking scores and the semantic concepts are all considered. Comparative results with extensive experiments on the NUS-WIDE-LITE as well as the NUS-WIDE-270K benchmark datasets with 81 semantic concepts show that the proposed framework outperforms baseline results and the other comparison methods with each component being evaluated separately.", "title": "" }, { "docid": "b59e90e5d1fa3f58014dedeea9d5b6e4", "text": "The results of vitrectomy in 240 consecutive cases of ocular trauma were reviewed. Of these cases, 71.2% were war injuries. Intraocular foreign bodies were present in 155 eyes, of which 74.8% were metallic and 61.9% ferromagnetic. Multivariate analysis identified the prognostic factors predictive of poor visual outcome, which included: (1) presence of an afferent pupillary defect; (2) double perforating injuries; and (3) presence of intraocular foreign bodies.
Association of vitreous hemorrhage with intraocular foreign bodies was predictive of a poor prognosis. Eyes with foreign bodies retained in the anterior segment and vitreous had a better prognosis than those with foreign bodies embedded in the retina. Timing of vitrectomy and type of trauma had no significant effect on the final visual results. Prophylactic scleral buckling reduced the incidence of retinal detachment after surgery. Injuries confined to the cornea had a better prognosis than scleral injuries.", "title": "" }, { "docid": "83cc283967bf6bc7f04729a5e08660e2", "text": "Logicians have, by and large, engaged in the convenient fiction that sentences of natural languages (at least declarative sentences) are either true or false or, at worst, lack a truth value, or have a third value often interpreted as 'nonsense'. And most contemporary linguists who have thought seriously about semantics, especially formal semantics, have largely shared this fiction, primarily for lack of a sensible alternative. Yet students o f language, especially psychologists and linguistic philosophers, have long been attuned to the fact that natural language concepts have vague boundaries and fuzzy edges and that, consequently, natural language sentences will very often be neither true, nor false, nor nonsensical, but rather true to a certain extent and false to a certain extent, true in certain respects and false in other respects. It is common for logicians to give truth conditions for predicates in terms of classical set theory. 'John is tall' (or 'TALL(j) ' ) is defined to be true just in case the individual denoted by 'John' (or ' j ') is in the set of tall men. Putting aside the problem that tallness is really a relative concept (tallness for a pygmy and tallness for a basketball player are obviously different) 1, suppose we fix a population relative to which we want to define tallness. In contemporary America, how tall do you have to be to be tall? 5'8\"? 5'9\"? 5'10\"? 5'11\"? 6'? 6'2\"? Obviously there is no single fixed answer. How old do you have to be to be middle-aged? 35? 37? 39? 40? 42? 45? 50? Again the concept is fuzzy. Clearly any attempt to limit truth conditions for natural language sentences to true, false and \"nonsense' will distort the natural language concepts by portraying them as having sharply defined rather than fuzzily defined boundaries. Work dealing with such questions has been done in psychology. To take a recent example, Eleanor Rosch Heider (1971) took up the question of whether people perceive category membership as a clearcut issue or a matter of degree. For example, do people think of members of a given", "title": "" }, { "docid": "f1efe8868f19ccbb4cf2ab5c08961cdb", "text": "High peak-to-average power ratio (PAPR) has been one of the major drawbacks of orthogonal frequency division multiplexing (OFDM) systems. In this letter, we propose a novel PAPR reduction scheme, known as PAPR reducing network (PRNet), based on the autoencoder architecture of deep learning. In the PRNet, the constellation mapping and demapping of symbols on each subcarrier is determined adaptively through a deep learning technique, such that both the bit error rate (BER) and the PAPR of the OFDM system are jointly minimized. 
We used simulations to show that the proposed scheme outperforms conventional schemes in terms of BER and PAPR.", "title": "" }, { "docid": "d88b845296811f881e46ed04e6caca31", "text": "OBJECTIVES\nThis study evaluated how patient characteristics and duplex ultrasound findings influence management decisions of physicians with specific expertise in the field of chronic venous disease.\n\n\nMETHODS\nWorldwide, 346 physicians with a known interest and experience in phlebology were invited to participate in an online survey about management strategies in patients with great saphenous vein (GSV) reflux and refluxing tributaries. The survey included two basic vignettes representing a 47 year old healthy male with GSV reflux above the knee and a 27 year old healthy female with a short segment refluxing GSV (CEAP classification C2sEpAs2,5Pr in both cases). Participants could choose one or more treatment options. Subsequently, the basic vignettes were modified according to different patient characteristics (e.g. older age, morbid obesity, anticoagulant treatment, peripheral arterial disease), clinical class (C4, C6), and duplex ultrasound findings (e.g. competent terminal valve, larger or smaller GSV diameter, presence of focal dilatation). The authors recorded the distribution of chosen management strategies; adjustment of strategies according to characteristics; and follow up strategies.\n\n\nRESULTS\nA total of 211 physicians (68% surgeons, 12% dermatologists, 12% angiologists, and 8% phlebologists) from 36 different countries completed the survey. In the basic case vignettes 1 and 2, respectively, 55% and 40% of participants proposed to perform endovenous thermal ablation, either with or without concomitant phlebectomies (p < .001). Looking at the modified case vignettes, between 20% and 64% of participants proposed to adapt their management strategy, opting for either a more or a less invasive treatment, depending on the modification introduced. The distribution of chosen management strategies changed significantly for all modified vignettes (p < .05).\n\n\nCONCLUSIONS\nThis study illustrates the worldwide variety in management preferences for treating patients with varicose veins (C2-C6). In clinical practice, patient related and duplex ultrasound related factors clearly influence therapeutic options.", "title": "" }, { "docid": "c1c044c7ede9cfde42878ea162d1f457", "text": "When designing the rotor of a radial flux permanent magnet synchronous machine (PMSM), one key part is the sizing of the permanent magnets (PM) in the rotor to produce the required air-gap flux density. This paper focuses on the effect that different coefficients have on the air-gap flux density of four radial flux PMSM rotor topologies. A direct connection is shown between magnet volume and flux producing magnet area with the aid of static finite element model simulations of the four rotor topologies. With this knowledge, the calculation of the flux producing magnet area can be done with ease once the minimum magnet volume has been determined. This technique can also be applied in the design of line-start PMSM rotors where the rotor area is limited.", "title": "" }, { "docid": "082f19bb94536f61a7c9e4edd9a9c829", "text": "Phytoplankton abundance and composition and the cyanotoxin, microcystin, were examined relative to environmental parameters in western Lake Erie during late-summer (2003–2005). 
Spatially explicit distributions of phytoplankton occurred on an annual basis, with the greatest chlorophyll (Chl) a concentrations occurring in waters impacted by Maumee River inflows and in Sandusky Bay. Chlorophytes, bacillariophytes, and cyanobacteria contributed the majority of phylogenetic-group Chl a basin-wide in 2003, 2004, and 2005, respectively. Water clarity, pH, and specific conductance delineated patterns of group Chl a, signifying that water mass movements and mixing were primary determinants of phytoplankton accumulations and distributions. Water temperature, irradiance, and phosphorus availability delineated patterns of cyanobacterial biovolumes, suggesting that biotic processes (most likely, resource-based competition) controlled cyanobacterial abundance and composition. Intracellular microcystin concentrations corresponded to Microcystis abundance and environmental parameters indicative of conditions coincident with biomass accumulations. It appears that environmental parameters regulate microcystin indirectly, via control of cyanobacterial abundance and distribution.", "title": "" }, { "docid": "43269c32b765b0f5d5d0772e0b1c5906", "text": "Silver nanoparticles (AgNPs) have been synthesized by Lantana camara leaf extract through simple green route and evaluated their antibacterial and catalytic activities. The leaf extract (LE) itself acts as both reducing and stabilizing agent at once for desired nanoparticle synthesis. The colorless reaction mixture turns to yellowish brown attesting the AgNPs formation and displayed UV-Vis absorption spectra. Structural analysis confirms the crystalline nature and formation of fcc structured metallic silver with majority (111) facets. Morphological studies elicit the formation of almost spherical shaped nanoparticles and as AgNO3 concentration is increased, there is an increment in the particle size. The FTIR analysis evidences the presence of various functional groups of biomolecules of LE is responsible for stabilization of AgNPs. Zeta potential measurement attests the higher stability of synthesized AgNPs. The synthesized AgNPs exhibited good antibacterial activity when tested against Escherichia coli, Pseudomonas spp., Bacillus spp. and Staphylococcus spp. using standard Kirby-Bauer disc diffusion assay. Furthermore, they showed good catalytic activity on the reduction of methylene blue by L. camara extract which is monitored and confirmed by the UV-Vis spectrophotometer.", "title": "" }, { "docid": "966f5ff1ef057f2d19d10865eef35728", "text": "Recognition of characters in natural images is a challenging task due to the complex background, variations of text size and perspective distortion, etc. Traditional optical character recognition (OCR) engine cannot perform well on those unconstrained text images. A novel technique is proposed in this paper that makes use of convolutional cooccurrence histogram of oriented gradient (ConvCoHOG), which is more robust and discriminative than both the histogram of oriented gradient (HOG) and the co-occurrence histogram of oriented gradients (CoHOG). In the proposed technique, a more informative feature is constructed by exhaustively extracting features from every possible image patches within character images. 
Experiments on two public datasets, including the ICDAR 2003 Robust Reading character dataset and the Street View Text (SVT) dataset, show that our proposed character recognition technique obtains superior performance compared with state-of-the-art techniques.", "title": "" }, { "docid": "4768b338044e38949f50c5856bc1a07c", "text": "Radio-frequency identification (RFID) technology provides an effective tool for managing traceability along food supply chains. This is because it allows automatic digital registration of data, and therefore reduces errors and enables the availability of information on demand. A complete traceability system can be developed in the wine production sector by joining this technology with the use of wireless sensor networks for monitoring at the vineyards. A proposal of such a merged solution for a winery in Spain has been designed, deployed in an actual environment, and evaluated. It was shown that the system could provide a competitive advantage to the company by improving visibility of the processes performed and the associated control over product quality. Much emphasis has been placed on minimizing the impact of the new system in the current activities.", "title": "" }, { "docid": "3b074e9574838169881e212cb5899d27", "text": "The introduction of inexpensive 3D data acquisition devices has promisingly facilitated the wide availability and popularity of 3D point cloud, which attracts more attention to the effective extraction of novel 3D point cloud descriptors for accurate and efficient 3D computer vision tasks. However, how to develop discriminative and robust feature descriptors from various point clouds remains a challenging task. This paper comprehensively investigates the existing approaches for extracting 3D point cloud descriptors which are categorized into three major classes: local-based descriptor, global-based descriptor and hybrid-based descriptor. Furthermore, experiments are carried out to present a thorough evaluation of performance of several state-of-the-art 3D point cloud descriptors used widely in practice in terms of descriptiveness, robustness and efficiency.", "title": "" }, { "docid": "261ab16552e2f7cfcdf89971a066a812", "text": "The paper demonstrates that in a multi-voltage level (medium and low-voltages) distribution system the incident energy can be reduced to 8 cal/cm2, or even less (Hazard risk category, HRC 2), so that a PPE outfit of greater than 2 is not required. This is achieved with the current state of the art equipment and protective devices. It is recognized that in the existing distribution systems, not specifically designed with this objective, it may not be possible to reduce arc flash hazard to this low level, unless major changes in the system design and protection are made. A typical industrial distribution system is analyzed, and tables and time coordination plots are provided to support the analysis. Unit protection schemes and practical guidelines for arc flash reduction are provided. The methodology of IEEE 1584 [1] is used for the analyses.", "title": "" }, { "docid": "05540e05370b632f8b8cd165ae7d1d29", "text": "We describe FreeCam, a system capable of generating live free-viewpoint video by simulating the output of a virtual camera moving through a dynamic scene.
The FreeCam sensing hardware consists of a small number of static color video cameras and state-of-the-art Kinect depth sensors, and the FreeCam software uses a number of advanced GPU processing and rendering techniques to seamlessly merge the input streams, providing a pleasant user experience. A system such as FreeCam is critical for applications such as telepresence, 3D video-conferencing and interactive 3D TV. FreeCam may also be used to produce multi-view video, which is critical to drive new-generation autostereoscopic lenticular 3D displays.", "title": "" }, { "docid": "65af21566422d9f0a11f07d43d7ead13", "text": "Scene labeling is a challenging computer vision task. It requires the use of both local discriminative features and global context information. We adopt a deep recurrent convolutional neural network (RCNN) for this task, which was originally proposed for object recognition. Different from traditional convolutional neural networks (CNN), this model has intra-layer recurrent connections in the convolutional layers. Therefore, each convolutional layer becomes a two-dimensional recurrent neural network. The units receive constant feed-forward inputs from the previous layer and recurrent inputs from their neighborhoods. While recurrent iterations proceed, the region of context captured by each unit expands. In this way, feature extraction and context modulation are seamlessly integrated, which is different from typical methods that entail separate modules for the two steps. To further utilize the context, a multi-scale RCNN is proposed. Over two benchmark datasets, Stanford Background and SIFT Flow, the model outperforms many state-of-the-art models in accuracy and efficiency.", "title": "" } ]
scidocsrr
9e2c4c0d69b90a20ed2731cacaff4673
Robot arm control exploiting natural dynamics
[ { "docid": "56316a77e260d8122c4812d684f4d223", "text": "Manipulation fundamentally requires a manipulator to be mechanically coupled to the object being manipulated. A consideration of the physical constraints imposed by dynamic interaction shows that control of a vector quantity such as position or force is inadequate and that control of the manipulator impedance is also necessary. Techniques for control of manipulator behaviour are presented which result in a unified approach to kinematically constrained motion, dynamic interaction, target acquisition and obstacle avoidance.", "title": "" } ]
[ { "docid": "41eab64d00f1a4aaea5c5899074d91ca", "text": "Informally described design patterns are useful for communicating proven solutions for recurring design problems to developers, but they cannot be used as compliance points against which solutions that claim to conform to the patterns are checked. Pattern specification languages that utilize mathematical notation provide the needed formality, but often at the expense of usability. We present a rigorous and practical technique for specifying pattern solutions expressed in the unified modeling language (UML). The specification technique paves the way for the development of tools that support rigorous application of design patterns to UML design models. The technique has been used to create specifications of solutions for several popular design patterns. We illustrate the use of the technique by specifying observer and visitor pattern solutions.", "title": "" }, { "docid": "c49ffcb45cc0a7377d9cbdcf6dc07057", "text": "Dermoscopy is an in vivo method for the early diagnosis of malignant melanoma and the differential diagnosis of pigmented lesions of the skin. It has been shown to increase diagnostic accuracy over clinical visual inspection in the hands of experienced physicians. This article is a review of the principles of dermoscopy as well as recent technological developments.", "title": "" }, { "docid": "b0eea601ef87dbd1d7f39740ea5134ae", "text": "Syndromal classification is a well-developed diagnostic system but has failed to deliver on its promise of the identification of functional pathological processes. Functional analysis is tightly connected to treatment but has failed to develop testable. replicable classification systems. Functional diagnostic dimensions are suggested as a way to develop the functional classification approach, and experiential avoidance is described as 1 such dimension. A wide range of research is reviewed showing that many forms of psychopathology can be conceptualized as unhealthy efforts to escape and avoid emotions, thoughts, memories, and other private experiences. It is argued that experiential avoidance, as a functional diagnostic dimension, has the potential to integrate the efforts and findings of researchers from a wide variety of theoretical paradigms, research interests, and clinical domains and to lead to testable new approaches to the analysis and treatment of behavioral disorders. Steven C. Haves, Kelly G. Wilson, Elizabeth V. Gifford, and Victoria M. Follette. Department of Psychology. University of Nevada: Kirk Strosahl, Mental Health Center, Group Health Cooperative, Seattle, Washington. Preparation of this article was supported in part by Grant DA08634 from the National Institute on Drug Abuse. Correspondence concerning this article should be addressed to Steven C. Hayes, Department of Psychology, Mailstop 296, College of Arts and Science. University of Nevada, Reno, Nevada 89557-0062. The process of classification lies at the root of all scientific behavior. It is literally impossible to speak about a truly unique event, alone and cut off from all others, because words themselves are means of categorization (Brunei, Goodnow, & Austin, 1956). Science is concerned with refined and systematic verbal formulations of events and relations among events. Because \"events\" are always classes of events, and \"relations\" are always classes of relations, classification is one of the central tasks of science. 
The field of psychopathology has seen myriad classification systems (Hersen & Bellack, 1988; Sprock & Blashfield, 1991). The differences among some of these approaches are both long-standing and relatively unchanging, in part because systems are never free from a priori assumptions and guiding principles that provide a framework for organizing information (Adams & Cassidy, 1993). In the present article, we briefly examine the differences between two core classification strategies in psychopathology syndromal and functional. We then articulate one possible functional diagnostic dimension: experiential avoidance. Several common syndromal categories are examined to see how this dimension can organize data found among topographical groupings. Finally, the utility and implications of this functional dimensional category are examined. Comparing Syndromal and Functional Classification Although there are many purposes to diagnostic classification, most researchers seem to agree that the ultimate goal is the development of classes, dimensions, or relational categories that can be empirically wedded to treatment strategies (Adams & Cassidy, 1993: Hayes, Nelson & Jarrett, 1987: Meehl, 1959). Syndromal classification – whether dimensional or categorical – can be traced back to Wundt and Galen and, thus, is as old as scientific psychology itself (Eysenck, 1986). Syndromal classification starts with constellations of signs and symptoms to identify the disease entities that are presumed to give rise to these constellations. Syndromal classification thus starts with structure and, it is hoped, ends with utility. The attempt in functional classification, conversely, is to start with utility by identifying functional processes with clear treatment implications. It then works backward and returns to the issue of identifiable signs and symptoms that reflect these processes. These differences are fundamental. Syndromal Classification The economic and political dominance of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (e.g., 4th ed.; DSM -IV; American Psychiatric Association, 1994) has lead to a worldwide adoption of syndromal classification as an analytic strategy in psychopathology. The only widely used alternative, the International Classification of Diseases (ICD) system, was a source document for the original DSM, and continuous efforts have been made to ensure their ongoing compatibility (American Psychiatric Association 1994). The immediate goal of syndromal classification (Foulds. 1971) is to identify collections of signs (what one sees) and symptoms (what the client's complaint is). The hope is that these syndromes will lead to the identification of disorders with a known etiology, course, and response to treatment. When this has been achieved, we are no longer speaking of syndromes but of diseases. Because the construct of disease involves etiology and response to treatment, these classifications are ultimately a kind of functional unit. Thus, the syndromal classification approach is a topographically oriented classification strategy for the identification of functional units of abnormal behavior. When the same topographical outcome can be established by diverse processes, or when very different topographical outcomes can come from the same process, the syndromal model has a difficult time actually producing its intended functional units (cf. Bandura, 1982; Meehl, 1978). 
Some medical problems (e.g., cancer) have these features, and in these areas medical researchers no longer look to syndromal classification as a quick route to an understanding of the disease processes involved. The link between syndromes (topography of signs and symptoms) and diseases (function) has been notably weak in psychopathology. After over 100 years of effort, almost no psychological diseases have been clearly identified. With the exception of general paresis and a few clearly neurological disorders, psychiatric syndromes have remained syndromes indefinitely. In the absence of progress toward true functional entities, syndromal classification of psychopathology has several down sides. Symptoms are virtually non-falsifiable, because they depend only on certain formal features. Syndromal categories tend to evolve changing their names frequently and splitting into ever finer subcategories but except for political reasons (e.g., homosexuality as a disorder) they rarely simply disappear. As a result, the number of syndromes within the DSM system has increased exponentially (Follette, Houts, & Hayes, 1992). Increasingly refined topographical distinctions can always be made without the restraining and synthesizing effect of the identification of common etiological processes. In physical medicine, syndromes regularly disappear into disease categories. A wide variety of symptoms can be caused by a single disease, or a common symptom can be explained by very different diseases entities. For example, \"headaches\" are not a disease, because they could be due to influenza, vision problems, ruptured blood vessels, or a host of other factors. These etiological factors have very different treatment implications. Note that the reliability of symptom detection is not what is at issue. Reliably diagnosing headaches does not translate into reliably diagnosing the underlying functional entity, which after all is the crucial factor for treatment decisions. In the same way, the increasing reliability of DSM diagnoses is of little consolation in and of itself. The DSM system specifically eschews the primary importance of functional processes: \"The approach taken in DSM-III is atheoretical with regard to etiology or patho-physiological process\" (American Psychiatric Association, 1980, p. 7). This spirit of etiological agnosticism is carried forward in the most recent DSM incarnation. It is meant to encourage users from widely varying schools of psychology to use the same classification system. Although integration is a laudable goal, the price paid may have been too high (Follette & Hayes, 1992). For example, the link between syndromal categories and biological markers or change processes has been consistently disappointing. To date, compellingly sensitive and specific physiological markers have not been identified for any psychiatric syndrome (Hoes, 1986). Similarly, the link between syndromes and differential treatment has long been known to be weak (see Hayes et al., 1987). We still do not have compelling evidence that syndromal classification contributes substantially to treatment outcome (Hayes et al., 1987). Even in those few instances and not others, mechanisms of change are often unclear of unexamined (Follette, 1995), in part because syndromal categories give researchers few leads about where even to look. Without attention to etiology, treatment utility, and pathological process, the current syndromal system seems unlikely to evolve rapidly into a functional, theoretically relevant system. 
Functional Classification In a functional approach to classification, the topographical characteristics of any particular individual's behavior is not the basis for classification; instead, behaviors and sets of behaviors are organized by the functional processes that are thought to have produced and maintained them. This functional method is inherently less direct and naive than a syndromal approach, as it requires the application of pre-existing information about psychological processes to specific response forms. It thus integrates at least rudimentary forms of theory into the classification strategy, in sharp contrast with the atheoretical goals of the DSM system. Functional Diagnostic Dimensions as a Method of Functional Classification Classical functional analysis is the most dominant example of a functional classification system. It consists of six steps (Hayes & Follette, 1992) -Step 1: identify potentially relevant characterist", "title": "" }, { "docid": "ef9947c8f478d6274fcbcf8c9e300806", "text": "The introduction in 1998 of multi-detector row computed tomography (CT) by the major CT vendors was a milestone with regard to increased scan speed, improved z-axis spatial resolution, and better utilization of the available x-ray power. In this review, the general technical principles of multi-detector row CT are reviewed as they apply to the established four- and eight-section systems, the most recent 16-section scanners, and future generations of multi-detector row CT systems. Clinical examples are used to demonstrate both the potential and the limitations of the different scanner types. When necessary, standard single-section CT is referred to as a common basis and starting point for further developments. Another focus is the increasingly important topic of patient radiation exposure, successful dose management, and strategies for dose reduction. Finally, the evolutionary steps from traditional single-section spiral image-reconstruction algorithms to the most recent approaches toward multisection spiral reconstruction are traced.", "title": "" }, { "docid": "de9e0e080ec3210d771bfffb426e0245", "text": "PURPOSE\nTo compare adults who stutter with and without support group experience on measures of self-esteem, self-efficacy, life satisfaction, self-stigma, perceived stuttering severity, perceived origin and future course of stuttering, and importance of fluency.\n\n\nMETHOD\nParticipants were 279 adults who stutter recruited from the National Stuttering Association and Board Recognized Specialists in Fluency Disorders. Participants completed a Web-based survey comprised of various measures of well-being including the Rosenberg Self-Esteem Scale, Generalized Self-Efficacy Scale, Satisfaction with Life Scale, a measure of perceived stuttering severity, the Self-Stigma of Stuttering Scale, and other stuttering-related questions.\n\n\nRESULTS\nParticipants with support group experience as a whole demonstrated lower internalized stigma, were more likely to believe that they would stutter for the rest of their lives, and less likely to perceive production of fluent speech as being highly or moderately important when talking to other people, compared to participants with no support group experience. Individuals who joined support groups to help others feel better about themselves reported higher self-esteem, self-efficacy, and life satisfaction, and lower internalized stigma and perceived stuttering severity, compared to participants with no support group experience. 
Participants who stutter as an overall group demonstrated similar levels of self-esteem, higher self-efficacy, and lower life satisfaction compared to averages from normative data for adults who do not stutter.\n\n\nCONCLUSIONS\nFindings support the notion that self-help support groups limit internalization of negative attitudes about the self, and that focusing on helping others feel better in a support group context is linked to higher levels of psychological well-being.\n\n\nEDUCATIONAL OBJECTIVES\nAt the end of this activity the reader will be able to: (a) describe the potential psychological benefits of stuttering self-help support groups for people who stutter, (b) contrast between important aspects of well-being including self-esteem self-efficacy, and life satisfaction, (c) summarize differences in self-esteem, self-efficacy, life satisfaction, self-stigma, perceived stuttering severity, and perceptions of stuttering between adults who stutter with and without support group experience, (d) summarize differences in self-esteem, self-efficacy, and life satisfaction between adults who stutter and normative data for adults who do not stutter.", "title": "" }, { "docid": "c8e34c208f11c367e1f131edaa549c20", "text": "Recently one dimensional (1-D) nanostructured metal-oxides have attracted much attention because of their potential applications in gas sensors. 1-D nanostructured metal-oxides provide high surface to volume ratio, while maintaining good chemical and thermal stabilities with minimal power consumption and low weight. In recent years, various processing routes have been developed for the synthesis of 1-D nanostructured metal-oxides such as hydrothermal, ultrasonic irradiation, electrospinning, anodization, sol-gel, molten-salt, carbothermal reduction, solid-state chemical reaction, thermal evaporation, vapor-phase transport, aerosol, RF sputtering, molecular beam epitaxy, chemical vapor deposition, gas-phase assisted nanocarving, UV lithography and dry plasma etching. A variety of sensor fabrication processing routes have also been developed. Depending on the materials, morphology and fabrication process the performance of the sensor towards a specific gas shows a varying degree of success. This article reviews and evaluates the performance of 1-D nanostructured metal-oxide gas sensors based on ZnO, SnO(2), TiO(2), In(2)O(3), WO(x), AgVO(3), CdO, MoO(3), CuO, TeO(2) and Fe(2)O(3). Advantages and disadvantages of each sensor are summarized, along with the associated sensing mechanism. Finally, the article concludes with some future directions of research.", "title": "" }, { "docid": "35fbdf776186afa7d8991fa4ff22503d", "text": "Lang Linguist Compass 2016; 10: 701–719 wileyo Abstract Research and industry are becoming more and more interested in finding automatically the polarised opinion of the general public regarding a specific subject. The advent of social networks has opened the possibility of having access to massive blogs, recommendations, and reviews. The challenge is to extract the polarity from these data, which is a task of opinion mining or sentiment analysis. The specific difficulties inherent in this task include issues related to subjective interpretation and linguistic phenomena that affect the polarity of words. Recently, deep learning has become a popular method of addressing this task. However, different approaches have been proposed in the literature. 
This article provides an overview of deep learning for sentiment analysis in order to place these approaches in context.", "title": "" }, { "docid": "5fa019a88de4a1683ee63b2a25f8c285", "text": "Metabolomics is increasingly being applied towards the identification of biomarkers for disease diagnosis, prognosis and risk prediction. Unfortunately among the many published metabolomic studies focusing on biomarker discovery, there is very little consistency and relatively little rigor in how researchers select, assess or report their candidate biomarkers. In particular, few studies report any measure of sensitivity, specificity, or provide receiver operator characteristic (ROC) curves with associated confidence intervals. Even fewer studies explicitly describe or release the biomarker model used to generate their ROC curves. This is surprising given that for biomarker studies in most other biomedical fields, ROC curve analysis is generally considered the standard method for performance assessment. Because the ultimate goal of biomarker discovery is the translation of those biomarkers to clinical practice, it is clear that the metabolomics community needs to start “speaking the same language” in terms of biomarker analysis and reporting-especially if it wants to see metabolite markers being routinely used in the clinic. In this tutorial, we will first introduce the concept of ROC curves and describe their use in single biomarker analysis for clinical chemistry. This includes the construction of ROC curves, understanding the meaning of area under ROC curves (AUC) and partial AUC, as well as the calculation of confidence intervals. The second part of the tutorial focuses on biomarker analyses within the context of metabolomics. This section describes different statistical and machine learning strategies that can be used to create multi-metabolite biomarker models and explains how these models can be assessed using ROC curves. In the third part of the tutorial we discuss common issues and potential pitfalls associated with different analysis methods and provide readers with a list of nine recommendations for biomarker analysis and reporting. To help readers test, visualize and explore the concepts presented in this tutorial, we also introduce a web-based tool called ROCCET (ROC Curve Explorer & Tester, http://www.roccet.ca ). ROCCET was originally developed as a teaching aid but it can also serve as a training and testing resource to assist metabolomics researchers build biomarker models and conduct a range of common ROC curve analyses for biomarker studies.", "title": "" }, { "docid": "28f9a2b2f6f4e90de20c6af78727b131", "text": "The detection and potential removal of duplicates is desirable for a number of reasons, such as to reduce the need for unnecessary storage and computation, and to provide users with uncluttered search results. This paper describes an investigation into the application of scalable simhash and shingle state of the art duplicate detection algorithms for detecting near duplicate documents in the CiteSeerX digital library. We empirically explored the duplicate detection methods and evaluated their performance and application to academic documents and identified good parameters for the algorithms. We also analyzed the types of near duplicates identified by each algorithm. The highest F-scores achieved were 0.91 and 0.99 for the simhash and shingle-based methods respectively. 
The shingle-based method also identified a larger variety of duplicate types than the simhash-based method.", "title": "" }, { "docid": "33b2c5abe122a66b73840506aa3b443e", "text": "Semantic role labeling, the computational identification and labeling of arguments in text, has become a leading task in computational linguistics today. Although the issues for this task have been studied for decades, the availability of large resources and the development of statistical machine learning methods have heightened the amount of effort in this field. This special issue presents selected and representative work in the field. This overview describes linguistic background of the problem, the movement from linguistic theories to computational practice, the major resources that are being used, an overview of steps taken in computational systems, and a description of the key issues and results in semantic role labeling (as revealed in several international evaluations). We assess weaknesses in semantic role labeling and identify important challenges facing the field. Overall, the opportunities and the potential for useful further research in semantic role labeling are considerable.", "title": "" }, { "docid": "b248655d158da77d257a243ee331aa34", "text": "Paraphrase identification is a fundamental task in natural language process areas. During the process of fulfilling this challenge, different features are exploited. Semantically equivalence and syntactic similarity are of the most importance. Apart from advance feature extraction, deep learning based models are also proven their promising in natural language process jobs. As a result in this research, we adopted an interactive representation to modelling the relationship between two sentences not only on word level, but also on phrase and sentence level by employing convolution neural network to conduct paraphrase identification by using semantic and syntactic features at the same time. The experimental study on commonly used MSRP has shown the proposed method's promising potential.", "title": "" }, { "docid": "31d14e88b7c1aa953c1efac75da26d24", "text": "This session will focus on ways that Python is being used to successfully facilitate introductory computer science courses. After a brief introduction, we will present three different models for CS1 and CS2 using Python. Attendees will then participate in a discussion/question-answer session considering the advantages and challenges of using Python in the introductory courses. The presenters will focus on common issues, both positive and negative, that have arisen from the inclusion of Python in the introductory computer science curriculum as well as the impact that this can have on the entire computer science curriculum.", "title": "" }, { "docid": "d39ada44eb3c1c9b5dfa1abd0f1fbc22", "text": "The ability to computationally predict whether a compound treats a disease would improve the economy and success rate of drug approval. This study describes Project Rephetio to systematically model drug efficacy based on 755 existing treatments. First, we constructed Hetionet (neo4j.het.io), an integrative network encoding knowledge from millions of biomedical studies. Hetionet v1.0 consists of 47,031 nodes of 11 types and 2,250,197 relationships of 24 types. Data were integrated from 29 public resources to connect compounds, diseases, genes, anatomies, pathways, biological processes, molecular functions, cellular components, pharmacologic classes, side effects, and symptoms. 
Next, we identified network patterns that distinguish treatments from non-treatments. Then, we predicted the probability of treatment for 209,168 compound-disease pairs (het.io/repurpose). Our predictions validated on two external sets of treatment and provided pharmacological insights on epilepsy, suggesting they will help prioritize drug repurposing candidates. This study was entirely open and received realtime feedback from 40 community members.", "title": "" }, { "docid": "895f0424cb71c79b86ecbd11a4f2eb8e", "text": "A chronic alcoholic who had also been submitted to partial gastrectomy developed a syndrome of continuous motor unit activity responsive to phenytoin therapy. There were signs of minimal distal sensorimotor polyneuropathy. Symptoms of the syndrome of continuous motor unit activity were fasciculation, muscle stiffness, myokymia, impaired muscular relaxation and percussion myotonia. Electromyography at rest showed fasciculation, doublets, triplets, multiplets, trains of repetitive discharges and myotonic discharges. Trousseau's and Chvostek's signs were absent. No abnormality of serum potassium, calcium, magnesium, creatine kinase, alkaline phosphatase, arterial blood gases and pH were demonstrated, but the serum Vitamin B12 level was reduced. The electrophysiological findings and muscle biopsy were compatible with a mixed sensorimotor polyneuropathy. Tests of neuromuscular transmission showed a significant decrement in the amplitude of the evoked muscle action potential in the abductor digiti minimi on repetitive nerve stimulation. These findings suggest that hyperexcitability and hyperactivity of the peripheral motor axons underlie the syndrome of continuous motor unit activity in the present case. Ein chronischer Alkoholiker, mit subtotaler Gastrectomie, litt an einem Syndrom dauernder Muskelfaseraktivität, das mit Diphenylhydantoin behandelt wurde. Der Patient wies minimale Störungen im Sinne einer distalen sensori-motorischen Polyneuropathie auf. Die Symptome dieses Syndroms bestehen in: Fazikulationen, Muskelsteife, Myokymien, eine gestörte Erschlaffung nach der Willküraktivität und eine Myotonie nach Beklopfen des Muskels. Das Elektromyogramm in Ruhe zeigt: Faszikulationen, Doublets, Triplets, Multiplets, Trains repetitiver Potentiale und myotonische Entladungen. Trousseau- und Chvostek-Zeichen waren nicht nachweisbar. Gleichzeitig lagen die Kalium-, Calcium-, Magnesium-, Kreatinkinase- und Alkalinphosphatase-Werte im Serumspiegel sowie O2, CO2 und pH des arteriellen Blutes im Normbereich. Aber das Niveau des Vitamin B12 im Serumspiegel war deutlich herabgesetzt. Die muskelbioptische und elektrophysiologische Veränderungen weisen auf eine gemischte sensori-motorische Polyneuropathie hin. Die Abnahme der Amplitude der evozierten Potentiale, vom M. abductor digiti minimi abgeleitet, bei repetitiver Reizung des N. ulnaris, stellten eine Störung der neuromuskulären Überleitung dar. Aufgrund unserer klinischen und elektrophysiologischen Befunde könnten wir die Hypererregbarkeit und Hyperaktivität der peripheren motorischen Axonen als Hauptmechanismus des Syndroms dauernder motorischer Einheitsaktivität betrachten.", "title": "" }, { "docid": "062c970a14ac0715ccf96cee464a4fec", "text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. 
This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.", "title": "" }, { "docid": "c7eb67093a6f00bec0d96607e6384378", "text": "Two primary simulations have been developed and are being updated for the Mars Smart Lander Entry, Descent, and Landing (EDL). The high fidelity engineering end-to-end EDL simulation that is based on NASA Langley’s Program to Optimize Simulated Trajectories (POST) and the end-to-end real-time, hardware-in-the-loop simulation test bed, which is based on NASA JPL’s Dynamics Simulator for Entry, Descent and Surface landing (DSENDS). This paper presents the status of these Mars Smart Lander EDL end-to-end simulations at this time. Various models, capabilities, as well as validation and verification for these simulations are discussed.", "title": "" }, { "docid": "cd52e8b57646a81d985b2fab9083bda9", "text": "Tagging of faces present in a photo or video at shot level has multiple applications related to indexing and retrieval. Face clustering, which aims to group similar faces corresponding to an individual, is a fundamental step of face tagging. We present a progressive method of applying easy-to-hard grouping technique that applies increasingly sophisticated feature descriptors and classifiers on reducing number of faces from each of the iteratively generated clusters. Our primary goal is to design a cost effective solution for deploying it on low-power devices like mobile phones. First, the method initiates the clustering process by applying K-Means technique with relatively large K value on simple LBP features to generate the first set of high precision clusters. Multiple clusters generated for each individual (low recall) are then progressively merged by applying linear and non-linear subspace modelling strategies on custom selected sophisticated features like Gabor filter, Gabor Jets, and Spin LGBP (Local Gabor Binary Patterns) with spatially spinning bin support for histogram computation. 
Our experiments on the standard face databases like YouTube Faces, YouTube Celebrities, Indian Movie Face database, eNTERFACE, Multi-Pie, CK+, MindReading and internally collected mobile phone samples demonstrate the effectiveness of proposed approach as compared to state-of-the-art methods and a commercial solution on a mobile phone.", "title": "" }, { "docid": "aeb3e0b089e658b532b3ed6c626898dd", "text": "Semantics is seen as the key ingredient in the next phase of the Web infrastructure as well as the next generation of information systems applications. In this context, we review some of the reservations expressed about the viability of the Semantic Web. We respond to these by identifying a Semantic Technology that supports the key capabilities also needed to realize the Semantic Web vision, namely representing, acquiring and utilizing knowledge. Given that scalability is a key challenge, we briefly review our observations from developing three classes of real world applications and corresponding technology components: search/browsing, integration, and analytics. We distinguish this proven technology from some parts of the Semantic Web approach and offer subjective remarks which we hope will foster additional debate.", "title": "" }, { "docid": "ddb7d3fc66cf693e3283caa1d7f988a1", "text": "Customer reviews on online shopping platforms have potential commercial value. Realizing business intelligence by automatically extracting customers’ emotional attitude toward product features from a large amount of reviews, through fine-grained sentiment analysis, is of great importance. Long short-term memory (LSTM) network performs well in sentiment analysis of English reviews. A novel method that extended the network to Chinese product reviews was proposed to improve the performance of sentiment analysis on Chinese product reviews. Considering the differences between Chinese and English, a series of revisions were made to the Chinese corpus, such as word segmentation and stop word pruning. The review corpora vectorization was achieved by word2vec, and a LSTM network model was established based on the mathematical theories of the recurrent neural network. Finally, the feasibility of the LSTM model in finegrained sentiment analysis on Chinese product reviews was verified via experiment. Results demonstrate that the maximum accuracy of the experiment is 90.74%, whereas the maximum of F-score is 65.47%. The LSTM network proves to be feasible and effective when applied to sentiment analysis on product features of Chinese customer reviews. The performance of the LSTM network on fine-grained sentiment analysis is noticeably superior to that of the traditional machine learning method. This study provides references for fine-grained sentiment analysis on Chinese customer", "title": "" }, { "docid": "11333e88e8ff98422bdbf7d7846e9807", "text": "As a fundamental task, document similarity measure has broad impact to document-based classification, clustering and ranking. Traditional approaches represent documents as bag-of-words and compute document similarities using measures like cosine, Jaccard, and dice. However, entity phrases rather than single words in documents can be critical for evaluating document relatedness. Moreover, types of entities and links between entities/words are also informative. We propose a method to represent a document as a typed heterogeneous information network (HIN), where the entities and relations are annotated with types. 
Multiple documents can be linked by the words and entities in the HIN. Consequently, we convert the document similarity problem to a graph distance problem. Intuitively, there could be multiple paths between a pair of documents. We propose to use the meta-path defined in HIN to compute distance between documents. Instead of burdening user to define meaningful meta paths, an automatic method is proposed to rank the meta-paths. Given the meta-paths associated with ranking scores, an HIN-based similarity measure, KnowSim, is proposed to compute document similarities. Using Freebase, a well-known world knowledge base, to conduct semantic parsing and construct HIN for documents, our experiments on 20Newsgroups and RCV1 datasets show that KnowSim generates impressive high-quality document clustering.", "title": "" } ]
scidocsrr
9533e12030829a8be8bc9b5ea1b1f59b
EmotionCheck: leveraging bodily signals and false feedback to regulate our emotions
[ { "docid": "e88bac9a4023b1c741c720e034669109", "text": "We present AffectAura, an emotional prosthetic that allows users to reflect on their emotional states over long periods of time. We designed a multimodal sensor set-up for continuous logging of audio, visual, physiological and contextual data, a classification scheme for predicting user affective state and an interface for user reflection. The system continuously predicts a user's valence, arousal and engage-ment, and correlates this with information on events, communications and data interactions. We evaluate the interface through a user study consisting of six users and over 240 hours of data, and demonstrate the utility of such a reflection tool. We show that users could reason forward and backward in time about their emotional experiences using the interface, and found this useful.", "title": "" }, { "docid": "c767a9b6808b4556c6f55dd406f8eb0d", "text": "BACKGROUND\nInterest in mindfulness has increased exponentially, particularly in the fields of psychology and medicine. The trait or state of mindfulness is significantly related to several indicators of psychological health, and mindfulness-based therapies are effective at preventing and treating many chronic diseases. Interest in mobile applications for health promotion and disease self-management is also growing. Despite the explosion of interest, research on both the design and potential uses of mindfulness-based mobile applications (MBMAs) is scarce.\n\n\nOBJECTIVE\nOur main objective was to study the features and functionalities of current MBMAs and compare them to current evidence-based literature in the health and clinical setting.\n\n\nMETHODS\nWe searched online vendor markets, scientific journal databases, and grey literature related to MBMAs. We included mobile applications that featured a mindfulness-based component related to training or daily practice of mindfulness techniques. We excluded opinion-based articles from the literature.\n\n\nRESULTS\nThe literature search resulted in 11 eligible matches, two of which completely met our selection criteria-a pilot study designed to evaluate the feasibility of a MBMA to train the practice of \"walking meditation,\" and an exploratory study of an application consisting of mood reporting scales and mindfulness-based mobile therapies. The online market search eventually analyzed 50 available MBMAs. Of these, 8% (4/50) did not work, thus we only gathered information about language, downloads, or prices. The most common operating system was Android. Of the analyzed apps, 30% (15/50) have both a free and paid version. MBMAs were devoted to daily meditation practice (27/46, 59%), mindfulness training (6/46, 13%), assessments or tests (5/46, 11%), attention focus (4/46, 9%), and mixed objectives (4/46, 9%). We found 108 different resources, of which the most used were reminders, alarms, or bells (21/108, 19.4%), statistics tools (17/108, 15.7%), audio tracks (15/108, 13.9%), and educational texts (11/108, 10.2%). Daily, weekly, monthly statistics, or reports were provided by 37% (17/46) of the apps. 28% (13/46) of them permitted access to a social network. No information about sensors was available. The analyzed applications seemed not to use any external sensor. English was the only language of 78% (39/50) of the apps, and only 8% (4/50) provided information in Spanish. 20% (9/46) of the apps have interfaces that are difficult to use. 
No specific apps exist for professionals or, at least, for both profiles (users and professionals). We did not find any evaluations of health outcomes resulting from the use of MBMAs.\n\n\nCONCLUSIONS\nWhile a wide selection of MBMAs seems to be available to interested people, this study still shows an almost complete lack of evidence supporting the usefulness of those applications. We found no randomized clinical trials evaluating the impact of these applications on mindfulness training or health indicators, and the potential for mobile mindfulness applications remains largely unexplored.", "title": "" } ]
[ { "docid": "c9dea3f2c6a1c3adec1e77d76cd5a329", "text": "With the widespread applications of deep convolutional neural networks (DCNNs), it becomes increasingly important for DCNNs not only to make accurate predictions but also to explain how they make their decisions. In this work, we propose a CHannel-wise disentangled InterPretation (CHIP) model to give the visual interpretation to the predictions of DCNNs. The proposed model distills the class-discriminative importance of channels in networks by utilizing the sparse regularization. Here, we first introduce the network perturbation technique to learn the model. The proposed model is capable to not only distill the global perspective knowledge from networks but also present the class-discriminative visual interpretation for specific predictions of networks. It is noteworthy that the proposed model is able to interpret different layers of networks without re-training. By combining the distilled interpretation knowledge in different layers, we further propose the Refined CHIP visual interpretation that is both high-resolution and class-discriminative. Experimental results on the standard dataset demonstrate that the proposed model provides promising visual interpretation for the predictions of networks in image classification task compared with existing visual interpretation methods. Besides, the proposed method outperforms related approaches in the application of ILSVRC 2015 weakly-supervised localization task.", "title": "" }, { "docid": "b64602c81c036d30c9a6eec261d2e09f", "text": "In this paper, we discuss the need for an effective representation of video data to aid analysis of large datasets of video clips and describe a prototype developed to explore the use of spatio-temporal interest points for action recognition. Our focus is on ways that computation can assist analysis.", "title": "" }, { "docid": "1fe8a60595463038046be38b747565e3", "text": "Recent WiFi standards use Channel State Information (CSI) feedback for better MIMO and rate adaptation. CSI provides detailed information about current channel conditions for different subcarriers and spatial streams. In this paper, we show that CSI feedback from a client to the AP can be used to recognize different fine-grained motions of the client. We find that CSI can not only identify if the client is in motion or not, but also classify different types of motions. To this end, we propose APsense, a framework that uses CSI to estimate the sensor patterns of the client. It is observed that client's sensor (e.g. accelerometer) values are correlated to CSI values available at the AP. We show that using simple machine learning classifiers, APsense can classify different motions with accuracy as high as 90%.", "title": "" }, { "docid": "fa38b2d63562699af5200b5efa476f64", "text": "Hashtags, originally introduced in Twitter, are now becoming the most used way to tag short messages in social networks since this facilitates subsequent search, classification and clustering over those messages. However, extracting information from hashtags is difficult because their composition is not constrained by any (linguistic) rule and they usually appear in short and poorly written messages which are difficult to analyze with classic IR techniques. In this paper we address two challenging problems regarding the “meaning of hashtags”— namely, hashtag relatedness and hashtag classification — and we provide two main contributions. 
First we build a novel graph upon hashtags and (Wikipedia) entities drawn from the tweets by means of topic annotators (such as TagME); this graph will allow us to model in an efficacious way not only classic co-occurrences but also semantic relatedness among hashtags and entities, or between entities themselves. Based on this graph, we design algorithms that significantly improve state-of-the-art results upon known publicly available datasets. The second contribution is the construction and the public release to the research community of two new datasets: the former is a new dataset for hashtag relatedness, the latter is a dataset for hashtag classification that is up to two orders of magnitude larger than the existing ones. These datasets will be used to show the robustness and efficacy of our approaches, showing improvements in F1 up to two-digits in percentage (absolute).", "title": "" }, { "docid": "b1c1f9cdce2454508fc4a5c060dc1c57", "text": "We present a reduced-order approach for robust, dynamic, and efficient bipedal locomotion control, culminating in 3D balancing and walking with ATRIAS, a heavily underactuated legged robot. These results are a development toward solving a number of enduring challenges in bipedal locomotion: achieving robust 3D gaits at various speeds and transitioning between them, all while minimally draining on-board energy supplies. Our reduced-order control methodology works by extracting and exploiting general dynamical behaviors from the spring-mass model of bipedal walking. When implemented on a robot with spring-mass passive dynamics, e.g. ATRIAS, this controller is sufficiently robust to balance while subjected to pushes, kicks, and successive dodgeball strikes. The controller further allowed smooth transitions between stepping in place and walking at a variety of speeds (up to 1.2 m/s). The resulting gait dynamics also match qualitatively to the reduced-order model, and additionally, measurements of human walking. We argue that the presented locomotion performance is compelling evidence of the effectiveness of the presented approach; both the control concepts and the practice of building robots with passive dynamics to accommodate them. INTRODUCTION We present 3D bipedal walking control for the dynamic bipedal robot, ATRIAS (Fig. 1), by building controllers on a foundation of insights from a reduced-order “spring-mass” math model. This work is aimed at tackling an enduring set of challenges in bipedal robotics: fast 3D locomotion that is efficient and robust to disturbances. Further, we want the ability to transition between gaits of different speeds, including slowing to and starting from zero velocity. This set of demands is challenging from a generalized formal control approach because of various inconvenient mathematical properties; legged systems are typically cast as nonlinear, hybrid-dynamical, and nonholonomic systems, which at the same time, because of the very nature of walking, require highly robust control algorithms. Bipedal robots are also increasingly becoming underactuated, i.e. a system with fewer actuators than degrees of freedom [1]. Underactuation is problematic for nonlinear control methods; as degrees of underactuation increase, handy techniques like feedback-linearization decline in the scope of their utility. FIGURE 1: ATRIAS, A HUMAN-SCALE BIPEDAL “SPRING-MASS” ROBOT DESIGNED TO WALK AND RUN IN THREE DIMENSIONS. 
Whenever a robot does not have an actuated foot planted on rigid ground, it is effectively underactuated. As a result, the faster legged robots move and the rougher the terrain they encounter, it becomes increasingly impractical to avoid these underactuated domains. Further, there are compelling reasons, both mechanical and dynamical, for removing actuators from certain degrees of freedom (see more in the robot design section). With these facts in mind, our robotic platform is built to embody an underactuated and compliant “spring-mass” model (Fig. 2A), and our control reckons with the severe underactuation that results. ATRIAS has twelve degrees of freedom when walking, but just six actuators. However, by numerically analyzing the spring-mass model, we identify important targets and structures of control that can be regulated on the full-order robot which approximates it. We organize the remainder of this paper as follows. 1) We begin by surveying existing control methods for 3D, underactuated, and spring-mass locomotion. 2) The design philosophy of our spring-mass robot, ATRIAS, and its implementation are briefly described. 3) We then build a controller incrementally from a 1D idealized model, to a 3D model, to the full 12-degree-of-freedom robot. 4) We show that this controller can regulate speeds ranging from 0 m/s to 1.2 m/s and transition between them. 5) With a set of perturbation experiments, we demonstrate the robustness of the controller and 6) argue in our conclusions for the thoughtful cooperation between the tasks of robot design and control. FIGURE 2: THE DESIGN PHILOSOPHY OF ATRIAS, WHICH MAXIMALLY EMBODIES THE “SPRING-MASS” MODEL OF WALKING AND RUNNING. A) THE SPRING-MASS MODEL WITH A POINT MASS BODY AND MASSLESS LEG SPRING. B) ATRIAS WITH A VIRTUAL LEG SPRING OVERLAID.", "title": "" }, { "docid": "324dc3f410eb89f096dd72bffe9616bc", "text": "The use of the Internet by older adults is growing at a substantial rate. They are becoming an increasingly important potential market for electronic commerce. However, previous researchers and practitioners have focused mainly on the youth market and paid less attention to issues related to the online behaviors of older consumers. To bridge the gap, the purpose of this study is to develop a better understanding of the drivers and barriers affecting older consumers’ intention to shop online. To this end, this study is developed by integrating the Unified Theory of Acceptance and Use of Technology (UTAUT) and innovation resistance theory. By comparing younger consumers with their older counterparts in terms of gender, the findings indicate that the major factors driving older adults toward online shopping are performance expectation and social influence, which is the same as for younger consumers. On the other hand, the major barriers include value, risk, and tradition, which differ from those of younger consumers. Consequently, it is notable that older adults show no gender differences with regard to the drivers and barriers.", "title": "" }, { "docid": "15ec9bfa4c3a989fb67dce4f1fb172c5", "text": "This paper proposes ReBNet, an end-to-end framework for training reconfigurable binary neural networks on software and developing efficient accelerators for execution on FPGA. Binary neural networks offer an intriguing opportunity for deploying large-scale deep learning models on resource-constrained devices. 
Binarization reduces the memory footprint and replaces the power-hungry matrix-multiplication with light-weight XnorPopcount operations. However, binary networks suffer from a degraded accuracy compared to their fixed-point counterparts. We show that the state-of-the-art methods for optimizing binary networks accuracy, significantly increase the implementation cost and complexity. To compensate for the degraded accuracy while adhering to the simplicity of binary networks, we devise the first reconfigurable scheme that can adjust the classification accuracy based on the application. Our proposition improves the classification accuracy by representing features with multiple levels of residual binarization. Unlike previous methods, our approach does not exacerbate the area cost of the hardware accelerator. Instead, it provides a tradeoff between throughput and accuracy while the area overhead of multi-level binarization is negligible.", "title": "" }, { "docid": "e1a2dc853f96f5b01fe89e5462bdcb52", "text": "Natural language generation from visual inputs has attracted extensive research attention recently. Generating poetry from visual content is an interesting but very challenging task. We propose and address the new multimedia task of generating classical Chinese poetry from image streams. In this paper, we propose an Images2Poem model with a selection mechanism and an adaptive self-attention mechanism for the problem. The model first selects representative images to summarize the image stream. During decoding, it adaptively pays attention to the information from either source-side image stream or target-side previously generated characters. It jointly summarizes the images and generates relevant, high-quality poetry from image streams. Experimental results demonstrate the effectiveness of the proposed approach. Our model outperforms baselines in different human evaluation metrics.", "title": "" }, { "docid": "0adb426bb2144baa149a3c1e97db55ee", "text": "Chatbots have drawn significant attention of late in both industry and academia. For most task completion bots in the industry, human intervention is the only means of avoiding mistakes in complex real-world cases. However, to the best of our knowledge, there is no existing research work modeling the collaboration between task completion bots and human workers. In this paper, we introduce CoChat, a dialog management framework to enable effective collaboration between bots and human workers. In CoChat, human workers can introduce new actions at any time to handle previously unseen cases. We propose a memory-enhanced hierarchical RNN (MemHRNN) to handle the one-shot learning challenges caused by instantly introducing new actions in CoChat. Extensive experiments on real-world datasets well demonstrate that CoChat can relieve most of the human workers’ workload, and get better user satisfaction rates comparing to other state-of-the-art frameworks.", "title": "" }, { "docid": "210e26d5d11582be68337a0cc387ab8e", "text": "This paper presents the results of experiments carried out with the goal of applying the machine learning techniques of reinforcement learning and neural networks with reinforcement learning to the game of Tetris. Tetris is a well-known computer game that can be played either by a single player or competitively with slight variations, toward the end of accumulating a high score or defeating the opponent. 
The fundamental hypothesis of this paper is that if the points earned in Tetris are used as the reward function for a machine learning agent, then that agent should be able to learn to play Tetris without other supervision. Toward this end, a state-space that summarizes the essential feature of the Tetris board is designed, high-level actions are developed to interact with the game, and agents are trained using Q-Learning and neural networks. As a result of these efforts, agents learn to play Tetris and to compete with other players. While the learning agents fail to accumulate as many points as the most advanced AI agents, they do learn to play more efficiently.", "title": "" }, { "docid": "d9366c0456eedecd396a9aa1dbc31e35", "text": "A connectionist model is presented, the TraceLink model, that implements an autonomous \"off-line\" consolidation process. The model consists of three subsystems: (1) a trace system (neocortex), (2) a link system (hippocampus and adjacent regions), and (3) a modulatory system (basal forebrain and other areas). The model is able to account for many of the characteristics of anterograde and retrograde amnesia, including Ribot gradients, transient global amnesia, patterns of shrinkage of retrograde amnesia, and correlations between anterograde and retrograde amnesia or the absence thereof (e.g., in isolated retrograde amnesia). In addition, it produces normal forgetting curves and can exhibit permastore. It also offers an explanation for the advantages of learning under high arousal for long-term retention.", "title": "" }, { "docid": "45de40eb5661ff0f44392e255c45f646", "text": "Cloud computing is a new computing paradigm that is gaining increased popularity. More and more sensitive user data are stored in the cloud. The privacy of users’ access pattern to the data should be protected to prevent un-trusted cloud servers from inferring users’ private information or launching stealthy attacks. Meanwhile, the privacy protection schemes should be efficient as cloud users often use thin client devices to access the cloud. In this paper, we propose a lightweight scheme to protect the privacy of data access pattern. Comparing with existing state-of-the-art solutions, our scheme incurs less communication and computational overhead, requires significantly less storage space at the cloud user, while consuming similar storage space at the cloud server. Rigorous proofs and extensive evaluations have been conducted to demonstrate that the proposed scheme can hide the data access pattern effectively in the long run after a reasonable number of accesses have been made.", "title": "" }, { "docid": "01d85de1c78a6f7eb5b65dacac29baf8", "text": "A chatbot is a conventional agent that is able to interact with users in a given subject by using natural language. The conversations in most chatbot are still using a keyboard as the input. Keyboard input is considered ineffective as the conversation is not natural without any saying and a conversation is not just about words. Therefore, this paper propose a design of a chatbot with avatar and voice interaction to make a conversation more alive. This proposed approach method will come from using several API and using its output as another input to next API. It would take speech recognition to take input from user, then proceed it to chatbot API to receive the chatbot reply in a text form. The reply will be processed to text-to-speech recognition and created a spoken, audio version of the reply. 
Last, the computer will render an avatar whose gesture and lips are sync with the audio reply. This design would make every customer service or any service with human interaction can use it to make interaction more natural. This design can be further explored with additional tool such as web camera to make the agent can analyze the user's emotion and reaction.", "title": "" }, { "docid": "ea5b41179508151987a1f6e6d154d7a6", "text": "Despite the considerable quantity of research directed towards multitouch technologies, a set of standardized UI components have not been developed. Menu systems provide a particular challenge, as traditional GUI menus require a level of pointing precision inappropriate for direct finger input. Marking menus are a promising alternative, but have yet to be investigated or adapted for use within multitouch systems. In this paper, we first investigate the human capabilities for performing directional chording gestures, to assess the feasibility of multitouch marking menus. Based on the positive results collected from this study, and in particular, high angular accuracy, we discuss our new multitouch marking menu design, which can increase the number of items in a menu, and eliminate a level of depth. A second experiment showed that multitouch marking menus perform significantly faster than traditional hierarchal marking menus, reducing acquisition times in both novice and expert usage modalities.", "title": "" }, { "docid": "87993df44973bd83724baace13ea1aa7", "text": "OBJECTIVE\nThe objective of this research was to determine the relative impairment associated with conversing on a cellular telephone while driving.\n\n\nBACKGROUND\nEpidemiological evidence suggests that the relative risk of being in a traffic accident while using a cell phone is similar to the hazard associated with driving with a blood alcohol level at the legal limit. The purpose of this research was to provide a direct comparison of the driving performance of a cell phone driver and a drunk driver in a controlled laboratory setting.\n\n\nMETHOD\nWe used a high-fidelity driving simulator to compare the performance of cell phone drivers with drivers who were intoxicated from ethanol (i.e., blood alcohol concentration at 0.08% weight/volume).\n\n\nRESULTS\nWhen drivers were conversing on either a handheld or hands-free cell phone, their braking reactions were delayed and they were involved in more traffic accidents than when they were not conversing on a cell phone. By contrast, when drivers were intoxicated from ethanol they exhibited a more aggressive driving style, following closer to the vehicle immediately in front of them and applying more force while braking.\n\n\nCONCLUSION\nWhen driving conditions and time on task were controlled for, the impairments associated with using a cell phone while driving can be as profound as those associated with driving while drunk.\n\n\nAPPLICATION\nThis research may help to provide guidance for regulation addressing driver distraction caused by cell phone conversations.", "title": "" }, { "docid": "f1c4577a013e313d3a0bfdd1f5c9981e", "text": "In this work, a simple and compact transition from substrate integrated waveguide (SIW) to traditional rectangular waveguide is proposed and demonstrated. The substrate of SIW can be easily surface-mounted to the standard flange of the waveguide by creating a flange on the substrate. A longitudinal slot window etched on the broad wall of SIW couples energy between SIW and rectangular waveguide. 
An example of the transition structure is realized at 35 GHz with substrate of RT/Duroid 5880. HFSS simulated result of the transition shows a return loss less than −15 dB over a frequency range of 800 MHz. A back to back connected transition has been fabricated, and the measured results confirm well with the anticipated ones.", "title": "" }, { "docid": "bab7a21f903157fcd0d3e70da4e7261a", "text": "The clinical, electrophysiological and morphological findings (light and electron microscopy of the sural nerve and gastrocnemius muscle) are reported in an unusual case of Guillain-Barré polyneuropathy with an association of muscle hypertrophy and a syndrome of continuous motor unit activity. Fasciculation, muscle stiffness, cramps, myokymia, impaired muscle relaxation and percussion myotonia, with their electromyographic accompaniments, were abolished by peripheral nerve blocking, carbamazepine, valproic acid or prednisone therapy. Muscle hypertrophy, which was confirmed by morphometric data, diminished 2 months after the beginning of prednisone therapy. Electrophysiological and nerve biopsy findings revealed a mixed process of axonal degeneration and segmental demyelination. Muscle biopsy specimen showed a marked predominance and hypertrophy of type-I fibres and atrophy, especially of type-II fibres.", "title": "" }, { "docid": "4df52d891c63975a1b9d4cd6c74571db", "text": "DDoS attacks have been a persistent threat to network availability for many years. Most of the existing mitigation techniques attempt to protect against DDoS by filtering out attack traffic. However, as critical network resources are usually static, adversaries are able to bypass filtering by sending stealthy low traffic from large number of bots that mimic benign traffic behavior. Sophisticated stealthy attacks on critical links can cause a devastating effect such as partitioning domains and networks. In this paper, we propose to defend against DDoS attacks by proactively changing the footprint of critical resources in an unpredictable fashion to invalidate an adversary's knowledge and plan of attack against critical network resources. Our present approach employs virtual networks (VNs) to dynamically reallocate network resources using VN placement and offers constant VN migration to new resources. Our approach has two components: (1) a correct-by-construction VN migration planning that significantly increases the uncertainty about critical links of multiple VNs while preserving the VN placement properties, and (2) an efficient VN migration mechanism that identifies the appropriate configuration sequence to enable node migration while maintaining the network integrity (e.g., avoiding session disconnection). We formulate and implement this framework using SMT logic. We also demonstrate the effectiveness of our implemented framework on both PlanetLab and Mininet-based experimentations.", "title": "" }, { "docid": "6f13503bf65ff58b7f0d4f3282f60dec", "text": "Body centric wireless communication is now accepted as an important part of 4th generation (and beyond) mobile communications systems, taking the form of human to human networking incorporating wearable sensors and communications. There are also a number of body centric communication systems for specialized occupations, such as paramedics and fire-fighters, military personnel and medical sensing and support. 
To support these developments there is considerable ongoing research into antennas and propagation for body centric communications systems, and this paper will summarise some of it, including the characterisation of the channel on the body, the optimisation of antennas for these channels, and communications to medical implants where advanced antenna design and characterisation and modelling of the internal body channel are important research needs. In all of these areas both measurement and simulation pose very different and challenging issues to be faced by the researcher.", "title": "" }, { "docid": "c68196f826f2afb61c13a0399d921421", "text": "BACKGROUND\nIndividuals with mild cognitive impairment (MCI) have a substantially increased risk of developing dementia due to Alzheimer's disease (AD). In this study, we developed a multivariate prognostic model for predicting MCI-to-dementia progression at the individual patient level.\n\n\nMETHODS\nUsing baseline data from 259 MCI patients and a probabilistic, kernel-based pattern classification approach, we trained a classifier to distinguish between patients who progressed to AD-type dementia (n = 139) and those who did not (n = 120) during a three-year follow-up period. More than 750 variables across four data sources were considered as potential predictors of progression. These data sources included risk factors, cognitive and functional assessments, structural magnetic resonance imaging (MRI) data, and plasma proteomic data. Predictive utility was assessed using a rigorous cross-validation framework.\n\n\nRESULTS\nCognitive and functional markers were most predictive of progression, while plasma proteomic markers had limited predictive utility. The best performing model incorporated a combination of cognitive/functional markers and morphometric MRI measures and predicted progression with 80% accuracy (83% sensitivity, 76% specificity, AUC = 0.87). Predictors of progression included scores on the Alzheimer's Disease Assessment Scale, Rey Auditory Verbal Learning Test, and Functional Activities Questionnaire, as well as volume/cortical thickness of three brain regions (left hippocampus, middle temporal gyrus, and inferior parietal cortex). Calibration analysis revealed that the model is capable of generating probabilistic predictions that reliably reflect the actual risk of progression. Finally, we found that the predictive accuracy of the model varied with patient demographic, genetic, and clinical characteristics and could be further improved by taking into account the confidence of the predictions.\n\n\nCONCLUSIONS\nWe developed an accurate prognostic model for predicting MCI-to-dementia progression over a three-year period. The model utilizes widely available, cost-effective, non-invasive markers and can be used to improve patient selection in clinical trials and identify high-risk MCI patients for early treatment.", "title": "" } ]
scidocsrr
093282edea65cc5ce4e7a88347b5eab5
Partial Fingerprint Matching through Region-Based Similarity
[ { "docid": "3160df3c3e64635f36a50c8d7fd27f8c", "text": "In this paper, we introduce the Minutia Cylinder-Code (MCC): a novel representation based on 3D data structures (called cylinders), built from minutiae distances and angles. The cylinders can be created starting from a subset of the mandatory features (minutiae position and direction) defined by standards like ISO/IEC 19794-2 (2005). Thanks to the cylinder invariance, fixed-length, and bit-oriented coding, some simple but very effective metrics can be defined to compute local similarities and to consolidate them into a global score. Extensive experiments over FVC2006 databases prove the superiority of MCC with respect to three well-known techniques and demonstrate the feasibility of obtaining a very effective (and interoperable) fingerprint recognition implementation for light architectures.", "title": "" } ]
[ { "docid": "137287318bc2a50feeb026add3f58a43", "text": "BACKGROUND\nThe use of bioactive proteins, such as rhBMP-2, may improve bone regeneration in oral and maxillofacial surgery.\n\n\nPURPOSE\nAnalyze the effect of using bioactive proteins for bone regeneration in implant-based rehabilitation.\n\n\nMATERIALS AND METHODS\nSeven databases were screened. Only clinical trials that evaluated the use of heterologous sources of bioactive proteins for bone formation prior to implant-based rehabilitation were included. Statistical analyses were carried out using a random-effects model by comparing the standardized mean difference between groups for bone formation, and risk ratio for implant survival (P ≤ .05).\n\n\nRESULTS\nSeventeen studies were included in the qualitative analysis, and 16 in the meta-analysis. For sinus floor augmentation, bone grafts showed higher amounts of residual bone graft particles than bioactive treatments (P ≤ .05). While for alveolar ridge augmentation bioactive treatments showed a higher level of bone formation than control groups (P ≤ .05). At 3 years of follow-up, no statistically significant differences were observed for implant survival (P > .05).\n\n\nCONCLUSIONS\nBioactive proteins may improve bone formation in alveolar ridge augmentation, and reduce residual bone grafts in sinus floor augmentation. Further studies are needed to evaluate the long-term effect of using bioactive treatments for implant-based rehabilitation.", "title": "" }, { "docid": "ff7b8957aeedc0805f972bf5bd6923f0", "text": "This study was designed to test the Fundamental Difference Hypothesis (Bley-Vroman, 1988), which states that, whereas children are known to learn language almost completely through (implicit) domain-specific mechanisms, adults have largely lost the ability to learn a language without reflecting on its structure and have to use alternative mechanisms, drawing especially on their problem-solving capacities, to learn a second language. The hypothesis implies that only adults with a high level of verbal analytical ability will reach near-native competence in their second language, but that this ability will not be a significant predictor of success for childhood second language acquisition. A study with 57 adult Hungarian-speaking immigrants confirmed the hypothesis in the sense that very few adult immigrants scored within the range of child arrivals on a grammaticality judgment test, and that the few who did had high levels of verbal analytical ability; this ability was not a significant predictor for childhood arrivals. This study replicates the findings of Johnson and Newport (1989) and provides an explanation for the apparent exceptions in their study. These findings lead to a reconceptualization of the Critical Period Hypothesis: If the scope of this hypothesis is lim-", "title": "" }, { "docid": "cb7dda8f4059e5a66e4a6e26fcda601e", "text": "Purpose – This UK-based research aims to build on the US-based work of Keller and Aaker, which found a significant association between “company credibility” (via a brand’s “expertise” and “trustworthiness”) and brand extension acceptance, hypothesising that brand trust, measured via two correlate dimensions, is significantly related to brand extension acceptance. Design/methodology/approach – Discusses brand extension and various prior, validated influences on its success. Focuses on the construct of trust and develops hypotheses about the relationship of brand trust with brand extension acceptance. 
The hypotheses are then tested on data collected from consumers in the UK. Findings – This paper, using 368 consumer responses to nine, real, low involvement UK product and service brands, finds support for a significant association between the variables, comparable in strength with that between media weight and brand share, and greater than that delivered by the perceived quality level of the parent brand. Originality/value – The research findings, which develop a sparse literature in this linkage area, are of significance to marketing practitioners, since brand trust, already associated with brand equity and brand loyalty, and now with brand extension, needs to be managed and monitored with care. The paper prompts further investigation of the relationship between brand trust and brand extension acceptance in other geographic markets and with other higher involvement categories.", "title": "" }, { "docid": "3f9a46f472ab276c39fb96b78df132ee", "text": "In this paper, we present a novel technique that enables capturing of detailed 3D models from flash photographs integrating shading and silhouette cues. Our main contribution is an optimization framework which not only captures subtle surface details but also handles changes in topology. To incorporate normals estimated from shading, we employ a mesh-based deformable model using deformation gradient. This method is capable of manipulating precise geometry and, in fact, it outperforms previous methods in terms of both accuracy and efficiency. To adapt the topology of the mesh, we convert the mesh into an implicit surface representation and then back to a mesh representation. This simple procedure removes self-intersecting regions of the mesh and solves the topology problem effectively. In addition to the algorithm, we introduce a hand-held setup to achieve multi-view photometric stereo. The key idea is to acquire flash photographs from a wide range of positions in order to obtain a sufficient lighting variation even with a standard flash unit attached to the camera. Experimental results showed that our method can capture detailed shapes of various objects and cope with topology changes well.", "title": "" }, { "docid": "b7521521277f944a9532dc4435a2bda7", "text": "The NDN project investigates Jacobson's proposed evolution from today's host-centric network architecture (IP) to a data-centric network architecture (NDN). This conceptually simple shift has far-reaching implications in how we design, develop, deploy and use networks and applications. The NDN design and development has attracted significant attention from the networking community. To facilitate broader participation in addressing NDN research and development challenges, this tutorial will describe the vision of this new architecture and its basic components and operations.", "title": "" }, { "docid": "b47d53485704f4237e57d220640346a7", "text": "Features of consciousness difficult to understand in terms of conventional neuroscience have evoked application of quantum theory, which describes the fundamental behavior of matter and energy. In this paper we propose that aspects of quantum theory (e.g. quantum coherence) and of a newly proposed physical phenomenon of quantum wave function \"self-collapse\" (objective reduction: OR Penrose, 1994) are essential for consciousness, and occur in cytoskeletal microtubules and other structures within each of the brain's neurons. 
The particular characteristics of microtubules suitable for quantum effects include their crystal-like lattice structure, hollow inner core, organization of cell function and capacity for information processing. We envisage that conformational states of microtubule subunits (tubulins) are coupled to internal quantum events, and cooperatively interact (compute) with other tubulins. We further assume that macroscopic coherent superposition of quantum-coupled tubulin conformational states occurs throughout significant brain volumes and provides the global binding essential to consciousness. We equate the emergence of the microtubule quantum coherence with pre-conscious processing which grows (for up to 500 ms) until the mass energy difference among the separated states of tubulins reaches a threshold related to quantum gravity. According to the arguments for OR put forth in Penrose (1994), superpositioned states each have their own space-time geometries. When the degree of coherent mass energy difference leads to sufficient separation of space time geometry, the system must choose and decay (reduce, collapse) to a single universe state. In this way, a transient superposition of slightly differing space-time geometries persists until an abrupt quantum-to-classical reduction occurs. Unlike the random, \"subjective reduction\" (SR, or R) of standard quantum theory caused by observation or environmental entanglement, the OR we propose in microtubules is a self-collapse and it results in particular patterns of microtubule-tubulin conformational states that regulate neuronal activities including synaptic functions. Possibilities and probabilities for post-reduction tubulin states are influenced by factors including attachments of microtubule-associated proteins (MAPs) acting as \"nodes\" which tune and \"orchestrate\" the quantum oscillations. We thus term the self-tuning OR process in microtubules \"orchestrated objective reduction\" (\"Orch OR\"), and calculate an estimate for the number of tubulins (and neurons) whose coherence for relevant time periods (e.g. 500 ms) will elicit Orch OR. In providing a connection among (1) pre-conscious to conscious transition, (2) fundamental space time notions, (3) non-computability, and (4) binding of various (time scale and spatial) reductions into an instantaneous event (\"conscious now\"), we believe Orch OR in brain microtubules is the most specific and plausible model for consciousness yet proposed.", "title": "" }, { "docid": "03dc23b2556e21af9424500e267612bb", "text": "File fragment classification is an important and difficult problem in digital forensics. Previous works in this area mainly relied on specific byte sequences in file headers and footers, or statistical analysis and machine learning algorithms on data from the middle of the file. This paper introduces a new approach to classifying file fragments based on grayscale images. The proposed method treats a file fragment as a grayscale image, and uses an image classification method to classify the fragment. Furthermore, two models, one file-unbiased and one type-unbiased, are proposed to verify the validity of the proposed method. Compared with previous works, the experimental results are promising. 
An average classification accuracy of 39.7% in file-unbiased model and 54.7% in type-unbiased model are achieved on 29 file types.", "title": "" }, { "docid": "ba966c2fc67b88d26a3030763d56ed1a", "text": "Design of a long read-range, reconfigurable operating frequency radio frequency identification (RFID) metal tag is proposed in this paper. The antenna structure consists of two nonconnected load bars and two bowtie patches electrically connected through four pairs of vias to a conducting backplane to form a looped-bowtie RFID tag antenna that is suitable for mounting on metallic objects. The design offers more degrees of freedom to tune the input impedance of the proposed antenna. The load bars, which have a cutoff point on each bar, can be used to reconfigure the operating frequency of the tag by exciting any one of the three possible frequency modes; hence, this tag can be used worldwide for the UHF RFID frequency band. Experimental tests show that the maximum read range of the prototype, placed on a metallic object, are found to be 3.0, 3.2, and 3.3 m, respectively, for the three operating modes, which has been tested for an RFID reader with only 0.4 W error interrupt pending register (EIPR). The paper shows that the simulated and measured results are in good agreement with each other.", "title": "" }, { "docid": "dae40fa32526bf965bad70f98eb51bb7", "text": "Weight pruning methods for deep neural networks (DNNs) have been investigated recently, but prior work in this area is mainly heuristic, iterative pruning, thereby lacking guarantees on the weight reduction ratio and convergence time. To mitigate these limitations, we present a systematic weight pruning framework of DNNs using the alternating direction method of multipliers (ADMM). We first formulate the weight pruning problem of DNNs as a nonconvex optimization problem with combinatorial constraints specifying the sparsity requirements, and then adopt the ADMM framework for systematic weight pruning. By using ADMM, the original nonconvex optimization problem is decomposed into two subproblems that are solved iteratively. One of these subproblems can be solved using stochastic gradient descent, the other can be solved analytically. Besides, our method achieves a fast convergence rate. The weight pruning results are very promising and consistently outperform the prior work. On the LeNet-5 model for the MNIST data set, we achieve 71.2× weight reduction without accuracy loss. On the AlexNet model for the ImageNet data set, we achieve 21× weight reduction without accuracy loss. When we focus on the convolutional layer pruning for computation reductions, we can reduce the total computation by five times compared with the prior work (achieving a total of 13.4× weight reduction in convolutional layers). Our models and codes are released at https://github.com/KaiqiZhang/admm-pruning.", "title": "" }, { "docid": "bffcc580fa868d4c0b05742997caa55a", "text": "In this paper, we propose a probabilistic model for detecting relevant changes in registered aerial image pairs taken with the time differences of several years and in different seasonal conditions. The introduced approach, called the conditional mixed Markov model, is a combination of a mixed Markov model and a conditionally independent random field of signals. The model integrates global intensity statistics with local correlation and contrast features. 
A global energy optimization process ensures simultaneously optimal local feature selection and smooth observation-consistent segmentation. Validation is given on real aerial image sets provided by the Hungarian Institute of Geodesy, Cartography and Remote Sensing and Google Earth.", "title": "" }, { "docid": "bbf987eef74d76cf2916ae3080a2b174", "text": "The facial system plays an important role in human-robot interaction. EveR-4 H33 is a head system for an android face controlled by thirty-three motors. It consists of three layers: a mechanical layer, an inner cover layer and an outer cover layer. Motors are attached under the skin and some motors are correlated with each other. Some expressions cannot be shown by moving just one motor. In addition, moving just one motor can cause damage to other motors or the skin. To solve these problems, a facial muscle control method that controls motors in a correlated manner is required. We designed a facial muscle control method and applied it to EveR-4 H33. We develop the actress robot EveR-4A by applying the EveR-4 H33 to the 24 degrees of freedom upper body and mannequin legs. EveR-4A shows various facial expressions with lip synchronization using our facial muscle control method.", "title": "" }, { "docid": "b87920c111fa8e4233a537aee8f0c027", "text": "Mobile robots are increasingly being developed for highrisk missions in rough terrain situations, such as planetary exploration. Here a rough-terrain control (RTC) methodology is presented that exploits the actuator redundancy found in multi-wheeled mobile robot systems to improve ground traction and reduce power consumption. The methodology “chooses” an optimization criterion based on the local terrain profile. A key element of the method is to be able to estimate the wheelground contact angles. A method using an extended Kalman filter is presented for estimating these angles using simple onboard sensors. Simulation results for a wheeled micro-rover traversing Mars-like terrain demonstrate the effectiveness of the algorithms. INTRODUCTION Mobile robots are increasingly being developed for highrisk missions in rough terrain environments. One successful example is the NASA/JPL Sojourner Martian rover (Golombek, 1998). Future planetary missions will require mobile robots to perform difficult tasks in more challenging terrain than encountered by Sojourner (Hayati et al., 1996; Schenker, et al. 1997). Other examples of rough terrain applications for robotic systems can be found in the forestry and mining industries, and in hazardous material handling applications, such as the Chernobyl disaster site clean-up (Cunningham et. al., 1998; Gonthier and Papadopolous,1998; Osborn, 1989). In rough terrain, it is critical for mobile robots to maintain good wheel traction. Wheel slip could cause the rover to lose control and become trapped. Substantial work has been done on traction control of passenger vehicles on flat roads (Kawabe et al. , 1997). This work is not applicable to low-speed, rough terrain rovers because in these vehicles wheel slip is caused primarily by kinematic incompatibility or loose soil conditions, rather than “breakaway” wheel acceleration. Traction control for low-speed mobile robots on flat terrain has been studied (Reister and Unseren, 1993). Later work has considered the important effects of terrain unevenness on traction control (Sreenivasan and Wilcox, 1996). This work assumes knowledge of terrain geometry and soil characteristics. 
However, in such applications as planetary exploration, this information is usually unknown. A fuzzy-logic traction control algorithm for a rocker-bogie rover that did not assume knowledge of terrain geometry has been developed (Hacot, 1998). This approach is based on heuristic rules related to vehicle mechanics. Knowledge of terrain information is critical to the traction control problem. A key variable for traction algorithms is the contact angle between the vehicle wheels and the ground (Sreenivasan and Wilcox, 1994; Farritor et al., 1998). Measuring this angle physically is difficult. Researchers have proposed installing multi-axis force sensors at each wheel to measure the contact force direction, and inferring the ground-contact angle from the force data (Sreenivasan and Wilcox, 1994). However, wheel-hub mounted multi-axis force sensors would be costly and complex. Complexity reduces reliability and adds weight, two factors that carry severe penalties for planetary exploration applications. This paper presents a control methodology for vehicles with redundant drive wheels for improved traction or reduced power consumption. In highly uneven terrain, traction is optimized. In relatively flat terrain, power consumption is minimized. A method is presented for estimating wheel-ground contact angles of mobile robots using simple on-board sensors. The algorithm is based on rigid-body kinematic equations and uses sensors such as vehicle inclinometers and wheel tachometers. It does not require the use of force sensors. The method uses an extended Kalman filter to fuse noisy sensor signals. Simulation results are presented for a planar two-wheeled rover on uneven Mars-like soil. It is shown that the wheel-ground contact angle estimation method can accurately estimate contact angles in the presence of sensor noise and wheel slip. It is also shown that the rough-terrain control (RTC) method leads to increased traction and improved power consumption as compared to traditional individual-wheel velocity control.", "title": "" }, { "docid": "720eccb945faa357bc44c5aa33fe60a9", "text": "The evolution of an arm exoskeleton design for treating shoulder pathology is examined. Tradeoffs between various kinematic configurations are explored, and a device with five active degrees of freedom is proposed. Two rapid-prototype designs were built and fitted to several subjects to verify the kinematic design and determine passive link adjustments. Control modes are developed for exercise therapy and functional rehabilitation, and a distributed software architecture that incorporates computer safety monitoring is described. Although intended primarily for therapy, the exoskeleton is also used to monitor progress in strength, range of motion, and functional task performance", "title": "" }, { "docid": "0da299fb53db5980a10e0ae8699d2209", "text": "Modern heuristics or metaheuristics are optimization algorithms that have been increasingly used during the last decades to support complex decision-making in a number of fields, such as logistics and transportation, telecommunication networks, bioinformatics, finance, and the like. The continuous increase in computing power, together with advancements in metaheuristics frameworks and parallelization strategies, is empowering these types of algorithms as one of the best alternatives to solve rich and real-life combinatorial optimization problems that arise in a number of financial and banking activities. 
This article reviews some of the works related to the use of metaheuristics in solving both classical and emergent problems in the finance arena. A non-exhaustive list of examples includes rich portfolio optimization, index tracking, enhanced indexation, credit risk, stock investments, financial project scheduling, option pricing, feature selection, bankruptcy and financial distress prediction, and credit risk assessment. This article also discusses some open opportunities for researchers in the field, and forecasts the evolution of metaheuristics to include real-life uncertainty conditions into the optimization problems being considered.", "title": "" }, { "docid": "04956fbf44b2a1d7164325fc395c019a", "text": "The ever-growing number of people using Twitter makes it a valuable source of timely information. However, detecting events in Twitter is a difficult task, because tweets that report interesting events are overwhelmed by a large volume of tweets on unrelated topics. Existing methods focus on the textual content of tweets and ignore the social aspect of Twitter. In this paper, we propose mention-anomaly-based event detection (MABED), a novel statistical method that relies solely on tweets and leverages the creation frequency of dynamic links (i.e., mentions) that users insert in tweets to detect significant events and estimate the magnitude of their impact over the crowd. MABED also differs from the literature in that it dynamically estimates the period of time during which each event is discussed, rather than assuming a predefined fixed duration for all events. The experiments we conducted on both English and French Twitter data show that the mention-anomaly-based approach leads to more accurate event detection and improved robustness in the presence of noisy Twitter content. Qualitatively speaking, we find that MABED helps with the interpretation of detected events by providing clear textual descriptions and precise temporal descriptions. We also show how MABED can help in understanding users’ interests. Furthermore, we describe three visualizations designed to favor an efficient exploration of the detected events.", "title": "" }, { "docid": "b0d3388bc02f8ee55a8575de6253f5fb", "text": "Today’s rapidly changing and competitive environment requires educators to stay abreast of the job market in order to prepare their students for the jobs being demanded. This is more relevant for Information Technology (IT) jobs than for others. However, staying abreast of job market demands requires retrieving, sifting, and analyzing large volumes of data in order to understand the trends of the job market. Traditional methods of data collection and analysis are not sufficient for this kind of analysis due to the large volume of job data that is generated through the web and elsewhere. Luckily, the field of data mining has emerged to collect and sift through such large data volumes. However, even with data mining, appropriate data collection techniques and analysis need to be followed in order to correctly understand the trend. This paper illustrates our experience with employing mining techniques to understand the trend in IT jobs. Data was collected using data mining techniques over a number of years from an online job agency. The data was then analyzed to reach a conclusion about the trends in the job market. 
Our experience in this regard, along with a literature review of the relevant topics, is presented in this paper.", "title": "" }, { "docid": "bf82fadedef61212cda85311a712560e", "text": "The extensive growth of the Internet of Things (IoT) is providing direction towards the smart urban. The smart urban is favored because it improves the standard of living of the citizens and provides excellence in the community services. The services may include, but are not limited to, health, parking, transport, water, environment, power, and so forth. The diverse and heterogeneous environment of IoT and smart urban is challenged by real-time data processing and decision-making. In this research article, we propose an IoT-based smart urban architecture using Big Data analytics. The proposed architecture is divided into three different tiers: (1) data acquisition and aggregation, (2) data computation and processing, and (3) decision making and application. The proposed architecture is implemented and validated on the Hadoop ecosystem using reliable and authentic datasets. The research shows that the proposed system provides valuable insight into community development systems to improve the existing smart urban architecture.", "title": "" }, { "docid": "1f0926abdff68050ef88eea49adaf382", "text": "Words are the essence of communication: They are the building blocks of any language. Learning the meaning of words is thus one of the most important aspects of language acquisition: Children must first learn words before they can combine them into complex utterances. Many theories have been developed to explain the impressive efficiency of young children in acquiring the vocabulary of their language, as well as the developmental patterns observed in the course of lexical acquisition. A major source of disagreement among the different theories is whether children are equipped with special mechanisms and biases for word learning, or their general cognitive abilities are adequate for the task. We present a novel computational model of early word learning to shed light on the mechanisms that might be at work in this process. The model learns word meanings as probabilistic associations between words and semantic elements, using an incremental and probabilistic learning mechanism, and drawing only on general cognitive abilities. The results presented here demonstrate that much about word meanings can be learned from naturally occurring child-directed utterances (paired with meaning representations), without using any special biases or constraints, and without any explicit developmental changes in the underlying learning mechanism. Furthermore, our model provides explanations for the occasionally contradictory child experimental data, and offers predictions for the behavior of young word learners in novel situations.", "title": "" }, { "docid": "9e8c61584bbbda83c73a4cb2f74f8d37", "text": "Internet addiction (IA) has become a widespread and problematic phenomenon. Little is known about the effects of internet addiction (IA). The present study focuses on a meta-analysis of internet addiction and its relation to mental health among youth. Effect sizes estimated the differences between genders with respect to the severity of internet addiction and to depression, anxiety, social isolation, and sleep patterns.", "title": "" } ]
scidocsrr
d0e679aae451c58682d22f36c93afdc1
CMU OAQA at TREC 2016 LiveQA: An Attentional Neural Encoder-Decoder Approach for Answer Ranking
[ { "docid": "d29634888a4f1cee1ed613b0f038ddb3", "text": "This work investigates the use of linguistically motivated features to improve search, in particular for ranking answers to non-factoid questions. We show that it is possible to exploit existing large collections of question–answer pairs (from online social Question Answering sites) to extract such features and train ranking models which combine them effectively. We investigate a wide range of feature types, some exploiting natural language processing such as coarse word sense disambiguation, named-entity identification, syntactic parsing, and semantic role labeling. Our experiments demonstrate that linguistic features, in combination, yield considerable improvements in accuracy. Depending on the system settings, we measure relative improvements of 14% to 21% in Mean Reciprocal Rank and Precision@1, providing some of the most compelling evidence to date that complex linguistic features such as word senses and semantic roles can have a significant impact on large-scale information retrieval tasks.", "title": "" } ]
[ { "docid": "c020a3ba9a2615cb5ed9a7e9d5aa3ce0", "text": "Neural network approaches to Named-Entity Recognition reduce the need for carefully handcrafted features. While some features do remain in state-of-the-art systems, lexical features have been mostly discarded, with the exception of gazetteers. In this work, we show that this is unfair: lexical features are actually quite useful. We propose to embed words and entity types into a lowdimensional vector space we train from annotated data produced by distant supervision thanks to Wikipedia. From this, we compute — offline — a feature vector representing each word. When used with a vanilla recurrent neural network model, this representation yields substantial improvements. We establish a new state-of-the-art F1 score of 87.95 on ONTONOTES 5.0, while matching state-of-the-art performance with a F1 score of 91.73 on the over-studied CONLL-2003 dataset.", "title": "" }, { "docid": "93ec0a392a7a29312778c6834ffada73", "text": "BACKGROUND\nThe new world of safe aesthetic injectables has become increasingly popular with patients. Not only is there less risk than with surgery, but there is also significantly less downtime to interfere with patients' normal work and social schedules. Botulinum toxin (BoNT) type A (BoNTA) is an indispensable tool used in aesthetic medicine, and its broad appeal has made it a hallmark of modern culture. The key to using BoNTA to its best effect is to understand patient-specific factors that will determine the treatment plan and the physician's ability to personalize injection strategies.\n\n\nOBJECTIVES\nTo present international expert viewpoints and consensus on some of the contemporary best practices in aesthetic BoNTA, so that beginner and advanced injectors may find pearls that provide practical benefits.\n\n\nMETHODS AND MATERIALS\nExpert aesthetic physicians convened to discuss their approaches to treatment with BoNT. The discussions and consensus from this meeting were used to provide an up-to-date review of treatment strategies to improve patient results. Information is presented on patient management and assessment, documentation and consent, aesthetic scales, injection strategies, dilution, dosing, and adverse events.\n\n\nCONCLUSION\nA range of product- and patient-specific factors influence the treatment plan. Truly optimized outcomes are possible only when the treating physician has the requisite knowledge, experience, and vision to use BoNTA as part of a unique solution for each patient's specific needs.", "title": "" }, { "docid": "bdfa9a484a2bca304c0a8bbd6dcd7f1a", "text": "We present a multilingual Named Entity Recognition approach based on a robust and general set of features across languages and datasets. Our system combines shallow local information with clustering semi-supervised features induced on large amounts of unlabeled text. Understanding via empirical experimentation how to effectively combine various types of clustering features allows us to seamlessly export our system to other datasets and languages. The result is a simple but highly competitive system which obtains state of the art results across five languages and twelve datasets. The results are reported on standard shared task evaluation data such as CoNLL for English, Spanish and Dutch. Furthermore, and despite the lack of linguistically motivated features, we also report best results for languages such as Basque and German. 
In addition, we demonstrate that our method also obtains very competitive results even when the amount of supervised data is cut by half, alleviating the dependency on manually annotated data. Finally, the results show that our emphasis on clustering features is crucial to develop robust out-of-domain models. The system and models are freely available to facilitate its use and guarantee the reproducibility of results.", "title": "" }, { "docid": "ac6e2ecb17757c8d4048c4ac09add80f", "text": "Purpose – To examine issues of standardization and adaptation in global marketing strategy and to explain the dynamics of standardization. Design/methodology/approach – This is a conceptual research paper that has been developed based on gaps in prior frameworks of standardization/adaptation. A three-factor model of standardization/adaptation of global marketing strategy was developed. The three factors include homogeneity of customer response to the marketing mix, transferability of competitive advantage, and similarities in the degree of economic freedom. Findings – The model through the use of feedback effects explains the dynamics of standardization. Research limitations/implications – Future research needs to empirically test the model. To enable empirical validation, reliable and valid measures of the three factors proposed in the model need to be developed. Additionally, the model may be used in future research to delineate the impact a variable may have on the ability of a firm to follow a standardized global marketing strategy. Practical implications – The three-factor model aids decisions relating to standardization in a global marketing context. Originality/value – The paper furthers the discussion on the issue of standardization. Through the identification of three factors that impact standardization/adaptation decisions, and the consideration of feedback effects, the paper provides a foundation for future research addressing the issue.", "title": "" }, { "docid": "59a25ae61a22baa8e20ae1a5d88c4499", "text": "This paper tackles a major privacy threat in current location-based services where users have to report their exact locations to the database server in order to obtain their desired services. For example, a mobile user asking about her nearest restaurant has to report her exact location. With untrusted service providers, reporting private location information may lead to several privacy threats. In this paper, we present a peer-to-peer (P2P)spatial cloaking algorithm in which mobile and stationary users can entertain location-based services without revealing their exact location information. The main idea is that before requesting any location-based service, the mobile user will form a group from her peers via single-hop communication and/or multi-hop routing. Then,the spatial cloaked area is computed as the region that covers the entire group of peers. Two modes of operations are supported within the proposed P2P s patial cloaking algorithm, namely, the on-demand mode and the proactive mode. Experimental results show that the P2P spatial cloaking algorithm operated in the on-demand mode has lower communication cost and better quality of services than the proactive mode, but the on-demand incurs longer response time.", "title": "" }, { "docid": "2bd51149e9899b588ca08688c4ff1db2", "text": "Buildings are among the largest consumers of electricity in the US. A significant portion of this energy use in buildings can be attributed to HVAC systems used to maintain comfort for occupants. 
In most cases these building HVAC systems run on fixed schedules and do not employ any fine grained control based on detailed occupancy information. In this paper we present the design and implementation of a presence sensor platform that can be used for accurate occupancy detection at the level of individual offices. Our presence sensor is low-cost, wireless, and incrementally deployable within existing buildings. Using a pilot deployment of our system across ten offices over a two week period we identify significant opportunities for energy savings due to periods of vacancy. Our energy measurements show that our presence node has an estimated battery lifetime of over five years, while detecting occupancy accurately. Furthermore, using a building simulation framework and the occupancy information from our testbed, we show potential energy savings from 10% to 15% using our system.", "title": "" }, { "docid": "8cd8577a70729d03c1561df6a1fcbdbb", "text": "Quantum computing is a new computational paradigm created by reformulating information and computation in a quantum mechanical framework [30, 27]. Since the laws of physics appear to be quantum mechanical, this is the most relevant framework to consider when considering the fundamental limitations of information processing. Furthermore, in recent decades we have seen a major shift from just observing quantum phenomena to actually controlling quantum mechanical systems. We have seen the communication of quantum information over long distances, the “teleportation” of quantum information, and the encoding and manipulation of quantum information in many different physical media. We still appear to be a long way from the implementation of a large-scale quantum computer, however it is a serious goal of many of the world’s leading physicists, and progress continues at a fast pace. In parallel with the broad and aggressive program to control quantum mechanical systems with increased precision, and to control and interact a larger number of subsystems, researchers have also been aggressively pushing the boundaries of what useful tasks one could perform with quantum mechanical devices. These in-", "title": "" }, { "docid": "5c598998ffcf3d6008e8e5eed94fc396", "text": "Music information retrieval (MIR) is an emerging research area that receives growing attention from both the research community and music industry. It addresses the problem of querying and retrieving certain types of music from large music data set. Classification is a fundamental problem in MIR. Many tasks in MIR can be naturally cast in a classification setting, such as genre classification, mood classification, artist recognition, instrument recognition, etc. Music annotation, a new research area in MIR that has attracted much attention in recent years, is also a classification problem in the general sense. Due to the importance of music classification in MIR research, rapid development of new methods, and lack of review papers on recent progress of the field, we provide a comprehensive review on audio-based classification in this paper and systematically summarize the state-of-the-art techniques for music classification. Specifically, we have stressed the difference in the features and the types of classifiers used for different classification tasks. 
This survey emphasizes on recent development of the techniques and discusses several open issues for future research.", "title": "" }, { "docid": "51c42a305039d65dc442910c8078a9aa", "text": "Infants are experts at playing, with an amazing ability to generate novel structured behaviors in unstructured environments that lack clear extrinsic reward signals. We seek to mathematically formalize these abilities using a neural network that implements curiosity-driven intrinsic motivation. Using a simple but ecologically naturalistic simulated environment in which an agent can move and interact with objects it sees, we propose a “world-model” network that learns to predict the dynamic consequences of the agent’s actions. Simultaneously, we train a separate explicit “self-model” that allows the agent to track the error map of its worldmodel. It then uses the self-model to adversarially challenge the developing world-model. We demonstrate that this policy causes the agent to explore novel and informative interactions with its environment, leading to the generation of a spectrum of complex behaviors, including ego-motion prediction, object attention, and object gathering. Moreover, the world-model that the agent learns supports improved performance on object dynamics prediction, detection, localization and recognition tasks. Taken together, our results are initial steps toward creating flexible autonomous agents that self-supervise in realistic physical environments.", "title": "" }, { "docid": "567445f68597ea8ff5e89719772819be", "text": "We have developed an interactive pop-up book called Electronic Popables to explore paper-based computing. Our book integrates traditional pop-up mechanisms with thin, flexible, paper-based electronics and the result is an artifact that looks and functions much like an ordinary pop-up, but has added elements of dynamic interactivity. This paper introduces the book and, through it, a library of paper-based sensors and a suite of paper-electronics construction techniques. We also reflect on the unique and under-explored opportunities that arise from combining material experimentation, artistic design, and engineering.", "title": "" }, { "docid": "02c904c320db3a6e0fc9310f077f5d08", "text": "Rejuvenative procedures of the face are increasing in numbers, and a plethora of different therapeutic options are available today. Every procedure should aim for the patient's safety first and then for natural and long-lasting results. The face is one of the most complex regions in the human body and research continuously reveals new insights into the complex interplay of the different participating structures. Bone, ligaments, muscles, fat, and skin are the key players in the layered arrangement of the face.Aging occurs in all involved facial structures but the onset and the speed of age-related changes differ between each specific structure, between each individual, and between different ethnic groups. Therefore, knowledge of age-related anatomy is crucial for a physician's work when trying to restore a youthful face.This review focuses on the current understanding of the anatomy of the human face and tries to elucidate the morphological changes during aging of bone, ligaments, muscles, and fat, and their role in rejuvenative procedures.", "title": "" }, { "docid": "6470b7d1532012e938063d971f3ead29", "text": "As society continues to accumulate more and more data, demand for machine learning algorithms that can learn from data with limited human intervention only increases. 
Semi-supervised learning (SSL) methods, which extend supervised learning algorithms by enabling them to use unlabeled data, play an important role in addressing this challenge. In this thesis, a framework unifying the traditional assumptions and approaches to SSL is defined. A synthesis of SSL literature then places a range of contemporary approaches into this common framework. Our focus is on methods which use generative adversarial networks (GANs) to perform SSL. We analyse in detail one particular GAN-based SSL approach. This is shown to be closely related to two preceding approaches. Through synthetic experiments we provide an intuitive understanding and motivate the formulation of our focus approach. We then theoretically analyse potential alternative formulations of its loss function. This analysis motivates a number of research questions that centre on possible improvements to, and experiments to better understand the focus model. While we find support for our hypotheses, our conclusion more broadly is that the focus method is not especially robust.", "title": "" }, { "docid": "713c7761ecba317bdcac451fcc60e13d", "text": "We describe a method for automatically transcribing guitar tablatures from audio signals in accordance with the player's proficiency for use as support for a guitar player's practice. The system estimates the multiple pitches in each time frame and the optimal fingering considering playability and player's proficiency. It combines a conventional multipitch estimation method with a basic dynamic programming method. The difficulty of the fingerings can be changed by tuning the parameter representing the relative weights of the acoustical reproducibility and the fingering easiness. Experiments conducted using synthesized guitar audio signals to evaluate the transcribed tablatures in terms of the multipitch estimation accuracy and fingering easiness demonstrated that the system can simplify the fingering with higher precision of multipitch estimation results than the conventional method.", "title": "" }, { "docid": "e7bd07b86b8f1b50641853c06461ce89", "text": "Purpose – The purpose of this study is to conduct a scientometric analysis of the body of literature contained in 11 major knowledge management and intellectual capital (KM/IC) peer-reviewed journals. Design/methodology/approach – A total of 2,175 articles published in 11 major KM/IC peer-reviewed journals were carefully reviewed and subjected to scientometric data analysis techniques. Findings – A number of research questions pertaining to country, institutional and individual productivity, co-operation patterns, publication frequency, and favourite inquiry methods were proposed and answered. Based on the findings, many implications emerged that improve one’s understanding of the identity of KM/IC as a distinct scientific field. Research limitations/implications – The pool of KM/IC journals examined did not represent all available publication outlets, given that at least 20 peer-reviewed journals exist in the KM/IC field. There are also KM/IC papers published in other non-KM/IC specific journals. However, the 11 journals that were selected for the study have been evaluated by Bontis and Serenko as the top publications in the KM/IC area. Practical implications – Practitioners have played a significant role in developing the KM/IC field. However, their contributions have been decreasing. There is still very much a need for qualitative descriptions and case studies. 
It is critically important that practitioners consider collaborating with academics for richer research projects. Originality/value – This is the most comprehensive scientometric analysis of the KM/IC field ever conducted.", "title": "" }, { "docid": "1abcf9480879b3d29072f09d5be8609d", "text": "Warm restart techniques on training deep neural networks often achieve better recognition accuracies and can be regarded as easy methods to obtain multiple neural networks with no additional training cost from a single training process. Ensembles of intermediate neural networks obtained by warm restart techniques can provide higher accuracy than a single neural network obtained finally by a whole training process. However, existing methods on both of warm restart and its ensemble techniques use fixed cyclic schedules and have little degree of parameter adaption. This paper extends a class of possible schedule strategies of warm restart, and clarifies their effectiveness for recognition performance. Specifically, we propose parameterized functions and various cycle schedules to improve recognition accuracies by the use of deep neural networks with no additional training cost. Experiments on CIFAR-10 and CIFAR-100 show that our methods can achieve more accurate rates than the existing cyclic training and ensemble methods.", "title": "" }, { "docid": "1facd226c134b22f62613073deffce60", "text": "We present two experiments examining the impact of navigation techniques on users' navigation performance and spatial memory in a zoomable user interface (ZUI). The first experiment with 24 participants compared the effect of egocentric body movements with traditional multi-touch navigation. The results indicate a 47% decrease in path lengths and a 34% decrease in task time in favor of egocentric navigation, but no significant effect on users' spatial memory immediately after a navigation task. However, an additional second experiment with 8 participants revealed such a significant increase in performance of long-term spatial memory: The results of a recall task administered after a 15-minute distractor task indicate a significant advantage of 27% for egocentric body movements in spatial memory. Furthermore, a questionnaire about the subjects' workload revealed that the physical demand of the egocentric navigation was significantly higher but there was less mental demand.", "title": "" }, { "docid": "5227121a2feb59fc05775e2623239da9", "text": "BACKGROUND\nCriminal offenders with a diagnosis of psychopathy or borderline personality disorder (BPD) share an impulsive nature but tend to differ in their style of emotional response. This study aims to use multiple psychophysiologic measures to compare emotional responses to unpleasant and pleasant stimuli.\n\n\nMETHODS\nTwenty-five psychopaths as defined by the Hare Psychopathy Checklist and 18 subjects with BPD from 2 high-security forensic treatment facilities were included in the study along with 24 control subjects. Electrodermal response was used as an indicator of emotional arousal, modulation of the startle reflex as a measure of valence, and electromyographic activity of the corrugator muscle as an index of emotional expression.\n\n\nRESULTS\nCompared with controls, psychopaths were characterized by decreased electrodermal responsiveness, less facial expression, and the absence of affective startle modulation. A higher percentage of psychopaths showed no startle reflex. 
Subjects with BPD showed a response pattern very similar to that of controls, ie, they showed comparable autonomic arousal, and their startle responses were strongest to unpleasant slides and weakest to pleasant slides. However, corrugator electromyographic activity in subjects with BPD demonstrated little facial modulation when they viewed either pleasant or unpleasant slides.\n\n\nCONCLUSIONS\nThe results support the theory that psychopaths are characterized by a pronounced lack of fear in response to aversive events. Furthermore, the results suggest a general deficit in processing affective information, regardless of whether stimuli are negative or positive. Emotional hyporesponsiveness was specific to psychopaths, since results for offenders with BPD indicate a widely adequate processing of emotional stimuli.", "title": "" }, { "docid": "e777794833a060f99e11675952cd3342", "text": "In this paper we propose a novel method to utilize the skeletal structure not only for supporting force but for releasing heat by latent heat.", "title": "" }, { "docid": "3b75d996f21af68a0cd4d49ef7d4e10e", "text": "Observational studies suggest that including men in reproductive health interventions can enhance positive health outcomes. A randomized controlled trial was designed to test the impact of involving male partners in antenatal health education on maternal health care utilization and birth preparedness in urban Nepal. In total, 442 women seeking antenatal services during second trimester of pregnancy were randomized into three groups: women who received education with their husbands, women who received education alone and women who received no education. The education intervention consisted of two 35-min health education sessions. Women were followed until after delivery. Women who received education with husbands were more likely to attend a post-partum visit than women who received education alone [RR = 1.25, 95% CI = (1.01, 1.54)] or no education [RR = 1.29, 95% CI = (1.04, 1.60)]. Women who received education with their husbands were also nearly twice as likely as control group women to report making >3 birth preparations [RR = 1.99, 95% CI = (1.10, 3.59)]. Study groups were similar with respect to attending the recommended number of antenatal care checkups, delivering in a health institution or having a skilled provider at birth. These data provide evidence that educating pregnant women and their male partners yields a greater net impact on maternal health behaviors compared with educating women alone.", "title": "" }, { "docid": "830b5591bd98199936a5ea10ff2b058b", "text": "To stand up for the brands they support, members of brand communities develop “oppositional brand loyalty” towards other rival brands. This study identifies how the interaction characteristics of brand community affect the perceived benefits of community members, and whether the perceived benefits cause members to develop community commitment, as well as the relationship between community commitment and oppositional brand loyalty. This study examined members of online automobile communities in Taiwan, and obtained a total of 283 valid samples. The analytical results reveal that interaction characteristics of brand community make members perceive many benefits, with “brand community engagement” being the most noticeable. Furthermore, hedonic, social, and learning benefits are the main factors to form community commitments. 
When members have community commitments, they will form oppositional brand loyalty to other rival brands. Based on the analytical results, this study provides suggestions to enterprises regarding online brand community operations. © 2013 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
e93b72169f7986f4af221e81bd250504
Combining Convolutional and Recurrent Neural Networks for Human Skin Detection
[ { "docid": "37637ca24397aba35e1e4926f1a94c91", "text": "We propose a structured prediction architecture, which exploits the local generic features extracted by Convolutional Neural Networks and the capacity of Recurrent Neural Networks (RNN) to retrieve distant dependencies. The proposed architecture, called ReSeg, is based on the recently introduced ReNet model for image classification. We modify and extend it to perform the more challenging task of semantic segmentation. Each ReNet layer is composed of four RNNs that sweep the image horizontally and vertically in both directions, encoding patches or activations, and providing relevant global information. Moreover, ReNet layers are stacked on top of pre-trained convolutional layers, benefiting from generic local features. Upsampling layers follow ReNet layers to recover the original image resolution in the final predictions. The proposed ReSeg architecture is efficient, flexible and suitable for a variety of semantic segmentation tasks. We evaluate ReSeg on several widely-used semantic segmentation datasets: Weizmann Horse, Oxford Flower, and CamVid, achieving state-of-the-art performance. Results show that ReSeg can act as a suitable architecture for semantic segmentation tasks, and may have further applications in other structured prediction problems. The source code and model hyperparameters are available on https://github.com/fvisin/reseg.", "title": "" } ]
[ { "docid": "0629fbef788719deb0c97e411a60b3a3", "text": "An experimental study was conducted to investigate the flow behavior around a corrugated dragonfly airfoil compared with a traditional, streamlined airfoil and a flat plate. The experimental study was conducted at the chord Reynolds number of ReC =34,000, i.e., the regime where Micro-Air-Vehicles (MAV) usually operate, to explore the potential applications of such bio-inspired airfoils for MAV designs. The measurement results demonstrated clearly that the corrugated dragonfly airfoil has much better performance over the streamlined airfoil and the flat plate in preventing large-scale flow separation and airfoil stall at the test low Reynolds number level. The detailed PIV measurements near the noses of the airfoils elucidated underlying physics about why the corrugated dragonfly airfoil could suppress flow separation and airfoil stall at low Reynolds numbers: Instead of having laminar separation, the protruding corners of the corrugated dragonfly airfoil were found to be acting as “turbulators” to generate unsteady vortices to promote the transition of the boundary layer from laminar to turbulent rapidly. The unsteady vortex structures trapped in the valleys of the corrugated cross section could pump high-speed fluid from outside to near wall regions to provide sufficient energy for the boundary layer to overcome the adverse pressure gradient, thus, discourage flow separations and airfoil stall.", "title": "" }, { "docid": "514d626cc44cf453706c0903cbc645fe", "text": "Peer group analysis is a new tool for monitoring behavior over time in data mining situations. In particular, the tool detects individual objects that begin to behave in a way distinct from objects to which they had previously been similar. Each object is selected as a target object and is compared with all other objects in the database, using either external comparison criteria or internal criteria summarizing earlier behavior patterns of each object. Based on this comparison, a peer group of objects most similar to the target object is chosen. The behavior of the peer group is then summarized at each subsequent time point, and the behavior of the target object compared with the summary of its peer group. Those target objects exhibiting behavior most different from their peer group summary behavior are flagged as meriting closer investigation. The tool is intended to be part of the data mining process, involving cycling between the detection of objects that behave in anomalous ways and the detailed examination of those objects. Several aspects of peer group analysis can be tuned to the particular application, including the size of the peer group, the width of the moving behavior window being used, the way the peer group is summarized, and the measures of difference between the target object and its peer group summary. We apply the tool in various situations and illustrate its use on a set of credit card transaction data.", "title": "" }, { "docid": "ac979967ab992da6115852e00e4769f2", "text": "Experiments were carried out to study the effect of high dose of “tulsi” (Ocimum sanctum Linn.) pellets on testis and epididymis in male albino rat. Wheat flour, oil and honey pellets of tulsi leaves were fed to albino rat, at 400mg/ 100g body weight per day, along with normal diet, for a period of 72 days. One group of tulsi-fed rats was left for recovery, after the last dose fed on day 72, up to day 120. 
This high dose of tulsi was found to cause a duration-dependent decrease of testis weight and derangements in the histo-architecture of testis as well as epididymis. The diameter of seminiferous tubules decreased considerably, with a corresponding increase in the interstitium. Spermatogenesis was arrested, accompanied by degeneration of seminiferous epithelial elements. Epididymal tubules regressed, and the luminal spermatozoa formed a coagulum. In the recovery group, testis and epididymis regained normal weights, whereas spermatogenesis was partially restored. Thus, a high dose of tulsi leaf affects testicular and epididymal structure and function reversibly.", "title": "" }, { "docid": "8f25b3b36031653311eee40c6c093768", "text": "This paper provides a survey of the applications of computers in music teaching. The systems are classified by musical activity rather than by technical approach. The instructional strategies involved and the type of knowledge represented are highlighted and areas for future research are identified.", "title": "" }, { "docid": "9846794c512f847ca16c43bcf055a757", "text": "Sensing and presenting on-road information of moving vehicles is essential for fully and semi-automated driving. It is challenging to track vehicles from affordable on-board cameras in crowded scenes. Mismatched or missing data are unavoidable, and it is ineffective to directly present uncertain cues to support decision-making. In this paper, we propose a physical model based on incompressible fluid dynamics to represent the vehicle's motion, which provides hints of possible collision as a continuous scalar riskmap. We estimate the position and velocity of other vehicles from a monocular on-board camera located in front of the ego-vehicle. The noisy trajectories are then modeled as the boundary conditions in the simulation of advection and diffusion process. We then interactively display the animating distribution of substances, and show that the continuous scalar riskmap well matches the perception of vehicles even in the presence of tracking failures. We test our method on real-world scenes and discuss its application to driving assistance and autonomous vehicles in the future.", "title": "" }, { "docid": "49bd1cdbeea10f39a2b34cfa5baac0ef", "text": "Recently, the image inpainting task has revived with the help of deep learning techniques. Deep neural networks, especially generative adversarial networks (GANs), make it possible to recover the missing details in images. Due to the lack of sufficient context information, most existing methods fail to get satisfactory inpainting results. This work investigates a more challenging problem, i.e., the newly-emerging semantic image inpainting - a task to fill in large holes in natural images. In this paper, we propose an end-to-end framework named progressive generative networks (PGN), which regards the semantic image inpainting task as a curriculum learning problem. Specifically, we divide the hole filling process into several different phases, and each phase aims to finish a course of the entire curriculum. After that, an LSTM framework is used to string all the phases together. By introducing this learning strategy, our approach is able to progressively shrink the large corrupted regions in natural images and yields promising inpainting results. Moreover, the proposed approach is quite fast to evaluate as the entire hole filling is performed in a single forward pass. 
Extensive experiments on Paris Street View and ImageNet dataset clearly demonstrate the superiority of our approach. Code for our models is available at https://github.com/crashmoon/Progressive-Generative-Networks.", "title": "" }, { "docid": "1071d0c189f9220ba59acfca06c5addb", "text": "A 1.6 Gb/s receiver for optical communication has been designed and fabricated in a 0.25-/spl mu/m CMOS process. This receiver has no transimpedance amplifier and uses the parasitic capacitor of the flip-chip bonded photodetector as an integrating element and resolves the data with a double-sampling technique. A simple feedback loop adjusts a bias current to the average optical signal, which essentially \"AC couples\" the input. The resulting receiver resolves an 11 /spl mu/A input, dissipates 3 mW of power, occupies 80 /spl mu/m/spl times/50 /spl mu/m of area and operates at over 1.6 Gb/s.", "title": "" }, { "docid": "54d45486af755311fc5394c4b628be2e", "text": "Loop closure detection is essential and important in visual simultaneous localization and mapping (SLAM) systems. Most existing methods typically utilize a separate feature extraction part and a similarity metric part. Compared to these methods, an end-to-end network is proposed in this paper to jointly optimize the two parts in a unified framework for further enhancing the interworking between these two parts. First, a two-branch siamese network is designed to learn respective features for each scene of an image pair. Then a hierarchical weighted distance (HWD) layer is proposed to fuse the multi-scale features of each convolutional module and calculate the distance between the image pair. Finally, by using the contrastive loss in the training process, the effective feature representation and similarity metric can be learned simultaneously. Experiments on several open datasets illustrate the superior performance of our approach and demonstrate that the end-to-end network is feasible to conduct the loop closure detection in real time and provides an implementable method for visual SLAM systems.", "title": "" }, { "docid": "a299b0f58aaba6efff9361ff2b5a1e69", "text": "The continuing growth of World Wide Web and on-line text collections makes a large volume of information available to users. Automatic text summarization allows users to quickly understand documents. In this paper, we propose an automated technique for single document summarization which combines content-based and graph-based approaches and introduce the Hopfield network algorithm as a technique for ranking text segments. A series of experiments are performed using the DUC collection and a Thai-document collection. The results show the superiority of the proposed technique over reference systems, in addition the Hopfield network algorithm on undirected graph is shown to be the best text segment ranking algorithm in the study", "title": "" }, { "docid": "1251dd7b6b2bfa3778dcdeece4694988", "text": "Container technology provides a lightweight operating system level virtual hosting environment. Its emergence profoundly changes the development and deployment paradigms of multi-tier distributed applications. However, due to the incomplete implementation of system resource isolation mechanisms in the Linux kernel, some security concerns still exist for multiple containers sharing an operating system kernel on a multi-tenancy container cloud service. In this paper, we first present the information leakage channels we discovered that are accessible within the containers. 
Such channels expose a spectrum of system-wide host information to the containers without proper resource partitioning. By exploiting such leaked host information, it becomes much easier for malicious adversaries (acting as tenants in the container clouds) to launch advanced attacks that might impact the reliability of cloud services. Additionally, we discuss the root causes of the containers' information leakages and propose a two-stage defense approach. As demonstrated in the evaluation, our solution is effective and incurs trivial performance overhead.", "title": "" }, { "docid": "4929e1f954519f0976ec54e9ed8c2c37", "text": "Software support for making effective pen-based applications is currently rudimentary. To facilitate the creation of such applications, we have developed SATIN, a Java-based toolkit designed to support the creation of applications that leverage the informal nature of pens. This support includes a scenegraph for manipulating and rendering objects; support for zooming and rotating objects, switching between multiple views of an object, integration of pen input with interpreters, libraries for manipulating ink strokes, widgets optimized for pens, and compatibility with Java's Swing toolkit. SATIN includes a generalized architecture for handling pen input, consisting of recognizers, interpreters, and multi-interpreters. In this paper, we describe the functionality and architecture of SATIN, using two applications built with SATIN as examples.", "title": "" }, { "docid": "dd8c61b00519117ec153b3938f4c6e69", "text": "The characteristics of athletic shoes have been described with terms like cushioning, stability, and guidance.1,2 Despite many years of effort to optimize athletic shoe construction, the prevalence of running-related lower extremity injuries has not significantly declined; however, athletic performance has reached new heights.3-5 Criteria for optimal athletic shoe construction have been proposed, but no clear consensus has emerged.6-8 Given the unique demands of various sports, sportspecific shoe designs may simultaneously increase performance and decrease injury incidence.9-11 The purpose of this report is to provide an overview of current concepts in athletic shoe design, with emphasis on running shoes, so that athletic trainers and therapists (ATs) can assist their patients in selection of an appropriate shoe design.", "title": "" }, { "docid": "fc9699b4382b1ddc6f60fc6ec883a6d3", "text": "Applications hosted in today's data centers suffer from internal fragmentation of resources, rigidity, and bandwidth constraints imposed by the architecture of the network connecting the data center's servers. Conventional architectures statically map web services to Ethernet VLANs, each constrained in size to a few hundred servers owing to control plane overheads. The IP routers used to span traffic across VLANs and the load balancers used to spray requests within a VLAN across servers are realized via expensive customized hardware and proprietary software. 
Bisection bandwidth is low, severely constraining distributed computation. Further, the conventional architecture concentrates traffic in a few pieces of hardware that must be frequently upgraded and replaced to keep pace with demand - an approach that directly contradicts the prevailing philosophy in the rest of the data center, which is to scale out (adding more cheap components) rather than scale up (adding more power and complexity to a small number of expensive components).\n Commodity switching hardware is now becoming available with programmable control interfaces and with very high port speeds at very low port cost, making this the right time to redesign the data center networking infrastructure. In this paper, we describe monsoon, a new network architecture, which scales and commoditizes data center networking. Monsoon realizes a simple mesh-like architecture using programmable commodity layer-2 switches and servers. In order to scale to 100,000 servers or more, monsoon makes modifications to the control plane (e.g., source routing) and to the data plane (e.g., hot-spot free multipath routing via Valiant Load Balancing). It disaggregates the function of load balancing into a group of regular servers, with the result that load balancing server hardware can be distributed amongst racks in the data center, leading to greater agility and less fragmentation. The architecture creates a huge, flexible switching domain, supporting any server/any service and unfragmented server capacity at low cost.", "title": "" }, { "docid": "f8854602bbb2f5295a5fba82f22ca627", "text": "Models such as latent semantic analysis and those based on neural embeddings learn distributed representations of text, and match the query against the document in the latent semantic space. In traditional information retrieval models, on the other hand, terms have discrete or local representations, and the relevance of a document is determined by the exact matches of query terms in the body text. We hypothesize that matching with distributed representations complements matching with traditional local representations, and that a combination of the two is favourable. We propose a novel document ranking model composed of two separate deep neural networks, one that matches the query and the document using a local representation, and another that matches the query and the document using learned distributed representations. The two networks are jointly trained as part of a single neural network. We show that this combination or ‘duet’ performs significantly better than either neural network individually on a Web page ranking task, and significantly outperforms traditional baselines and other recently proposed models based on neural networks.", "title": "" }, { "docid": "5cf92beeeb4e1f3e36a8ff1fd639d40d", "text": "Mobile application spoofing is an attack where a malicious mobile app mimics the visual appearance of another one. A common example of mobile application spoofing is a phishing attack where the adversary tricks the user into revealing her password to a malicious app that resembles the legitimate one. In this paper, we propose a novel spoofing detection approach, tailored to the protection of mobile app login screens, using screenshot extraction and visual similarity comparison. We use deception rate as a novel similarity metric for measuring how likely the user is to consider a potential spoofing app as one of the protected applications. 
We conducted a large-scale online study where participants evaluated spoofing samples of popular mobile app login screens, and used the study results to implement a detection system that accurately estimates deception rate. We show that efficient detection is possible with low overhead.", "title": "" }, { "docid": "fe42cf28ff020c35d3a3013bb249c7d8", "text": "Sensors and actuators are the core components of all mechatronic systems used in a broad range of diverse applications. A relatively new and rapidly evolving area is the one of rehabilitation and assistive devices that comes to support and improve the quality of human life. Novel exoskeletons have to address many functional and cost-sensitive issues such as safety, adaptability, customization, modularity, scalability, and maintenance. Therefore, a smart variable stiffness actuator was developed. The described approach was to integrate in one modular unit a compliant actuator with all sensors and electronics required for real-time communications and control. This paper also introduces a new method to estimate and control the actuator's torques without using dedicated expensive torque sensors in conditions where the actuator's torsional stiffness can be adjusted by the user. A 6-degrees-of-freedom exoskeleton was assembled and tested using the technology described in this paper, and is introduced as a real-life case study for the mechatronic design, modularity, and integration of the proposed smart actuators, suitable for human–robot interaction. The advantages are discussed together with possible improvements and the possibility of extending the presented technology to other areas of mechatronics.", "title": "" }, { "docid": "4408d5fa31a64d54fbe4b4d70b18182b", "text": "Using microarray analysis, this study showed up-regulation of toll-like receptors 1, 2, 4, 7, 8, NF-κB, TNF, p38-MAPK, and MHC molecules in human peripheral blood mononuclear cells following infection with Plasmodium falciparum. This analysis reports herein further studies based on time-course microarray analysis with focus on malaria-induced host immune response. The results show that in early malaria, selected immune response-related genes were up-regulated including α β and γ interferon-related genes, as well as genes of IL-15, CD36, chemokines (CXCL10, CCL2, S100A8/9, CXCL9, and CXCL11), TRAIL and IgG Fc receptors. During acute febrile malaria, up-regulated genes included α β and γ interferon-related genes, IL-8, IL-1b IL-10 downstream genes, TGFB1, oncostatin-M, chemokines, IgG Fc receptors, ADCC signalling, complement-related genes, granzymes, NK cell killer/inhibitory receptors and Fas antigen. During recovery, genes for NK receptorsand granzymes/perforin were up-regulated. When viewed in terms of immune response type, malaria infection appeared to induce a mixed TH1 response, in which α and β interferon-driven responses appear to predominate over the more classic IL-12 driven pathway. In addition, TH17 pathway also appears to play a significant role in the immune response to P. falciparum. Gene markers of TH17 (neutrophil-related genes, TGFB1 and IL-6 family (oncostatin-M)) and THαβ (IFN-γ and NK cytotoxicity and ADCC gene) immune response were up-regulated. Initiation of THαβ immune response was associated with an IFN-αβ response, which ultimately resulted in moderate-mild IFN-γ achieved via a pathway different from the more classic IL-12 TH1 pattern. Based on these observations, this study speculates that in P. 
falciparum infection, THαβ/TH17 immune response may predominate over ideal TH1 response.", "title": "" }, { "docid": "531a7417bd66ff0fdd7fb35c7d6d8559", "text": "G. R. White, University of Sussex, Brighton, UK. Abstract: In order to design new methodologies for evaluating the user experience of video games, it is imperative to initially understand two core issues. Firstly, how are video games developed at present, including components such as processes, timescales and staff roles, and secondly, how do studios design and evaluate the user experience. This chapter will discuss the video game development process and the practices that studios currently use to achieve the best possible user experience. It will present four case studies from game developers Disney Interactive (Black Rock Studio), Relentless, Zoe Mode, and HandCircus, each detailing their game development process and also how this integrates with the user experience evaluation. The case studies focus on different game genres, platforms, and target user groups, ensuring that this chapter represents a balanced view of current practices in evaluating user experience during the game development process.", "title": "" }, { "docid": "69fd03b01ba24925cd92e3dd4be8ff4f", "text": "There have been many proposals for first-order belief networks but these typically only let us reason about the individuals that we know about. There are many instances where we have to quantify over all of the individuals in a population. When we do this the population size often matters and we need to reason about all of the members of the population (but not necessarily individually). This paper presents an algorithm to reason about multiple individuals, where we may know particular facts about some of them, but want to treat the others as a group. Combining unification with variable elimination lets us reason about classes of individuals without needing to ground out the theory.", "title": "" } ]
scidocsrr
f64b0e6c0e0bb7b264772bd594817e45
Cluster-based sampling of multiclass imbalanced data
[ { "docid": "f6f6f322118f5240aec5315f183a76ab", "text": "Learning from data sets that contain very few instances of the minority class usually produces biased classifiers that have a higher predictive accuracy over the majority class, but poorer predictive accuracy over the minority class. SMOTE (Synthetic Minority Over-sampling Technique) is specifically designed for learning from imbalanced data sets. This paper presents a modified approach (MSMOTE) for learning from imbalanced data sets, based on the SMOTE algorithm. MSMOTE not only considers the distribution of minority class samples, but also eliminates noise samples by adaptive mediation. The combination of MSMOTE and AdaBoost are applied to several highly and moderately imbalanced data sets. The experimental results show that the prediction performance of MSMOTE is better than SMOTEBoost in the minority class and F-values are also improved.", "title": "" } ]
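The MSMOTE passage above builds on SMOTE-style synthetic oversampling. A minimal sketch of that interpolation step might look as follows (a Python illustration; the function name and parameters are assumptions, not taken from the paper, and MSMOTE's additional noise-filtering step is omitted):

```python
import numpy as np

def smote_like_oversample(X_min, n_new, k=5, seed=None):
    """Create n_new synthetic minority samples by interpolating between a
    chosen minority point and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(seed)
    n = len(X_min)
    # pairwise distances among minority samples only
    dist = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    neighbours = np.argsort(dist, axis=1)[:, :k]   # k nearest minority neighbours
    base = rng.integers(0, n, size=n_new)          # seed sample for each new point
    mate = neighbours[base, rng.integers(0, k, size=n_new)]
    gap = rng.random((n_new, 1))                   # interpolation factor in [0, 1)
    return X_min[base] + gap * (X_min[mate] - X_min[base])
```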
[ { "docid": "18f9fff4bd06f28cd39c97ff40467d0f", "text": "Smart agriculture is an emerging concept, because IOT sensors are capable of providing information about agriculture fields and then act upon based on the user input. In this Paper, it is proposed to develop a Smart agriculture System that uses advantages of cutting edge technologies such as Arduino, IOT and Wireless Sensor Network. The paper aims at making use of evolving technology i.e. IOT and smart agriculture using automation. Monitoring environmental conditions is the major factor to improve yield of the efficient crops. The feature of this paper includes development of a system which can monitor temperature, humidity, moisture and even the movement of animals which may destroy the crops in agricultural field through sensors using Arduino board and in case of any discrepancy send a SMS notification as well as a notification on the application developed for the same to the farmer’s smartphone using Wi-Fi/3G/4G. The system has a duplex communication link based on a cellularInternet interface that allows for data inspection and irrigation scheduling to be programmed through an android application. Because of its energy autonomy and low cost, the system has the potential to be useful in water limited geographically isolated areas.", "title": "" }, { "docid": "1c5a717591aa049303af7239ff203ebb", "text": "Indian Biotech opponents have attributed the increase of suicides to the monopolization of GM seeds, centering on patent control, application of terminator technology, marketing strategy, and increased production costs. The contentions of the biotech opponents, however, have been criticized for a lack of transparency in their modus operandi i.e. the use of methodology in their argumentation. The fact is, however, that with the intention of getting the attention of those capable of determining the future of GM cotton in India, opponents resorted to generating controversies. Therefore, this article will review and evaluate the multifaceted contentions of both opponents and defenders. Although the association between seed monopolization and farmer-suicide is debatable, we will show that there is a link between the economic factors associated with Bt. cultivation and farmer suicide. The underlying thesis of biotech opponents becomes all the more significant when analysed vis-à-vis the contention of the globalization critics that there has been a political and economic marginalization of the Indian farmers. Their accusation assumes significance in the context of a fragile democracy like India where market forces are accorded precedence over farmers' needs until election time.", "title": "" }, { "docid": "ca8da405a67d3b8a30337bc23dfce0cc", "text": "Object detection is one of the most important tasks of computer vision. It is usually performed by evaluating a subset of the possible locations of an image, that are more likely to contain the object of interest. Exhaustive approaches have now been superseded by object proposal methods. The interplay of detectors and proposal algorithms has not been fully analyzed and exploited up to now, although this is a very relevant problem for object detection in video sequences. We propose to connect, in a closed-loop, detectors and object proposal generator functions exploiting the ordered and continuous nature of video sequences. 
Different from tracking we only require a previous frame to improve both proposal and detection: no prediction based on local motion is performed, thus avoiding tracking errors. We obtain three to four points of improvement in mAP and a detection time that is lower than Faster Regions with CNN features (R-CNN), which is the fastest Convolutional Neural Network (CNN) based generic object detector known at the moment.", "title": "" }, { "docid": "ad02d315182c1b6181c6dda59185142c", "text": "Fact checking is an essential part of any investigative work. For linguistic, psychological and social reasons, it is an inherently human task. Yet, modern media make it increasingly difficult for experts to keep up with the pace at which information is produced. Hence, we believe there is value in tools to assist them in this process. Much of the effort on Web data research has been focused on coping with incompleteness and uncertainty. Comparatively, dealing with context has received less attention, although it is crucial in judging the validity of a claim. For instance, what holds true in a US state, might not in its neighbors, e.g., due to obsolete or superseded laws. In this work, we address the problem of checking the validity of claims in multiple contexts. We define a language to represent and query facts across different dimensions. The approach is non-intrusive and allows relatively easy modeling, while capturing incompleteness and uncertainty. We describe the syntax and semantics of the language. We present algorithms to demonstrate its feasibility, and we illustrate its usefulness through examples.", "title": "" }, { "docid": "9b9a04a859b51866930b3fb4d93653b6", "text": "BACKGROUND\nResults of several studies have suggested a probable etiologic association between Epstein-Barr virus (EBV) and leukemias; therefore, the aim of this study was to investigate the association of EBV in childhood leukemia.\n\n\nMETHODS\nA direct isothermal amplification method was developed for detection of the latent membrane protein 1 (LMP1) of EBV in the peripheral blood of 80 patients with leukemia (54 had lymphoid leukemia and 26 had myeloid leukemia) and of 20 hematologically healthy control subjects.\n\n\nRESULTS\nEBV LMP1 gene transcripts were found in 29 (36.3%) of the 80 patients with leukemia but in none of the healthy controls (P < .0001). Of the 29 EBV(+) cases, 23 (79.3%), 5 (17.3%), and 1 (3.4%) were acute lymphoblastic leukemia, acute myeloid leukemia, and chronic myeloid leukemia, respectively.\n\n\nCONCLUSION\nEBV LMP1 gene transcriptional activity was observed in a significant proportion of patients with acute lymphoblastic leukemia. EBV infection in patients with lymphoid leukemia may be a factor involved in the high incidence of pediatric leukemia in the Sudan.", "title": "" }, { "docid": "6c1317ef88110756467a10c4502851bb", "text": "Deciding query equivalence is an important problem in data management with many practical applications. Solving the problem, however, is not an easy task. While there has been a lot of work done in the database research community in reasoning about the semantic equivalence of SQL queries, prior work mainly focuses on theoretical limitations. In this paper, we present COSETTE, a fully automated prover that can determine the equivalence of SQL queries. 
COSETTE leverages recent advances in both automated constraint solving and interactive theorem proving, and returns a counterexample (in terms of input relations) if two queries are not equivalent, or a proof of equivalence otherwise. Although the problem of determining equivalence for arbitrary SQL queries is undecidable, our experiments show that COSETTE can determine the equivalences of a wide range of queries that arise in practice, including conjunctive queries, correlated queries, queries with outer joins, and queries with aggregates. Using COSETTE, we have also proved the validity of magic set rewrites, and confirmed various real-world query rewrite errors, including the famous COUNT bug. We are unaware of any prior tool that can automatically determine the equivalences of a broad range of queries as COSETTE, and believe that our tool represents a major step towards building provably-correct query optimizers for real-world database systems.", "title": "" }, { "docid": "d603806f579a937a24ad996543fe9093", "text": "Early vision relies heavily on rectangular windows for tasks such as smoothing and computing correspondence. While rectangular windows are efficient, they yield poor results near object boundaries. We describe an efficient method for choosing an arbitrarily shaped connected window, in a manner which varies at each pixel. Our approach can be applied to many problems, including image restoration and visual correspondence. It runs in linear time, and takes a few seconds on traditional benchmark images. Performance on both synthetic and real imagery with ground truth appears promising.", "title": "" }, { "docid": "67070d149bcee51cc93a81f21f15ad71", "text": "As an important and fundamental tool for analyzing the schedulability of a real-time task set on the multiprocessor platform, response time analysis (RTA) has been researched for several years on both Global Fixed Priority (G-FP) and Global Earliest Deadline First (G-EDF) scheduling. This paper proposes a new analysis that improves over current state-of-the-art RTA methods for both G-FP and G-EDF scheduling, by reducing their pessimism. The key observation is that when estimating the carry-in workload, all the existing RTA techniques depend on the worst case scenario in which the carry-in job should execute as late as possible and just finishes execution before its worst case response time (WCRT). But the carry-in workload calculated under this assumption may be over-estimated, and thus the accuracy of the response time analysis may be impacted. To address this problem, we first propose a new method to estimate the carry-in workload more precisely. The proposed method does not depend on any specific scheduling algorithm and can be used for both G-FP and G-EDF scheduling. We then propose a general RTA algorithm that can improve most existing RTA tests by incorporating our carry-in estimation method. To further improve the execution efficiency, we also introduce an optimization technique for our RTA tests. Experiments with randomly generated task sets are conducted and the results show that, compared with the state-of-the-art technologies, the proposed tests exhibit considerable performance improvements, up to 9 and 7.8 percent under G-FP and G-EDF scheduling respectively, in terms of schedulability test precision.", "title": "" }, { "docid": "90f188c1f021c16ad7c8515f1244c08a", "text": "Minimally invasive principles should be the driving force behind rehabilitating young individuals affected by severe dental erosion. 
The maxillary anterior teeth of a patient, class ACE IV, have been treated following the most conservative approach, the Sandwich Approach. These teeth, if restored by conventional dentistry (e.g., crowns), would have required elective endodontic therapy and crown lengthening. To preserve the pulp vitality, six palatal resin composite veneers and four facial ceramic veneers were delivered instead with minimal, if any, removal of tooth structure. In this article, the details about the treatment are described.", "title": "" }, { "docid": "609110c4bf31885d99618994306ef2cc", "text": "This study examined the ability of a collagen solution to aid revascularization of necrotic-infected root canals in immature dog teeth. Sixty immature teeth from 6 dogs were infected, disinfected, and randomized into experimental groups: 1: no further treatment; 2: blood in canal; 3: collagen solution in canal; 4: collagen solution + blood, and 5: negative controls (left for natural development). Uncorrected chi-square analysis of radiographic results showed no statistical differences (p ≥ 0.05) between experimental groups regarding healing of radiolucencies but a borderline statistical difference (p = 0.058) for group 1 versus group 4 for radicular thickening. Group 2 showed significantly more apical closure than group 1 (p = 0.03) and a borderline statistical difference (p = 0.051) for group 3 versus group 1. Uncorrected chi-square analysis revealed that there were no statistical differences between experimental groups for histological results. However, some roots in each of groups 1 to 4 (previously infected) showed positive histologic outcomes (thickened walls in 43.9%, apical closure in 54.9%, and new luminal tissue in 29.3%). Revascularization of disinfected immature dog root canal systems is possible.", "title": "" }, { "docid": "eb0ef9876f37b5974ed27079bcda8e03", "text": "Increasing numbers of individuals are using the internet to meet their health information needs; however, little is known about the characteristics of online health information seekers and whether they differ from individuals who search for health information from offline sources. Researchers must examine the primary characteristics of online and offline health information seekers in order to better recognize their needs, highlight improvements that may be made in the arena of internet health information quality and availability, and understand factors that discriminate between those who seek online vs. offline health information. This study examines factors that differentiate between online and offline health information seekers in the United States. Data for this study are from a subsample (n = 385) of individuals from the 2000 General Social Survey. The subsample includes those respondents who were asked Internet and health seeking module questions. Similar to prior research, results of this study show that the majority of both online and offline health information seekers report reliance upon health care professionals as a source of health information. This study is unique in that the results illustrate that there are several key factors (age, income, and education) that discriminate between US online and offline health information seekers; this suggests that general \"digital divide\" characteristics influence where health information is sought. In addition to traditional digital divide factors, those who are healthier and happier are less likely to look exclusively offline for health information. 
Implications of these findings are discussed in terms of the digital divide and the patient-provider relationship.", "title": "" }, { "docid": "a35bdf118e84d71b161fea1b9e798a1a", "text": "Parallel imaging may be applied to cancel ghosts caused by a variety of distortion mechanisms, including distortions such as off-resonance or local flow, which are space variant. Phased array combining coefficients may be calculated that null ghost artifacts at known locations based on a constrained optimization, which optimizes SNR subject to the nulling constraint. The resultant phased array ghost elimination (PAGE) technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation is applied to full field-of-view (FOV) images. The phased array method for ghost elimination may result in greater flexibility in designing acquisition strategies. For example, in multi-shot EPI applications ghosts are typically mitigated by the use of an interleaved phase encode acquisition order. An alternative strategy is to use a sequential, non-interleaved phase encode order and cancel the resultant ghosts using PAGE parallel imaging. Cancellation of ghosts by means of phased array processing makes sequential, non-interleaved phase encode acquisition order practical, and permits a reduction in repetition time, TR, by eliminating the need for echo-shifting. Sequential, non-interleaved phase encode order has benefits of reduced distortion due to off-resonance, in-plane flow and EPI delay misalignment. Furthermore, the use of EPI with PAGE has inherent fat-water separation and has been used to provide off-resonance correction using a technique referred to as lipid elimination with an echo-shifting N/2-ghost acquisition (LEENA), and may further generalized using the multi-point Dixon method. Other applications of PAGE include cancelling ghosts which arise due to amplitude or phase variation during the approach to steady state. Parallel imaging requires estimates of the complex coil sensitivities. In vivo estimates may be derived by temporally varying the phase encode ordering to obtain a full k-space dataset in a scheme similar to the autocalibrating TSENSE method. This scheme is a generalization of the UNFOLD method used for removing aliasing in undersampled acquisitions. The more general scheme may be used to modulate each EPI ghost image to a separate temporal frequency as described in this paper.", "title": "" }, { "docid": "022f0b83e93b82dfbdf7ae5f5ebe6f8f", "text": "Most pregnant women at risk of for infection with Plasmodium vivax live in the Asia-Pacific region. However, malaria in pregnancy is not recognised as a priority by many governments, policy makers, and donors in this region. Robust data for the true burden of malaria throughout pregnancy are scarce. Nevertheless, when women have little immunity, each infection is potentially fatal to the mother, fetus, or both. WHO recommendations for the control of malaria in pregnancy are largely based on the situation in Africa, but strategies in the Asia-Pacific region are complicated by heterogeneous transmission settings, coexistence of multidrug-resistant Plasmodium falciparum and Plasmodium vivax parasites, and different vectors. Most knowledge of the epidemiology, effect, treatment, and prevention of malaria in pregnancy in the Asia-Pacific region comes from India, Papua New Guinea, and Thailand. Improved estimates of the morbidity and mortality of malaria in pregnancy are urgently needed. 
When malaria in pregnancy cannot be prevented, accurate diagnosis and prompt treatment are needed to avert dangerous symptomatic disease and to reduce effects on fetuses.", "title": "" }, { "docid": "62218093e4d3bf81b23512043fc7a013", "text": "The Internet of things (IoT) refers to every object, which is connected over a network with the ability to transfer data. Users perceive this interaction and connection as useful in their daily life. However, any improperly designed and configured technology will be exposed to security threats. Therefore, an ecosystem for IoT should be designed with security embedded in each layer of its ecosystem. This paper will discuss the security threats to IoT and then propose an IoT Security Framework to mitigate them. Then the IoT Security Framework will be used to develop a Secure IoT Sensor to Cloud Ecosystem.", "title": "" }, { "docid": "ba0051fdc72efa78a7104587042cea64", "text": "Open innovation breaks the original innovation border of organization and emphasizes the use of suppliers, customers, partners, and other internal and external innovative thinking and resources. How to effectively implement and manage open innovation has become a new business problem. Business ecosystem is the network system of value creation and co-evolution achieved by suppliers, users, partners, and other groups with self-organization mode. This study began with the risk analysis of open innovation implementation; then innovation process was embedded into business ecosystem structure; open innovation mode based on business ecosystem was proposed; business ecosystem based on open innovation was built according to influence degree of each innovative object. Study finds that both sides have a mutual promotion relationship, which provides a new analysis perspective for open innovation and business ecosystem; at the same time, it is also conducive to guiding the concrete practice of implementing open innovation.", "title": "" }, { "docid": "f10d79d1eb6d3ec994c1ec7ec3769437", "text": "The security of embedded devices often relies on the secrecy of proprietary cryptographic algorithms. These algorithms and their weaknesses are frequently disclosed through reverse-engineering software, but it is commonly thought to be too expensive to reconstruct designs from a hardware implementation alone. This paper challenges that belief by presenting an approach to reverse-engineering a cipher from a silicon implementation. Using this mostly automated approach, we reveal a cipher from an RFID tag that is not known to have a software or micro-code implementation. We reconstruct the cipher from the widely used Mifare Classic RFID tag by using a combination of image analysis of circuits and protocol analysis. Our analysis reveals that the security of the tag is even below the level that its 48-bit key length suggests due to a number of design flaws. Weak random numbers and a weakness in the authentication protocol allow for pre-computed rainbow tables to be used to find any key in a matter of seconds. Our approach of deducing functionality from circuit images is mostly automated, hence it is also feasible for large chips. The assumption that algorithms can be kept secret should therefore be avoided for any type of silicon chip. Il faut qu’il n’exige pas le secret, et qu’il puisse sans inconvénient tomber entre les mains de l’ennemi. ([A cipher] must not depend on secrecy, and it must not matter if it falls into enemy hands.) 
August Kerckhoffs, La Cryptographie Militaire, January 1883 [13]", "title": "" }, { "docid": "8410b8b76ab690ed4389efae15608d13", "text": "The most natural way to speed up the training of large networks is to use data parallelism on multiple GPUs. To scale Stochastic Gradient (SG) based methods to more processors, one needs to increase the batch size to make full use of the computational power of each GPU. However, keeping the accuracy of the network as the batch size increases is not trivial. Currently, the state-of-the-art method is to increase the Learning Rate (LR) proportionally to the batch size, and use a special learning rate with a \"warm-up\" policy to overcome initial optimization difficulty. By controlling the LR during the training process, one can efficiently use large batches in ImageNet training. For example, Batch-1024 for AlexNet and Batch-8192 for ResNet-50 are successful applications. However, for ImageNet-1k training, state-of-the-art AlexNet only scales the batch size to 1024 and ResNet50 only scales it to 8192. The reason is that we cannot scale the learning rate to a large value. To enable large-batch training for general networks or datasets, we propose Layer-wise Adaptive Rate Scaling (LARS). LARS uses different LRs for different layers based on the norm of the weights (||w||) and the norm of the gradients (||∇w||). By using the LARS algorithm, we can scale the batch size to 32768 for ResNet50 and 8192 for AlexNet. Large batches can make full use of the system’s computational power. For example, batch-4096 can achieve a 3× speedup over batch-512 for ImageNet training with the AlexNet model on a DGX-1 station (8 P100 GPUs).", "title": "" }, { "docid": "bde5a1876e93f10ad5942c416063bef6", "text": "This paper describes an innovative agent-based architecture for mixed-initiative interaction between a human and a robot that interacts via a graphical user interface (GUI). Mixed-initiative interaction typically refers to a flexible interaction strategy between a human and a computer to contribute what is best-suited at the most appropriate time [1]. In this paper, we extend this concept to human-robot interaction (HRI). When compared to pure human-computer interaction, HRIs encounter additional difficulty, as the user must assess the situation at the robot’s remote location via limited sensory feedback. We propose an agent-based adaptive human-robot interface for mixed-initiative interaction to address this challenge. The proposed adaptive user interface (UI) architecture provides a platform for developing various agents that control robots and user interface components (UICs). Such components permit the human and the robot to communicate mission-relevant information.", "title": "" }, { "docid": "2b2c30fa2dc19ef7c16cf951a3805242", "text": "A standard approach to estimating online click-based metrics of a ranking function is to run it in a controlled experiment on live users. While reliable and popular in practice, configuring and running an online experiment is cumbersome and time-intensive. In this work, inspired by recent successes of offline evaluation techniques for recommender systems, we study an alternative that uses historical search logs to reliably predict online click-based metrics of a new ranking function, without actually running it on live users. To tackle novel challenges encountered in Web search, variations of the basic techniques are proposed. 
The first is to take advantage of diversified behavior of a search engine over a long period of time to simulate randomized data collection, so that our approach can be used at very low cost. The second is to replace exact matching (of recommended items in previous work) by fuzzy matching (of search result pages) to increase data efficiency, via a better trade-off of bias and variance. Extensive experimental results based on large-scale real search data from a major commercial search engine in the US market demonstrate our approach is promising and has potential for wide use in Web search.", "title": "" } ]
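As a rough illustration of the exact-match replay idea behind the offline click-metric passage above, the estimate can be formed from logged search events roughly as follows (a hedged Python sketch; all names are hypothetical, and the fuzzy matching proposed in the passage would relax the equality test):

```python
def replay_estimate(logged_events, new_ranker):
    """Estimate the click rate a new ranker would achieve, using only logged
    (query, shown_page, clicked) events: keep the events where the new ranker
    would have shown the same result page, and average their click outcomes."""
    matched, clicks = 0, 0
    for query, shown_page, clicked in logged_events:
        if new_ranker(query) == shown_page:   # exact match; fuzzy matching would relax this
            matched += 1
            clicks += clicked
    return clicks / matched if matched else None
```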
scidocsrr
0f6c92d1fd23fab6d2ef7e67ef22a415
Evaluating the Usability of Optimizing Text-based CAPTCHA Generation
[ { "docid": "96d6173f58e36039577c8e94329861b2", "text": "Reverse Turing tests, or CAPTCHAs, have become an ubiquitous defense used to protect open Web resources from being exploited at scale. An effective CAPTCHA resists existing mechanistic software solving, yet can be solved with high probability by a human being. In response, a robust solving ecosystem has emerged, reselling both automated solving technology and realtime human labor to bypass these protections. Thus, CAPTCHAs can increasingly be understood and evaluated in purely economic terms; the market price of a solution vs the monetizable value of the asset being protected. We examine the market-side of this question in depth, analyzing the behavior and dynamics of CAPTCHA-solving service providers, their price performance, and the underlying labor markets driving this economy.", "title": "" }, { "docid": "36b4c028bcd92115107cf245c1e005c8", "text": "CAPTCHA is now almost a standard security technology, and has found widespread application in commercial websites. Usability and robustness are two fundamental issues with CAPTCHA, and they often interconnect with each other. This paper discusses usability issues that should be considered and addressed in the design of CAPTCHAs. Some of these issues are intuitive, but some others have subtle implications for robustness (or security). A simple but novel framework for examining CAPTCHA usability is also proposed.", "title": "" } ]
[ { "docid": "25196ef0c4385ec44b62183d9c282fc6", "text": "It is not well understood how privacy concern and trust influence social interactions within social networking sites. An online survey of two popular social networking sites, Facebook and MySpace, compared perceptions of trust and privacy concern, along with willingness to share information and develop new relationships. Members of both sites reported similar levels of privacy concern. Facebook members expressed significantly greater trust in both Facebook and its members, and were more willing to share identifying information. Even so, MySpace members reported significantly more experience using the site to meet new people. These results suggest that in online interaction, trust is not as necessary in the building of new relationships as it is in face to face encounters. They also show that in an online site, the existence of trust and the willingness to share information do not automatically translate into new social interaction. This study demonstrates online relationships can develop in sites where perceived trust and privacy safeguards are weak.", "title": "" }, { "docid": "528726032a0cfbd366c278cd247b0008", "text": "It is difficult to develop a computational model that can accurately predict the quality of the video summary. This paper proposes a novel algorithm to summarize one-shot landmark videos. The algorithm can optimally combine multiple unedited consumer video skims into an aesthetically pleasing summary. In particular, to effectively select the representative key frames from multiple videos, an active learning algorithm is derived by taking advantage of the locality of the frames within each video. Toward a smooth video summary, we define skimlet, a video clip with adjustable length, starting frame, and positioned by each skim. Thereby, a probabilistic framework is developed to transfer the visual cues from a collection of aesthetically pleasing photos into the video summary. The length and the starting frame of each skimlet are calculated to maximally smoothen the video summary. At the same time, the unstable frames are removed from each skimlet. Experiments on multiple videos taken from different sceneries demonstrated the aesthetics, the smoothness, and the stability of the generated summary.", "title": "" }, { "docid": "07409cd81cc5f0178724297245039878", "text": "In recent years, the number of sensor network deployments for real-life applications has rapidly increased and it is expected to expand even more in the near future. Actually, for a credible deployment in a real environment three properties need to be fulfilled, i.e., energy efficiency, scalability and reliability. In this paper we focus on IEEE 802.15.4 sensor networks and show that they can suffer from a serious MAC unreliability problem, also in an ideal environment where transmission errors never occur. This problem arises whenever power management is enabled - for improving the energy efficiency - and results in a very low delivery ratio, even when the number of nodes in the network is very low (e.g., 5). We carried out an extensive analysis, based on simulations and real measurements, to investigate the ultimate reasons of this problem. We found that it is caused by the default MAC parameter setting suggested by the 802.15.4 standard. We also found that, with a more appropriate parameter setting, it is possible to achieve the desired level of reliability (as well as a better energy efficiency). 
However, in some scenarios this is possible only by choosing parameter values formally not allowed by the standard.", "title": "" }, { "docid": "c70e11160c90bd67caa2294c499be711", "text": "The vital sign monitoring through Impulse Radio Ultra-Wide Band (IR-UWB) radar provides continuous assessment of a patient's respiration and heart rates in a non-invasive manner. In this paper, IR UWB radar is used for monitoring respiration and the human heart rate. The breathing and heart rate frequencies are extracted from the signal reflected from the human body. A Kalman filter is applied to reduce the measurement noise from the vital signal. An algorithm is presented to separate the heart rate signal from the breathing harmonics. An auto-correlation based technique is applied for detecting random body movements (RBM) during the measurement process. Experiments were performed in different scenarios in order to show the validity of the algorithm. The vital signs were estimated for the signal reflected from the chest, as well as from the back side of the body in different experiments. The results from both scenarios are compared for respiration and heartbeat estimation accuracy.", "title": "" }, { "docid": "d52efc862c68ec09a5ae3395464996ed", "text": "The growth of digital video has given rise to a need for computational methods for evaluating the visual quality of digital video. We have developed a new digital video quality metric, which we call DVQ (Digital Video Quality). Here we provide a brief description of the metric, and give a preliminary report on its performance. DVQ accepts a pair of digital video sequences, and computes a measure of the magnitude of the visible difference between them. The metric is based on the Discrete Cosine Transform. It incorporates aspects of early visual processing, including light adaptation, luminance and chromatic channels, spatial and temporal filtering, spatial frequency channels, contrast masking, and probability summation. It also includes primitive dynamics of light adaptation and contrast masking. We have applied the metric to digital video sequences corrupted by various typical compression artifacts, and compared the results to quality ratings made by human observers.", "title": "" }, { "docid": "49a041e18a063876dc595f33fe8239a8", "text": "Significant vulnerabilities have recently been identified in collaborative filtering recommender systems. These vulnerabilities mostly emanate from the open nature of such systems and their reliance on userspecified judgments for building profiles. Attackers can easily introduce biased data in an attempt to force the system to “adapt” in a manner advantageous to them. Our research in secure personalization is examining a range of attack models, from the simple to the complex, and a variety of recommendation techniques. In this chapter, we explore an attack model that focuses on a subset of users with similar tastes and show that such an attack can be highly successful against both user-based and item-based collaborative filtering. We also introduce a detection model that can significantly decrease the impact of this attack.", "title": "" }, { "docid": "6226b650540d812b6c40939a582331ef", "text": "With an increasingly mobile society and the worldwide deployment of mobile and wireless networks, the wireless infrastructure can support many current and emerging healthcare applications. 
This could fulfill the vision of “Pervasive Healthcare” or healthcare to anyone, anytime, and anywhere by removing locational, time and other restraints while increasing both the coverage and the quality. In this paper, we present applications and requirements of pervasive healthcare, wireless networking solutions and several important research problems. The pervasive healthcare applications include pervasive health monitoring, intelligent emergency management system, pervasive healthcare data access, and ubiquitous mobile telemedicine. One major application in pervasive healthcare, termed comprehensive health monitoring is presented in significant details using wireless networking solutions of wireless LANs, ad hoc wireless networks, and, cellular/GSM/3G infrastructureoriented networks.Many interesting challenges of comprehensive wireless health monitoring, including context-awareness, reliability, and, autonomous and adaptable operation are also presented along with several high-level solutions. Several interesting research problems have been identified and presented for future research.", "title": "" }, { "docid": "d86eb65183f059a4ca7cb0ad9190a0ca", "text": "Different short circuits, load growth, generation shortage, and other faults which disturb the voltage and frequency stability are serious threats to the system security. The frequency and voltage instability causes dispersal of a power system into sub-systems, and leads to blackout as well as heavy damages of the system equipment. This paper presents a fast and optimal adaptive load shedding method, for isolated power system using Artificial Neural Networks (ANN). The proposed method is able to determine the necessary load shedding in all steps simultaneously and is much faster than conventional methods. This method has been tested on the New-England power system. The simulation results show that the proposed algorithm is fast, robust and optimal values of load shedding in different loading scenarios are obtained in comparison with conventional method.", "title": "" }, { "docid": "cca431043c72db900f45e7b79bb9fb66", "text": "During the past decade, there have been a variety of significant developments in data mining techniques. Some of these developments are implemented in customized service to develop customer relationship. Customized service is actually crucial in retail markets. Marketing managers can develop long-term and pleasant relationships with customers if they can detect and predict changes in customer behavior. In the dynamic retail market, understanding changes in customer behavior can help managers to establish effective promotion campaigns. This study integrates customer behavioral variables, demographic variables, and transaction database to establish a method of mining changes in customer behavior. For mining change patterns, two extended measures of similarity and unexpectedness are designed to analyze the degree of resemblance between patterns at different time periods. The proposed approach for mining changes in customer behavior can assist managers in developing better marketing strategies. q 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "28600f0ee7ca1128874e830e01a028de", "text": "This paper presents and analyzes a three-tier architecture for collecting sensor data in sparse sensor networks. Our approach exploits the presence of mobile entities (called MULEs) present in the environment. 
When in close range, MULEs pick up data from the sensors, buffer it, and deliver it to wired access points. This can lead to substantial power savings at the sensors as they only have to transmit over a short range. This paper focuses on a simple analytical model for understanding performance as system parameters are scaled. Our model assumes a two-dimensional random walk for mobility and incorporates key system variables such as number of MULEs, sensors and access points. The performance metrics observed are the data success rate (the fraction of generated data that reaches the access points), latency and the required buffer capacities on the sensors and the MULEs. The modeling and simulation results can be used for further analysis and provide certain guidelines for deployment of such systems. © 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "a992be26d6b41ee4d3a8f8fa7014727b", "text": "In this paper, we develop a heart disease prediction model that can assist medical professionals in predicting heart disease status based on the clinical data of patients. Firstly, we select 14 important clinical features, i.e., age, sex, chest pain type, trestbps, cholesterol, fasting blood sugar, resting ecg, max heart rate, exercise induced angina, old peak, slope, number of vessels colored, thal and diagnosis of heart disease. Secondly, we develop a prediction model using the J48 decision tree for classifying heart disease based on these clinical features against the unpruned, pruned and pruned with reduced error pruning approaches. Finally, the accuracy of the Pruned J48 Decision Tree with Reduced Error Pruning Approach is better than that of the simple Pruned and Unpruned approaches. The results obtained show that fasting blood sugar is the most important attribute, giving better classification than the other attributes, but it does not give better accuracy. Keywords—Data mining, Reduced Error Pruning, Gain Ratio and Decision Tree.", "title": "" }, { "docid": "85ab2edb48dd57f259385399437ea8e9", "text": "Training robust deep video representations has proven to be much more challenging than learning deep image representations. This is in part due to the enormous size of raw video streams and the high temporal redundancy; the true and interesting signal is often drowned in too much irrelevant data. Motivated by the fact that the superfluous information can be reduced by up to two orders of magnitude by video compression (using H.264, HEVC, etc.), we propose to train a deep network directly on the compressed video. This representation has a higher information density, and we found the training to be easier. In addition, the signals in a compressed video provide free, albeit noisy, motion information. We propose novel techniques to use them effectively. Our approach is about 4.6 times faster than Res3D and 2.7 times faster than ResNet-152. On the task of action recognition, our approach outperforms all the other methods on the UCF-101, HMDB-51, and Charades datasets.", "title": "" }, { "docid": "cb8a21bf8d0642ee9410419ecf472b21", "text": "Sentiment analysis or opinion mining is one of the major tasks of NLP (Natural Language Processing). Sentiment analysis has gained much attention in recent years. In this paper, we aim to tackle the problem of sentiment polarity categorization, which is one of the fundamental problems of sentiment analysis. A general process for sentiment polarity categorization is proposed with detailed process descriptions. 
Data used in this study are online product reviews collected from Amazon.com. Experiments for both sentence-level categorization and review-level categorization are performed with promising outcomes. At last, we also give insight into our future work on sentiment analysis.", "title": "" }, { "docid": "585ec3229d7458f5d6bca3c7936eb306", "text": "Graph processing has gained renewed attention. The increasing large scale and wealth of connected data, such as those accrued by social network applications, demand the design of new techniques and platforms to efficiently derive actionable information from large scale graphs. Hybrid systems that host processing units optimized for both fast sequential processing and bulk processing (e.g., GPUaccelerated systems) have the potential to cope with the heterogeneous structure of real graphs and enable high performance graph processing. Reaching this point, however, poses multiple challenges. The heterogeneity of the processing elements (e.g., GPUs implement a different parallel processing model than CPUs and have much less memory) and the inherent irregularity of graph workloads require careful graph partitioning and load assignment. In particular, the workload generated by a partitioning scheme should match the strength of the processing element the partition is allocated to. This work explores the feasibility and quantifies the performance gains of such low-cost partitioning schemes. We propose to partition the workload between the two types of processing elements based on vertex connectivity. We show that such partitioning schemes offer a simple, yet efficient way to boost the overall performance of the hybrid system. Our evaluation illustrates that processing a 4-billion edges graph on a system with one CPU socket and one GPU, while offloading as little as 25% of the edges to the GPU, achieves 2x performance improvement over state-of-the-art implementations running on a dual-socket symmetric system. Moreover, for the same graph, a hybrid system with dualsocket and dual-GPU is capable of 1.13 Billion breadth-first search traversed edge per second, a performance rate that is competitive with the latest entries in the Graph500 list, yet at a much lower price point.", "title": "" }, { "docid": "6c72b38246e35d1f49d7f55e89b42f21", "text": "The success of IT project related to numerous factors. It had an important significance to find the critical factors for the success of project. Based on the general analysis of IT project management, this paper analyzed some factors of project management for successful IT project from the angle of modern project management. These factors include project participators, project communication, collaboration, and information sharing mechanism as well as project management process. In the end, it analyzed the function of each factor for a successful IT project. On behalf of the collective goal, by the use of the favorable project communication and collaboration, the project participants carry out successfully to the management of the process, which is significant to the project, and make project achieve success eventually.", "title": "" }, { "docid": "0fafa2597726dfeb4d35721c478f1038", "text": "Visual saliency models have enjoyed a big leap in performance in recent years, thanks to advances in deep learning and large scale annotated data. Despite enormous effort and huge breakthroughs, however, models still fall short in reaching human-level accuracy. 
In this work, I explore the landscape of the field emphasizing on new deep saliency models, benchmarks, and datasets. A large number of image and video saliency models are reviewed and compared over two image benchmarks and two large scale video datasets. Further, I identify factors that contribute to the gap between models and humans and discuss the remaining issues that need to be addressed to build the next generation of more powerful saliency models. Some specific questions that are addressed include: in what ways current models fail, how to remedy them, what can be learned from cognitive studies of attention, how explicit saliency judgments relate to fixations, how to conduct fair model comparison, and what are the emerging applications of saliency models.", "title": "" }, { "docid": "fd18b3d4799d23735c48bff3da8fd5ff", "text": "There is need for an Integrated Event Focused Crawling system to collect Web data about key events. When a disaster or other significant event occurs, many users try to locate the most up-to-date information about that event. Yet, there is little systematic collecting and archiving anywhere of event information. We propose intelligent event focused crawling for automatic event tracking and archiving, ultimately leading to effective access. We developed an event model that can capture key event information, and incorporated that model into a focused crawling algorithm. For the focused crawler to leverage the event model in predicting webpage relevance, we developed a function that measures the similarity between two event representations. We then conducted two series of experiments to evaluate our system about two recent events: California shooting and Brussels attack. The first experiment series evaluated the effectiveness of our proposed event model representation when assessing the relevance of webpages. Our event model-based representation outperformed the baseline method (topic-only); it showed better results in precision, recall, and F1-score with an improvement of 20% in F1-score. The second experiment series evaluated the effectiveness of the event model-based focused crawler for collecting relevant webpages from the WWW. Our event model-based focused crawler outperformed the state-of-the-art baseline focused crawler (best-first); it showed better results in harvest ratio with an average improvement of 40%.", "title": "" }, { "docid": "4142b1fc9e37ffadc6950105c3d99749", "text": "Just-noticeable distortion (JND), which refers to the maximum distortion that the human visual system (HVS) cannot perceive, plays an important role in perceptual image and video processing. In comparison with JND estimation for images, estimation of the JND profile for video needs to take into account the temporal HVS properties in addition to the spatial properties. In this paper, we develop a spatio-temporal model estimating JND in the discrete cosine transform domain. The proposed model incorporates the spatio-temporal contrast sensitivity function, the influence of eye movements, luminance adaptation, and contrast masking to be more consistent with human perception. It is capable of yielding JNDs for both still images and video with significant motion. The experiments conducted in this study have demonstrated that the JND values estimated for video sequences with moving objects by the model are in line with the HVS perception. 
The accurate JND estimation of the video towards the actual visibility bounds can be translated into resource savings (e.g., for bandwidth/storage or computation) and performance improvement in video coding and other visual processing tasks (such as perceptual quality evaluation, visual signal restoration/enhancement, watermarking, authentication, and error protection)", "title": "" }, { "docid": "52d2004c762d4701ab275d9757c047fc", "text": "Somatic mosaicism — the presence of genetically distinct populations of somatic cells in a given organism — is frequently masked, but it can also result in major phenotypic changes and reveal the expression of otherwise lethal genetic mutations. Mosaicism can be caused by DNA mutations, epigenetic alterations of DNA, chromosomal abnormalities and the spontaneous reversion of inherited mutations. In this review, we discuss the human disorders that result from somatic mosaicism, as well as the molecular genetic mechanisms by which they arise. Specifically, we emphasize the role of selection in the phenotypic manifestations of mosaicism.", "title": "" } ]
scidocsrr
ec6484ba5c85d5feffa574b53588b534
Houdini, an Annotation Assistant for ESC/Java
[ { "docid": "cb1952a4931955856c6479d7054c57e7", "text": "This paper presents a static race detection analysis for multithreaded Java programs. Our analysis is based on a formal type system that is capable of capturing many common synchronization patterns. These patterns include classes with internal synchronization, classes that require client-side synchronization, and thread-local classes. Experience checking over 40,000 lines of Java code with the type system demonstrates that it is an effective approach for eliminating race conditions. On large examples, fewer than 20 additional type annotations per 1000 lines of code were required by the type checker, and we found a number of races in the standard Java libraries and other test programs.", "title": "" } ]
[ { "docid": "3fcce3664db5812689c121138e2af280", "text": "We examine and compare simulation-based algorithms for solving the agent scheduling problem in a multiskill call center. This problem consists in minimizing the total costs of agents under constraints on the expected service level per call type, per period, and aggregated. We propose a solution approach that combines simulation with integer or linear programming, with cut generation. In our numerical experiments with realistic problem instances, this approach performs better than all other methods proposed previously for this problem. We also show that the two-step approach, which is the standard method for solving this problem, sometimes yield solutions that are highly suboptimal and inferior to those obtained by our proposed method. 2009 Published by Elsevier B.V.", "title": "" }, { "docid": "e7968b6bfb3535907b380cfd93128b0e", "text": "We present a novel solution to the problem of depth reconstruction from a single image. Single view 3D reconstruction is an ill-posed problem. We address this problem by using an example-based synthesis approach. Our method uses a database of objects from a single class (e.g. hands, human figures) containing example patches of feasible mappings from the appearance to the depth of each object. Given an image of a novel object, we combine the known depths of patches from similar objects to produce a plausible depth estimate. This is achieved by optimizing a global target function representing the likelihood of the candidate depth. We demonstrate how the variability of 3D shapes and their poses can be handled by updating the example database on-the-fly. In addition, we show how we can employ our method for the novel task of recovering an estimate for the occluded backside of the imaged objects. Finally, we present results on a variety of object classes and a range of imaging conditions.", "title": "" }, { "docid": "89596e6eedbc1f13f63ea144b79fdc64", "text": "This paper describes our work in integrating three different lexical resources: FrameNet, VerbNet, and WordNet, into a unified, richer knowledge-base, to the end of enabling more robust semantic parsing. The construction of each of these lexical resources has required many years of laborious human effort, and they all have their strengths and shortcomings. By linking them together, we build an improved resource in which (1) the coverage of FrameNet is extended, (2) the VerbNet lexicon is augmented with frame semantics, and (3) selectional restrictions are implemented using WordNet semantic classes. The synergistic exploitation of various lexical resources is crucial for many complex language processing applications, and we prove it once again effective in building a robust semantic parser.", "title": "" }, { "docid": "24e31f9cdedcc7aa8f9489db9db13f94", "text": "A basic ingredient in transformational leadership development consists in identifying leadership qualities via distribution of the multifactor leadership questionnaire (MLQ) to followers of the target leaders. It is vital that the MLQ yields an accurate and unbiased assessment of leaders on the various leadership dimensions. This article focuses on two sources of bias which may occur in identifying leadership qualities. First, when followers assess the strengths and weaknesses of their leaders, they may have difficulty in differentiating between the various transformational and transactional leadership behaviours. 
It is found that this is only the case for the transformational leadership attributes because the four transformational leadership dimensions measured by the MLQ correlate highly and cluster into one factor. MLQ ratings on the three transactional leadership dimensions are found not to be interrelated and show evidence for three distinct factors: contingent reward, active management-by-exception and passive leadership. Second, social desirability does not seem to be a strong biasing factor, although the transformational leadership scale is somewhat more socially desirable. These findings emphasize that the measurement of so-called “new” leadership qualities remains a controversial issue in leadership development. Practical implications of these findings and avenues for future research are also discussed.", "title": "" }, { "docid": "44e7e452b9b27d2028d15c88256eff30", "text": "In social media communication, multilingual speakers often switch between languages, and, in such an environment, automatic language identification becomes both a necessary and challenging task. In this paper, we describe our work in progress on the problem of automatic language identification for the language of social media. We describe a new dataset that we are in the process of creating, which contains Facebook posts and comments that exhibit code mixing between Bengali, English and Hindi. We also present some preliminary word-level language identification experiments using this dataset. Different techniques are employed, including a simple unsupervised dictionary-based approach, supervised word-level classification with and without contextual clues, and sequence labelling using Conditional Random Fields. We find that the dictionary-based approach is surpassed by supervised classification and sequence labelling, and that it is important to take contextual clues into consideration.", "title": "" }, { "docid": "32fd7a91091f74a5ea55226aa44403d3", "text": "Previous research has shown that patients with schizophrenia are impaired in reinforcement learning tasks. However, behavioral learning curves in such tasks originate from the interaction of multiple neural processes, including the basal ganglia- and dopamine-dependent reinforcement learning (RL) system, but also prefrontal cortex-dependent cognitive strategies involving working memory (WM). Thus, it is unclear which specific system induces impairments in schizophrenia. We recently developed a task and computational model allowing us to separately assess the roles of RL (slow, cumulative learning) mechanisms versus WM (fast but capacity-limited) mechanisms in healthy adult human subjects. Here, we used this task to assess patients' specific sources of impairments in learning. In 15 separate blocks, subjects learned to pick one of three actions for stimuli. The number of stimuli to learn in each block varied from two to six, allowing us to separate influences of capacity-limited WM from the incremental RL system. As expected, both patients (n = 49) and healthy controls (n = 36) showed effects of set size and delay between stimulus repetitions, confirming the presence of working memory effects. Patients performed significantly worse than controls overall, but computational model fits and behavioral analyses indicate that these deficits could be entirely accounted for by changes in WM parameters (capacity and reliability), whereas RL processes were spared. 
These results suggest that the working memory system contributes strongly to learning impairments in schizophrenia.", "title": "" }, { "docid": "3df8f7669b6a9d3509cf72eaa8d94248", "text": "Current forensic tools for examination of embedded systems like mobile phones and PDA’s mostly perform data extraction on a logical level and do not consider the type of storage media during data analysis. This paper suggests a low level approach for the forensic examination of flash memories and describes three low-level data acquisition methods for making full memory copies of flash memory devices. Results are presented of a file system study in which USB memory sticks from 45 different make and models were used. For different mobile phones is shown how full memory copies of their flash memories can be made and which steps are needed to translate the extracted data into a format that can be understood by common forensic media analysis tools. Artifacts, caused by flash specific operations like block erasing and wear leveling, are discussed and directions are given for enhanced data recovery and analysis on data originating from flash memory.", "title": "" }, { "docid": "7a180e503a0b159d545047443524a05a", "text": "We present two methods for determining the sentiment expressed by a movie review. The semantic orientation of a review can be positive, negative, or neutral. We examine the effect of valence shifters on classifying the reviews. We examine three types of valence shifters: negations, intensifiers, and diminishers. Negations are used to reverse the semantic polarity of a particular term, while intensifiers and diminishers are used to increase and decrease, respectively, the degree to which a term is positive or negative. The first method classifies reviews based on the number of positive and negative terms they contain. We use the General Inquirer to identify positive and negative terms, as well as negation terms, intensifiers, and diminishers. We also use positive and negative terms from other sources, including a dictionary of synonym differences and a very large Web corpus. To compute corpus-based semantic orientation values of terms, we use their association scores with a small group of positive and negative terms. We show that extending the term-counting method with contextual valence shifters improves the accuracy of the classification. The second method uses a Machine Learning algorithm, Support Vector Machines. We start with unigram features and then add bigrams that consist of a valence shifter and another word. The accuracy of classification is very high, and the valence shifter bigrams slightly improve it. The features that contribute to the high accuracy are the words in the lists of positive and negative terms. Previous work focused on either the term-counting method or the Machine Learning method. We show that combining the two methods achieves better results than either method alone.", "title": "" }, { "docid": "5c8923335dd4ee4c2123b5b3245fb595", "text": "Virtualization is a key enabler of Cloud computing. Due to the numerous vulnerabilities in current implementations of virtualization, security is the major concern of Cloud computing. In this paper, we propose an enhanced security framework to detect intrusions at the virtual network layer of Cloud. It combines signature and anomaly based techniques to detect possible attacks. 
It uses different classifiers, viz., naive Bayes, decision tree, random forest, extra trees and linear discriminant analysis for an efficient and effective detection of intrusions. To detect distributed attacks at each cluster and at whole Cloud, it collects intrusion evidences from each region of Cloud and applies Dempster-Shafer theory (DST) for final decision making. We analyze the proposed security framework in terms of Cloud IDS requirements through offline simulation using different intrusion datasets.", "title": "" }, { "docid": "24411f7fe027e5eb617cf48c3e36ce05", "text": "Reliability assessment of distribution system, based on historical data and probabilistic methods, leads to an unreliable estimation of reliability indices since the data for the distribution components are usually inaccurate or unavailable. Fuzzy logic is an efficient method to deal with the uncertainty in reliability inputs. In this paper, the ENS index along with other commonly used indices in reliability assessment are evaluated for the distribution system using fuzzy logic. Accordingly, the influential variables on the failure rate and outage duration time of the distribution components, which are natural or human-made, are explained using proposed fuzzy membership functions. The reliability indices are calculated and compared for different cases of the system operations by simulation on the IEEE RBTS Bus 2. The results of simulation show how utilities can significantly improve the reliability of their distribution system by considering the risk of the influential variables.", "title": "" }, { "docid": "ef771fa11d9f597f94cee5e64fcf9fd6", "text": "The principle of artificial curiosity directs active exploration towards the most informative or most interesting data. We show its usefulness for global black box optimization when data point evaluations are expensive. Gaussian process regression is used to model the fitness function based on all available observations so far. For each candidate point this model estimates expected fitness reduction, and yields a novel closed-form expression of expected information gain. A new type of Pareto-front algorithm continually pushes the boundary of candidates not dominated by any other known data according to both criteria, using multi-objective evolutionary search. This makes the exploration-exploitation trade-off explicit, and permits maximally informed data selection. We illustrate the robustness of our approach in a number of experimental scenarios.", "title": "" }, { "docid": "53b6315bfb8fcfef651dd83138b11378", "text": "We illustrate the correspondence between uncertainty sets in robust optimization and some popular risk measures in finance, and show how robust optimization can be used to generalize the concepts of these risk measures. We also show that by using properly defined uncertainty sets in robust optimization models, one can construct coherent risk measures. Our results have implications for efficient portfolio optimization under different measures of risk.
", "title": "" }, { "docid": "913e167521f0ce7a7f1fb0deac58ae9c", "text": "Prospect theory is a descriptive theory of how individuals choose among risky alternatives. The theory challenged the conventional wisdom that economic decision makers are rational expected utility maximizers. We present a number of empirical demonstrations that are inconsistent with the classical theory, expected utility, but can be explained by prospect theory. We then discuss the prospect theory model, including the value function and the probability weighting function. We conclude by highlighting several applications of the theory.", "title": "" }, { "docid": "cbaf7cd4e17c420b7546d132959b3283", "text": "User mobility has given rise to a variety of Web applications, in which the global positioning system (GPS) plays many important roles in bridging between these applications and end users. As a kind of human behavior, transportation modes, such as walking and driving, can provide pervasive computing systems with more contextual information and enrich a user's mobility with informative knowledge. In this article, we report on an approach based on supervised learning to automatically infer users' transportation modes, including driving, walking, taking a bus and riding a bike, from raw GPS logs. Our approach consists of three parts: a change point-based segmentation method, an inference model and a graph-based post-processing algorithm. First, we propose a change point-based segmentation method to partition each GPS trajectory into separate segments of different transportation modes. Second, from each segment, we identify a set of sophisticated features, which are not affected by differing traffic conditions (e.g., a person's direction when in a car is constrained more by the road than any change in traffic conditions). Later, these features are fed to a generative inference model to classify the segments of different modes. Third, we conduct graph-based postprocessing to further improve the inference performance. This postprocessing algorithm considers both the commonsense constraints of the real world and typical user behaviors based on locations in a probabilistic manner. The advantages of our method over the related works include three aspects. (1) Our approach can effectively segment trajectories containing multiple transportation modes. (2) Our work mined the location constraints from user-generated GPS logs, while being independent of additional sensor data and map information like road networks and bus stops. (3) The model learned from the dataset of some users can be applied to infer GPS data from others. Using the GPS logs collected by 65 people over a period of 10 months, we evaluated our approach via a set of experiments. As a result, based on the change-point-based segmentation method and Decision Tree-based inference model, we achieved prediction accuracy greater than 71 percent. Further, using the graph-based post-processing algorithm, the performance attained a 4-percent enhancement.", "title": "" }, { "docid": "c8f9d10de0d961e4ee14b6b118b5f89a", "text": "Deep learning is having a transformative effect on how sensor data are processed and interpreted. 
As a result, it is becoming increasingly feasible to build sensor-based computational models that are much more robust to real-world noise and complexity than previously possible. It is paramount that these innovations reach mobile and embedded devices that often rely on understanding and reacting to sensor data. However, deep models conventionally demand a level of system resources (e.g., memory and computation) that makes them problematic to run directly on constrained devices. In this work, we present the DeepX toolkit (DXTK); an opensource collection of software components for simplifying the execution of deep models on resource-sensitive platforms. DXTK contains a number of pre-trained low-resource deep models that users can quickly adopt and integrate for their particular application needs. It also offers a range of runtime options for executing deep models on range of devices including both Android and Linux variants. But the heart of DXTK is a series of optimization techniques (viz. weight/sparse factorization, convolution separation, precision scaling, and parameter cleaning). Each technique offers a complementary approach to shaping system resource requirements, and is compatible with deep and convolutional neural networks. We hope that DXTK proves to be a valuable resource for the community, and accelerates the adoption and study of resource-constrained deep learning.", "title": "" }, { "docid": "ddf09617b266d483d5e3ab3dcb479b69", "text": "Writing a research article can be a daunting task, and often, writers are not certain what should be included and how the information should be conveyed. Fortunately, scientific and engineering journal articles follow an accepted format. They contain an introduction which includes a statement of the problem, a literature review, and a general outline of the paper, a methods section detailing the methods used, separate or combined results, discussion and application sections, and a final summary and conclusions section. Here, each of these elements is described in detail using examples from the published literature as illustration. Guidance is also provided with respect to style, getting started, and the revision/review process.", "title": "" }, { "docid": "16de36d6bf6db7c294287355a44d0f61", "text": "The Computational Linguistics (CL) Summarization Pilot Task was created to encourage a community effort to address the research problem of summarizing research articles as “faceted summaries” in the domain of computational linguistics. In this pilot stage, a handannotated set of citing papers was provided for ten reference papers to help in automating the citation span and discourse facet identification problems. This paper details the corpus construction efforts by the organizers and the participating teams, who also participated in the task-based evaluation. The annotated development corpus used for this pilot task is publicly available at: https://github.com/WING-", "title": "" }, { "docid": "c718b84951edfe294b8287ef3f5a9c6a", "text": "Dynamic Searchable Symmetric Encryption (DSSE) allows a client to perform keyword searches over encrypted files via an encrypted data structure. Despite its merits, DSSE leaks search and update patterns when the client accesses the encrypted data structure. These leakages may create severe privacy problems as already shown, for example, in recent statistical attacks on DSSE. 
While Oblivious Random Access Memory (ORAM) can hide such access patterns, it incurs significant communication overhead and, therefore, it is not yet fully practical for cloud computing systems. Hence, there is a critical need to develop private access schemes over the encrypted data structure that can seal the leakages of DSSE while achieving practical search/update operations.\n In this paper, we propose a new oblivious access scheme over the encrypted data structure for searchable encryption purposes, that we call Distributed Oblivious Data structure DSSE (DOD-DSSE). The main idea is to create a distributed encrypted incidence matrix on two non-colluding servers such that no arbitrary queries on these servers can be linked to each other. This strategy prevents not only recent statistical attacks on the encrypted data structure but also other potential threats exploiting query linkability. Our security analysis proves that DOD-DSSE ensures the unlinkability of queries and, therefore, offers much higher security than traditional DSSE. At the same time, our performance evaluation demonstrates that DOD-DSSE is two orders of magnitude faster than ORAM-based techniques (e.g., Path ORAM), since it only incurs a small-constant number of communication overhead. That is, we deployed DOD-DSSE on geographically distributed Amazon EC2 servers, and showed that a search/update operation on a very large dataset only takes around one second with DOD-DSSE, while it takes 3 to 13 minutes with Path ORAM-based methods.", "title": "" }, { "docid": "6dfb4c016db41a27587ef08011a7cf0e", "text": "The objective of this work is to detect shadows in images. We pose this as the problem of labeling image regions, where each region corresponds to a group of superpixels. To predict the label of each region, we train a kernel Least-Squares Support Vector Machine (LSSVM) for separating shadow and non-shadow regions. The parameters of the kernel and the classifier are jointly learned to minimize the leave-one-out cross validation error. Optimizing the leave-one-out cross validation error is typically difficult, but it can be done efficiently in our framework. Experiments on two challenging shadow datasets, UCF and UIUC, show that our region classifier outperforms more complex methods. We further enhance the performance of the region classifier by embedding it in a Markov Random Field (MRF) framework and adding pairwise contextual cues. This leads to a method that outperforms the state-of-the-art for shadow detection. In addition we propose a new method for shadow removal based on region relighting. For each shadow region we use a trained classifier to identify a neighboring lit region of the same material. Given a pair of lit-shadow regions we perform a region relighting transformation based on histogram matching of luminance values between the shadow region and the lit region. Once a shadow is detected, we demonstrate that our shadow removal approach produces results that outperform the state of the art by evaluating our method using a publicly available benchmark dataset.", "title": "" }, { "docid": "b82f7b7a317715ba0c7ca87db92c7bf6", "text": "Regions of hypoxia in tumours can be modelled in vitro in 2D cell cultures with a hypoxic chamber or incubator in which oxygen levels can be regulated. Although this system is useful in many respects, it disregards the additional physiological gradients of the hypoxic microenvironment, which result in reduced nutrients and more acidic pH. 
Another approach to hypoxia modelling is to use three-dimensional spheroid cultures. In spheroids, the physiological gradients of the hypoxic tumour microenvironment can be inexpensively modelled and explored. In addition, spheroids offer the advantage of more representative modelling of tumour therapy responses compared with 2D culture. Here, we review the use of spheroids in hypoxia tumour biology research and highlight the different methodologies for spheroid formation and how to obtain uniformity. We explore the challenge of spheroid analyses and how to determine the effect on the hypoxic versus normoxic components of spheroids. We discuss the use of high-throughput analyses in hypoxia screening of spheroids. Furthermore, we examine the use of mathematical modelling of spheroids to understand more fully the hypoxic tumour microenvironment.", "title": "" } ]
scidocsrr
bf2e1edf4dde4e9429269f3d342102fe
False Confessions: Causes, Consequences, and Implications for Reform
[ { "docid": "da6a74341c8b12658aea2a267b7a0389", "text": "An experiment demonstrated that false incriminating evidence can lead people to accept guilt for a crime they did not commit. Subjects in a fast- or slow-paced reaction time task were accused of damaging a computer by pressing the wrong key. All were truly innocent and initially denied the charge. A confederate then said she saw the subject hit the key or did not see the subject hit the key. Compared with subjects in the slow-pace/no-witness group, those in the fast-pace/witness group were more likely to sign a confession, internalize guilt for the event, and confabulate details in memory consistent with that belief. Both legal and conceptual implications are discussed. In criminal law, confession evidence is a potent weapon for the prosecution and a recurring source of controversy. Whether a suspect's self-incriminating statement was voluntary or coerced and whether a suspect was of sound mind are just two of the issues that trial judges and juries consider on a routine basis. To guard citizens against violations of due process and to minimize the risk that the innocent would confess to crimes they did not commit, the courts have erected guidelines for the admissibility of confession evidence. Although there is no simple litmus test, confessions are typically excluded from trial if elicited by physical violence, a threat of harm or punishment, or a promise of immunity or leniency, or without the suspect being notified of his or her Miranda rights. To understand the psychology of criminal confessions, three questions need to be addressed: First, how do police interrogators elicit self-incriminating statements (i.e., what means of social influence do they use)? Second, what effects do these methods have (i.e., do innocent suspects ever confess to crimes they did not commit)? Third, when a coerced confession is retracted and later presented at trial, do juries sufficiently discount the evidence in accordance with the law? General reviews of relevant case law and research are available elsewhere (Gudjonsson, 1992; Wrightsman & Kassin, 1993). The present research addresses the first two questions. Informed by developments in case law, the police use various methods of interrogation—including the presentation of false evidence (e.g., fake polygraph, fingerprints, or other forensic test results; staged eyewitness identifications), appeals to God and religion, feigned friendship, and the use of prison informants. A number of manuals are available to advise detectives on how to extract confessions from reluctant crime suspects (Aubry & Caputo, 1965; O'Hara & O'Hara, 1981). The most popular manual is Inbau, Reid, and Buckley's (1986) Criminal Interrogation and Confessions, originally published in 1962, and now in its third edition. After advising interrogators to set aside a bare, soundproof room absent of social support and distraction, Inbau et al. (1986) describe in detail a nine-step procedure consisting of various specific ploys. In general, two types of approaches can be distinguished. One is minimization, a technique in which the detective lulls the suspect into a false sense of security by providing face-saving excuses, citing mitigating circumstances, blaming the victim, and underplaying the charges. 
The second approach is one of maximization, in which the interrogator uses scare tactics by exaggerating or falsifying the characterization of evidence, the seriousness of the offense, and the magnitude of the charges. In a recent study (Kassin & McNall, 1991), subjects read interrogation transcripts in which these ploys were used and estimated the severity of the sentence likely to be received. The results indicated that minimization communicated an implicit offer of leniency, comparable to that estimated in an explicit-promise condition, whereas maximization implied a threat of harsh punishment, comparable to that found in an explicit-threat condition. Yet although American courts routinely exclude confessions elicited by explicit threats and promises, they admit those produced by contingencies that are pragmatically implied. Although police often use coercive methods of interrogation, research suggests that juries are prone to convict defendants who confess in these situations. In the case of Arizona v. Fulminante (1991), the U.S. Supreme Court ruled that under certain conditions, an improperly admitted coerced confession may be considered upon appeal to have been nonprejudicial, or "harmless error." Yet mock-jury research shows that people find it hard to believe that anyone would confess to a crime that he or she did not commit (Kassin & Wrightsman, 1980, 1981; Sukel & Kassin, 1994). Still, it happens. One cannot estimate the prevalence of the problem, which has never been systematically examined, but there are numerous documented instances on record (Bedau & Radelet, 1987; Borchard, 1932; Rattner, 1988). Indeed, one can distinguish three types of false confession (Kassin & Wrightsman, 1985): voluntary (in which a subject confesses in the absence of external pressure), coerced-compliant (in which a suspect confesses only to escape an aversive interrogation, secure a promised benefit, or avoid a threatened harm), and coerced-internalized (in which a suspect actually comes to believe that he or she is guilty of the crime). This last type of false confession seems most unlikely, but a number of recent cases have come to light in which the police had seized a suspect who was vulnerable (by virtue of his or her youth, intelligence, personality, stress, or mental state) and used false evidence to convince the beleaguered suspect that he or she was guilty. In one case that received a great deal of attention, for example, Paul Ingram was charged with rape and a host of Satanic cult crimes that included the slaughter of newborn babies. During 6 months of interrogation, he was hypnotized.", "title": "" }, { "docid": "6103a365705a6083e40bb0ca27f6ca78", "text": "Confirmation bias, as the term is typically used in the psychological literature, connotes the seeking or interpreting of evidence in ways that are partial to existing beliefs, expectations, or a hypothesis in hand. The author reviews evidence of such a bias in a variety of guises and gives examples of its operation in several practical contexts. Possible explanations are considered, and the question of its utility or disutility is discussed.", "title": "" } ]
[ { "docid": "df2b7382996a5bedb592b26bc866fd19", "text": "BACKGROUND/AIMS\nTo investigate the possible clinical risk factors contributing to PGS after subtotal gastrectomy.\n\n\nMETHODOLOGY\nThe clinical data of 422 patients administering subtotal gastrectomy in our hospital were reviewed retrospectively from Jan, 1, 2005 to May, 1, 2012.\n\n\nRESULTS\nThe higher morbility of PGS were found in the patients whose age were over 65 years, combining with anxiety disorder or diabetes mellitus, with low-albuminemia in perioperative period or having pyloric obstruction in preoperative period, administering Billroth II gastroenterostomy, whose operation time over 4 hours, using patient-controlled analgesia, injecting liquid per day over 3500 ml.\n\n\nCONCLUSION\nThe clinical factors referred previously maybe the identified risk factors of PGS after subtotal gastrectomy, avoiding these clinical factors in perioperative period would reduce the occurrences of PGS after subtotal gastrectomy.", "title": "" }, { "docid": "1836291f68e18f8975803f6acbb302be", "text": "We review key challenges of developing spoken dialog systems that can engage in interactions with one or multiple participants in relatively unconstrained environments. We outline a set of core competencies for open-world dialog, and describe three prototype systems. The systems are built on a common underlying conversational framework which integrates an array of predictive models and component technologies, including speech recognition, head and pose tracking, probabilistic models for scene analysis, multiparty engagement and turn taking, and inferences about user goals and activities. We discuss the current models and showcase their function by means of a sample recorded interaction, and we review results from an observational study of open-world, multiparty dialog in the wild.", "title": "" }, { "docid": "1945d4663a49a5e1249e43dc7f64d15b", "text": "The current generation of adolescents grows up in a media-saturated world. However, it is unclear how media influences the maturational trajectories of brain regions involved in social interactions. Here we review the neural development in adolescence and show how neuroscience can provide a deeper understanding of developmental sensitivities related to adolescents’ media use. We argue that adolescents are highly sensitive to acceptance and rejection through social media, and that their heightened emotional sensitivity and protracted development of reflective processing and cognitive control may make them specifically reactive to emotion-arousing media. This review illustrates how neuroscience may help understand the mutual influence of media and peers on adolescents’ well-being and opinion formation. The current generation of adolescents grows up in a media-saturated world. Here, Crone and Konijn review the neural development in adolescence and show how neuroscience can provide a deeper understanding of developmental sensitivities related to adolescents’ media use.", "title": "" }, { "docid": "8b43d399ec64a1d89a62a744720f453e", "text": "Object tracking is one of the key components of the perception system of autonomous cars and ADASs. With tracking, an ego-vehicle can make a prediction about the location of surrounding objects in the next time epoch and plan for next actions. Object tracking algorithms typically rely on sensory data (from RGB cameras or LIDAR). In fact, the integration of 2D-RGB camera images and 3D-LIDAR data can provide some distinct benefits. 
This paper proposes a 3D object tracking algorithm using a 3D-LIDAR, an RGB camera and INS (GPS/IMU) sensors data by analyzing sequential 2D-RGB, 3D point-cloud, and the ego-vehicle's localization data and outputs the trajectory of the tracked object, an estimation of its current velocity, and its predicted location in the 3D world coordinate system in the next time-step. Tracking starts with a known initial 3D bounding box for the object. Two parallel mean-shift algorithms are applied for object detection and localization in the 2D image and 3D point-cloud, followed by a robust 2D/3D Kalman filter based fusion and tracking. Reported results, from both quantitative and qualitative experiments using the KITTI database demonstrate the applicability and efficiency of the proposed approach in driving environments.", "title": "" }, { "docid": "8c3a76aa28177f64e72c52df5ff4a679", "text": "Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards lowor high-order interactions, or require expertise feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both lowand highorder feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide & Deep model from Google, DeepFM has a shared input to its “wide” and “deep” parts, with no need of feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data.", "title": "" }, { "docid": "28415a26b69057231f1cd063e3dbed40", "text": "OBJECTIVE\nTo determine if ovariectomy (OVE) is a safe alternative to ovariohysterectomy (OVH) for canine gonadectomy.\n\n\nSTUDY DESIGN\nLiterature review.\n\n\nMETHODS\nAn on-line bibliographic search in MEDLINE and PubMed was performed in December 2004, covering the period 1969-2004. Relevant studies were compared and evaluated with regard to study design, surgical technique, and both short-term and long-term follow-up.\n\n\nCONCLUSIONS\nOVH is technically more complicated, time consuming, and is probably associated with greater morbidity (larger incision, more intraoperative trauma, increased discomfort) compared with OVE. No significant differences between techniques were observed for incidence of long-term urogenital problems, including endometritis/pyometra and urinary incontinence, making OVE the preferred method of gonadectomy in the healthy bitch.\n\n\nCLINICAL RELEVANCE\nCanine OVE can replace OVH as the procedure of choice for routine neutering of healthy female dogs.", "title": "" }, { "docid": "9af350b2d1e5b00df37ab8bd5b8f1f0b", "text": "Memory access latency has significant impact on application performance. Unfortunately, the random access latency of DRAM has been scaling relatively slowly, and often directly affects the critical path of execution, especially for applications with insufficient locality or memory-level parallelism. The existing low-latency DRAM organizations either incur significant area overhead or burden the software stack with non-uniform access latency. 
This paper proposes SALAD, a new DRAM device architecture that provides symmetric access l atency with asymmetric DRAM bank organizations. Since local banks have lower data transfer time due to their proximity to the I/O pads, SALAD applies high aspect-ratio (i.e., low-latency) mats only to remote banks to offset the difference in data transfer time, thus providing uniformly low access time (tAC) over the whole device. Our evaluation demonstrates that SALAD improves the IPC by 13 percent (10 percent) without any software modifications, while incurring only 6 percent (3 percent) area overhead.", "title": "" }, { "docid": "b36341d38ca1484fb1ebb15f1836fa3b", "text": "This paper addresses the important problem of discerning hateful content in social media. We propose a detection scheme that is an ensemble of Recurrent Neural Network (RNN) classifiers, and it incorporates various features associated with user-related information, such as the users’ tendency towards racism or sexism. This data is fed as input to the above classifiers along with the word frequency vectors derived from the textual content. We evaluate our approach on a publicly available corpus of 16k tweets, and the results demonstrate its effectiveness in comparison to existing state-of-the-art solutions. More specifically, our scheme can successfully distinguish racism and sexism messages from normal text, and achieve higher classification quality than current state-of-the-art algorithms.", "title": "" }, { "docid": "72138b8acfb7c9e11cfd92c0b78a737c", "text": "We study the task of entity linking for tweets, which tries to associate each mention in a tweet with a knowledge base entry. Two main challenges of this task are the dearth of information in a single tweet and the rich entity mention variations. To address these challenges, we propose a collective inference method that simultaneously resolves a set of mentions. Particularly, our model integrates three kinds of similarities, i.e., mention-entry similarity, entry-entry similarity, and mention-mention similarity, to enrich the context for entity linking, and to address irregular mentions that are not covered by the entity-variation dictionary. We evaluate our method on a publicly available data set and demonstrate the effectiveness of our method.", "title": "" }, { "docid": "a786837b12c07039d4eca34c02e5c7d6", "text": "The wafer level package (WLP) is a cost-effective solution for electronic package, and it has been increasingly applied during recent years. In this study, a new packaging technology which retains the advantages of WLP, the panel level package (PLP) technology, is proposed to further obtain the capability of signals fan-out for the fine-pitched integrated circuit (IC). In the PLP, the filler material is selected to fill the trench around the chip and provide a smooth surface for the redistribution lines. Therefore, the solder bumps could be located on both the filler and the chip surface, and the pitch of the chip side is fanned-out. In our previous research, it was found that the lifetime of solder joints in PLP can easily pass 3,500 cycles. The outstanding performance is explained by the application of a soft filler and a lamination material. However, it is also learned that the deformation of the lamination material during thermal loading may affect the reliability of the adjacent metal trace. In this study, the material effects of the proposed PLP technology are investigated and discussed through finite element analysis (FEA). 
A factorial analysis with three levels and three factors (the chip carrier, the lamination, and the filler material) is performed to obtain sensitivity information. Based on the results, the suggested combinations of packaging material in the PLP are provided. The reliability of the metal trace can be effectively improved by means of wisely applying materials in the PLP, and therefore, the PLP technology is expected to have a high potential for various applications in the near future.", "title": "" }, { "docid": "c5c205c8a1fdd6f6def3e28b6477ecec", "text": "The growth and popularity of Internet applications has reinforced the need for effective information filtering techniques. The collaborative filtering approach is now a popular choice and has been implemented in many on-line systems. While many researchers have proposed and compared the performance of various collaborative filtering algorithms, one important performance measure has been omitted from the research to date that is the robustness of the algorithm. In essence, robustness measures the power of the algorithm to make good predictions in the presence of noisy data. In this paper, we argue that robustness is an important system characteristic, and that it must be considered from the point-of-view of potential attacks that could be made on a system by malicious users. We propose a definition for system robustness, and identify system characteristics that influence robustness. Several attack strategies are described in detail, and experimental results are presented for the scenarios outlined.", "title": "" }, { "docid": "57d3505a655e9c0efdc32101fd09b192", "text": "POX is a Python based open source OpenFlow/Software Defined Networking (SDN) Controller. POX is used for faster development and prototyping of new network applications. POX controller comes pre installed with the mininet virtual machine. Using POX controller you can turn dumb openflow devices into hub, switch, load balancer, firewall devices. The POX controller allows easy way to run OpenFlow/SDN experiments. POX can be passed different parameters according to real or experimental topologies, thus allowing you to run experiments on real hardware, testbeds or in mininet emulator. In this paper, first section will contain introduction about POX, OpenFlow and SDN, then discussion about relationship between POX and Mininet. Final Sections will be regarding creating and verifying behavior of network applications in POX.", "title": "" }, { "docid": "7d03c3e0e20b825809bebb5b2da1baed", "text": "Flexoelectricity and the concomitant emergence of electromechanical size-effects at the nanoscale have been recently exploited to propose tantalizing concepts such as the creation of “apparently piezoelectric” materials without piezoelectric materials, e.g. graphene, emergence of “giant” piezoelectricity at the nanoscale, enhanced energy harvesting, among others. The aforementioned developments pertain primarily to hard ceramic crystals. In this work, we develop a nonlinear theoretical framework for flexoelectricity in soft materials. Using the concept of soft electret materials, we illustrate an interesting nonlinear interplay between the so-called Maxwell stress effect and flexoelectricity, and propose the design of a novel class of apparently piezoelectric materials whose constituents are intrinsically non-piezoelectric. 
In particular, we show that the electret-Maxwell stress based mechanism can be combined with flexoelectricity to achieve unprecedentedly high values of electromechanical coupling. Flexoelectricity is also important for a special class of soft materials: biological membranes. In this context, flexoelectricity manifests itself as the development of polarization upon changes in curvature. Flexoelectricity is found to be important in a number of biological functions including hearing, ion transport and in some situations where mechanotransduction is necessary. In this work, we present a simple linearized theory of flexoelectricity in biological membranes and some illustrative examples. & 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "438093b14f983499ada7ce392ba27664", "text": "The spline under tension was introduced by Schweikert in an attempt to imitate cubic splines but avoid the spurious critical points they induce. The defining equations are presented here, together with an efficient method for determining the necessary parameters and computing the resultant spline. The standard scalar-valued curve fitting problem is discussed, as well as the fitting of open and closed curves in the plane. The use of these curves and the importance of the tension in the fitting of contour lines are mentioned as application.", "title": "" }, { "docid": "e5ecbd3728e93badd4cfbf5eef6957f9", "text": "Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major critical challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as phase images of the cytoplasms of individual bacterial and mammalian cells from phase contrast images without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that require less curation time, are generalizable to a multiplicity of cell types, from bacteria to mammalian cells, and expand live-cell imaging capabilities to include multi-cell type systems.", "title": "" }, { "docid": "1b5a5a9c08cb3054d1201dae0d1aca95", "text": "The exponential increase of the traffic volume makes Distributed Denial-of-Service (DDoS) attacks a top security threat to service providers. Existing DDoS defense mechanisms lack resources and flexibility to cope with attacks by themselves, and by utilizing other’s companies resources, the burden of the mitigation can be shared. 
Technologies as blockchain and smart contracts allow distributing attack information across multiple domains, while SDN (Software-Defined Networking) and NFV (Network Function Virtualization) enables to scale defense capabilities on demand for a single network domain. This proposal presents the design of a novel architecture combining these elements and introducing novel opportunities for flexible and efficient DDoS mitigation solutions across multiple domains.", "title": "" }, { "docid": "924146534d348e7a44970b1d78c97e9c", "text": "Little is known of the extent to which heterosexual couples are satisfied with their current frequency of sex and the degree to which this predicts overall sexual and relationship satisfaction. A population-based survey of 4,290 men and 4,366 women was conducted among Australians aged 16 to 64 years from a range of sociodemographic backgrounds, of whom 3,240 men and 3,304 women were in regular heterosexual relationships. Only 46% of men and 58% of women were satisfied with their current frequency of sex. Dissatisfied men were overwhelmingly likely to desire sex more frequently; among dissatisfied women, only two thirds wanted sex more frequently. Age was a significant factor but only for men, with those aged 35-44 years tending to be least satisfied. Men and women who were dissatisfied with their frequency of sex were also more likely to express overall lower sexual and relationship satisfaction. The authors' findings not only highlight desired frequency of sex as a major factor in satisfaction, but also reveal important gender and other sociodemographic differences that need to be taken into account by researchers and therapists seeking to understand and improve sexual and relationship satisfaction among heterosexual couples. Other issues such as length of time spent having sex and practices engaged in may also be relevant, particularly for women.", "title": "" }, { "docid": "6347b642cec08bf062f6e5594f805bd3", "text": "Using a multimethod approach, the authors conducted 4 studies to test life span hypotheses about goal orientations across adulthood. Confirming expectations, in Studies 1 and 2 younger adults reported a primary growth orientation in their goals, whereas older adults reported a stronger orientation toward maintenance and loss prevention. Orientation toward prevention of loss correlated negatively with well-being in younger adults. In older adults, orientation toward maintenance was positively associated with well-being. Studies 3 and 4 extend findings of a self-reported shift in goal orientation to the level of behavioral choice involving cognitive and physical fitness goals. Studies 3 and 4 also examine the role of expected resource demands. The shift in goal orientation is discussed as an adaptive mechanism to manage changing opportunities and constraints across adulthood.", "title": "" }, { "docid": "7593c8e9eb1520f65d7780efbbcedd7d", "text": "We show how to achieve better illumination estimates for color constancy by combining the results of several existing algorithms. We consider committee methods based on both linear and non–linear ways of combining the illumination estimates from the original set of color constancy algorithms. Committees of grayworld, white patch and neural net methods are tested. 
The committee results are always more accurate than the estimates of any of the other algorithms taken in isolation.", "title": "" }, { "docid": "eebcb9e0e2f08d91174b8476e580e8b6", "text": "Plants are recognized in the pharmaceutical industry for their broad structural diversity as well as their wide range of pharmacological activities. The biologically active compounds present in plants are called phytochemicals. These phytochemicals are derived from various parts of plants such as leaves, flowers, seeds, barks, roots and pulps. These phytochemicals are used as sources of direct medicinal agents. They serve as a raw material base for elaboration of more complex semi-synthetic chemical compounds. This paper mainly deals with the collection of plants, the extraction of active compounds from the various parts of plants, qualitative and quantitative analysis of the phytochemicals.", "title": "" } ]
scidocsrr
8ad9b655796db2d971c252034babffb7
Table Detection in Noisy Off-line Handwritten Documents
[ { "docid": "823c0e181286d917a610f90d1c9db0c3", "text": "Table characteristics vary widely. Consequently, a great variety of computational approaches have been applied to table recognition. In this survey, the table recognition literature is presented as an interaction of table models, observations, transformations and inferences. A table model defines the physical and logical structure of tables; the model is used to detect tables, and to analyze and decompose the detected tables. Observations perform feature measurements and data lookup, transformations alter or restructure data, and inferences generate and test hypotheses. This presentation clarifies the decisions that are made by a table recognizer, and the assumptions and inferencing techniques that underlie these decisions.", "title": "" }, { "docid": "0343f1a0be08ff53e148ef2eb22aaf14", "text": "Tables are a ubiquitous form of communication. While everyone seems to know what a table is, a precise, analytical definition of “tabularity” remains elusive because some bureaucratic forms, multicolumn text layouts, and schematic drawings share many characteristics of tables. There are significant differences between typeset tables, electronic files designed for display of tables, and tables in symbolic form intended for information retrieval. Most past research has addressed the extraction of low-level geometric information from raster images of tables scanned from printed documents, although there is growing interest in the processing of tables in electronic form as well. Recent research on table composition and table analysis has improved our understanding of the distinction between the logical and physical structures of tables, and has led to improved formalisms for modeling tables. This review, which is structured in terms of generalized paradigms for table processing, indicates that progress on half-a-dozen specific research issues would open the door to using existing paper and electronic tables for database update, tabular browsing, structured information retrieval through graphical and audio interfaces, multimedia table editing, and platform-independent display.", "title": "" } ]
[ { "docid": "b215d3604e19c7023049c082b10d7aac", "text": "In this paper, we discuss how we can extend probabilistic topic models to analyze the relationship graph of popular social-network data, so that we can group or label the edges and nodes in the graph based on their topic similarity. In particular, we first apply the well-known Latent Dirichlet Allocation (LDA) model and its existing variants to the graph-labeling task and argue that the existing models do not handle popular nodes (nodes with many incoming edges) in the graph very well. We then propose possible extensions to this model to deal with popular nodes. Our experiments show that the proposed extensions are very effective in labeling popular nodes, showing significant improvements over the existing methods. Our proposed methods can be used for providing, for instance, more relevant friend recommendations within a social network.", "title": "" }, { "docid": "20cfcfde25db033db8d54fe7ae6fcca1", "text": "We present the first study that evaluates both speaker and listener identification for direct speech in literary texts. Our approach consists of two steps: identification of speakers and listeners near the quotes, and dialogue chain segmentation. Evaluation results show that this approach outperforms a rule-based approach that is stateof-the-art on a corpus of literary texts.", "title": "" }, { "docid": "e2c239bed763d13117e943ef988827f1", "text": "This paper presents a comprehensive review of 196 studies which employ operational research (O.R.) and artificial intelligence (A.I.) techniques in the assessment of bank performance. Several key issues in the literature are highlighted. The paper also points to a number of directions for future research. We first discuss numerous applications of data envelopment analysis which is the most widely applied O.R. technique in the field. Then we discuss applications of other techniques such as neural networks, support vector machines, and multicriteria decision aid that have also been used in recent years, in bank failure prediction studies and the assessment of bank creditworthiness and underperformance. 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "8b34b86cb1ce892a496740bfbff0f9c5", "text": "Common subexpression elimination is commonly employed to reduce the number of operations in DSP algorithms after decomposing constant multiplications into shifts and additions. Conventional optimization techniques for finding common subexpressions can optimize constant multiplications with only a single variable at a time, and hence cannot fully optimize the computations with multiple variables found in matrix form of linear systems like DCT, DFT etc. We transform these computations such that all common subexpressions involving any number of variables can be detected. We then present heuristic algorithms to select the best set of common subexpressions. Experimental results show the superiority of our technique over conventional techniques for common subexpression elimination.", "title": "" }, { "docid": "c61efe1758f6599e5cc069185bb02d48", "text": "Modeling the face aging process is a challenging task due to large and non-linear variations present in different stages of face development. This paper presents a deep model approach for face age progression that can efficiently capture the non-linear aging process and automatically synthesize a series of age-progressed faces in various age ranges. 
In this approach, we first decompose the long-term age progress into a sequence of short-term changes and model it as a face sequence. The Temporal Deep Restricted Boltzmann Machines based age progression model together with the prototype faces are then constructed to learn the aging transformation between faces in the sequence. In addition, to enhance the wrinkles of faces in the later age ranges, the wrinkle models are further constructed using Restricted Boltzmann Machines to capture their variations in different facial regions. The geometry constraints are also taken into account in the last step for more consistent age-progressed results. The proposed approach is evaluated using various face aging databases, i.e. FGNET, Cross-Age Celebrity Dataset (CACD) and MORPH, and our collected large-scale aging database named AginG Faces in the Wild (AGFW). In addition, when ground-truth age is not available for input image, our proposed system is able to automatically estimate the age of the input face before aging process is employed.", "title": "" }, { "docid": "f3727bfc3965bcb49d8897f144ac13a3", "text": "Presenteeism refers to attending work while ill. Although it is a subject of intense interest to scholars in occupational medicine, relatively few organizational scholars are familiar with the concept. This article traces the development of interest in presenteeism, considers its various conceptualizations, and explains how presenteeism is typically measured. Organizational and occupational correlates of attending work when ill are reviewed, as are medical correlates of resulting productivity loss. It is argued that presenteeism has important implications for organizational theory and practice, and a research agenda for organizational scholars is presented. Copyright # 2009 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "b14010454fe4b9f9712c13cbf9a5e23b", "text": "In this paper we propose an approach to Part of Speech (PoS) tagging using a combination of Hidden Markov Model and error driven learning. For the NLPAI joint task, we also implement a chunker using Conditional Random Fields (CRFs). The results for the PoS tagging and chunking task are separately reported along with the results of the joint task.", "title": "" }, { "docid": "8bf0d60fcda4aea9f905b8df6ddc5d65", "text": "We present kinematics, actuation, detailed design, characterization results and initial user evaluations of AssistOn-Knee, a novel self-aligning active exoskeleton for robot-assisted knee rehabilitation. AssistOn-Knee can, not only assist flexion/extension movements of the knee joint but also accommodate its translational movements in the sagittal plane. Automatically aligning its joint axes, AssistOn-Knee enables an ideal match between human knee axis and the exoskeleton axis, guaranteeing ergonomy and comfort throughout the therapy. Self-aligning feature significantly shortens the setup time required to attach the patient to the exoskeleton, allowing more effective time spent on exercises. The proposed exoskeleton actively controls the rotational degree of freedom of the knee through a Bowden cable-driven series elastic actuator, while the translational movements of the knee joints are passively accommodated through use of a 3 degrees of freedom planar parallel mechanism. AssistOn-Knee possesses a lightweight and compact design with significantly low apparent inertia, thanks to its Bowden cable based transmission that allows remote location of the actuator and reduction unit. 
Furthermore, thanks to its series-elastic actuation, AssistOn-Knee enables high-fidelity force control and active backdrive-ability within its control bandwidth, while featuring passive elasticity for excitations above this bandwidth, ensuring safety and robustness throughout the whole frequency spectrum.", "title": "" }, { "docid": "177f95dc300186f519bd3ac48081a6e0", "text": "TAI's multi-sensor fusion technology is accelerating the development of accurate MEMS sensor-based inertial navigation in situations where GPS does not operate reliably (GPS-denied environments). TAI has demonstrated that one inertial device per axis is not sufficient to produce low drift errors for long term accuracy needed for GPS-denied applications. TAI's technology uses arrays of off-the-shelf MEMS inertial sensors to create an inertial measurement unit (IMU) suitable for inertial navigation systems (INS) that require only occasional GPS updates. Compared to fiber optics gyros, properly combined MEMS gyro arrays are lower cost, fit into smaller volume, use less power and have equal or better performance. The patents TAI holds address this development for both gyro and accelerometer arrays. Existing inertial measurement units based on such array combinations, the backbone of TAI's inertial navigation system (INS) design, have demonstrated approximately 100 times lower sensor drift error to support very accurate angular rates, very accurate position measurements, and very low angle error for long durations. TAI's newest, fourth generation, product occupies small volume, has low weight, and consumes little power. The complete assembly can be potted in a protective sheath to form a rugged standalone product. An external exoskeleton case protects the electronic assembly for munitions and UAV applications. TAI's IMU/INS will provide the user with accurate real-time navigation information in difficult situations where GPS is not reliable. The key to such accurate performance is to achieve low sensor drift errors. The INS responds to quick movements without introducing delays while sharply reducing sensor drift errors that result in significant navigation errors. Discussed in the paper are physical characteristics of the IMU, an overview of the system design, TAI's systematic approach to drift reduction and some early results of applying a sigma point Kalman filter to sustain low gyro drift.", "title": "" }, { "docid": "ba2a9451fa1f794c7a819acaa9bc5d82", "text": "In this paper we briefly address DLR’s (German Aerospace Center) background in space robotics by hand of corresponding milestone projects including systems on the International Space Station. We then discuss the key technologies needed for the development of an artificial “robonaut” generation with mechatronic ultra-lightweight arms and multifingered hands. The third arm generation is nearly finished now, approaching the limits of what is technologically achievable today with respect to light weight and power losses. In a similar way DLR’s second generation of artificial four-fingered hands was a big step towards higher reliability, manipulability and overall", "title": "" }, { "docid": "e95336e305ac921c01198554da91dcdb", "text": "We consider the problem of staffing call-centers with multip le customer classes and agent types operating under quality-of-service (QoS) constraints and demand rate uncertainty. 
We introduce a formulation of the staffing problem that requires that the QoS constraints are met with high probability with respect to the uncertainty in the demand rate. We contrast this chance-constrained formulation with the average-performance constraints that have been used so far in the literature. We then propose a two-step solution for the staffing problem under chance constraints. In the first step, we introduce a Random Static Planning Problem (RSPP) and discuss how it can be solved using two different methods. The RSPP provides us with a first-order (or fluid) approximation for the true optimal staffing levels and a staffing frontier. In the second step, we solve a finite number of staffing problems with known arrival rates–the arrival rates on the optimal staffing frontier. Hence, our formulation and solution approach has the important property that it translates the problem with uncertain demand rates to one with known arrival rates. The output of our procedure is a solution that is feasible with respect to the chance constraint and nearly optimal for large call centers.", "title": "" }, { "docid": "15dfa65d40eb6cd60c3df952a7b864c4", "text": "The lack of theoretical progress in the IS field may be surprising. From an empirical viewpoint, the IS field resembles other management fields. Specifically, as fields of inquiry develop, their theories are often placed on a hierarchy from ad hoc classification systems (in which categories are used to summarize empirical observations), to taxonomies (in which the relationships between the categories can be described), to conceptual frameworks (in which propositions summarize explanations and predictions), to theoretical systems (in which laws are contained within axiomatic or formal theories) (Parsons and Shils 1962). In its short history, IS research has developed from classification systems to conceptual frameworks. In the 1970s, it was considered pre-paradigmatic. Today, it is approaching the level of development in empirical research of other management fields, like organizational behavior (Webster 2001). However, unlike other fields that have journals devoted to review articles (e.g., the Academy of Management Review), we see few review articles in IS—and hence the creation of MISQ Review as a device for accelerating development of the discipline.", "title": "" }, { "docid": "9db83d9bb1acfa49e7546a8976893180", "text": "Private query processing on encrypted databases allows users to obtain data from encrypted databases in such a way that the user’s sensitive data will be protected from exposure. Given an encrypted database, the users typically submit queries similar to the following examples: – How many employees in an organization make over $100,000? – What is the average age of factory workers suffering from leukemia? Answering the above questions requires one to search and then compute over the encrypted databases in sequence. In the case of privately processing queries with only one of these operations, many efficient solutions have been developed using a special-purpose encryption scheme (e.g., searchable encryption). In this paper, we are interested in efficiently processing queries that need to perform both operations on fully encrypted databases. One immediate solution is to use several special-purpose encryption schemes at the same time, but this approach is associated with a high computational cost for maintaining multiple encryption contexts.
The other solution is to use a privacy homomorphism (or fully homomorphic encryption) scheme. However, no secure solutions have been developed that meet the efficiency requirements. In this work, we construct a unified framework so as to efficiently and privately process queries with “search” and “compute” operations. To this end, the first part of our work involves devising some underlying circuits as primitives for queries on encrypted data. Second, we apply two optimization techniques to improve the efficiency of the circuit primitives. One technique is to exploit SIMD techniques to accelerate their basic operations. In contrast to general SIMD approaches, our SIMD implementation can be applied even when one basic operation is executed. The other technique is to take a large integer ring (e.g., Z_{2^t}) as a message space instead of a binary field. Even for an integer of k bits with k > t, addition can be performed with degree 1 circuits with lazy carry operations. As a result, search queries including a conjunctive or disjunctive query on encrypted databases of N tuples with μ-bit attributes require O(N log μ) homomorphic operations with depth O(log μ) circuits. Search-and-compute queries, such as a conjunctive query with aggregate functions in the same conditions, are processed using O(μN) homomorphic operations with at most depth O(log μ log N) circuits. Further, we can process search-and-compute queries using only O(N log μ) homomorphic operations with depth O(log μ) circuits, even in the large domain. Finally, we present various experiments by varying the parameters, such as the query type and the number of tuples.", "title": "" }, { "docid": "7c5f1b12f540c8320587ead7ed863ee5", "text": "This paper studies the non-fragile mixed H∞ and passive synchronization problem for Markov jump neural networks. The randomly occurring controller gain fluctuation phenomenon is investigated for non-fragile strategy. Moreover, the mixed time-varying delays composed of discrete and distributed delays are considered. By employing stochastic stability theory, synchronization criteria are developed for the Markov jump neural networks. On the basis of the derived criteria, the non-fragile synchronization controller is designed. Finally, an illustrative example is presented to demonstrate the validity of the control approach.", "title": "" }, { "docid": "1165be411612c7d6c09ec0408ffdeaad", "text": "OBJECTIVES\nTo describe and compare 20 m shuttle run test (20mSRT) performance among children and youth across 50 countries; to explore broad socioeconomic indicators that correlate with 20mSRT performance in children and youth across countries and to evaluate the utility of the 20mSRT as an international population health indicator for children and youth.\n\n\nMETHODS\nA systematic review was undertaken to identify papers that explicitly reported descriptive 20mSRT (with 1-min stages) data on apparently healthy 9-17 year-olds. Descriptive data were standardised to running speed (km/h) at the last completed stage. Country-specific 20mSRT performance indices were calculated as population-weighted mean z-scores relative to all children of the same age and sex from all countries.
Countries were categorised into developed and developing groups based on the Human Development Index, and a correlational analysis was performed to describe the association between country-specific performance indices and broad socioeconomic indicators using Spearman's rank correlation coefficient.\n\n\nRESULTS\nPerformance indices were calculated for 50 countries using collated data on 1 142 026 children and youth aged 9-17 years. The best performing countries were from Africa and Central-Northern Europe. Countries from South America were consistently among the worst performing countries. Country-specific income inequality (Gini index) was a strong negative correlate of the performance index across all 50 countries.\n\n\nCONCLUSIONS\nThe pattern of variability in the performance index broadly supports the theory of a physical activity transition and income inequality as the strongest structural determinant of health in children and youth. This simple and cost-effective assessment would be a powerful tool for international population health surveillance.", "title": "" }, { "docid": "7539af35786fba888fa3a7cafa5db0b0", "text": "Multi-view stereo algorithms typically rely on same-exposure images as inputs due to the brightness constancy assumption. While state-of-the-art depth results are excellent, they do not produce high-dynamic range textures required for high-quality view reconstruction. In this paper, we propose a technique that adapts multi-view stereo for different exposure inputs to simultaneously recover reliable dense depth and high dynamic range textures. In our technique, we use an exposure-invariant similarity statistic to establish correspondences, through which we robustly extract the camera radiometric response function and the image exposures. This enables us to then convert all images to radiance space and selectively use the radiance data for dense depth and high dynamic range texture recovery. We show results for synthetic and real scenes.", "title": "" }, { "docid": "8b5d7965ac154da1266874027f0b10a0", "text": "Matching pedestrians across disjoint camera views, known as person re-identification (re-id), is a challenging problem that is of importance to visual recognition and surveillance. Most existing methods exploit local regions within spatial manipulation to perform matching in local correspondence. However, they essentially extract fixed representations from pre-divided regions for each image and perform matching based on the extracted representation subsequently. For models in this pipeline, local finer patterns that are crucial to distinguish positive pairs from negative ones cannot be captured, and thus making them underperformed. In this paper, we propose a novel deep multiplicative integration gating function, which answers the question of what-and-where to match for effective person re-id. To address what to match, our deep network emphasizes common local patterns by learning joint representations in a multiplicative way. The network comprises two Convolutional Neural Networks (CNNs) to extract convolutional activations, and generates relevant descriptors for pedestrian matching. This thus, leads to flexible representations for pair-wise images. 
To address where to match, we combat the spatial misalignment by performing spatially recurrent pooling via a four-directional recurrent neural network to impose spatial dependency over all positions with respect to the entire image. The proposed network is designed to be end-to-end trainable to characterize local pairwise feature interactions in a spatially aligned manner. To demonstrate the superiority of our method, extensive experiments are conducted over three benchmark data sets: VIPeR, CUHK03 and Market-1501.", "title": "" } ]
scidocsrr
b5c64b6ec84258c7052997aaa8bd071f
DECISION BOUNDARY ANALYSIS OF ADVERSARIAL EXAMPLES
[ { "docid": "8c0c2d5abd8b6e62f3184985e8e01d66", "text": "Neural networks are known to be vulnerable to adversarial examples: inputs that are close to natural inputs but classified incorrectly. In order to better understand the space of adversarial examples, we survey ten recent proposals that are designed for detection and compare their efficacy. We show that all can be defeated by constructing new loss functions. We conclude that adversarial examples are significantly harder to detect than previously appreciated, and the properties believed to be intrinsic to adversarial examples are in fact not. Finally, we propose several simple guidelines for evaluating future proposed defenses.", "title": "" }, { "docid": "17611b0521b69ad2b22eeadc10d6d793", "text": "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%.In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.", "title": "" }, { "docid": "0a8c009d1bccbaa078f95cc601010af3", "text": "Deep neural networks (DNNs) have transformed several artificial intelligence research areas including computer vision, speech recognition, and natural language processing. However, recent studies demonstrated that DNNs are vulnerable to adversarial manipulations at testing time. Specifically, suppose we have a testing example, whose label can be correctly predicted by a DNN classifier. An attacker can add a small carefully crafted noise to the testing example such that the DNN classifier predicts an incorrect label, where the crafted testing example is called adversarial example. Such attacks are called evasion attacks. Evasion attacks are one of the biggest challenges for deploying DNNs in safety and security critical applications such as self-driving cars.\n In this work, we develop new DNNs that are robust to state-of-the-art evasion attacks. Our key observation is that adversarial examples are close to the classification boundary. Therefore, we propose region-based classification to be robust to adversarial examples. Specifically, for a benign/adversarial testing example, we ensemble information in a hypercube centered at the example to predict its label. In contrast, traditional classifiers are point-based classification, i.e., given a testing example, the classifier predicts its label based on the testing example alone. 
Our evaluation results on MNIST and CIFAR-10 datasets demonstrate that our region-based classification can significantly mitigate evasion attacks without sacrificing classification accuracy on benign examples. Specifically, our region-based classification achieves the same classification accuracy on testing benign examples as point-based classification, but our region-based classification is significantly more robust than point-based classification to state-of-the-art evasion attacks.", "title": "" }, { "docid": "b12bae586bc49a12cebf11cca49c0386", "text": "Deep neural networks (DNNs) are powerful nonlinear architectures that are known to be robust to random perturbations of the input. However, these models are vulnerable to adversarial perturbations—small input changes crafted explicitly to fool the model. In this paper, we ask whether a DNN can distinguish adversarial samples from their normal and noisy counterparts. We investigate model confidence on adversarial samples by looking at Bayesian uncertainty estimates, available in dropout neural networks, and by performing density estimation in the subspace of deep features learned by the model. The result is a method for implicit adversarial detection that is oblivious to the attack algorithm. We evaluate this method on a variety of standard datasets including MNIST and CIFAR-10 and show that it generalizes well across different architectures and attacks. Our findings report that 85-93% ROC-AUC can be achieved on a number of standard classification tasks with a negative class that consists of both normal and noisy samples.", "title": "" } ]
[ { "docid": "7cc41229d0368f702a4dde3ccf597604", "text": "State Machines", "title": "" }, { "docid": "59f3c511765c52702b9047a688256532", "text": "Mobile robots are dependent upon a model of the environment for many of their basic functions. Locally accurate maps are critical to collision avoidance, while large-scale maps (accurate both metrically and topologically) are necessary for efficient route planning. Solutions to these problems have immediate and important applications to autonomous vehicles, precision surveying, and domestic robots. Building accurate maps can be cast as an optimization problem: find the map that is most probable given the set of observations of the environment. However, the problem rapidly becomes difficult when dealing with large maps or large numbers of observations. Sensor noise and non-linearities make the problem even more difficult— especially when using inexpensive (and therefore preferable) sensors. This thesis describes an optimization algorithm that can rapidly estimate the maximum likelihood map given a set of observations. The algorithm, which iteratively reduces map error by considering a single observation at a time, scales well to large environments with many observations. The approach is particularly robust to noise and non-linearities, quickly escaping local minima that trap current methods. Both batch and online versions of the algorithm are described. In order to build a map, however, a robot must first be able to recognize places that it has previously seen. Limitations in sensor processing algorithms, coupled with environmental ambiguity, make this difficult. Incorrect place recognitions can rapidly lead to divergence of the map. This thesis describes a place recognition algorithm that can robustly handle ambiguous data. We evaluate these algorithms on a number of challenging datasets and provide quantitative comparisons to other state-of-the-art methods, illustrating the advantages of our methods.", "title": "" }, { "docid": "2746d538694db54381639e5e5acdb4ca", "text": "In the present research, the aqueous stability of leuprolide acetate (LA) in phosphate buffered saline (PBS) medium was studied (pH = 2.0-7.4). For this purpose, the effect of temperature, dissolved oxygen and pH on the stability of LA during 35 days was investigated. Results showed that the aqueous stability of LA was higher at low temperatures. Degassing of the PBS medium partially increased the stability of LA at 4 °C, while did not change at 37 °C. The degradation of LA was accelerated at lower pH values. In addition, complexes of LA with different portions of β-cyclodextrin (β-CD) were prepared through freeze-drying procedure and characterized by Fourier transform infrared (FTIR) and differential scanning calorimetry (DSC) analyses. Studying their aqueous stability at various pH values (2.0-7.4) showed LA/β-CD complexes exhibited higher stability when compared with LA at all pH values. The stability of complexes was also improved by increasing the portion of LA/β-CD up to 1/10.", "title": "" }, { "docid": "27c56cabe2742fbe69154e63073e193e", "text": "Developing a good model for oscillometric blood-pressure measurements is a hard task. This is mainly due to the fact that the systolic and diastolic pressures cannot be directly measured by noninvasive automatic oscillometric blood-pressure meters (NIBP) but need to be computed based on some kind of algorithm. 
This is in strong contrast with the classical Korotkoff method, where the diastolic and systolic blood pressures can be directly measured by a sphygmomanometer. Although an NIBP returns results similar to the Korotkoff method for patients with normal blood pressures, a big discrepancy exist between both methods for severe hyper- and hypotension. For these severe cases, a statistical model is needed to compensate or calibrate the oscillometric blood-pressure meters. Although different statistical models have been already studied, no immediate calibration method has been proposed. The reason is that the step from a model, describing the measurements, to a calibration, correcting the blood-pressure meters, is a rather large leap. In this paper, we study a “databased” Fourier series approach to model the oscillometric waveform and use the Windkessel model for the blood flow to correct the oscillometric blood-pressure meters. The method is validated on a measurement campaign consisting of healthy patients and patients suffering from either hyper- or hypotension.", "title": "" }, { "docid": "c51e7c171de42ed19f69c6ccf893ec52", "text": "The fibroblast growth factor signaling pathway (FGFR signaling) is an evolutionary conserved signaling cascade that regulates several basic biologic processes, including tissue development, angiogenesis, and tissue regeneration. Substantial evidence indicates that aberrant FGFR signaling is involved in the pathogenesis of cancer. Recent developments of deep sequencing technologies have allowed the discovery of frequent molecular alterations in components of FGFR signaling among several solid tumor types. Moreover, compelling preclinical models have demonstrated the oncogenic potential of these aberrations in driving tumor growth, promoting angiogenesis, and conferring resistance mechanisms to anticancer therapies. Recently, the field of FGFR targeting has exponentially progressed thanks to the development of novel agents inhibiting FGFs or FGFRs, which had manageable safety profiles in early-phase trials. Promising treatment efficacy has been observed in different types of malignancies, particularly in tumors harboring aberrant FGFR signaling, thus offering novel therapeutic opportunities in the era of precision medicine. The most exciting challenges now focus on selecting patients who are most likely to benefit from these agents, increasing the efficacy of therapies with the development of novel potent compounds and combination strategies, and overcoming toxicities associatedwith FGFR inhibitors. After examination of the basic and translational research studies that validated the oncogenic potential of aberrant FGFR signaling, this review focuses on recent data from clinical trials evaluating FGFR targeting therapies and discusses the challenges and perspectives for the development of these agents. Clin Cancer Res; 21(12); 2684–94. 2015 AACR. Disclosure of Potential Conflicts of Interest F. Andr e is a consultant/advisory board member for Novartis. J.-C. Soria is a consultant/advisory board member for AstraZeneca, Clovis Oncology, EOS, Johnson & Johnson, and Servier. No potential conflicts of interest were disclosed by the other authors. Editor's Disclosures The following editor(s) reported relevant financial relationships: S.E. 
Bates reports receiving a commercial research grant from Celgene", "title": "" }, { "docid": "3bd2941e72695b3214247c8c7071410b", "text": "The paper contributes to the emerging literature linking sustainability as a concept to problems researched in HRM literature. Sustainability is often equated with social responsibility. However, emphasizing mainly moral or ethical values neglects that sustainability can also be economically rational. This conceptual paper discusses how the notion of sustainability has developed and emerged in HRM literature. A typology of sustainability concepts in HRM is presented to advance theorizing in the field of Sustainable HRM. The concepts of paradox, duality, and dilemma are reviewed to contribute to understanding the emergence of sustainability in HRM. It is argued in this paper that sustainability can be applied as a concept to cope with the tensions of shortvs. long-term HRM and to make sense of paradoxes, dualities, and dilemmas. Furthermore, it is emphasized that the dualities cannot be reconciled when sustainability is interpreted in a way that leads to ignorance of one of the values or logics. Implications for further research and modest suggestions for managerial practice are derived.", "title": "" }, { "docid": "863c806d29c15dd9b9160eae25316dfc", "text": "This paper presents new structural statistical matrices which are gray level size zone matrix (SZM) texture descriptor variants. The SZM is based on the cooccurrences of size/intensity of each flat zone (connected pixels with the same gray level). The first improvement increases the information processed by merging multiple gray-level quantizations and reduces the required parameter numbers. New improved descriptors were especially designed for supervised cell texture classification. They are illustrated thanks to two different databases built from quantitative cell biology. The second alternative characterizes the DNA organization during the mitosis, according to zone intensities radial distribution. The third variant is a matrix structure generalization for the fibrous texture analysis, by changing the intensity/size pair into the length/orientation pair of each region.", "title": "" }, { "docid": "e59f3f8e0deea8b4caa32b54049ad76b", "text": "We present AD, a new algorithm for approximate maximum a posteriori (MAP) inference on factor graphs, based on the alternating directions method of multipliers. Like other dual decomposition algorithms, AD has a modular architecture, where local subproblems are solved independently, and their solutions are gathered to compute a global update. The key characteristic of AD is that each local subproblem has a quadratic regularizer, leading to faster convergence, both theoretically and in practice. We provide closed-form solutions for these AD subproblems for binary pairwise factors and factors imposing first-order logic constraints. For arbitrary factors (large or combinatorial), we introduce an active set method which requires only an oracle for computing a local MAP configuration, making AD applicable to a wide range of problems. Experiments on synthetic and real-world problems show that AD compares favorably with the state-of-the-art.", "title": "" }, { "docid": "c4feca5e27cfecdd2913e18cc7b7a21a", "text": "one component of intelligent transportation systems, IV systems use sensing and intelligent algorithms to understand the vehicle’s immediate environment, either assisting the driver or fully controlling the vehicle. 
Following the success of information-oriented systems, IV systems will likely be the “next wave” for ITS, functioning at the control layer to enable the driver–vehicle “subsystem” to operate more effectively. This column provides a broad overview of applications and selected activities in this field. IV application areas", "title": "" }, { "docid": "610ec093f08d62548925918d6e64b923", "text": "Word embeddings encode semantic meanings of words into low-dimension word vectors. In most word embeddings, one cannot interpret the meanings of specific dimensions of those word vectors. Nonnegative matrix factorization (NMF) has been proposed to learn interpretable word embeddings via non-negative constraints. However, NMF methods suffer from scale and memory issue because they have to maintain a global matrix for learning. To alleviate this challenge, we propose online learning of interpretable word embeddings from streaming text data. Experiments show that our model consistently outperforms the state-of-the-art word embedding methods in both representation ability and interpretability. The source code of this paper can be obtained from http: //github.com/skTim/OIWE.", "title": "" }, { "docid": "669b4b1574c22a0c18dd1dc107bc54a1", "text": "T lymphocytes respond to foreign antigens both by producing protein effector molecules known as lymphokines and by multiplying. Complete activation requires two signaling events, one through the antigen-specific receptor and one through the receptor for a costimulatory molecule. In the absence of the latter signal, the T cell makes only a partial response and, more importantly, enters an unresponsive state known as clonal anergy in which the T cell is incapable of producing its own growth hormone, interleukin-2, on restimulation. Our current understanding at the molecular level of this modulatory process and its relevance to T cell tolerance are reviewed.", "title": "" }, { "docid": "ae3a54128bb29272e5cb3552236b6f12", "text": "Traditionally, human facial expressions have been studied using either 2D static images or 2D video sequences. The 2D-based analysis is incapable of handing large pose variations. Although 3D modeling techniques have been extensively used for 3D face recognition and 3D face animation, barely any research on 3D facial expression recognition using 3D range data has been reported. A primary factor for preventing such research is the lack of a publicly available 3D facial expression database. In this paper, we present a newly developed 3D facial expression database, which includes both prototypical 3D facial expression shapes and 2D facial textures of 2,500 models from 100 subjects. This is the first attempt at making a 3D facial expression database available for the research community, with the ultimate goal of fostering the research on affective computing and increasing the general understanding of facial behavior and the fine 3D structure inherent in human facial expressions. The new database can be a valuable resource for algorithm assessment, comparison and evaluation", "title": "" }, { "docid": "a1a4b028fba02904333140e6791709bb", "text": "Cross-site scripting (also referred to as XSS) is a vulnerability that allows an attacker to send malicious code (usually in the form of JavaScript) to another user. XSS is one of the top 10 vulnerabilities on Web application. 
While a traditional cross-site scripting vulnerability exploits server-side codes, DOM-based XSS is a type of vulnerability which affects the script code being executed in the clients browser. DOM-based XSS vulnerabilities are much harder to be detected than classic XSS vulnerabilities because they reside on the script codes from Web sites. An automated scanner needs to be able to execute the script code without errors and to monitor the execution of this code to detect such vulnerabilities. In this paper, we introduce a distributed scanning tool for crawling modern Web applications on a large scale and detecting, validating DOMbased XSS vulnerabilities. Very few Web vulnerability scanners can really accomplish this.", "title": "" }, { "docid": "8e648261dc529f8e28ce3b2a40d9f0b0", "text": "C 34 35 36 37 38 39 40 41 42 43 44 Article history: Received 21 July 2006 Received in revised form 25 June 2007 Accepted 27 July 2007 Available online xxxx", "title": "" }, { "docid": "1f972cc136f47288888657e84464412e", "text": "This paper evaluates the impact of machine translation on the software localization process and the daily work of professional translators when SMT is applied to low-resourced languages with rich morphology. Translation from English into six low-resourced languages (Czech, Estonian, Hungarian, Latvian, Lithuanian and Polish) from different language groups are examined. Quality, usability and applicability of SMT for professional translation were evaluated. The building of domain and project tailored SMT systems for localization purposes was evaluated in two setups. The results of the first evaluation were used to improve SMT systems and MT platform. The second evaluation analysed a more complex situation considering tag translation and its effects on the translator’s productivity.", "title": "" }, { "docid": "ffcd59b9cf48f61ad0278effa6c167dd", "text": "The first of this two-part series on critical illness in pregnancy dealt with obstetric disorders. In Part II, medical conditions that commonly affect pregnant women or worsen during pregnancy are discussed. ARDS occurs more frequently in pregnancy. Strategies commonly used in nonpregnant patients, including permissive hypercapnia, limits for plateau pressure, and prone positioning, may not be acceptable, especially in late pregnancy. Genital tract infections unique to pregnancy include chorioamnionitis, group A streptococcal infection causing toxic shock syndrome, and polymicrobial infection with streptococci, staphylococci, and Clostridium perfringens causing necrotizing vulvitis or fasciitis. Pregnancy predisposes to VTE; D-dimer levels have low specificity in pregnancy. A ventilation-perfusion scan is preferred over CT pulmonary angiography in some situations to reduce radiation to the mother's breasts. Low-molecular-weight or unfractionated heparins form the mainstay of treatment; vitamin K antagonists, oral factor Xa inhibitors, and direct thrombin inhibitors are not recommended in pregnancy. The physiologic hyperdynamic circulation in pregnancy worsens many cardiovascular disorders. It increases risk of pulmonary edema or arrhythmias in mitral stenosis, heart failure in pulmonary hypertension or aortic stenosis, aortic dissection in Marfan syndrome, or valve thrombosis in mechanical heart valves. Common neurologic problems in pregnancy include seizures, altered mental status, visual symptoms, and strokes. 
Other common conditions discussed are aspiration of gastric contents, OSA, thyroid disorders, diabetic ketoacidosis, and cardiopulmonary arrest in pregnancy. Studies confined to pregnant women are available for only a few of these conditions. We have, therefore, reviewed pregnancy-specific adjustments in the management of these disorders.", "title": "" }, { "docid": "f84011e3b4c8b1e80d4e79dee3ccad53", "text": "What is the future of fashion? Tackling this question from a data-driven vision perspective, we propose to forecast visual style trends before they occur. We introduce the first approach to predict the future popularity of styles discovered from fashion images in an unsupervised manner. Using these styles as a basis, we train a forecasting model to represent their trends over time. The resulting model can hypothesize new mixtures of styles that will become popular in the future, discover style dynamics (trendy vs. classic), and name the key visual attributes that will dominate tomorrow’s fashion. We demonstrate our idea applied to three datasets encapsulating 80,000 fashion products sold across six years on Amazon. Results indicate that fashion forecasting benefits greatly from visual analysis, much more than textual or meta-data cues surrounding products.", "title": "" }, { "docid": "0ee435f59529fa0e1c5a01d3488aa6ed", "text": "The additivity of wavelet subband quantization distortions was investigated in an unmasked detection task and in masked detection and discrimination tasks. Contrast thresholds were measured for both simple targets (artifacts induced by uniform quantization of individual discrete wavelet transform subbands) and compound targets (artifacts induced by uniform quantization of pairs of discrete wavelet transform subbands) in the presence of no mask and eight different natural image maskers. The results were used to assess summation between wavelet subband quantization distortions on orientation and spatial-frequency dimensions. In the unmasked detection experiment, subthreshold quantization distortions pooled in a non-linear fashion and the amount of summation agreed with those of previous summation-atthreshold experiments (ß=2.43; relative sensitivity=1.33). In the masked detection and discrimination experiments, suprathreshold quantization distortions pooled in a linear fashion. Summation increased as the distortions became increasingly suprathreshold but quickly settled to near-linear values. Summation on the spatial-frequency dimension was greater than summation on the orientation dimension for all suprathreshold contrasts. A high degree of uncertainty imposed by the natural image maskers precludes quantifying an absolute measure of summation.", "title": "" }, { "docid": "268ab0ae541eb2555a464af0e8ab58c5", "text": "Melanocytes are melanin-producing cells found in skin, hair follicles, eyes, inner ear, bones, heart and brain of humans. They arise from pluripotent neural crest cells and differentiate in response to a complex network of interacting regulatory pathways. Melanins are pigment molecules that are endogenously synthesized by melanocytes. The light absorption of melanin in skin and hair leads to photoreceptor shielding, thermoregulation, photoprotection, camouflage and display coloring. Melanins are also powerful cation chelators and may act as free radical sinks. Melanin formation is a product of complex biochemical events that starts from amino acid tyrosine and its metabolite, dopa. 
The types and amounts of melanin produced by melanocytes are determined genetically and are influenced by a variety of extrinsic and intrinsic factors such as hormonal changes, inflammation, age and exposure to UV light. These stimuli affect the different pathways in melanogenesis. In this review we will discuss the regulatory mechanisms involved in melanogenesis and explain how intrinsic and extrinsic factors regulate melanin production. We will also explain the regulatory roles of different proteins involved in melanogenesis.", "title": "" }, { "docid": "dd05688335b4240bbc40919870e30f39", "text": "In this tool report, we present an overview of the Watson system, a Semantic Web search engine providing various functionalities not only to find and locate ontologies and semantic data online, but also to explore the content of these semantic documents. Beyond the simple facade of a search engine for the Semantic Web, we show that the availability of such a component brings new possibilities in terms of developing semantic applications that exploit the content of the Semantic Web. Indeed, Watson provides a set of APIs containing high level functions for finding, exploring and querying semantic data and ontologies that have been published online. Thanks to these APIs, new applications have emerged that connect activities such as ontology construction, matching, sense disambiguation and question answering to the Semantic Web, developed by our group and others. In addition, we also describe Watson as an unprecedented research platform for the study of the Semantic Web, and of formalised knowledge in general.", "title": "" } ]
scidocsrr
09ada2c726f12a28265f15a68d1a9f85
Spatiotemporal social media analytics for abnormal event detection and examination using seasonal-trend decomposition
[ { "docid": "d3d471b6b377d8958886a2f6c89d5061", "text": "In common Web-based search interfaces, it can be difficult to formulate queries that simultaneously combine temporal, spatial, and topical data filters. We investigate how coordinated visualizations can enhance search and exploration of information on the World Wide Web by easing the formulation of these types of queries. Drawing from visual information seeking and exploratory search, we introduce VisGets - interactive query visualizations of Web-based information that operate with online information within a Web browser. VisGets provide the information seeker with visual overviews of Web resources and offer a way to visually filter the data. Our goal is to facilitate the construction of dynamic search queries that combine filters from more than one data dimension. We present a prototype information exploration system featuring three linked VisGets (temporal, spatial, and topical), and used it to visually explore news items from online RSS feeds.", "title": "" } ]
[ { "docid": "b389cf1f4274b250039414101cf0cc98", "text": "We present a framework for analyzing the structure of digital media streams. Though our methods work for video, text, and audio, we concentrate on detecting the structure of digital music files. In the first step, spectral data is used to construct a similarity matrix calculated from inter-frame spectral similarity. The digital audio can be robustly segmented by correlating a kernel along the diagonal of the similarity matrix. Once segmented, spectral statistics of each segment are computed. In the second step, segments are clustered based on the selfsimilarity of their statistics. This reveals the structure of the digital music in a set of segment boundaries and labels. Finally, the music can be summarized by selecting clusters with repeated segments throughout the piece. The summaries can be customized for various applications based on the structure of the original music.", "title": "" }, { "docid": "425ee6d9de68116692d1e449f7be639b", "text": "Copy-move forgery is one of the most common types of image forgeries, where a region from one part of an image is copied and pasted onto another part, thereby concealing the image content in the latter region. Keypoint based copy-move forgery detection approaches extract image feature points and use local visual features, rather than image blocks, to identify duplicated regions. Keypoint based approaches exhibit remarkable performance with respect to computational cost, memory requirement, and robustness. But unfortunately, they usually do not work well if smooth background areas are used to hide small objects, as image keypoints cannot be extracted effectively from those areas. It is a challenging work to design a keypoint-based method for detecting forgeries involving small smooth regions. In this paper, we propose a new keypoint-based copy-move forgery detection for small smooth regions. Firstly, the original tampered image is segmented into nonoverlapping and irregular superpixels, and the superpixels are classified into smooth, texture and strong texture based on local information entropy. Secondly, the stable image keypoints are extracted from each superpixel, including smooth, texture and strong texture ones, by utilizing the superpixel content based adaptive feature points detector. Thirdly, the local visual features, namely exponent moments magnitudes, are constructed for each image keypoint, and the best bin first and reversed generalized 2 nearest-neighbor algorithm are utilized to find rapidly the matching image keypoints. Finally, the falsely matched image keypoints are removed by customizing the random sample consensus, and the duplicated regions are localized by using zero mean normalized cross-correlation measure. Extensive experimental results show that the newly proposed scheme can achieve much better detection results for copy-move forgery images under various challenging conditions, such as geometric transforms, JPEG compression, and additive white Gaussian noise, compared with the existing state-of-the-art copy-move forgery detection methods.", "title": "" }, { "docid": "5706ae68d5e2b56679e0c89361fcc8b8", "text": "Quantum computers promise to exceed the computational efficiency of ordinary classical machines because quantum algorithms allow the execution of certain tasks in fewer steps. But practical implementation of these machines poses a formidable challenge. Here I present a scheme for implementing a quantum-mechanical computer. 
Information is encoded onto the nuclear spins of donor atoms in doped silicon electronic devices. Logical operations on individual spins are performed using externally applied electric fields, and spin measurements are made using currents of spin-polarized electrons. The realization of such a computer is dependent on future refinements of conventional silicon electronics.", "title": "" }, { "docid": "5562bb6fdc8864a23e7ec7992c7bb023", "text": "Bacteria are known to communicate primarily via secreted extracellular factors. Here we identify a previously uncharacterized type of bacterial communication mediated by nanotubes that bridge neighboring cells. Using Bacillus subtilis as a model organism, we visualized transfer of cytoplasmic fluorescent molecules between adjacent cells. Additionally, by coculturing strains harboring different antibiotic resistance genes, we demonstrated that molecular exchange enables cells to transiently acquire nonhereditary resistance. Furthermore, nonconjugative plasmids could be transferred from one cell to another, thereby conferring hereditary features to recipient cells. Electron microscopy revealed the existence of variously sized tubular extensions bridging neighboring cells, serving as a route for exchange of intracellular molecules. These nanotubes also formed in an interspecies manner, between B. subtilis and Staphylococcus aureus, and even between B. subtilis and the evolutionary distant bacterium Escherichia coli. We propose that nanotubes represent a major form of bacterial communication in nature, providing a network for exchange of cellular molecules within and between species.", "title": "" }, { "docid": "2ae773f548c1727a53a7eb43550d8063", "text": "Today's Internet hosts are threatened by large-scale distributed denial-of-service (DDoS) attacks. The path identification (Pi) DDoS defense scheme has recently been proposed as a deterministic packet marking scheme that allows a DDoS victim to filter out attack packets on a per packet basis with high accuracy after only a few attack packets are received (Yaar , 2003). In this paper, we propose the StackPi marking, a new packet marking scheme based on Pi, and new filtering mechanisms. The StackPi marking scheme consists of two new marking methods that substantially improve Pi's incremental deployment performance: Stack-based marking and write-ahead marking. Our scheme almost completely eliminates the effect of a few legacy routers on a path, and performs 2-4 times better than the original Pi scheme in a sparse deployment of Pi-enabled routers. For the filtering mechanism, we derive an optimal threshold strategy for filtering with the Pi marking. We also develop a new filter, the PiIP filter, which can be used to detect Internet protocol (IP) spoofing attacks with just a single attack packet. Finally, we discuss in detail StackPi's compatibility with IP fragmentation, applicability in an IPv6 environment, and several other important issues relating to potential deployment of StackPi", "title": "" }, { "docid": "0dd9fc4317dc99a2ca55a822cfc5c36e", "text": "Recently, research has shown that it is possible to spoof a variety of fingerprint scanners using some simple techniques with molds made from plastic, clay, Play-Doh, silicone or gelatin materials. To protect against spoofing, methods of liveness detection measure physiological signs of life from fingerprints ensuring only live fingers are captured for enrollment or authentication. 
In this paper, a new liveness detection method is proposed which is based on noise analysis along the valleys in the ridge-valley structure of fingerprint images. Unlike live fingers which have a clear ridge-valley structure, artificial fingers have a distinct noise distribution due to the material’s properties when placed on a fingerprint scanner. Statistical features are extracted in multiresolution scales using wavelet decomposition technique. Based on these features, liveness separation (live/non-live) is performed using classification trees and neural networks. We test this method on the dataset which contains about 58 live, 80 spoof (50 made from Play-Doh and 30 made from gelatin), and 25 cadaver subjects for 3 different scanners. Also, we test this method on a second dataset which contains 28 live and 28 spoof (made from silicone) subjects. Results show that we can get approximately 90.9-100% classification of spoof and live fingerprints. The proposed liveness detection method is purely software based and application of this method can provide anti-spoofing protection for fingerprint scanners.", "title": "" }, { "docid": "d0bacaa267599486356c175ca5419ede", "text": "As P4 and its associated compilers move beyond relative immaturity, there is a need for common evaluation criteria. In this paper, we propose Whippersnapper, a set of benchmarks for P4. Rather than simply selecting a set of representative data-plane programs, the benchmark is designed from first principles, identifying and exploring key features and metrics. We believe the benchmark will not only provide a vehicle for comparing implementations and designs, but will also generate discussion within the larger community about the requirements for data-plane languages.", "title": "" }, { "docid": "307dac4f0cc964a539160780abb1c123", "text": "One of the main current applications of intelligent systems is recommender systems (RS). RS can help users to find relevant items in huge information spaces in a personalized way. Several techniques have been investigated for the development of RS. One of them is evolutionary computational (EC) techniques, which is an emerging trend with various application areas. The increasing interest in using EC for web personalization, information retrieval and RS fostered the publication of survey papers on the subject. However, these surveys have analyzed only a small number of publications, around ten. This study provides a comprehensive review of more than 65 research publications focusing on five aspects we consider relevant for such: the recommendation technique used, the datasets and the evaluation methods adopted in their experimental parts, the baselines employed in the experimental comparison of proposed approaches and the reproducibility of the reported experiments. At the end of this review, we discuss negative and positive aspects of these papers, as well as point out opportunities, challenges and possible future research directions. To the best of our knowledge, this review is the most comprehensive review of various approaches using EC in RS. Thus, we believe this review will be a relevant material for researchers interested in EC and RS.", "title": "" }, { "docid": "f1bd28aba519845b3a6ea8ef92695e79", "text": "Web 2.0 communities are a quite recent phenomenon which involve large numbers of users and where communication between members is carried out in real time. 
Despite those good characteristics, there is still a necessity of developing tools to help users to reach decisions with a high level of consensus in those new virtual environments. In this contribution a new consensus reaching model is presented which uses linguistic preferences and is designed to minimize the main problems that this kind of organization", "title": "" }, { "docid": "42c0f8504f26d46a4cc92d3c19eb900d", "text": "Research into suicide prevention has been hampered by methodological limitations such as low sample size and recall bias. Recently, Natural Language Processing (NLP) strategies have been used with Electronic Health Records to increase information extraction from free text notes as well as structured fields concerning suicidality and this allows access to much larger cohorts than previously possible. This paper presents two novel NLP approaches – a rule-based approach to classify the presence of suicide ideation and a hybrid machine learning and rule-based approach to identify suicide attempts in a psychiatric clinical database. Good performance of the two classifiers in the evaluation study suggest they can be used to accurately detect mentions of suicide ideation and attempt within free-text documents in this psychiatric database. The novelty of the two approaches lies in the malleability of each classifier if a need to refine performance, or meet alternate classification requirements arises. The algorithms can also be adapted to fit infrastructures of other clinical datasets given sufficient clinical recording practice knowledge, without dependency on medical codes or additional data extraction of known risk factors to predict suicidal behaviour.", "title": "" }, { "docid": "1b347401820c826db444cc3580bde210", "text": "Utilization of Natural Fibers in Plastic Composites: Problems and Opportunities Roger M. Rowell, Anand R. Sanadi, Daniel F. Caulfield and Rodney E. Jacobson Forest Products Laboratory, USDA, One Gifford Pinchot Drive, Madison, WI 53705 Department of Forestry, 1630 Linden Drive, University of Wisconsin, WI 53706 recycled. Results suggest that agro-based fibers are a viable alternative to inorganic/material based reinforcing fibers in commodity fiber-thermoplastic composite materials as long as the right processing conditions are used and for applications where higher water absorption may be so critical. These renewable fibers have low densities and high specific properties and their non-abrasive nature permits a high volume of filling in the composite. Kenaf fibers, for example, have excellent specific properties and have potential to be outstanding reinforcing fillers in plastics. In our experiments, several types of natural fibers were blended with polypropylene (PP) and then injection molded, with the fiber weight fractions varying to 60%. A compatibilizer or a coupling agent was used to improve the interaction and adhesion between the non-polar matrix and the polar lignocellulosic fibers. The specific tensile and flexural moduli of a 50% by weight (39% by volume) of kenaf-PP composites compares favorably with 40% by weight of glass fiber (19% by volume)-PP injection molded composites.
Furthermore, prelimimary results sugget that natural fiber-PP composites can be regrounded and", "title": "" }, { "docid": "198ad1ba78ac0aa315dac6f5730b4f88", "text": "Life history theory posits that behavioral adaptation to various environmental (ecological and/or social) conditions encountered during childhood is regulated by a wide variety of different traits resulting in various behavioral strategies. Unpredictable and harsh conditions tend to produce fast life history strategies, characterized by early maturation, a higher number of sexual partners to whom one is less attached, and less parenting of offspring. Unpredictability and harshness not only affects dispositional social and emotional functioning, but may also promote the development of personality traits linked to higher rates of instability in social relationships or more self-interested behavior. Similarly, detrimental childhood experiences, such as poor parental care or high parent-child conflict, affect personality development and may create a more distrustful, malicious interpersonal style. The aim of this brief review is to survey and summarize findings on the impact of negative early-life experiences on the development of personality and fast life history strategies. By demonstrating that there are parallels in adaptations to adversity in these two domains, we hope to lend weight to current and future attempts to provide a comprehensive insight of personality traits and functions at the ultimate and proximate levels.", "title": "" }, { "docid": "e5f4b8d4e02f68c90fe4b18dfed2719e", "text": "The evolution of modern electronic devices is outpacing the scalability and effectiveness of the tools used to analyze digital evidence recovered from them. Indeed, current digital forensic techniques and tools are unable to handle large datasets in an efficient manner. As a result, the time and effort required to conduct digital forensic investigations are increasing. This paper describes a promising digital forensic visualization framework that displays digital evidence in a simple and intuitive manner, enhancing decision making and facilitating the explanation of phenomena in evidentiary data.", "title": "" }, { "docid": "a692778b7f619de5ad4bc3b2d627c265", "text": "Many students are being left behind by an educational system that some people believe is in crisis. Improving educational outcomes will require efforts on many fronts, but a central premise of this monograph is that one part of a solution involves helping students to better regulate their learning through the use of effective learning techniques. Fortunately, cognitive and educational psychologists have been developing and evaluating easy-to-use learning techniques that could help students achieve their learning goals. In this monograph, we discuss 10 learning techniques in detail and offer recommendations about their relative utility. We selected techniques that were expected to be relatively easy to use and hence could be adopted by many students. Also, some techniques (e.g., highlighting and rereading) were selected because students report relying heavily on them, which makes it especially important to examine how well they work. The techniques include elaborative interrogation, self-explanation, summarization, highlighting (or underlining), the keyword mnemonic, imagery use for text learning, rereading, practice testing, distributed practice, and interleaved practice. 
To offer recommendations about the relative utility of these techniques, we evaluated whether their benefits generalize across four categories of variables: learning conditions, student characteristics, materials, and criterion tasks. Learning conditions include aspects of the learning environment in which the technique is implemented, such as whether a student studies alone or with a group. Student characteristics include variables such as age, ability, and level of prior knowledge. Materials vary from simple concepts to mathematical problems to complicated science texts. Criterion tasks include different outcome measures that are relevant to student achievement, such as those tapping memory, problem solving, and comprehension. We attempted to provide thorough reviews for each technique, so this monograph is rather lengthy. However, we also wrote the monograph in a modular fashion, so it is easy to use. In particular, each review is divided into the following sections: General description of the technique and why it should work How general are the effects of this technique?  2a. Learning conditions  2b. Student characteristics  2c. Materials  2d. Criterion tasks Effects in representative educational contexts Issues for implementation Overall assessment The review for each technique can be read independently of the others, and particular variables of interest can be easily compared across techniques. To foreshadow our final recommendations, the techniques vary widely with respect to their generalizability and promise for improving student learning. Practice testing and distributed practice received high utility assessments because they benefit learners of different ages and abilities and have been shown to boost students' performance across many criterion tasks and even in educational contexts. Elaborative interrogation, self-explanation, and interleaved practice received moderate utility assessments. The benefits of these techniques do generalize across some variables, yet despite their promise, they fell short of a high utility assessment because the evidence for their efficacy is limited. For instance, elaborative interrogation and self-explanation have not been adequately evaluated in educational contexts, and the benefits of interleaving have just begun to be systematically explored, so the ultimate effectiveness of these techniques is currently unknown. Nevertheless, the techniques that received moderate-utility ratings show enough promise for us to recommend their use in appropriate situations, which we describe in detail within the review of each technique. Five techniques received a low utility assessment: summarization, highlighting, the keyword mnemonic, imagery use for text learning, and rereading. These techniques were rated as low utility for numerous reasons. Summarization and imagery use for text learning have been shown to help some students on some criterion tasks, yet the conditions under which these techniques produce benefits are limited, and much research is still needed to fully explore their overall effectiveness. The keyword mnemonic is difficult to implement in some contexts, and it appears to benefit students for a limited number of materials and for short retention intervals. Most students report rereading and highlighting, yet these techniques do not consistently boost students' performance, so other techniques should be used in their place (e.g., practice testing instead of rereading). 
Our hope is that this monograph will foster improvements in student learning, not only by showcasing which learning techniques are likely to have the most generalizable effects but also by encouraging researchers to continue investigating the most promising techniques. Accordingly, in our closing remarks, we discuss some issues for how these techniques could be implemented by teachers and students, and we highlight directions for future research.", "title": "" }, { "docid": "f6266e5c4adb4fa24cc353dccccaf6db", "text": "Clustering plays an important role in many large-scale data analyses providing users with an overall understanding of their data. Nonetheless, clustering is not an easy task due to noisy features and outliers existing in the data, and thus the clustering results obtained from automatic algorithms often do not make clear sense. To remedy this problem, automatic clustering should be complemented with interactive visualization strategies. This paper proposes an interactive visual analytics system for document clustering, called iVisClustering, based on a widelyused topic modeling method, latent Dirichlet allocation (LDA). iVisClustering provides a summary of each cluster in terms of its most representative keywords and visualizes soft clustering results in parallel coordinates. The main view of the system provides a 2D plot that visualizes cluster similarities and the relation among data items with a graph-based representation. iVisClustering provides several other views, which contain useful interaction methods. With help of these visualization modules, we can interactively refine the clustering results in various ways.", "title": "" }, { "docid": "79a02a35c02858a6510fc92b9eadde4e", "text": "Distributed word representations have been demonstrated to be effective in capturing semantic and syntactic regularities. Unsupervised representation learning from large unlabeled corpora can learn similar representations for those words that present similar cooccurrence statistics. Besides local occurrence statistics, global topical information is also important knowledge that may help discriminate a word from another. In this paper, we incorporate category information of documents in the learning of word representations and to learn the proposed models in a documentwise manner. Our models outperform several state-of-the-art models in word analogy and word similarity tasks. Moreover, we evaluate the learned word vectors on sentiment analysis and text classification tasks, which shows the superiority of our learned word vectors. We also learn high-quality category embeddings that reflect topical meanings.", "title": "" }, { "docid": "75b6168dd008fd1d30851d3cf24d7679", "text": "We introduce Deep Linear Discriminant Analysis (DeepLDA) which learns linearly separable latent representations in an end-to-end fashion. Classic LDA extracts features which preserve class separability and is used for dimensionality reduction for many classification problems. The central idea of this paper is to put LDA on top of a deep neural network. This can be seen as a non-linear extension of classic LDA. Instead of maximizing the likelihood of target labels for individual samples, we propose an objective function that pushes the network to produce feature distributions which: (a) have low variance within the same class and (b) high variance between different classes. Our objective is derived from the general LDA eigenvalue problem and still allows to train with stochastic gradient descent and back-propagation. 
For evaluation we test our approach on three different benchmark datasets (MNIST, CIFAR-10 and STL-10). DeepLDA produces competitive results on MNIST and CIFAR-10 and outperforms a network trained with categorical cross entropy (same architecture) on a supervised setting of STL-10.", "title": "" }, { "docid": "c0bf378bd6c763b83249163733c21f07", "text": "Although videos appear to be very high-dimensional in terms of duration × frame-rate × resolution, temporal smoothness constraints ensure that the intrinsic dimensionality for videos is much lower. In this paper, we use this idea for investigating Domain Adaptation (DA) in videos, an area that remains under-explored. An approach that has worked well for the image DA is based on the subspace modeling of the source and target domains, which works under the assumption that the two domains share a latent subspace where the domain shift can be reduced or eliminated. In this paper, first we extend three subspace based image DA techniques for human action recognition and then combine it with our proposed Eclectic Domain Mixing (EDM) approach to improve the effectiveness of the DA. Further, we use discrepancy measures such as Symmetrized KL Divergence and Target Density Around Source for empirical study of the proposed EDM approach. While, this work mainly focuses on Domain Adaptation in videos, for completeness of the study, we comprehensively evaluate our approach using both object and action datasets. In this paper, we have achieved consistent improvements over chosen baselines and obtained some state-of-the-art results for the datasets.", "title": "" }, { "docid": "76f11326d1a2573aae8925d63a10a1f9", "text": "It has been widely claimed that attention and awareness are doubly dissociable and that there is no causal relation between them. In support of this view are numerous claims of attention without awareness, and awareness without attention. Although there is evidence that attention can operate on or be drawn to unconscious stimuli, various recent findings demonstrate that there is no empirical support for awareness without attention. To properly test for awareness without attention, we propose that a stimulus be studied using a battery of tests based on diverse, mainstream paradigms from the current attention literature. When this type of analysis is performed, the evidence is fully consistent with a model in which attention is necessary, but not sufficient, for awareness.", "title": "" }, { "docid": "99a728e8b9a351734db9b850fe79bd61", "text": "Predicting anchor links across social networks has important implications to an array of applications, including cross-network information diffusion and cross-domain recommendation. One challenging problem is: whether and to what extent we can address the anchor link prediction problem, if only structural information of networks is available. Most existing methods, unsupervised or supervised, directly work on networks themselves rather than on their intrinsic structural regularities, and thus their effectiveness is sensitive to the high dimension and sparsity of networks. To offer a robust method, we propose a novel supervised model, called PALE, which employs network embedding with awareness of observed anchor links as supervised information to capture the major and specific structural regularities and further learns a stable cross-network mapping for predicting anchor links. 
Through extensive experiments on two realistic datasets, we demonstrate that PALE significantly outperforms the state-of-the-art methods.", "title": "" } ]
scidocsrr
f7aa9fe40d401b8e23e6d58dde8991f4
Music Similarity Measures: What's the use?
[ { "docid": "59b928fab5d53519a0a020b7461690cf", "text": "Musical genres are categorical descriptions that are used to describe music. They are commonly used to structure the increasing amounts of music available in digital form on the Web and are important for music information retrieval. Genre categorization for audio has traditionally been performed manually. A particular musical genre is characterized by statistical properties related to the instrumentation, rhythmic structure and form of its members. In this work, algorithms for the automatic genre categorization of audio signals are described. More specifically, we propose a set of features for representing texture and instrumentation. In addition a novel set of features for representing rhythmic structure and strength is proposed. The performance of those feature sets has been evaluated by training statistical pattern recognition classifiers using real world audio collections. Based on the automatic hierarchical genre classification two graphical user interfaces for browsing and interacting with large audio collections have been developed.", "title": "" } ]
[ { "docid": "6d70ac4457983c7df8896a9d31728015", "text": "This brief presents a differential transmit-receive (T/R) switch integrated in a 0.18-mum standard CMOS technology for wireless applications up to 6 GHz. This switch design employs fully differential architecture to accommodate the design challenge of differential transceivers and improve the linearity performance. It exhibits less than 2-dB insertion loss, higher than 15-dB isolation, in a 60 mumtimes40 mum area. 15-dBm power at 1-dB compression point (P1dB) is achieved without using additional techniques to enhance the linearity. This switch is suitable for differential transceiver front-ends with a moderate power level. To the best of the authors' knowledge, this is the first reported differential T/R switch in CMOS for multistandard and wideband wireless applications", "title": "" }, { "docid": "c0ddc4b83145a1ee7b252d65066b8969", "text": "Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Combining such an embedding model with logic rules has recently attracted increasing attention. Most previous attempts made a one-time injection of logic rules, ignoring the interactive nature between embedding learning and logical inference. And they focused only on hard rules, which always hold with no exception and usually require extensive manual effort to create or validate. In this paper, we propose Rule-Guided Embedding (RUGE), a novel paradigm of KG embedding with iterative guidance from soft rules. RUGE enables an embedding model to learn simultaneously from 1) labeled triples that have been directly observed in a given KG, 2) unlabeled triples whose labels are going to be predicted iteratively, and 3) soft rules with various confidence levels extracted automatically from the KG. In the learning process, RUGE iteratively queries rules to obtain soft labels for unlabeled triples, and integrates such newly labeled triples to update the embedding model. Through this iterative procedure, knowledge embodied in logic rules may be better transferred into the learned embeddings. We evaluate RUGE in link prediction on Freebase and YAGO. Experimental results show that: 1) with rule knowledge injected iteratively, RUGE achieves significant and consistent improvements over state-of-the-art baselines; and 2) despite their uncertainties, automatically extracted soft rules are highly beneficial to KG embedding, even those with moderate confidence levels. The code and data used for this paper can be obtained from https://github.com/iieir-km/RUGE.", "title": "" }, { "docid": "2793e8eb1410b2379a8a416f0560df0a", "text": "Alzheimer’s disease (AD) transgenic mice have been used as a standard AD model for basic mechanistic studies and drug discovery. These mouse models showed symbolic AD pathologies including β-amyloid (Aβ) plaques, gliosis and memory deficits but failed to fully recapitulate AD pathogenic cascades including robust phospho tau (p-tau) accumulation, clear neurofibrillary tangles (NFTs) and neurodegeneration, solely driven by familial AD (FAD) mutation(s). Recent advances in human stem cell and three-dimensional (3D) culture technologies made it possible to generate novel 3D neural cell culture models that recapitulate AD pathologies including robust Aβ deposition and Aβ-driven NFT-like tau pathology. These new 3D human cell culture models of AD hold a promise for a novel platform that can be used for mechanism studies in human brain-like environment and high-throughput drug screening (HTS). 
In this review, we will summarize the current progress in recapitulating AD pathogenic cascades in human neural cell culture models using AD patient-derived induced pluripotent stem cells (iPSCs) or genetically modified human stem cell lines. We will also explain how new 3D culture technologies were applied to accelerate Aβ and p-tau pathologies in human neural cell cultures, as compared to the standard two-dimensional (2D) culture conditions. Finally, we will discuss the potential impact of the human 3D neural cell culture models on the AD drug-development process. These revolutionary 3D culture models of AD will contribute to accelerating the discovery of novel AD drugs.", "title": "" }, { "docid": "c43b77b56a6e2cb16a6b85815449529d", "text": "We propose a new method for clustering multivariate time series. A univariate time series can be represented by a fixed-length vector whose components are statistical features of the time series, capturing the global structure. These descriptive vectors, one for each component of the multivariate time series, are concatenated, before being clustered using a standard fast clustering algorithm such as k-means or hierarchical clustering. Such statistical feature extraction also serves as a dimension-reduction procedure for multivariate time series. We demonstrate the effectiveness and simplicity of our proposed method by clustering human motion sequences: dynamic and high-dimensional multivariate time series. The proposed method based on univariate time series structure and statistical metrics provides a novel, yet simple and flexible way to cluster multivariate time series data efficiently with promising accuracy. The success of our method on the case study suggests that clustering may be a valuable addition to the tools available for human motion pattern recognition research.", "title": "" }, { "docid": "4a5131ec6e40545765e400d738441376", "text": "Experiments have been performed to investigate the operating modes of a generator of 2×500-ps bipolar high-voltage, nanosecond pulses with the double amplitude (270 kV) close to that of the charge pulse of the RADAN-303 nanosecond driver. The generator contains an additional peaker shortening the risetime of the starting pulse and a pulse-forming line with two untriggered gas gaps operating with a total jitter of 200 ps.", "title": "" }, { "docid": "ae7117416b4a07d2b15668c2c8ac46e3", "text": "We present OntoWiki, a tool providing support for agile, distributed knowledge engineering scenarios. OntoWiki facilitates the visual presentation of a knowledge base as an information map, with different views on instance data. It enables intuitive authoring of semantic content, with an inline editing mode for editing RDF content, similar to WYSIWYG for text documents. It fosters social collaboration aspects by keeping track of changes, allowing comments and discussion on every single part of a knowledge base, enabling users to rate and measure the popularity of content, and honoring the activity of users. OntoWiki enhances browsing and retrieval by offering semantically enhanced search strategies. All these techniques are applied with the ultimate goal of decreasing the entrance barrier for projects and domain experts to collaborate using semantic technologies. In the spirit of the Web 2.0, OntoWiki implements an “architecture of participation” that allows users to add value to the application as they use it. 
It is available as open-source software and a demonstration platform can be accessed at http://3ba.se.", "title": "" }, { "docid": "d95ae6900ae353fa0ed32167e0c23f16", "text": "As well known, fully convolutional network (FCN) becomes the state of the art for semantic segmentation in deep learning. Currently, new hardware designs for deep learning have focused on improving the speed and parallelism of processing units. This motivates memristive solutions, in which the memory units (i.e., memristors) have computing capabilities. However, designing a memristive deep learning network is challenging, since memristors work very differently from the traditional CMOS hardware. This paper proposes a complete solution to implement memristive FCN (MFCN). Voltage selectors are firstly utilized to realize max-pooling layers with the detailed MFCN deconvolution hardware circuit by the massively parallel structure, which is effective since the deconvolution kernel and the input feature are similar in size. Then, deconvolution calculation is realized by converting the image into a column matrix and converting the deconvolution kernel into a sparse matrix. Meanwhile, the convolution realization in MFCN is also studied with the traditional sliding window method rather than the large matrix theory to overcome the shortcoming of low efficiency. Moreover, the conductance values of memristors are predetermined in Tensorflow with ex-situ training method. In other words, we train MFCN in software, then download the trained parameters to the simulink system by writing memristor. The effectiveness of the designed MFCN scheme is verified with improved accuracy over some existing machine learning methods. The proposed scheme is also adapt to LFW dataset with three-classification tasks. However, the MFCN training is time consuming as the computational burden is heavy with thousands of weight parameters with just six layers. In future, it is necessary to sparsify the weight parameters and layers of the MFCN network to speed up computing.", "title": "" }, { "docid": "60d90ae1407c86559af63f20536202dc", "text": "TCP Westwood (TCPW) is a sender-side modification of the TCP congestion window algorithm that improves upon the performance of TCP Reno in wired as well as wireless networks. The improvement is most significant in wireless networks with lossy links. In fact, TCPW performance is not very sensitive to random errors, while TCP Reno is equally sensitive to random loss and congestion loss and cannot discriminate between them. Hence, the tendency of TCP Reno to overreact to errors. An important distinguishing feature of TCP Westwood with respect to previous wireless TCP “extensions” is that it does not require inspection and/or interception of TCP packets at intermediate (proxy) nodes. Rather, TCPW fully complies with the end-to-end TCP design principle. The key innovative idea is to continuously measure at the TCP sender side the bandwidth used by the connection via monitoring the rate of returning ACKs. The estimate is then used to compute congestion window and slow start threshold after a congestion episode, that is, after three duplicate acknowledgments or after a timeout. The rationale of this strategy is simple: in contrast with TCP Reno which “blindly” halves the congestion window after three duplicate ACKs, TCP Westwood attempts to select a slow start threshold and a congestion window which are consistent with the effective bandwidth used at the time congestion is experienced. 
We call this mechanism faster recovery. The proposed mechanism is particularly effective over wireless links where sporadic losses due to radio channel problems are often misinterpreted as a symptom of congestion by current TCP schemes and thus lead to an unnecessary window reduction. Experimental studies reveal improvements in throughput performance, as well as in fairness. In addition, friendliness with TCP Reno was observed in a set of experiments showing that TCP Reno connections are not starved by TCPW connections. Most importantly, TCPW is extremely effective in mixed wired and wireless networks where throughput improvements of up to 550% are observed. Finally, TCPW performs almost as well as localized link layer approaches such as the popular Snoop scheme, without incurring the overhead of a specialized link layer protocol.", "title": "" }, { "docid": "136deaa8656bdb1c2491de4effd09838", "text": "The fabrication technology advancements lead to place more logic on a silicon die which makes verification more challenging task than ever. The large number of resources is required because more than 70% of the design cycle is used for verification. Universal Verification Methodology was developed to provide a well structured and reusable verification environment which does not interfere with the device under test (DUT). This paper contrasts the reusability of I2C using UVM and introduces how the verification environment is constructed and test cases are implemented for this protocol.", "title": "" }, { "docid": "8fbb53199fab6383b8dd01347d62cf86", "text": "In this paper, we analyze ring oscillator (RO) based physical unclonable function (PUF) on FPGAs. We show that the systematic process variation adversely affects the ability of the RO-PUF to generate unique chip-signatures, and propose a compensation method to mitigate it. Moreover, a configurable ring oscillator (CRO) technique is proposed to reduce noise in PUF responses. Our compensation method could improve the uniqueness of the PUF by an amount as high as 18%. The CRO technique could produce nearly 100% error-free PUF outputs over varying environmental conditions without post-processing while consuming minimum area.", "title": "" }, { "docid": "26b13a3c03014fc910ed973c264e4c9d", "text": "Deep convolutional neural networks (CNNs) have shown great potential for numerous real-world machine learning applications, but performing inference in large CNNs in real-time remains a challenge. We have previously demonstrated that traditional CNNs can be converted into deep spiking neural networks (SNNs), which exhibit similar accuracy while reducing both latency and computational load as a consequence of their data-driven, event-based style of computing. Here we provide a novel theory that explains why this conversion is successful, and derive from it several new tools to convert a larger and more powerful class of deep networks into SNNs. We identify the main sources of approximation errors in previous conversion methods, and propose simple mechanisms to fix these issues. Furthermore, we develop spiking implementations of common CNN operations such as max-pooling, softmax, and batch-normalization, which allow almost loss-less conversion of arbitrary CNN architectures into the spiking domain. 
Empirical evaluation of different network architectures on the MNIST and CIFAR10 benchmarks leads to the best SNN results reported to date.", "title": "" }, { "docid": "2b7ac1941127e1d47401d67e6d7856de", "text": "Alert correlation is an important technique for managing the large volume of intrusion alerts that are raised by heterogeneous Intrusion Detection Systems (IDSs). The recent trend of research in this area is towards extracting attack strategies from raw intrusion alerts. It is generally believed that pure intrusion detection no longer can satisfy the security needs of organizations. Intrusion response and prevention are now becoming crucially important for protecting the network and minimizing damage. Knowing the real security situation of a network and the strategies used by the attackers enables network administrators to launch appropriate responses to stop attacks and prevent them from escalating. This is also the primary goal of using alert correlation techniques. However, most of the current alert correlation techniques only focus on clustering inter-connected alerts into different groups without further analyzing the strategies of the attackers. Some techniques for extracting attack strategies have been proposed in recent years, but they normally require defining a larger number of rules. This paper focuses on developing a new alert correlation technique that can help to automatically extract attack strategies from a large volume of intrusion alerts, without specific prior knowledge about these alerts. The proposed approach is based on two different neural network approaches, namely, Multilayer Perceptron (MLP) and Support Vector Machine (SVM). The probabilistic output of these two methods is used to determine with which previous alerts this current alert should be correlated. This suggests the causal relationship of two alerts, which is helpful for constructing attack scenarios. One of the distinguishing features of the proposed technique is that an Alert Correlation Matrix (ACM) is used to store correlation strengths of any two types of alerts. ACM is updated in the training process, and the information (correlation strength) is then used for extracting high level attack strategies.", "title": "" }, { "docid": "b8cec6cfbc55c9fd6a7d5ed951bcf4eb", "text": "Increasingly large amounts of multidimensional data are being generated on a daily basis in many applications. This leads to a strong demand for learning algorithms to extract useful information from these massive data. This paper surveys the field of multilinear subspace learning (MSL) for dimensionality reduction of multidimensional data directly from their tensorial representations. It discusses the central issues of MSL, including establishing the foundations of the field via multilinear projections, formulating a unifying MSL framework for systematic treatment of the problem, examining the algorithmic aspects of typical MSL solutions, and categorizing both unsupervised and supervised MSL algorithms into taxonomies. Lastly, the paper summarizes a wide range of MSL applications and concludes with perspectives on future research directions.", "title": "" }, { "docid": "b25b7100c035ad2953fb43087ede1625", "text": "In this paper, a novel 10W substrate integrated waveguide (SIW) high power amplifier (HPA) designed with SIW matching network (MN) is presented. The SIW MN is connected with microstrip line using microstrip-to-SIW transition. An inductive metallized post in SIW is employed to realize impedance matching. 
At the fundamental frequency of 2.14 GHz, the impedance matching is realized by moving the position of the inductive metallized post in the SIW. Both the input and output MNs are designed with the proposed SIW-based MN concept. One SIW-based 10W HPA using GaN HEMT at 2.14 GHz is designed, fabricated, and measured. The proposed SIW-based HPA can be easily connected with any microstrip circuit with microstrip-to-SIW transition. Measured results show that the maximum power added efficiency (PAE) is 65.9 % with 39.8 dBm output power and the maximum gain is 20.1 dB with 30.9 dBm output power at 2.18 GHz. The size of the proposed SIW-based HPA is comparable with other microstrip-based PAs designed at the operating frequency.", "title": "" }, { "docid": "529ca36809a7052b9495279aa1081fcc", "text": "To effectively control complex dynamical systems, accurate nonlinear models are typically needed. However, these models are not always known. In this paper, we present a data-driven approach based on Gaussian processes that learns models of quadrotors operating in partially unknown environments. What makes this challenging is that if the learning process is not carefully controlled, the system will go unstable, i.e., the quadcopter will crash. To this end, barrier certificates are employed for safe learning. The barrier certificates establish a non-conservative forward invariant safe region, in which high probability safety guarantees are provided based on the statistics of the Gaussian Process. A learning controller is designed to efficiently explore those uncertain states and expand the barrier certified safe region based on an adaptive sampling scheme. Simulation results are provided to demonstrate the effectiveness of the proposed approach.", "title": "" }, { "docid": "613b014ea02019a78be488a302ff4794", "text": "In this study, the robustness of approaches to the automatic classification of emotions in speech is addressed. Among the many types of emotions that exist, two groups of emotions are considered, adult-to-adult acted vocal expressions of common types of emotions like happiness, sadness, and anger and adult-to-infant vocal expressions of affective intents also known as ‘‘motherese’’. Specifically, we estimate the generalization capability of two feature extraction approaches, the approach developed for Sony’s robotic dog AIBO (AIBO) and the segment-based approach (SBA) of [Shami, M., Kamel, M., 2005. Segment-based approach to the recognition of emotions in speech. In: IEEE Conf. on Multimedia and Expo (ICME05), Amsterdam, The Netherlands]. Three machine learning approaches are considered, K-nearest neighbors (KNN), Support vector machines (SVM) and Ada-boosted decision trees and four emotional speech databases are employed, Kismet, BabyEars, Danish, and Berlin databases. Single corpus experiments show that the considered feature extraction approaches AIBO and SBA are competitive on the four databases considered and that their performance is comparable with previously published results on the same databases. The best choice of machine learning algorithm seems to depend on the feature extraction approach considered. Multi-corpus experiments are performed with the Kismet–BabyEars and the Danish–Berlin database pairs that contain parallel emotional classes. Automatic clustering of the emotional classes in the database pairs shows that the patterns behind the emotions in the Kismet–BabyEars pair are less database dependent than the patterns in the Danish–Berlin pair. 
In off-corpus testing the classifier is trained on one database of a pair and tested on the other. This provides little improvement over baseline classification. In integrated corpus testing, however, the classifier is machine learned on the merged databases and this gives promisingly robust classification results, which suggest that emotional corpora with parallel emotion classes recorded under different conditions can be used to construct a single classifier capable of distinguishing the emotions in the merged corpora. Such a classifier is more robust than a classifier learned on a single corpus as it can recognize more varied expressions of the same emotional classes. These findings suggest that the existing approaches for the classification of emotions in speech are efficient enough to handle larger amounts of training data without any reduction in classification accuracy. 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "2549ed70fd2e06c749bf00193dad1f4d", "text": "Phenylketonuria (PKU) is an inborn error of metabolism caused by deficiency of the hepatic enzyme phenylalanine hydroxylase (PAH) which leads to high blood phenylalanine (Phe) levels and consequent damage of the developing brain with severe mental retardation if left untreated in early infancy. The current dietary Phe restriction treatment has certain clinical limitations. To explore a long-term nondietary restriction treatment, a somatic gene transfer approach in a PKU mouse model (C57Bl/6-Pahenu2) was employed to examine its preclinical feasibility. A recombinant adeno-associated virus (rAAV) vector containing the murine Pah-cDNA was generated, pseudotyped with capsids from AAV serotype 8, and delivered into the liver of PKU mice via single intraportal or tail vein injections. The blood Phe concentrations decreased to normal levels (⩽100 μM or 1.7 mg/dl) 2 weeks after vector application, independent of the sex of the PKU animals and the route of application. In particular, the therapeutic long-term correction in females was also dramatic, which had previously been shown to be difficult to achieve. Therapeutic ranges of Phe were accompanied by the phenotypic reversion from brown to black hair. In treated mice, PAH enzyme activity in whole liver extracts reversed to normal and neither hepatic toxicity nor immunogenicity was observed. In contrast, a lentiviral vector expressing the murine Pah-cDNA, delivered via intraportal vein injection into PKU mice, did not result in therapeutic levels of blood Phe. This study demonstrates the complete correction of hyperphenylalaninemia in both males and females with a rAAV serotype 8 vector. More importantly, the feasibility of a single intravenous injection may pave the way to develop a clinical gene therapy procedure for PKU patients.", "title": "" }, { "docid": "87f05972a93b2b432d0dad6d55e97502", "text": "The daunting volumes of community-contributed media contents on the Internet have become one of the primary sources for online advertising. However, conventional advertising treats image and video advertising as general text advertising by displaying relevant ads based on the contents of the Web page, without considering the inherent characteristics of visual contents. This article presents a contextual advertising system driven by images, which automatically associates relevant ads with an image rather than the entire text in a Web page and seamlessly inserts the ads in the nonintrusive areas within each individual image. 
The proposed system, called ImageSense, supports scalable advertising of, from root to node, Web sites, pages, and images. In ImageSense, the ads are selected based on not only textual relevance but also visual similarity, so that the ads yield contextual relevance to both the text in the Web page and the image content. The ad insertion positions are detected based on image salience, as well as face and text detection, to minimize intrusiveness to the user. We evaluate ImageSense on a large-scale real-world images and Web pages, and demonstrate the effectiveness of ImageSense for online image advertising.", "title": "" }, { "docid": "0d1da055e444a90ec298a2926de9fe7b", "text": "Cryptocurrencies have experienced recent surges in interest and price. It has been discovered that there are time intervals where cryptocurrency prices and certain online and social media factors appear related. In addition it has been noted that cryptocurrencies are prone to experience intervals of bubble-like price growth. The hypothesis investigated here is that relationships between online factors and price are dependent on market regime. In this paper, wavelet coherence is used to study co-movement between a cryptocurrency price and its related factors, for a number of examples. This is used alongside a well-known test for financial asset bubbles to explore whether relationships change dependent on regime. The primary finding of this work is that medium-term positive correlations between online factors and price strengthen significantly during bubble-like regimes of the price series; this explains why these relationships have previously been seen to appear and disappear over time. A secondary finding is that short-term relationships between the chosen factors and price appear to be caused by particular market events (such as hacks / security breaches), and are not consistent from one time interval to another in the effect of the factor upon the price. In addition, for the first time, wavelet coherence is used to explore the relationships between different cryptocurrencies.", "title": "" }, { "docid": "3115c716a065334dc0cdec9e33e24149", "text": "With the recent advances in the field of artificial intelligence, an increasing number of decision-making tasks are delegated to software systems. A key requirement for the success and adoption of such systems is that users must trust system choices or even fully automated decisions. To achieve this, explanation facilities have been widely investigated as a means of establishing trust in these systems since the early years of expert systems. With today’s increasingly sophisticated machine learning algorithms, new challenges in the context of explanations, accountability, and trust towards such systems constantly arise. In this work, we systematically review the literature on explanations in advice-giving systems. This is a family of systems that includes recommender systems, which is one of the most successful classes of advice-giving software in practice. We investigate the purposes of explanations as well as how they are generated, presented to users, and evaluated. As a result, we derive a novel comprehensive taxonomy of aspects to be considered when designing explanation facilities for current and future decision support systems. The taxonomy includes a variety of different facets, such as explanation objective, responsiveness, content and presentation. 
Moreover, we identified several challenges that remain unaddressed so far, for example related to fine-grained issues associated with the presentation of explanations and how explanation facilities are evaluated.", "title": "" } ]
scidocsrr
cb09844b251fc81d12f255fabf2fd246
Electrodes for transcutaneous (surface) electrical stimulation
[ { "docid": "66dc20e12d8b6b99b67485203293ad07", "text": "A parametric model was developed to describe the variation of dielectric properties of tissues as a function of frequency. The experimental spectrum from 10 Hz to 100 GHz was modelled with four dispersion regions. The development of the model was based on recently acquired data, complemented by data surveyed from the literature. The purpose is to enable the prediction of dielectric data that are in line with those contained in the vast body of literature on the subject. The analysis was carried out on a Microsoft Excel spreadsheet. Parameters are given for 17 tissue types.", "title": "" } ]
[ { "docid": "53dc606897bd6388c729cc8138027b31", "text": "Abstract|This paper presents transient stability and power ow models of Thyristor Controlled Reactor (TCR) and Voltage Sourced Inverter (VSI) based Flexible AC Transmission System (FACTS) Controllers. Models of the Static VAr Compensator (SVC), the Thyristor Controlled Series Compensator (TCSC), the Static VAr Compensator (STATCOM), the Static Synchronous Source Series Compensator (SSSC), and the Uni ed Power Flow Controller (UPFC) appropriate for voltage and angle stability studies are discussed in detail. Validation procedures obtained for a test system with a detailed as well as a simpli ed UPFC model are also presented and brie y discussed.", "title": "" }, { "docid": "6ad07075bdeff6e662b3259ba39635be", "text": "We discuss a new deblurring problems in this paper. Focus measurements play a fundamental role in image processing techniques. Most traditional methods neglect spatial information in the frequency domain. Therefore, this study analyzed image data in the frequency domain to determine the value of spatial information. but instead misleading noise reduction results . We found that the local feature is not always a guide for noise reduction. This finding leads to a new method to measure the image edges in focus deblurring. We employed an all-in-focus measure in the frequency domain, based on the energy level of frequency components. We also used a multi-circle enhancement model to analyze this spatial information to provide a more accurate method for measuring images. We compared our results with those using other methods in similar studies. Findings demonstrate the effectiveness of our new method.", "title": "" }, { "docid": "e6c3326af0af36a1197b08e7d2435041", "text": "HUMAN speech requires complex planning and coordination of mouth and tongue movements. Certain types of brain injury can lead to a condition known as apraxia of speech, in which patients are impaired in their ability to coordinate speech movements but their ability to perceive speech sounds, including their own errors, is unaffected1,3. The brain regions involved in coordinating speech, however, remain largely unknown. In this study, brain lesions of 25 stroke patients with a disorder in the motor planning of articulatory movements were compared with lesions of 19 patients without such deficits. A robust double dissociation was found between these two groups. All patients with articulatory planning deficits had lesions that included a discrete region of the left precentral gyms of the insula, a cortical area beneath the frontal and temporal lobes. This area was completely spared in all patients without these articulation deficits. Thus this area seems to be specialized for the motor planning of speech.", "title": "" }, { "docid": "89eee86640807e11fa02d0de4862b3a5", "text": "The evolving fifth generation (5G) cellular wireless networks are envisioned to overcome the fundamental challenges of existing cellular networks, for example, higher data rates, excellent end-to-end performance, and user-coverage in hot-spots and crowded areas with lower latency, energy consumption, and cost per information transfer. To address these challenges, 5G systems will adopt a multi-tier architecture consisting of macrocells, different types of licensed small cells, relays, and device-to-device (D2D) networks to serve users with different quality-of-service (QoS) requirements in a spectrum and energy-efficient manner. 
Starting with the visions and requirements of 5G multi-tier networks, this article outlines the challenges of interference management (e.g. power control, cell association) in these networks with shared spectrum access (i.e. when the different network tiers share the same licensed spectrum). It is argued that the existing interference management schemes will not be able to address the interference management problem in prioritized 5G multi-tier networks where users in different tiers have different priorities for channel access. In this context a survey and qualitative comparison of the existing cell association and power control schemes is provided to demonstrate their limitations for interference management in 5G networks. Open challenges are highlighted and guidelines are provided to modify the existing schemes in order to overcome these limitations and make them suitable for the emerging 5G systems.", "title": "" }, { "docid": "69cdb1c8a277c69a167a9a98a52d407c", "text": "ABSTRACT This paper describes methodologies applied and results achieved in the framework of the ESPRIT Basic Research Action B-Learn II (project no. 7274). B-Learn II is one of the first projects working towards an application of Machine Learning techniques in fields of industrial relevance, which are much more complex than the domains usually treated in ML research. In particular, B-Learn II aims at easing the programming of robots and enhancing their ability to cooperate with humans. The paper gives a short introduction to learning in robotics and to the three applications under consideration in B-Learn II. Afterwards, learning methodologies used in each of the applications, the experimental setups, and the results obtained are described. In general, it can be found that providing good examples and a good interface between the learning and the performance components is crucial for success, so the extension of the \"Programming by Demonstration\" paradigm to robotics has become one of the key aspects of B-Learn II.", "title": "" }, { "docid": "562bce85b8bb43390b87817be4da8cb3", "text": "Variational autoencoders (vaes) learn distributions of high-dimensional data. They model data with a deep latent-variable model and then fit the model by maximizing a lower bound of the log marginal likelihood. vaes can capture complex distributions, but they can also suffer from an issue known as \"latent variable collapse,\" especially if the likelihood model is powerful. Specifically, the lower bound involves an approximate posterior of the latent variables; this posterior \"collapses\" when it is set equal to the prior, i.e., when the approximate posterior is independent of the data. While vaes learn good generative models, latent variable collapse prevents them from learning useful representations. In this paper, we propose a simple new way to avoid latent variable collapse by including skip connections in our generative model; these connections enforce strong links between the latent variables and the likelihood function. We study generative skip models both theoretically and empirically. Theoretically, we prove that skip models increase the mutual information between the observations and the inferred latent variables. Empirically, we study images (MNIST and Omniglot) and text (Yahoo). 
Compared to existing VAE architectures, we show that generative skip models maintain similar predictive performance but lead to less collapse and provide more meaningful representations of the data.", "title": "" }, { "docid": "ea84c28e02a38caff14683681ea264d7", "text": "This paper presents a hierarchical framework for detecting local and global anomalies via hierarchical feature representation and Gaussian process regression. While local anomaly is typically detected as a 3D pattern matching problem, we are more interested in global anomaly that involves multiple normal events interacting in an unusual manner such as car accident. To simultaneously detect local and global anomalies, we formulate the extraction of normal interactions from training video as the problem of efficiently finding the frequent geometric relations of the nearby sparse spatio-temporal interest points. A codebook of interaction templates is then constructed and modeled using Gaussian process regression. A novel inference method for computing the likelihood of an observed interaction is also proposed. As such, our model is robust to slight topological deformations and can handle the noise and data unbalance problems in the training data. Simulations show that our system outperforms the main state-of-the-art methods on this topic and achieves at least 80% detection rates based on three challenging datasets.", "title": "" }, { "docid": "5aa20cb4100085a12d02c6789ad44097", "text": "Visual relations, such as person ride bike and bike next to car, offer a comprehensive scene understanding of an image, and have already shown their great utility in connecting computer vision and natural language. However, due to the challenging combinatorial complexity of modeling subject-predicate-object relation triplets, very little work has been done to localize and predict visual relations. Inspired by the recent advances in relational representation learning of knowledge bases and convolutional object detection networks, we propose a Visual Translation Embedding network (VTransE) for visual relation detection. VTransE places objects in a low-dimensional relation space where a relation can be modeled as a simple vector translation, i.e., subject + predicate ≈ object. 
We propose a novel feature extraction layer that enables object-relation knowledge transfer in a fully-convolutional fashion that supports training and inference in a single forward/backward pass. To the best of our knowledge, VTransE is the first end-toend relation detection network. We demonstrate the effectiveness of VTransE over other state-of-the-art methods on two large-scale datasets: Visual Relationship and Visual Genome. Note that even though VTransE is a purely visual model, it is still competitive to the Lu&#x2019;s multi-modal model with language priors [27].", "title": "" }, { "docid": "15ad5044900511277e0cd602b0c07c5e", "text": "Intentional facial expression of emotion is critical to healthy social interactions. Patients with neurodegenerative disease, particularly those with right temporal or prefrontal atrophy, show dramatic socioemotional impairment. This was an exploratory study examining the neural and behavioral correlates of intentional facial expression of emotion in neurodegenerative disease patients and healthy controls. One hundred and thirty three participants (45 Alzheimer's disease, 16 behavioral variant frontotemporal dementia, 8 non-fluent primary progressive aphasia, 10 progressive supranuclear palsy, 11 right-temporal frontotemporal dementia, 9 semantic variant primary progressive aphasia patients and 34 healthy controls) were video recorded while imitating static images of emotional faces and producing emotional expressions based on verbal command; the accuracy of their expression was rated by blinded raters. Participants also underwent face-to-face socioemotional testing and informants described participants' typical socioemotional behavior. Patients' performance on emotion expression tasks was correlated with gray matter volume using voxel-based morphometry (VBM) across the entire sample. We found that intentional emotional imitation scores were related to fundamental socioemotional deficits; patients with known socioemotional deficits performed worse than controls on intentional emotion imitation; and intentional emotional expression predicted caregiver ratings of empathy and interpersonal warmth. Whole brain VBMs revealed a rightward cortical atrophy pattern homologous to the left lateralized speech production network was associated with intentional emotional imitation deficits. Results point to a possible neural mechanisms underlying complex socioemotional communication deficits in neurodegenerative disease patients.", "title": "" }, { "docid": "63a3126fb97982e6d52265ae3d07c0cc", "text": "This work complements our previous efforts in generating realistic fingerprint images for test purposes. The main variability which characterizes the acquisition of a fingerprint through an on-line sensor is modeled and a sequence of steps is defined to derive a series of impressions from the same master-fingerprint. This allows large fingerprint databases to be randomly generated according to some given parameters. The experimental results validate our technique and prove that it can be very useful for performance evaluation, learning and testing in fingerprint-based systems.", "title": "" }, { "docid": "c61107e9c5213ddb8c5e3b1b14dca661", "text": "In advanced driving assistance systems, it is important to be able to detect the region covered by the road in the images. This paper proposes a method for estimating the road region in images captured by a vehicle-mounted monocular camera. 
Our proposed method first estimates all of the relevant parameters for the camera motion and the 3D road plane from correspondence points between successive images. By calculating a homography matrix from the estimated camera motion and the estimated road plane parameters, and then warping the image at the previous frame, the road region can be determined. To achieve robustness in various road scenes, our method selects the threshold for determining the road region adaptively and incorporates the assumption of a simple road region boundary. In our experiments, it has been shown that the proposed method is able to estimate the road region in real road environments.", "title": "" }, { "docid": "cc12bd6dcd844c49c55f4292703a241b", "text": "Eleven cases of sudden death of men restrained in a prone position by police officers are reported. Nine of the men were hogtied, one was tied to a hospital gurney, and one was manually held prone. All subjects were in an excited delirious state when restrained. Three were psychotic, whereas the others were acutely delirious from drugs (six from cocaine, one from methamphetamine, and one from LSD). Two were shocked with stun guns shortly before death. The literature is reviewed and mechanisms of death are discussed.", "title": "" }, { "docid": "61f079cb59505d9bf1de914330dd852e", "text": "The Spam-Filtering Accuracy Plateau at 99.9% Accuracy and How to Get Past It. William S. Yerazunis, PhD. Presented at the 2004 MIT Spam Conference, January 18, 2004, MIT, Cambridge, Massachusetts. Abstract: Bayesian filters have now become the standard for spam filtering; unfortunately most Bayesian filters seem to reach a plateau of accuracy at 99.9%. We experimentally compare the training methods TEFT, TOE, and TUNE, as well as pure Bayesian, token-bag, token-sequence, SBPH, and Markovian discriminators. The results demonstrate that TUNE is indeed best for training, but computationally exorbitant, and that Markovian discrimination is considerably more accurate than Bayesian, but not sufficient to reach four-nines accuracy, and that other techniques such as inoculation are needed.
Bayesian filters have now become the standard for spam filtering; unfortunately most Bayesian filters seem to reach a plateau of accuracy at 99.9%. We experimentally compare the training methods TEFT, TOE, and TUNE, as well as pure Bayesian, token-bag, token-sequence, SBPH, and Markovian discriminators. The results demonstrate that TUNE is indeed best for training, but computationally exorbitant, and that Markovian discrimination is considerably more accurate than Bayesian, but not sufficient to reach four-nines accuracy, and that other techniques such as inoculation are needed.", "title": "" }, { "docid": "e016d5fc261def252f819f350b155c1a", "text": "Risk reduction is one of the key objectives pursued by transport safety policies. Particularly, the formulation and implementation of transport safety policies needs the systematic assessment of the risks, the specification of residual risk targets and the monitoring of progresses towards those ones. Risk and safety have always been considered critical in civil aviation. The purpose of this paper is to describe and analyse safety aspects in civil airports. An increase in airport capacity usually involves changes to runways layout, route structures and traffic distribution, which in turn effect the risk level around the airport. For these reasons third party risk becomes an important issue in airports development. To avoid subjective interpretations and to increase model accuracy, risk information are colleted and evaluated in a rational and mathematical manner. The method may be used to draw risk contour maps so to provide a guide to local and national authorities, to population who live around the airport, and to airports operators. Key-Words: Risk Management, Risk assessment methodology, Safety Civil aviation.", "title": "" }, { "docid": "1b812ef6c607790a0dbcf5e050871fc2", "text": "This paper introduces Adaptive Music for Affect Improvement (AMAI), a music generation and playback system whose goal is to steer the listener towards a state of more positive affect. AMAI utilizes techniques from game music in order to adjust elements of the music being heard; such adjustments are made adaptively in response to the valence levels of the listener as measured via facial expression and emotion detection. A user study involving AMAI was conducted, with N=19 participants across three groups, one for each strategy of Discharge, Diversion, and Discharge→ Diversion. Significant differences in valence levels between music-related stages of the study were found between the three groups, with Discharge → Diversion exhibiting the greatest increase in valence, followed by Diversion and finally Discharge. Significant differences in positive affect between groups were also found in one before-music and after-music pair of self-reported affect surveys, with Discharge→ Diversion exhibiting the greatest decrease in positive affect, followed by Diversion and finally Discharge; the resulting differences in facial expression valence and self-reported affect offer contrasting con-", "title": "" }, { "docid": "f193757e5ce1e1da8d28bf57175cc7cb", "text": "Tim Bailey Doctor of Philosophy The University of Sydney August 2002 Mobile Robot Localisation and Mapping in Extensive Outdoor Environments This thesis addresses the issues of scale for practical implementations of simultaneous localisation and mapping (SLAM) in extensive outdoor environments. 
Building an incremental map while also using it for localisation is of prime importance for mobile robot navigation but, until recently, has been confined to small-scale, mostly indoor, environments. The critical problems for large-scale implementations are as follows. First, data association— finding correspondences between map landmarks and robot sensor measurements—becomes difficult in complex, cluttered environments, especially if the robot location is uncertain. Second, the information required to maintain a consistent map using traditional methods imposes a prohibitive computational burden as the map increases in size. And third, the mathematics for SLAM relies on assumptions of small errors and near-linearity, and these become invalid for larger maps. In outdoor environments, the problems of scale are exacerbated by complex structure and rugged terrain. This can impede the detection of stable discrete landmarks, and can degrade the utility of motion estimates derived from wheel-encoder odometry. This thesis presents the following contributions for large-scale SLAM. First, a batch data association method called combined constraint data association (CCDA) is developed, which permits robust association in cluttered environments even if the robot pose is completely unknown. Second, an alternative to feature-based data association is presented, based on correlation of unprocessed sensor data with the map, for environments that don’t contain easily detectable discrete landmarks. Third, methods for feature management are presented to control the addition and removal of map landmarks, which facilitates map reliability and reduces computation. Fourth, a new map framework called network coupled feature maps (NCFM) is introduced, where the world is divided into a graph of connected submaps. This map framework is shown to solve the problems of consistency and tractability for very large-scale SLAM. The theoretical contributions of this thesis are demonstrated with a series of practical implementations using a scanning range laser in three different outdoor environments. These include: sensor-based dead reckoning, which is a highly accurate alternative to odometry for rough terrain; correlation-based localisation using particle filter methods; and NCFM SLAM over a region greater than 50000 square metres, and including trajectories with large loops.", "title": "" }, { "docid": "270e593aa89fb034d0de977fe6d618b2", "text": "According to the website AcronymFinder.com which is one of the world's largest and most comprehensive dictionaries of acronyms, an average of 37 new human-edited acronym definitions are added every day. There are 379,918 acronyms with 4,766,899 definitions on that site up to now, and each acronym has 12.5 definitions on average. It is a very important research topic to identify what exactly an acronym means in a given context for document comprehension as well as for document retrieval. In this paper, we propose two word embedding based models for acronym disambiguation. Word embedding is to represent words in a continuous and multidimensional vector space, so that it is easy to calculate the semantic similarity between words by calculating the vector distance. We evaluate the models on MSH Dataset and ScienceWISE Dataset, and both models outperform the state-of-art methods on accuracy. 
The experimental results show that word embedding helps to improve acronym disambiguation.", "title": "" }, { "docid": "4e938aed527769ad65d85bba48151d21", "text": "We provide a thorough description of all the artifacts that are generated by the messenger application Telegram on Android OS. We also provide interpretation of messages that are generated and how they relate to one another. Based on the results of digital forensics investigation and analysis in this paper, an analyst/investigator will be able to read, reconstruct and provide chronological explanations of messages which are generated by the user. Using three different smartphone device vendors and Android OS versions as the objects of our experiments, we conducted tests in a forensically sound manner.", "title": "" } ]
scidocsrr
b8d4821e7398675fb93265e3ed8ba517
PoseShop: Human Image Database Construction and Personalized Content Synthesis
[ { "docid": "5cfc4911a59193061ab55c2ce5013272", "text": "What can you do with a million images? In this paper, we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless, but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks, we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data driven, requiring no annotations or labeling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of image completions and we allow users to select among them. We demonstrate the superiority of our algorithm over existing image completion approaches.", "title": "" } ]
[ { "docid": "000bdac12cd4254500e22b92b1906174", "text": "In this paper we address the topic of generating automatically accurate, meaning preserving and syntactically correct paraphrases of natural language sentences. The design of methods and tools for paraphrasing natural language text is a core task of natural language processing and is quite useful in many applications and procedures. We present a methodology and a tool developed that performs deep analysis of natural language sentences and generate paraphrases of them. The tool performs deep analysis of the natural language sentence and utilizes sets of paraphrasing techniques that can be used to transform structural parts of the dependency tree of a sentence to an equivalent form and also change sentence words with their synonyms and antonyms. In the evaluation study the performance of the method is examined and the accuracy of the techniques is assessed in terms of syntactic correctness and meaning preserving. The results collected are very promising and show the method to be accurate and able to generate quality paraphrases.", "title": "" }, { "docid": "d8d0b6d8b422b8d1369e99ff8b9dee0e", "text": "The advent of massive open online courses (MOOCs) poses new learning opportunities for learners as well as challenges for researchers and designers. MOOC students approach MOOCs in a range of fashions, based on their learning goals and preferred approaches, which creates new opportunities for learners but makes it difficult for researchers to figure out what a student’s behavior means, and makes it difficult for designers to develop MOOCs appropriate for all of their learners. Towards better understanding the learners who take MOOCs, we conduct a survey of MOOC learners’ motivations and correlate it to which students complete the course according to the pace set by the instructor/platform (which necessitates having the goal of completing the course, as well as succeeding in that goal). The results showed that course completers tend to be more interested in the course content, whereas non-completers tend to be more interested in MOOCs as a type of learning experience. Contrary to initial hypotheses, however, no substantial differences in mastery-goal orientation or general academic efficacy were observed between completers and non-completers. However, students who complete the course tend to have more self-efficacy for their ability to complete the course, from the beginning.", "title": "" }, { "docid": "b79fb02d0b89d288b1733c3194e304ec", "text": "In this paper, the idea of a Prepaid energy meter using an AT89S52 microcontroller has been introduced. This concept provides a cost efficient manner of electricity billing. The present energy billing systems are discrete, inaccurate, costly and slow. They are also time and labour consuming. The major drawback of traditional billing system is power and energy theft. This drawback is reduced by using a prepaid energy meter which is based on the concept “Pay first and then use it”. Prepaid energy meter also reduces the error made by humans while taking readings to a large extent and there is no need to take reading in it. The prepaid energy meter uses a recharge card which is available in various ranges (i.e. Rs. 50, Rs. 100, Rs. 200, etc.). The recharge is done by using a keypad and the meter is charged with the amount. According to the power consumption, the amount will be reduced. 
An LDR (light Dependant Resistor) circuit counts the amount of energy consumed and displays the remaining amount of energy on the LCD. A relay system has been used which shut down or disconnect the energy meter and load through supply mains when the recharge amount is depleted. A buzzer is used as an alarm which starts before the recharge amount reaches a minimum value.", "title": "" }, { "docid": "60edfab6fa5f127dd51a015b20d12a68", "text": "We discuss the ethical implications of Natural Language Generation systems. We use one particular system as a case study to identify and classify issues, and we provide an ethics checklist, in the hope that future system designers may benefit from conducting their own ethics reviews based on our checklist.", "title": "" }, { "docid": "35aa75f5bd79c8d97e374c33f5bad615", "text": "Historically, much attention has been given to the unit processes and the integration of those unit processes to improve product yield. Less attention has been given to the wafer environment, either during or post processing. This paper contains a detailed discussion on how particles and Airborne Molecular Contaminants (AMCs) from the wafer environment interact and produce undesired effects on the wafer. Sources of wafer environmental contamination are the process itself, ambient environment, outgassing from wafers, and FOUP contamination. Establishing a strategy that reduces contamination inside the FOUP will increase yield and decrease defect variability. Three primary variables that greatly impact this strategy are FOUP contamination mitigation, FOUP material, and FOUP metrology and cleaning method.", "title": "" }, { "docid": "034bf47c5982756a1cf1c1ccd777d604", "text": "We present weight normalization: a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction. By reparameterizing the weights in this way we improve the conditioning of the optimization problem and we speed up convergence of stochastic gradient descent. Our reparameterization is inspired by batch normalization but does not introduce any dependencies between the examples in a minibatch. This means that our method can also be applied successfully to recurrent models such as LSTMs and to noise-sensitive applications such as deep reinforcement learning or generative models, for which batch normalization is less well suited. Although our method is much simpler, it still provides much of the speed-up of full batch normalization. In addition, the computational overhead of our method is lower, permitting more optimization steps to be taken in the same amount of time. We demonstrate the usefulness of our method on applications in supervised image recognition, generative modelling, and deep reinforcement learning.", "title": "" }, { "docid": "4b2b199aeb61128cbee7691bc49e16f5", "text": "Although deep learning approaches have achieved performance surpassing humans for still image-based face recognition, unconstrained video-based face recognition is still a challenging task due to large volume of data to be processed and intra/inter-video variations on pose, illumination, occlusion, scene, blur, video quality, etc. In this work, we consider challenging scenarios for unconstrained video-based face recognition from multiple-shot videos and surveillance videos with low-quality frames. 
To handle these problems, we propose a robust and efficient system for unconstrained video-based face recognition, which is composed of face/fiducial detection, face association, and face recognition. First, we use multi-scale single-shot face detectors to efficiently localize faces in videos. The detected faces are then grouped respectively through carefully designed face association methods, especially for multi-shot videos. Finally, the faces are recognized by the proposed face matcher based on an unsupervised subspace learning approach and a subspace-tosubspace similarity metric. Extensive experiments on challenging video datasets, such as Multiple Biometric Grand Challenge (MBGC), Face and Ocular Challenge Series (FOCS), JANUS Challenge Set 6 (CS6) for low-quality surveillance videos and IARPA JANUS Benchmark B (IJB-B) for multiple-shot videos, demonstrate that the proposed system can accurately detect and associate faces from unconstrained videos and effectively learn robust and discriminative features for recognition.", "title": "" }, { "docid": "ca9da9f8113bc50aaa79d654a9eaf95a", "text": "Given an ensemble of randomized regression trees, it is possible to restructure them as a collection of multilayered neural networks with particular connection weights. Following this principle, we reformulate the random forest method of Breiman (2001) into a neural network setting, and in turn propose two new hybrid procedures that we call neural random forests. Both predictors exploit prior knowledge of regression trees for their architecture, have less parameters to tune than standard networks, and less restrictions on the geometry of the decision boundaries. Consistency results are proved, and substantial numerical evidence is provided on both synthetic and real data sets to assess the excellent performance of our methods in a large variety of prediction problems. Index Terms — Random forests, neural networks, ensemble methods, randomization, sparse networks. 2010 Mathematics Subject Classification: 62G08, 62G20, 68T05.", "title": "" }, { "docid": "bbe59dd74c554d92167f42701a1f8c3d", "text": "Finding subgraph isomorphisms is an important problem in many applications which deal with data modeled as graphs. While this problem is NP-hard, in recent years, many algorithms have been proposed to solve it in a reasonable time for real datasets using different join orders, pruning rules, and auxiliary neighborhood information. However, since they have not been empirically compared one another in most research work, it is not clear whether the later work outperforms the earlier work. Another problem is that reported comparisons were often done using the original authors’ binaries which were written in different programming environments. In this paper, we address these serious problems by re-implementing five state-of-the-art subgraph isomorphism algorithms in a common code base and by comparing them using many real-world datasets and their query loads. Through our in-depth analysis of experimental results, we report surprising empirical findings.", "title": "" }, { "docid": "80af9f789b334aae324b549fffe4511a", "text": "The research community is interested in developing automatic systems for the detection of events in video. This is particularly important in the field of sports data analytics. This paper presents an approach for identifying major complex events in soccer videos, starting from object detection and spatial relations between objects. 
The proposed framework, firstly, detects objects from each single video frame providing a set of candidate objects with associated confidence scores. The event detection system, then, detects events by means of rules which are based on temporal and logical combinations of the detected objects and their relative distances. The effectiveness of the framework is preliminary demonstrated over different events like \"Ball possession\" and \"Kicking the ball\".", "title": "" }, { "docid": "17ec5256082713e85c819bb0a0dd3453", "text": "Scholarly documents contain multiple figures representing experimental findings. These figures are generated from data which is not reported anywhere else in the paper. We propose a modular architecture for analyzing such figures. Our architecture consists of the following modules: 1. An extractor for figures and associated metadata (figure captions and mentions) from PDF documents; 2. A Search engine on the extracted figures and metadata; 3. An image processing module for automated data extraction from the figures and 4. A natural language processing module to understand the semantics of the figure. We discuss the challenges in each step, report an extractor algorithm to extract vector graphics from scholarly documents and a classification algorithm for figures. Our extractor algorithm improves the state of the art by more than 10% and the classification process is very scalable, yet achieves 85\\% accuracy. We also describe a semi-automatic system for data extraction from figures which is integrated with our search engine to improve user experience.", "title": "" }, { "docid": "a2376c57c3c1c51f57f84788f4c6669f", "text": "Text categorization is a significant tool to manage and organize the surging text data. Many text categorization algorithms have been explored in previous literatures, such as KNN, Naïve Bayes and Support Vector Machine. KNN text categorization is an effective but less efficient classification method. In this paper, we propose an improved KNN algorithm for text categorization, which builds the classification model by combining constrained one pass clustering algorithm and KNN text categorization. Empirical results on three benchmark corpuses show that our algorithm can reduce the text similarity computation substantially and outperform the-state-of-the-art KNN, Naïve Bayes and Support Vector Machine classifiers. In addition, the classification model constructed by the proposed algorithm can be updated incrementally, and it is valuable in practical application.", "title": "" }, { "docid": "cc06553e4d03bf8541597d01de4d5eae", "text": "Several technologies are used today to improve safety in transportation systems. The development of a system for drivability based on both V2V and V2I communication is considered an important task for the future. V2X communication will be a next step for the transportation safety in the nearest time. A lot of different structures, architectures and communication technologies for V2I based systems are under development. Recently a global paradigm shift known as the Internet-of-Things (IoT) appeared and its integration with V2I communication could increase the safety of future transportation systems. This paper brushes up on the state-of-the-art of systems based on V2X communications and proposes an approach for system architecture design of a safe intelligent driver assistant system using IoT communication. 
In particular, the paper presents the design process of the system architecture using IDEF modeling methodology and data flows investigations. The proposed approach shows the system design based on IoT architecture reference model.", "title": "" }, { "docid": "db857ce571add6808493f64d9e254655", "text": "(MANETs). MANET is a temporary network with a group of wireless infrastructureless mobile nodes that communicate with each other within a rapidly dynamic topology. The FMLB protocol distributes transmitted packets over multiple paths through the mobile nodes using Fibonacci sequence. Such distribution can increase the delivery ratio since it reduces the congestion. The FMLB protocol's responsibility is balancing the packet transmission over the selected paths and ordering them according to hops count. The shortest path is used frequently more than other ones. The simulation results show that the proposed protocol has achieved an enhancement on packet delivery ratio, up to 21%, as compared to the Ad Hoc On-demand Distance Vector routing protocol (AODV) protocol. Also the results show the effect of nodes pause time on the data delivery. Finally, the simulation results are obtained by the well-known Glomosim Simulator, version 2.03, without any distance or location measurements devices.", "title": "" }, { "docid": "5552216832bb7315383d1c4f2bfe0635", "text": "Semantic parsing maps sentences to formal meaning representations, enabling question answering, natural language interfaces, and many other applications. However, there is no agreement on what the meaning representation should be, and constructing a sufficiently large corpus of sentence-meaning pairs for learning is extremely challenging. In this paper, we argue that both of these problems can be avoided if we adopt a new notion of semantics. For this, we take advantage of symmetry group theory, a highly developed area of mathematics concerned with transformations of a structure that preserve its key properties. We define a symmetry of a sentence as a syntactic transformation that preserves its meaning. Semantically parsing a sentence then consists of inferring its most probable orbit under the language’s symmetry group, i.e., the set of sentences that it can be transformed into by symmetries in the group. The orbit is an implicit representation of a sentence’s meaning that suffices for most applications. Learning a semantic parser consists of discovering likely symmetries of the language (e.g., paraphrases) from a corpus of sentence pairs with the same meaning. Once discovered, symmetries can be composed in a wide variety of ways, potentially resulting in an unprecedented degree of immunity to syntactic variation.", "title": "" }, { "docid": "cea53ea6ff16808a2dbc8680d3ef88ee", "text": "Applying deep reinforcement learning (RL) on real systems suffers from slow data sampling. We propose an enhanced generative adversarial network (EGAN) to initialize an RL agent in order to achieve faster learning. The EGAN utilizes the relation between states and actions to enhance the quality of data samples generated by a GAN. Pre-training the agent with the EGAN shows a steeper learning curve with a 20% improvement of training time in the beginning of learning, compared to no pre-training, and an improvement compared to training with GAN by about 5% with smaller variations. 
For real time systems with sparse and slow data sampling the EGAN could be used to speed up the early phases of the training process.", "title": "" }, { "docid": "a90dd405d9bd2ed912cacee098c0f9db", "text": "Many telecommunication companies today have actively started to transform the way they do business, going beyond communication infrastructure providers are repositioning themselves as data-driven service providers to create new revenue streams. In this paper, we present a novel industrial application where a scalable Big data approach combined with deep learning is used successfully to classify massive mobile web log data, to get new aggregated insights on customer web behaviors that could be applied to various industry verticals.", "title": "" }, { "docid": "0952701dd63326f8a78eb5bc9a62223f", "text": "The self-organizing map (SOM) is an automatic data-analysis method. It is widely applied to clustering problems and data exploration in industry, finance, natural sciences, and linguistics. The most extensive applications, exemplified in this paper, can be found in the management of massive textual databases and in bioinformatics. The SOM is related to the classical vector quantization (VQ), which is used extensively in digital signal processing and transmission. Like in VQ, the SOM represents a distribution of input data items using a finite set of models. In the SOM, however, these models are automatically associated with the nodes of a regular (usually two-dimensional) grid in an orderly fashion such that more similar models become automatically associated with nodes that are adjacent in the grid, whereas less similar models are situated farther away from each other in the grid. This organization, a kind of similarity diagram of the models, makes it possible to obtain an insight into the topographic relationships of data, especially of high-dimensional data items. If the data items belong to certain predetermined classes, the models (and the nodes) can be calibrated according to these classes. An unknown input item is then classified according to that node, the model of which is most similar with it in some metric used in the construction of the SOM. A new finding introduced in this paper is that an input item can even more accurately be represented by a linear mixture of a few best-matching models. This becomes possible by a least-squares fitting procedure where the coefficients in the linear mixture of models are constrained to nonnegative values.", "title": "" }, { "docid": "154f5455f593e8ebf7058cc0a32426a2", "text": "Many life-log analysis applications, which transfer data from cameras and sensors to a Cloud and analyze them in the Cloud, have been developed with the spread of various sensors and Cloud computing technologies. However, difficulties arise because of the limitation of the network bandwidth between the sensors and the Cloud. In addition, sending raw sensor data to a Cloud may introduce privacy issues. Therefore, we propose distributed deep learning processing between sensors and the Cloud in a pipeline manner to reduce the amount of data sent to the Cloud and protect the privacy of the users. In this paper, we have developed a pipeline-based distributed processing method for the Caffe deep learning framework and investigated the processing times of the classification by varying a division point and the parameters of the network models using data sets, CIFAR-10 and ImageNet. 
The experiments show that the accuracy of deep learning with coarse-grain data is comparable to that with the default parameter settings, and the proposed distributed processing method has performance advantages in cases of insufficient network bandwidth with actual sensors and a Cloud environment.", "title": "" }, { "docid": "11ddbce61cb175e9779e0fcb5622436f", "text": "When rewards are sparse and efficient exploration essential, deep Q-learning with -greedy exploration tends to fail. This poses problems for otherwise promising domains such as task-oriented dialog systems, where the primary reward signal, indicating successful completion, typically occurs only at the end of each episode but depends on the entire sequence of utterances. A poor agent encounters such successful dialogs rarely, and a random agent may never stumble upon a successful outcome in reasonable time. We present two techniques that significantly improve the efficiency of exploration for deep Q-learning agents in dialog systems. First, we demonstrate that exploration by Thompson sampling, using Monte Carlo samples from a Bayes-by-Backprop neural network, yields marked improvement over standard DQNs with Boltzmann or -greedy exploration. Second, we show that spiking the replay buffer with a small number of successes, as are easy to harvest for dialog tasks, can make Q-learning feasible when it might otherwise fail catastrophically.", "title": "" } ]
scidocsrr
e11b4608b217a3f5a0935eb948bf582b
Revisiting Android reuse studies in the context of code obfuscation and library usages
[ { "docid": "8d9f5f5569f0281765a60705c7e9c752", "text": "Software repositories hold applications that are often categorized to improve the effectiveness of various maintenance tasks. Properly categorized applications allow stakeholders to identify requirements related to their applications and predict maintenance problems in software projects. Manual categorization is expensive, tedious, and laborious – this is why automatic categorization approaches are gaining widespread importance. Unfortunately, for different legal and organizational reasons, the applications’ source code is often not available, thus making it difficult to automatically categorize these applications. In this paper, we propose a novel approach in which we use Application Programming Interface (API) calls from third-party libraries for automatic categorization of software applications that use these API calls. Our approach is general since it enables different categorization algorithms to be applied to repositories that contain both source code and bytecode of applications, since API calls can be extracted from both the source code and byte-code. We compare our approach to a state-of-the-art approach that uses machine learning algorithms for software categorization, and conduct experiments on two large Java repositories: an open-source repository containing 3,286 projects and a closed-source repository with 745 applications, where the source code was not available. Our contribution is twofold: we propose a new approach that makes it possible to categorize software projects without any source code using a small number of API calls as attributes, and furthermore we carried out a comprehensive empirical evaluation of automatic categorization approaches.", "title": "" } ]
[ { "docid": "aaa1ed7c041123e0f7a2f948fdbd9e1a", "text": "The present study evaluated the venous anatomy of the craniocervical junction, focusing on the suboccipital cavernous sinus (SCS), a vertebral venous plexus surrounding the horizontal portion of the vertebral artery at the skull base. MR imaging was reviewed to clarify the venous anatomy of the SCS in 33 patients. Multiplanar reconstruction MR images were obtained using contrast-enhanced three-dimensional fast spoiled gradient–recalled acquisition in the steady state (3-D fast SPGR) with fat suppression. Connections with the SCS were evaluated for the following venous structures: anterior condylar vein (ACV); posterior condylar vein (PCV); lateral condylar vein (LCV); vertebral artery venous plexus (VAVP); and anterior internal vertebral venous plexus (AVVP). The SCS connected with the ACV superomedially, with the VAVP inferolaterally, and with the AVVP medially. The LCV connected with the external orifice of the ACV and superoanterior aspect of the SCS. The PCV connected with the posteromedial aspect of the jugular bulb and superoposterior aspect of the SCS. The findings of craniocervical junction venography performed in eight patients corresponded with those on MR imaging, other than with regard to the PCV. Contrast-enhanced 3-D fast SPGR allows visualization of the detailed anatomy of these venous structures, and this technique facilitates interventions and description of pathologies occurring in this area.", "title": "" }, { "docid": "c460179cbdb40b9d89b3cc02276d54e1", "text": "In recent years the sport of climbing has seen consistent increase in popularity. Climbing requires a complex skill set for successful and safe exercising. While elite climbers receive intensive expert coaching to refine this skill set, this progression approach is not viable for the amateur population. We have developed ClimbAX - a climbing performance analysis system that aims for replicating expert assessments and thus represents a first step towards an automatic coaching system for climbing enthusiasts. Through an accelerometer based wearable sensing platform, climber's movements are captured. An automatic analysis procedure detects climbing sessions and moves, which form the basis for subsequent performance assessment. The assessment parameters are derived from sports science literature and include: power, control, stability, speed. ClimbAX was evaluated in a large case study with 53 climbers under competition settings. We report a strong correlation between predicted scores and official competition results, which demonstrate the effectiveness of our automatic skill assessment system.", "title": "" }, { "docid": "4b665ffb50963308818176d4277cfe71", "text": "Aligning IT to business needs is still one of the most important challenges for many organizations. In a recent survey amongst European IT managers, 78% indicate that their IT is not aligned with business strategy. Another recent survey shows similar results. The message of Business & IT Alignment is logical and undisputed. But if this message is so clear, how can practice be so difficult? To explore the issues with and approaches to BITA in practice, a focused group discussion was organized with IT managers and CIOs of medium sized and large organizations in the Netherlands. In total 23 participants from trade, manufacturing and financial companies joined the discussions. This paper explores the practice of Business & IT Alignment in mult-business-companies. 
The parenting theory for the role of the corporate center is used to explain the different practical approaches that the participants in the focused groups took.", "title": "" }, { "docid": "9adf653a332e07b8aa055b62449e1475", "text": "False-belief task have mainly been associated with the explanatory notion of the theory of mind and the theory-theory. However, it has often been pointed out that this kind of highlevel reasoning is computational and time expensive. During the last decades, the idea of embodied intelligence, i.e. complex behavior caused by sensorimotor contingencies, has emerged in both the fields of neuroscience, psychology and artificial intelligence. Viewed from this perspective, the failing in a false-belief test can be the result of the impairment to recognize and track others’ sensorimotor contingencies and affordances. Thus, social cognition is explained in terms of lowlevel signals instead of high-level reasoning. In this work, we present a generative model for optimal action selection which simultaneously can be employed to make predictions of others’ actions. As we base the decision making on a hidden state representation of sensorimotor signals, this model is in line with the ideas of embodied intelligence. We demonstrate how the tracking of others’ hidden states can give rise to correct falsebelief inferences, while a lack thereof leads to failing. With this work, we want to emphasize the importance of sensorimotor contingencies in social cognition, which might be a key to artificial, socially intelligent systems.", "title": "" }, { "docid": "da5c56f30c9c162eb80c418ba9dbc31a", "text": "Text detection and recognition in a natural environment are key components of many applications, ranging from business card digitization to shop indexation in a street. This competition aims at assessing the ability of state-of-the-art methods to detect Multi-Lingual Text (MLT) in scene images, such as in contents gathered from the Internet media and in modern cities where multiple cultures live and communicate together. This competition is an extension of the Robust Reading Competition (RRC) which has been held since 2003 both in ICDAR and in an online context. The proposed competition is presented as a new challenge of the RRC. The dataset built for this challenge largely extends the previous RRC editions in many aspects: the multi-lingual text, the size of the dataset, the multi-oriented text, the wide variety of scenes. The dataset is comprised of 18,000 images which contain text belonging to 9 languages. The challenge is comprised of three tasks related to text detection and script classification. We have received a total of 16 participations from the research and industrial communities. This paper presents the dataset, the tasks and the findings of this RRC-MLT challenge.", "title": "" }, { "docid": "3b0f2413234109c6df1b643b61dc510b", "text": "Most people think computers will never be able to think. That is, really think. Not now or ever. To be sure, most people also agree that computers can do many things that a person would have to be thinking to do. Then how could a machine seem to think but not actually think? Well, setting aside the question of what thinking actually is, I think that most of us would answer that by saying that in these cases, what the computer is doing is merely a superficial imitation of human intelligence. It has been designed to obey certain simple commands, and then it has been provided with programs composed of those commands. 
Because of this, the computer has to obey those commands, but without any idea of what's happening.", "title": "" }, { "docid": "c824c8bb8fd9b0b3f0f89df24e8f53d0", "text": "Ovarian cysts are an extremely common gynecological problem in adolescent. Majority of ovarian cysts are benign with few cases being malignant. Ovarian serous cystadenoma are rare in children. A 14-year-old presented with abdominal pain and severe abdominal distention. She underwent laparotomy and after surgical removal, the mass was found to be ovarian serous cystadenoma on histology. In conclusions, germ cell tumors the most important causes for the giant ovarian masses in children. Epithelial tumors should not be forgotten in the differential diagnosis. Keyword: Adolescent; Ovarian Cysts/diagnosis*; Cystadenoma, Serous/surgery; Ovarian Neoplasms/surgery; Ovarian cystadenoma", "title": "" }, { "docid": "0ff27e119ec045674b9111bb5a9e5d29", "text": "Description: This book provides an introduction to the complex field of ubiquitous computing Ubiquitous Computing (also commonly referred to as Pervasive Computing) describes the ways in which current technological models, based upon three base designs: smart (mobile, wireless, service) devices, smart environments (of embedded system devices) and smart interaction (between devices), relate to and support a computing vision for a greater range of computer devices, used in a greater range of (human, ICT and physical) environments and activities. The author details the rich potential of ubiquitous computing, the challenges involved in making it a reality, and the prerequisite technological infrastructure. Additionally, the book discusses the application and convergence of several current major and future computing trends.-Provides an introduction to the complex field of ubiquitous computing-Describes how current technology models based upon six different technology form factors which have varying degrees of mobility wireless connectivity and service volatility: tabs, pads, boards, dust, skins and clay, enable the vision of ubiquitous computing-Describes and explores how the three core designs (smart devices, environments and interaction) based upon current technology models can be applied to, and can evolve to, support a vision of ubiquitous computing and computing for the future-Covers the principles of the following current technology models, including mobile wireless networks, service-oriented computing, human computer interaction, artificial intelligence, context-awareness, autonomous systems, micro-electromechanical systems, sensors, embedded controllers and robots-Covers a range of interactions, between two or more UbiCom devices, between devices and people (HCI), between devices and the physical world.-Includes an accompanying website with PowerPoint slides, problems and solutions, exercises, bibliography and further reading Graduate students in computer science, electrical engineering and telecommunications courses will find this a fascinating and useful introduction to the subject. It will also be of interest to ICT professionals, software and network developers and others interested in future trends and models of computing and interaction over the next decades.", "title": "" }, { "docid": "461ec14463eb20962ef168de781ac2a2", "text": "Local descriptors based on the image noise residual have proven extremely effective for a number of forensic applications, like forgery detection and localization. 
Nonetheless, motivated by promising results in computer vision, the focus of the research community is now shifting on deep learning. In this paper we show that a class of residual-based descriptors can be actually regarded as a simple constrained convolutional neural network (CNN). Then, by relaxing the constraints, and fine-tuning the net on a relatively small training set, we obtain a significant performance improvement with respect to the conventional detector.", "title": "" }, { "docid": "d18d67949bae399cdc148f2ded81903a", "text": "Stock market news and investing tips are popular topics in Twitter. In this paper, first we utilize a 5-year financial news corpus comprising over 50,000 articles collected from the NASDAQ website for the 30 stock symbols in Dow Jones Index (DJI) to train a directional stock price prediction system based on news content. Then we proceed to prove that information in articles indicated by breaking Tweet volumes leads to a statistically significant boost in the hourly directional prediction accuracies for the prices of DJI stocks mentioned in these articles. Secondly, we show that using document-level sentiment extraction does not yield to a statistically significant boost in the directional predictive accuracies in the presence of other 1-gram keyword features.", "title": "" }, { "docid": "8c91fe2785ae0a7b907315ae52d9a905", "text": "A method for the determination of pixel correspondence in stereo image pairs is presented. The optic ̄ow vectors that result from the displacement of the point of projection are obtained and the correspondence between pixels of the various objects in the scene is derived from the optic ̄ow vectors. The proposed algorithm is implemented and the correspondence vectors are obtained. Various specialized improvements of the method are implemented and thoroughly tested on sequences of image pairs giving rise to interesting conclusions. The algorithm is highly-parallelizable and therefore suitable for real-time applications.", "title": "" }, { "docid": "18c507d6624f153cb1b7beaf503b0d54", "text": "The critical period hypothesis for language acquisition (CP) proposes that the outcome of language acquisition is not uniform over the lifespan but rather is best during early childhood. The CP hypothesis was originally proposed for spoken language but recent research has shown that it applies equally to sign language. This paper summarizes a series of experiments designed to investigate whether and how the CP affects the outcome of sign language acquisition. The results show that the CP has robust effects on the development of sign language comprehension. Effects are found at all levels of linguistic structure (phonology, morphology and syntax, the lexicon and semantics) and are greater for first as compared to second language acquisition. In addition, CP effects have been found on all measures of language comprehension examined to date, namely, working memory, narrative comprehension, sentence memory and interpretation, and on-line grammatical processing. The nature of these effects with respect to a model of language comprehension is discussed.", "title": "" }, { "docid": "7a6691ce9d93b42179cd2ce954aeb8c5", "text": "In this paper, a new dance training system based on the motion capture and virtual reality (VR) technologies is proposed. Our system is inspired by the traditional way to learn new movements-imitating the teacher's movements and listening to the teacher's feedback. 
A prototype of our proposed system is implemented, in which a student can imitate the motion demonstrated by a virtual teacher projected on the wall screen. Meanwhile, the student's motions are captured and analyzed by the system, based on which feedback is given back to the student. The result of user studies showed that our system can successfully guide students to improve their skills. The subjects agreed that the system is interesting and can motivate them to learn.", "title": "" }, { "docid": "83466fa7c291f6a21f6eedd4150043dc", "text": "E-mail communication has become the need of the hour with the advent of the Internet. However, it is being abused for various illegitimate purposes, such as spamming, drug trafficking, cyber bullying, phishing, racial vilification, child pornography, and sexual harassment. Several cyber crimes such as identity theft, plagiarism, and internet fraud stipulate that the true identity of the e-mail's author be revealed, so that the culprits can be punished in a court of law by gathering credible evidence against them. Forensic analysis can play a crucial role here, by letting the forensic investigator gather evidence by examining suspected e-mail accounts. In this context, automated authorship identification can assist the forensic investigator in cyber crime investigation. In this paper we discuss how existing state-of-the-art techniques have been employed for author identification of e-mails and we propose our model for identifying the most plausible author of e-mails.", "title": "" }, { "docid": "366800edb32efd098351bc711984854a", "text": "Building credible Non-Playing Characters (NPCs) in games requires enhancing not only the graphic animation but also the behavioral model. This paper tackles the problem of the dynamics of NPCs' social relations depending on their emotional interactions. First, we discuss the need for a dynamic model of social relations. Then, we present our model of social relations for NPCs and we give a qualitative model of the influence of emotions on social relations. We describe the implementation of this model and we briefly illustrate its features on a simple scene.", "title": "" }, { "docid": "7cc9b6f1837d992b64071e2149e81a9a", "text": "This article presents an application of Augmented Reality technology for interior design. Plus, an Educational Interior Design Project is reviewed. Along with the dramatic progress of digital technology, virtual information techniques are also required for architectural projects. Thus, the new technology of Augmented Reality offers many advantages for digital architectural design and construction fields. AR is also being considered as a new design approach for interior design. In an AR environment, the virtual furniture can be displayed and modified in real-time on the screen, allowing the user to have an interactive experience with the virtual furniture in a real-world environment. Here, the AR environment is exploited as the new working environment for architects in architectural design works, so that they can do their work conveniently, for example through collaborative discussion in the AR environment. Finally, this study proposes a new method for applying AR technology to interior design work, where a user can view virtual furniture and communicate with 3D virtual furniture data using a dynamic and flexible user interface.
Plus, all the properties of the virtual furniture can be adjusted using occlusionbased interaction method for a Tangible Augmented Reality.", "title": "" }, { "docid": "a6ba94c0faf2fd41d8b1bd5a068c6d3d", "text": "The main mechanisms responsible for performance degradation of millimeter wave (mmWave) and terahertz (THz) on-chip antennas are reviewed. Several techniques to improve the performance of the antennas and several high efficiency antenna types are presented. In order to illustrate the effects of the chip topology on the antenna, simulations and measurements of mmWave and THz on-chip antennas are shown. Finally, different transceiver architectures are explored with emphasis on the challenges faced in a wireless multi-core environment.", "title": "" }, { "docid": "79c80b3aea50ab971f405b8b58da38de", "text": "In this paper, the design and implementation of small inductors in printed circuit board (PCB) for domestic induction heating applications is presented. With this purpose, we have developed both a manufacturing technique and an electromagnetic model of the system based on finite-element method (FEM) simulations. The inductor arrangement consists of a stack of printed circuit boards in which a planar litz wire structure is implemented. The developed PCB litz wire structure minimizes the losses in a similar way to the conventional multi-stranded litz wires; whereas the stack of PCBs allows increasing the power transferred to the pot. Different prototypes of the proposed PCB inductor have been measured at low signal levels. Finally, a PCB inductor has been integrated in an electronic stage to test at high signal levels, i.e. in the similar working conditions to the commercial application.", "title": "" }, { "docid": "92a00453bc0c2115a8b37e5acc81f193", "text": "Choosing the appropriate software development methodology is something which continues to occupy the minds of many IT professionals. The introduction of “Agile” development methodologies such as XP and SCRUM held the promise of improved software quality and reduced delivery times. Combined with a Lean philosophy, there would seem to be potential for much benefit. While evidence does exist to support many of the Lean/Agile claims, we look here at how such methodologies are being adopted in the rigorous environment of safety-critical embedded software development due to its high regulation. Drawing on the results of a systematic literature review we find that evidence is sparse for Lean/Agile adoption in these domains. However, where it has been trialled, “out-of-the-box” Agile practices do not seem to fully suit these environments but rather tailored Agile versions combined with more planbased practices seem to be making inroads.", "title": "" }, { "docid": "135158b230016bb80a08b4c7e2c4f3f2", "text": "Quite recently, two smart-card-based passwords authenticated key exchange protocols were proposed by Lee et al. and Hwang et al. respectively. However, neither of them achieves two-factor authentication fully since they would become completely insecure once one factor is broken. To overcome these congenital defects, this study proposes such a secure authenticated key exchange protocol that achieves fully two-factor authentication and provides forward security of session keys. And yet our scheme is simple and reasonably efficient. Furthermore, we can provide the rigorous proof of the security for it.", "title": "" } ]
scidocsrr
6c71a1f3fd813d27efa4b205e5cb8dac
Advanced Demand Side Management for the Future Smart Grid Using Mechanism Design
[ { "docid": "adec3b3578d56cefed73fd74d270ca22", "text": "In the framework of liberalized electricity markets, distributed generation and controllable demand have the opportunity to participate in the real-time operation of transmission and distribution networks. This may be done by using the virtual power plant (VPP) concept, which consists of aggregating the capacity of many distributed energy resources (DER) in order to make them more accessible and manageable across energy markets. This paper provides an optimization algorithm to manage a VPP composed of a large number of customers with thermostatically controlled appliances. The algorithm, based on a direct load control (DLC), determines the optimal control schedules that an aggregator should apply to the controllable devices of the VPP in order to optimize load reduction over a specified control period. The results define the load reduction bid that the aggregator can present in the electricity market, thus helping to minimize network congestion and deviations between generation and demand. The proposed model, which is valid for both transmission and distribution networks, is tested on a real power system to demonstrate its applicability.", "title": "" } ]
[ { "docid": "5f393e79895bf234c0b96b7ece0d1cae", "text": "Energy consumption of routers in commonly used mesh-based on-chip networks for chip multiprocessors is an increasingly important concern: these routers consist of a crossbar and complex control logic and can require significant buffers, hence high energy and area consumption. In contrast, an alternative design uses ring-based networks to connect network nodes with small and simple routers. Rings have been used in recent commercial designs, and are well-suited to smaller core counts. However, rings do not scale as efficiently as meshes. In this paper, we propose an energy-efficient yet high performance alternative to traditional mesh-based and ringbased on-chip networks. We aim to attain the scalability of meshes with the router simplicity and efficiency of rings. Our design is a hierarchical ring topology which consists of small local rings connected via one or more global ring. Routing between rings is accomplished using bridge routers that have minimal buffering, and use deflection in place of buffered flow control for simplicity. We comprehensively explore new issues in the design of such a topology, including the design of the routers, livelock freedom, energy, performance and scalability. We propose new router microarchitectures and show that these routers are significantly simpler and more area and energy efficient than both buffered and bufferless mesh based routers. We develop new mechanisms to preserve livelock-free routing in our topology and router design. Our evaluations compare our proposal to a traditional ring network and conventional buffered and bufferless mesh based networks, showing that our proposal reduces average network power by 52.4% (30.4%) and router area footprint by 70.5% from a buffered mesh in 16-node (64-node) configurations, while also improving system performance by 0.6% (5.0%).", "title": "" }, { "docid": "9a2ab1d198468819f32a2b74334528ae", "text": "This paper introduces GeoSpark an in-memory cluster computing framework for processing large-scale spatial data. GeoSpark consists of three layers: Apache Spark Layer, Spatial RDD Layer and Spatial Query Processing Layer. Apache Spark Layer provides basic Spark functionalities that include loading / storing data to disk as well as regular RDD operations. Spatial RDD Layer consists of three novel Spatial Resilient Distributed Datasets (SRDDs) which extend regular Apache Spark RDDs to support geometrical and spatial objects. GeoSpark provides a geometrical operations library that accesses Spatial RDDs to perform basic geometrical operations (e.g., Overlap, Intersect). System users can leverage the newly defined SRDDs to effectively develop spatial data processing programs in Spark. The Spatial Query Processing Layer efficiently executes spatial query processing algorithms (e.g., Spatial Range, Join, KNN query) on SRDDs. GeoSpark also allows users to create a spatial index (e.g., R-tree, Quad-tree) that boosts spatial data processing performance in each SRDD partition. Preliminary experiments show that GeoSpark achieves better run time performance than its Hadoop-based counterparts (e.g., SpatialHadoop).", "title": "" }, { "docid": "b27224825bb28b9b8d0eea37f8900d42", "text": "The use of Convolutional Neural Networks (CNN) in natural im age classification systems has produced very impressive results. 
Combined with the inherent nature of medical images that make them ideal for deep learning, further application of such systems to medical image classification holds much promise. However, the usefulness and potential impact of such a system can be completely negated if it does not reach a target accuracy. In this paper, we present a study on determining the optimum size of the training data set necessary to achieve high classification accuracy with low variance in medical image classification systems. The CNN was applied to classify axial Computed Tomography (CT) images into six anatomical classes. We trained the CNN using six different sizes of training data set (5, 10, 20, 50, 100, and 200) and then tested the resulting system with a total of 6000 CT images. All images were acquired from the Massachusetts General Hospital (MGH) Picture Archiving and Communication System (PACS). Using this data, we employ the learning curve approach to predict classification accuracy at a given training sample size. Our research will present a general methodology for determining the training data set size necessary to achieve a certain target classification accuracy that can be easily applied to other problems within such systems.", "title": "" }, { "docid": "46768aeb3c9295a38ff64b3e40a34ec1", "text": "Google's monolithic repository provides a common source of truth for tens of thousands of developers around the world.", "title": "" }, { "docid": "09f743b18655305b7ad1e39432756525", "text": "Several applications of chalcones and their derivatives encouraged researchers to increase their synthesis as an alternative for the treatment of pathogenic bacterial and fungal infections. In the present study, chalcone derivatives were synthesized through cross aldol condensation reaction between 4-(N,N-dimethylamino)benzaldehyde and multiarm aromatic ketones. The multiarm aromatic ketones were synthesized through nucleophilic substitution reaction between 4-hydroxy acetophenone and benzyl bromides. The benzyl bromides, multiarm aromatic ketones, and corresponding chalcone derivatives were evaluated for their activities against eleven clinical pathogenic Gram-positive, Gram-negative bacteria, and three pathogenic fungi by the disk diffusion method. The minimum inhibitory concentration was determined by the microbroth dilution technique. The results of the present study demonstrated that benzyl bromide derivatives have strong antibacterial and antifungal properties as compared to synthetic chalcone derivatives and ketones. Benzyl bromides (1a and 1c) showed high activity against Gram-positive bacteria and fungi but moderate activity against Gram-negative bacteria. Therefore, these compounds may be considered good candidates for antibacterial and antifungal drug discovery. However, substituted ketones (2a-b) as well as chalcone derivatives (3a-c) showed no activity against all the tested strains except for ketone (2c), which showed moderate activity against Candida albicans.", "title": "" }, { "docid": "d88523afba42431989f5d3bd22f2ad85", "text": "The visual cues from multiple support regions of different sizes and resolutions are complementary in classifying a candidate box in object detection. How to effectively integrate local and contextual visual cues from these regions has become a fundamental problem in object detection. Most existing works simply concatenated features or scores obtained from support regions.
In this paper, we propose a novel gated bi-directional CNN (GBD-Net) to pass messages between features from different support regions during both feature learning and feature extraction. Such message passing can be implemented through convolution in two directions and can be conducted in various layers. Therefore, local and contextual visual patterns can validate the existence of each other by learning their nonlinear relationships, and their close interactions are modeled in a much more complex way. It is also shown that message passing is not always helpful depending on individual samples. Gated functions are further introduced to control message transmission, and their on-and-off is controlled by extra visual evidence from the input sample. GBD-Net is implemented under the Fast RCNN detection framework. Its effectiveness is shown through experiments on three object detection datasets, ImageNet, Pascal VOC2007 and Microsoft COCO.", "title": "" }, { "docid": "4bdccdda47aea04c5877587daa0e8118", "text": "Recognizing text characters from natural scene images is a challenging problem due to background interference and multiple character patterns. Scene Text Character (STC) recognition, which generally includes feature representation to model character structure and multi-class classification to predict label and score of character class, mostly plays a significant role in word-level text recognition. The contribution of this paper is a complete performance evaluation of image-based STC recognition, by comparing different sampling methods, feature descriptors, dictionary sizes, coding and pooling schemes, and SVM kernels. We systematically analyze the impact of each option in the feature representation and classification. The evaluation results on two datasets CHARS74K and ICDAR2003 demonstrate that Histogram of Oriented Gradient (HOG) descriptor, soft-assignment coding, max pooling, and Chi-Square Support Vector Machines (SVM) obtain the best performance among local sampling based feature representations. To improve STC recognition, we apply global sampling feature representation. We generate Global HOG (GHOG) by computing HOG descriptor from global sampling. GHOG enables better character structure modeling and obtains better performance than local sampling based feature representations. The GHOG also outperforms existing methods in the two benchmark datasets.", "title": "" }, { "docid": "dcda412c18e92650d9791023f13e4392", "text": "Graphs can straightforwardly represent the relations between objects, which inevitably draws a lot of attention from both academia and industry. Achievements mainly concentrate on homogeneous graphs and bipartite graphs. However, it is difficult to use existing algorithms in actual scenarios, because in the real world the types of objects and relations are diverse and the amount of data can be very large. Considering the characteristics of the "black market", we propose HGsuspector, a novel and scalable algorithm for detecting collective fraud in directed heterogeneous graphs. We first decompose directed heterogeneous graphs into a set of bipartite graphs, then we define a metric on each connected bipartite graph and calculate its scores, which fuse structure information and event probability. The threshold for distinguishing between normal and abnormal can be obtained by statistics or other anomaly detection algorithms in the score space.
We also provide a technical solution for fraud detection in e-commerce scenario, which has been successfully applied in Jingdong e-commerce platform to detect collective fraud in real time. The experiments on real-world datasets, which has billion nodes and edges, demonstrate that HGsuspector is more accurate and fast than the most practical and state-of-the-art approach by far.", "title": "" }, { "docid": "e6300989e5925d38d09446b3e43092e5", "text": "Cloud computing provides resources as services in pay-as-you-go mode to customers by using virtualization technology. As virtual machine (VM) is hosted on physical server, great energy is consumed by maintaining the servers in data center. More physical servers means more energy consumption and more money cost. Therefore, the VM placement (VMP) problem is significant in cloud computing. This paper proposes an approach based on ant colony optimization (ACO) to solve the VMP problem, named as ACO-VMP, so as to effectively use the physical resources and to reduce the number of running physical servers. The number of physical servers is the same as the number of the VMs at the beginning. Then the ACO approach tries to reduce the physical server one by one. We evaluate the performance of the proposed ACO-VMP approach in solving VMP with the number of VMs being up to 600. Experimental results compared with the ones obtained by the first-fit decreasing (FFD) algorithm show that ACO-VMP can solve VMP more efficiently to reduce the number of physical servers significantly, especially when the number of VMs is large.", "title": "" }, { "docid": "af81774bce83971009c26fba730bfba3", "text": "In this paper, we present a stereo visual-inertial odometry algorithm assembled with three separated Kalman filters, i.e., attitude filter, orientation filter, and position filter. Our algorithm carries out the orientation and position estimation with three filters working on different fusion intervals, which can provide more robustness even when the visual odometry estimation fails. In our orientation estimation, we propose an improved indirect Kalman filter, which uses the orientation error space represented by unit quaternion as the state of the filter. The performance of the algorithm is demonstrated through extensive experimental results, including the benchmark KITTI datasets and some challenging datasets captured in a rough terrain campus.", "title": "" }, { "docid": "b776bf3acb830552eb1ecf353b08edee", "text": "The size and high rate of change of source code comprising a software system make it difficult for software developers to keep up with who on the team knows about particular parts of the code. Existing approaches to this problem are based solely on authorship of code. In this paper, we present data from two professional software development teams to show that both authorship and interaction information about how a developer interacts with the code are important in characterizing a developer's knowledge of code. We introduce the degree-of-knowledge model that computes automatically a real value for each source code element based on both authorship and interaction information. 
We show that the degree-of-knowledge model can provide better results than an existing expertise finding approach and also report on case studies of the use of the model to support knowledge transfer and to identify changes of interest.", "title": "" }, { "docid": "c3218724e6237c3d51eb41bed1cd5268", "text": "Recently, wireless sensor networks (WSNs) have become mature enough to go beyond being simple fine-grained continuous monitoring platforms and become one of the enabling technologies for disaster early-warning systems. Event detection functionality of WSNs can be of great help and importance for (near) real-time detection of, for example, meteorological natural hazards and wild and residential fires. From the data-mining perspective, many real world events exhibit specific patterns, which can be detected by applying machine learning (ML) techniques. In this paper, we introduce ML techniques for distributed event detection in WSNs and evaluate their performance and applicability for early detection of disasters, specifically residential fires. To this end, we present a distributed event detection approach incorporating a novel reputation-based voting and the decision tree and evaluate its performance in terms of detection accuracy and time complexity.", "title": "" }, { "docid": "8e8b199787fcc8bf813037fbc26d1be3", "text": "Recent work on imitation learning has generated policies that reproduce expert behavior from multi-modal data. However, past approaches have focused only on recreating a small number of distinct, expert maneuvers, or have relied on supervised learning techniques that produce unstable policies. This work extends InfoGAIL, an algorithm for multi-modal imitation learning, to reproduce behavior over an extended period of time. Our approach involves reformulating the typical imitation learning setting to include “burn-in demonstrations” upon which policies are conditioned at test time. We demonstrate that our approach outperforms standard InfoGAIL in maximizing the mutual information between predicted and unseen style labels in road scene simulations, and we show that our method leads to policies that imitate expert autonomous driving systems over long time horizons.", "title": "" }, { "docid": "8434630dc54c3015a50d04abba004aca", "text": "Wolfram syndrome, also known by the mnemonic DIDMOAD (diabetes insipidus, diabetes mellitus, optic atrophy and deafness) is a rare progressive neurodegenerative disorder. This syndrome is further divided to WFS1 and WFS2 based on the different genetic molecular basis and clinical features. In this report, we described a known case of Wolfram syndrome requiring anesthesia for cochlear implantation. Moreover, a brief review of molecular genetics and anesthetic considerations are presented.", "title": "" }, { "docid": "9f3e9e7c493b3b62c7ec257a00f43c20", "text": "The wind stroke is a common syndrome in clinical disease; the physicians of past generations accumulated much experience in long-term clinical practice and left abundant literature. Looking from this literature, the physicians of past generations had different cognitions of the wind stroke, especially the concept of wind stroke. 
The connotation of wind stroke differed at different stages, going through a gradually changing process from exogenous disease, true wind stroke, and apoplectic wind stroke to cerebral apoplexy.", "title": "" }, { "docid": "bdaa8b87cdaef856b88b7397ddc77d97", "text": "In artificial neural networks (ANNs), the activation functions most used in practice are the logistic sigmoid function and the hyperbolic tangent function. The activation functions used in ANNs have been said to play an important role in the convergence of the learning algorithms. In this paper, we evaluate the use of different activation functions and suggest the use of three new simple functions, complementary log-log, probit and log-log, as activation functions in order to improve the performance of neural networks. Financial time series were used to evaluate the performance of ANN models using these new activation functions and to compare their performance with some activation functions existing in the literature. This evaluation is performed through two learning algorithms: conjugate gradient backpropagation with Fletcher–Reeves updates and Levenberg–Marquardt.", "title": "" }, { "docid": "d34759a882df6bc482b64530999bcda3", "text": "The Static Single Assignment (SSA) form is a program representation used in many optimizing compilers. The key step in converting a program to SSA form is called φ-placement. Many algorithms for φ-placement have been proposed in the literature, but the relationships between these algorithms are not well understood. In this article, we propose a framework within which we systematically derive (i) properties of the SSA form and (ii) φ-placement algorithms. This framework is based on a new relation called merge which captures succinctly the structure of a program's control flow graph that is relevant to its SSA form. The φ-placement algorithms we derive include most of the ones described in the literature, as well as several new ones. We also evaluate experimentally the performance of some of these algorithms on the SPEC92 benchmarks. Some of the algorithms described here are optimal for a single variable. However, their repeated application is not necessarily optimal for multiple variables. We conclude the article by describing such an optimal algorithm, based on the transitive reduction of the merge relation, for multi-variable φ-placement in structured programs. The problem for general programs remains open.", "title": "" }, { "docid": "7e9dbc7f1c3855972dbe014e2223424c", "text": "Speech disfluencies (filled pauses, repetitions, repairs, and false starts) are pervasive in spontaneous speech. The ability to detect and correct disfluencies automatically is important for effective natural language understanding, as well as to improve speech models in general. Previous approaches to disfluency detection have relied heavily on lexical information, which makes them less applicable when word recognition is unreliable. We have developed a disfluency detection method using decision tree classifiers that use only local and automatically extracted prosodic features. Because the model doesn't rely on lexical information, it is widely applicable even when word recognition is unreliable. The model performed significantly better than chance at detecting four disfluency types. It also outperformed a language model in the detection of false starts, given the correct transcription.
Combining the prosody model with a specialized language model improved accuracy over either model alone for the detection of false starts. Results suggest that a prosody-only model can aid the automatic detection of disfluencies in spontaneous speech.", "title": "" }, { "docid": "d24ca3024b5abc27f6eb2ad5698a320b", "text": "Purpose. To study the fracture behavior of the major habit faces of paracetamol single crystals using microindentation techniques and to correlate this with crystal structure and molecular packing. Methods. Vicker's microindentation techniques were used to measure the hardness and crack lengths. The development of all the major radial cracks was analyzed using the Laugier relationship and fracture toughness values evaluated. Results. Paracetamol single crystals showed severe cracking and fracture around all Vicker's indentations with a limited zone of plastic deformation close to the indent. This is consistent with the material being a highly brittle solid that deforms principally by elastic deformation to fracture rather than by plastic flow. Fracture was associated predominantly with the (010) cleavage plane, but was also observed parallel to other lattice planes including (110), (210) and (100). The cleavage plane (010) had the lowest fracture toughness value, Kc = 0.041MPa m1/2, while the greatest value, Kc = 0.105MPa m1/2; was obtained for the (210) plane. Conclusions. Paracetamol crystals showed severe cracking and fracture because of the highly brittle nature of the material. The fracture behavior could be explained on the basis of the molecular packing arrangement and the calculated attachment energies across the fracture planes.", "title": "" } ]
scidocsrr
d0e78e9ca94c071572481a37bbeda677
Tempo And Beat Estimation Of Musical Signals
[ { "docid": "05cf044dcb3621a0190403a7961ecb00", "text": "This paper describes a real-time beat tracking system that recognizes a hierarchical beat structure comprising the quarter-note, half-note, and measure levels in real-world audio signals sampled from popular-music compact discs. Most previous beat-tracking systems dealt with MIDI signals and had difficulty in processing, in real time, audio signals containing sounds of various instruments and in tracking beats above the quarter-note level. The system described here can process music with drums and music without drums and can recognize the hierarchical beat structure by using three kinds of musical knowledge: of onset times, of chord changes, and of drum patterns. This paper also describes several applications of beat tracking, such as beat-driven real-time computer graphics and lighting control.", "title": "" }, { "docid": "01bb8e6af86aa1545958a411653e014c", "text": "Estimating the tempo of a musical piece is a complex problem, which has received an increasing amount of attention in the past few years. The problem consists of estimating the number of beats per minute (bpm) at which the music is played and identifying exactly when these beats occur. Commercial devices already exist that attempt to extract a musical instrument digital interface (MIDI) clock from an audio signal, indicating both the tempo and the actual location of the beat. Such MIDI clocks can then be used to synchronize other devices (such as drum machines and audio effects) to the audio source, enabling a new range of \" beat-synchronized \" audio processing. Beat detection can also simplify the usually tedious process of manipulating audio material in audio-editing software. Cut and paste operations are made considerably easier if markers are positioned at each beat or at bar boundaries. Looping a drum track over two bars becomes trivial once the location of the beats is known. A third range of applications is the fairly new area of automatic playlist generation, where a computer is given the task to choose a series of audio tracks from a track database in a way similar to what a human deejay would do. The track tempo is a very important selection criterion in this context , as deejays will tend to string tracks with similar tempi back to back. Furthermore, deejays also tend to perform beat-synchronous crossfading between successive tracks manually, slowing down or speeding up one of the tracks so that the beats in the two tracks line up exactly during the crossfade. This can easily be done automatically once the beats are located in the two tracks. The tempo detection systems commercially available appear to be fairly unsophisticated, as they rely mostly on the presence of a strong and regular bass-drum kick at every beat, an assumption that holds mostly with modern musical genres such as techno or drums and bass. For music with a less pronounced tempo such techniques fail miserably and more sophisticated algorithms are needed. This paper describes an off-line tempo detection algorithm , able to estimate a time-varying tempo from an audio track stored, for example, on an audio CD or on a computer hard disk. The technique works in three successive steps: 1) an \" energy flux \" signal is extracted from the track, 2) at each tempo-analysis time, several …", "title": "" } ]
[ { "docid": "6db5f103fa479fc7c7c33ea67d7950f6", "text": "Problem statement: To design, implement, and test an algorithm for so lving the square jigsaw puzzle problem, which has many applications in image processing, pattern recognition, and computer vision such as restoration of archeologica l artifacts and image descrambling. Approach: The algorithm used the gray level profiles of border pi xels for local matching of the puzzle pieces, which was performed using dynamic programming to facilita te non-rigid alignment of pixels of two gray level profiles. Unlike the classical best-first sea rch, the algorithm simultaneously located the neigh bors of a puzzle piece during the search using the wellknown Hungarian procedure, which is an optimal assignment procedure. To improve the search for a g lobal solution, every puzzle piece was considered as starting piece at various starting locations. Results: Experiments using four well-known images demonstrated the effectiveness of the proposed appr o ch over the classical piece-by-piece matching approach. The performance evaluation was based on a new precision performance measure. For all four test images, the proposed algorithm achieved 1 00% precision rate for puzzles up to 8×8. Conclusion: The proposed search mechanism based on simultaneou s all cation of puzzle pieces using the Hungarian procedure provided better performance than piece-by-piece used in classical methods.", "title": "" }, { "docid": "88a15c0efdfeba3e791ea88862aee0c3", "text": "Logic-based approaches to legal problem solving model the rule-governed nature of legal argumentation, justification, and other legal discourse but suffer from two key obstacles: the absence of efficient, scalable techniques for creating authoritative representations of legal texts as logical expressions; and the difficulty of evaluating legal terms and concepts in terms of the language of ordinary discourse. Data-centric techniques can be used to finesse the challenges of formalizing legal rules and matching legal predicates with the language of ordinary parlance by exploiting knowledge latent in legal corpora. However, these techniques typically are opaque and unable to support the rule-governed discourse needed for persuasive argumentation and justification. This paper distinguishes representative legal tasks to which each approach appears to be particularly well suited and proposes a hybrid model that exploits the complementarity of each.", "title": "" }, { "docid": "c711fa74e32891553404b989c1ee1b44", "text": "This paper presents a fully actuated UAV platform with a nonparallel design. Standard multirotor UAVs equipped with a number of parallel thrusters would result in underactuation. Fighting horizontal wind would require the robot to tilt its whole body toward the direction of the wind. We propose a hexrotor UAV with nonparallel thrusters which results in faster response to disturbances for precision position keeping. A case study is presented to show that hexrotor with a nonparallel design takes less time to resist wind gust than a standard design. We also give the results of a staged peg-in-hole task that measures the rising time of exerting forces using different actuation mechanisms.", "title": "" }, { "docid": "faa5037145abef48d2acf5435df97bf2", "text": "This clinical report describes the rehabilitation of a patient with a history of mandibulectomy that involved the use of a fibula free flap and an implant-supported fixed complete denture. 
A recently introduced material, polyetherketoneketone (PEKK), was used as the framework material for the prosthesis, and the treatment produced favorable esthetic and functional results.", "title": "" }, { "docid": "1ee352ff083da1f307674414a5640d64", "text": "The present article examines personality as a predictor of college achievement beyond the traditional predictors of high school grades (HSGPA) and SAT scores. In an undergraduate sample (N=131), self and informant-rated conscientiousness using the Big Five Inventory (BFI; John, Donahue & Kentle, 1991) robustly correlated with academic achievement as indexed by both freshman GPA and senior GPA. A model including traditional predictors and informant ratings of conscientiousness accounted for 18% of the variance in freshman GPA and 37% of the variance in senior GPA; conscientiousness alone explained unique variance in senior GPA beyond the traditional predictors, even when freshman GPA was included in the model. Conscientiousness is a valid and unique predictor of college performance, and informant ratings may be useful in its assessment for this purpose. Acquaintance reports of personality and academic achievement: A case for conscientiousness The question of what makes a good student “good” lies at the core of a socially-relevant discussion of college admissions criteria. While past research has shown personality variables to be related to school performance (e.g. Costa & McCrae, 1992; De Raad, 1996), academic achievement is still widely assumed to be more a function of intellectual ability than personality. The purpose of this study is to address two ambiguities that trouble past research in this area: the choice of conceptually appropriate outcome measures and the overuse of self-report data. A highly influential meta-analysis by Barrick and Mount (1991) concluded that conscientiousness is a robust and valid predictor of job performance across all criteria and occupations. Soon after, Costa and McCrae (1992) provided evidence that conscientiousness is likewise related to academic performance. This finding has been replicated by others (recently, Chamorro-Premuzic & Farnham, 2003a and 2003b). Moreover, conscientiousness appears to be free of some of the unwanted complications associated with ability as assessed by the SAT: Hogan and Hogan (1995) reported that personality inventories generally do not systematically discriminate against any ethnic or national group, and thus may offer more equitable bases for selection (see also John, et al., 1991). Still, skepticism remains. Farsides and Woodfield (2003) called the relationship between personality variables and academic performance in previous literature “erratic and, where present, modest” (p. 1229). Green, Peters and Webster (1991) found academic success only weakly associated with personality factors; Rothstein, Paunonen, Rush and King (1994) found that the Big Five factors failed to significantly predict academic performance criteria among a sample of MBA students; Allik and Realo (1997) and Diseth (2003) found most of the valid variance in achievement to be unrelated to personality.
The current study seeks to address two pervasive obstructions to conceptual clarity in the previous literature: 1) Lack of consistency in the measurement of “academic achievement.” Past studies have used individual exam grades, final grades in a single course, semester GPA, year-end GPA, GPA at the time of the study, or variables such as attendance or participation. The present study uses concrete and consequential outcomes: freshman cumulative GPA (fGPA; the measure most commonly employed in previous research) and senior cumulative GPA (sGPA; a final, more comprehensive measure of college success). 2) Near-exclusive use of self-report personality measures. Reliance on self-reports can be problematic because what one believes of oneself may or may not be an accurate or complete assessment of one’s true strengths and weaknesses. Thus, the present research utilizes ratings of personality provided by the self and by informants. As the personality inventories used in these analyses were administered up to three years prior to the measurement of the dependent variable, finding a meaningful relationship between the two will provide evidence that one’s traits – evaluated by someone else and a number of years in the past! – are consistent enough to serve as useful predictors of a real and important outcome. Within the confines of these parameters, and based upon previous literature, it is hypothesized that: conscientiousness will fail to show the mean differences in ethnicity problematic of SAT scores; both self- and informant-rated conscientiousness will be positively and significantly related to both outcome measures; and finally, conscientiousness will be capable of explaining incremental variance in both outcome measures beyond what is accounted for by the traditional predictors. Method Participants This study examined the predictors of academic achievement in a sample of 131 target participants (54.2% female, 45.8% male), who were those among an original sample of 217 undergraduates with sufficiently complete data for the present analyses, as described below. Due to the “minority majority” status of the UCR campus population, the diverse sample included substantial proportions of Asians or Asian Americans (43.5%), Hispanics or Latin Americans (19.8%), Caucasians (16.0%), African Americans (12.9%), and students of other ethnic descent (7.6%). The study also includes 258 informants who described participants with whom they were acquainted. Each target participant and informant was paid $10 per hour. The larger data set was originally designed to explore issues of accuracy in personality judgment. Other analyses, completed and planned (see Letzring, Block & Funder, 2004; Letzring, Wells & Funder, in press; Vazire & Funder, 2006), address different topics and do not overlap with those in the current study. Targets & Informants To deal with missing data, all participants in the larger sample who were lacking any one of the predictor variables (SAT score, HSGPA, or either self- or informant-rated Conscientiousness) were dropped (reducing the N from 217 to 153 at this stage of selection). Among the remaining participants, 21 were missing sGPA (i.e., had not yet graduated at the time the GPAs were collected from the University) but had a junior-level GPA; for these, a regression using junior GPA to predict sGPA was performed (r = 0.96 between the two) and the resulting score was imputed.
22 participants had neither sGPA nor a junior GPA; these last were dropped, leaving the final N = 131 for target participants. Means and standard deviations for both the dependent and predictor variables in this smaller sample were comparable to those of the larger group from which they were drawn. Each participant provided contact information for two people who knew him or her best and would be willing to provide information about him or her. 127 participants in our target sample recruited the requested 2 informants, while 4 participants recruited only 1, for a total of 258 informants. Measures Traditional Predictors Participants completed a release form granting access to their academic records; HSGPA and SAT scores were later obtained from the UCR Registrar’s Office. The Registrar provided either an SAT score or an SAT score converted from an American College Testing (ACT) score. Only the total score (rather than the separate verbal/quantitative sub-scores) was used. Personality In order to assess traits at a global level, participants provided self-reports and informants provided peer ratings using the Big Five Inventory (BFI; John, Donahue & Kentle, 1991), which assesses extraversion, agreeableness, conscientiousness, neuroticism, and openness to experience. BFI-scale reliabilities and other psychometric properties have been shown to be similar to those of the much longer scales of Costa and McCrae’s (1990) NEO-FFI (John, et al. 1991). Where two informants were available (all but 4 cases), a composite of their ratings was created by averaging the conscientiousness scale scores. Reliability of the averaged informants’ conscientiousness rating was .59. Academic performance Cumulative fGPA and sGPA were collected from the campus Registrar. While the data collection phase of the original, larger project began a few years before the analyses completed for this study and all of the participants had progressed in their academic standing, not all of them had yet completed their senior year. Participants missing GPA data were handled as described above. Results Analyses examined mean differences among ethnic groups and correlations between each of the potential predictors and the two outcome measures. A final set of analyses entered the predictor variables into hierarchical regressions predicting GPA. Descriptive Statistics Mean differences by ethnicity in HSGPA, SAT scores, and BFI scores were examined with one-way ANOVAs (see Table 1). Members of the different groups were admitted to UCR with approximately the same incoming HSGPA (M = 3.51) and very little variation (SD = 0.37), F(4, 126) = 0.68, p = 0.609. There was, however, a significant difference between ethnicities in their entering SAT scores, F(4, 126) = 5.56, p = 3.7 x 10, with Caucasians the highest and African Americans the lowest. As predicted, there were no significant differences in conscientiousness across ethnicities. Correlations There were no significant correlations between gender and any of the variables included in this study. HSGPA and SAT scores – the two traditional predictors – are only modestly related in this sample: r(131) = 0.12, n.s., indicating that they are independently capable of explaining variance in college GPA. sGPA, containing all the variance of fGPA, is thus well correlated with it, r(131) = 0.68, p < .05.
Correlations between academic performance and the hypothesized predictors of performance (HSGPA, SAT scores, and conscientiousness) are presented in Table 2. While the traditional ", "title": "" }, { "docid": "881325bbeb485fc405c2cb77f9a12dfb", "text": "Drawing on social capital theory, this study examined whether college students’ self-disclosure on a social networking site was directly associated with social capital, or related indirectly through the degree of positive feedback students got from Internet friends. Structural equation models applied to anonymous, self-report survey data from 264 first-year students at 3 universities in Beijing, China, indicated direct effects on bridging social capital and indirect effects on bonding social capital. Effects remained significant, though modest in magnitude, after controlling for social skills level. Findings suggest ways in which social networking sites can foster social adjustment as an adolescent transition to residential col-", "title": "" }, { "docid": "095dbdc1ac804487235cdd0aeffe8233", "text": "Sentiment analysis is the task of identifying whether the opinion expressed in a document is positive or negative about a given topic. Unfortunately, many of the potential applications of sentiment analysis are currently infeasible due to the huge number of features found in standard corpora. In this paper we systematically evaluate a range of feature selectors and feature weights with both Naïve Bayes and Support Vector Machine classifiers. This includes the introduction of two new feature selection methods and three new feature weighting methods. Our results show that it is possible to maintain a state-of-the art classification accuracy of 87.15% while using less than 36% of the features.", "title": "" }, { "docid": "46200c35a82b11d989c111e8398bd554", "text": "A physics-based compact gallium nitride power semiconductor device model is presented in this work, which is the first of its kind. The model derivation is based on the classical drift-diffusion model of carrier transport, which expresses the channel current as a function of device threshold voltage and externally applied electric fields. The model is implemented in the Saber® circuit simulator using the MAST hardware description language. The model allows the user to extract the parameters from the dc I-V and C-V characteristics that are also available in the device datasheets. A commercial 80 V EPC GaN HEMT is used to demonstrate the dynamic validation of the model against the transient device characteristics in a double-pulse test and a boost converter circuit configuration. The simulated versus measured device characteristics show good agreement and validate the model for power electronics design and applications using the next generation of GaN HEMT devices.", "title": "" }, { "docid": "88afb98c0406d7c711b112fbe2a6f25e", "text": "This paper provides a new metric, knowledge management performance index (KMPI), for assessing the performance of a firm in its knowledge management (KM) at a point in time. Firms are assumed to have always been oriented toward accumulating and applying knowledge to create economic value and competitive advantage. We therefore suggest the need for a KMPI which we have defined as a logistic function having five components that can be used to determine the knowledge circulation process (KCP): knowledge creation, knowledge accumulation, knowledge sharing, knowledge utilization, and knowledge internalization.
When KCP efficiency increases, KMPI will also expand, enabling firms to become knowledgeintensive. To prove KMPI’s contribution, a questionnaire survey was conducted on 101 firms listed in the KOSDAQ market in Korea. We associated KMPI with three financial measures: stock price, price earnings ratio (PER), and R&D expenditure. Statistical results show that the proposed KMPI can represent KCP efficiency, while the three financial performance measures are also useful. # 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "117242011595e9c7de501a2360199a48", "text": "This paper proposes a supervised learning approach to jointly perform facial Action Unit (AU) localisation and intensity estimation. Contrary to previous works that try to learn an unsupervised representation of the Action Unit regions, we propose to directly and jointly estimate all AU intensities through heatmap regression, along with the location in the face where they cause visible changes. Our approach aims to learn a pixel-wise regression function returning a score per AU, which indicates an AU intensity at a given spatial location. Heatmap regression then generates an image, or channel, per AU, in which each pixel indicates the corresponding AU intensity. To generate the ground-truth heatmaps for a target AU, the facial landmarks are first estimated, and a 2D Gaussian is drawn around the points where the AU is known to cause changes. The amplitude and size of the Gaussian is determined by the intensity of the AU. We show that using a single Hourglass network suffices to attain new state of the art results, demonstrating the effectiveness of such a simple approach. The use of heatmap regression allows learning of a shared representation between AUs without the need to rely on latent representations, as these are implicitly learned from the data. We validate the proposed approach on the BP4D dataset, showing a modest improvement on recent, complex, techniques, as well as robustness against misalignment errors. Code for testing and models will be available to download from https://github.com/ESanchezLozano/ Action-Units-Heatmaps.", "title": "" }, { "docid": "353761bae5088e8ee33025fc04695297", "text": " Land use can exert a powerful influence on ecological systems, yet our understanding of the natural and social factors that influence land use and land-cover change is incomplete. We studied land-cover change in an area of about 8800 km2 along the lower part of the Wisconsin River, a landscape largely dominated by agriculture. Our goals were (a) to quantify changes in land cover between 1938 and 1992, (b) to evaluate the influence of abiotic and socioeconomic variables on land cover in 1938 and 1992, and (c) to characterize the major processes of land-cover change between these two points in time. The results showed a general shift from agricultural land to forest. Cropland declined from covering 44% to 32% of the study area, while forests and grassland both increased (from 32% to 38% and from 10% to 14% respectively). Multiple linear regressions using three abiotic and two socioeconomic variables captured 6% to 36% of the variation in land-cover categories in 1938 and 9% to 46% of the variation in 1992. Including socioeconomic variables always increased model performance. Agricultural abandonment and a general decline in farming intensity were the most important processes of land-cover change among the processes considered. 
Areas characterized by the different processes of land-cover change differed in the abiotic and socioeconomic variables that had explanatory power and can be distinguished spatially. Understanding the dynamics of landscapes dominated by human impacts requires methods to incorporate socioeconomic variables and anthropogenic processes in the analyses. Our method of hypothesizing and testing major anthropogenic processes may be a useful tool for studying the dynamics of cultural landscapes.", "title": "" }, { "docid": "db637c4e90111ebe0218fa4ccc2ce759", "text": "Existing datasets for natural language inference (NLI) have propelled research on language understanding. We propose a new method for automatically deriving NLI datasets from the growing abundance of largescale question answering datasets. Our approach hinges on learning a sentence transformation model which converts question-answer pairs into their declarative forms. Despite being primarily trained on a single QA dataset, we show that it can be successfully applied to a variety of other QA resources. Using this system, we automatically derive a new freely available dataset of over 500k NLI examples (QA-NLI), and show that it exhibits a wide range of inference phenomena rarely seen in previous NLI datasets.", "title": "" }, { "docid": "3823f92483c12b7ff9c7b5b9a020088f", "text": "This paper addresses the credit card fraud detection problem in the context of Big Data, based on machine learning techniques. In the fraud detection task, typically the available datasets for ML training present some peculiarities, such as the unavoidable condition of a strong class imbalance, the existence of unlabeled transactions, and the large number of records that must be processed. The present paper aims to propose a methodology for automatic detection of fraudulent transactions, that tackle all these problems. The methodology is based on a Balanced Random Forest, that can be used in supervised and semi-supervised scenarios through a co-training approach. Two different schemes for the co-training approach are tested, in order to overcome the class imbalance problem. Moreover, a Spark platform and Hadoop file system support our solution, in order to enable the scalability of the proposed solution. The proposed approach achieves an absolute improvement of around 24% in terms of geometric mean in comparison to a standard random forest learning strategy.", "title": "" }, { "docid": "a530b9b997f6e471f74beca325038067", "text": "Do you remember insult sword fi ghting in Monkey Island? The moment when you got off the elevator in the fourth mission of Call of Duty: Modern Warfare 2? Your romantic love affair with Leliana or Alistair in Dragon Age? Dancing as Madison for Paco in his nightclub in Heavy Rain? Climbing and fi ghting Cronos in God of War 3? Some of the most memorable moments from successful video games, have a strong emotional impact on us. It is only natural that game designers and user researchers are seeking methods to better understand the positive and negative emotions that we feel when we are playing games. While game metrics provide excellent methods and techniques to infer behavior from the interaction of the player in the virtual game world, they cannot infer or see emotional signals of a player. Emotional signals are observable changes in the state of the human player, such as facial expressions, body posture, or physiological changes in the player’s body. 
The human eye can observe facial expression, gestures or human sounds that could tell us how a player is feeling, but covert physiological changes are only revealed to us when using sensor equipment, such as", "title": "" }, { "docid": "83b79fc95e90a303f29a44ef8730a93f", "text": "Internet of Things (IoT) is a concept that envisions all objects around us as part of internet. IoT coverage is very wide and includes variety of objects like smart phones, tablets, digital cameras and sensors. Once all these devices are connected to each other, they enable more and more smart processes and services that support our basic needs, environment and health. Such enormous number of devices connected to internet provides many kinds of services. They also produce huge amount of data and information. Cloud computing is one such model for on-demand access to a shared pool of configurable resources (computer, networks, servers, storage, applications, services, and software) that can be provisioned as infrastructures ,software and applications. Cloud based platforms help to connect to the things around us so that we can access anything at any time and any place in a user friendly manner using customized portals and in built applications. Hence, cloud acts as a front end to access IoT. Applications that interact with devices like sensors have special requirements of massive storage to store big data, huge computation power to enable the real time processing of the data, information and high speed network to stream audio or video. Here we have describe how Internet of Things and Cloud computing can work together can address the Big Data problems. We have also illustrated about Sensing as a service on cloud using few applications like Augmented Reality, Agriculture, Environment monitoring,etc. Finally, we propose a prototype model for providing sensing as a service on cloud.", "title": "" }, { "docid": "79041480e35083e619bd804423459f2b", "text": "Dynamic pricing is the dynamic adjustment of prices to consumers depending upon the value these customers attribute to a product or service. Today’s digital economy is ready for dynamic pricing; however recent research has shown that the prices will have to be adjusted in fairly sophisticated ways, based on sound mathematical models, to derive the benefits of dynamic pricing. This article attempts to survey different models that have been used in dynamic pricing. We first motivate dynamic pricing and present underlying concepts, with several examples, and explain conditions under which dynamic pricing is likely to succeed. We then bring out the role of models in computing dynamic prices. The models surveyed include inventory-based models, data-driven models, auctions, and machine learning. We present a detailed example of an e-business market to show the use of reinforcement learning in dynamic pricing.", "title": "" }, { "docid": "d5e5d79b8a06d4944ee0c3ddcd84ce4c", "text": "Recent years have observed a significant progress in information retrieval and natural language processing with deep learning technologies being successfully applied into almost all of their major tasks. The key to the success of deep learning is its capability of accurately learning distributed representations (vector representations or structured arrangement of them) of natural language expressions such as sentences, and effectively utilizing the representations in the tasks. 
This tutorial aims at summarizing and introducing the results of recent research on deep learning for information retrieval, in order to stimulate and foster more significant research and development work on the topic in the future.\n The tutorial mainly consists of three parts. In the first part, we introduce the fundamental techniques of deep learning for natural language processing and information retrieval, such as word embedding, recurrent neural networks, and convolutional neural networks. In the second part, we explain how deep learning, particularly representation learning techniques, can be utilized in fundamental NLP and IR problems, including matching, translation, classification, and structured prediction. In the third part, we describe how deep learning can be used in specific application tasks in details. The tasks are search, question answering (from either documents, database, or knowledge base), and image retrieval.", "title": "" }, { "docid": "9a758183aa6bf6ee8799170b5a526e7e", "text": "The field of serverless computing has recently emerged in support of highly scalable, event-driven applications. A serverless application is a set of stateless functions, along with the events that should trigger their activation. A serverless runtime allocates resources as events arrive, avoiding the need for costly pre-allocated or dedicated hardware. \nWhile an attractive economic proposition, serverless computing currently lags behind the state of the art when it comes to function composition. This paper addresses the challenge of programming a composition of functions, where the composition is itself a serverless function. \nWe demonstrate that engineering function composition into a serverless application is possible, but requires a careful evaluation of trade-offs. To help in evaluating these trade-offs, we identify three competing constraints: functions should be considered as black boxes; function composition should obey a substitution principle with respect to synchronous invocation; and invocations should not be double-billed. \nFurthermore, we argue that, if the serverless runtime is limited to a reactive core, i.e. one that deals only with dispatching functions in response to events, then these constraints form the serverless trilemma. Without specific runtime support, compositions-as-functions must violate at least one of the three constraints. \nFinally, we demonstrate an extension to the reactive core of an open-source serverless runtime that enables the sequential composition of functions in a trilemma-satisfying way. We conjecture that this technique could be generalized to support other combinations of functions.", "title": "" }, { "docid": "7e6bc406394f5621b02acb9f0187667f", "text": "A model predictive control (MPC) approach to active steering is presented for autonomous vehicle systems. The controller is designed to stabilize a vehicle along a desired path while rejecting wind gusts and fulfilling its physical constraints. Simulation results of a side wind rejection scenario and a double lane change maneuver on slippery surfaces show the benefits of the systematic control methodology used. A trade-off between the vehicle speed and the required preview on the desired path for vehicle stabilization is highlighted", "title": "" }, { "docid": "a68ccab91995603b3dbb54e014e79091", "text": "Qualitative models arising in artificial intelligence domain often concern real systems that are difficult to represent with traditional means. 
However, some promise for dealing with such systems is offered by research in simulation methodology. Such research produces models that combine both continuous and discrete-event formalisms. Nevertheless, the aims and approaches of the AI and the simulation communities remain rather mutually ill understood. Consequently, there is a need to bridge theory and methodology in order to have a uniform language when either analyzing or reasoning about physical systems. This article introduces a methodology and formalism for developing multiple, cooperative models of physical systems of the type studied in qualitative physics. The formalism combines discrete-event and continuous models and offers an approach to building intelligent machines capable of physical modeling and reasoning.", "title": "" } ]
scidocsrr
fbe7cfaf6c9981468179c6654f36d700
Validating viral marketing strategies in Twitter via agent-based social simulation
[ { "docid": "1991322dce13ee81885f12322c0e0f79", "text": "The quality of the interpretation of the sentiment in the online buzz in the social media and the online news can determine the predictability of financial markets and cause huge gains or losses. That is why a number of researchers have turned their full attention to the different aspects of this problem lately. However, there is no well-rounded theoretical and technical framework for approaching the problem to the best of our knowledge. We believe the existing lack of such clarity on the topic is due to its interdisciplinary nature that involves at its core both behavioral-economic topics as well as artificial intelligence. We dive deeper into the interdisciplinary nature and contribute to the formation of a clear frame of discussion. We review the related works that are about market prediction based on onlinetext-mining and produce a picture of the generic components that they all have. We, furthermore, compare each system with the rest and identify their main differentiating factors. Our comparative analysis of the systems expands onto the theoretical and technical foundations behind each. This work should help the research community to structure this emerging field and identify the exact aspects which require further research and are of special significance. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "b4d58813c09030e1c68b4fb573d45389", "text": "With the empirical evidence that Twitter influences the financial market, there is a need for a bottom-up approach focusing on individual Twitter users and their message propagation among a selected Twitter community with regard to the financial market. This paper presents an agent-based simulation framework to model the Twitter network growth and message propagation mechanism in the Twitter financial community. Using the data collected through the Twitter API, the model generates a dynamic community network with message propagation rates by different agent types. The model successfully validates against the empirical characteristics of the Twitter financial community in terms of network demographics and aggregated message propagation pattern. Simulation of the 2013 Associated Press hoax incident demonstrates that removing critical nodes of the network (users with top centrality) dampens the message propagation process linearly and critical node of the highest betweenness centrality has the optimal effect in reducing the spread of the malicious message to lesser ratio of the community.", "title": "" } ]
[ { "docid": "eb3e3c7b255cd8f86a4b02d1f2c23a83", "text": "Style transfer is a process of migrating a style from a given image to the content of another, synthesizing a new image, which is an artistic mixture of the two. Recent work on this problem adopting convolutional neural-networks (CNN) ignited a renewed interest in this field, due to the very impressive results obtained. There exists an alternative path toward handling the style transfer task, via the generalization of texture synthesis algorithms. This approach has been proposed over the years, but its results are typically less impressive compared with the CNN ones. In this paper, we propose a novel style transfer algorithm that extends the texture synthesis work of Kwatra et al. (2005), while aiming to get stylized images that are closer in quality to the CNN ones. We modify Kwatra’s algorithm in several key ways in order to achieve the desired transfer, with emphasis on a consistent way for keeping the content intact in selected regions, while producing hallucinated and rich style in others. The results obtained are visually pleasing and diverse, shown to be competitive with the recent CNN style transfer algorithms. The proposed algorithm is fast and flexible, being able to process any pair of content + style images.", "title": "" }, { "docid": "363dc30dbf42d5309366ec109c445c48", "text": "There has been significant recent interest in fast imaging with sparse sampling. Conventional imaging methods are based on Shannon-Nyquist sampling theory. As such, the number of required samples often increases exponentially with the dimensionality of the image, which limits achievable resolution in high-dimensional scenarios. The partially-separable function (PSF) model has previously been proposed to enable sparse data sampling in this context. Existing methods to leverage PSF structure utilize tailored data sampling strategies, which enable a specialized two-step reconstruction procedure. This work formulates the PSF reconstruction problem using the matrix-recovery framework. The explicit matrix formulation provides new opportunities for data acquisition and image reconstruction with rank constraints. Theoretical results from the emerging field of low-rank matrix recovery (which generalizes theory from sparse-vector recovery) and our empirical results illustrate the potential of this new approach.", "title": "" }, { "docid": "06e50887ddec8b0e858173499ce2ee11", "text": "Over the last few years, we've seen a plethora of Internet of Things (IoT) solutions, products, and services make their way into the industry's marketplace. All such solutions will capture large amounts of data pertaining to the environment as well as their users. The IoT's objective is to learn more and better serve system users. Some IoT solutions might store data locally on devices (\"things\"), whereas others might store it in the cloud. The real value of collecting data comes through data processing and aggregation on a large scale, where new knowledge can be extracted. However, such procedures can lead to user privacy issues. This article discusses some of the main challenges of privacy in the IoT as well as opportunities for research and innovation. The authors also introduce some of the ongoing research efforts that address IoT privacy issues.", "title": "" }, { "docid": "be8b65d39ee74dbee0835052092040da", "text": "We examine the problem of question answering over knowledge graphs, focusing on simple questions that can be answered by the lookup of a single fact. 
Adopting a straightforward decomposition of the problem into entity detection, entity linking, relation prediction, and evidence combination, we explore simple yet strong baselines. On the popular SIMPLEQUESTIONS dataset, we find that basic LSTMs and GRUs plus a few heuristics yield accuracies that approach the state of the art, and techniques that do not use neural networks also perform reasonably well. These results show that gains from sophisticated deep learning techniques proposed in the literature are quite modest and that some previous models exhibit unnecessary complexity.", "title": "" }, { "docid": "83856fb0a5e53c958473fdf878b89b20", "text": "Due to the expensive nature of an industrial robot, not all universities are equipped with areal robots for students to operate. Learning robotics without accessing to an actual robotic system has proven to be difficult for undergraduate students. For instructors, it is also an obstacle to effectively teach fundamental robotic concepts. Virtual robot simulator has been explored by many researchers to create a virtual environment for teaching and learning. This paper presents structure of a course project which requires students to develop a virtual robot simulator. The simulator integrates concept of kinematics, inverse kinematics and controls. Results show that this approach assists and promotes better students‟ understanding of robotics.", "title": "" }, { "docid": "f0d55892fb927c5c5324cfb7b8380bda", "text": "The paper presents application of data mining methods for recognizing the most significant genes and gene sequences (treated as features) stored in a dataset of gene expression microarray. The investigations are performed for autism data. Few chosen methods of feature selection have been applied and their results integrated in the final outcome. In this way we find the contents of small set of the most important genes associated with autism. They have been applied in the classification procedure aimed on recognition of autism from reference group members. The results of numerical experiments concerning selection of the most important genes and classification of the cases on the basis of the selected genes will be discussed. The main contribution of the paper is in developing the fusion system of the results of many selection approaches into the final set, most closely associated with autism. We have also proposed special procedure of estimating the number of highest rank genes used in classification procedure. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "dfa611e19a3827c66ea863041a3ef1e2", "text": "We study the problem of malleability of Bitcoin transactions. Our first two contributions can be summarized as follows: (i) we perform practical experiments on Bitcoin that show that it is very easy to maul Bitcoin transactions with high probability, and (ii) we analyze the behavior of the popular Bitcoin wallets in the situation when their transactions are mauled; we conclude that most of them are to some extend not able to handle this situation correctly. The contributions in points (i) and (ii) are experimental. We also address a more theoretical problem of protecting the Bitcoin distributed contracts against the “malleability” attacks. It is well-known that malleability can pose serious problems in some of those contracts. It concerns mostly the protocols which use a “refund” transaction to withdraw a financial deposit in case the other party interrupts the protocol. 
Our third contribution is as follows: (iii) we show a general method for dealing with the transaction malleability in Bitcoin contracts. In short: this is achieved by creating a malleability-resilient “refund” transaction which does not require any modification of the Bitcoin protocol.", "title": "" }, { "docid": "e6e1b1e282449e8c75be714ff022ce39", "text": "AIMS\nThe aims of this paper were (1) to raise awareness of the issues in questionnaire development and subsequent psychometric evaluation, and (2) to provide strategies to enable nurse researchers to design and develop their own measure and evaluate the quality of existing nursing measures.\n\n\nBACKGROUND\nThe number of questionnaires developed by nurses has increased in recent years. While the rigour applied to the questionnaire development process may be improving, we know that nurses are still not generally adept at the psychometric evaluation of new measures. This paper explores the process by which a reliable and valid questionnaire can be developed.\n\n\nMETHODS\nWe critically evaluate the theoretical and methodological issues associated with questionnaire design and development and present a series of heuristic decision-making strategies at each stage of such development. The range of available scales is presented and we discuss strategies to enable item generation and development. The importance of stating a priori the number of factors expected in a prototypic measure is emphasized. Issues of reliability and validity are explored using item analysis and exploratory factor analysis and illustrated using examples from recent nursing research literature.\n\n\nCONCLUSION\nQuestionnaire design and development must be supported by a logical, systematic and structured approach. To aid this process we present a framework that supports this and suggest strategies to demonstrate the reliability and validity of the new and developing measure.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nIn developing the evidence base of nursing practice using this method of data collection, it is vital that questionnaire design incorporates preplanned methods to establish reliability and validity. Failure to develop a questionnaire sufficiently may lead to difficulty interpreting results, and this may impact upon clinical or educational practice. This paper presents a critical evaluation of the questionnaire design and development process and demonstrates good practice at each stage of this process.", "title": "" }, { "docid": "f320e7f092040e72de062dc8203bbcfb", "text": "This research provides a security assessment of the Android framework-Google's software stack for mobile devices. The authors identify high-risk threats to the framework and suggest several security solutions for mitigating them.", "title": "" }, { "docid": "c62bc7391e55d66c9e27befe81446ebe", "text": "Opaque predicates have been widely used to insert superfluous branches for control flow obfuscation. Opaque predicates can be seamlessly applied together with other obfuscation methods such as junk code to turn reverse engineering attempts into arduous work. Previous efforts in detecting opaque predicates are far from mature. They are either ad hoc, designed for a specific problem, or have a considerably high error rate. This paper introduces LOOP, a Logic Oriented Opaque Predicate detection tool for obfuscated binary code. 
Being different from previous work, we do not rely on any heuristics; instead we construct general logical formulas, which represent the intrinsic characteristics of opaque predicates, by symbolic execution along a trace. We then solve these formulas with a constraint solver. The result accurately answers whether the predicate under examination is opaque or not. In addition, LOOP is obfuscation resilient and able to detect previously unknown opaque predicates. We have developed a prototype of LOOP and evaluated it with a range of common utilities and obfuscated malicious programs. Our experimental results demonstrate the efficacy and generality of LOOP. By integrating LOOP with code normalization for matching metamorphic malware variants, we show that LOOP is an appealing complement to existing malware defenses.", "title": "" }, { "docid": "05518ac3a07fdfb7bfede8df8a7a500b", "text": "The prevalence of food allergy is rising for unclear reasons, with prevalence estimates in the developed world approaching 10%. Knowledge regarding the natural course of food allergies is important because it can aid the clinician in diagnosing food allergies and in determining when to consider evaluation for food allergy resolution. Many food allergies with onset in early childhood are outgrown later in childhood, although a minority of food allergy persists into adolescence and even adulthood. More research is needed to improve food allergy diagnosis, treatment, and prevention.", "title": "" }, { "docid": "189522686d83ff7761afe6e105bec409", "text": "This paper emphasizes on safeguarding the hierarchical structure in wireless sensor network by presenting an Intrusion Detection Technique, which is very useful, simple and works effectively in improving and enhancing the security in wireless sensor Network. This IDS works on the combination of anomaly detection algorithm and intrusion detection nodes. Here all the features and numerous architectures of popular IDS(s) along with their confines and benefits are also being described.", "title": "" }, { "docid": "83991055d207c47bc2d5af0d83bfcf9c", "text": "BACKGROUND\nThe present study aimed at investigating the role of depression and attachment styles in predicting cell phone addiction.\n\n\nMETHODS\nIn this descriptive correlational study, a sample including 100 students of Payame Noor University (PNU), Reyneh Center, Iran, in the academic year of 2013-2014 was selected using volunteer sampling. Participants were asked to complete the adult attachment inventory (AAI), Beck depression inventory-13 (BDI-13) and the cell phone overuse scale (COS).\n\n\nFINDINGS\nResults of the stepwise multiple regression analysis showed that depression and avoidant attachment style were the best predictors of students' cell phone addiction (R(2) = 0.23).\n\n\nCONCLUSION\nThe results of this study highlighted the predictive value of depression and avoidant attachment style concerning students' cell phone addiction.", "title": "" }, { "docid": "e32068682c313637f97718e457914381", "text": "Optimal load shedding is a very critical issue in power systems. It plays a vital role, especially in third world countries. A sudden increase in load can affect the important parameters of the power system like voltage, frequency and phase angle. This paper presents a case study of Pakistan’s power system, where the generated power, the load demand, frequency deviation and load shedding during a 24-hour period have been provided. 
An artificial neural network ensemble is aimed for optimal load shedding. The objective of this paper is to maintain power system frequency stability by shedding an accurate amount of load. Due to its fast convergence and improved generalization ability, the proposed algorithm helps to deal with load shedding in an efficient manner.", "title": "" }, { "docid": "114ec493a4b0b26c643a49bc0cc3c9c7", "text": "Automatic emotion recognition has attracted great interest and numerous solutions have been proposed, most of which focus either individually on facial expression or acoustic information. While more recent research has considered multimodal approaches, individual modalities are often combined only by simple fusion at the feature and/or decision-level. In this paper, we introduce a novel approach using 3-dimensional convolutional neural networks (C3Ds) to model the spatio-temporal information, cascaded with multimodal deep-belief networks (DBNs) that can represent the audio and video streams. Experiments conducted on the eNTERFACE multimodal emotion database demonstrate that this approach leads to improved multimodal emotion recognition performance and significantly outperforms recent state-of-the-art proposals.", "title": "" }, { "docid": "3d301d13d54b0abd5157b4640820ae0a", "text": "Plant hormones regulate many aspects of plant growth and development. Both auxin and cytokinin have been known for a long time to act either synergistically or antagonistically to control several significant developmental processes, such as the formation and maintenance of meristem. Over the past few years, exciting progress has been made to reveal the molecular mechanisms underlying the auxin-cytokinin action and interaction. In this review, we shall briefly discuss the major progress made in auxin and cytokinin biosynthesis, auxin transport, and auxin and cytokinin signaling. The frameworks for the complicated interaction of these two hormones in the control of shoot apical meristem and root apical meristem formation as well as their roles in in vitro organ regeneration are the major focus of this review.", "title": "" }, { "docid": "f69b170e9ccd7f04cbc526373b0ad8ee", "text": "meaning (overall M = 5.89) and significantly higher than with any of the other three abstract meanings (overall M = 2.05, all ps < .001). Procedure. Under a cover story of studying advertising slogans, participants saw one of the 22 target brands and thought about its abstract concept in memory. They were then presented, on a single screen, with four alternative slogans (in random order) for the target brand and were asked to rank the slogans, from 1 (“best”) to 4 (“worst”), in terms of how well the slogan fits the image of the target brand. Each slogan was intended to distinctively communicate the abstract meaning associated with one of the four high-levelmeaning associated with one of the four high-level brand value dimensions uncovered in the pilot study. After a series of filler tasks, participants indicated their attitude toward the brand on a seven-point scale (1 = “very unfavorable,” and 7 = “very favorable”). Ranking of the slogans. We conducted separate nonparametric Kruskal-Wallis tests on each country’s data to evaluate differences in the rank order for each of the four slogans among the four types of brand concepts. 
In all countries, the tests were significant (the United States: all χ2(3, N = 539) ≥ 145.4, all ps < .001; China: all χ2(3, N = 208) ≥ 52.8, all ps < .001; Canada: all χ2(3, N = 380) ≥ 33.3, all ps < .001; Turkey: all χ2(3, N = 380) ≥ 51.0, all ps < .001). We pooled the data from the four countries and conducted follow-up tests to evaluate pairwise differences in the rank order of each slogan among the four brand concepts, controlling for Type I error across tests using the Bonferroni approach. The results of these tests indicated that each slogan was ranked at the top in terms of favorability when it matched the brand concept (self-enhancement brand concept: M(self-enhancement slogan) = 1.77; openness brand [Figure 2: Structural Relations Among Value Dimensions from Multidimensional Scaling (Pilot: Study 1); legend: b = benevolence, t = tradition, c = conformity, sec = security; axes: Self-Enhancement, Individual Concerns vs. Collective Concerns]", "title": "" }, { "docid": "ead5432cb390756a99e4602a9b6266bf", "text": "In this paper, we present a new approach for text localization in natural images, by discriminating text and non-text regions at three levels: pixel, component and text line levels. Firstly, a powerful low-level filter called the Stroke Feature Transform (SFT) is proposed, which extends the widely-used Stroke Width Transform (SWT) by incorporating color cues of text pixels, leading to significantly enhanced performance on inter-component separation and intra-component connection. Secondly, based on the output of SFT, we apply two classifiers, a text component classifier and a text-line classifier, sequentially to extract text regions, eliminating the heuristic procedures that are commonly used in previous approaches. The two classifiers are built upon two novel Text Covariance Descriptors (TCDs) that encode both the heuristic properties and the statistical characteristics of text stokes. Finally, text regions are located by simply thresholding the text-line confident map. Our method was evaluated on two benchmark datasets: ICDAR 2005 and ICDAR 2011, and the corresponding F-measure values are 0.72 and 0.73, respectively, surpassing previous methods in accuracy by a large margin.", "title": "" }, { "docid": "0c509f98c65a48c31d32c0c510b4c13f", "text": "An EM based straight forward design and pattern synthesis technique for series fed microstrip patch array antennas is proposed. An optimization of each antenna element (λ/4-transmission line, λ/2-patch, λ/4-transmission line) of the array is performed separately. By introducing an equivalent circuit along with an EM parameter extraction method, each antenna element can be optimized for its resonance frequency and taper amplitude, so to shape the aperture distribution for the cascaded elements. It will be shown that the array design based on the multiplication of element factor and array factor fails in case of patch width tapering, due to the inconsistency of the element patterns. To overcome this problem a line width tapering is suggested which keeps the element patterns nearly constant while still providing a broad amplitude taper range. 
A symmetric 10 element antenna array with a Chebyshev tapering (-20dB side lobe level) operating at 5.8 GHz has been designed, compared for the two tapering methods and validated with measurement.", "title": "" }, { "docid": "5bb6e93244e976725bc9663c0afe8136", "text": "Video streaming platforms like Twitch.tv or YouNow have attracted the attention of both users and researchers in the last few years. Users increasingly adopt these platforms to share user-generated videos while researchers study their usage patterns to learn how to provide better and new services.", "title": "" } ]
scidocsrr
c99c16b7a14e22ae7b05c3f30fff9491
Financial Cryptography and Data Security
[ { "docid": "f6fc0992624fd3b3e0ce7cc7fc411154", "text": "Digital currencies are a globally spreading phenomenon that is frequently and also prominently addressed by media, venture capitalists, financial and governmental institutions alike. As exchange prices for Bitcoin have reached multiple peaks within 2013, we pose a prevailing and yet academically unaddressed question: What are users' intentions when changing their domestic into a digital currency? In particular, this paper aims at giving empirical insights on whether users’ interest regarding digital currencies is driven by its appeal as an asset or as a currency. Based on our evaluation, we find strong indications that especially uninformed users approaching digital currencies are not primarily interested in an alternative transaction system but seek to participate in an alternative investment vehicle.", "title": "" }, { "docid": "68c1a1fdd476d04b936eafa1f0bc6d22", "text": "Smart contracts are computer programs that can be correctly executed by a network of mutually distrusting nodes, without the need of an external trusted authority. Since smart contracts handle and transfer assets of considerable value, besides their correct execution it is also crucial that their implementation is secure against attacks which aim at stealing or tampering the assets. We study this problem in Ethereum, the most well-known and used framework for smart contracts so far. We analyse the security vulnerabilities of Ethereum smart contracts, providing a taxonomy of common programming pitfalls which may lead to vulnerabilities. We show a series of attacks which exploit these vulnerabilities, allowing an adversary to steal money or cause other damage.", "title": "" }, { "docid": "1315247aa0384097f5f9e486bce09bd4", "text": "We give an overview of the scripting languages used in existing cryptocurrencies, and in particular we review in some detail the scripting languages of Bitcoin, Nxt and Ethereum, in the context of a high-level overview of Distributed Ledger Technology and cryptocurrencies. We survey different approaches, and give an overview of critiques of existing languages. We also cover technologies that might be used to underpin extensions and innovations in scripting and contracts, including technologies for verification, such as zero knowledge proofs, proof-carrying code and static analysis, as well as approaches to making systems more efficient, e.g. Merkelized Abstract Syntax Trees.", "title": "" } ]
[ { "docid": "37f157cdcd27c1647548356a5194f2bc", "text": "Purpose – The aim of this paper is to propose a novel evaluation framework to explore the “root causes” that hinder the acceptance of using internal cloud services in a university. Design/methodology/approach – The proposed evaluation framework incorporates the duo-theme DEMATEL (decision making trial and evaluation laboratory) with TAM (technology acceptance model). The operational procedures were proposed and tested on a university during the post-implementation phase after introducing the internal cloud services. Findings – According to the results, clear understanding and operational ease under the theme perceived ease of use (PEOU) are more imperative; whereas improved usefulness and productivity under the theme perceived usefulness (PU) are more urgent to foster the usage of internal clouds in the case university. Research limitations/implications – Based on the findings, some intervention activities were suggested to enhance the level of users’ acceptance of internal cloud solutions in the case university. However, the results should not be generalized to apply to other educational establishments. Practical implications – To reduce the resistance from using internal clouds, some necessary intervention activities such as developing attractive training programs, creating interesting workshops, and rewriting user friendly manual or handbook are recommended. Originality/value – The novel two-theme DEMATEL has greatly contributed to the conventional one-theme DEMATEL theory. The proposed two-theme DEMATEL procedures were the first attempt to evaluate the acceptance of using internal clouds in university. The results have provided manifest root-causes under two distinct themes, which help derive effectual intervention activities to foster the acceptance of usage of internal clouds in a university.", "title": "" }, { "docid": "106915eaac271c255aef1f1390577c64", "text": "Parking is costly and limited in almost every major city in the world. Innovative parking systems for meeting near-term parking demand are needed. This paper proposes a novel, secure, and intelligent parking system (SmartParking) based on secured wireless network and sensor communication. From the point of users' view, SmartParking is a secure and intelligent parking service. The parking reservation is safe and privacy preserved. The parking navigation is convenient and efficient. The whole parking process will be a non-stop service. From the point of management's view, SmartParking is an intelligent parking system. The parking process can be modeled as birth-death stochastic process and the prediction of revenues can be made. Based on the prediction, new business promotion can be made, for example, on-sale prices and new parking fees. In SmartParking, new promotions can be published through wireless network. We address hardware/software architecture, implementations, and analytical models and results. The evaluation of this proposed system proves its efficiency.", "title": "" }, { "docid": "1d2ffd37c15b41ec5124e4ec4dfbc80c", "text": "Developing transparent predictive analytics has attracted significant research attention recently. There have been multiple theories on how to model learning transparency but none of them aims to understand the internal and often complicated modeling processes. In this paper we adopt a contemporary philosophical concept called \"constructivism\", which is a theory regarding how human learns. 
We hypothesize that a critical aspect of transparent machine learning is to \"reveal\" model construction with two key process: (1) the assimilation process where we enhance our existing learning models and (2) the accommodation process where we create new learning models. With this intuition we propose a new learning paradigm, constructivism learning, using a Bayesian nonparametric model to dynamically handle the creation of new learning tasks. Our empirical study on both synthetic and real data sets demonstrate that the new learning algorithm is capable of delivering higher quality models (as compared to base lines and state-of-the-art) and at the same time increasing the transparency of the learning process.", "title": "" }, { "docid": "0cf9ef0e5e406509f35c0dcd7ea598af", "text": "This paper proposes a method to reduce cogging torque of a single side Axial Flux Permanent Magnet (AFPM) motor according to analysis results of finite element analysis (FEA) method. First, the main cause of generated cogging torque will be studied using three dimensional FEA method. In order to reduce the cogging torque, a dual layer magnet step skewed (DLMSS) method is proposed to determine the shape of dual layer magnets. The skewed angle of magnetic poles between these two layers is determined using equal air gap flux of inner and outer layers. Finally, a single-sided AFPM motor based on the proposed methods is built as experimental platform to verify the effectiveness of the design. Meanwhile, the differences between design and tested results will be analyzed for future research and improvement.", "title": "" }, { "docid": "96bc9c8fa154d8e6cc7d0486c99b43d5", "text": "A Transmission Line Transformer (TLT) can be used to transform high-voltage nanosecond pulses. These transformers rely on the fact that the length of the pulse is shorter than the transmission lines used. This allows connecting the transmission lines in parallel at the input and in series at the output. In the ideal case such structures achieve a voltage gain which equals the number of transmission lines used. To achieve maximum efficiency, mismatch and secondary modes must be suppressed. Here we describe a TLT based on parallel plate transmission lines. The chosen geometry results in a high efficiency, due to good matching and minimized secondary modes. A second advantage of this design is that the electric field strength between the conductors is the same throughout the entire TLT. This makes the design suitable for high voltage applications. To investigate the concept of this TLT design, measurements are done on two different TLT designs. One TLT consists of 4 transmission lines, while the other one has 8 lines. Both designs are constructed of DiBond™. This material consists of a flat polyethylene inner core with an aluminum sheet on both sides. Both TLT's have an input impedance of 3.125 Ω. Their output impedances are 50 and 200 Ω, respectively. The measurements show that, on a matched load, this structure achieves a voltage gain factor of 3.9 when using 4 transmission lines and 7.9 when using 8 lines.", "title": "" }, { "docid": "215d867487afac8ab0641b144f99b312", "text": "Post-marketing surveillance systems rely on spontaneous reporting databases maintained by health regulators to identify safety issues arising from medicines once they are marketed. 
Quantitative safety signal detection methods such as Proportional Reporting ratio (PRR), Reporting Odds Ratio (ROR), Bayesian Confidence Propagation Neural Network (BCPNN), and empirical Bayesian technique are applied to spontaneous reporting data to identify safety signals [1-3]. These methods have been adopted as standard quantitative methods by many pharmaco-surveillance centres to screen for safety signals of medicines [2-5]. Studies have validated these methods and showed that the methods have low to moderate sensitivity to detect adverse drug reaction (ADR) signals, ranging between 28% to 56%, while the specificity of the methods ranged from 82% to 95% [6-8].", "title": "" }, { "docid": "86cb3c072e67bed8803892b72297812c", "text": "Internet of Things (IoT) will comprise billions of devices that can sense, communicate, compute and potentially actuate. Data streams coming from these devices will challenge the traditional approaches to data management and contribute to the emerging paradigm of big data. This paper discusses emerging Internet of Things (IoT) architecture, large scale sensor network applications, federating sensor networks, sensor data and related context capturing techniques, challenges in cloud-based management, storing, archiving and processing of", "title": "" }, { "docid": "81d933a449c0529ab40f5661f3b1afa1", "text": "Scene classification plays a key role in interpreting the remotely sensed high-resolution images. With the development of deep learning, supervised learning in classification of Remote Sensing with convolutional networks (CNNs) has been frequently adopted. However, researchers paid less attention to unsupervised learning in remote sensing with CNNs. In order to filling the gap, this paper proposes a set of CNNs called Multiple lAyeR feaTure mAtching(MARTA) generative adversarial networks (GANs) to learn representation using only unlabeled data. There will be two models of MARTA GANs involved: (1) a generative model G that captures the data distribution and provides more training data; (2) a discriminative model D that estimates the possibility that a sample came from the training data rather than G and in this way a well-formed representation of dataset can be learned. Therefore, MARTA GANs obtain the state-of-the-art results which outperform the results got from UC-Merced Land-use dataset and Brazilian Coffee Scenes dataset.", "title": "" }, { "docid": "513b378c3fc2e2e6f23a406b63dc33a9", "text": "Mining frequent itemsets from the large transactional database is a very critical and important task. Many algorithms have been proposed from past many years, But FP-tree like algorithms are considered as very effective algorithms for efficiently mine frequent item sets. These algorithms considered as efficient because of their compact structure and also for less generation of candidates itemsets compare to Apriori and Apriori like algorithms. Therefore this paper aims to presents a basic Concepts of some of the algorithms (FP-Growth, COFI-Tree, CT-PRO) based upon the FPTree like structure for mining the frequent item sets along with their capabilities and comparisons.", "title": "" }, { "docid": "f3c1ad1431d3aced0175dbd6e3455f39", "text": "BACKGROUND\nMethylxanthine therapy is commonly used for apnea of prematurity but in the absence of adequate data on its efficacy and safety. 
It is uncertain whether methylxanthines have long-term effects on neurodevelopment and growth.\n\n\nMETHODS\nWe randomly assigned 2006 infants with birth weights of 500 to 1250 g to receive either caffeine or placebo until therapy for apnea of prematurity was no longer needed. The primary outcome was a composite of death, cerebral palsy, cognitive delay (defined as a Mental Development Index score of <85 on the Bayley Scales of Infant Development), deafness, or blindness at a corrected age of 18 to 21 months.\n\n\nRESULTS\nOf the 937 infants assigned to caffeine for whom adequate data on the primary outcome were available, 377 (40.2%) died or survived with a neurodevelopmental disability, as compared with 431 of the 932 infants (46.2%) assigned to placebo for whom adequate data on the primary outcome were available (odds ratio adjusted for center, 0.77; 95% confidence interval [CI], 0.64 to 0.93; P=0.008). Treatment with caffeine as compared with placebo reduced the incidence of cerebral palsy (4.4% vs. 7.3%; adjusted odds ratio, 0.58; 95% CI, 0.39 to 0.87; P=0.009) and of cognitive delay (33.8% vs. 38.3%; adjusted odds ratio, 0.81; 95% CI, 0.66 to 0.99; P=0.04). The rates of death, deafness, and blindness and the mean percentiles for height, weight, and head circumference at follow-up did not differ significantly between the two groups.\n\n\nCONCLUSIONS\nCaffeine therapy for apnea of prematurity improves the rate of survival without neurodevelopmental disability at 18 to 21 months in infants with very low birth weight. (ClinicalTrials.gov number, NCT00182312 [ClinicalTrials.gov].).", "title": "" }, { "docid": "352bcf1c407568871880ad059053e1ec", "text": "In this paper we present a novel system for sketching the motion of a character. The process begins by sketching a character to be animated. An animated motion is then created for the character by drawing a continuous sequence of lines, arcs, and loops. These are parsed and mapped to a parameterized set of output motions that further reflect the location and timing of the input sketch. The current system supports a repertoire of 18 different types of motions in 2D and a subset of these in 3D. The system is unique in its use of a cursive motion specification, its ability to allow for fast experimentation, and its ease of use for non-experts.", "title": "" }, { "docid": "bff34a024324774d28ccaa23722e239e", "text": "We review the Philippine frogs of the genus Leptobrachuim. All previous treatments have referred Philippine populations to L. hasseltii, a species we restrict to Java and Bali, Indonesia. We use external morphology, body proportions, color pattern, advertisement calls, and phylogenetic analysis of molecular sequence data to show that Philippine populations of Leptobrachium represent three distinct and formerly unrecognized evolutionary lineages, and we describe each (populations on Mindoro, Palawan, and Mindanao Island groups) as new species. Our findings accentuate the degree to which the biodiversity of Philippine amphibians is currently underestimated and in need of comprehensive review with new and varied types of data. LAGOM: Pinagbalik aralan namin ang mga palaka sa Pilipinas mula sa genus Leptobrachium. Ang nakaraang mga palathala ay tumutukoy sa populasyon ng L. hasseltii, ang uri ng palaka na aming tinakda lamang sa Java at Bali, Indonesia. 
Ginamit namin ang panglabas na morpolohiya, proporsiyon ng pangangatawan, kulay disenyo, pantawag pansin, at phylogenetic na pagsusuri ng molekular na pagkakasunod-sunod ng datos upang maipakita na ang populasyon sa Pilipinas ng Leptobrachium ay kumakatawan sa tatlong natatangi at dating hindi pa nakilalang ebolusyonaryong lipi. Inilalarawan din naming ang bawat isa (populasyon sa Mindoro, Palawan, at mga grupo ng isla sa Mindanao) na bagong uri ng palaka. Ang aming natuklasan ay nagpapatingkad sa antas kung saan ang biodibersidad ng amphibians sa Pilipinas sa kasalukuyan ay may mababang pagtatantya at nangangailangan ng malawakang pagbabalik-aral ng mga bago at iba’t ibang uri ng", "title": "" }, { "docid": "472f5def60d3cb1be23f63a78f84080e", "text": "In financial terms, a business strategy is much more like a series of options than like a single projected cash flow. Executing a strategy almost always involves making a sequence of major decisions. Some actions are taken immediately while others are deliberately deferred so that managers can optimize their choices as circumstances evolve. While executives readily grasp the analogy between strategy and real options, until recently the mechanics of option pricing was so complex that few companies found it practical to use when formulating strategy. But advances in both computing power and our understanding of option pricing over the last 20 years now make it feasible to apply real-options thinking to strategic decision making. To analyze a strategy as a portfolio of related real options, this article exploits a framework presented by the author in \"Investment Opportunities as Real Options: Getting Started on the Numbers\" (HBR July-August 1998). That article explained how to get from discounted-cash-flow value to option value for a typical project; in other words, it was about reaching a number. This article extends that framework, exploring how, once you've worked out the numbers, you can use option pricing to improve decision making about the sequence and timing of a portfolio of strategic investments. Timothy Luehrman shows executives how to plot their strategies in two-dimensional \"option space,\" giving them a way to \"draw\" a strategy in terms that are neither wholly strategic nor wholly financial, but some of both. Such pictures inject financial discipline and new insight into how a company's future opportunities can be actively cultivated and harvested.", "title": "" }, { "docid": "d7ff935c38f2adad660ba580e6f3bc6c", "text": "In this report, we provide a comparative analysis of different techniques for user intent classification towards the task of app recommendation. We analyse the performance of different models and architectures for multi-label classification over a dataset with a relative large number of classes and only a handful examples of each class. We focus, in particular, on memory network architectures, and compare how well the different versions perform under the task constraints. Since the classifier is meant to serve as a module in a practical dialog system, it needs to be able to work with limited training data and incorporate new data on the fly. We devise a 1-shot learning task to test the models under the above constraint. We conclude that relatively simple versions of memory networks perform better than other approaches. 
Although, for tasks with very limited data, simple non-parametric methods perform comparably, without needing the extra training data.", "title": "" }, { "docid": "ffd04d534aefbfb00879fed5c8480dd7", "text": "This paper deals with the mechanical construction and static strength analysis of an axial flux permanent magnet machine with segmented armature torus topology, which consists of two external rotors and an inner stator. In order to conduct the three dimensional magnetic flux, the soft magnetic composites is used to manufacture the stator segments and the rotor yoke. On the basis of the detailed electromagnetic analysis, the main geometric dimensions of the machine are determined, which is also the precondition of the mechanical construction. Through the application of epoxy with high thermal conductivity and high mechanical strength, the independent segments of the stator are bounded together with the liquid-cooling system, which makes a high electrical load possible. Due to the unavoidable errors in the manufacturing and montage, there might be large force between the rotors and the stator. Thus, the rotor is held with a rotor carrier made from aluminum alloy with high elastic modulus and the form of the rotor carrier is optimized, in order to reduce the axial deformation. In addition, the shell and the shaft are designed and the choice of bearings is discussed. Finally, the strain and deformation of different parts are analyzed with the help of finite element method to validate the mechanical construction.", "title": "" }, { "docid": "ec6bfc49858a2a4ae3c8122fad68d437", "text": "A major aim of the current study was to determine what classroom teachers perceived to be the greatest barriers affecting their capacity to deliver successful physical education (PE) programs. An additional aim was to examine the impact of these barriers on the type and quality of PE programs delivered. This study applied a mixed-mode design involving data source triangulation using semistructured interviews with classroom teachers (n = 31) and teacher-completed questionnaires (n = 189) from a random sample of 38 schools. Results identified the key factors inhibiting PE teachers, which were categorized as teacher-related or institutional. Interestingly, the five greatest barriers were defined as institutional or out of the teacher's control. The major adverse effects of these barriers were evident in reduced time spent teaching PE and delivering PE lessons of questionable quality.", "title": "" }, { "docid": "a92aa1ea6faf19a2257dce1dda9cd0d0", "text": "This paper introduces a novel content-adaptive image downscaling method. The key idea is to optimize the shape and locations of the downsampling kernels to better align with local image features. Our content-adaptive kernels are formed as a bilateral combination of two Gaussian kernels defined over space and color, respectively. This yields a continuum ranging from smoothing to edge/detail preserving kernels driven by image content. We optimize these kernels to represent the input image well, by finding an output image from which the input can be well reconstructed. This is technically realized as an iterative maximum-likelihood optimization using a constrained variation of the Expectation-Maximization algorithm. In comparison to previous downscaling algorithms, our results remain crisper without suffering from ringing artifacts. 
Besides natural images, our algorithm is also effective for creating pixel art images from vector graphics inputs, due to its ability to keep linear features sharp and connected.", "title": "" }, { "docid": "64d711b609fb683b5679ed9f4a42275c", "text": "We address the problem of image feature learning for the applications where multiple factors exist in the image generation process and only some factors are of our interest. We present a novel multi-task adversarial network based on an encoder-discriminator-generator architecture. The encoder extracts a disentangled feature representation for the factors of interest. The discriminators classify each of the factors as individual tasks. The encoder and the discriminators are trained cooperatively on factors of interest, but in an adversarial way on factors of distraction. The generator provides further regularization on the learned feature by reconstructing images with shared factors as the input image. We design a new optimization scheme to stabilize the adversarial optimization process when multiple distributions need to be aligned. The experiments on face recognition and font recognition tasks show that our method outperforms the state-of-the-art methods in terms of both recognizing the factors of interest and generalization to images with unseen variations.", "title": "" }, { "docid": "4e50e68e099ab77aedcb0abe8b7a9ca2", "text": "In the downlink transmission scenario, power allocation and beamforming design at the transmitter are essential when using multiple antenna arrays. This paper considers a multiple input–multiple output broadcast channel to maximize the weighted sum-rate under the total power constraint. The classical weighted minimum mean-square error (WMMSE) algorithm can obtain suboptimal solutions but involves high computational complexity. To reduce this complexity, we propose a fast beamforming design method using unsupervised learning, which trains the deep neural network (DNN) offline and provides real-time service online only with simple neural network operations. The training process is based on an end-to-end method without labeled samples avoiding the complicated process of obtaining labels. Moreover, we use the “APoZ”-based pruning algorithm to compress the network volume, which further reduces the computational complexity and volume of the DNN, making it more suitable for low computation-capacity devices. Finally, the experimental results demonstrate that the proposed method improves computational speed significantly with performance close to the WMMSE algorithm.", "title": "" }, { "docid": "fb0e9f6f58051b9209388f81e1d018ff", "text": "Because many databases contain or can be embellished with structural information, a method for identifying interesting and repetitive substructures is an essential component to discovering knowledge in such databases. This paper describes the SUBDUE system, which uses the minimum description length (MDL) principle to discover substructures that compress the database and represent structural concepts in the data. By replacing previously-discovered substructures in the data, multiple passes of SUBDUE produce a hierarchical description of the structural regularities in the data. Inclusion of background knowledgeguides SUBDUE toward appropriate substructures for a particular domain or discovery goal, and the use of an inexact graph match allows a controlled amount of deviations in the instance of a substructure concept. We describe the application of SUBDUE to a variety of domains. 
We also discuss approaches to combining SUBDUE with non-structural discovery systems.", "title": "" } ]
scidocsrr
920c43b711d430a0095d54456bf40d2f
Real-Time Lane Departure Warning System on a Lower Resource Platform
[ { "docid": "be283056a8db3ab5b2481f3dc1f6526d", "text": "Numerous groups have applied a variety of deep learning techniques to computer vision problems in highway perception scenarios. In this paper, we presented a number of empirical evaluations of recent deep learning advances. Computer vision, combined with deep learning, has the potential to bring about a relatively inexpensive, robust solution to autonomous driving. To prepare deep learning for industry uptake and practical applications, neural networks will require large data sets that represent all possible driving environments and scenarios. We collect a large data set of highway data and apply deep learning and computer vision algorithms to problems such as car and lane detection. We show how existing convolutional neural networks (CNNs) can be used to perform lane and vehicle detection while running at frame rates required for a real-time system. Our results lend credence to the hypothesis that deep learning holds promise for autonomous driving.", "title": "" } ]
[ { "docid": "c504800ce08654fb5bf49356d2f7fce3", "text": "Memristive synapses, the most promising passive devices for synaptic interconnections in artificial neural networks, are the driving force behind recent research on hardware neural networks. Despite significant efforts to utilize memristive synapses, progress to date has only shown the possibility of building a neural network system that can classify simple image patterns. In this article, we report a high-density cross-point memristive synapse array with improved synaptic characteristics. The proposed PCMO-based memristive synapse exhibits the necessary gradual and symmetrical conductance changes, and has been successfully adapted to a neural network system. The system learns, and later recognizes, the human thought pattern corresponding to three vowels, i.e. /a /, /i /, and /u/, using electroencephalography signals generated while a subject imagines speaking vowels. Our successful demonstration of a neural network system for EEG pattern recognition is likely to intrigue many researchers and stimulate a new research direction.", "title": "" }, { "docid": "5b91e467d87f42fa6ca352a09b44cc48", "text": "We present a method for learning a low-dimensional representation which is shared across a set of multiple related tasks. The method builds upon the wellknown 1-norm regularization problem using a new regularizer which controls the number of learned features common for all the tasks. We show that this problem is equivalent to a convex optimization problem and develop an iterative algorithm for solving it. The algorithm has a simple interpretation: it alternately performs a supervised and an unsupervised step, where in the latter step we learn commonacross-tasks representations and in the former step we learn task-specific functions using these representations. We report experiments on a simulated and a real data set which demonstrate that the proposed method dramatically improves the performance relative to learning each task independently. Our algorithm can also be used, as a special case, to simply select – not learn – a few common features across the tasks.", "title": "" }, { "docid": "b9e8007220be2887b9830c05c283f8a5", "text": "INTRODUCTION\nHealth-care professionals are trained health-care providers who occupy a potential vanguard position in human immunodeficiency virus (HIV)/acquired immune deficiency syndrome (AIDS) prevention programs and the management of AIDS patients. This study was performed to assess HIV/AIDS-related knowledge, attitude, and practice (KAP) and perceptions among health-care professionals at a tertiary health-care institution in Uttarakhand, India, and to identify the target group where more education on HIV is needed.\n\n\nMATERIALS AND METHODS\nA cross-sectional KAP survey was conducted among five groups comprising consultants, residents, medical students, laboratory technicians, and nurses. Probability proportional to size sampling was used for generating random samples. Data analysis was performed using charts and tables in Microsoft Excel 2016, and statistical analysis was performed using the Statistical Package for the Social Science software version 20.0.\n\n\nRESULTS\nMost participants had incomplete knowledge regarding the various aspects of HIV/AIDS. Attitude in all the study groups was receptive toward people living with HIV/AIDS. Practical application of knowledge was best observed in the clinicians as well as medical students. 
Poor performance by technicians and nurses was observed in prevention and prophylaxis. All groups were well informed about the National AIDS Control Policy except technicians.\n\n\nCONCLUSION\nPoor knowledge about HIV infection, particularly among the young medical students and paramedics, is evidence of the lacunae in the teaching system, which must be kept in mind while formulating teaching programs. As suggested by the respondents, Information Education Communication activities should be improvised making use of print, electronic, and social media along with interactive awareness sessions, regular continuing medical educations, and seminars to ensure good quality of safe modern medical care.", "title": "" }, { "docid": "6aaabe17947bc455d940047745ed7962", "text": "In this paper, we want to study how natural and engineered systems could perform complex optimizations with limited computational and communication capabilities. We adopt a continuous-time dynamical system view rooted in early work on optimization and more recently in network protocol design, and merge it with the dynamic view of distributed averaging systems. We obtain a general approach, based on the control system viewpoint, that allows to analyze and design (distributed) optimization systems converging to the solution of given convex optimization problems. The control system viewpoint provides many insights and new directions of research. We apply the framework to a distributed optimal location problem and demonstrate the natural tracking and adaptation capabilities of the system to changing constraints.", "title": "" }, { "docid": "2aa885b2b531d4035a25928c242ad2ca", "text": "Doxorubicin (Dox) is a cytotoxic drug widely incorporated in various chemotherapy protocols. Severe side effects such as cardiotoxicity, however, limit Dox application. Mechanisms by which Dox promotes cardiac damage and cardiomyocyte cell death have been investigated extensively, but a definitive picture has yet to emerge. Autophagy, regarded generally as a protective mechanism that maintains cell viability by recycling unwanted and damaged cellular constituents, is nevertheless subject to dysregulation having detrimental effects for the cell. Autophagic cell death has been described, and has been proposed to contribute to Dox-cardiotoxicity. Additionally, mitophagy, autophagic removal of damaged mitochondria, is affected by Dox in a manner contributing to toxicity. Here we will review Dox-induced cardiotoxicity and cell death in the broad context of the autophagy and mitophagy processes.", "title": "" }, { "docid": "b0bd9a0b3e1af93a9ede23674dd74847", "text": "This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-ofthe-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. 
We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.", "title": "" }, { "docid": "e0f48803b24826cbcf897c062dc512b7", "text": "Performance of the support vector machine strongly depends on parameters settings. One of the most common algorithms for parameter tuning is grid search, combined with cross validation. This algorithm is often time consuming and inaccurate. In this paper we propose the use of stochastic metaheuristic algorithm, firefly algorithm, for effective support vector machine parameter tuning. The experimental results on 13 standard benchmark datasets show that our proposed method achieve better results compared to other state-of-the-art algorithms from literature.", "title": "" }, { "docid": "3ba0b5d6be06f65cd2048e054eae4d7d", "text": "Figure 1.1 shows the basic construction of a 3D graphics computer. That is also the general organization of this book, with each block more or less representing a chapter (there is no chapter on memory, but memory is discussed in multiple chapters). The book traces the earliest understanding of 3D and then the foundational mathematics to explain and construct 3D. From there we follow the history of the computer, beginning with mechanical computers, and ending up with tablets. Next, we present the amazing computer graphics (CG) algorithms and tricks, and it’s difficult to tell the story because there were a couple of periods where eruptions of new ideas and techniques seem to occur all at once. With the fundamentals of how to draw lines and create realistic images better understood, the applications that exploited those foundations. The applications of course can’t do the work by themselves and so the following chapter is on the 3D controllers that drive the display. The chapter that logically follows that is on the development of the displays, and a chapter follows that on stereovision.", "title": "" }, { "docid": "74b8fd7767f1d08563103a13ad0247b7", "text": "The segmentation of moving objects become challenging when the object motion is small, the shape of object changes, and there is global background motion in unconstrained videos. In this paper, we propose a fully automatic, efficient, fast and composite framework to segment the moving object on the basis of saliency, locality, color and motion cues. First, we propose a new saliency measure to predict the potential salient regions. In the second step, we use the RANSAC homography and optical flow to compensate the background motion and get reliable motion information, called motion cues. Furthermore, the saliency information and motion cues are combined to get the initial segmented object (seeded region). A refinement is performed to remove the unwanted noisy details and expand the seeded region to the whole object. Detailed experimentation is carried out on challenging video benchmarks to evaluate the performance of the proposed method. The results show that the proposed method is faster and performs better than state-of-the-art approaches.", "title": "" }, { "docid": "7a10f559d9bbf1b6853ff6b89f5857f7", "text": "Despite the much-ballyhooed increase in outsourcing, most companies are in do-it-yourself mode for the bulk of their processes, in large part because there's no way to compare outside organizations' capabilities with those of internal functions. Given the lack of comparability, it's almost surprising that anyone outsources today. 
But it's not surprising that cost is by far companies' primary criterion for evaluating outsourcers or that many companies are dissatisfied with their outsourcing relationships. A new world is coming, says the author, and it will lead to dramatic changes in the shape and structure of corporations. A broad set of process standards will soon make it easy to determine whether a business capability can be improved by outsourcing it. Such standards will also help businesses compare service providers and evaluate the costs versus the benefits of outsourcing. Eventually these costs and benefits will be so visible to buyers that outsourced processes will become a commodity, and prices will drop significantly. The low costs and low risk of outsourcing will accelerate the flow of jobs offshore, force companies to reassess their strategies, and change the basis of competition. The speed with which some businesses have already adopted process standards suggests that many previously unscrutinized areas are ripe for change. In the field of technology, for instance, the Carnegie Mellon Software Engineering Institute has developed a global standard for software development processes, called the Capability Maturity Model (CMM). For companies that don't have process standards in place, it makes sense for them to create standards by working with customers, competitors, software providers, businesses that processes may be outsourced to, and objective researchers and standard-setters. Setting standards is likely to lead to the improvement of both internal and outsourced processes.", "title": "" }, { "docid": "329f6c340218e7ecd62c93a1e7ff727a", "text": "To enhance video streaming experience for mobile users, we propose an approach towards Quality-of-Experience (QoE) aware on-the-fly transcoding. The proposed approach relies on the concept of Mobile Edge Computing (MEC) as a key enabler in enhancing service quality. Our scheme involves an autonomic creation of a transcoding service as a Virtual Network Function (VNF) and ensures dynamic rate switching of the streamed video to maintain the desirable quality. This edge-assistive transcoding and adaptive streaming results in reduced computational loads and reduced core network traffic. The proposed solution represents a complete miniature content delivery network infrastructure on the edge, ensuring reduced latency and better quality of experience", "title": "" }, { "docid": "d852b0b89a748086a74d43adbf1ac867", "text": "Community-based question-answering (CQA) services contribute to solving many difficult questions we have. For each question in such services, one best answer can be designated, among all answers, often by the asker. However, many questions on typical CQA sites are left without a best answer even if when good candidates are available. In this paper, we attempt to address the problem of predicting if an answer may be selected as the best answer, based on learning from labeled data. The key tasks include designing features measuring important aspects of an answer and identifying the most importance features. Experiments with a Stack Overflow dataset show that the contextual information among the answers should be the most important factor to consider.", "title": "" }, { "docid": "0c9228dd4a65587e43fc6d2d1f0b03ce", "text": "Secure multi-party computation (MPC) is a technique well suited for privacy-preserving data mining. 
Even with the recent progress in two-party computation techniques such as fully homomorphic encryption, general MPC remains relevant as it has shown promising performance metrics in real-world benchmarks. Sharemind is a secure multi-party computation framework designed with real-life efficiency in mind. It has been applied in several practical scenarios, and from these experiments, new requirements have been identified. Firstly, large datasets require more efficient protocols for standard operations such as multiplication and comparison. Secondly, the confidential processing of financial data requires the use of more complex primitives, including a secure division operation. This paper describes new protocols in the Sharemind model for secure multiplication, share conversion, equality, bit shift, bit extraction, and division. All the protocols are implemented and benchmarked, showing that the current approach provides remarkable speed improvements over the previous work. This is verified using real-world benchmarks for both operations and algorithms.", "title": "" }, { "docid": "de3aee8ca694d59eb0ef340b3b1c8161", "text": "In recent years, organisations have begun to realise the importance of knowing their customers better. Customer relationship management (CRM) is an approach to managing customer related knowledge of increasing strategic significance. The successful adoption of IT-enabled CRM redefines the traditional models of interaction between businesses and their customers, both nationally and globally. It is regarded as a source for competitive advantage because it enables organisations to explore and use knowledge of their customers and to foster profitable and long-lasting one-to-one relationships. This paper discusses the results of an exploratory survey conducted in the UK financial services sector; it discusses CRM practice and expectations, the motives for implementing it, and evaluates post-implementation experiences. It also investigates the CRM tools functionality in the strategic, process, communication, and business-to-customer (B2C) organisational context and reports the extent of their use. The results show that despite the anticipated potential, the benefits from such tools are rather small. # 2004 Published by Elsevier B.V.", "title": "" }, { "docid": "35c59e626d2d98f273d1978048f6436a", "text": "OBJECTIVE\nTo evaluate the prevalence, type and severity of prescribing errors observed between grades of prescriber, ward area, admission or discharge and type of medication prescribed.\n\n\nDESIGN\nWard-based clinical pharmacists prospectively documented prescribing errors at the point of clinically checking admission or discharge prescriptions. Error categories and severities were assigned at the point of data collection, and verified independently by the study team.\n\n\nSETTING\nProspective study of nine diverse National Health Service hospitals in North West England, including teaching hospitals, district hospitals and specialist services for paediatrics, women and mental health.\n\n\nRESULTS\nOf 4238 prescriptions evaluated, one or more error was observed in 1857 (43.8%) prescriptions, with a total of 3011 errors observed. Of these, 1264 (41.9%) were minor, 1629 (54.1%) were significant, 109 (3.6%) were serious and 9 (0.30%) were potentially life threatening. The majority of errors considered to be potentially lethal (n=9) were dosing errors (n=8), mostly relating to overdose (n=7). 
The rate of error was not significantly different between newly qualified doctors compared with junior, middle grade or senior doctors. Multivariable analyses revealed the strongest predictor of error was the number of items on a prescription (risk of error increased 14% for each additional item). We observed a high rate of error from medication omission, particularly among patients admitted acutely into hospital. Electronic prescribing systems could potentially have prevented up to a quarter of (but not all) errors.\n\n\nCONCLUSIONS\nIn contrast to other studies, prescriber experience did not impact on overall error rate (although there were qualitative differences in error category). Given that multiple drug therapies are now the norm for many medical conditions, health systems should introduce and retain safeguards which detect and prevent error, in addition to continuing training and education, and migration to electronic prescribing systems.", "title": "" }, { "docid": "ead93ea218664f371de64036e1788aa5", "text": "OBJECTIVE\nTo assess the diagnostic efficacy of the first-trimester anomaly scan including first-trimester fetal echocardiography as a screening procedure in a 'medium-risk' population.\n\n\nMETHODS\nIn a prospective study, we evaluated 3094 consecutive fetuses with a crown-rump length (CRL) of 45-84 mm and gestational age between 11 + 0 and 13 + 6 weeks, using transabdominal and transvaginal ultrasonography. The majority of patients were referred without prior abnormal scan or increased nuchal translucency (NT) thickness, the median maternal age was, however, 35 (range, 15-46) years, and 53.8% of the mothers (1580/2936) were 35 years or older. This was therefore a self-selected population reflecting an increased percentage of older mothers opting for prenatal diagnosis. The follow-up rate was 92.7% (3117/3363).\n\n\nRESULTS\nThe prevalence of major abnormalities in 3094 fetuses was 2.8% (86/3094). The detection rate of major anomalies at the 11 + 0 to 13 + 6-week scan was 83.7% (72/86), 51.9% (14/27) for NT < 2.5 mm and 98.3% (58/59) for NT >or= 2.5 mm. The prevalence of major congenital heart defects (CHD) was 1.2% (38/3094). The detection rate of major CHD at the 11 to 13 + 6-week scan was 84.2% (32/38), 37.5% (3/8) for NT < 2.5 mm and 96.7% (29/30) for NT >or= 2.5 mm.\n\n\nCONCLUSION\nThe overall detection rate of fetal anomalies including fetal cardiac defects following a specialist scan at 11 + 0 to 13 + 6 weeks' gestation is about 84% and is increased when NT >or= 2.5 mm. This extends the possibilities of a first-trimester scan beyond risk assessment for fetal chromosomal defects. In experienced hands with adequate equipment, the majority of severe malformations as well as major CHD may be detected at the end of the first trimester, which offers parents the option of deciding early in pregnancy how to deal with fetuses affected by genetic or structural abnormalities without pressure of time.", "title": "" }, { "docid": "fd472acf79142719a20862deab9c1302", "text": "Gesture recognition has lured everyone's attention as a new generation of HCI and visual input mode. FPGA presents a better overall performance and flexibility than DSP for parallel processing and pipelined operations in order to process high resolution and high frame rate video processing. Vision-based gesture recognition technique is the best way to recognize the gesture. In gesture recognition, the image acquisition and image segmentation is there. 
In this paper, the image acquisition is shown and also the image segmentation techniques are discussed. In this, to capture the gesture the OV7670 CMOS camera chip sensor is used that is attached to the FPGA DE-1 board. By using this gesture recognition, we can control any application in a non-tangible way.", "title": "" }, { "docid": "40dc2dc28dca47137b973757cdf3bf34", "text": "In this paper we propose a new word-order based graph representation for text. In our graph representation vertices represent words or phrases and edges represent relations between contiguous words or phrases. The graph representation also includes dependency information. Our text representation is suitable for applications involving the identification of relevance or paraphrases across texts, where word-order information would be useful. We show that this word-order based graph representation performs better than a dependency tree representation while identifying the relevance of one piece of text to another.", "title": "" }, { "docid": "dba5777004cf43d08a58ef3084c25bd3", "text": "This paper investigates the problem of automatic humour recognition, and provides an in-depth analysis of two of the most frequently observed features of humorous text: human-centeredness and negative polarity. Through experiments performed on two collections of humorous texts, we show that these properties of verbal humour are consistent across different data sets.", "title": "" }, { "docid": "c4346bf13f8367fe3046ab280ac94183", "text": "Human world is becoming more and more dependent on computers and information technology (IT). The autonomic capabilities in computers and IT have become the need of the day. These capabilities in software and systems increase performance, accuracy, availability and reliability with less or no human intervention (HI). Database has become the integral part of information system in most of the organizations. Databases are growing w.r.t size, functionality, heterogeneity and due to this their manageability needs more attention. Autonomic capabilities in Database Management Systems (DBMSs) are also essential for ease of management, cost of maintenance and hide the low level complexities from end users. With autonomic capabilities administrators can perform higher-level tasks. The DBMS that has the ability to manage itself according to the environment and resources without any human intervention is known as Autonomic DBMS (ADBMS). The paper explores and analyzes the autonomic components of Oracle by considering autonomic characteristics. This analysis illustrates how different components of Oracle manage themselves autonomically. The research is focused to find and earmark those areas in Oracle where the human intervention is required. We have performed the same type of research over Microsoft SQL Server and DB2 [1, 2]. A comparison of autonomic components of Oracle with SQL Server is provided to show their autonomic status.", "title": "" } ]
scidocsrr
70f0364b6dd31fd832d9f8e323819f73
RFID tags: Positioning principles and localization techniques
[ { "docid": "259c17740acd554463731d3e1e2912eb", "text": "In recent years, radio frequency identification technology has moved from obscurity into mainstream applications that help speed the handling of manufactured goods and materials. RFID enables identification from a distance, and unlike earlier bar-code technology, it does so without requiring a line of sight. In this paper, the author introduces the principles of RFID, discusses its primary technologies and applications, and reviews the challenges organizations will face in deploying this technology.", "title": "" } ]
[ { "docid": "ccabbaf3caded63d94c77562a47a978f", "text": "Modern deep artificial neural networks have achieved impressive results through models with very large capacity—compared to the number of training examples— that control overfitting with the help of different forms of regularization. Regularization can be implicit, as is the case of stochastic gradient descent and parameter sharing in convolutional layers, or explicit. Most common explicit regularization techniques, such as weight decay and dropout, reduce the effective capacity of the model and typically require the use of deeper and wider architectures to compensate for the reduced capacity. Although these techniques have been proven successful in terms of improved generalization, they seem to waste capacity. In contrast, data augmentation techniques do not reduce the effective capacity and improve generalization by increasing the number of training examples. In this paper we systematically analyze the effect of data augmentation on some popular architectures and conclude that data augmentation alone—without any other explicit regularization techniques—can achieve the same performance or higher as regularized models, especially when training with fewer examples, and exhibits much higher adaptability to changes in the architecture.", "title": "" }, { "docid": "89469027347d0118f2ba576d7b372ae7", "text": "We are given a large population database that contains information about population instances. The population is known to comprise of m groups, but the population instances are not labeled with the group identi cation. Also given is a population sample (much smaller than the population but representative of it) in which the group labels of the instances are known. We present an interval classi er (IC) which generates a classi cation function for each group that can be used to e ciently retrieve all instances of the specied group from the population database. To allow IC to be embedded in interactive loops to answer adhoc queries about attributes with missing values, IC has been designed to be e cient in the generation of classi cation functions. Preliminary experimental results indicate that IC not only has retrieval and classi er generation e ciency advantages, but also compares favorably in the classi cation accuracy with current tree classi ers, such as ID3, which were primarily designed for minimizing classi cation errors. We also describe some new applications that arise from encapsulating the classi cation capability in database systems and discuss extensions to IC for it to be used in these new application domains. Current address: Computer Science Department, Rutgers University, NJ 08903 Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment. 
", "title": "" }, { "docid": "c256283819014d79dd496a3183116b68", "text": "For the 5th generation of terrestrial mobile communications, Multi-Carrier (MC) transmission based on non-orthogonal waveforms is a promising technology component compared to orthogonal frequency division multiplex (OFDM) in order to achieve higher throughput and enable flexible spectrum management. Coverage extension and service continuity can be provided considering satellites as additional components in future networks by allowing vertical handover to terrestrial radio interfaces. In this paper, the properties of Filter Bank Multicarrier (FBMC) as potential MC transmission scheme is discussed taking into account the requirements for the satellite-specific PHY-Layer like non-linear distortions due to High Power Amplifiers (HPAs). The performance for specific FBMC configurations is analyzed in terms of peak-to-average power ratio (PAPR), computational complexity, non-linear distortions as well as carrier frequency offsets sensitivity (CFOs). Even though FBMC and OFDM have similar PAPR and suffer comparable spectral regrowth at the output of the non linear amplifier, simulations on link level show that FBMC still outperforms OFDM in terms of CFO sensitivity and symbol error rate in the presence of non-linear distortions.", "title": "" }, { "docid": "0d6d8578b41d736a6df373c08e3e1f95", "text": "We provide a Matlab package p1afem for an adaptive P1-finite element method (AFEM). This includes functions for the assembly of the data, different error estimators, and an indicator-based adaptive mesh-refining algorithm. Throughout, the focus is on an efficient realization by use of Matlab built-in functions and vectorization. Numerical experiments underline the efficiency of the code which is observed to be of almost linear complexity with respect to the runtime. Although the scope of this paper is on AFEM, the general ideas can be understood as a guideline for writing efficient Matlab code.", "title": "" }, { "docid": "9e4adad2e248895d80f28cf6134f68c1", "text": "Maltodextrin (MX) is an ingredient in high demand in the food industry, mainly for its useful physical properties which depend on the dextrose equivalent (DE). The DE has however been shown to be an inaccurate parameter for predicting the performance of the MXs in technological applications, hence commercial MXs were characterized by mass spectrometry (MS) to determine their molecular weight distribution (MWD) and degree of polymerization (DP). Samples were subjected to different water activities (aw). Water adsorption was similar at low aw, but radically increased with the DP at higher aw. The decomposition temperature (Td) showed some variations attributed to the thermal hydrolysis induced by the large amount of adsorbed water and the supplied heat. The glass transition temperature (Tg) linearly decreased with both, aw and DP. The microstructural analysis by X-ray diffraction showed that MXs did not crystallize with the adsorption of water, preserving their amorphous structure. The optical micrographs showed radical changes in the overall appearance of the MXs, indicating a transition from a glassy to a rubbery state.
Based on these characterizations, different technological applications for the MXs were suggested.", "title": "" }, { "docid": "fc4e32d6bafbc3cf18802f0af12e3092", "text": "Self-report instruments commonly used to assess depression in adolescents have limited or unknown reliability and validity in this age group. We describe a new self-report scale, the Kutcher Adolescent Depression Scale (KADS), designed specifically to diagnose and assess the severity of adolescent depression. This report compares the diagnostic validity of the full 16-item instrument, brief versions of it, and the Beck Depression Inventory (BDI) against the criteria for major depressive episode (MDE) from the Mini International Neuropsychiatric Interview (MINI). Some 309 of 1,712 grade 7 to grade 12 students who completed the BDI had scores that exceeded 15. All were invited for further assessment, of whom 161 agreed to assessment by the KADS, the BDI again, and a MINI diagnostic interview for MDE. Receiver operating characteristic (ROC) curve analysis was used to determine which KADS items best identified subjects experiencing an MDE. Further ROC curve analyses established that the overall diagnostic ability of a six-item subscale of the KADS was at least as good as that of the BDI and was better than that of the full-length KADS. Used with a cutoff score of 6, the six-item KADS achieved sensitivity and specificity rates of 92% and 71%, respectively-a combination not achieved by other self-report instruments. The six-item KADS may prove to be an efficient and effective means of ruling out MDE in adolescents.", "title": "" }, { "docid": "5e7b935a73180c9ccad3bc0e82311503", "text": "What happens if one pushes a cup sitting on a table toward the edge of the table? How about pushing a desk against a wall? In this paper, we study the problem of understanding the movements of objects as a result of applying external forces to them. For a given force vector applied to a specific location in an image, our goal is to predict long-term sequential movements caused by that force. Doing so entails reasoning about scene geometry, objects, their attributes, and the physical rules that govern the movements of objects. We design a deep neural network model that learns long-term sequential dependencies of object movements while taking into account the geometry and appearance of the scene by combining Convolutional and Recurrent Neural Networks. Training our model requires a large-scale dataset of object movements caused by external forces. To build a dataset of forces in scenes, we reconstructed all images in SUN RGB-D dataset in a physics simulator to estimate the physical movements of objects caused by external forces applied to them. Our Forces in Scenes (ForScene) dataset contains 10,335 images in which a variety of external forces are applied to different types of objects resulting in more than 65,000 object movements represented in 3D. Our experimental evaluations show that the challenging task of predicting longterm movements of objects as their reaction to external forces is possible from a single image.", "title": "" }, { "docid": "87748bcc07ab498218233645bdd4dd0c", "text": "This paper proposes a method of recognizing and classifying the basic activities such as forward and backward motions by applying a deep learning framework on passive radio frequency (RF) signals. The echoes from the moving body possess unique pattern which can be used to recognize and classify the activity. 
A passive RF sensing test-bed is set up with two channels where the first one is the reference channel providing the unaltered echoes of the transmitter signals and the other one is the surveillance channel providing the echoes of the transmitter signals reflecting from the moving body in the area of interest. The echoes of the transmitter signals are eliminated from the surveillance signals by performing adaptive filtering. The resultant time series signal is classified into different motions as predicted by the proposed novel method of convolutional neural network (CNN). An extensive amount of training data has been collected to train the model, which serves as a reference benchmark for the later studies in this field.", "title": "" }, { "docid": "84f3e4354af23ece035ee604507eec71", "text": "Speech is not recognized with an accuracy of 100%. Even humans are not able to do that. There will always be some uncertainty in the recognized input, requiring strategies to cope. This is different from the experience with graphical user interfaces, where keyboard and mouse input are recognized without any doubts. Speech recognition and other errors occur frequently and reduce both the usefulness of applications and user satisfaction. This turns error handling into a crucial aspect of speech applications. Successful error handling methods can make even applications with poor recognition accuracy more successful. In [Sagawa et al. 2004] the authors show that the task completion rate increased from 86.4% to 93.4% and the average number of turns reduced by three after a better error handling method had been installed. On the other hand, poorly constructed error handling may bring unwanted complexity to the system and cause new errors and annoyances.", "title": "" }, { "docid": "281e8785214bb209a142d420dfdc5f26", "text": "This study examined achievement when podcasts were used in place of lecture in the core technology course required for all students seeking teacher licensure at a large research-intensive university in the Southeastern United States. Further, it examined the listening preferences of the podcast group and the barriers to podcast use. The results revealed that there was no significant difference in the achievement of preservice teachers who experienced podcast instruction versus those who received lecture instruction. Further, there was no significant difference in their study habits. Participants preferred to use a computer and Blackboard for downloading the podcasts, which they primarily listened to at home. They tended to like the podcasts as well as the length of the podcasts and felt that they were reasonably effective for learning. They agreed that the podcasts were easy to use but disagreed that they should be used to replace lecture. Barriers to podcast use include unfamiliarity with podcasts, technical problems in accessing and downloading podcasts, and not seeing the relevance of podcasts to their learning. © 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "cf90703045e958c48282d758f84f2568", "text": "One expectation about the future Internet is the participation of billions of sensor nodes, integrating the physical with the digital world. This Internet of Things can offer new and enhanced services and applications based on knowledge about the environment and the entities within. Millions of micro-providers could come into existence, forming a highly fragmented market place with new business opportunities to offer commercial services.
In the related field of Internet and Telecommunication services, the design of markets and pricing schemes has been a vital research area in itself. We discuss how these findings can be transferred to the Internet of Things. Both the appropriate market structure and corresponding pricing schemes need to be well understood to enable a commercial success of sensor-based services. We show some steps that an evolutionary establishment of this market might have to take.", "title": "" }, { "docid": "6922a913c6ede96d5062f055b55377e7", "text": "This paper presents the issue of a nonharmonic multitone generation with the use of singing bowls and the digital signal processors. The authors show the possibility of therapeutic applications of such multitone signals. Some known methods of the digital generation of the tone signal with the additional modulation are evaluated. Two projects of the very precise multitone generators are presented. In described generators, the digital signal processors synthesize the signal, while the additional microcontrollers realize the operator's interface. As a final result, the sound of the original singing bowls is confronted with the sound synthesized by one of the generators.", "title": "" }, { "docid": "9cc5fddebc5c45c4c7f5535136275076", "text": "This paper details the winning method in the IEEE GOLD category of the PHM '08 Data Challenge. The task was to estimate the remaining useable life left of an unspecified complex system using a purely data driven approach. The method involves the construction of Multi-Layer Perceptron and Radial Basis Function networks for regression. A suitable selection of these networks has been successfully combined in an ensemble using a Kalman filter. The Kalman filter provides a mechanism for fusing multiple neural network model predictions over time. The essential initial stages of pre-processing and data exploration are also discussed.", "title": "" }, { "docid": "785cb08c500aea1ead360138430ba018", "text": "A recent “third wave” of neural network (NN) approaches now delivers state-of-the-art performance in many machine learning tasks, spanning speech recognition, computer vision, and natural language processing. Because these modern NNs often comprise multiple interconnected layers, work in this area is often referred to as deep learning. Recent years have witnessed an explosive growth of research into NN-based approaches to information retrieval (IR). A significant body of work has now been created. In this paper, we survey the current landscape of Neural IR research, paying special attention to the use of learned distributed representations of textual units. We highlight the successes of neural IR thus far, catalog obstacles to its wider adoption, and suggest potentially promising directions for future research.", "title": "" }, { "docid": "2f208da3cc0dab71e82bf2f83f0d5639", "text": "Automatic music type classification is very helpful for the management of digital music database. In this paper, Octave-based Spectral Contrast feature is proposed to represent the spectral characteristics of a music clip. It represented the relative spectral distribution instead of average spectral envelope. Experiments showed that Octave-based Spectral Contrast feature performed well in music type classification.
Another comparison experiment demonstrated that Octave-based Spectral Contrast feature has a better discrimination among different music types than Mel-Frequency Cepstral Coefficients (MFCC), which is often used in previous music type classification systems.", "title": "" }, { "docid": "d8be338cbe411c79905f108fbbe55814", "text": "Head-up displays (HUD) permit augmented reality (AR) information in cars. Simulation is a convenient way to design and evaluate the benefit of such innovation for the driver. For this purpose, we have developed a virtual HUD that we compare to real AR HUDs from depth perception features. User testing was conducted with 24 participants in a stereoscopic driving simulator. It showed the ability of the virtual HUD to reproduce the perception of the distance between real objects and their augmentation. Three AR overlay designs to highlight the car ahead were compared: the trapezoid shape was perceived as more congruent that the U shape overlay.", "title": "" }, { "docid": "2f1e059a0c178b3703c31ad31761dadc", "text": "This paper will serve as an introduction to the body of work on robust subspace recovery. Robust subspace recovery involves finding an underlying low-dimensional subspace in a data set that is possibly corrupted with outliers. While this problem is easy to state, it has been difficult to develop optimal algorithms due to its underlying nonconvexity. This work emphasizes advantages and disadvantages of proposed approaches and unsolved problems in the area.", "title": "" }, { "docid": "de8633682653e9f979ec7a9177e461b4", "text": "The increasingly widespread use of social network sites to expand and deepen one’s social connections is a relatively new but potentially important phenomenon that has implications for teaching and learning and teacher education in the 21st century. This paper surveys the educational research literature to examine: How such technologies are perceived and used by K-12 learners and teachers with what impacts on pedagogy or students' learning. Selected studies were summarized and categorized according to the four types introduced by Roblyer (2005) as studies most needed to move the educational technology field forward. These include studies that establish the technology’s effectiveness at improving student learning; investigate implementation strategies; monitor social impact; and report on common uses to shape the direction of the field. We found the most prevalent type of study conducted related to our focal topic was research on common uses. The least common type of study conducted was research that established the technology’s effectiveness at improving student learning. Implications for the design of future research and teacher education initiatives are discussed.", "title": "" }, { "docid": "86ad395a553495de5f297a2b5fde3f0e", "text": "⇒ NOT written, but spoken language. [Intuitions come from written.] ⇒ NOT meaning as thing, but use of linguistic forms for communicative functions o Direct att. 
in shared conceptual space like gestures (but w/conventions) ⇒ NOT grammatical rules, but patterns of use => schemas o Constructions themselves as complex symbols \"She sneezed him the ball\" o NOT 'a grammar' but a structured inventory of constructions: continuum of regularity => idiomaticity grammaticality = normativity • Many complexities = \"unification\" of constructions w/ incompatibilities o NOT innate UG, but \"teeming modularity\" (1) symbols, pred-arg structure,", "title": "" }, { "docid": "18f530c400498658d73aba21f0ce984e", "text": "Anomaly and event detection has been studied widely for having many applications in fraud detection, network intrusion detection, detection of epidemic outbreaks, and so on. In this paper we propose an algorithm that operates on a time-varying network of agents with edges representing interactions between them and (1) spots \"anomalous\" points in time at which many agents \"change\" their behavior in a way it deviates from the norm; and (2) attributes the detected anomaly to those agents that contribute to the \"change\" the most. Experiments on a large mobile phone network (of 2 million anonymous customers with 50 million interactions over a period of 6 months) shows that the \"change\"-points detected by our algorithm coincide with the social events and the festivals in our data.", "title": "" } ]
scidocsrr
8faaeab9c3ab915adb0ce9a47c6b4b1c
FastRunner: A fast, efficient and robust bipedal robot. Concept and planar simulation
[ { "docid": "2997be0d8b1f7a183e006eba78135b13", "text": "The basic mechanics of human locomotion are associated with vaulting over stiff legs in walking and rebounding on compliant legs in running. However, while rebounding legs well explain the stance dynamics of running, stiff legs cannot reproduce that of walking. With a simple bipedal spring-mass model, we show that not stiff but compliant legs are essential to obtain the basic walking mechanics; incorporating the double support as an essential part of the walking motion, the model reproduces the characteristic stance dynamics that result in the observed small vertical oscillation of the body and the observed out-of-phase changes in forward kinetic and gravitational potential energies. Exploring the parameter space of this model, we further show that it not only combines the basic dynamics of walking and running in one mechanical system, but also reveals these gaits to be just two out of the many solutions to legged locomotion offered by compliant leg behaviour and accessed by energy or speed.", "title": "" } ]
[ { "docid": "3d310295592775bbe785692d23649c56", "text": "BACKGROUND\nEvidence indicates that sexual assertiveness is one of the important factors affecting sexual satisfaction. According to some studies, traditional gender norms conflict with women's capability in expressing sexual desires. This study examined the relationship between gender roles and sexual assertiveness in married women in Mashhad, Iran.\n\n\nMETHODS\nThis cross-sectional study was conducted on 120 women who referred to Mashhad health centers through convenient sampling in 2014-15. Data were collected using Bem Sex Role Inventory (BSRI) and Hulbert index of sexual assertiveness. Data were analyzed using SPSS 16 by Pearson and Spearman's correlation tests and linear Regression Analysis.\n\n\nRESULTS\nThe mean scores of sexual assertiveness was 54.93±13.20. According to the findings, there was non-significant correlation between Femininity and masculinity score with sexual assertiveness (P=0.069 and P=0.080 respectively). Linear regression analysis indicated that among the predictor variables, only Sexual function satisfaction was identified as the sexual assertiveness summary predictor variables (P=0.001).\n\n\nCONCLUSION\nBased on the results, sexual assertiveness in married women does not comply with gender role, but it is related to Sexual function satisfaction. So, counseling psychologists need to consider this variable when designing intervention programs for modifying sexual assertiveness and find other variables that affect sexual assertiveness.", "title": "" }, { "docid": "c804a0b91f79bc80b5156e182a628650", "text": "Software as a Service (SaaS) is an online software delivery model which permits a third party provider offering software services to be used on-demand by tenants over the internet, instead of installing and maintaining them in their premises. Nowadays, more and more companies are offering their web-base business application by adopting this model. Multi-tenancy is the primary characteristic of SaaS, it allows SaaS vendors to run a single instance application which supports multiple tenants on the same hardware and software infrastructure. This application should be highly customizable to meet tenants' expectations and business requirements. In this paper, we propose a novel customizable database design for multi-tenant applications. Our design introduces an Elastic Extension Tables (EET) which consists of Common Tenant Tables (CTT) and Virtual Extension Tables (VET). This design enables tenants to create their own elastic database schema during multi-tenant application run-time execution to satisfy their business needs.", "title": "" }, { "docid": "93e6194dc3d8922edb672ac12333ea82", "text": "Sensors including RFID tags have been widely deployed for measuring environmental parameters such as temperature, humidity, oxygen concentration, monitoring the location and velocity of moving objects, tracking tagged objects, and many others. To support effective, efficient, and near real-time phenomena probing and objects monitoring, streaming sensor data have to be gracefully managed in an event processing manner. Different from the traditional events, sensor events come with temporal or spatio-temporal constraints and can be non-spontaneous. Meanwhile, like general event streams, sensor event streams can be generated with very high volumes and rates. 
Primitive sensor events need to be filtered, aggregated and correlated to generate more semantically rich complex events to facilitate the requirements of up-streaming applications. Motivated by such challenges, many new methods have been proposed in the past to support event processing in sensor event streams. In this chapter, we survey state-of-the-art research on event processing in sensor networks, and provide a broad overview of major topics in complex RFID event processing, including event specification languages, event detection models, event processing methods and their optimizations. Additionally, we have presented an open discussion on advanced issues such as processing uncertain and out-of-order sensor events.", "title": "" }, { "docid": "4f43c8ba81a8b828f225923690e9f7dd", "text": "Melody extraction algorithms aim to produce a sequence of frequency values corresponding to the pitch of the dominant melody from a musical recording. Over the past decade, melody extraction has emerged as an active research topic, comprising a large variety of proposed algorithms spanning a wide range of techniques. This article provides an overview of these techniques, the applications for which melody extraction is useful, and the challenges that remain. We start with a discussion of “melody” from both musical and signal processing perspectives and provide a case study that interprets the output of a melody extraction algorithm for specific excerpts. We then provide a comprehensive comparative analysis of melody extraction algorithms based on the results of an international evaluation campaign. We discuss issues of algorithm design, evaluation, and applications that build upon melody extraction. Finally, we discuss some of the remaining challenges in melody extraction research in terms of algorithmic performance, development, and evaluation methodology.", "title": "" }, { "docid": "0e4722012aeed8dc356aa8c49da8c74f", "text": "The Android software stack for mobile devices defines and enforces its own security model for apps through its application-layer permissions model. However, at its foundation, Android relies upon the Linux kernel to protect the system from malicious or flawed apps and to isolate apps from one another. At present, Android leverages Linux discretionary access control (DAC) to enforce these guarantees, despite the known shortcomings of DAC. In this paper, we motivate and describe our work to bring flexible mandatory access control (MAC) to Android by enabling the effective use of Security Enhanced Linux (SELinux) for kernel-level MAC and by developing a set of middleware MAC extensions to the Android permissions model. We then demonstrate the benefits of our security enhancements for Android through a detailed analysis of how they mitigate a number of previously published exploits and vulnerabilities for Android. Finally, we evaluate the overheads imposed by our security enhancements.", "title": "" }, { "docid": "69d9bfd0ba72724e560f499a4807d7e7", "text": "Is it possible to recover an image from its noisy version using convolutional neural networks? This is an interesting problem as convolutional layers are generally used as feature detectors for tasks like classification, segmentation and object detection.
We present a new CNN architecture for blind image denoising which synergically combines three architecture components, a multi-scale feature extraction layer which helps in reducing the effect of noise on feature maps, an ℓp regularizer which helps in selecting only the appropriate feature maps for the task of reconstruction, and finally a three step training approach which leverages adversarial training to give the final performance boost to the model. The proposed model shows competitive denoising performance when compared to the state-of-the-art approaches.", "title": "" }, { "docid": "ef6678881f503c1cec330ddde3e30929", "text": "Complex queries over high speed data streams often need to rely on approximations to keep up with their input. The research community has developed a rich literature on approximate streaming algorithms for this application. Many of these algorithms produce samples of the input stream, providing better properties than conventional random sampling. In this paper, we abstract the stream sampling process and design a new stream sample operator. We show how it can be used to implement a wide variety of algorithms that perform sampling and sampling-based aggregations. Also, we show how to implement the operator in Gigascope - a high speed stream database specialized for IP network monitoring applications. As an example study, we apply the operator within such an enhanced Gigascope to perform subset-sum sampling which is of great interest for IP network management. We evaluate this implementation on a live, high speed internet traffic data stream and find that (a) the operator is a flexible, versatile addition to Gigascope suitable for tuning and algorithm engineering, and (b) the operator imposes only a small evaluation overhead. This is the first operational implementation we know of, for a wide variety of stream sampling algorithms at line speed within a data stream management system.", "title": "" }, { "docid": "48ba3cad9e20162b6dcbb28ead47d997", "text": "This paper compares the accuracy of several variations of the BLEU algorithm when applied to automatically evaluating student essays. The different configurations include closed-class word removal, stemming, two baseline word sense disambiguation procedures, and translating the texts into a simple semantic representation. We also prove empirically that the accuracy is kept when the student answers are translated automatically. Although none of the representations clearly outperform the others, some conclusions are drawn from the results.", "title": "" }, { "docid": "cf419597981ba159ac3c1e85af683871", "text": "Energy is a vital input for social and economic development. As a result of the generalization of agricultural, industrial and domestic activities the demand for energy has increased remarkably, especially in emergent countries. This has meant rapid growth in the level of greenhouse gas emissions and the increase in fuel prices, which are the main driving forces behind efforts to utilize renewable energy sources more effectively, i.e. energy which comes from natural resources and is also naturally replenished. Despite the obvious advantages of renewable energy, it presents important drawbacks, such as the discontinuity of multi-criteria decision analysis", "title": "" }, { "docid": "a9e26514ffc78c1018e00c63296b9584", "text": "When labeled examples are limited and difficult to obtain, transfer learning employs knowledge from a source domain to improve learning accuracy in the target domain.
However, the assumption made by existing approaches, that the marginal and conditional probabilities are directly related between source and target domains, has limited applicability in either the original space or its linear transformations. To solve this problem, we propose an adaptive kernel approach that maps the marginal distribution of target-domain and source-domain data into a common kernel space, and utilize a sample selection strategy to draw conditional probabilities between the two domains closer. We formally show that under the kernel-mapping space, the difference in distributions between the two domains is bounded; and the prediction error of the proposed approach can also be bounded. Experimental results demonstrate that the proposed method outperforms both traditional inductive classifiers and the state-of-the-art boosting-based transfer algorithms on most domains, including text categorization and web page ratings. In particular, it can achieve around 10% higher accuracy than other approaches for the text categorization problem. The source code and datasets are available from the authors.", "title": "" }, { "docid": "e6dd43c6e5143c519b40ab423b403193", "text": "Tables and forms are a very common way to organize information in structured documents. Their recognition is fundamental for the recognition of the documents. Indeed, the physical organization of a table or a form gives a lot of information concerning the logical meaning of the content. This chapter presents the different tasks that are related to the recognition of tables and forms and the associated well-known methods and remaining challenges. Three main tasks are pointed out: the detection of tables in heterogeneous documents; the classification of tables and forms, according to predefined models; and the recognition of table and form contents. The complexity of these three tasks is related to the kind of studied document: image-based document or digital-born documents. At last, this chapter will introduce some existing systems for table and form analysis.", "title": "" }, { "docid": "51b8fe57500d1d74834d1f9faa315790", "text": "Simulations of smoke are pervasive in the production of visual effects for commercials, movies and games: from cigarette smoke and subtle dust to large-scale clouds of soot and vapor emanating from fires and explosions. In this talk we present a new Eulerian method that targets the simulation of such phenomena on a structured spatially adaptive voxel grid --- thereby achieving an improvement in memory usage and computational performance over regular dense and sparse grids at uniform resolution. Contrary to e.g. Setaluri et al. [2014], we use velocities collocated at voxel corners which allows sharper interpolation for spatially adaptive simulations, is faster for sampling, and promotes ease-of-use in an open procedural environment where technical artists often construct small computational graphs that apply forces, dissipation etc. to the velocities. The collocated method requires special treatment when projecting out the divergent velocity modes to prevent non-physical high frequency oscillations (not addressed by Ferstl et al. [2014]).
To this end we explored discretization and filtering methods from computational physics, combining them with a matrix-free adaptive multigrid scheme based on MLAT and FAS [Trottenberg and Schuller 2001]. Finally we contribute a new volumetric quadrature approach to temporally smooth emission which outperforms e.g. Gaussian quadrature at large time steps. We have implemented our method in the cross-platform Autodesk Bifrost procedural environment which facilitates customization by the individual technical artist, and our implementation is in production use at several major studios. We refer the reader to the accompanying video for examples that illustrate our novel workflows for spatially adaptive simulations and the benefits of our approach. We note that several methods for adaptive fluid simulation have been proposed in recent years, e.g. [Ferstl et al. 2014; Setaluri et al. 2014], and we have drawn a lot of inspiration from these. However, to the best of our knowledge we are the first in computer graphics to propose a collocated velocity, spatially adaptive and matrix-free smoke simulation method that explicitly mitigates non-physical divergent modes.", "title": "" }, { "docid": "69179341377477af8ebe9013c664828c", "text": "1. Intensive agricultural practices drive biodiversity loss with potentially drastic consequences for ecosystem services. To advance conservation and production goals, agricultural practices should be compatible with biodiversity. Traditional or less intensive systems (i.e. with fewer agrochemicals, less mechanisation, more crop species) such as shaded coffee and cacao agroforests are highlighted for their ability to provide a refuge for biodiversity and may also enhance certain ecosystem functions (i.e. predation). 2. Ants are an important predator group in tropical agroforestry systems. Generally, ant biodiversity declines with coffee and cacao intensification yet the literature lacks a summary of the known mechanisms for ant declines and how this diversity loss may affect the role of ants as predators. 3. Here, how shaded coffee and cacao agroforestry systems protect biodiversity and may preserve related ecosystem functions is discussed in the context of ants as predators. Specifically, the relationships between biodiversity and predation, links between agriculture and conservation, patterns and mechanisms for ant diversity loss with agricultural intensification, importance of ants as control agents of pests and fungal diseases, and whether ant diversity may influence the functional role of ants as predators are addressed. Furthermore, because of the importance of homopteran-tending by ants in the ecological and agricultural literature, as well as to the success of ants as predators, the costs and benefits of promoting ants in agroforests are discussed. 4. Especially where the diversity of ants and other predators is high, as in traditional agroforestry systems, both agroecosystem function and conservation goals will be advanced by biodiversity protection.", "title": "" }, { "docid": "5eddede4043c78a41eb59a938da6e26b", "text": "In Named-Data Networking (NDN), content is cached in network nodes and served for future requests. This property of NDN allows attackers to inject poisoned content into the network and isolate users from valid content sources. 
Since a digital signature is embedded in every piece of content in NDN architecture, poisoned content is discarded if routers perform signature verification; however, if every content is verified by every router, it would be overly expensive to do. In our preliminary work, we have suggested a content verification scheme that minimizes unnecessary verification and favors already verified content in the content store, which reduces the verification overhead by as much as 90% without failing to detect every piece of poisoned content. Under this scheme, however, routers are vulnerable to verification attack, in which a large amount of unverified content is accessed to exhaust system resources. In this paper, we carefully look at the possible concerns of our preliminary work, including verification attack, and present a simple but effective solution. The proposed solution mitigates the weakness of our preliminary work and allows this paper to be deployed for real-world applications.", "title": "" }, { "docid": "355fca41993ea19b08d2a9fc19e25722", "text": "People and companies selling goods or providing services have always desired to know what people think about their products. The number of opinions on the Web has significantly increased with the emergence of microblogs. In this paper we present a novel method for sentiment analysis of a text that allows the recognition of opinions in microblogs which are connected to a particular target or an entity. This method differs from other approaches in utilizing appraisal theory, which we employ for the analysis of microblog posts. The results of the experiments we performed on Twitter showed that our method improves sentiment classification and is feasible even for such specific content as presented on microblogs.", "title": "" }, { "docid": "7a0ed38af9775a77761d6c089db48188", "text": "We introduce polyglot language models, recurrent neural network models trained to predict symbol sequences in many different languages using shared representations of symbols and conditioning on typological information about the language to be predicted. We apply these to the problem of modeling phone sequences—a domain in which universal symbol inventories and cross-linguistically shared feature representations are a natural fit. Intrinsic evaluation on held-out perplexity, qualitative analysis of the learned representations, and extrinsic evaluation in two downstream applications that make use of phonetic features show (i) that polyglot models better generalize to held-out data than comparable monolingual models and (ii) that polyglot phonetic feature representations are of higher quality than those learned monolingually.", "title": "" }, { "docid": "430c4f8912557f4286d152608ce5eab8", "text": "The latex of the tropical species Carica papaya is well known for being a rich source of the four cysteine endopeptidases papain, chymopapain, glycyl endopeptidase and caricain. Altogether, these enzymes are present in the laticifers at a concentration higher than 1 mM. The proteinases are synthesized as inactive precursors that convert into mature enzymes within 2 min after wounding the plant when the latex is abruptly expelled. Papaya latex also contains other enzymes as minor constituents. Several of these enzymes namely a class-II and a class-III chitinase, an inhibitor of serine proteinases and a glutaminyl cyclotransferase have already been purified up to apparent homogeneity and characterized. 
The presence of a beta-1,3-glucanase and of a cystatin is also suspected but they have not yet been isolated. Purification of these papaya enzymes calls on the use of ion-exchange supports (such as SP-Sepharose Fast Flow) and hydrophobic supports [such as Fractogel TSK Butyl 650(M), Fractogel EMD Propyl 650(S) or Thiophilic gels]. The use of covalent or affinity gels is recommended to provide preparations of cysteine endopeptidases with a high free thiol content (ideally 1 mol of essential free thiol function per mol of enzyme). The selective grafting of activated methoxypoly(ethylene glycol) chains (with M(r) of 5000) on the free thiol functions of the proteinases provides an interesting alternative to the use of covalent and affinity chromatographies especially in the case of enzymes such as chymopapain that contains, in its native state, two thiol functions.", "title": "" }, { "docid": "72e0824602462a21781e9a881041e726", "text": "In an effort to develop a genomics-based approach to the prediction of drug response, we have developed an algorithm for classification of cell line chemosensitivity based on gene expression profiles alone. Using oligonucleotide microarrays, the expression levels of 6,817 genes were measured in a panel of 60 human cancer cell lines (the NCI-60) for which the chemosensitivity profiles of thousands of chemical compounds have been determined. We sought to determine whether the gene expression signatures of untreated cells were sufficient for the prediction of chemosensitivity. Gene expression-based classifiers of sensitivity or resistance for 232 compounds were generated and then evaluated on independent sets of data. The classifiers were designed to be independent of the cells' tissue of origin. The accuracy of chemosensitivity prediction was considerably better than would be expected by chance. Eighty-eight of 232 expression-based classifiers performed accurately (with P < 0.05) on an independent test set, whereas only 12 of the 232 would be expected to do so by chance. These results suggest that at least for a subset of compounds genomic approaches to chemosensitivity prediction are feasible.", "title": "" }, { "docid": "ec1da767db4247990c26f97483f1b9e1", "text": "We survey foundational features underlying modern graph query languages. We first discuss two popular graph data models: edge-labelled graphs, where nodes are connected by directed, labelled edges, and property graphs, where nodes and edges can further have attributes. Next we discuss the two most fundamental graph querying functionalities: graph patterns and navigational expressions. We start with graph patterns, in which a graph-structured query is matched against the data. Thereafter, we discuss navigational expressions, in which patterns can be matched recursively against the graph to navigate paths of arbitrary length; we give an overview of what kinds of expressions have been proposed and how they can be combined with graph patterns. We also discuss several semantics under which queries using the previous features can be evaluated, what effects the selection of features and semantics has on complexity, and offer examples of such features in three modern languages that are used to query graphs: SPARQL, Cypher, and Gremlin. 
We conclude by discussing the importance of formalisation for graph query languages; a summary of what is known about SPARQL, Cypher, and Gremlin in terms of expressivity and complexity; and an outline of possible future directions for the area.", "title": "" }, { "docid": "08d0c860298b03d30a6ef47ec19a2b27", "text": "This survey paper starts with a critical analysis of various performance metrics for supply chain management (SCM), used by a specific manufacturing company. Then it summarizes how economic theory treats multiple performance metrics. Actually, the paper proposes to deal with multiple metrics in SCM via the balanced scorecard — which measures customers, internal processes, innovations, and finance. To forecast how the values of these metrics will change — once a supply chain is redesigned — simulation may be used. This paper distinguishes four simulation types for SCM: (i) spreadsheet simulation, (ii) system dynamics, (iii) discrete-event simulation, and (iv) business games. These simulation types may explain the bullwhip effect, predict fill rate values, and educate and train users. Validation of simulation models requires sensitivity analysis; a statistical methodology is proposed. The paper concludes with suggestions for a possible research agenda in SCM. A list with 50 references for further study is included. Journal of the Operational Research Society (2003) 00, 000–000. doi:10.1057/palgrave.jors.2601539", "title": "" } ]
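The graph-query-language survey above distinguishes graph patterns (fixed-shape matching) from navigational expressions (recursive path traversal). The sketch below is only a rough illustration of that distinction over a toy edge-labelled graph; the example data, labels, and helper functions are invented here and are not drawn from SPARQL, Cypher, or Gremlin.

```python
# Illustrative only: a tiny edge-labelled graph queried two ways, mirroring the
# survey's split between graph patterns and navigational expressions.
# The data, labels, and helper names below are made up for this sketch.
import collections
import networkx as nx

G = nx.MultiDiGraph()
G.add_edge("alice", "bob", label="knows")
G.add_edge("bob", "carol", label="knows")
G.add_edge("carol", "acme", label="worksAt")

def match_pattern(g):
    """Graph pattern: ?x -knows-> ?y -worksAt-> acme (fixed-shape match)."""
    answers = []
    for x, y, d in g.edges(data=True):
        if d["label"] != "knows":
            continue
        for _, z, d2 in g.out_edges(y, data=True):
            if d2["label"] == "worksAt" and z == "acme":
                answers.append((x, y))
    return answers

def reachable_via(g, start, label):
    """Navigational expression: nodes reachable via one or more `label` edges."""
    seen, queue = set(), collections.deque([start])
    while queue:
        node = queue.popleft()
        for _, nxt, d in g.out_edges(node, data=True):
            if d["label"] == label and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(match_pattern(G))                    # [('bob', 'carol')]
print(reachable_via(G, "alice", "knows"))  # {'bob', 'carol'}
```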
scidocsrr
9833a2433885a7438b81d64f39712970
Theoretical Design of Broadband Multisection Wilkinson Power Dividers With Arbitrary Power Split Ratio
[ { "docid": "786d1ba82d326370684395eba5ef7cd3", "text": "A miniaturized dual-band Wilkinson power divider with a parallel LC circuit at the midpoints of two coupled-line sections is proposed in this paper. General design equations for parallel inductor L and capacitor C are derived from even- and odd-mode analysis. Generally speaking, characteristic impedances between even and odd modes are different in two coupled-line sections, and their electrical lengths are also different in inhomogeneous medium. This paper proved that a parallel LC circuit compensates for the characteristic impedance differences and the electrical length differences for dual-band operation. In other words, the proposed model provides self-compensation structure, and no extra compensation circuits are needed. Moreover, the upper limit of the frequency ratio range can be adjusted by two coupling strengths, where loose coupling for the first coupled-line section and tight coupling for the second coupled-line section are preferred for a wider frequency ratio range. Finally, an experimental circuit shows good agreement with the theoretical simulation.", "title": "" } ]
[ { "docid": "6850b52405e8056710f4b3010858cfbe", "text": "spread of misinformation, rumors and hoaxes. The goal of this work is to introduce a simple modeling framework to study the diffusion of hoaxes and in particular how the availability of debunking information may contain their diffusion. As traditionally done in the mathematical modeling of information diffusion processes, we regard hoaxes as viruses: users can become infected if they are exposed to them, and turn into spreaders as a consequence. Upon verification, users can also turn into non-believers and spread the same attitude with a mechanism analogous to that of the hoax-spreaders. Both believers and non-believers, as time passes, can return to a susceptible state. Our model is characterized by four parameters: spreading rate, gullibility, probability to verify a hoax, and that to forget one's current belief. Simulations on homogeneous, heterogeneous, and real networks for a wide range of parameters values reveal a threshold for the fact-checking probability that guarantees the complete removal of the hoax from the network. Via a mean field approximation, we establish that the threshold value does not depend on the spreading rate but only on the gullibility and forgetting probability. Our approach allows to quantitatively gauge the minimal reaction necessary to eradicate a hoax.", "title": "" }, { "docid": "28b2bbcfb8960ff40f2fe456a5b00729", "text": "This paper presents an adaptation of Lesk’s dictionary– based word sense disambiguation algorithm. Rather than using a standard dictionary as the source of glosses for our approach, the lexical database WordNet is employed. This provides a rich hierarchy of semantic relations that our algorithm can exploit. This method is evaluated using the English lexical sample data from the Senseval-2 word sense disambiguation exercise, and attains an overall accuracy of 32%. This represents a significant improvement over the 16% and 23% accuracy attained by variations of the Lesk algorithm used as benchmarks during the Senseval-2 comparative exercise among word sense disambiguation", "title": "" }, { "docid": "ca990b1b43ca024366a2fe73e2a21dae", "text": "Guanabenz (2,6-dichlorobenzylidene-amino-guanidine) is a centrally acting antihypertensive drug whose mechanism of action is via alpha2 adrenoceptors or, more likely, imidazoline receptors. Guanabenz is marketed as an antihypertensive agent in human medicine (Wytensin tablets, Wyeth Pharmaceuticals). Guanabenz has reportedly been administered to racing horses and is classified by the Association of Racing Commissioners International as a class 3 foreign substance. As such, its identification in a postrace sample may result in significant sanctions against the trainer of the horse. The present study examined liquid chromatographic/tandem quadrupole mass spectrometric (LC-MS/MS) detection of guanabenz in serum samples from horses treated with guanabenz by rapid i.v. injection at 0.04 and 0.2 mg/kg. Using a method adapted from previous work with clenbuterol, the parent compound was detected in serum with an apparent limit of detection of approximately 0.03 ng/ml and the limit of quantitation was 0.2 ng/ml. Serum concentrations of guanabenz peaked at approximately 100 ng/ml after the 0.2 mg/kg dose, and the parent compound was detected for up to 8 hours after the 0.04 mg/kg dose. 
Urine samples tested after administration of guanabenz at these dosages yielded evidence of at least one glucuronide metabolite, with the glucuronide ring apparently linked to a ring hydroxyl group or a guanidinium hydroxylamine. The LC-MS/MS results presented here form the basis of a confirmatory test for guanabenz in racing horses.", "title": "" }, { "docid": "c4ab0d1934e5c2eb4fc16915f1868ab8", "text": "During medicine studies, visualization of certain elements is common and indispensable in order to get more information about the way they work. Currently, we resort to the use of photographs - which are insufficient due to being static - or tests in patients, which can be invasive or even risky. Therefore, a low-cost approach is proposed by using a 3D visualization. This paper presents a holographic system built with low-cost materials for teaching obstetrics, where student interaction is performed by using voice and gestures. Our solution, which we called HoloMed, is focused on the projection of a euthocic normal delivery under a web-based infrastructure which also employs a Kinect. HoloMed is divided in three (3) essential modules: a gesture analyzer, a data server, and a holographic projection architecture, which can be executed in several interconnected computers using different network protocols. Tests used for determining the user’s position, illumination factors, and response times, demonstrate HoloMed’s effectiveness as a low-cost system for teaching, using a natural user interface and 3D images.", "title": "" }, { "docid": "4a5c784fd5678666b57c841dfc26f5e8", "text": "This paper demonstrates a methodology to model and evaluate the fault tolerance characteristics of operational software. The methodology is illustrated through case studies on three different operating systems: the Tandem GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Measurements are made on these systems for substantial periods to collect software error and recovery data. In addition to investigating basic dependability characteristics such as major software problems and error distributions, we develop two levels of models to describe error and recovery processes inside an operating system and on multiple instances of an operating system running in a distributed environment. Based on the models, reward analysis is conducted to evaluate the loss of service due to software errors and the effect of the fault-tolerance techniques implemented in the systems. Software error correlation in multicomputer systems is also investigated. Results show that I/O management and program flow control are the major sources of software problems in the measured IBM/MVS and VAX/VMS operating systems, while memory management is the major source of software problems in the Tandem/GUARDIAN operating system. Software errors tend to occur in bursts on both IBM and VAX machines. This phenomenon is less pronounced in the Tandem system, which can be attributed to its fault-tolerant design. The fault tolerance in the Tandem system reduces the loss of service due to software failures by an order of magnitude. Although the measured Tandem system is an experimental system working under accelerated stresses, the loss of service due to software problems is much smaller than that in the measured VAX/VMS and IBM/MVS systems. It is shown that the software Time To Error distributions obtained from data are not simple exponentials.
This is in contrast with the common assumption of exponential failure times made in fault-tolerant software models. Investigation of error correlations shows that about 10% of software failures in the VAXcluster and 20% in the Tandem system occurred concurrently on multiple machines. The network-related software in the VAXcluster and the memory management software in the Tandem system are suspected to be software reliability bottlenecks for concurrent failures.", "title": "" }, { "docid": "b27dc4a19b44bf2fd13f299de8c33108", "text": "A large proportion of the world’s population lives in remote rural areas that are geographically isolated and sparsely populated. This paper proposes a hybrid power generation system suitable for remote area application. The concept of hybridizing renewable energy sources is that the base load is to be covered by the largest and most firmly available renewable source(s), while other intermittent source(s) should augment the base load to cover the peak load of an isolated mini electric grid system. The study is based on modeling, simulation and optimization of a renewable energy system in a rural area in the Sundargarh district of Orissa state, India. The model has been designed to provide an optimal system configuration based on hour-by-hour data for energy availability and demands. Various renewable/alternative energy sources, energy storage and their applicability in terms of cost and performance are discussed. The HOMER software is used to study and design the proposed hybrid alternative energy power system model. Sensitivity analysis was carried out using the HOMER program. Based on simulation results, it has been found that renewable/alternative energy sources will replace the conventional energy sources and would be a feasible solution for distribution of electric power for stand-alone applications at remote and distant locations.", "title": "" }, { "docid": "d0bacaa267599486356c175ca5419ede", "text": "As P4 and its associated compilers move beyond relative immaturity, there is a need for common evaluation criteria. In this paper, we propose Whippersnapper, a set of benchmarks for P4. Rather than simply selecting a set of representative data-plane programs, the benchmark is designed from first principles, identifying and exploring key features and metrics. We believe the benchmark will not only provide a vehicle for comparing implementations and designs, but will also generate discussion within the larger community about the requirements for data-plane languages.", "title": "" }, { "docid": "5399b924cdf1d034a76811360b6c018d", "text": "Psychological construction models of emotion state that emotions are variable concepts constructed by fundamental psychological processes, whereas according to basic emotion theory, emotions cannot be divided into more fundamental units and each basic emotion is represented by a unique and innate neural circuitry. In a previous study, we found evidence for the psychological construction account by showing that several brain regions were commonly activated when perceiving different emotions (i.e. a general emotion network). Moreover, this set of brain regions included areas associated with core affect, conceptualization and executive control, as predicted by psychological construction models.
Here we investigate directed functional brain connectivity in the same dataset to address two questions: 1) is there a common pathway within the general emotion network for the perception of different emotions and 2) if so, does this common pathway contain information to distinguish between different emotions? We used generalized psychophysiological interactions and information flow indices to examine the connectivity within the general emotion network. The results revealed a general emotion pathway that connects neural nodes involved in core affect, conceptualization, language and executive control. Perception of different emotions could not be accurately classified based on the connectivity patterns from the nodes of the general emotion pathway. Successful classification was achieved when connections outside the general emotion pathway were included. We propose that the general emotion pathway functions as a common pathway within the general emotion network and is involved in shared basic psychological processes across emotions. However, additional connections within the general emotion network are required to classify different emotions, consistent with a constructionist account.", "title": "" }, { "docid": "64dc0a4b8392efc03b20fef7437eb55c", "text": "This paper investigates how retailers at different stages of e-commerce maturity evaluate their entry to e-commerce activities. The study was conducted using qualitative approach interviewing 16 retailers in Saudi Arabia. It comes up with 22 factors that are believed the most influencing factors for retailers in Saudi Arabia. Interestingly, there seem to be differences between retailers in companies at different maturity stages in terms of having different attitudes regarding the issues of using e-commerce. The businesses that have reached a high stage of e-commerce maturity provide practical evidence of positive and optimistic attitudes and practices regarding use of e-commerce, whereas the businesses that have not reached higher levels of maturity provide practical evidence of more negative and pessimistic attitudes and practices. The study, therefore, should contribute to efforts leading to greater e-commerce development in Saudi Arabia and other countries with similar context.", "title": "" }, { "docid": "c21c58dbdf413a54036ac5e6849f81e1", "text": "We discuss the problem of extending data mining approaches to cases in which data points arise in the form of individual graphs. Being able to find the intrinsic low-dimensionality in ensembles of graphs can be useful in a variety of modeling contexts, especially when coarse-graining the detailed graph information is of interest. One of the main challenges in mining graph data is the definition of a suitable pairwise similarity metric in the space of graphs. We explore two practical solutions to solving this problem: one based on finding subgraph densities, and one using spectral information. The approach is illustrated on three test data sets (ensembles of graphs); two of these are obtained from standard graph generating algorithms, while the graphs in the third example are sampled as dynamic snapshots from an evolving network simulation.", "title": "" }, { "docid": "7875910ad044232b4631ecacfec65656", "text": "In this study, a questionnaire (Cyberbullying Questionnaire, CBQ) was developed to assess the prevalence of numerous modalities of cyberbullying (CB) in adolescents. 
The association of CB with the use of other forms of violence, exposure to violence, acceptance and rejection by peers was also examined. In the study, participants were 1431 adolescents, aged between 12 and17 years (726 girls and 682 boys). The adolescents responded to the CBQ, measures of reactive and proactive aggression, exposure to violence, justification of the use of violence, and perceived social support of peers. Sociometric measures were also used to assess the use of direct and relational aggression and the degree of acceptance and rejection by peers. The results revealed excellent psychometric properties for the CBQ. Of the adolescents, 44.1% responded affirmatively to at least one act of CB. Boys used CB to greater extent than girls. Lastly, CB was significantly associated with the use of proactive aggression, justification of violence, exposure to violence, and less perceived social support of friends. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ff4c2f1467a141894dbe76491bc06d3b", "text": "Railways is the major means of transport in most of the countries. Rails are the backbone of the track structure and should be protected from defects. Surface defects are irregularities in the rails caused due to the shear stresses between the rails and wheels of the trains. This type of defects should be detected to avoid rail fractures. The objective of this paper is to propose an innovative technique to detect the surface defect on rail heads. In order to identify the defects, it is essential to extract the rails from the background and further enhance the image for thresholding. The proposed method uses Binary Image Based Rail Extraction (BIBRE) algorithm to extract the rails from the background. The extracted rails are enhanced to achieve uniform background with the help of direct enhancement method. The direct enhancement method enhance the image by enhancing the brightness difference between objects and their backgrounds. The enhanced rail image uses Gabor filters to identify the defects from the rails. The Gabor filters maximizes the energy difference between defect and defect less surface. Thresholding is done based on the energy of the defects. From the thresholded image the defects are identified and a message box is generated when there is a presence of defects.", "title": "" }, { "docid": "c9cd19c2e8ee4b07f969280672d521bf", "text": "The owner and users of a sensor network may be different, which necessitates privacy-preserving access control. On the one hand, the network owner need enforce strict access control so that the sensed data are only accessible to users willing to pay. On the other hand, users wish to protect their respective data access patterns whose disclosure may be used against their interests. This paper presents DP2AC, a Distributed Privacy-Preserving Access Control scheme for sensor networks, which is the first work of its kind. Users in DP2AC purchase tokens from the network owner whereby to query data from sensor nodes which will reply only after validating the tokens. The use of blind signatures in token generation ensures that tokens are publicly verifiable yet unlinkable to user identities, so privacy-preserving access control is achieved. A central component in DP2AC is to prevent malicious users from reusing tokens, for which we propose a suite of distributed token reuse detection (DTRD) schemes without involving the base station. 
These schemes share the essential idea that a sensor node checks with some other nodes (called witnesses) whether a token has been used, but they differ in how the witnesses are chosen. We thoroughly compare their performance with regard to TRD capability, communication overhead, storage overhead, and attack resilience. The efficacy and efficiency of DP2AC are confirmed by detailed performance evaluations.", "title": "" }, { "docid": "21e17ad2d2a441940309b7eacd4dec6e", "text": "With a huge amount of data stored in spatial databases and the introduction of spatial components to many relational or object-relational databases, it is important to study the methods for spatial data warehousing and OLAP of spatial data. In this paper, we study methods for spatial OLAP, by integration of nonspatial OLAP methods with spatial database implementation techniques. A spatial data warehouse model, which consists of both spatial and nonspatial dimensions and measures, is proposed. Methods for computation of spatial data cubes and analytical processing on such spatial data cubes are studied, with several strategies proposed, including approximation and selective materialization of the spatial objects resulting from spatial OLAP operations. The focus of our study is on a method for spatial cube construction, called object-based selective materialization, which is different from cuboid-based selective materialization proposed in previous studies of nonspatial data cube construction. Rather than using a cuboid as an atomic structure during the selective materialization, we explore granularity on a much finer level, that of a single cell of a cuboid. Several algorithms are proposed for object-based selective materialization of spatial data cubes and the performance study has demonstrated the effectiveness of these techniques. Index Terms: Data warehouse, data mining, online analytical processing (OLAP), spatial databases, spatial data analysis, spatial", "title": "" }, { "docid": "b7bf3ae864ce774874041b0e5308323f", "text": "This paper examines factors that influence the prices of the five most common cryptocurrencies, namely Bitcoin, Ethereum, Dash, Litecoin, and Monero, over 2010-2018 using weekly data. The study employs the ARDL technique and documents several findings. First, cryptomarket-related factors such as market beta, trading volume, and volatility appear to be significant determinants for all five cryptocurrencies in both the short and long run. Second, the attractiveness of cryptocurrencies also matters in terms of their price determination, but only in the long run. This indicates that formation (recognition) of the attractiveness of cryptocurrencies is subject to a time factor. In other words, it travels slowly within the market. Third, the SP500 index seems to have a weak positive long-run impact on Bitcoin, Ethereum, and Litecoin, while its sign turns negative and loses significance in the short run, except for Bitcoin, which generates an estimate of -0.20 at the 10% significance level. Lastly, error-correction models for Bitcoin, Ethereum, Dash, Litecoin, and Monero show that cointegrated series cannot drift too far apart, and converge to a long-run equilibrium at a speed of 23.68%, 12.76%, 10.20%, 22.91%, and 14.27% respectively.", "title": "" }, { "docid": "85fc78cc3f71b784063b8b564e6509a9", "text": "Numerous research papers have listed different vectors of personally identifiable information leaking via traditional and mobile Online Social Networks (OSNs) and highlighted the ongoing aggregation of data about users visiting popular Web sites.
We argue that the landscape is worsening and existing proposals (including the recent U.S. Federal Trade Commission’s report) do not address several key issues. We examined over 100 popular non-OSN Web sites across a number of categories where tens of millions of users representing diverse demographics have accounts, to see if these sites leak private information to prominent aggregators. Our results raise considerable concerns: we see leakage in sites for every category we examined; fully 56% of the sites directly leak pieces of private information with this result growing to 75% if we also include leakage of a site userid. Sensitive search strings sent to healthcare Web sites and travel itineraries on flight reservation sites are leaked in 9 of the top 10 sites studied for each category. The community needs a clear understanding of the shortcomings of existing privacy protection measures and the new proposals. The growing disconnect between the protection measures and increasing leakage and linkage suggests that we need to move beyond the losing battle with aggregators and examine what roles first-party sites can play in protecting the privacy of their users.", "title": "" }, { "docid": "587f7821fc7ecfe5b0bbbd3b08b9afe2", "text": "The most commonly used method for cuffless blood pressure (BP) measurement is pulse transit time (PTT), which is based on the Moens-Korteweg (M-K) equation under the assumption that arterial geometries such as the arterial diameter remain unchanged. However, the arterial diameter is dynamic, varying over the cardiac cycle, and it is regulated through the contraction or relaxation of the vascular smooth muscle innervated primarily by the sympathetic nervous system. This may be one of the main reasons that impair the BP estimation accuracy. In this paper, we propose a novel indicator, the photoplethysmogram (PPG) intensity ratio (PIR), to evaluate the arterial diameter change. The deep breathing (DB) maneuver and Valsalva maneuver (VM) were performed on five healthy subjects for assessing parasympathetic and sympathetic nervous activities, respectively. Heart rate (HR), PTT, PIR and BP were measured from the simultaneously recorded electrocardiogram (ECG), PPG, and continuous BP. It was found that PIR increased significantly from inspiration to expiration during DB, whilst BP dipped correspondingly. Nevertheless, PIR changed positively with BP during VM. In addition, the spectral analysis revealed that the dominant frequency component of PIR, HR and SBP shifted significantly from high frequency (HF) to low frequency (LF), but not obviously in that of PTT. These results demonstrated that PIR can potentially be used to evaluate the smooth muscle tone which modulates arterial BP in the LF range. A PTT-based BP measurement that takes into account the PIR could therefore improve its estimation accuracy.", "title": "" }, { "docid": "9b17c6ff30e91f88e52b2db4eb331478", "text": "Network traffic classification has become significantly important with the rapid growth of the Internet and online applications. There have been numerous studies on this topic which have led to many different approaches. Most of these approaches use predefined features extracted by an expert in order to classify network traffic. In contrast, in this study, we propose a deep learning based approach which integrates both feature extraction and classification phases into one system.
Our proposed scheme, called “Deep Packet,” can handle both traffic characterization, in which the network traffic is categorized into major classes (e.g., FTP and P2P), and application identification in which identification of end-user applications (e.g., BitTorrent and Skype) is desired. Contrary to the most of current methods, Deep Packet can identify encrypted traffic and also distinguishes between VPN and non-VPN network traffic. After an initial pre-processing phase on data, packets are fed into Deep Packet framework that embeds stacked autoencoder and convolution neural network (CNN) in order to classify network traffic. Deep packet with CNN as its classification model achieved F1 score of 0.95 in application identification task and it also accomplished F1 score of 0.97 in traffic characterization task. To the best of our knowledge, Deep Packet outperforms all of the proposed classification methods on UNB ISCX VPN-nonVPN dataset.", "title": "" }, { "docid": "9fd5e182851ff0be67e8865c336a1f77", "text": "Following the developments of wireless and mobile communication technologies, mobile-commerce (M-commerce) has become more and more popular. However, most of the existing M-commerce protocols do not consider the user anonymity during transactions. This means that it is possible to trace the identity of a payer from a M-commerce transaction. Luo et al. in 2014 proposed an NFC-based anonymous mobile payment protocol. It used an NFC-enabled smartphone and combined a built-in secure element (SE) as a trusted execution environment to build an anonymous mobile payment service. But their scheme has several problems and cannot be functional in practice. In this paper, we introduce a new NFC-based anonymous mobile payment protocol. Our scheme has the following features:(1) Anonymity. It prevents the disclosure of user's identity by using virtual identities instead of real identity during the transmission. (2) Efficiency. Confidentiality is achieved by symmetric key cryptography instead of public key cryptography so as to increase the performance. (3) Convenience. The protocol is based on NFC and is EMV compatible. (4) Security. All the transaction is either encrypted or signed by the sender so the confidentiality and authenticity are preserved.", "title": "" }, { "docid": "3d04155f68912f84b02788f93e9da74c", "text": "Data partitioning significantly improves the query performance in distributed database systems. A large number of techniques have been proposed to efficiently partition a dataset for a given query workload. However, many modern analytic applications involve ad-hoc or exploratory analysis where users do not have a representative query workload upfront. Furthermore, workloads change over time as businesses evolve or as analysts gain better understanding of their data. Static workload-based data partitioning techniques are therefore not suitable for such settings. In this paper, we describe the demonstration of Amoeba, a distributed storage system which uses adaptive multi-attribute data partitioning to efficiently support ad-hoc as well as recurring queries. Amoeba applies a robust partitioning algorithm such that ad-hoc queries on all attributes have similar performance gains. Thereafter, Amoeba adaptively repartitions the data based on the observed query sequence, i.e., the system improves over time. All along Amoeba offers both adaptivity (i.e., adjustments according to workload changes) as well as robustness (i.e., avoiding performance spikes due to workload changes). 
We propose to demonstrate Amoeba on scenarios from an internet-of-things startup that tracks user driving patterns. We invite the audience to interactively fire fast ad-hoc queries, observe multi-dimensional adaptivity, and play with a robust/reactive knob in Amoeba. The web front end displays the layout changes and runtime costs, and compares them to Spark with both default and workload-aware partitioning.", "title": "" } ]
scidocsrr
4aa78772e58fc845b17d5a6588b80637
3D feature points detection on sparse and non-uniform pointcloud for SLAM
[ { "docid": "33915af49384d028a591d93336feffd6", "text": "This paper presents a new approach for recognition of 3D objects that are represented as 3D point clouds. We introduce a new 3D shape descriptor called Intrinsic Shape Signature (ISS) to characterize a local/semi-local region of a point cloud. An intrinsic shape signature uses a view-independent representation of the 3D shape to match shape patches from different views directly, and a view-dependent transform encoding the viewing geometry to facilitate fast pose estimation. In addition, we present a highly efficient indexing scheme for the high dimensional ISS shape descriptors, allowing for fast and accurate search of large model databases. We evaluate the performance of the proposed algorithm on a very challenging task of recognizing different vehicle types using a database of 72 models in the presence of sensor noise, obscuration and scene clutter.", "title": "" } ]
[ { "docid": "938afbc53340a3aa6e454d17789bf021", "text": "BACKGROUND\nAll cultural groups in the world place paramount value on interpersonal trust. Existing research suggests that although accurate judgments of another's trustworthiness require extensive interactions with the person, we often make trustworthiness judgments based on facial cues on the first encounter. However, little is known about what facial cues are used for such judgments and what the bases are on which individuals make their trustworthiness judgments.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nIn the present study, we tested the hypothesis that individuals may use facial attractiveness cues as a \"shortcut\" for judging another's trustworthiness due to the lack of other more informative and in-depth information about trustworthiness. Using data-driven statistical models of 3D Caucasian faces, we compared facial cues used for judging the trustworthiness of Caucasian faces by Caucasian participants who were highly experienced with Caucasian faces, and the facial cues used by Chinese participants who were unfamiliar with Caucasian faces. We found that Chinese and Caucasian participants used similar facial cues to judge trustworthiness. Also, both Chinese and Caucasian participants used almost identical facial cues for judging trustworthiness and attractiveness.\n\n\nCONCLUSIONS/SIGNIFICANCE\nThe results suggest that without opportunities to interact with another person extensively, we use the less racially specific and more universal attractiveness cues as a \"shortcut\" for trustworthiness judgments.", "title": "" }, { "docid": "b017fd773265c73c7dccad86797c17b8", "text": "Active learning, which has a strong impact on processing data prior to the classification phase, is an active research area within the machine learning community, and is now being extended for remote sensing applications. To be effective, classification must rely on the most informative pixels, while the training set should be as compact as possible. Active learning heuristics provide capability to select unlabeled data that are the “most informative” and to obtain the respective labels, contributing to both goals. Characteristics of remotely sensed image data provide both challenges and opportunities to exploit the potential advantages of active learning. We present an overview of active learning methods, then review the latest techniques proposed to cope with the problem of interactive sampling of training pixels for classification of remotely sensed data with support vector machines (SVMs). We discuss remote sensing specific approaches dealing with multisource and spatially and time-varying data, and provide examples for high-dimensional hyperspectral imagery.", "title": "" }, { "docid": "e7bf90ed4a5b4a509f41a7afc7ffde1e", "text": "Previous theorizing by clinical psychologists suggests that adolescent narcissism may be related to parenting practices (Kernberg, 1975; Kohut, 1977). Two studies investigated the relations between parenting dimensions (i.e., warmth, monitoring, and psychological control) and narcissism both with and without removing from narcissism variance associated with trait self-esteem. Two hundred and twenty-two college students (Study 1) and 212 high school students (Study 2) completed the Narcissistic Personality Inventory, a trait self-esteem scale, and standard measures of the three parenting dimensions. Parental warmth was associated positively and monitoring was associated negatively with both types of narcissism. 
Psychological control was positively associated with narcissism scores from which trait self-esteem variance had been removed. Clinical implications of the findings are discussed, limitations are addressed, and future research directions are suggested.", "title": "" }, { "docid": "7621e0dcdad12367dc2cfcd12d31c719", "text": "Microblogging sites have emerged as major platforms for bloggers to create and consume posts as well as to follow other bloggers and get informed of their updates. Due to the large number of users, and the huge amount of posts they create, it becomes extremely difficult to identify relevant and interesting blog posts. In this paper, we propose a novel convex collective matrix completion (CCMC) method that effectively utilizes user-item matrix and incorporates additional user activity and topic-based signals to recommend relevant content. The key advantage of CCMC over existing methods is that it can obtain a globally optimal solution and can easily scale to large-scale matrices using Hazan’s algorithm. To the best of our knowledge, this is the first work which applies and studies CCMC as a recommendation method in social media. We conduct a large scale study and show significant improvement over existing state-ofthe-art approaches.", "title": "" }, { "docid": "bd516d0b64e483d2210b20e4905ecd52", "text": "With the rapid growth of the internet and the spread of the information contained therein, the volume of information available on the web is more than the ability of users to manage, capture and keep the information up to date. One solution to this problem are personalization and recommender systems. Recommender systems use the comments of the group of users so that, to help people in that group more effectively to identify their favorite items from a huge set of choices. In recent years, the web has seen very strong growth in the use of blogs. Considering the high volume of information in blogs, bloggers are in trouble to find the desired information and find blogs with similar thoughts and desires. Therefore, considering the mass of information for the blogs, a blog recommender system seems to be necessary. In this paper, by combining different methods of clustering and collaborative filtering, personalized recommender system for Persian blogs is suggested.", "title": "" }, { "docid": "d6cf367f29ed1c58fb8fd0b7edf69458", "text": "Diabetes mellitus is a chronic disease that leads to complications including heart disease, stroke, kidney failure, blindness and nerve damage. Type 2 diabetes, characterized by target-tissue resistance to insulin, is epidemic in industrialized societies and is strongly associated with obesity; however, the mechanism by which increased adiposity causes insulin resistance is unclear. Here we show that adipocytes secrete a unique signalling molecule, which we have named resistin (for resistance to insulin). Circulating resistin levels are decreased by the anti-diabetic drug rosiglitazone, and increased in diet-induced and genetic forms of obesity. Administration of anti-resistin antibody improves blood sugar and insulin action in mice with diet-induced obesity. Moreover, treatment of normal mice with recombinant resistin impairs glucose tolerance and insulin action. Insulin-stimulated glucose uptake by adipocytes is enhanced by neutralization of resistin and is reduced by resistin treatment. 
Resistin is thus a hormone that potentially links obesity to diabetes.", "title": "" }, { "docid": "2b23723ab291aeff31781cba640b987b", "text": "As the urban population is increasing, more and more cars are circulating in the city to search for parking spaces which contributes to the global problem of traffic congestion. To alleviate the parking problems, smart parking systems must be implemented. In this paper, the background on parking problems is introduced and relevant algorithms, systems, and techniques behind the smart parking are reviewed and discussed. This paper provides a good insight into the guidance, monitoring and reservations components of the smart car parking and directions to the future development.", "title": "" }, { "docid": "c63fa63e8af9d5b25ca7f40a710cfcc2", "text": "With the recent development of deep learning, research in AI has gained new vigor and prominence. While machine learning has succeeded in revitalizing many research fields, such as computer vision, speech recognition, and medical diagnosis, we are yet to witness impressive progress in natural language understanding. One of the reasons behind this unmatched expectation is that, while a bottom-up approach is feasible for pattern recognition, reasoning and understanding often require a top-down approach. In this work, we couple sub-symbolic and symbolic AI to automatically discover conceptual primitives from text and link them to commonsense concepts and named entities in a new three-level knowledge representation for sentiment analysis. In particular, we employ recurrent neural networks to infer primitives by lexical substitution and use them for grounding common and commonsense knowledge by means of multi-dimensional scaling.", "title": "" }, { "docid": "7f7a67af972d26746ce1ae0c7ec09499", "text": "We describe Microsoft's conversational speech recognition system, in which we combine recent developments in neural-network-based acoustic and language modeling to advance the state of the art on the Switchboard recognition task. Inspired by machine learning ensemble techniques, the system uses a range of convolutional and recurrent neural networks. I-vector modeling and lattice-free MMI training provide significant gains for all acoustic model architectures. Language model rescoring with multiple forward and backward running RNNLMs, and word posterior-based system combination provide a 20% boost. The best single system uses a ResNet architecture acoustic model with RNNLM rescoring, and achieves a word error rate of 6.9% on the NIST 2000 Switchboard task. The combined system has an error rate of 6.2%, representing an improvement over previously reported results on this benchmark task.", "title": "" }, { "docid": "f31555cb1720843ec4921428dc79449e", "text": "Software architectures shift developers’ focus from lines-of-code to coarser-grained architectural elements and their interconnection structure. Architecture description languages (ADLs) have been proposed as modeling notations to support architecture-based development. There is, however, little consensus in the research community on what is an ADL, what aspects of an architecture should be modeled in an ADL, and which ADL is best suited for a particular problem. Furthermore, the distinction is rarely made between ADLs on one hand and formal specification, module interconnection, simulation, and programming languages on the other. This paper attempts to provide an answer to these questions. 
It motivates and presents a definition and a classification framework for ADLs. The utility of the definition is demonstrated by using it to differentiate ADLs from other modeling notations. The framework is used to classify and compare several existing ADLs.1", "title": "" }, { "docid": "25a94dbd1c02a6183df945d4684a0f31", "text": "The success of applying policy gradient reinforcement learning (RL) to difficult control tasks hinges crucially on the ability to determine a sensible initialization for the policy. Transfer learning methods tackle this problem by reusing knowledge gleaned from solving other related tasks. In the case of multiple task domains, these algorithms require an inter-task mapping to facilitate knowledge transfer across domains. However, there are currently no general methods to learn an inter-task mapping without requiring either background knowledge that is not typically present in RL settings, or an expensive analysis of an exponential number of inter-task mappings in the size of the state and action spaces. This paper introduces an autonomous framework that uses unsupervised manifold alignment to learn intertask mappings and effectively transfer samples between different task domains. Empirical results on diverse dynamical systems, including an application to quadrotor control, demonstrate its effectiveness for cross-domain transfer in the context of policy gradient RL. Introduction Policy gradient reinforcement learning (RL) algorithms have been applied with considerable success to solve highdimensional control problems, such as those arising in robotic control and coordination (Peters & Schaal 2008). These algorithms use gradient ascent to tune the parameters of a policy to maximize its expected performance. Unfortunately, this gradient ascent procedure is prone to becoming trapped in local maxima, and thus it has been widely recognized that initializing the policy in a sensible manner is crucial for achieving optimal performance. For instance, one typical strategy is to initialize the policy using human demonstrations (Peters & Schaal 2006), which may be infeasible when the task cannot be easily solved by a human. This paper explores a different approach: instead of initializing the policy at random (i.e., tabula rasa) or via human demonstrations, we instead use transfer learning (TL) to initialize the policy for a new target domain based on knowledge from one or more source tasks. In RL transfer, the source and target tasks may differ in their formulations (Taylor & Stone 2009). In particular, Copyright c © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. when the source and target tasks have different state and/or action spaces, an inter-task mapping (Taylor et al. 2007a) that describes the relationship between the two tasks is typically needed. This paper introduces a framework for autonomously learning an inter-task mapping for cross-domain transfer in policy gradient RL. First, we learn an inter-state mapping (i.e., a mapping between states in two tasks) using unsupervised manifold alignment. Manifold alignment provides a powerful and general framework that can discover a shared latent representation to capture intrinsic relations between different tasks, irrespective of their dimensionality. The alignment also yields an implicit inter-action mapping that is generated by mapping tracking states from the source to the target. 
Given the mapping between task domains, source task trajectories are then used to initialize a policy in the target task, significantly improving the speed of subsequent learning over an uninformed initialization. This paper provides the following contributions. First, we introduce a novel unsupervised method for learning interstate mappings using manifold alignment. Second, we show that the discovered subspace can be used to initialize the target policy. Third, our empirical validation conducted on four dissimilar and dynamically chaotic task domains (e.g., controlling a three-link cart-pole and a quadrotor aerial vehicle) shows that our approach can a) automatically learn an inter-state mapping across MDPs from the same domain, b) automatically learn an inter-state mapping across MDPs from very different domains, and c) transfer informative initial policies to achieve higher initial performance and reduce the time needed for convergence to near-optimal behavior.", "title": "" }, { "docid": "afabc44116cc1141c00c3528f1509c18", "text": "Low-rank representation (LRR) has recently attracted a great deal of attention due to its pleasing efficacy in exploring low-dimensional subspace structures embedded in data. For a given set of observed data corrupted with sparse errors, LRR aims at learning a lowest-rank representation of all data jointly. LRR has broad applications in pattern recognition, computer vision and signal processing. In the real world, data often reside on low-dimensional manifolds embedded in a high-dimensional ambient space. However, the LRR method does not take into account the non-linear geometric structures within data, thus the locality and similarity information among data may be missing in the learning process. To improve LRR in this regard, we propose a general Laplacian regularized low-rank representation framework for data representation where a hypergraph Laplacian regularizer can be readily introduced into, i.e., a Non-negative Sparse Hyper-Laplacian regularized LRR model (NSHLRR). By taking advantage of the graph regularizer, our proposed method not only can represent the global low-dimensional structures, but also capture the intrinsic non-linear geometric information in data. The extensive experimental results on image clustering, semi-supervised image classification and dimensionality reduction tasks demonstrate the effectiveness of the proposed method.", "title": "" }, { "docid": "1448b02c9c14e086a438d76afa1b2fde", "text": "This paper analyzes the classification of hyperspectral remote sensing images with linear discriminant analysis (LDA) in the presence of a small ratio between the number of training samples and the number of spectral features. In these particular ill-posed problems, a reliable LDA requires one to introduce regularization for problem solving. Nonetheless, in such a challenging scenario, the resulting regularized LDA (RLDA) is highly sensitive to the tuning of the regularization parameter. In this context, we introduce in the remote sensing community an efficient version of the RLDA recently presented by Ye to cope with critical ill-posed problems. In addition, several LDA-based classifiers (i.e., penalized LDA, orthogonal LDA, and uncorrelated LDA) are compared theoretically and experimentally with the standard LDA and the RLDA. Method differences are highlighted through toy examples and are exhaustively tested on several ill-posed problems related to the classification of hyperspectral remote sensing images. 
Experimental results confirm the effectiveness of the presented RLDA technique and point out the main properties of other analyzed LDA techniques in critical ill-posed hyperspectral image classification problems.", "title": "" }, { "docid": "329343cec99c221e6f6ce8e3f1dbe83f", "text": "Artificial Neural Networks (ANN) play a very vital role in making stock market predictions. As per the literature survey, various researchers have used various approaches to predict the prices of stock market. Some popular approaches used by researchers are Artificial Neural Networks, Genetic Algorithms, Fuzzy Logic, Auto Regressive Models and Support Vector Machines. This study presents ANN based computational approach for predicting the one day ahead closing prices of companies from the three different sectors:IT Sector (Wipro, TCS and Infosys), Automobile Sector (Maruti Suzuki Ltd.) and Banking Sector (ICICI Bank). Different types of artificial neural networks based models like Back Propagation Neural Network (BPNN), Radial Basis Function Neural Network (RBFNN), Generalized Regression Neural Network (GRNN) and Layer Recurrent Neural Network (LRNN) have been studied and used to forecast the short term and long term share prices of Wipro, TCS, Infosys, Maruti Suzuki and ICICI Bank. All the networks were trained with the 1100 days of trading data and predicted the prices up to next 6 months. Predicted output was generated through available historical data. Experimental results show that BPNN model gives minimum error (MSE) as compared to the RBFNN and GRNN models. GRNN model performs better as compared to RBFNN model. Forecasting performance of LRNN model is found to be much better than other three models. Keywordsartificial intelligence, back propagation, mean square error, artificial neural network.", "title": "" }, { "docid": "0ac9ad839f21bd03342dd786b09155fe", "text": "Graphs are fundamental data structures which concisely capture the relational structure in many important real-world domains, such as knowledge graphs, physical and social interactions, language, and chemistry. Here we introduce a powerful new approach for learning generative models over graphs, which can capture both their structure and attributes. Our approach uses graph neural networks to express probabilistic dependencies among a graph’s nodes and edges, and can, in principle, learn distributions over any arbitrary graph. In a series of experiments our results show that once trained, our models can generate good quality samples of both synthetic graphs as well as real molecular graphs, both unconditionally and conditioned on data. Compared to baselines that do not use graph-structured representations, our models often perform far better. We also explore key challenges of learning generative models of graphs, such as how to handle symmetries and ordering of elements during the graph generation process, and offer possible solutions. Our work is the first and most general approach for learning generative models over arbitrary graphs, and opens new directions for moving away from restrictions of vectorand sequence-like knowledge representations, toward more expressive and flexible relational data structures.", "title": "" }, { "docid": "33fe68214ea062f2cdb310a74a9d6d8b", "text": "In this study, the authors examine the relationship between abusive supervision and employee workplace deviance. The authors conceptualize abusive supervision as a type of aggression. 
They use work on retaliation and direct and displaced aggression as a foundation for examining employees' reactions to abusive supervision. The authors predict abusive supervision will be related to supervisor-directed deviance, organizational deviance, and interpersonal deviance. Additionally, the authors examine the moderating effects of negative reciprocity beliefs. They hypothesized that the relationship between abusive supervision and supervisor-directed deviance would be stronger when individuals hold higher negative reciprocity beliefs. The results support this hypothesis. The implications of the results for understanding destructive behaviors in the workplace are examined.", "title": "" }, { "docid": "777f87414c0185739a92bbdb0f6aa994", "text": "Limb apraxia (LA), is a neuropsychological syndrome characterized by difficulty in performing gestures and may therefore be an ideal model for investigating whether action execution deficits are causatively linked to deficits in action understanding. We tested 33 left brain-damaged patients and 8 right brain-damaged patients for the presence of the LA. Importantly, we also tested all the patients in an ad hoc developed gesture recognition task wherein an actor performs, either correctly or incorrectly, transitive (using objects) or intransitive (without objects) meaningful conventional limb gestures. Patients were instructed to judge whether the observed gesture was correct or incorrect. Lesion analysis enabled us to evaluate the relationship between specific brain regions and behavioral performance in gesture execution and gesture comprehension. We found that LA was present in 21 left brain-damaged patients and it was linked to frontal and parietal lesions. Moreover, we found that recognition of correct execution of familiar gestures performed by others was more impaired in patients with LA than in nonapraxic patients. Crucially, the gesture comprehension deficit correlated with damage to the opercular and triangularis portions of the inferior frontal gyrus, two regions that are involved in complex aspects of action-related processing. In contrast, no such relationship was observed with lesions centered on the inferior parietal cortex. The present findings suggest that lesions to left frontal regions that are involved in planning and performing actions are causatively associated with deficits in the recognition of the correct execution of meaningful gestures.", "title": "" }, { "docid": "c3e8960170cb72f711263e7503a56684", "text": "BACKGROUND\nThe deltoid ligament has both superficial and deep layers and consists of up to six ligamentous bands. The prevalence of the individual bands is variable, and no consensus as to which bands are constant or variable exists. Although other studies have looked at the variance in the deltoid anatomy, none have quantified the distance to relevant osseous landmarks.\n\n\nMETHODS\nThe deltoid ligaments from fourteen non-paired, fresh-frozen cadaveric specimens were isolated and the ligamentous bands were identified. The lengths, footprint areas, orientations, and distances from relevant osseous landmarks were measured with a three-dimensional coordinate measurement device.\n\n\nRESULTS\nIn all specimens, the tibionavicular, tibiospring, and deep posterior tibiotalar ligaments were identified. Three additional bands were variable in our specimen cohort: the tibiocalcaneal, superficial posterior tibiotalar, and deep anterior tibiotalar ligaments. 
The deep posterior tibiotalar ligament was the largest band of the deltoid ligament. The origins from the distal center of the intercollicular groove were 16.1 mm (95% confidence interval, 14.7 to 17.5 mm) for the tibionavicular ligament, 13.1 mm (95% confidence interval, 11.1 to 15.1 mm) for the tibiospring ligament, and 7.6 mm (95% confidence interval, 6.7 to 8.5 mm) for the deep posterior tibiotalar ligament. Relevant to other pertinent osseous landmarks, the tibionavicular ligament inserted at 9.7 mm (95% confidence interval, 8.4 to 11.0 mm) from the tuberosity of the navicular, the tibiospring inserted at 35% (95% confidence interval, 33.4% to 36.6%) of the spring ligament's posteroanterior distance, and the deep posterior tibiotalar ligament inserted at 17.8 mm (95% confidence interval, 16.3 to 19.3 mm) from the posteromedial talar tubercle.\n\n\nCONCLUSIONS\nThe tibionavicular, tibiospring, and deep posterior tibiotalar ligament bands were constant components of the deltoid ligament. The deep posterior tibiotalar ligament was the largest band of the deltoid ligament.\n\n\nCLINICAL RELEVANCE\nThe anatomical data regarding the deltoid ligament bands in this study will help to guide anatomical placement of repairs and reconstructions for deltoid ligament injury or instability.", "title": "" }, { "docid": "8e3b73204d1d62337c4b2aabdbaa8973", "text": "The goal of this paper is to analyze the geometric properties of deep neural network classifiers in the input space. We specifically study the topology of classification regions created by deep networks, as well as their associated decision boundary. Through a systematic empirical investigation, we show that state-of-the-art deep nets learn connected classification regions, and that the decision boundary in the vicinity of datapoints is flat along most directions. We further draw an essential connection between two seemingly unrelated properties of deep networks: their sensitivity to additive perturbations in the inputs, and the curvature of their decision boundary. The directions where the decision boundary is curved in fact characterize the directions to which the classifier is the most vulnerable. We finally leverage a fundamental asymmetry in the curvature of the decision boundary of deep nets, and propose a method to discriminate between original images, and images perturbed with small adversarial examples. We show the effectiveness of this purely geometric approach for detecting small adversarial perturbations in images, and for recovering the labels of perturbed images.", "title": "" } ]
scidocsrr
12ad2563791538b48623e362b2392f05
Game-theoretic Analysis of Computation Offloading for Cloudlet-based Mobile Cloud Computing
[ { "docid": "0cbd3587fe466a13847e94e29bb11524", "text": "The cloud heralds a new era of computing where application services are provided through the Internet. Cloud computing can enhance the computing capability of mobile systems, but is it the ultimate solution for extending such systems' battery lifetimes?", "title": "" }, { "docid": "956799f28356850fda78a223a55169bf", "text": "Despite increasing usage of mobile computing, exploiting its full potential is difficult due to its inherent problems such as resource scarcity, frequent disconnections, and mobility. Mobile cloud computing can address these problems by executing mobile applications on resource providers external to the mobile device. In this paper, we provide an extensive survey of mobile cloud computing research, while highlighting the specific concerns in mobile cloud computing. We present a taxonomy based on the key issues in this area, and discuss the different approaches taken to tackle these issues. We conclude the paper with a critical analysis of challenges that have not yet been fully met, and highlight directions for", "title": "" }, { "docid": "aa18c10c90af93f38c8fca4eff2aab09", "text": "The unabated flurry of research activities to augment various mobile devices by leveraging heterogeneous cloud resources has created a new research domain called Mobile Cloud Computing (MCC). In the core of such a non-uniform environment, facilitating interoperability, portability, and integration among heterogeneous platforms is nontrivial. Building such facilitators in MCC requires investigations to understand heterogeneity and its challenges over the roots. Although there are many research studies in mobile computing and cloud computing, convergence of these two areas grants further academic efforts towards flourishing MCC. In this paper, we define MCC, explain its major challenges, discuss heterogeneity in convergent computing (i.e. mobile computing and cloud computing) and networking (wired and wireless networks), and divide it into two dimensions, namely vertical and horizontal. Heterogeneity roots are analyzed and taxonomized as hardware, platform, feature, API, and network. Multidimensional heterogeneity in MCC results in application and code fragmentation problems that impede development of cross-platform mobile applications which is mathematically described. The impacts of heterogeneity in MCC are investigated, related opportunities and challenges are identified, and predominant heterogeneity handling approaches like virtualization, middleware, and service oriented architecture (SOA) are discussed. We outline open issues that help in identifying new research directions in MCC.", "title": "" } ]
[ { "docid": "6c14243c49a2d119d768685b59f9548b", "text": "Over the past decade, researchers have shown significant advances in the area of radio frequency identification (RFID) and metamaterials. RFID is being applied to a wide spectrum of industries and metamaterial-based antennas are beginning to perform just as well as existing larger printed antennas. This paper presents two novel metamaterial-based antennas for passive ultra-high frequency (UHF) RFID tags. It is shown that by implementing omega-like elements and split-ring resonators into the design of an antenna for an UHF RFID tag, the overall size of the antenna can be significantly reduced to dimensions of less than 0.15λ0, while preserving the performance of the antenna.", "title": "" }, { "docid": "03280447faf00c523b099d4bdbbfe7a5", "text": "Ostrzenski’s G-pot anatomical structure discovery has been verified by the anatomy, histology, MRI in vivo, and electrovaginography in vivo studies. The objectives of this scientific-clinical investigation were to develop a new surgical reconstructive intervention (G-spotplasty); to determine the ability of G-spotplasty surgical implementation; to observe for potential complications; and to gather initial information on whether G-spotplasty improves female sexual activity, sexual behaviors, and sexual concerns. A case series study was designed and conducted with 5-year follow-up (October 2013 and October 2017). The rehearsal of new G-spotplasty was performed on fresh female cadavers. Three consecutive live women constituted this clinical study population, and they were subjected to the newly developed G-spotplasty procedure in October 2013. Preoperatively and postoperatively, a validated, self-completion instrument of Sexual Relationships and Activities Questionnaire (SRA-Q) was used to measure female sexual activity, sexual behaviors, and sexual concerns. Three out of twelve women met inclusion criteria and were incorporated into this study. All patients were subjected to G-spotplasty, completed 5-year follow-up, and returned completed SRA-Q in a sealed envelope. New G-spotplasty was successfully implemented without surgical difficulty and without complications. All patients reported re-establishing vaginal orgasms with different degrees of difficulties, observing return of anterior vaginal wall engorgement, and were very pleased with the outcome of G-spotplasty. The G-spotplasty is a simple surgical intervention, easy to implement, and improves sexual activities, sexual behaviors, and sexual concerns. The preliminary results are very promising and paved the way for additional clinical-scientific research. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.", "title": "" }, { "docid": "349a5c840daa587aa5d42c6e584e2103", "text": "We propose a class of functional dependencies for graphs, referred to as GFDs. GFDs capture both attribute-value dependencies and topological structures of entities, and subsume conditional functional dependencies (CFDs) as a special case. We show that the satisfiability and implication problems for GFDs are coNP-complete and NP-complete, respectively, no worse than their CFD counterparts. We also show that the validation problem for GFDs is coNP-complete. Despite the intractability, we develop parallel scalable algorithms for catching violations of GFDs in large-scale graphs. 
Using real-life and synthetic data, we experimentally verify that GFDs provide an effective approach to detecting inconsistencies in knowledge and social graphs.", "title": "" }, { "docid": "15f2f4ba8635366e5f2879d085511f46", "text": "Vessel segmentation is a key step for various medical applications, it is widely used in monitoring the disease progression, and evaluation of various ophthalmologic diseases. However, manual vessel segmentation by trained specialists is a repetitive and time-consuming task. In the last two decades, many approaches have been introduced to segment the retinal vessels automatically. With the more recent advances in the field of neural networks and deep learning, multiple methods have been implemented with focus on the segmentation and delineation of the blood vessels. Deep Learning methods, such as the Convolutional Neural Networks (CNN), have recently become one of the new trends in the Computer Vision area. Their ability to find strong spatially local correlations in the data at different abstraction levels allows them to learn a set of filters that are useful to correctly segment the data, when given a labeled training set. In this dissertation, different approaches based on deep learning techniques for the segmentation of retinal blood vessels are studied. Furthermore, in this dissertation are also studied and evaluated the different techniques that have been used for vessel segmentation, based on machine learning (Random Forests and Support vector machine algorithms), and how these can be combined with the deep learning approaches.", "title": "" }, { "docid": "8dc2f16d4f4ed1aa0acf6a6dca0ccc06", "text": "This is the second paper in a four-part series detailing the relative merits of the treatment strategies, clinical techniques and dental materials for the restoration of health, function and aesthetics for the dentition. In this paper the management of wear in the anterior dentition is discussed, using three case studies as illustration.", "title": "" }, { "docid": "05610fd0e6373291bdb4bc28cf1c691b", "text": "In this work, we acknowledge the need for software engineers to devise specialized tools and techniques for blockchain-oriented software development. Ensuring effective testing activities, enhancing collaboration in large teams, and facilitating the development of smart contracts all appear as key factors in the future of blockchain-oriented software development.", "title": "" }, { "docid": "26cc29177040461634929eb1fa13395d", "text": "In this paper, we first characterize distributed real-time systems by the following two properties that have to be supported: best effort and least suffering. Then, we propose a distributed real-time object model DRO which complies these properties. Based on the DRO model, we design an object oriented programming language DROL: an extension of C++ with the capability of describing distributed real-time systems. The most eminent feature of DROL is that users can describe on meta level the semantics of message communications as a communication protocol with sending and receiving primitives. With this feature, we can construct a flexible distributed real-time system satisfying specifications which include timing constraints. 
We implement a runtime system of DROL on the ARTS kernel, and evaluate the efficiency of the prototype implementation as well as confirm the high expressive power of the language.", "title": "" }, { "docid": "f76b587a1bc282a98cf8e42bdd6f5032", "text": "Ensemble-based methods are among the most widely used techniques for data stream classification. Their popularity is attributable to their good performance in comparison to strong single learners while being relatively easy to deploy in real-world applications. Ensemble algorithms are especially useful for data stream learning as they can be integrated with drift detection algorithms and incorporate dynamic updates, such as selective removal or addition of classifiers. This work proposes a taxonomy for data stream ensemble learning as derived from reviewing over 60 algorithms. Important aspects such as combination, diversity, and dynamic updates, are thoroughly discussed. Additional contributions include a listing of popular open-source tools and a discussion about current data stream research challenges and how they relate to ensemble learning (big data streams, concept evolution, feature drifts, temporal dependencies, and others).", "title": "" }, { "docid": "5473962c6c270df695b965cbcc567369", "text": "Medical professionals need a reliable prediction methodology to diagnose cancer and distinguish between the different stages in cancer. Classification is a data mining function that assigns items in a collection to target groups or classes. C4.5 classification algorithm has been applied to SEER breast cancer dataset to classify patients into either “Carcinoma in situ” (beginning or pre-cancer stage) or “Malignant potential” group. Pre-processing techniques have been applied to prepare the raw dataset and identify the relevant attributes for classification. Random test samples have been selected from the pre-processed data to obtain classification rules. The rule set obtained was tested with the remaining data. The results are presented and discussed. Keywords— Breast Cancer Diagnosis, Classification, Clinical Data, SEER Dataset, C4.5 Algorithm", "title": "" }, { "docid": "0250d6bb0bcf11ca8af6c2661c1f7f57", "text": "Chemoreception is a biological process essential for the survival of animals, as it allows the recognition of important volatile cues for the detection of food, egg-laying substrates, mates, or predators, among other purposes. Furthermore, its role in pheromone detection may contribute to evolutionary processes, such as reproductive isolation and speciation. This key role in several vital biological processes makes chemoreception a particularly interesting system for studying the role of natural selection in molecular adaptation. Two major gene families are involved in the perireceptor events of the chemosensory system: the odorant-binding protein (OBP) and chemosensory protein (CSP) families. Here, we have conducted an exhaustive comparative genomic analysis of these gene families in 20 Arthropoda species. We show that the evolution of the OBP and CSP gene families is highly dynamic, with a high number of gains and losses of genes, pseudogenes, and independent origins of subfamilies. Taken together, our data clearly support the birth-and-death model for the evolution of these gene families with an overall high gene turnover rate. 
Moreover, we show that the genome organization of the two families is significantly more clustered than expected by chance and, more important, that this pattern appears to be actively maintained across the Drosophila phylogeny. Finally, we suggest the homologous nature of the OBP and CSP gene families, dating back their most recent common ancestor after the terrestrialization of Arthropoda (380--450 Ma) and we propose a scenario for the origin and diversification of these families.", "title": "" }, { "docid": "0321ef8aeb0458770cd2efc35615e11c", "text": "Entity-relationship-structured data is becoming more important on the Web. For example, large knowledge bases have been automatically constructed by information extraction from Wikipedia and other Web sources. Entities and relationships can be represented by subject-property-object triples in the RDF model, and can then be precisely searched by structured query languages like SPARQL. Because of their Boolean-match semantics, such queries often return too few or even no results. To improve recall, it is thus desirable to support users by automatically relaxing or reformulating queries in such a way that the intention of the original user query is preserved while returning a sufficient number of ranked results. In this paper we describe comprehensive methods to relax SPARQL-like triplepattern queries in a fully automated manner. Our framework produces a set of relaxations by means of statistical language models for structured RDF data and queries. The query processing algorithms merge the results of different relaxations into a unified result list, with ranking based on any ranking function for structured queries over RDF-data. Our experimental evaluation, with two different datasets about movies and books, shows the effectiveness of the automatically generated relaxations and the improved quality of query results based on assessments collected on the Amazon Mechanical Turk platform.", "title": "" }, { "docid": "e576b8677816ec54c7dcf52e633e6c9f", "text": "OBJECTIVE\nThe objective of this study was to determine the level of knowledge, comfort, and training related to the medical management of child abuse among pediatrics, emergency medicine, and family medicine residents.\n\n\nMETHODS\nSurveys were administered to program directors and third-year residents at 67 residency programs. The resident survey included a 24-item quiz to assess knowledge regarding the medical management of physical and sexual child abuse. Sites were solicited from members of a network of child abuse physicians practicing at institutions with residency programs.\n\n\nRESULTS\nAnalyzable surveys were received from 53 program directors and 462 residents. Compared with emergency medicine and family medicine programs, pediatric programs were significantly larger and more likely to have a medical provider specializing in child abuse pediatrics, have faculty primarily responsible for child abuse training, use a written curriculum for child abuse training, and offer an elective rotation in child abuse. Exposure to child abuse training and abused patients was highest for pediatric residents and lowest for family medicine residents. Comfort with managing child abuse cases was lowest among family medicine residents. On the knowledge quiz, pediatric residents significantly outperformed emergency medicine and family medicine residents. 
Residents with high knowledge scores were significantly more likely to come from larger programs and programs that had a center, provider, or interdisciplinary team that specialized in child abuse pediatrics; had a physician on faculty responsible for child abuse training; used a written curriculum for child abuse training; and had a required rotation in child abuse pediatrics.\n\n\nCONCLUSIONS\nBy analyzing the relationship between program characteristics and residents' child abuse knowledge, we found that pediatric programs provide far more training and resources for child abuse education than emergency medicine and family medicine programs. As leaders, pediatricians must establish the importance of this topic in the pediatric education of residents of all specialties.", "title": "" }, { "docid": "7ccd75f1626966b4ffb22f2788d64fdc", "text": "Diabetes has affected over 246 million people worldwide with a majority of them being women. According to the WHO report, by 2025 this number is expected to rise to over 380 million. The disease has been named the fifth deadliest disease in the United States with no imminent cure in sight. With the rise of information technology and its continued advent into the medical and healthcare sector, the cases of diabetes as well as their symptoms are well documented. This paper aims at finding solutions to diagnose the disease by analyzing the patterns found in the data through classification analysis by employing Decision Tree and Naïve Bayes algorithms. The research hopes to propose a quicker and more efficient technique of diagnosing the disease, leading to timely treatment of the patients.", "title": "" }, { "docid": "104fa95b500df05a052a230e80797f59", "text": "Stochastic variational inference finds good posterior approximations of probabilistic models with very large data sets. It optimizes the variational objective with stochastic optimization, following noisy estimates of the natural gradient. Operationally, stochastic inference iteratively subsamples from the data, analyzes the subsample, and updates parameters with a decreasing learning rate. However, the algorithm is sensitive to that rate, which usually requires hand-tuning to each application. We solve this problem by developing an adaptive learning rate for stochastic inference. Our method requires no tuning and is easily implemented with computations already made in the algorithm. We demonstrate our approach with latent Dirichlet allocation applied to three large text corpora. Inference with the adaptive learning rate converges faster and to a better approximation than the best settings of hand-tuned rates.", "title": "" }, { "docid": "fdc4d23fa336ca122fdfb12818901180", "text": "Concept of communication systems, which use smart antennas is based on digital signal processing algorithms. Thus, the smart antennas system becomes capable to locate and track signals by the both: users and interferers and dynamically adapts the antenna pattern to enhance the reception in Signal-Of-Interest direction and minimizing interference in Signal-Of-Not-Interest direction. Hence, Space Division Multiple Access system, which uses smart antennas, is being used more often in wireless communications, because it shows improvement in channel capacity and co-channel interference. However, performance of smart antenna system greatly depends on efficiency of digital signal processing algorithms. 
The algorithm uses the Direction of Arrival (DOA) algorithms to estimate the number of incident plane waves on the antenna array and their angle of incidence. This paper investigates performance of the DOA algorithms like MUSIC, ESPRIT and ROOT MUSIC on the uniform linear array in the presence of white noise. The simulation results show that MUSIC algorithm is the best. The resolution of the DOA techniques improves as number of snapshots, number of array elements and signal-to-noise ratio increases.", "title": "" }, { "docid": "a361214a42392cbd0ba3e0775d32c839", "text": "We propose a design methodology to exploit adaptive nanodevices (memristors), virtually immune to their variability. Memristors are used as synapses in a spiking neural network performing unsupervised learning. The memristors learn through an adaptation of spike timing dependent plasticity. Neurons' threshold is adjusted following a homeostasis-type rule. System level simulations on a textbook case show that performance can compare with traditional supervised networks of similar complexity. They also show the system can retain functionality with extreme variations of various memristors' parameters, thanks to the robustness of the scheme, its unsupervised nature, and the power of homeostasis. Additionally the network can adjust to stimuli presented with different coding schemes.", "title": "" }, { "docid": "71b5708fb9d078b370689cac22a66013", "text": "This paper presents a model, synthesized from the literature, of factors that explain how business analytics contributes to business value. It also reports results from a preliminary test of that model. The model consists of two parts: a process and a variance model. The process model depicts the analyze-insight-decision-action process through which use of an organization’s business-analytic capabilities create business value. The variance model proposes that the five factors in Davenport et al.’s (2010) DELTA model of BA success factors, six from Watson and Wixom (2007), and three from Seddon et al.’s (2010) model of organizational benefits from enterprise systems, assist a firm to gain business value from business analytics. A preliminary test of the model was conducted using data from 100 customer-success stories from vendors such as IBM, SAP, and Teradata. Our conclusion is that the model is likely to be a useful basis for future research.", "title": "" }, { "docid": "7cfc2866218223ba6bd56eb1f10ce29f", "text": "This paper deals with prediction of anopheles number, the main vector of malaria risk, using environmental and climate variables. The variables selection is based on an automatic machine learning method using regression trees, and random forests combined with stratified two levels cross validation. The minimum threshold of variables importance is assessed using the quadratic distance of variables importance while the optimal subset of selected variables is used to perform predictions. Finally the results revealed to be qualitatively better, at the selection, the prediction, and the CPU time point of view than those obtained by GLM-Lasso method.", "title": "" }, { "docid": "577841609abb10a978ed54429f057def", "text": "Smart environments integrate various types of technologies, including cloud computing, fog computing, and the IoT paradigm. In such environments, it is essential to organize and manage efficiently the broad and complex set of heterogeneous resources. For this reason, resources classification and categorization becomes a vital issue in the control system. 
In this paper we make an exhaustive literature survey about the various computing systems and architectures which defines any type of ontology in the context of smart environments, considering both, authors that explicitly propose resources categorization and authors that implicitly propose some resources classification as part of their system architecture. As part of this research survey, we have built a table that summarizes all research works considered, and which provides a compact and graphical snapshot of the current classification trends. The goal and primary motivation of this literature survey has been to understand the current state of the art and identify the gaps between the different computing paradigms involved in smart environment scenarios. As a result, we have found that it is essential to consider together several computing paradigms and technologies, and that there is not, yet, any research work that integrates a merged resources classification, taxonomy or ontology required in such heterogeneous scenarios.", "title": "" }, { "docid": "6a240e0f0944117cf17f4ec1e613d94a", "text": "This paper presents a simple method for “do as I do\" motion transfer: given a source video of a person dancing we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves. We pose this problem as a per-frame image-to-image translation with spatio-temporal smoothing. Using pose detections as an intermediate representation between source and target, we learn a mapping from pose images to a target subject’s appearance. We adapt this setup for temporally coherent video generation including realistic face synthesis. Our video demo can be found at https://youtu.be/PCBTZh41Ris.", "title": "" } ]
scidocsrr
546a64b871f37f1b67c7731641cd8ce4
Assessment , Enhancement , and Verification Determinants of the Self-Evaluation Process
[ { "docid": "0b88b9b165a74cc630a0cf033308d6c2", "text": "It is proposed that motivation may affect reasoning through reliance on a biased set of cognitive processes--that is, strategies for accessing, constructing, and evaluating beliefs. The motivation to be accurate enhances use of those beliefs and strategies that are considered most appropriate, whereas the motivation to arrive at particular conclusions enhances use of those that are considered most likely to yield the desired conclusion. There is considerable evidence that people are more likely to arrive at conclusions that they want to arrive at, but their ability to do so is constrained by their ability to construct seemingly reasonable justifications for these conclusions. These ideas can account for a wide variety of research concerned with motivated reasoning.", "title": "" } ]
[ { "docid": "9775396477ccfde5abdd766588655539", "text": "The use of hand gestures offers an alternative to the commonly used human computer interfaces, providing a more intuitive way of navigating among menus and multimedia applications. This paper presents a system for hand gesture recognition devoted to control windows applications. Starting from the images captured by a time-of-flight camera (a camera that produces images with an intensity level inversely proportional to the depth of the objects observed) the system performs hand segmentation as well as a low-level extraction of potentially relevant features which are related to the morphological representation of the hand silhouette. Classification based on these features discriminates between a set of possible static hand postures which results, combined with the estimated motion pattern of the hand, in the recognition of dynamic hand gestures. The whole system works in real-time, allowing practical interaction between user and application.", "title": "" }, { "docid": "f462de59dd8b45f7c7e27672125010d2", "text": "Researchers have recently noted (14; 27) the potential of fast poisoning attacks against DNS servers, which allows attackers to easily manipulate records in open recursive DNS resolvers. A vendor-wide upgrade mitigated but did not eliminate this attack. Further, existing DNS protection systems, including bailiwick-checking (12) and IDS-style filtration, do not stop this type of DNS poisoning. We therefore propose Anax, a DNS protection system that detects poisoned records in cache. Our system can observe changes in cached DNS records, and applies machine learning to classify these updates as malicious or benign. We describe our classification features and machine learning model selection process while noting that the proposed approach is easily integrated into existing local network protection systems. To evaluate Anax, we studied cache changes in a geographically diverse set of 300,000 open recursive DNS servers (ORDNSs) over an eight month period. Using hand-verified data as ground truth, evaluation of Anax showed a very low false positive rate (0.6% of all new resource records) and a high detection", "title": "" }, { "docid": "fb44e3c2624d92c9ed408ebd00bdb793", "text": "A novel method for online data acquisition of cursive handwriting is described. A video camera is used to record the handwriting of a user. From the acquired sequence of images, the movement of the tip of the pen is reconstructed. A prototype of the system has been implemented and tested. In one series of tests, the performance of the system was visually assessed. In another series of experiments, the system was combined with an existing online handwriting recognizer. Good results have been obtained in both sets of experiments.", "title": "" }, { "docid": "3a17d60c2eb1df3bf491be3297cffe79", "text": "Received: 3 October 2009 Revised: 22 June 2011 Accepted: 3 July 2011 Abstract Studies claiming to use the Grounded theory methodology (GTM) have been quite prevalent in information systems (IS) literature. A cursory review of this literature reveals conflict in the understanding of GTM, with a variety of grounded theory approaches apparent. The purpose of this investigation was to establish what alternative grounded theory approaches have been employed in IS, and to what extent each has been used. In order to accomplish this goal, a comprehensive set of IS articles that claimed to have followed a grounded theory approach were reviewed. 
The articles chosen were those published in the widely acknowledged top eight IS-centric journals, since these journals most closely represent exemplar IS research. Articles for the period 1985-2008 were examined. The analysis revealed four main grounded theory approaches in use, namely (1) the classic grounded theory approach, (2) the evolved grounded theory approach, (3) the use of the grounded theory approach as part of a mixed methodology, and (4) the application of grounded theory techniques, typically for data analysis purposes. The latter has been the most common approach in IS research. The classic approach was the least often employed, with many studies opting for an evolved or mixed method approach. These and other findings are discussed and implications drawn. European Journal of Information Systems (2013) 22, 119–129. doi:10.1057/ejis.2011.35; published online 30 August 2011", "title": "" }, { "docid": "98c3588648676eea3bb78a43aef92af4", "text": "Data mining (DM) techniques are being increasingly used in many modern organizations to retrieve valuable knowledge structures from organizational databases, including data warehouses. An important knowledge structure that can result from data mining activities is the decision tree (DT) that is used for the classification of future events. The induction of the decision tree is done using a supervised knowledge discovery process in which prior knowledge regarding classes in the database is used to guide the discovery. The generation of a DT is a relatively easy task but in order to select the most appropriate DT it is necessary for the DM project team to generate and analyze a significant number of DTs based on multiple performance measures. We propose a multi-criteria decision analysis based process that would empower DM project teams to do thorough experimentation and analysis without being overwhelmed by the task of analyzing a significant number of DTs would offer a positive contribution to the DM process. We also offer some new approaches for measuring some of the performance criteria. © 2003 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "31add593ce5597c24666d9662b3db89d", "text": "Estimating the body shape and posture of a dressed human subject in motion represented as a sequence of (possibly incomplete) 3D meshes is important for virtual change rooms and security. To solve this problem, statistical shape spaces encoding human body shape and posture variations are commonly used to constrain the search space for the shape estimate. In this work, we propose a novel method that uses a posture-invariant shape space to model body shape variation combined with a skeleton-based deformation to model posture variation. Our method can estimate the body shape and posture of both static scans and motion sequences of dressed human body scans. In case of motion sequences, our method takes advantage of motion cues to solve for a single body shape estimate along with a sequence of posture estimates. We apply our approach to both static scans and motion sequences and demonstrate that using our method, higher fitting accuracy is achieved than when using a variant of the popular SCAPE model [2, 18] as statistical model.", "title": "" }, { "docid": "42d2f3c2cc7ed0c08dd8f450091e5a7a", "text": "Analytical methods validation is an important regulatory requirement in pharmaceutical analysis. 
High-Performance Liquid Chromatography (HPLC) is commonly used as an analytical technique in developing and validating assay methods for drug products and drug substances. Method validation provides documented evidence, and a high degree of assurance, that an analytical method employed for a specific test, is suitable for its intended use. Over recent years, regulatory authorities have become increasingly aware of the necessity of ensuring that the data submitted to them in applications for marketing authorizations have been acquired using validated analytical methodology. The International Conference on Harmonization (ICH) has introduced guidelines for analytical methods validation. 1,2 The U.S. Food and Drug Administration (FDA) methods validation draft guidance document, 3-5 as well as United States Pharmacopoeia (USP) both refer to ICH guidelines. These draft guidances define regulatory and alternative analytical procedures and stability-indicating assays. The FDA has proposed adding section CFR 211.222 on analytical methods validation to the current Good Manufacturing Practice (cGMP) regulations. 7 This would require pharmaceutical manufacturers to establish and document the accuracy, sensitivity, specificity, reproducibility, and any other attribute (e.g., system suitability, stability of solutions) necessary to validate test methods. Regulatory analytical procedures are of two types: compendial and noncompendial. The noncompendial analytical procedures in the USP are those legally recognized as regulatory procedures under section 501(b) of the Federal Food, Drug and Cosmetic Act. When using USP analytical methods, the guidance recommends that information be provided for the following characteristics: specificity of the method, stability of the analytical sample solution, and intermediate precision. Compendial analytical methods may not be stability indicating, and this concern must be addressed when developing a drug product specification, because formulation based interference may not be considered in the monograph specifications. Additional analytical tests for impurities may be necessary to support the quality of the drug substance or drug product. Noncompendial analytical methods must be fully validated. The most widely applied validation characteristics are accuracy, precision (repeatability and intermediate precision), specificity, detection limit, quantitation limit, linearity, range, and stability of analytical solutions. The parameters that require validation and the approach adopted for each particular case are dependent on the type and applications of the method. Before undertaking the task of method validation, it is necessary that the analytical system itself is adequately designed, maintained, calibrated, and validated. 8 The first step in method validation is to prepare a protocol, preferably written with the instructions in a clear step-by-step format. This A Practical Approach to Validation of HPLC Methods Under Current Good Manufacturing Practices", "title": "" }, { "docid": "2f5776d8ce9714dcee8d458b83072f74", "text": "The componential theory of creativity is a comprehensive model of the social and psychological components necessary for an individual to produce creative work. The theory is grounded in a definition of creativity as the production of ideas or outcomes that are both novel and appropriate to some goal. 
In this theory, four components are necessary for any creative response: three components within the individual – domainrelevant skills, creativity-relevant processes, and intrinsic task motivation – and one component outside the individual – the social environment in which the individual is working. The current version of the theory encompasses organizational creativity and innovation, carrying implications for the work environments created by managers. This entry defines the components of creativity and how they influence the creative process, describing modifications to the theory over time. Then, after comparing the componential theory to other creativity theories, the article describes this theory’s evolution and impact.", "title": "" }, { "docid": "981b4977ed3524545d9ae5016d45c8d6", "text": "Related to different international activities in the Optical Wireless Communications (OWC) field Graz University of Technology (TUG) has high experience on developing different high data rate transmission systems and is well known for measurements and analysis of the OWC-channel. In this paper, a novel approach for testing Free Space Optical (FSO) systems in a controlled laboratory condition is proposed. Based on fibre optics technology, TUG testbed could effectively emulate the operation of real wireless optical communication systems together with various atmospheric perturbation effects such as fog and clouds. The suggested architecture applies an optical variable attenuator as a main device representing the tropospheric influences over the launched Gaussian beam in the free space channel. In addition, the current scheme involves an attenuator control unit with an external Digital Analog Converter (DAC) controlled by self-developed software. To obtain optimal results in terms of the presented setup, a calibration process including linearization of the non-linear attenuation versus voltage graph is performed. Finally, analytical results of the attenuation based on real measurements with the hardware channel emulator under laboratory conditions are shown. The implementation can be used in further activities to verify OWC-systems, before testing under real conditions.", "title": "" }, { "docid": "048cc782baeec3a7f46ef5ee7abf0219", "text": "Autoerotic asphyxiation is an unusual but increasingly more frequently occurring phenomenon, with >1000 fatalities in the United States per year. Understanding of this manner of death is likewise increasing, as noted by the growing number of cases reported in the literature. However, this form of accidental death is much less frequently seen in females (male:female ratio >50:1), and there is correspondingly less literature on female victims of autoerotic asphyxiation. The authors present the case of a 31-year-old woman who died of an autoerotic ligature strangulation and review the current literature on the subject. The forensic examiner must be able to discern this syndrome from similar forms of accidental and suicidal death, and from homicidal hanging/strangulation.", "title": "" }, { "docid": "f262e911b5254ad4d4419ed7114b8a4f", "text": "User Satisfaction is one of the most extensively used dimensions for Information Systems (IS) success evaluation with a large body of literature and standardized instruments of User Satisfaction. Despite the extensive literature on User Satisfaction, there exist much controversy over the measures of User Satisfaction and the adequacy of User Satisfaction measures to gauge the level of success in complex, contemporary IS. 
Recent studies in IS have suggested treating User Satisfaction as an overarching construct of success, rather than a measure of success. Further perplexity is introduced over the alleged overlaps between User Satisfaction measures and the measures of IS success (e.g. system quality, information quality) suggested in the literature. The following study attempts to clarify the aforementioned confusions by gathering data from 310 Enterprise System users and analyzing 16 User Satisfaction instruments. The statistical analysis of the 310 responses and the content analysis of the 16 instruments suggest the appropriateness of treating User Satisfaction as an overarching measure of success rather a dimension of success.", "title": "" }, { "docid": "3b32ade20fbdd7474ee10fc10d80d90a", "text": "We report the modulation performance of micro-light-emitting diode arrays with peak emission ranging from 370 to 520 nm, and emitter diameters ranging from 14 to 84 μm. Bandwidths in excess of 400 MHz and error-free data transmission up to 1.1Gbit/s is shown. These devices are shown integrated with electronic drivers, allowing convenient control of individual array emitters. Transmission using such a device is shown at 512 Mbit/s.", "title": "" }, { "docid": "e1d9ff28da38fcf8ea3a428e7990af25", "text": "The Autonomous car is a complex topic, different technical fields like: Automotive engineering, Control engineering, Informatics, Artificial Intelligence etc. are involved in solving the human driver replacement with an artificial (agent) driver. The problem is even more complicated because usually, nowadays, having and driving a car defines our lifestyle. This means that the mentioned (major) transformation is also a cultural issue. The paper will start with the mentioned cultural aspects related to a self-driving car and will continue with the big picture of the system.", "title": "" }, { "docid": "715fda02bad1633be9097cc0a0e68c8d", "text": "Data uncertainty is common in real-world applications due to various causes, including imprecise measurement, network latency, outdated sources and sampling errors. These kinds of uncertainty have to be handled cautiously, or else the mining results could be unreliable or even wrong. In this paper, we propose a new rule-based classification and prediction algorithm called uRule for classifying uncertain data. This algorithm introduces new measures for generating, pruning and optimizing rules. These new measures are computed considering uncertain data interval and probability distribution function. Based on the new measures, the optimal splitting attribute and splitting value can be identified and used for classification and prediction. The proposed uRule algorithm can process uncertainty in both numerical and categorical data. Our experimental results show that uRule has excellent performance even when data is highly uncertain.", "title": "" }, { "docid": "7dd3183ee59b800f3391f893d3578d64", "text": "This paper reports on a bio-inspired angular accelerometer based on a two-mask microfluidic process using a PDMS mold. The sensor is inspired by the semicircular canals in mammalian vestibular systems and pairs a fluid-filled microtorus with a thermal detection principle based on thermal convection. 
With inherent linear acceleration insensitivity, the sensor features a sensitivity of 29.8 μV/deg/s² = 1.7 mV/rad/s², a dynamic range of 14,000 deg/s² and a detection limit of ~20 deg/s².", "title": "" }, { "docid": "76a2bc6a8649ffe9111bfaa911572c9d", "text": "URL shortening services have become extremely popular. However, it is still unclear whether they are an effective and reliable tool that can be leveraged to hide malicious URLs, and to what extent these abuses can impact the end users. With these questions in mind, we first analyzed existing countermeasures adopted by popular shortening services. Surprisingly, we found such countermeasures to be ineffective and trivial to bypass. This first measurement motivated us to proceed further with a large-scale collection of the HTTP interactions that originate when web users access live pages that contain short URLs. To this end, we monitored 622 distinct URL shortening services between March 2010 and April 2012, and collected 24,953,881 distinct short URLs. With this large dataset, we studied the abuse of short URLs. Despite short URLs are a significant, new security risk, in accordance with the reports resulting from the observation of the overall phishing and spamming activity, we found that only a relatively small fraction of users ever encountered malicious short URLs. Interestingly, during the second year of measurement, we noticed an increased percentage of short URLs being abused for drive-by download campaigns and a decreased percentage of short URLs being abused for spam campaigns. In addition to these security-related findings, our unique monitoring infrastructure and large dataset allowed us to complement previous research on short URLs and analyze these web services from the user's perspective.", "title": "" }, { "docid": "20be8363ae04659061a56a1c7d3ee4d5", "text": "The popularity of level sets for segmentation is mainly based on the sound and convenient treatment of regions and their boundaries. Unfortunately, this convenience is so far not known from level set methods when applied to images with more than two regions. This communication introduces a comparatively simple way how to extend active contours to multiple regions keeping the familiar quality of the two-phase case. We further suggest a strategy to determine the optimum number of regions as well as initializations for the contours", "title": "" }, { "docid": "1b92f2391b35ca30b86f6d5e8fae7ffe", "text": "In this paper, two novel compact diplexers for satellite applications are presented. The first covers the Ku-band with two closely spaced channels (Ku-transmission band: 10.7–13 GHz and Ku-reception band: 13.75–14.8 GHz). The second is wider than the first (overall bandwidth up to 50%) achieves the suppression of the higher order modes, and covers the Ku/K-band with a reception channel between 17.2 and 18.5 GHz. Both diplexers are composed of two novel bandpass filters, joined together with an E-plane T-junction. The bandpass filters are designed by combining a low-pass filtering function (based on λ/4-step-shaped band-stop elements separated by very short waveguide sections) and a high-pass filtering structure (based on the waveguide propagation cutoff effect). The novel diplexers show a very compact footprint and very relaxed fabrication tolerances, and are especially attractive for wideband applications. A prototype Ku/K-band diplexer has also been fabricated by milling. 
Measurements show a very good agreement with simulations, thereby demonstrating the validity and manufacturing robustness of the proposed topology.", "title": "" }, { "docid": "e17f9e8d57c98928ecccb27e3259f2a3", "text": "A broadcast encryption scheme allows the sender to securely distribute data to a dynamically changing set of users over an insecure channel. It has numerous applications including pay-TV systems, distribution of copyrighted material, streaming audio/video and many others. One of the most challenging settings for this problem is that of stateless receivers, where each user is given a fixed set of keys which cannot be updated through the lifetime of the system. This setting was considered by Naor, Naor and Lotspiech [NNL01], who also present a very efficient “subset difference” (SD) method for solving this problem. The efficiency of this method (which also enjoys efficient traitor tracing mechanism and several other useful features) was recently improved by Halevi and Shamir [HS02], who called their refinement the “Layered SD” (LSD) method. Both of the above methods were originally designed to work in the centralized (symmetric key) setting, where only the trusted designer of the system can encrypt messages to users. On the other hand, in many applications it is desirable not to store the secret keys “on-line”, or to allow untrusted users to broadcast information. This leads to the question of building a public key broadcast encryption scheme for stateless receivers; in particular, of extending the elegant SD/LSD methods to the public key setting. Unfortunately, Naor et al. [NNL01] notice that the natural technique for doing so will result in an enormous public key and very large storage for every user. In fact, [NNL01] pose this question of reducing the public key size and user’s storage as the first open problem of their paper. We resolve this question in the affirmative, by demonstrating that an O(1) size public key can be achieved for both of SD/LSD methods, in addition to the same (small) user’s storage and ciphertext size as in the symmetric key setting. Courant Institute of Mathematical Sciences, New York University.", "title": "" }, { "docid": "212a7c22310977f6b8ada29437668ed5", "text": "Gait analysis and machine learning classification on healthy subjects in normal walking Tomohiro Shirakawa, Naruhisa Sugiyama, Hiroshi Sato, Kazuki Sakurai & Eri Sato To cite this article: Tomohiro Shirakawa, Naruhisa Sugiyama, Hiroshi Sato, Kazuki Sakurai & Eri Sato (2015): Gait analysis and machine learning classification on healthy subjects in normal walking, International Journal of Parallel, Emergent and Distributed Systems, DOI: 10.1080/17445760.2015.1044007 To link to this article: http://dx.doi.org/10.1080/17445760.2015.1044007", "title": "" } ]
scidocsrr
57f990a09435872f0bd68386b04f0149
Deep Speaker Feature Learning for Text-independent Speaker Verification
[ { "docid": "7a3aaec6e397b416619bcde0c565b0f6", "text": "This paper gives an overview of automatic speaker recognition technology, with an emphasis on text-independent recognition. Speaker recognition has been studied actively for several decades. We give an overview of both the classical and the state-of-the-art methods. We start with the fundamentals of automatic speaker recognition, concerning feature extraction and speaker modeling. We elaborate advanced computational techniques to address robustness and session variability. The recent progress from vectors towards supervectors opens up a new area of exploration and represents a technology trend. We also provide an overview of this recent development and discuss the evaluation methodology of speaker recognition systems. We conclude the paper with discussion on future directions. 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "935fb5a196358764fda82ac50b87cf1b", "text": "Linear dimensionality reduction methods, such as LDA, are often used in object recognition for feature extraction, but do not address the problem of how to use these features for recognition. In this paper, we propose Probabilistic LDA, a generative probability model with which we can both extract the features and combine them for recognition. The latent variables of PLDA represent both the class of the object and the view of the object within a class. By making examples of the same class share the class variable, we show how to train PLDA and use it for recognition on previously unseen classes. The usual LDA features are derived as a result of training PLDA, but in addition have a probability model attached to them, which automatically gives more weight to the more discriminative features. With PLDA, we can build a model of a previously unseen class from a single example, and can combine multiple examples for a better representation of the class. We show applications to classification, hypothesis testing, class inference, and clustering, on classes not observed during training.", "title": "" }, { "docid": "83525470a770a036e9c7bb737dfe0535", "text": "It is known that the performance of the i-vectors/PLDA based speaker verification systems is affected in the cases of short utterances and limited training data. The performance degradation appears because the shorter the utterance, the less reliable the extracted i-vector is, and because the total variability covariance matrix and the underlying PLDA matrices need a significant amount of data to be robustly estimated. Considering the “MIT Mobile Device Speaker Verification Corpus” (MIT-MDSVC) as a representative dataset for robust speaker verification tasks on limited amount of training data, this paper investigates which configuration and which parameters lead to the best performance of an i-vectors/PLDA based speaker verification. The i-vectors/PLDA based system achieved good performance only when the total variability matrix and the underlying PLDA matrices were trained with data belonging to the enrolled speakers. This way of training means that the system should be fully retrained when new enrolled speakers were added. The performance of the system was more sensitive to the amount of training data of the underlying PLDA matrices than to the amount of training data of the total variability matrix. Overall, the Equal Error Rate performance of the i-vectors/PLDA based system was around 1% below the performance of a GMM-UBM system on the chosen dataset. 
The paper presents at the end some preliminary experiments in which the utterances comprised in the CSTR VCTK corpus were used besides utterances from MIT-MDSVC for training the total variability covariance matrix and the underlying PLDA matrices.", "title": "" } ]
[ { "docid": "4cb34eda6145a8ea0ccc22b3e547b5e5", "text": "The factors that contribute to individual differences in the reward value of cute infant facial characteristics are poorly understood. Here we show that the effect of cuteness on a behavioural measure of the reward value of infant faces is greater among women reporting strong maternal tendencies. By contrast, maternal tendencies did not predict women's subjective ratings of the cuteness of these infant faces. These results show, for the first time, that the reward value of infant facial cuteness is greater among women who report being more interested in interacting with infants, implicating maternal tendencies in individual differences in the reward value of infant cuteness. Moreover, our results indicate that the relationship between maternal tendencies and the reward value of infant facial cuteness is not due to individual differences in women's ability to detect infant cuteness. This latter result suggests that individual differences in the reward value of infant cuteness are not simply a by-product of low-cost, functionless biases in the visual system.", "title": "" }, { "docid": "8d570c7d70f9003b9d2f9bfa89234c35", "text": "BACKGROUND\nThe targeting of the prostate-specific membrane antigen (PSMA) is of particular interest for radiotheragnostic purposes of prostate cancer. Radiolabeled PSMA-617, a 1,4,7,10-tetraazacyclododecane-N,N',N'',N'''-tetraacetic acid (DOTA)-functionalized PSMA ligand, revealed favorable kinetics with high tumor uptake, enabling its successful application for PET imaging (68Ga) and radionuclide therapy (177Lu) in the clinics. In this study, PSMA-617 was labeled with cyclotron-produced 44Sc (T 1/2 = 4.04 h) and investigated preclinically for its use as a diagnostic match to 177Lu-PSMA-617.\n\n\nRESULTS\n44Sc was produced at the research cyclotron at PSI by irradiation of enriched 44Ca targets, followed by chromatographic separation. 44Sc-PSMA-617 was prepared under standard labeling conditions at elevated temperature resulting in a radiochemical purity of >97% at a specific activity of up to 10 MBq/nmol. 44Sc-PSMA-617 was evaluated in vitro and compared to the 177Lu- and 68Ga-labeled match, as well as 68Ga-PSMA-11 using PSMA-positive PC-3 PIP and PSMA-negative PC-3 flu prostate cancer cells. In these experiments it revealed similar in vitro properties to that of 177Lu- and 68Ga-labeled PSMA-617. Moreover, 44Sc-PSMA-617 bound specifically to PSMA-expressing PC-3 PIP tumor cells, while unspecific binding to PC-3 flu cells was not observed. The radioligands were investigated with regard to their in vivo properties in PC-3 PIP/flu tumor-bearing mice. 44Sc-PSMA-617 showed high tumor uptake and a fast renal excretion. The overall tissue distribution of 44Sc-PSMA-617 resembled that of 177Lu-PSMA-617 most closely, while the 68Ga-labeled ligands, in particular 68Ga-PSMA-11, showed different distribution kinetics. 44Sc-PSMA-617 enabled distinct visualization of PC-3 PIP tumor xenografts shortly after injection, with increasing tumor-to-background contrast over time while unspecific uptake in the PC-3 flu tumors was not observed.\n\n\nCONCLUSIONS\nThe in vitro characteristics and in vivo kinetics of 44Sc-PSMA-617 were more similar to 177Lu-PSMA-617 than to 68Ga-PSMA-617 and 68Ga-PSMA-11. Due to the almost four-fold longer half-life of 44Sc as compared to 68Ga, a centralized production of 44Sc-PSMA-617 and transport to satellite PET centers would be feasible. 
These features make 44Sc-PSMA-617 particularly appealing for clinical application.", "title": "" }, { "docid": "69d5af002ebad67099dc9d1793e89aec", "text": "Deep models that are both effective and explainable are desirable in many settings; prior explainable models have been unimodal, offering either image-based visualization of attention weights or text-based generation of post-hoc justifications. We propose a multimodal approach to explanation, and argue that the two modalities provide complementary explanatory strengths. We collect two new datasets to define and evaluate this task, and propose a novel model which can provide joint textual rationale generation and attention visualization. Our datasets define visual and textual justifications of a classification decision for activity recognition tasks (ACT-X) and for visual question answering tasks (VQA-X). We quantitatively show that training with the textual explanations not only yields better textual justification models, but also better localizes the evidence that supports the decision. We also qualitatively show cases where visual explanation is more insightful than textual explanation, and vice versa, supporting our thesis that multimodal explanation models offer significant benefits over unimodal approaches.", "title": "" }, { "docid": "f8ec274fc83aded74eed231d6723f4fe", "text": "Sampling is a well-known technique to speed up architectural simulation of long-running workloads while maintaining accurate performance predictions. A number of sampling techniques have recently been developed that extend well-known single-threaded techniques to allow sampled simulation of multi-threaded applications. Unfortunately, prior work is limited to non-synchronizing applications (e.g., server throughput workloads); requires the functional simulation of the entire application using a detailed cache hierarchy which limits the overall simulation speedup potential; leads to different units of work across different processor architectures which complicates performance analysis; or, requires massive machine resources to achieve reasonable simulation speedups. In this work, we propose BarrierPoint, a sampling methodology to accelerate simulation by leveraging globally synchronizing barriers in multi-threaded applications. BarrierPoint collects microarchitecture-independent code and data signatures to determine the most representative inter-barrier regions, called barrierpoints. BarrierPoint estimates total application execution time (and other performance metrics of interest) through detailed simulation of these barrierpoints only, leading to substantial simulation speedups. Barrierpoints can be simulated in parallel, use fewer simulation resources, and define fixed units of work to be used in performance comparisons across processor architectures. Our evaluation of BarrierPoint using NPB and Parsec benchmarks reports average simulation speedups of 24.7× (and up to 866.6×) with an average simulation error of 0.9% and 2.9% at most. On average, BarrierPoint reduces the number of simulation machine resources needed by 78×.", "title": "" }, { "docid": "26508379e41da5e3b38dd944fc9e4783", "text": "We describe the Photobook system, which is a set of interactive tools for browsing and searching images and image sequences. These tools differ from those used in standard image databases in that they make direct use of the image content rather than relying on annotations. 
Direct search on image content is made possible by use of semantics-preserving image compression, which reduces images to a small set of perceptually-significant coefficients. We describe three Photobook tools in particular: one that allows search based on grey-level appearance, one that uses 2-D shape, and a third that allows search based on textural properties.", "title": "" }, { "docid": "4d3de2d03431e8f06a5b8b31a784ecaa", "text": "For medical students, virtual patient dialogue systems can provide useful training opportunities without the cost of employing actors to portray standardized patients. This work utilizes wordand character-based convolutional neural networks (CNNs) for question identification in a virtual patient dialogue system, outperforming a strong wordand characterbased logistic regression baseline. While the CNNs perform well given sufficient training data, the best system performance is ultimately achieved by combining CNNs with a hand-crafted pattern matching system that is robust to label sparsity, providing a 10% boost in system accuracy and an error reduction of 47% as compared to the pattern-matching system alone.", "title": "" }, { "docid": "d22c8390e6ea9ea8c7a84e188cd10ba5", "text": "BACKGROUND\nNutrition interventions targeted to individuals are unlikely to significantly shift US dietary patterns as a whole. Environmental and policy interventions are more promising for shifting these patterns. We review interventions that influenced the environment through food availability, access, pricing, or information at the point-of-purchase in worksites, universities, grocery stores, and restaurants.\n\n\nMETHODS\nThirty-eight nutrition environmental intervention studies in adult populations, published between 1970 and June 2003, were reviewed and evaluated on quality of intervention design, methods, and description (e.g., sample size, randomization). No policy interventions that met inclusion criteria were found.\n\n\nRESULTS\nMany interventions were not thoroughly evaluated or lacked important evaluation information. Direct comparison of studies across settings was not possible, but available data suggest that worksite and university interventions have the most potential for success. Interventions in grocery stores appear to be the least effective. The dual concerns of health and taste of foods promoted were rarely considered. Sustainability of environmental change was never addressed.\n\n\nCONCLUSIONS\nInterventions in \"limited access\" sites (i.e., where few other choices were available) had the greatest effect on food choices. Research is needed using consistent methods, better assessment tools, and longer durations; targeting diverse populations; and examining sustainability. Future interventions should influence access and availability, policies, and macroenvironments.", "title": "" }, { "docid": "41240dccf91b1a3ea3ec9b12f5e451ce", "text": "This study applied the concept of online consumer social experiences (OCSEs) to reduce online shopping post-payment dissonance (i.e., dissonance occurring between online payment and product receipt). Two types of OCSEs were developed: indirect social experiences (IDSEs) and virtual social experiences (VSEs). Two studies were conducted, in which 447 college students were enrolled. Study 1 compared the effects of OCSEs and non-OCSEs when online shopping post-payment dissonance occurred. 
The results indicate that providing consumers affected by online shopping post-payment dissonance with OCSEs reduces dissonance and produces higher satisfaction, higher repurchase intention, and lower complaint intention than when no OCSEs are provided. In addition, consumers’ interpersonal trust (IPT) and susceptibility to interpersonal informational influence (SIII) moderated the positive effects of OCSEs. Study 2 compared the effects of IDSEs and VSEs when online shopping post-payment dissonance occurred. The results suggest that the effects of IDSEs and VSEs on satisfaction, repurchase intention, and complaint intention are moderated by consumers’ computing need for control (CNC) and computer-mediated communication apprehension (CMCA). The consumers with high CNC and low CMCA preferred VSEs, whereas the consumers with low CNC and high CMCA preferred IDSEs. The effects of VSEs and IDSEs on consumers with high CNC and CMCA and those with low CNC and CMCA were not significantly different. © 2017 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "6380b60d47e49c9237208d48de9907e4", "text": "To date, conversations about cloud computing have been dominated by vendors who focus more on technology and less on business value. While it is still not fully agreed as to what components constitute cloud computing technology, some examples of its potential uses are emerging. We identify seven cloud capabilities that executives can use to formulate cloud-based strategies. Firms can change the mix of these capabilities to develop cloud strategies for unique competitive benefits. We predict that cloud strategies will lead to more intense ecosystem-based competition; it is therefore imperative that companies prepare for such a future now.", "title": "" }, { "docid": "5ec4451889beb4698c6ffb6fba4a53a3", "text": "We survey recent work on the elliptic curve discrete logarithm problem. In particular we review index calculus algorithms using summation polynomials, and claims about their complexity.", "title": "" }, { "docid": "79041480e35083e619bd804423459f2b", "text": "Dynamic pricing is the dynamic adjustment of prices to consumers depending upon the value these customers attribute to a product or service. Today’s digital economy is ready for dynamic pricing; however recent research has shown that the prices will have to be adjusted in fairly sophisticated ways, based on sound mathematical models, to derive the benefits of dynamic pricing. This article attempts to survey different models that have been used in dynamic pricing. We first motivate dynamic pricing and present underlying concepts, with several examples, and explain conditions under which dynamic pricing is likely to succeed. We then bring out the role of models in computing dynamic prices. The models surveyed include inventory-based models, data-driven models, auctions, and machine learning. We present a detailed example of an e-business market to show the use of reinforcement learning in dynamic pricing.", "title": "" }, { "docid": "bca33101885391147e411898026c0269", "text": "The algorithmic stock trading has developed exponentially in the past years, while the automatism of the technical analysis was the main research area for implementing the algorithms. 
This paper proposes a model for a trading algorithm that combines the signals from different technical indicators in order to provide more accurate trading signals.", "title": "" }, { "docid": "8bd3f52cfbeca614887fe1cbe92798ec", "text": "This paper introduces a new supervised segmentation algorithm for remotely sensed hyperspectral image data which integrates the spectral and spatial information in a Bayesian framework. A multinomial logistic regression (MLR) algorithm is first used to learn the posterior probability distributions from the spectral information, using a subspace projection method to better characterize noise and highly mixed pixels. Then, contextual information is included using a multilevel logistic Markov-Gibbs Markov random field prior. Finally, a maximum a posteriori segmentation is efficiently computed by the min-cut-based integer optimization algorithm. The proposed segmentation approach is experimentally evaluated using both simulated and real hyperspectral data sets, exhibiting state-of-the-art performance when compared with recently introduced hyperspectral image classification methods. The integration of subspace projection methods with the MLR algorithm, combined with the use of spatial-contextual information, represents an innovative contribution in the literature. This approach is shown to provide accurate characterization of hyperspectral imagery in both the spectral and the spatial domain.", "title": "" }, { "docid": "c2a297417553cb46fd98353d8b8351ac", "text": "Recent advances in methods and techniques enable us to develop an interactive overlay to the global map of science based on aggregated citation relations among the 9,162 journals contained in the Science Citation Index and Social Science Citation Index 2009 combined. The resulting mapping is provided by VOSViewer. We first discuss the pros and cons of the various options: cited versus citing, multidimensional scaling versus spring-embedded algorithms, VOSViewer versus Gephi, and the various clustering algorithms and similarity criteria. Our approach focuses on the positions of journals in the multidimensional space spanned by the aggregated journal-journal citations. A number of choices can be left to the user, but we provide default options reflecting our preferences. Some examples are also provided; for example, the potential of using this technique to assess the interdisciplinarity of organizations and/or document sets.", "title": "" }, { "docid": "f10eb96de9181085e249fdca1f4a568d", "text": "This paper argues that tracking, object detection, and model building are all similar activities. We describe a fully automatic system that builds 2D articulated models known as pictorial structures from videos of animals. The learned model can be used to detect the animal in the original video - in this sense, the system can be viewed as a generalized tracker (one that is capable of modeling objects while tracking them). The learned model can be matched to a visual library; here, the system can be viewed as a video recognition algorithm. The learned model can also be used to detect the animal in novel images - in this case, the system can be seen as a method for learning models for object recognition. We find that we can significantly improve the pictorial structures by augmenting them with a discriminative texture model learned from a texture library. We develop a novel texture descriptor that outperforms the state-of-the-art for animal textures. 
We demonstrate the entire system on real video sequences of three different animals. We show that we can automatically track and identify the given animal. We use the learned models to recognize animals from two data sets; images taken by professional photographers from the Corel collection, and assorted images from the Web returned by Google. We demonstrate quite good performance on both data sets. Comparing our results with simple baselines, we show that, for the Google set, we can detect, localize, and recover part articulations from a collection demonstrably hard for object recognition", "title": "" }, { "docid": "ee75e43f0bd61dd215299a188cecb2ed", "text": "The ultra low-latency operations of communications and computing enable many potential IoT applications, and thus have gained widespread attention recently. Existing mobile devices and telecommunication systems may not be able to provide the highly desired low-latency computing and communications services. To meet the needs of those applications, we introduce the Fog-Radio Access Network (F-RAN) architecture, which brings the efficient computing capability of the cloud to the edge of the network. By distributing computing-intensive tasks to multiple F-RAN nodes, F-RAN has the potential to meet the requirements of those ultra low-latency applications. In this article, we first introduce the F-RAN and its rationale in serving ultra low-latency applications. Then we discuss the need for a service framework for F-RAN to cope with the complex tradeoff among performance, computing cost, and communication cost. Finally, we illustrate the mobile AR service as an exemplary scenario to provide insights for the design of the framework. Examples and numerical results show that ultra low-latency services can be achieved by the F-RAN by properly handling the tradeoff.", "title": "" }, { "docid": "120007860a5fbf6a3bbc9b2fe6074b87", "text": "For the last few decades, optimization has been developing at a fast rate. Bio-inspired optimization algorithms are metaheuristics inspired by nature. These algorithms have been applied to solve different problems in engineering, economics, and other domains. Bio-inspired algorithms have also been applied in different branches of information technology such as networking and software engineering. Time series data mining is a field of information technology that has its share of these applications too. In previous works we showed how bio-inspired algorithms such as the genetic algorithms and differential evolution can be used to find the locations of the breakpoints used in the symbolic aggregate approximation of time series representation, and in another work we showed how we can utilize the particle swarm optimization, one of the famous bio-inspired algorithms, to set weights to the different segments in the symbolic aggregate approximation representation. In this paper we present, in two different approaches, a new meta optimization process that produces optimal locations of the breakpoints in addition to optimal weights of the segments. 
The time series classification experiments that we conducted show an interesting example of how the overfitting phenomenon, a frequently encountered problem in data mining that occurs when the model overfits the training set, can interfere with the optimization process and hide the superior performance of an optimization algorithm.", "title": "" }, { "docid": "4e368af438658472eb2d7e3db118f61b", "text": "Radiological diagnosis of acetabular retroversion is based on the presence of the cross-over sign (COS), the posterior wall sign (PWS), and prominence of the ischial spine (PRISS). The primary purpose of the study was to correlate the quantitative cross-over sign with the presence or absence of the PRISS and PWS signs. The hypothesis was that both PRISS and PWS are associated with a higher cross-over sign ratio or higher amount of acetabular retroversion. A previous study identified 1417 patients with a positive acetabular cross-over sign. Among these, three radiological parameters were assessed: (1) the amount of acetabular retroversion, quantified as a cross-over sign ratio; (2) the presence of the PRISS sign; (3) the presence of the PWS sign. The relation of these three parameters was analysed using Fisher's exact test, ANOVA, and linear regression analysis. In hips with cross-over sign, the PRISS was present in 61.7%. A direct association between PRISS and the cross-over sign ratio (p < 0.001) was seen. The PWS was positive in 31% of the hips and was also significantly related with the cross-over sign ratio (p < 0.001). In hips with a PRISS, 39.7% had a PWS sign, which was a significant relation (p < 0.001). In patients with positive PWS, 78.8% of the cases also had a PRISS (p < 0.001). Both the PRISS and PWS signs were significantly associated with higher grade cross-over values. Both the PRISS and PWS signs as well as the coexistence of COS, PRISS, and PWS are significantly associated with higher grade of acetabular retroversion. In conjunction with the COS, the PRISS and PWS signs indicate severe acetabular retroversion. Presence and recognition of distinct radiological signs around the hip joint might raise the awareness of possible femoroacetabular impingement (FAI).", "title": "" }, { "docid": "cebfc5224413c5acb7831cbf29ae5a8e", "text": "Radio Frequency (RF) Energy Harvesting holds a promising future for generating a small amount of electrical power to drive partial circuits in wirelessly communicating electronics devices. Reducing power consumption has become a major challenge in wireless sensor networks. As a vital factor affecting system cost and lifetime, energy consumption in wireless sensor networks is an emerging and active research area. This chapter presents a practical approach for RF Energy harvesting and management of the harvested and available energy for wireless sensor networks using the Improved Energy Efficient Ant Based Routing Algorithm (IEEABR) as our proposed algorithm. The chapter looks at measurement of the RF power density, calculation of the received power, storage of the harvested power, and management of the power in wireless sensor networks. The routing uses the IEEABR technique for energy management. Practical and real-time implementations of the RF Energy using PowercastTM harvesters and simulations using the energy model of our Libelium Waspmote to verify the approach were performed. 
The chapter concludes with a performance analysis of the harvested energy and a comparison of IEEABR with other traditional energy management techniques, while also looking at open research areas of energy harvesting and management for wireless sensor networks.", "title": "" }, { "docid": "0b9ae0bf6f6201249756d87a56f0005e", "text": "To reduce energy consumption and wastage, effective energy management at home is key and an integral part of the future Smart Grid. In this paper, we present the design and implementation of Green Home Service (GHS) for home energy management. Our approach addresses the key issues of home energy management in Smart Grid: a holistic management solution, improved device manageability, and an enabler of Demand-Response. We also present the scheduling algorithms in GHS for smart energy management and show the results in simulation studies.", "title": "" } ]
scidocsrr
c6098e292e527976dcc27b9459eab4f0
A Dynamic Window Neural Network for CCG Supertagging
[ { "docid": "49a6d062d3e24a8b325e9e7b142d32be", "text": "Rumelhart, Hinton and Williams [Rumelhart et al. 86] describe a learning procedure for layered networks of deterministic, neuron-like units. This paper describes further research on the learning procedure. We start by describing the units, the way they are connected, the learning procedure, and the extension to iterative nets. We then give an example in which a network learns a set of filters that enable it to discriminate formant-like patterns in the presence of noise. The speed of learning is strongly dependent on the shape of the surface formed by the error measure in \"weight space.\" We give examples of the shape of the error surface for a typical task and illustrate how an acceleration method speeds up descent in weight space. The main drawback of the learning procedure is the way it scales as the size of the task and the network increases. We give some preliminary results on scaling and show how the magnitude of the optimal weight changes depends on the fan-in of the units. Additional results illustrate the effects on learning speed of the amount of interaction between the weights. A variation of the learning procedure that back-propagates desired state information rather than error gradients is developed and compared with the standard procedure. Finally, we discuss the relationship between our iterative networks and the \"analog\" networks described by Hopfield and Tank [Hopfield and Tank 85]. The learning procedure can discover appropriate weights in their kind of network, as well as determine an optimal schedule for varying the nonlinearity of the units during a search.", "title": "" }, { "docid": "59c24fb5b9ac9a74b3f89f74b332a27c", "text": "This paper addresses the problem of learning to map sentences to logical form, given training data consisting of natural language sentences paired with logical representations of their meaning. Previous approaches have been designed for particular natural languages or specific meaning representations; here we present a more general method. The approach induces a probabilistic CCG grammar that represents the meaning of individual words and defines how these meanings can be combined to analyze complete sentences. We use higher-order unification to define a hypothesis space containing all grammars consistent with the training data, and develop an online learning algorithm that efficiently searches this space while simultaneously estimating the parameters of a log-linear parsing model. Experiments demonstrate high accuracy on benchmark data sets in four languages with two different meaning representations.", "title": "" }, { "docid": "6af09f57f2fcced0117dca9051917a0d", "text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. 
We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.", "title": "" }, { "docid": "09df260d26638f84ec3bd309786a8080", "text": "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize.com/projects/wordreprs/", "title": "" } ]
[ { "docid": "4e42d29a924c6e1e11456255c1f6cba0", "text": "We present a reformulation of the stochastic optimal control problem in terms of KL divergence minimisation, not only providing a unifying perspective of previous approaches in this area, but also demonstrating that the formalism leads to novel practical approaches to the control problem. Specifically, a natural relaxation of the dual formulation gives rise to exact iterative solutions to the finite and infinite horizon stochastic optimal control problem, while direct application of Bayesian inference methods yields instances of risk sensitive control. We furthermore study corresponding formulations in the reinforcement learning setting and present model free algorithms for problems with both discrete and continuous state and action spaces. Evaluation of the proposed methods on the standard Gridworld and Cart-Pole benchmarks verifies the theoretical insights and shows that the proposed methods improve upon current approaches.", "title": "" }, { "docid": "b17a8e121f865b7143bc2e38fa367b07", "text": "Radio frequency (r.f.) has been investigated as a means of externally powering miniature and long term implant telemetry systems. Optimum power transfer from the transmitter to the receiving coil is desired for total system efficiency. A seven step design procedure for the transmitting and receiving coils is described based on r.f., coil diameter, coil spacing, load and the number of turns of the coil. An inductance tapping circuit and a voltage doubler circuit have been built in accordance with the design procedure. Experimental results were within the desired total system efficiency ranges of 18% and 23%, respectively. On a étudié la fréquence radio (f.r.) en tant que source extérieure permettant de faire fonctionner les systèmes télémétriques d'implants miniatures à long terme. Afin d'assurer une efficacité totale au système, il est nécessaire d'obtenir un transfert de puissance optimum de l'émetteur à la bobine réceptrice. On donne la description d'une technique de conception en sept temps, fondée sur la fréquence radio, le diamètre de la bobine, l'espacement des spires, la charge et le nombre de tours de la bobine. Un circuit de captage de tension par induction et un circuit doubleur de tension ont été construits conformément à la méthode de conception. Les résultats expérimentaux étaient compris dans les limites d'efficacité totale souhaitable pour le système, soit 18% à 23%, respectivement. Hochfrequenz wurde als Mittel zur externen Energieversorgung von Miniatur und langfristigen Implantat-Telemetriesystemen untersucht. Zur Verwirklichung der höchsten Leistungsfähigkeit braucht das System optimale Energieübertragung von Sendegerät zu Empfangsspule. Ein auf Hochfrequenz beruhendes siebenstufiges Konstruktionssystem für Sende- und Empfangsspulen wird beschrieben, mit Hinweisen über Spulendurchmesser, Spulenanordnung, Ladung und die Anzahl der Wicklungen. Ein Induktionsanzapfstromkreis und ein Spannungsverdoppler wurden dem Konstruktionsverfahren entsprechend gebaut. Versuchsergebnisse lagen im Bereich des gewünschten Systemleistungsgrades von 18% und 23%.", "title": "" }, { "docid": "086f9cbed93553ca00b2afeff1cb8508", "text": "Rapid advance of location acquisition technologies boosts the generation of trajectory data, which track the traces of moving objects. A trajectory is typically represented by a sequence of timestamped geographical locations. A wide spectrum of applications can benefit from the trajectory data mining. 
Bringing unprecedented opportunities, large-scale trajectory data also pose great challenges. In this paper, we survey various applications of trajectory data mining, e.g., path discovery, location prediction, movement behavior analysis, and so on. Furthermore, this paper reviews an extensive collection of existing trajectory data mining techniques and discusses them in a framework of trajectory data mining. This framework and the survey can be used as a guideline for designing future trajectory data mining solutions.", "title": "" }, { "docid": "f8c238001bb72ed4f3e1bc2241f22d26", "text": "The resource management system is the central component of network computing systems. There have been many projects focused on network computing that have designed and implemented resource management systems with a variety of architectures and functionalities. In this paper, we develop a comprehensive taxonomy for describing resource management architectures and use this taxonomy to survey existing resource management implementations in very large-scale network computing systems known as Grids. We use the taxonomy and the survey results to identify architectural approaches that have not been fully explored in the research.", "title": "" }, { "docid": "0281a146c98cce5dd6a8990c4adf5bba", "text": "We propose a highly efficient and faster Single Image Super-Resolution (SISR) model with Deep Convolutional neural networks (Deep CNN). Deep CNN have recently shown that they have a significant reconstruction performance on single-image super-resolution. The current trend is using deeper CNN layers to improve performance. However, deep models demand larger computation resources and are not suitable for network edge devices like mobile, tablet and IoT devices. Our model achieves state-of-the-art reconstruction performance with at least 10 times lower calculation cost by Deep CNN with Residual Net, Skip Connection and Network in Network (DCSCN). A combination of Deep CNNs and Skip connection layers are used as a feature extractor for image features on both local and global areas. Parallelized 1x1 CNNs, like the one called Network in Network, are also used for image reconstruction. That structure reduces the dimensions of the previous layer’s output for faster computation with less information loss, and make it possible to process original images directly. Also we optimize the number of layers and filters of each CNN to significantly reduce the calculation cost. Thus, the proposed algorithm not only achieves stateof-the-art performance but also achieves faster and more efficient computation. Code is available at https://github.com/jiny2001/dcscn-super-resolution.", "title": "" }, { "docid": "a13d1144c4a719b1d6d5f4f0e645c2e3", "text": "Array antennas for 77GHz automotive radar application are designed and measured. Linear series-fed patch array (SFPA) antenna is designed for transmitters of middle range radar (MRR) and all the receivers. A planar SFPA based on the linear one and substrate integrated waveguide (SIW) feeding network is proposed for transmitter of long range radar (LRR), which can decline the radiation from feeding network itself. The array antennas are fabricated, both the performances with and without radome of these array antennas are measured. Good agreement between simulation and measurement has been achieved. 
They can be good candidates for 77GHz automotive application.", "title": "" }, { "docid": "fce925493fc9f7cbbe4c202e5e625605", "text": "Topic models are a useful and ubiquitous tool for understanding large corpora. However, topic models are not perfect, and for many users in computational social science, digital humanities, and information studies—who are not machine learning experts—existing models and frameworks are often a “take it or leave it” proposition. This paper presents a mechanism for giving users a voice by encoding users’ feedback to topic models as correlations between words into a topic model. This framework, interactive topic modeling (itm), allows untrained users to encode their feedback easily and iteratively into the topic models. Because latency in interactive systems is crucial, we develop more efficient inference algorithms for tree-based topic models. We validate the framework both with simulated and real users.", "title": "" }, { "docid": "01beae2504022968153e73be91d1765d", "text": "User studies in the music information retrieval and music digital library fields have been gradually increasing in recent years, but large-scale studies that can help detect common user behaviors are still lacking. We have conducted a large-scale user survey in which we asked numerous questions related to users’ music needs, uses, seeking, and management behaviors. In this paper, we present our preliminary findings, specifically focusing on the responses to questions of users’ favorite music related websites/applications and the reasons why they like them. We provide a list of popular music services, as well as an analysis of how these services are used, and what qualities are valued. Our findings suggest several trends in the types of music services people like: an increase in the popularity of music streaming and mobile music consumption, the emergence of new functionality, such as music identification and cloud music services, an appreciation of music videos, serendipitous discovery of music, and customizability, as well as users’ changing expectations of particular types of music information.", "title": "" }, { "docid": "44ea81d223e3c60c7b4fd1192ca3c4ba", "text": "Existing classification and rule learning algorithms in machine learning mainly use heuristic/greedy search to find a subset of regularities (e.g., a decision tree or a set of rules) in data for classification. In the past few years, extensive research was done in the database community on learning rules using exhaustive search under the name of association rule mining. The objective there is to find all rules in data that satisfy the user-specified minimum support and minimum confidence. Although the whole set of rules may not be used directly for accurate classification, effective and efficient classifiers have been built using the rules. This paper aims to improve such an exhaustive search based classification system CBA (Classification Based on Associations). The main strength of this system is that it is able to use the most accurate rules for classification. However, it also has weaknesses. This paper proposes two new techniques to deal with these weaknesses. This results in remarkably accurate classifiers. Experiments on a set of 34 benchmark datasets show that on average the new techniques reduce the error of CBA by 17% and is superior to CBA on 26 of the 34 datasets. They reduce the error of the decision tree classifier C4.5 by 19%, and improve performance on 29 datasets. 
Similar good results are also achieved against the existing classification systems, RIPPER, LB and a Naïve-Bayes classifier.", "title": "" }, { "docid": "2b1a9bc5ae7e9e6c2d2d008e2a2384b5", "text": "Network information distribution is a fundamental service for any anonymization network. Even though anonymization and information distribution about the network are two orthogonal issues, the design of the distribution service has a direct impact on the anonymization. Requiring each node to know about all other nodes in the network (as in Tor and AN.ON -- the most popular anonymization networks) limits scalability and offers a playground for intersection attacks. The distributed designs existing so far fail to meet security requirements and have therefore not been accepted in real networks.\n In this paper, we combine probabilistic analysis and simulation to explore DHT-based approaches for distributing network information in anonymization networks. Based on our findings we introduce NISAN, a novel approach that tries to scalably overcome known security problems. It allows for selecting nodes uniformly at random from the full set of all available peers, while each of the nodes has only limited knowledge about the network. We show that our scheme has properties similar to a centralized directory in terms of preventing malicious nodes from biasing the path selection. This is done, however, without requiring to trust any third party. At the same time our approach provides high scalability and adequate performance. Additionally, we analyze different design choices and come up with diverse proposals depending on the attacker model. The proposed combination of security, scalability, and simplicity, to the best of our knowledge, is not available in any other existing network information distribution system.", "title": "" }, { "docid": "4936f1c5dfa5da581c4bcaf147050041", "text": "With the popularity of social networks, such as microblogs and Twitter, topic inference for short text is increasingly significant and essential for many content analysis tasks. Biterm topic model (BTM) is superior to conventional topic models in uncovering latent semantic relevance for short text. However, Gibbs sampling employed by BTM is very time consuming when inferring topics, especially for large-scale datasets. It requires O(K) operations per sample for K topics, where K denotes the number of topics in the corpus. In this paper, we propose an acceleration algorithm of BTM, FastBTM, using an efficient sampling method for BTM which only requires O(1) amortized time while the traditional ones scale linearly with the number of topics. FastBTM is based on Metropolis-Hastings and alias method, both of which have been widely adopted in latent Dirichlet allocation (LDA) model and achieved outstanding speedup. We carry out a number of experiments on Tweets2011 Collection dataset and Enron dataset, indicating that our method is robust enough for both short texts and normal documents. Our work can be approximately 9 times faster than traditional Gibbs sampling method per iteration, when setting K = 1000. The source code of FastBTM can be obtained from https://github.com/paperstudy/FastBTM.", "title": "" }, { "docid": "b7bf7d430e4132a4d320df3a155ee74c", "text": "We present Wave menus, a variant of multi-stroke marking menus designed for improving the novice mode of marking while preserving their efficiency in the expert mode of marking. 
Focusing on the novice mode, a criteria-based analysis of existing marking menus motivates the design of Wave menus. Moreover a user experiment is presented that compares four hierarchical marking menus in novice mode. Results show that Wave and compound-stroke menus are significantly faster and more accurate than multi-stroke menus in novice mode, while it has been shown that in expert mode the multi-stroke menus and therefore the Wave menus outperform the compound-stroke menus. Wave menus also require significantly less screen space than compound-stroke menus. As a conclusion, Wave menus offer the best performance for both novice and expert modes in comparison with existing multi-level marking menus, while requiring less screen space than compound-stroke menus.", "title": "" }, { "docid": "b8f23ec8e704ee1cf9dbe6063a384b09", "text": "The Dirichlet distribution and its compound variant, the Dirichlet-multinomial, are two of the most basic models for proportional data, such as the mix of vocabulary words in a text document. Yet the maximum-likelihood estimate of these distributions is not available in closed-form. This paper describes simple and efficient iterative schemes for obtaining parameter estimates in these models. In each case, a fixed-point iteration and a Newton-Raphson (or generalized Newton-Raphson) iteration is provided. 1 The Dirichlet distribution The Dirichlet distribution is a model of how proportions vary. Let p denote a random vector whose elements sum to 1, so that pk represents the proportion of item k. Under the Dirichlet model with parameter vector α, the probability density at p is p(p) ∼ D(α1, ..., αK) = [Γ(∑k αk) / ∏k Γ(αk)] ∏k pk^(αk − 1)  (1)  where pk > 0  (2)", "title": "" }, { "docid": "0c9228dd4a65587e43fc6d2d1f0b03ce", "text": "Secure multi-party computation (MPC) is a technique well suited for privacy-preserving data mining. Even with the recent progress in two-party computation techniques such as fully homomorphic encryption, general MPC remains relevant as it has shown promising performance metrics in real-world benchmarks. Sharemind is a secure multi-party computation framework designed with real-life efficiency in mind. It has been applied in several practical scenarios, and from these experiments, new requirements have been identified. Firstly, large datasets require more efficient protocols for standard operations such as multiplication and comparison. Secondly, the confidential processing of financial data requires the use of more complex primitives, including a secure division operation. This paper describes new protocols in the Sharemind model for secure multiplication, share conversion, equality, bit shift, bit extraction, and division. All the protocols are implemented and benchmarked, showing that the current approach provides remarkable speed improvements over the previous work. This is verified using real-world benchmarks for both operations and algorithms.", "title": "" }, { "docid": "e4e97569f53ddde763f4f28559c96ba6", "text": "With a goal of understanding what drives generalization in deep networks, we consider several recently suggested explanations, including norm-based control, sharpness and robustness. We study how these measures can ensure generalization, highlighting the importance of scale normalization, and making a connection between sharpness and PAC-Bayes theory. 
We then investigate how well the measures explain different observed phenomena.", "title": "" }, { "docid": "f794d4a807a4d69727989254c557d2d1", "text": "The purpose of this study was to describe the operative procedures and clinical outcomes of a new three-column internal fixation system with anatomical locking plates on the tibial plateau to treat complex three-column fractures of the tibial plateau. From June 2011 to May 2015, 14 patients with complex three-column fractures of the tibial plateau were treated with reduction and internal fixation through an anterolateral approach combined with a posteromedial approach. The patients were randomly divided into two groups: a control group which included seven cases using common locking plates, and an experimental group which included seven cases with a new three-column internal fixation system with anatomical locking plates. The mean operation time of the control group was 280.7 ± 53.7 minutes, which was 215.0 ± 49.1 minutes in the experimental group. The mean intra-operative blood loss of the control group was 692.8 ± 183.5 ml, which was 471.4 ± 138.0 ml in the experimental group. The difference was statistically significant between the two groups above. The differences were not statistically significant between the following mean numbers of the two groups: Rasmussen score immediately after operation; active extension–flexion degrees of knee joint at three and 12 months post-operatively; tibial plateau varus angle (TPA) and posterior slope angle (PA) immediately after operation, at three and at 12 months post-operatively; HSS (The Hospital for Special Surgery) knee-rating score at 12 months post-operatively. All fractures healed. A three-column internal fixation system with anatomical locking plates on tibial plateau is an effective and safe tool to treat complex three-column fractures of the tibial plateau and it is more convenient than the common plate.", "title": "" }, { "docid": "6d5bd23414cc9f61534cddb0987eee5b", "text": "Open source software has witnessed an exponential growth in the last two decades and it is playing an increasingly important role in many companies and organizations leading to the formation of open source software ecosystems. In this paper we present a quality model that will allow the evaluation of those ecosystems in terms of their relevant quality characteristics such as health or activeness. To design this quality model we started by analysing the quality measures found during the execution of a systematic literature review on open source software ecosystems and, then, we classified and reorganized the set of measures in order to build a solid quality model. Finally, we test the suitability of the constructed quality model using the GNOME ecosystem.", "title": "" }, { "docid": "72b6d3039d8bfd1375bfa426db66ecfd", "text": "Die kutane Myiasis ist eine temporäre Besiedlung der Haut beim Menschen oder beim Vertebraten durch Fliegenlarven von vor allem zwei Spezies. In Zentral- und Südamerika wird die kutane Myiasis meist durch die Larven der Dermatobia hominis verursacht, in Afrika meist von Larven der Cordylobia sp. Wir beschreiben einen Fall von kutaner Myiasis bei einer Familie, die von einer 3-wöchigen Reise aus Ghana zurückgekehrt war. Die Parasiten (ca. 1–2 cm im Durchmesser und 0,5–1 cm hohe tumorähnliche Schwellungen) wurden vom Rücken des 48-jährigen Mannes, von der Nase, der Schulter und einem Handgelenk seiner 47-jährigen Frau sowie vom Rücken der 14-jährigen Tochter entfernt. 
Die Parasiten wurden als Larven der Cordylobia antropophaga Fliege indentifiziert. Nach Entfernung der ungefähr 8 mm großen Larven heilten die Läsionen innerhalb von 2 Wochen ohne weitere Therapie. Fälle von kutaner Myiasis beim Menschen sind höchstwahrscheinlich häufiger als angenommen, weil viele nicht diagnostiziert bzw. nicht veröffentlicht werden. Da Reisen in tropische und suptropische Gebiete aber immer häufiger werden, sollten Kliniker und Labors bei Furunkel-ähnlichen Läsionen auch an die Möglichkeit einer solchen Cordylobia-Myiasis denken. Dies gilt vor allem für Reisende, die aus dem tropischen Afrika zurückkehren. Cutaneous myiasis is a temporary parasitic infestation of the skin of human and other vertebrates by fly larvae, primarily species of the flies Dermatobia and Cordylobia. In Central and South America cutaneous myiasis is mainly caused by the larvae of Dermatobia hominis; in Africa it is mostly due to the larvae of Cordylobia spp. We describe a case of cutaneous myiasis in a family who returned to Slovenia from a three-week trip to Ghana. The parasites, in tumor-like swellings about 1–2 cm in diameter and 0.5–1 cm high, were removed from the back of the 48-year-old man, the nose, shoulder and wrist of his 47-year-old wife, and the back of their 14-year-old daughter. The parasites were identified as larvae of the fly C. anthropophaga. After removal of the larvae, which were oval-shaped and about 8 mm long, the lesions healed in two weeks without further treatment. Human cases of cutaneous myiasis are most probably underreported because many remain undiagnosed or unpublished. Because of increasing travel to tropical and subtropical areas, clinical and laboratory staff will need to be more alert to the possibility of Cordylobia myiasis in patients with furuncle-like lesions, particularly in individuals who have recently returned from tropical Africa.", "title": "" }, { "docid": "b1ae52dfa5ed1bb9c835816ca3fd52b4", "text": "The use of the halide-sensitive fluorescent probes (6-methoxy-N-(-sulphopropyl)quinolinium (SPQ) and N-(ethoxycarbonylmethyl)-6-methoxyquinolinium bromide (MQAE)) to measure chloride transport in cells has now been established as an alternative to the halide-selective electrode technique, radioisotope efflux assays and patch-clamp electrophysiology. We report here procedures for the assessment of halide efflux, using SPQ/MQAE halide-sensitive fluorescent indicators, from both adherent cultured epithelial cells and freshly obtained primary human airway epithelial cells. The procedure describes the calculation of efflux rate constants using experimentally derived SPQ/MQAE fluorescence intensities and empirically derived Stern-Volmer calibration constants. These fluorescence methods permit the quantitative analysis of CFTR function.", "title": "" }, { "docid": "4a0c2ad7f07620fa5ea5a97a68672131", "text": "The Philadelphia Neurodevelopmental Cohort (PNC) is a large-scale, NIMH funded initiative to understand how brain maturation mediates cognitive development and vulnerability to psychiatric illness, and understand how genetics impacts this process. As part of this study, 1445 adolescents ages 8-21 at enrollment underwent multimodal neuroimaging. Here, we highlight the conceptual basis for the effort, the study design, and the measures available in the dataset. 
We focus on neuroimaging measures obtained, including T1-weighted structural neuroimaging, diffusion tensor imaging, perfusion neuroimaging using arterial spin labeling, functional imaging tasks of working memory and emotion identification, and resting state imaging of functional connectivity. Furthermore, we provide characteristics regarding the final sample acquired. Finally, we describe mechanisms in place for data sharing that will allow the PNC to become a freely available public resource to advance our understanding of normal and pathological brain development.", "title": "" } ]
scidocsrr
ab1949ce70ded63ab2c218c5b557c221
Deep Recurrent Neural Networks for Human Activity Recognition
[ { "docid": "6dfc558d273ec99ffa7dc638912d272c", "text": "Recurrent neural networks (RNNs) with Long Short-Term memory cells currently hold the best known results in unconstrained handwriting recognition. We show that their performance can be greatly improved using dropout - a recently proposed regularization method for deep architectures. While previous works showed that dropout gave superior performance in the context of convolutional networks, it had never been applied to RNNs. In our approach, dropout is carefully used in the network so that it does not affect the recurrent connections, hence the power of RNNs in modeling sequences is preserved. Extensive experiments on a broad range of handwritten databases confirm the effectiveness of dropout on deep architectures even when the network mainly consists of recurrent and shared connections.", "title": "" }, { "docid": "ddc18f2d129d95737b8f0591560d202d", "text": "A variety of real-life mobile sensing applications are becoming available, especially in the life-logging, fitness tracking and health monitoring domains. These applications use mobile sensors embedded in smart phones to recognize human activities in order to get a better understanding of human behavior. While progress has been made, human activity recognition remains a challenging task. This is partly due to the broad range of human activities as well as the rich variation in how a given activity can be performed. Using features that clearly separate between activities is crucial. In this paper, we propose an approach to automatically extract discriminative features for activity recognition. Specifically, we develop a method based on Convolutional Neural Networks (CNN), which can capture local dependency and scale invariance of a signal as it has been shown in speech recognition and image recognition domains. In addition, a modified weight sharing technique, called partial weight sharing, is proposed and applied to accelerometer signals to get further improvements. The experimental results on three public datasets, Skoda (assembly line activities), Opportunity (activities in kitchen), Actitracker (jogging, walking, etc.), indicate that our novel CNN-based approach is practical and achieves higher accuracy than existing state-of-the-art methods.", "title": "" }, { "docid": "efa20ddb621568b4e3a590a72d1e762c", "text": "The increasing popularity of wearable devices in recent years means that a diverse range of physiological and functional data can now be captured continuously for applications in sports, wellbeing, and healthcare. This wealth of information requires efficient methods of classification and analysis where deep learning is a promising technique for large-scale data analytics. While deep learning has been successful in implementations that utilize high-performance computing platforms, its use on low-power wearable devices is limited by resource constraints. In this paper, we propose a deep learning methodology, which combines features learned from inertial sensor data together with complementary information from a set of shallow features to enable accurate and real-time activity classification. The design of this combined method aims to overcome some of the limitations present in a typical deep learning framework where on-node computation is required. To optimize the proposed method for real-time on-node computation, spectral domain preprocessing is used before the data are passed onto the deep learning framework. 
The classification accuracy of our proposed deep learning approach is evaluated against state-of-the-art methods using both laboratory and real world activity datasets. Our results show the validity of the approach on different human activity datasets, outperforming other methods, including the two methods used within our combined pipeline. We also demonstrate that the computation times for the proposed method are consistent with the constraints of real-time on-node processing on smartphones and a wearable sensor platform.", "title": "" }, { "docid": "d46594f40795de0feef71b480a53553f", "text": "Feed-forward, Deep neural networks (DNN)-based text-tospeech (TTS) systems have been recently shown to outperform decision-tree clustered context-dependent HMM TTS systems [1, 4]. However, the long time span contextual effect in a speech utterance is still not easy to accommodate, due to the intrinsic, feed-forward nature in DNN-based modeling. Also, to synthesize a smooth speech trajectory, the dynamic features are commonly used to constrain speech parameter trajectory generation in HMM-based TTS [2]. In this paper, Recurrent Neural Networks (RNNs) with Bidirectional Long Short Term Memory (BLSTM) cells are adopted to capture the correlation or co-occurrence information between any two instants in a speech utterance for parametric TTS synthesis. Experimental results show that a hybrid system of DNN and BLSTM-RNN, i.e., lower hidden layers with a feed-forward structure which is cascaded with upper hidden layers with a bidirectional RNN structure of LSTM, can outperform either the conventional, decision tree-based HMM, or a DNN TTS system, both objectively and subjectively. The speech trajectory generated by the BLSTM-RNN TTS is fairly smooth and no dynamic constraints are needed.", "title": "" } ]
[ { "docid": "e9047e59f58e71404107b065e584c547", "text": "Dermoscopic skin images are often obtained with different imaging devices, under varying acquisition conditions. In this work, instead of attempting to perform intensity and color normalization, we propose to leverage computational color constancy techniques to build an artificial data augmentation technique suitable for this kind of images. Specifically, we apply the shades of gray color constancy technique to color-normalize the entire training set of images, while retaining the estimated illuminants. We then draw one sample from the distribution of training set illuminants and apply it on the normalized image. We employ this technique for training two deep convolutional neural networks for the tasks of skin lesion segmentation and skin lesion classification, in the context of the ISIC 2017 challenge and without using any external dermatologic image set. Our results on the validation set are promising, and will be supplemented with extended results on the hidden test set when available.", "title": "" }, { "docid": "0c0d46af8cbb0486d12c7d60f72ea715", "text": "Predicting the development of artificial intelligence (AI) is a difficult project – but a vital one, according to some analysts. AI predictions already abound: but are they reliable? This paper will start by proposing a decomposition schema for classifying them. Then it constructs a variety of theoretical tools for analysing, judging and improving them. These tools are demonstrated by careful analysis of five famous AI predictions: the initial Dartmouth conference, Dreyfus’s criticism of AI, Searle’s Chinese Room paper, Kurzweil’s predictions in the ‘Age of Spiritual Machines’, and Omohundro’s ‘AI Drives’ paper. These case studies illustrate several important principles, such as the general overconfidence of experts, the superiority of models over expert judgement, and the need for greater uncertainty in all types of predictions. The general reliability of expert judgement in AI timeline predictions is shown to be poor, a result that fits in with previous studies of expert competence.//", "title": "" }, { "docid": "781890e1325126fe262a0587b26f9b6b", "text": "We evaluate the character-level translation method for neural semantic parsing on a large corpus of sentences annotated with Abstract Meaning Representations (AMRs). Using a sequence-tosequence model, and some trivial preprocessing and postprocessing of AMRs, we obtain a baseline accuracy of 53.1 (F-score on AMR-triples). We examine five different approaches to improve this baseline result: (i) reordering AMR branches to match the word order of the input sentence increases performance to 58.3; (ii) adding part-of-speech tags (automatically produced) to the input shows improvement as well (57.2); (iii) So does the introduction of super characters (conflating frequent sequences of characters to a single character), reaching 57.4; (iv) optimizing the training process by using pre-training and averaging a set of models increases performance to 58.7; (v) adding silver-standard training data obtained by an off-the-shelf parser yields the biggest improvement, resulting in an F-score of 64.0. Combining all five techniques leads to an F-score of 71.0 on holdout data, which is state-of-the-art in AMR parsing. 
This is remarkable because of the relative simplicity of the approach.", "title": "" }, { "docid": "207b24c58d8417fc309a42e3bbd6dc16", "text": "This study mainly remarks the efficiency of black-box modeling capacity of neural networks in the case of forecasting soccer match results, and opens up several debates on the nature of prediction and selection of input parameters. The selection of input parameters is a serious problem in soccer match prediction systems based on neural networks or statistical methods. Several input vector suggestions are implemented in literature which is mostly based on direct data from weekly charts. Here in this paper, two different input vector parameters have been tested via learning vector quantization networks in order to emphasize the importance of input parameter selection. The input vector parameters introduced in this study are plain and also meaningful when compared to other studies. The results of different approaches presented in this study are compared to each other, and also compared with the results of other neural network approaches and statistical methods in order to give an idea about the successful prediction performance. The paper is concluded with discussions about the nature of soccer match forecasting concept that may draw the interests of researchers willing to work in this area.", "title": "" }, { "docid": "b68da205eb9bf4a6367250c6f04d2ad4", "text": "Trends change rapidly in today’s world, prompting this key question: What is the mechanism behind the emergence of new trends? By representing real-world dynamic systems as complex networks, the emergence of new trends can be symbolized by vertices that “shine.” That is, at a specific time interval in a network’s life, certain vertices become increasingly connected to other vertices. This process creates new high-degree vertices, i.e., network stars. Thus, to study trends, we must look at how networks evolve over time and determine how the stars behave. In our research, we constructed the largest publicly available network evolution dataset to date, which contains 38,000 real-world networks and 2.5 million graphs. Then, we performed the first precise wide-scale analysis of the evolution of networks with various scales. Three primary observations resulted: (a) links are most prevalent among vertices that join a network at a similar time; (b) the rate that new vertices join a network is a central factor in molding a network’s topology; and (c) the emergence of network stars (high-degree vertices) is correlated with fast-growing networks. We applied our learnings to develop a flexible network-generation model based on large-scale, real-world data. This model gives a better understanding of how stars rise and fall within networks, and is applicable to dynamic systems both in nature and society. Multimedia Links I Video I Interactive Data Visualization I Data I Code Tutorials", "title": "" }, { "docid": "dda021771ca1b1e3c56d978149fb30c3", "text": "Intelligent interaction between humans and computers has been a dream of artificial intelligence since the beginning of digital era and one of the original motivations behind the creation of artificial intelligence. A key step towards the achievement of such an ambitious goal is to enable the Question Answering systems understand the information need of the user. In this thesis, we attempt to enable the QA system’s ability to understand the user’s information need by three approaches. 
First, a clarification question generation method is proposed to help the user clarify the information need and bridge the information-need gap between the QA system and the user. Next, a translation-based model is obtained from the large archives of Community Question Answering data to model the information need behind a question and boost the performance of question recommendation. Finally, a fine-grained classification framework is proposed to enable the system to recommend answered questions based on information need satisfaction.", "title": "" }, { "docid": "be9c234d05dc6f6b2afafa05b3337cf4", "text": "There has been much research on various aspects of Approximate Query Processing (AQP), such as different sampling strategies, error estimation mechanisms, and various types of data synopses. However, many subtle challenges arise when building an actual AQP engine that can be deployed and used by real world applications. These subtleties are often ignored (or at least not elaborated) by the theoretical literature and academic prototypes alike. For the first time to the best of our knowledge, in this article, we focus on these subtle challenges that one must address when designing an AQP system. Our intention for this article is to serve as a handbook listing critical design choices that database practitioners must be aware of when building or using an AQP system, not to prescribe a specific solution to each challenge.", "title": "" }, { "docid": "1a65b9d35bce45abeefe66882dcf4448", "text": "Data is nowadays an invaluable resource; indeed, it guides all business decisions in most of the computer-aided human activities. Threats to data integrity are thus of paramount relevance, as tampering with data may maliciously affect crucial business decisions. This issue is especially true in cloud computing environments, where data owners cannot control fundamental data aspects, like the physical storage of data and the control of its accesses. Blockchain has recently emerged as a fascinating technology which, among others, provides compelling properties about data integrity. Using the blockchain to face data integrity threats seems to be a natural choice, but its current limitations of low throughput, high latency, and weak stability hinder the practical feasibility of any blockchain-based solutions. In this paper, by focusing on a case study from the European SUNFISH project, which concerns the design of a secure by-design cloud federation platform for the public sector, we precisely delineate the actual data integrity needs of cloud computing environments and the research questions to be tackled to adopt blockchain-based databases. First, we detail the open research questions and the difficulties inherent in addressing them. Then, we outline a preliminary design of an effective blockchain-based database for cloud computing environments.", "title": "" }, { "docid": "c61a39f0ba3f24f10c5edd8ad39c7a20", "text": "REINFORCEMENT LEARNING AND ITS APPLICATION TO CONTROL", "title": "" }, { "docid": "60bb725cf5f0923101949fc11e93502a", "text": "An important ability of cognitive systems is the ability to familiarize themselves with the properties of objects and their environment as well as to develop an understanding of the consequences of their own actions on physical objects. Developing developmental approaches that allow cognitive systems to familiarize themselves with objects in this sense via guided self-exploration is an important challenge within the field of developmental robotics. 
In this paper we present a novel approach that allows cognitive systems to familiarize themselves with the properties of objects and the effects of their actions on them in a self-exploration fashion. Our approach is inspired by developmental studies that hypothesize that infants have a propensity to systematically explore the connection between own actions and their perceptual consequences in order to support inter-modal calibration of their bodies. We propose a reinforcement-based approach operating in a continuous state space in which the function predicting cumulated future rewards is learned via a deep Q-network. We investigate the impact of the structure of rewards, the impact of different regularization approaches as well as the impact of different exploration strategies.", "title": "" }, { "docid": "ed33b5fae6bc0af64668b137a3a64202", "text": "In this study the effect of the Edmodo social learning environment on mobile assisted language learning (MALL) was examined by seeking the opinions of students. Using a quantitative experimental approach, this study was conducted by conducting a questionnaire before and after using the social learning network Edmodo. Students attended lessons with their mobile devices. The course materials were shared in the network via Edmodo group sharing tools. The students exchanged idea and developed projects, and felt as though they were in a real classroom setting. The students were also able to access various multimedia content. The results of the study indicate that Edmodo improves students’ foreign language learning, increases their success, strengthens communication among students, and serves as an entertaining learning environment for them. The educationally suitable sharing structure and the positive user opinions described in this study indicate that Edmodo is also usable in other lessons. Edmodo can be used on various mobile devices, including smartphones, in addition to the web. This advantageous feature contributes to the usefulness of Edmodo as a scaffold for education.", "title": "" }, { "docid": "6a33013c19dc59d8871e217461d479e9", "text": "Cancer tissues in histopathology images exhibit abnormal patterns; it is of great clinical importance to label a histopathology image as having cancerous regions or not and perform the corresponding image segmentation. However, the detailed annotation of cancer cells is often an ambiguous and challenging task. In this paper, we propose a new learning method, multiple clustered instance learning (MCIL), to classify, segment and cluster cancer cells in colon histopathology images. The proposed MCIL method simultaneously performs image-level classification (cancer vs. non-cancer image), pixel-level segmentation (cancer vs. non-cancer tissue), and patch-level clustering (cancer subclasses). We embed the clustering concept into the multiple instance learning (MIL) setting and derive a principled solution to perform the above three tasks in an integrated framework. Experimental results demonstrate the efficiency and effectiveness of MCIL in analyzing colon cancers.", "title": "" }, { "docid": "4c788138dd1b390c059bb9156cd54941", "text": "We introduce second-order vector representations of words, induced from nearest neighborhood topological features in pre-trained contextual word embeddings. 
We then analyze the effects of using second-order embeddings as input features in two deep natural language processing models, for named entity recognition and recognizing textual entailment, as well as a linear model for paraphrase recognition. Surprisingly, we find that nearest neighbor information alone is sufficient to capture most of the performance benefits derived from using pre-trained word embeddings. Furthermore, second-order embeddings are able to handle highly heterogeneous data better than first-order representations, though at the cost of some specificity. Additionally, augmenting contextual embeddings with second-order information further improves model performance in some cases. Due to variance in the random initializations of word embeddings, utilizing nearest neighbor features from multiple first-order embedding samples can also contribute to downstream performance gains. Finally, we identify intriguing characteristics of second-order embedding spaces for further research, including much higher density and different semantic interpretations of cosine similarity.", "title": "" }, { "docid": "e86f1f37eac7c2182c5f77c527d8fac6", "text": "Eating members of one's own species is one of the few remaining taboos in modern human societies. In humans, aggressive cannibalism has been associated with mental illness. The objective of this report is to examine the unique set of circumstances and characteristics revealing the underlying etiology leading to such an act and the type of psychological effect it has for the perpetrator. A case report of a patient with paranoid schizophrenia who committed patricide and cannibalism is presented. The psychosocial implications of anthropophagy for the management of this particular patient are outlined.", "title": "" }, { "docid": "0182e6dcf7c8ec981886dfa2586a0d5d", "text": "MOTIVATION\nMetabolomics is a post-genomic technology which seeks to provide a comprehensive profile of all the metabolites present in a biological sample. This complements the mRNA profiles provided by microarrays, and the protein profiles provided by proteomics. To test the power of metabolome analysis we selected the problem of discriminating between related genotypes of Arabidopsis. Specifically, the problem tackled was to discriminate between two background genotypes (Col0 and C24) and, more significantly, the offspring produced by the crossbreeding of these two lines, the progeny (whose genotypes would differ only in their maternally inherited mitochondria and chloroplasts).\n\n\nOVERVIEW\nA gas chromatography--mass spectrometry (GCMS) profiling protocol was used to identify 433 metabolites in the samples. The metabolomic profiles were compared using descriptive statistics which indicated that key primary metabolites vary more than other metabolites. We then applied neural networks to discriminate between the genotypes. This showed clearly that the two background lines can be discriminated from each other and from their progeny, and indicated that the two progeny lines can also be discriminated. We applied Euclidean hierarchical and Principal Component Analysis (PCA) to help understand the basis of genotype discrimination. PCA indicated that malic acid and citrate are the two most important metabolites for discriminating between the background lines, and glucose and fructose are the two most important metabolites for discriminating between the crosses. 
These results are consistent with genotype differences in mitochondria and chloroplasts.", "title": "" }, { "docid": "e3b92d76bb139d0601c85416e8afaca4", "text": "Conventional supervised object recognition methods have been investigated for many years. Despite their successes, there are still two limitations: (1) various information about an object is represented by artificial features derived only from RGB images, and (2) lots of manually labeled data is required by supervised learning. To address those limitations, we propose a new semi-supervised learning framework based on RGB and depth (RGB-D) images to improve object recognition. In particular, our framework has two modules: (1) RGB and depth images are represented by convolutional-recursive neural networks to construct high-level features, respectively; (2) co-training is exploited to make full use of unlabeled RGB-D instances due to the existing two independent views. Experiments on the standard RGB-D object dataset demonstrate that our method can compete with other state-of-the-art methods with only 20% labeled data.", "title": "" }, { "docid": "6cc203d16e715cbd71efdeca380f3661", "text": "PURPOSE\nTo determine a population-based estimate of communication disorders (CDs) in children; the co-occurrence of intellectual disability (ID), autism, and emotional/behavioral disorders; and the impact of these conditions on the prevalence of CDs.\n\n\nMETHOD\nSurveillance targeted 8-year-olds born in 1994 residing in 2002 in the 3 most populous counties in Utah (n = 26,315). A multiple-source record review was conducted at all major health and educational facilities.\n\n\nRESULTS\nA total of 1,667 children met the criteria of CD. The prevalence of CD was estimated to be 63.4 per 1,000 8-year-olds (95% confidence interval = 60.4-66.2). The ratio of boys to girls was 1.8:1. Four percent of the CD cases were identified with an ID and 3.7% with autism spectrum disorders (ASD). Adjusting the CD prevalence to exclude ASD and/or ID cases significantly affected the CD prevalence rate. Other frequently co-occurring emotional/behavioral disorders with CD were attention deficit/hyperactivity disorder, anxiety, and conduct disorder.\n\n\nCONCLUSIONS\nFindings affirm that CDs and co-occurring mental health conditions are a major educational and public health concern.", "title": "" }, { "docid": "b266ab6e6a0fd75fb3d97b25970cab99", "text": "Keywords: Customer relationship management; CRM; Customer relationship performance; Information technology; Marketing capabilities; Social media technology. This study examines how social media technology usage and customer-centric management systems contribute to a firm-level capability of social customer relationship management (CRM). Drawing from the literature in marketing, information systems, and strategic management, the first contribution of this study is the conceptualization and measurement of social CRM capability. The second key contribution is the examination of how social CRM capability is influenced by both customer-centric management systems and social media technologies. These two resources are found to have an interactive effect on the formation of a firm-level capability that is shown to positively relate to customer relationship performance. The study analyzes data from 308 organizations using a structural equation modeling approach. 
Much like marketing managers in the late 1990s through early 2000s, who participated in the widespread deployment of customer relationship management (CRM) technologies, today's managers are charged with integrating nascent technologies – namely, social media applications – with existing systems and processes to develop new capabilities that foster stronger relationships with customers. This merger of existing CRM systems with social media technology has given way to a new concept of CRM that incorporates a more collaborative and network-focused approach to managing customer relationships. The term social CRM has recently emerged to describe this new way of developing and maintaining customer relationships (Greenberg, 2010). Marketing scholars have defined social CRM as the integration of customer-facing activities, including processes, systems, and technologies, with emergent social media applications to engage customers in collaborative conversations and enhance customer relationships (Greenberg, 2010; Trainor, 2012). Organizations are recognizing the potential of social CRM and have made considerable investments in social CRM technology over the past two years. According to Sarner et al. (2011), spending in social CRM technology increased by more than 40% in 2010 and is expected to exceed $1 billion by 2013. Despite the current hype surrounding social media applications, the efficacy of social CRM technology remains largely unknown and underexplored. Several questions remain unanswered, such as: 1) Can social CRM increase customer retention and loyalty? 2) How do social CRM technologies contribute to firm outcomes? 3) What role is played by CRM processes and technologies? As a result, companies are largely left to experiment with their social application implementations (Sarner et al., 2011), and they …", "title": "" }, { "docid": "8b3431783f1dc699be1153ad80348d3e", "text": "Quality Function Deployment (QFD) was conceived in Japan in the late 1960's, and introduced to America and Europe in 1983. This paper will provide a general overview of the QFD methodology and approach to product development. Once familiarity with the tool is established, a real-life application of the technique will be provided in a case study. The case study will illustrate how QFD was used to develop a new tape product and provide counsel to those that may want to implement the QFD process. Quality function deployment (QFD) is a “method to transform user demands into design quality, to deploy the functions forming quality, and to deploy methods for achieving the design quality into subsystems and component parts, and ultimately to specific elements of the manufacturing process.”", "title": "" }, { "docid": "9b1f40687d0c9b78efdf6d1e19769294", "text": "The ideal cell type to be used for cartilage therapy should possess a proven chondrogenic capacity, not cause donor-site morbidity, and should be readily expandable in culture without losing their phenotype. There are several cell sources being investigated to promote cartilage regeneration: mature articular chondrocytes, chondrocyte progenitors, and various stem cells. Most recently, stem cells isolated from joint tissue, such as chondrogenic stem/progenitors from cartilage itself, synovial fluid, synovial membrane, and infrapatellar fat pad (IFP) have gained great attention due to their increased chondrogenic capacity over the bone marrow and subcutaneous adipose-derived stem cells. 
In this review, we first describe the IFP anatomy and compare and contrast it with other adipose tissues, with a particular focus on the embryological and developmental aspects of the tissue. We then discuss the recent advances in IFP stem cells for regenerative medicine. We compare their properties with other stem cell types and discuss an ontogeny relationship with other joint cells and their role on in vivo cartilage repair. We conclude with a perspective for future clinical trials using IFP stem cells.", "title": "" } ]
scidocsrr
c916a926992a6bb405d068ff46b736a2
Searching Trajectories by Regions of Interest
[ { "docid": "4d4219d8e4fd1aa86724f3561aea414b", "text": "Trajectory search has long been an attractive and challenging topic which blooms various interesting applications in spatial-temporal databases. In this work, we study a new problem of searching trajectories by locations, in which context the query is only a small set of locations with or without an order specified, while the target is to find the k Best-Connected Trajectories (k-BCT) from a database such that the k-BCT best connect the designated locations geographically. Different from the conventional trajectory search that looks for similar trajectories w.r.t. shape or other criteria by using a sample query trajectory, we focus on the goodness of connection provided by a trajectory to the specified query locations. This new query can benefit users in many novel applications such as trip planning.\n In our work, we firstly define a new similarity function for measuring how well a trajectory connects the query locations, with both spatial distance and order constraint being considered. Upon the observation that the number of query locations is normally small (e.g. 10 or less) since it is impractical for a user to input too many locations, we analyze the feasibility of using a general-purpose spatial index to achieve efficient k-BCT search, based on a simple Incremental k-NN based Algorithm (IKNN). The IKNN effectively prunes and refines trajectories by using the devised lower bound and upper bound of similarity. Our contributions mainly lie in adapting the best-first and depth-first k-NN algorithms to the basic IKNN properly, and more importantly ensuring the efficiency in both search effort and memory usage. An in-depth study on the adaption and its efficiency is provided. Further optimization is also presented to accelerate the IKNN algorithm. Finally, we verify the efficiency of the algorithm by extensive experiments.", "title": "" } ]
[ { "docid": "ce7000befa45746d7cc8cf8c2ffb3246", "text": "The quantity of text information published in Arabic language on the net requires the implementation of effective techniques for the extraction and classifying of relevant information contained in large corpus of texts. In this paper we presented an implementation of an enhanced k-NN Arabic text classifier. We apply the traditional k-NN and Naive Bayes from Weka Toolkit for comparison purpose. Our proposed modified k-NN algorithm features an improved decision rule to skip the classes that are less similar and identify the right class from k nearest neighbours which increases the accuracy. The study evaluates the improved decision rule technique using the standard of recall, precision and f-measure as the basis of comparison. We concluded that the effectiveness of the proposed classifier is promising and outperforms the classical k-NN classifier.", "title": "" }, { "docid": "9677d364752d50160557bd8e9dfa0dfb", "text": "a Junior Research Group of Primate Sexual Selection, Department of Reproductive Biology, German Primate Center Courant Research Center ‘Evolution of Social Behavior’, Georg-August-Universität, Germany c Junior Research Group of Primate Kin Selection, Department of Primatology, Max-Planck-Institute for Evolutionary Anthropology, Germany d Institute of Biology, Faculty of Bioscience, Pharmacy and Psychology, University of Leipzig, Germany e Faculty of Veterinary Medicine, Bogor Agricultural University, Indonesia", "title": "" }, { "docid": "6cf1bcb5396a096a9bcb69186292060a", "text": "Existing feature-based recommendation methods incorporate auxiliary features about users and/or items to address data sparsity and cold start issues. They mainly consider features that are organized in a flat structure, where features are independent and in a same level. However, auxiliary features are often organized in rich knowledge structures (e.g. hierarchy) to describe their relationships. In this paper, we propose a novel matrix factorization framework with recursive regularization -- ReMF, which jointly models and learns the influence of hierarchically-organized features on user-item interactions, thus to improve recommendation accuracy. It also provides characterization of how different features in the hierarchy co-influence the modeling of user-item interactions. Empirical results on real-world data sets demonstrate that ReMF consistently outperforms state-of-the-art feature-based recommendation methods.", "title": "" }, { "docid": "8da45338656d4cd92a09ce7c1fdc3353", "text": "Revelations over the past couple of years highlight the importance of understanding malicious and surreptitious weakening of cryptographic systems. We provide an overview of this domain, using a number of historical examples to drive development of a weaknesses taxonomy. This allows comparing different approaches to sabotage. We categorize a broader set of potential avenues for weakening systems using this taxonomy, and discuss what future research is needed to provide sabotage-resilient cryptography.", "title": "" }, { "docid": "f657ec927e0cd39d06428dc3ee37e5e2", "text": "Muscle hernias of the lower leg involving the tibialis anterior, peroneus brevis, and lateral head of the gastrocnemius were found in three different patients. MRI findings allowed recognition of herniated muscle in all cases and identification of fascial defect in two of them. 
MR imaging findings and the value of dynamic MR imaging is emphasized.", "title": "" }, { "docid": "544cdcd97568a61e4a02a3ea37d6a0b5", "text": "In this paper, we describe a data-driven approach to leverage repositories of 3D models for scene understanding. Our ability to relate what we see in an image to a large collection of 3D models allows us to transfer information from these models, creating a rich understanding of the scene. We develop a framework for auto-calibrating a camera, rendering 3D models from the viewpoint an image was taken, and computing a similarity measure between each 3D model and an input image. We demonstrate this data-driven approach in the context of geometry estimation and show the ability to find the identities, poses and styles of objects in a scene. The true benefit of 3DNN compared to a traditional 2D nearest-neighbor approach is that by generalizing across viewpoints, we free ourselves from the need to have training examples captured from all possible viewpoints. Thus, we are able to achieve comparable results using orders of magnitude less data, and recognize objects from never-before-seen viewpoints. In this work, we describe the 3DNN algorithm and rigorously evaluate its performance for the tasks of geometry estimation and object detection/segmentation, as well as two novel applications: affordance estimation and photorealistic object insertion.", "title": "" }, { "docid": "f9c78846a862930470899f62efffa6a8", "text": "For fast and accurate motion of 6-axis articulated robot, more noble motion control strategy is needed. In general, the movement strategy of industrial robots can be divided into two kinds, PTP (Point to Point) and CP (Continuous Path). In recent, industrial robots which should be co-worked with machine tools are increasingly needed for performing various jobs, as well as simple handling or welding. Therefore, in order to cope with high-speed handling of the cooperation of industrial robots with machine tools or other devices, CP should be implemented so as to reduce vibration and noise, as well as decreasing operation time. This paper will realize CP motion (especially joint-linear) blending in 3-dimensional space for a 6-axis articulated (lab-manufactured) robot (called as “RS2”) by using LabVIEW® [6] programming, based on a parametric interpolation. Another small contribution of this paper is the proposal of motion blending simulation technique based on Recurdyn® V7, in order to figure out whether the joint-linear blending motion can generate the stable motion of robot in the sense of velocity magnitude at the end-effector of robot or not. In order to evaluate the performance of joint-linear motion blending, simple PTP (i.e., linear-linear) is also physically implemented on RS2. The implementation results of joint-linear motion blending and PTP are compared in terms of vibration magnitude and travel time by using the vibration testing equipment of Medallion of Zonic®. It can be confirmed verified that the vibration peak of joint-linear motion blending has been reduced to 1/10, compared to that of PTP.", "title": "" }, { "docid": "21f6ca062098c0dcf04fe8fadfc67285", "text": "The Key study in this paper is to begin the investigation process with the initial forensic analysis in the segments of the storage media which would definitely contain the digital forensic evidences. These Storage media Locations is referred as the Windows registry. 
Identifying the forensic evidence from windows registry may take less time than required in the case of all locations of a storage media. Our main focus in this research will be to study the registry structure of Windows 7 and identify the useful information within the registry keys of windows 7 that may be extremely useful to carry out any task of digital forensic analysis. The main aim is to describe the importance of the study on computer & digital forensics. The Idea behind the research is to implement a forensic tool which will be very useful in extracting the digital evidences and present them in usable form to a forensic investigator. The work includes identifying various events registry keys value such as machine last shut down time along with machine name, List of all the wireless networks that the computer has connected to; List of the most recently used files or applications, List of all the USB devices that have been attached to the computer and many more. This work aims to point out the importance of windows forensic analysis to extract and identify the hidden information which shall act as an evidence tool to track and gather the user activities pattern. All Research was conducted in a Windows 7 Environment. Keywords—Windows Registry, Windows 7 Forensic Analysis, Windows Registry Structure, Analysing Registry Key, Digital Forensic Identification, Forensic data Collection, Examination of Windows Registry, Decoding of Windows Registry Keys, Discovering User Activities Patterns, Computer Forensic Investigation Tool.", "title": "" }, { "docid": "8cdd4a8910467974dc7cfee30f6f570b", "text": "This work contains a theoretical study and computer simulations of a new self-organizing process. The principal discovery is that in a simple network of adaptive physical elements which receives signals from a primary event space, the signal representations are automatically mapped onto a set of output responses in such a way that the responses acquire the same topological order as that of the primary events. In other words, a principle has been discovered which facilitates the automatic formation of topologically correct maps of features of observable events. The basic self-organizing system is a one- or two-dimensional array of processing units resembling a network of threshold-logic units, and characterized by short-range lateral feedback between neighbouring units. Several types of computer simulations are used to demonstrate the ordering process as well as the conditions under which it fails.", "title": "" }, { "docid": "c7f944e3c31fbb45dcd83252b43f73ff", "text": "The moderation of content in many social media systems, such as Twitter and Facebook, motivated the emergence of a new social network system that promotes free speech, named Gab. Soon after that, Gab has been removed from Google Play Store for violating the company's hate speech policy and it has been rejected by Apple for similar reasons. In this paper we characterize Gab, aiming at understanding who are the users who joined it and what kind of content they share in this system. Our findings show that Gab is a very politically oriented system that hosts banned users from other social networks, some of them due to possible cases of hate speech and association with extremism. 
We provide the first measurement of news dissemination inside a right-leaning echo chamber, investigating a social media where readers are rarely exposed to content that cuts across ideological lines, but rather are fed with content that reinforces their current political or social views.", "title": "" }, { "docid": "35dd6675e287b5e364998ee138677032", "text": "Focussed structured document retrieval employs the concept of best entry points (BEPs), which are intended to provide optimal starting-points from which users can browse to relevant document components. This paper describes two small-scale studies, using experimental data from the Shakespeare user study, which developed and evaluated different approaches to the problem of automatic identification of BEPs.", "title": "" }, { "docid": "31555a5981fd234fe9dce3ed47f690f2", "text": "An accredited biennial 2012 study by the Association of Certified Fraud Examiners claims that on average 5% of a company’s revenue is lost because of unchecked fraud every year. The reason for such heavy losses are that it takes around 18 months for a fraud to be caught and audits catch only 3% of the actual fraud. This begs the need for better tools and processes to be able to quickly and cheaply identify potential malefactors. In this paper, we describe a robust tool to identify procurement related fraud/risk, though the general design and the analytical components could be adapted to detecting fraud in other domains. Besides analyzing standard transactional data, our solution analyzes multiple public and private data sources leading to wider coverage of fraud types than what generally exists in the marketplace. Moreover, our approach is more principled in the sense that the learning component, which is based on investigation feedback has formal guarantees. Though such a tool is ever evolving, an initial deployment of this tool over the past 6 months has found many interesting cases from compliance risk and fraud point of view, increasing the number of true positives found by over 80% compared with other state-of-the-art tools that the domain experts were previously using.", "title": "" }, { "docid": "ca267729b10d10abdd529d002d679e3a", "text": "Software development organizations are increasingly interested in the possibility of adopting agile development methods. Organizations that have been employing the Capability Maturity Model (CMM/CMMI) for making improvements are now changing their software development processes towards agility. By deploying agile methods, these organizations are making an investment the success of which needs to be proven. However, CMMI does not always support interpretations in an agile context. Consequently, assessments should be implemented in a manner that takes the agile context into account, while still producing useful results. This paper proposes an approach for agile software development assessment using CMMI and describes how this approach was used for software process improvement purposes in organizations that had either been planning to use or were using agile software development methods.", "title": "" }, { "docid": "6897a459e95ac14772de264545970726", "text": "There is a need for a system which provides real-time local environmental data in rural crop fields for the detection and management of fungal diseases. 
This paper presents the design of an Internet of Things (IoT) system consisting of a device capable of sending real-time environmental data to cloud storage and a machine learning algorithm to predict environmental conditions for fungal detection and prevention. The stored environmental data on conditions such as air temperature, relative air humidity, wind speed, and rain fall is accessed and processed by a remote computer for analysis and management purposes. A machine learning algorithm using Support Vector Machine regression (SVMr) was developed to process the raw data and predict short-term (day-to-day) air temperature, relative air humidity, and wind speed values to assist in predicting the presence and spread of harmful fungal diseases through the local crop field. Together, the environmental data and environmental predictions made easily accessible by this IoT system will ultimately assist crop field managers by facilitating better management and prevention of fungal disease spread.", "title": "" }, { "docid": "5b73fd2439e02906349f3afe2c2e331c", "text": "This paper presents a varactor-based power divider with reconfigurable power-dividing ratio and reconfigurable in-phase or out-of-phase phase relation between outputs. By properly controlling the tuning varactors, the power divider can be either in phase or out of phase and each with a wide range of tunable power-dividing ratio. The proposed microstrip power divider was prototyped and experimentally characterized. Measured and simulated results are in good agreement.", "title": "" }, { "docid": "8e6a83df0235cd6e27fbc14abb61c5fc", "text": "The management of postprandial hyperglycemia is an important strategy in the control of diabetes mellitus and complications associated with the disease, especially in the diabetes type 2. Therefore, inhibitors of carbohydrate hydrolyzing enzymes can be useful in the treatment of diabetes and medicinal plants can offer an attractive strategy for the purpose. Vaccinium arctostaphylos leaves are considered useful for the treatment of diabetes mellitus in some countries. In our research for antidiabetic compounds from natural sources, we found that the methanol extract of the leaves of V. arctostaphylos displayed a potent inhibitory activity on pancreatic α-amylase activity (IC50 = 0.53 (0.53 - 0.54) mg/mL). The bioassay-guided fractionation of the extract resulted in the isolation of quercetin as an active α-amylase inhibitor. Quercetin showed a dose-dependent inhibitory effect with IC50 value 0.17 (0.16 - 0.17) mM.", "title": "" }, { "docid": "4213993be9e2cf6d3470c59db20ea091", "text": "The virtual instrument is the main content of instrument technology nowadays. This article details the implementation process of the virtual oscilloscope. It is designed by LabVIEW graphical programming language. The virtual oscilloscope can achieve waveform display, channel selection, data collection, data reading, writing and storage, spectrum analysis, printing and waveform parameters measurement. It also has a friendly user interface and can be operated conveniently.", "title": "" }, { "docid": "b0593843ce815016a003c60f8f154006", "text": "This paper introduces a method for acquiring forensic-grade evidence from Android smartphones using open source tools. We investigate in particular cases where the suspect has made use of the smartphone's Wi-Fi or Bluetooth interfaces. 
We discuss the forensic analysis of four case studies, which revealed traces that were left in the inner structure of three mobile Android devices and also indicated security vulnerabilities. Subsequently, we propose a detailed plan for forensic examiners to follow when dealing with investigations of potential crimes committed using the wireless facilities of a suspect Android smartphone. This method can be followed to perform physical acquisition of data without using commercial tools and then to examine them safely in order to discover any activity associated with wireless communications. We evaluate our method using the Association of Chief Police Officers' (ACPO) guidelines of good practice for computer-based, electronic evidence and demonstrate that it is made up of an acceptable host of procedures for mobile forensic analysis, focused specifically on device Bluetooth and Wi-Fi facilities.", "title": "" }, { "docid": "5780d05a410270bfb3aa6ba511caf3a1", "text": "We present an extension to Continuous Time Bayesian Networks (CTBN) called Generalized CTBN (GCTBN). The formalism allows one to model continuous time delayed variables (with exponentially distributed transition rates), as well as non delayed or “immediate” variables, which act as standard chance nodes in a Bayesian Network. The usefulness of this kind of model is discussed through an example concerning the reliability of a simple component-based system. The interpretation of GCTBN is proposed in terms of Generalized Stochastic Petri Nets (GSPN); the purpose is twofold: to provide a well-defined semantics for GCTBNin terms of the underlying stochastic process, and to provide an actual mean to perform inference (both prediction and smoothing) on GCTBN.", "title": "" }, { "docid": "0fef8af603b8529d408e95610b981132", "text": "Leveraging the concept of software-defined network (SDN), the integration of terrestrial 5G and satellite networks brings us lots of benefits. The placement problem of controllers and satellite gateways is of fundamental importance for design of such SDN-enabled integrated network, especially, for the network reliability and latency, since different placement schemes would produce various network performances. To the best of our knowledge, it is an entirely new problem. Toward this end, in this paper, we first explore the satellite gateway placement problem to obtain the minimum average latency. A simulated annealing based approximate solution (SAA), is developed for this problem, which is able to achieve a near-optimal latency. Based on the analysis of latency, we further investigate a more challenging problem, i.e., the joint placement of controllers and gateways, for the maximum network reliability while satisfying the latency constraint. A simulated annealing and clustering hybrid algorithm (SACA) is proposed to solve this problem. Extensive experiments based on real world online network topologies have been conducted and as validated by our numerical results, enumeration algorithms are able to produce optimal results but having extremely long running time, while SAA and SACA can achieve approximate optimal performances with much lower computational complexity.", "title": "" } ]
scidocsrr
e29efa1679ce80dc8db4812b8a7bacba
Improved Selective Refinement Network for Face Detection
[ { "docid": "ccfb258fa88118aedbba5fa803808f75", "text": "Face detection has been well studied for many years and one of remaining challenges is to detect small, blurred and partially occluded faces in uncontrolled environment. This paper proposes a novel contextassisted single shot face detector, named PyramidBox to handle the hard face detection problem. Observing the importance of the context, we improve the utilization of contextual information in the following three aspects. First, we design a novel context anchor to supervise high-level contextual feature learning by a semi-supervised method, which we call it PyramidAnchors. Second, we propose the Low-level Feature Pyramid Network to combine adequate high-level context semantic feature and Low-level facial feature together, which also allows the PyramidBox to predict faces of all scales in a single shot. Third, we introduce a contextsensitive structure to increase the capacity of prediction network to improve the final accuracy of output. In addition, we use the method of Data-anchor-sampling to augment the training samples across different scales, which increases the diversity of training data for smaller faces. By exploiting the value of context, PyramidBox achieves superior performance among the state-of-the-art over the two common face detection benchmarks, FDDB and WIDER FACE. Our code is available in PaddlePaddle: https://github.com/PaddlePaddle/models/tree/develop/ fluid/face_detection.", "title": "" } ]
[ { "docid": "4044d493ac6c38fcb590a7fa5ced84d9", "text": "Use of sub-design-rule (SDR) thick-gate-oxide MOS structures can significantly improve RF performance. Utilizing 3-stack 3.3-V MOSFET's with an SDR channel length, a 31.3-dBm 900-MHz Bulk CMOS T/R switch with transmit (TX) and receive (RX) insertion losses of 0.5 and 1.0 dB is realized. A 28-dBm 2.4-GHz T/R switch with TX and RX insertion losses of 0.8 and 1.2 dB is also demonstrated. SDR MOS varactors achieve Qmin of ~ 80 at 24 GHz with a tuning range of ~ 40%.", "title": "" }, { "docid": "abb748541b980385e4b8bc477c5adc0e", "text": "Spin–orbit torque, a torque brought about by in-plane current via the spin–orbit interactions in heavy-metal/ferromagnet nanostructures, provides a new pathway to switch the magnetization direction. Although there are many recent studies, they all build on one of two structures that have the easy axis of a nanomagnet lying orthogonal to the current, that is, along the z or y axes. Here, we present a new structure with the third geometry, that is, with the easy axis collinear with the current (along the x axis). We fabricate a three-terminal device with a Ta/CoFeB/MgO-based stack and demonstrate the switching operation driven by the spin–orbit torque due to Ta with a negative spin Hall angle. Comparisons with different geometries highlight the previously unknown mechanisms of spin–orbit torque switching. Our work offers a new avenue for exploring the physics of spin–orbit torque switching and its application to spintronics devices.", "title": "" }, { "docid": "408122795467ff0247f95a997a1ed90a", "text": "With the popularity of mobile devices, photo retargeting has become a useful technique that adapts a high-resolution photo onto a low-resolution screen. Conventional approaches are limited in two aspects. The first factor is the de-emphasized role of semantic content that is many times more important than low-level features in photo aesthetics. Second is the importance of image spatial modeling: toward a semantically reasonable retargeted photo, the spatial distribution of objects within an image should be accurately learned. To solve these two problems, we propose a new semantically aware photo retargeting that shrinks a photo according to region semantics. The key technique is a mechanism transferring semantics of noisy image labels (inaccurate labels predicted by a learner like an SVM) into different image regions. In particular, we first project the local aesthetic features (graphlets in this work) onto a semantic space, wherein image labels are selectively encoded according to their noise level. Then, a category-sharing model is proposed to robustly discover the semantics of each image region. The model is motivated by the observation that the semantic distribution of graphlets from images tagged by a common label remains stable in the presence of noisy labels. Thereafter, a spatial pyramid is constructed to hierarchically encode the spatial layout of graphlet semantics. Based on this, a probabilistic model is proposed to enforce the spatial layout of a retargeted photo to be maximally similar to those from the training photos. 
Experimental results show that (1) noisy image labels predicted by different learners can improve the retargeting performance, according to both qualitative and quantitative analysis, and (2) the category-sharing model stays stable even when 32.36% of image labels are incorrectly predicted.", "title": "" }, { "docid": "26597dea3d011243a65a1d2acdae19e8", "text": "Erasure coding techniques are used to increase the reliability of distributed storage systems while minimizing storage overhead. The bandwidth required to repair the system after a node failure also plays a crucial role in the system performance. In [1] authors have shown that a tradeoff exists between storage and repair bandwidth. They also have introduced the scheme of regenerating codes which meet this tradeoff. In this paper, a scheme of Exact Regenerating Codes is introduced, which are regenerating codes with an additional property of regenerating back the same node upon failure. For the minimum bandwidth point, which is suitable for applications like distributed mail servers, explicit construction for exact regenerating codes is provided. A subspace approach is provided, using which the necessary and sufficient conditions for a linear code to be an exact regenerating code are derived. This leads to the uniqueness of our construction. For the minimum storage point which suits applications such as storage in peer-to-peer systems, an explicit construction of regenerating codes for certain suitable parameters is provided. This code supports variable number of nodes and can handle multiple simultaneous node failures. The constructions given for both the points require a low field size and have low complexity.", "title": "" }, { "docid": "9c349ef0f3a48eaeaf678b8730d4b82c", "text": "This paper discusses the effectiveness of the EEG signal for human identification using four or less of channels of two different types of EEG recordings. Studies have shown that the EEG signal has biometric potential because signal varies from person to person and impossible to replicate and steal. Data were collected from 10 male subjects while resting with eyes open and eyes closed in 5 separate sessions conducted over a course of two weeks. Features were extracted using the wavelet packet decomposition and analyzed to obtain the feature vectors. Subsequently, the neural networks algorithm was used to classify the feature vectors. Results show that, whether or not the subjects’ eyes were open are insignificant for a 4– channel biometrics system with a classification rate of 81%. However, for a 2–channel system, the P4 channel should not be included if data is acquired with the subjects’ eyes open. It was observed that for 2– channel system using only the C3 and C4 channels, a classification rate of 71% was achieved. Keywords—Biometric, EEG, Wavelet Packet Decomposition, Neural Networks", "title": "" }, { "docid": "8930924a223ef6a8d19e52ab5c6e7736", "text": "Modern perception systems are notoriously complex, featuring dozens of interacting parameters that must be tuned to achieve good performance. Conventional tuning approaches require expensive ground truth, while heuristic methods are difficult to generalize. In this work, we propose an introspective ground-truth-free approach to evaluating the performance of a generic perception system. By using the posterior distribution estimate generated by a Bayesian estimator, we show that the expected performance can be estimated efficiently and without ground truth. 
Our simulated and physical experiments in a demonstrative indoor ground robot state estimation application show that our approach can order parameters similarly to using a ground-truth system, and is able to accurately identify top-performing parameters in varying contexts. In contrast, baseline approaches that reason only about observation log-likelihood fail in the face of challenging perceptual phenomena.", "title": "" }, { "docid": "5d398e35d6dc58b56a9257623cb83db0", "text": "BACKGROUND\nAlthough much has been published with regard to the columella assessed on the frontal and lateral views, a paucity of literature exists regarding the basal view of the columella. The objective of this study was to evaluate the spectrum of columella deformities and devise a working classification system based on underlying anatomy.\n\n\nMETHODS\nA retrospective study was performed of 100 consecutive patients who presented for primary rhinoplasty. The preoperative basal view photographs for each patient were reviewed to determine whether they possessed ideal columellar aesthetics. Patients who had deformity of their columella were further scrutinized to determine the most likely underlying cause of the subsequent abnormality.\n\n\nRESULTS\nOf the 100 patient photographs assessed, only 16 (16 percent) were found to display ideal norms of the columella. The remaining 84 of 100 patients (84 percent) had some form of aesthetic abnormality and were further classified based on the most likely underlying cause. Type 1 deformities (caudal septum and/or spine) constituted 18 percent (18 of 100); type 2 (medial crura), 12 percent (12 of 100); type 3 (soft tissue), 6 percent (six of 100); and type 4 (combination), 48 percent (48 of 100).\n\n\nCONCLUSIONS\nDeformities may be classified according to the underlying cause, with combined deformity being the most common. Use of the herein discussed classification scheme will allow surgeons to approach this region in a comprehensive manner. Furthermore, use of such a system allows for a more standardized approach for surgical treatment.", "title": "" }, { "docid": "aeba4012971d339a9a953a7b86f57eb8", "text": "Bridging the ‘reality gap’ that separates simulated robotics from experiments on hardware could accelerate robotic research through improved data availability. This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator. With enough variability in the simulator, the real world may appear to the model as just another variation. We focus on the task of object localization, which is a stepping stone to general robotic manipulation skills. We find that it is possible to train a real-world object detector that is accurate to 1.5 cm and robust to distractors and partial occlusions using only data from a simulator with non-realistic random textures. To demonstrate the capabilities of our detectors, we show they can be used to perform grasping in a cluttered environment. 
To our knowledge, this is the first successful transfer of a deep neural network trained only on simulated RGB images (without pre-training on real images) to the real world for the purpose of robotic control.", "title": "" }, { "docid": "db66428e21d473b7d77fde0c3ae6d6c3", "text": "In order to improve the charging speed of electric vehicle lead-acid batteries, this paper analyzes the feasibility of shortening the charging time by using a charging method with negative pulse discharge, presents a method for determining the negative pulse parameters for fast charging with pulse discharge, and determines the negative pulse amplitude and negative pulse duration for pulse charging with a negative pulse. Experiments show that the parameters determined with this method offer advantages such as short charging time and small temperature rise, and that the method for determining the negative pulse parameters can be used for lead-acid batteries of different capacities.", "title": "" }, { "docid": "d41d5ed278337cf3138880e628272f2d", "text": "Technological changes and improved electronic communications seem, paradoxically, to be making cities more, rather than less, important. There is a strong correlation between urbanization and economic development across countries, and within-country evidence suggests that productivity rises in dense agglomerations. But urban economic advantages are often offset by the perennial urban curses of crime, congestion and contagious disease. The past history of the developed world suggests that these problems require more capable governments that use a combination of economic and engineering solutions. Though the scope of urban challenges can make remaining rural seem attractive, agrarian poverty has typically also been quite costly.", "title": "" }, { "docid": "e7ada5ce425b1e814c9e4f958f0f3c11", "text": "The recent boom of AI has seen the emergence of many human-computer conversation systems such as Google Assistant, Microsoft Cortana, Amazon Echo and Apple Siri. We introduce and formalize the task of predicting questions in conversations, where the goal is to predict the new question that the user will ask, given the past conversational context. This task can be modeled as a “sequence matching” problem, where two sequences are given and the aim is to learn a model that maps any pair of sequences to a matching probability. Neural matching models, which adopt deep neural networks to learn sequence representations and matching scores, have attracted immense research interest from the information retrieval and natural language processing communities. In this paper, we first study neural matching models for the question retrieval task that has been widely explored in the literature, whereas the effectiveness of neural models for this task is relatively unstudied. We further evaluate the neural matching models in the next question prediction task in conversations. We have used the publicly available Quora data and Ubuntu chat logs in our experiments. Our evaluations investigate the potential of neural matching models with representation learning for question retrieval and next question prediction in conversations. Experimental results show that neural matching models perform well for both tasks.", "title": "" }, { "docid": "bbeb6f28ae02876dcce8a4cf205b6194", "text": "We propose the design of a programming language for quantum computing. Traditionally, quantum algorithms are frequently expressed at the hardware level, for instance in terms of the quantum circuit model or quantum Turing machines. 
These approaches do not encourage structured programming or abstractions such as data types. In this paper, we describe the syntax and semantics of a simple quantum programming language with high-level features such as loops, recursive procedures, and structured data types. The language is functional in nature, statically typed, free of run-time errors, and it has an interesting denotational semantics in terms of complete partial orders of superoperators.", "title": "" }, { "docid": "2e2d3ace23e70bf318b13543184aff86", "text": "Let G = (VG, EG) be a connected graph. The distance dG(u, v) between vertices u and v in G is the length of a shortest u–v path in G. The eccentricity of a vertex v in G is the integer eG(v) = max{dG(v, u) : u ∈ VG}. The diameter of G is the integer d(G) = max{eG(v) : v ∈ VG}. The periphery of a vertex v of G is the set PG(v) = {u ∈ VG : dG(v, u) = eG(v)}, while the periphery of G is the set P(G) = {v ∈ VG : eG(v) = d(G)}. We say that graph G is hangable if PG(v) ⊆ P(G) for every vertex v of G. In this paper we prove that every block graph is hangable and discuss the hangability of products of graphs.", "title": "" }, { "docid": "97d10e997f09554baa2c34556f49f1bf", "text": "Computerized generation of humor is a notoriously difficult AI problem. We develop an algorithm called Libitum that helps humans generate humor in a Mad Lib®, which is a popular fill-in-the-blank game. The algorithm is based on a machine learned classifier that determines whether a potential fill-in word is funny in the context of the Mad Lib story. We use Amazon Mechanical Turk to create ground truth data and to judge humor for our classifier to mimic, and we make this data freely available. Our testing shows that Libitum successfully aids humans in filling in Mad Libs that are usually judged funnier than those filled in by humans with no computerized help. We go on to analyze why some words are better than others at making a Mad Lib funny.", "title": "" }, { "docid": "6bcfc93a3bee13d2c5416e4cc5663646", "text": "The choice of an adequate object shape representation is critical for efficient grasping and robot manipulation. A good representation has to account for two requirements: it should allow uncertain sensory fusion in a probabilistic way and it should serve as a basis for efficient grasp and motion generation. We consider Gaussian process implicit surface potentials as object shape representations. Sensory observations condition the Gaussian process such that its posterior mean defines an implicit surface which becomes an estimate of the object shape. Uncertain visual, haptic and laser data can equally be fused in the same Gaussian process shape estimate. The resulting implicit surface potential can then be used directly as a basis for a reach and grasp controller, serving as an attractor for the grasp end-effectors and steering the orientation of contact points. Our proposed controller results in a smooth reach and grasp trajectory without strict separation of phases. We validate the shape estimation using Gaussian processes in a simulation on randomly sampled shapes and the grasp controller on a real robot with 7DoF arm and 7DoF hand.", "title": "" }, { "docid": "ed2ad5cd12eb164a685a60dc0d0d4a06", "text": "Explainable Recommendation refers to the personalized recommendation algorithms that address the problem of why they not only provide users with the recommendations, but also provide explanations to make the user or system designer aware of why such items are recommended.
In this way, it helps to improve the effectiveness, efficiency, persuasiveness, and user satisfaction of recommendation systems. In recent years, a large number of explainable recommendation approaches – especially model-based explainable recommendation algorithms – have been proposed and adopted in real-world systems. In this survey, we review the work on explainable recommendation that has been published in or before the year of 2018. We first highlight the position of explainable recommendation in recommender system research by categorizing recommendation problems into the 5W, i.e., what, when, who, where, and why. We then conduct a comprehensive survey of explainable recommendation itself in terms of three aspects: 1) We provide a chronological research line of explanations in recommender systems, including the user study approaches in the early years, as well as the more recent model-based approaches. 2) We provide a taxonomy for explainable recommendation algorithms, including user-based, item-based, model-based, and post-model explanations. 3) We summarize the application of explainable recommendation in different recommendation tasks, including product recommendation, social recommendation, POI recommendation, etc. We devote a section to discuss the explanation perspectives in the broader IR and machine learning settings, as well as their relationship with explainable recommendation research. We end the survey by discussing potential future research directions to promote the explainable recommendation research area. now Publishers Inc.. Explainable Recommendation: A Survey and New Perspectives. Foundations and Trends © in Information Retrieval, vol. XX, no. XX, pp. 1–87, 2018. DOI: 10.1561/XXXXXXXXXX.", "title": "" }, { "docid": "010926d088cf32ba3fafd8b4c4c0dedf", "text": "The number and the size of spatial databases, e.g. for geomarketing, traffic control or environmental studies, are rapidly growing which results in an increasing need for spatial data mining. In this paper, we present new algorithms for spatial characterization and spatial trend analysis. For spatial characterization it is important that class membership of a database object is not only determined by its non-spatial attributes but also by the attributes of objects in its neighborhood. In spatial trend analysis, patterns of change of some non-spatial attributes in the neighborhood of a database object are determined. We present several algorithms for these tasks. These algorithms were implemented within a general framework for spatial data mining providing a small set of database primitives on top of a commercial spatial database management system. A performance evaluation using a real geographic database demonstrates the effectiveness of the proposed algorithms. Furthermore, we show how the algorithms can be combined to discover even more interesting spatial knowledge.", "title": "" }, { "docid": "b5b6fc6ce7690ae8e49e1951b08172ce", "text": "The output voltage derivative term associated with a PID controller injects significant noise in a dc-dc converter. This is mainly due to the parasitic resistance and inductance of the output capacitor. Particularly, during a large-signal transient, noise injection significantly degrades phase margin. Although noise characteristics can be improved by reducing the cutoff frequency of the low-pass filter associated with the voltage derivative, this degrades the closed-loop bandwidth. 
A formulation of a PID controller is introduced to replace the output voltage derivative with information about the capacitor current, thus reducing noise injection. It is shown that this formulation preserves the fundamental principle of a PID controller and incorporates a load current feedforward, as well as inductor current dynamics. This can be helpful to further improve bandwidth and phase margin. The proposed method is shown to be equivalent to a voltage-mode-controlled buck converter and a current-mode-controlled boost converter with a PID controller in the voltage feedback loop. A buck converter prototype is tested, and the proposed algorithm is implemented using a field-programmable gate array.", "title": "" }, { "docid": "9c00313926a8c625fd15da8708aa941e", "text": "OBJECTIVE\nThe objective of this study was to evaluate the effect of a dental water jet on plaque biofilm removal using scanning electron microscopy (SEM).\n\n\nMETHODOLOGY\nEight teeth with advanced aggressive periodontal disease were extracted. Ten thin slices were cut from four teeth. Two slices were used as the control. Eight were inoculated with saliva and incubated for 4 days. Four slices were treated using a standard jet tip, and four slices were treated using an orthodontic jet tip. The remaining four teeth were treated with the orthodontic jet tip but were not inoculated with saliva to grow new plaque biofilm. All experimental teeth were treated using a dental water jet for 3 seconds on medium pressure.\n\n\nRESULTS\nThe standard jet tip removed 99.99% of the salivary (ex vivo) biofilm, and the orthodontic jet tip removed 99.84% of the salivary biofilm. Observation of the remaining four teeth by the naked eye indicated that the orthodontic jet tip removed significant amounts of calcified (in vivo) plaque biofilm. This was confirmed by SEM evaluations.\n\n\nCONCLUSION\nThe Waterpik dental water jet (Water Pik, Inc, Fort Collins, CO) can remove both ex vivo and in vivo plaque biofilm significantly.", "title": "" }, { "docid": "67d0e2d74f5b52d70bb194464d5c5b71", "text": "Mobile phones can provide a number of benefits to older people. However, most mobile phone designs and form factors are targeted at younger people and middle-aged adults. To inform the design of mobile phones for seniors, we ran several participatory activities where seniors critiqued current mobile phones, chose important applications, and built their own imagined mobile phone system. We prototyped this system on a real mobile phone and evaluated the seniors' performance through user tests and a real-world deployment. We found that our participants wanted more than simple phone functions, and instead wanted a variety of application areas. While they were able to learn to use the software with little difficulty, hardware design made completing some tasks frustrating or difficult. Based on our experience with our participants, we offer considerations for the community about how to design mobile devices for seniors and how to engage them in participatory activities.", "title": "" } ]
scidocsrr
1edd666e01785a141c316f7cd0f5e270
A Chatbot Based On AIML Rules Extracted From Twitter Dialogues
[ { "docid": "d103d7793a9ff39c43dce47d45742905", "text": "This paper proposes an architecture for an open-domain conversational system and evaluates an implemented system. The proposed architecture is fully composed of modules based on natural language processing techniques. Experimental results using human subjects show that our architecture achieves significantly better naturalness than a retrieval-based baseline and that its naturalness is close to that of a rule-based system using 149K hand-crafted rules.", "title": "" } ]
[ { "docid": "90f3c2ea17433ee296702cca53511b9e", "text": "This paper presents the design process, detailed analysis, and prototyping of a novel-structured line-start solid-rotor-based axial-flux permanent-magnet (AFPM) motor capable of autostarting with solid-rotor rings. The preliminary design is a slotless double-sided AFPM motor with four poles for high torque density and stable operation. Two concentric unilevel-spaced raised rings are added to the inner and outer radii of the rotor discs for smooth line-start of the motor. The design allows the motor to operate at both starting and synchronous speeds. The basic equations for the solid rings of the rotor of the proposed AFPM motor are discussed. Nonsymmetry of the designed motor led to its 3-D time-stepping finite-element analysis (FEA) via Vector Field Opera 14.0, which evaluates the design parameters and predicts the transient performance. To verify the design, a prototype 1-hp four-pole three-phase line-start AFPM synchronous motor is built and is used to test the performance in real time. There is a good agreement between experimental and FEA-based computed results. It is found that the prototype motor maintains high starting torque and good synchronization.", "title": "" }, { "docid": "a2845e100c20153f19e32b4e713ebbaa", "text": "The efficiency of the ground penetrating radar (GPR) system significantly depends on the antenna performance as signal has to propagate through lossy and inhomogeneous media. In this research work a resistively loaded compact Bow-tie antenna which can operate through a wide bandwidth of 4.1 GHz is proposed. The sharp corners of the slot antenna are rounded so as to minimize the end-fire reflections. The proposed antenna employs a resistive loading technique through a thin sheet of graphite to attain the ultra-wide bandwidth. The simulated results obtained from CST Microwave Studio v14 and HFSS v14 show a good amount of agreement for the antenna performance parameters. The proposed antenna has potential to apply for the GPR applications as it provides improved radiation efficiency, enhanced bandwidth, gain, directivity and reduced end-fire reflections.", "title": "" }, { "docid": "86fb01912ab343b95bb31e0b06fff851", "text": "Serial periodic data exhibit both serial and periodic properties. For example, time continues forward serially, but weeks, months, and years are periods that recur. While there are extensive visualization techniques for exploring serial data, and a few for exploring periodic data, no existing technique simultaneously displays serial and periodic attributes of a data set. We introduce a spiral visualization technique, which displays data along a spiral to highlight serial attributes along the spiral axis and periodic ones along the radii. We show several applications of the spiral visualization to data exploration tasks, present our implementation, discuss the capacity for data analysis, and present findings of our informal study with users in data-rich scientific domains.", "title": "" }, { "docid": "9c97262605b3505bbc33c64ff64cfcd5", "text": "This essay focuses on possible nonhuman applications of CRISPR/Cas9 that are likely to be widely overlooked because they are unexpected and, in some cases, perhaps even \"frivolous.\" We look at five uses for \"CRISPR Critters\": wild de-extinction, domestic de-extinction, personal whim, art, and novel forms of disease prevention. We then discuss the current regulatory framework and its possible limitations in those contexts. 
We end with questions about some deeper issues raised by the increased human control over life on earth offered by genome editing.", "title": "" }, { "docid": "6a602e4f48c0eb66161bce46d53f0409", "text": "In this paper, we propose three metrics for detecting botnets through analyzing their behavior. Our social infrastructure (i.e., the Internet) is currently experiencing the danger of bots' malicious activities as the scale of botnets increases. Although it is imperative to detect botnet to help protect computers from attacks, effective metrics for botnet detection have not been adequately researched. In this work we measure enormous amounts of traffic passing through the Asian Internet Interconnection Initiatives (AIII) infrastructure. To validate the effectiveness of our proposed metrics, we analyze measured traffic in three experiments. The experimental results reveal that our metrics are applicable for detecting botnets, but further research is needed to refine their performance", "title": "" }, { "docid": "3dd732828151a63d090a2633e3e48fac", "text": "This article shows the potential for convex optimization methods to be much more widely used in signal processing. In particular, automatic code generation makes it easier to create convex optimization solvers that are made much faster by being designed for a specific problem family. The disciplined convex programming framework that has been shown useful in transforming problems to a standard form may be extended to create solvers themselves. Much work remains to be done in exploring the capabilities and limitations of automatic code generation. As computing power increases, and as automatic code generation improves, the authors expect convex optimization solvers to be found more and more often in real-time signal processing applications.", "title": "" }, { "docid": "610629d3891c10442fe5065e07d33736", "text": "We investigate in this paper deep learning (DL) solutions for prediction of driver's cognitive states (drowsy or alert) using EEG data. We discussed the novel channel-wise convolutional neural network (CCNN) and CCNN-R which is a CCNN variation that uses Restricted Boltzmann Machine in order to replace the convolutional filter. We also consider bagging classifiers based on DL hidden units as an alternative to the conventional DL solutions. To test the performance of the proposed methods, a large EEG dataset from 3 studies of driver's fatigue that includes 70 sessions from 37 subjects is assembled. All proposed methods are tested on both raw EEG and Independent Component Analysis (ICA)-transformed data for cross-session predictions. The results show that CCNN and CCNN-R outperform deep neural networks (DNN) and convolutional neural networks (CNN) as well as other non-DL algorithms and DL with raw EEG inputs achieves better performance than ICA features.", "title": "" }, { "docid": "ebe93eda9810af02812dc4529ac6d651", "text": "We present three smart contracts that allow a briber to fairly exchange bribes to miners who pursue a mining strategy benefiting the briber. The first contract, CensorshipCon, highlights that Ethereum’s uncle block reward policy can directly subsidise the cost of bribing miners. The second contract, HistoryRevisionCon, rewards miners via an in-band payment for reversing transactions or enforcing a new state of another contract. The third contract, GoldfingerCon, rewards miners in one cryptocurrency for reducing the utility of another cryptocurrency. 
This work is motivated by the need to understand the extent to which smart contracts can impact the incentive mechanisms involved in Nakamoto-style consensus protocols.", "title": "" }, { "docid": "3d10a5dd2e58608ea6369d2c67e6401e", "text": "We propose a novel ‘‘e-brush’’ for calligraphy and painting, which meets all the criteria for a good e-brush. We use only four attributes to capture the essential features of the brush, and a suitably powerful modeling metaphor for its behavior. The e-brush s geometry, dynamic motions, and pigment changes are all dealt with in a single model. A single model simplifies the synchronization between the various system modules, thus giving rise to a more stable system, and lower costs. By a careful tradeoff between the complexity of the model and computation efficiency, more elaborate simulation of the e-brush s deformation and its recovery for interactive painterly rendering is made possible. We also propose a novel paper–ink model to complement the brush s model, and a machine intelligence module to empower the user to easily create beautiful calligraphy and painting. Despite the complexity of the modeling behind the scene, the high-level user interface has a simplistic and friendly design. The final results created by our e-brush can rival the real artwork. 2004 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "a3d7a0a672e9090072d0e3e7834844a2", "text": "Hyper spectral remote sensors collect image data for a large number of narrow, adjacent spectral bands. Every pixel in hyperspectral image involves a continuous spectrum that is used to classify the objects with great detail and precision. This paper presents hyperspectral image classification mechanism using genetic algorithm with empirical mode decomposition and image fusion used in preprocessing stage. 2-D Empirical mode decomposition method is used to remove any noisy components in each band of the hyperspectral data. After filtering, image fusion is performed on the hyperspectral bands to selectively merge the maximum possible features from the source images to form a single image. This fused image is classified using genetic algorithm. Different indices, such as K-means (KMI), Davies-Bouldin Index (DBI), and Xie-Beni Index (XBI) are used as objective functions. This method increases classification accuracy of hyperspectral image.", "title": "" }, { "docid": "8ae21da19b8afabb941bc5bb450434a9", "text": "A 7-month-old child presented with imperforate anus, penoscrotal hypospadias and transposition, and a midline mucosa-lined perineal mass. At surgery the mass was found to be supplied by the median sacral artery. It was excised and the anorectal malformation was repaired by posterior sagittal anorectoplasty. Histologically the mass revealed well-differentiated colonic tissue. The final diagnosis was well-differentiated sacrococcygeal teratoma in association with anorectal malformation.", "title": "" }, { "docid": "a8695230b065ae2e4c5308dfe4f8c10e", "text": "The paper describes a solution for the Yandex Personalized Web Search Challenge. The goal of the challenge is to rerank top ten web search query results to bring most personally relevant results on the top, thereby improving the search quality. The paper focuses on feature engineering for learning to rank in web search, including a novel pair-wise feature, shortand long-term personal navigation features. 
The paper demonstrates that point-wise logistic regression can achieve the stat-of-the-art performance in terms of normalized discounted cumulative gain with capability to scale up.", "title": "" }, { "docid": "324291450811b40a006cc93525443633", "text": "Many problems in radar and communication signal processing involve radio frequency (RF) signals of very high bandwidth. This presents a serious challenge to systems that might attempt to use a high-rate analog-to-digital converter (ADC) to sample these signals, as prescribed by the Shannon/Nyquist sampling theorem. In these situations, however, the information level of the signal is often far lower than the actual bandwidth, which prompts the question of whether more efficient schemes can be developed for measuring such signals. In this paper we propose a system that uses modulation, filtering, and sampling to produce a low-rate set of digital measurements. Our \"analog-to-information converter\" (AIC) is inspired by the theory of compressive sensing (CS), which states that a discrete signal having a sparse representation in some dictionary can be recovered from a small number of linear projections of that signal. We generalize the CS theory to continuous-time sparse signals, explain our proposed AIC system in the CS context, and discuss practical issues regarding implementation", "title": "" }, { "docid": "6daa1bc00a4701a2782c1d5f82c518e2", "text": "An 8-year-old Caucasian girl was referred with perineal bleeding of sudden onset during micturition. There was no history of trauma, fever or dysuria, but she had a history of constipation. Family history was unremarkable. Physical examination showed a prepubertal girl with a red ‘doughnut’-shaped lesion surrounding the urethral meatus (figure 1). Laboratory findings, including platelet count and coagulation, were normal. A vaginoscopy, performed using sedation, was negative. Swabs tested negative for sexually transmitted pathogens. A diagnosis of urethral prolapse (UP) was made on clinical appearance. Treatment with topical oestrogen cream was started and constipation treated with oral polyethylene glycol. On day 10, the bleeding stopped, and at week 5 there was a moderate regression of the UP. However, occasional mild bleeding persisted at 10 months, so she was referred to a urologist (figure 2). UP is an eversion of the distal urethral mucosa through the external meatus. It is most commonly seen in postmenopausal women and is uncommon in prepubertal girls. UP is rare in Caucasian children and more common in patients of African descent. 2 It may be asymptomatic or present with bleeding, spotting or urinary symptoms. The exact pathophysiological process of UP is unknown. Increased intra-abdominal pressure with straining, inadequate periurethral supporting tissue, neuromuscular dysfunction and a relative oestrogen deficiency are possible predisposing factors. Differential diagnoses include ureterocele, polyps, tumours and non-accidental injury. 3 Management options include conservative treatments such as tepid water baths and topical oestrogens. Surgery is indicated if bleeding, dysuria or pain persist. 5 Vaginoscopy in this case was possibly unnecessary, as there were no signs of trauma to the perineal area or other concerning signs or history of abuse. 
In the presence of typical UP, invasive diagnostic procedures should not be considered as first-line investigations and they should be reserved for cases of diagnostic uncertainty.", "title": "" }, { "docid": "714df72467bc3e919b7ea7424883cf26", "text": "Although a lot of attention has been paid to software cost estimation since 1960, making accurate effort and schedule estimation is still a challenge. To collect evidence and identify potential areas of improvement in software cost estimation, it is important to investigate the estimation accuracy, the estimation method used, and the factors influencing the adoption of estimation methods in current industry. This paper analyzed 112 projects from the Chinese software project benchmarking dataset and conducted questionnaire survey on 116 organizations to investigate the above information. The paper presents the current situations related to software project estimation in China and provides evidence-based suggestions on how to improve software project estimation. Our survey results suggest, e.g., that large projects were more prone to cost and schedule overruns, that most computing managers and professionals were neither satisfied nor dissatisfied with the project estimation, that very few organizations (15%) used model-based methods, and that the high adoption cost and insignificant benefit after adoption were the main causes for low use of model-based methods.", "title": "" }, { "docid": "74fb666c47afc81b8e080f730e0d1fe0", "text": "In current commercial Web search engines, queries are processed in the conjunctive mode, which requires the search engine to compute the intersection of a number of posting lists to determine the documents matching all query terms. In practice, the intersection operation takes a significant fraction of the query processing time, for some queries dominating the total query latency. Hence, efficient posting list intersection is critical for achieving short query latencies. In this work, we focus on improving the performance of posting list intersection by leveraging the compute capabilities of recent multicore systems. To this end, we consider various coarse-grained and fine-grained parallelization models for list intersection. Specifically, we present an algorithm that partitions the work associated with a given query into a number of small and independent tasks that are subsequently processed in parallel. Through a detailed empirical analysis of these alternative models, we demonstrate that exploiting parallelism at the finest-level of granularity is critical to achieve the best performance on multicore systems. On an eight-core system, the fine-grained parallelization method is able to achieve more than five times reduction in average query processing time while still exploiting the parallelism for high query throughput.", "title": "" }, { "docid": "1303770cf8d0f1b0f312feb49281aa10", "text": "A terahertz metamaterial absorber (MA) with properties of broadband width, polarization-insensitive, wide angle incidence is presented. Different from the previous methods to broaden the absorption width, this letter proposes a novel combinatorial way which units a nested structure with multiple metal-dielectric layers. We numerically investigate the proposed MA, and the simulation results show that the absorber achieves a broadband absorption over a frequency range of 0.896 THz with the absorptivity greater than 90%. 
Moreover, the full-width at half maximum of the absorber is up to 1.224 THz which is 61.2% with respect to the central frequency. The mechanism for the broadband absorption originates from the overlapping of longitudinal coupling between layers and coupling of the nested structure. Importantly, the nested structure makes a great contribution to broaden the absorption width. Thus, constructing a nested structure in a multi-layer absorber may be considered as an effective way to design broadband MAs.", "title": "" }, { "docid": "b7a08eaeb69fa6206cb9aec9cc54f2c3", "text": "This paper describes a computational pragmatic model which is geared towards providing helpful answers to modal and hypothetical questions. The work brings together elements from formal semantic theories on modality and question answering, defines a wider, pragmatically flavoured, notion of answerhood based on non-monotonic inference and develops a notion of context, within which aspects of more cognitively oriented theories, such as Relevance Theory, can be accommodated. The model has been implemented. The research was funded by ESRC grant number R000231279.", "title": "" }, { "docid": "8ccf463aa3eae6ed20001b8fad6f94a6", "text": "Nowadays, contactless payments are becoming increasingly common as new smartphones, tablets, point-of-sale (POS) terminals and payment cards (often termed \"tap-and-pay\" cards) are designed to support Near Field Communication (NFC) technology. However, as NFC technology becomes pervasive, there have been concerns about how well NFC-enabled contactless payment systems protect individuals and organizations from emerging security and privacy threats. In this paper, we examine the security of contactless payment systems by considering the privacy threats and the different adversarial attacks that these systems must defend against. We focus our analysis on the underlying trust assumptions, security measures and technologies that form the basis on which contactless payment cards and NFC-enabled mobile wallets exchange sensitive transaction data with contactless POS terminals. We also explore the EMV and ISO standards for contactless payments and disclose their shortcomings with regards to enforcing security and privacy in contactless payment transactions. Our findings shed light on the discrepancies between the EMV and ISO standards, as well as how card issuing banks and mobile wallet providers configure their contactless payment cards and NFC-enabled mobile wallets based on these standards, respectively. These inconsistencies are disconcerting as they can be exploited by an adversary to compromise the integrity of contactless payment transactions.", "title": "" }, { "docid": "75f5679d9c1bab3585c1bf28d50327d8", "text": "From medical charts to national census, healthcare has traditionally operated under a paper-based paradigm. However, the past decade has marked a long and arduous transformation bringing healthcare into the digital age. Ranging from electronic health records, to digitized imaging and laboratory reports, to public health datasets, today, healthcare now generates an incredible amount of digital information. Such a wealth of data presents an exciting opportunity for integrated machine learning solutions to address problems across multiple facets of healthcare practice and administration. Unfortunately, the ability to derive accurate and informative insights requires more than the ability to execute machine learning models.
Rather, a deeper understanding of the data on which the models are run is imperative for their success. While a significant effort has been undertaken to develop models able to process the volume of data obtained during the analysis of millions of digitalized patient records, it is important to remember that volume represents only one aspect of the data. In fact, drawing on data from an increasingly diverse set of sources, healthcare data presents an incredibly complex set of attributes that must be accounted for throughout the machine learning pipeline. This chapter focuses on highlighting such challenges, and is broken down into three distinct components, each representing a phase of the pipeline. We begin with attributes of the data accounted for during preprocessing, then move to considerations during model building, and end with challenges to the interpretation of model output. For each component, we present a discussion around data as it relates to the healthcare domain and offer insight into the challenges each may impose on the efficiency of machine learning techniques.", "title": "" } ]
scidocsrr
64f6d30312d3aec78ea534e3d712a9f0
A Taxonomy of Replay Attacks
[ { "docid": "5a28fbdcce61256fd67d97fc353b138b", "text": "Use of encryption to achieve authenticated communication in computer networks is discussed. Example protocols are presented for the establishment of authenticated connections, for the management of authenticated mail, and for signature verification and document integrity guarantee. Both conventional and public-key encryption algorithms are considered as the basis for protocols.", "title": "" } ]
[ { "docid": "fd455a51a5f96251b31db5e6eae34ecc", "text": "As an infrastructural and productive industry, tourism is very important in modern economy and includes different scopes and functions. If it is developed appropriately, cultural relations and economic development of countries will be extended and provided. Web development as an applied tool in the internet plays a very determining role in tourism success and proper exploitation of it can pave the way for more development and success of this industry. On the other hand, the amount of data in the current world has been increased and analysis of large sets of data that is referred to as big data has been converted into a strategic approach to enhance competition and establish new methods for development, growth, innovation, and enhancement of the number of customers. Today, big data is one of the important issues of information management in digital age and one of the main opportunities in tourism industry for optimal exploitation of maximum information. Big data can shape experiences of smart travel. Remarkable growth of these data sources has inspired new Strategies to understand the socio-economic phenomenon in different fields. The analytical approach of big data emphasizes the capacity of data collection and analysis with an unprecedented extent, depth and scale for solving the problems of real life and uses it. Indeed, big data analyses open the doors to various opportunities for developing the modern knowledge or changing our understanding of this scope and support decision-making in tourism industry. The purpose of this study is to show helpfulness of big data analysis to discover behavioral patterns in tourism industry and propose a model for employing data in tourism.", "title": "" }, { "docid": "2450ccfdff4503fc642550a876976f10", "text": "The purpose of this paper is to introduce sequential investment strategies that guarantee an optimal rate of growth of the capital, under minimal assumptions on the behavior of the market. The new strategies are analyzed both theoretically and empirically. The theoretical results show that the asymptotic rate of growth matches the optimal one that one could achieve with a full knowledge of the statistical properties of the underlying process generating the market, under the only assumption that the market is stationary and ergodic. The empirical results show that the performance of the proposed investment strategies measured on past NYSE and currency exchange data is solid, and sometimes even spectacular.", "title": "" }, { "docid": "6d262d30db4d6db112f40e5820393caf", "text": "This study sought to examine the effects of service quality and customer satisfaction on the repurchase intentions of customers of restaurants on University of Cape Coast Campus. The survey method was employed involving a convenient sample of 200 customers of 10 restaurants on the University of Cape Coast Campus. A modified DINESERV scale was used to measure customers’ perceived service quality. The results of the study indicate that four factors accounted for 50% of the variance in perceived service quality, namely; responsivenessassurance, empathy-equity, reliability and tangibles. Service quality was found to have a significant effect on customer satisfaction. Also, both service quality and customer satisfaction had significant effects on repurchase intention. However, customer satisfaction could not moderate the effect of service quality on repurchase intention. 
This paper adds to the debate on the dimensions of service quality and provides evidence on the effects of service quality and customer satisfaction on repurchase intention in a campus food service context.", "title": "" }, { "docid": "c91aa493ec980f81dad8ba2f86540301", "text": "Unresolved controversies regarding the functional impairments at the origin of dyscalculia, including working memory, approximate number system and attention have pervaded the field of mathematical learning disabilities. These controversies are fed by the tendency to focus on a single explanatory factor. We argue that we are in need of neurocognitive frameworks involving multiple functional components that contribute to inefficient numerical problem solving and dyscalculia. & 2013 Elsevier GmbH. All rights reserved.", "title": "" }, { "docid": "f0be968bc3e9427345c6238a956ef9b1", "text": "Abstract. In this paper, we introduce contractive conditions of integral type in the setting of dislocated quasi b-metric spaces. Using contractive conditions of integral type, we have presented a fixed point theorem in the framework of dislocated quasi b-metric spaces. Our established result generalize and extend various fixed point theorems of the literature in the context of dislocated quasi b-metric spaces. An example is given in the support of our main results.", "title": "" }, { "docid": "ccee5411cefccf0f9db35fead317e6b5", "text": "In recent years, deep learning algorithms have become increasingly more prominent for their unparalleled ability to automatically learn discriminant features from large amounts of data. However, within the field of electromyographybased gesture recognition, deep learning algorithms are seldom employed as they require an unreasonable amount of effort from a single person, to generate tens of thousands of examples. This work's hypothesis is that general, informative features can be learned from the large amounts of data generated by aggregating the signals of multiple users, thus reducing the recording burden while enhancing gesture recognition. Consequently, this paper proposes applying transfer learning on aggregated data from multiple users, while leveraging the capacity of deep learning algorithms to learn discriminant features from large datasets. Two datasets comprised of 19 and 17 able-bodied participants respectively (the first one is employed for pre-training) were recorded for this work, using the Myo Armband. A third Myo Armband dataset was taken from the NinaPro database and is comprised of 10 able-bodied participants. Three different deep learning networks employing three different modalities as input (raw EMG, Spectrograms and Continuous Wavelet Transform (CWT)) are tested on the second and third dataset. The proposed transfer learning scheme is shown to systematically and significantly enhance the performance for all three networks on the two datasets, achieving an offline accuracy of 98.31% for 7 gestures over 17 participants for the CWT-based ConvNet and 68.98% for 18 gestures over 10 participants for the raw EMG-based ConvNet. Finally, a use-case study employing eight able-bodied participants suggests that real-time feedback allows users to adapt their muscle activation strategy which reduces the degradation in accuracy normally experienced over time.", "title": "" }, { "docid": "9a071b23eb370f053a5ecfd65f4a847d", "text": "INTRODUCTION\nConcomitant obesity significantly impairs asthma control. 
Obese asthmatics show more severe symptoms and an increased use of medications.\n\n\nOBJECTIVES\nThe primary aim of the study was to identify genes that are differentially expressed in the peripheral blood of asthmatic patients with obesity, asthmatic patients with normal body mass, and obese patients without asthma. Secondly, we investigated whether the analysis of gene expression in peripheral blood may be helpful in the differential diagnosis of obese patients who present with symptoms similar to asthma.\n\n\nPATIENTS AND METHODS\nThe study group included 15 patients with asthma (9 obese and 6 normal-weight patients), while the control group-13 obese patients in whom asthma was excluded. The analysis of whole-genome expression was performed on RNA samples isolated from peripheral blood.\n\n\nRESULTS\nThe comparison of gene expression profiles between asthmatic patients with obesity and those with normal body mass revealed a significant difference in 6 genes. The comparison of the expression between controls and normal-weight patients with asthma showed a significant difference in 23 genes. The analysis of genes with a different expression revealed a group of transcripts that may be related to an increased body mass (PI3, LOC100008589, RPS6KA3, LOC441763, IFIT1, and LOC100133565). Based on gene expression results, a prediction model was constructed, which allowed to correctly classify 92% of obese controls and 89% of obese asthmatic patients, resulting in the overall accuracy of the model of 90.9%.\n\n\nCONCLUSIONS\nThe results of our study showed significant differences in gene expression between obese asthmatic patients compared with asthmatic patients with normal body mass as well as in obese patients without asthma compared with asthmatic patients with normal body mass.", "title": "" }, { "docid": "cc21d54f763176994602f9ae598596ce", "text": "BACKGROUND\nRecent studies have revealed that nursing staff turnover remains a major problem in emerging economies. In particular, nursing staff turnover in Malaysia remains high due to a lack of job satisfaction. Despite a shortage of healthcare staff, the Malaysian government plans to create 181 000 new healthcare jobs by 2020 through the Economic Transformation Programme (ETP). This study investigated the causal relationships among perceived transformational leadership, empowerment, and job satisfaction among nurses and medical assistants in two selected large private and public hospitals in Malaysia. This study also explored the mediating effect of empowerment between transformational leadership and job satisfaction.\n\n\nMETHODS\nThis study used a survey to collect data from 200 nursing staff, i.e., nurses and medical assistants, employed by a large private hospital and a public hospital in Malaysia. Respondents were asked to answer 5-point Likert scale questions regarding transformational leadership, employee empowerment, and job satisfaction. Partial least squares-structural equation modeling (PLS-SEM) was used to analyze the measurement models and to estimate parameters in a path model. Statistical analysis was performed to examine whether empowerment mediated the relationship between transformational leadership and job satisfaction.\n\n\nRESULTS\nThis analysis showed that empowerment mediated the effect of transformational leadership on the job satisfaction in nursing staff. 
Employee empowerment not only is indispensable for enhancing job satisfaction but also mediates the relationship between transformational leadership and job satisfaction among nursing staff.\n\n\nCONCLUSIONS\nThe results of this research contribute to the literature on job satisfaction in healthcare industries by enhancing the understanding of the influences of empowerment and transformational leadership on job satisfaction among nursing staff. This study offers important policy insight for healthcare managers who seek to increase job satisfaction among their nursing staff.", "title": "" }, { "docid": "4e23bf1c89373abaf5dc096f76c893f3", "text": "Clock and data recovery (CDR) circuit plays a vital role for wired serial link communication in multi mode based system on chip (SOC). In wire linked communication systems, when data flows without any accompanying clock over a single wire, the receiver of the system is required to recover this data synchronously without losing the information. Therefore there exists a need for CDR circuits in the receiver of the system for recovering the clock or timing information from these data. The existing Octa-rate CDR circuit is not compatible to real time data, such a data is unpredictable, non periodic and has different arrival times and phase widths. Thus the proposed PRN based Octa-rate Clock and Data Recovery circuit is made compatible to real time data by introducing a Random Sequence Generator. The proposed PRN based Octa-rate Clock and Data Recovery circuit consists of PRN Sequence Generator, 16-Phase Generator, Early Late Phase Detector and Delay Line Controller. The FSM based Delay Line Controller controls the delay length and introduces the required delay in the input data. The PRN based Octa-rate CDR circuit has been realized using Xilinx ISE 13.2 and implemented on Vertex-5 FPGA target device for real time verification. The delay between the input and the generation of output is measured and analyzed using Logic Analyzer AGILENT 1962 A.", "title": "" }, { "docid": "ce282fba1feb109e03bdb230448a4f8a", "text": "The goal of two-sample tests is to assess whether two samples, SP ∼ P and SQ ∼ Q, are drawn from the same distribution. Perhaps intriguingly, one relatively unexplored method to build two-sample tests is the use of binary classifiers. In particular, construct a dataset by pairing the n examples in SP with a positive label, and by pairing the m examples in SQ with a negative label. If the null hypothesis “P = Q” is true, then the classification accuracy of a binary classifier on a held-out subset of this dataset should remain near chance-level. As we will show, such Classifier Two-Sample Tests (C2ST) learn a suitable representation of the data on the fly, return test statistics in interpretable units, have a simple null distribution, and their predictive uncertainty allow to interpret where P and Q differ. The goal of this paper is to establish the properties, performance, and uses of C2ST. First, we analyze their main theoretical properties. Second, we compare their performance against a variety of state-of-the-art alternatives. Third, we propose their use to evaluate the sample quality of generative models with intractable likelihoods, such as Generative Adversarial Networks (GANs). 
Fourth, we showcase the novel application of GANs together with C2ST for causal discovery.", "title": "" }, { "docid": "7bc84587e24d700e687c636a07ef863c", "text": "Person re-identification across disjoint camera views has been widely applied in video surveillance yet it is still a challenging problem. One of the major challenges lies in the lack of spatial and temporal cues, which makes it difficult to deal with large variations of lighting conditions, viewing angles, body poses, and occlusions. Recently, several deep-learning-based person re-identification approaches have been proposed and achieved remarkable performance. However, most of those approaches extract discriminative features from the whole frame at one glimpse without differentiating various parts of the persons to identify. It is essentially important to examine multiple highly discriminative local regions of the person images in details through multiple glimpses for dealing with the large appearance variance. In this paper, we propose a new soft attention-based model, i.e., the end-to-end comparative attention network (CAN), specifically tailored for the task of person re-identification. The end-to-end CAN learns to selectively focus on parts of pairs of person images after taking a few glimpses of them and adaptively comparing their appearance. The CAN model is able to learn which parts of images are relevant for discerning persons and automatically integrates information from different parts to determine whether a pair of images belongs to the same person. In other words, our proposed CAN model simulates the human perception process to verify whether two images are from the same person. Extensive experiments on four benchmark person re-identification data sets, including CUHK01, CHUHK03, Market-1501, and VIPeR, clearly demonstrate that our proposed end-to-end CAN for person re-identification outperforms well established baselines significantly and offer the new state-of-the-art performance.", "title": "" }, { "docid": "251c2f5ebc0d2c784b01802f8cd25e89", "text": "Reduced frequency range in vowel production is a well documented speech characteristic of individuals with psychological and neurological disorders. Affective disorders such as depression and post-traumatic stress disorder (PTSD) are known to influence motor control and in particular speech production. The assessment and documentation of reduced vowel space and reduced expressivity often either rely on subjective assessments or on analysis of speech under constrained laboratory conditions (e.g. sustained vowel production, reading tasks). These constraints render the analysis of such measures expensive and impractical. Within this work, we investigate an automatic unsupervised machine learning based approach to assess a speaker's vowel space. Our experiments are based on recordings of 253 individuals. Symptoms of depression and PTSD are assessed using standard self-assessment questionnaires and their cut-off scores. The experiments show a significantly reduced vowel space in subjects that scored positively on the questionnaires. We show the measure's statistical robustness against varying demographics of individuals and articulation rate. The reduced vowel space for subjects with symptoms of depression can be explained by the common condition of psychomotor retardation influencing articulation and motor control. 
These findings could potentially support treatment of affective disorders, like depression and PTSD in the future.", "title": "" }, { "docid": "5228454ef59c012b079885b2cce0c012", "text": "As a contribution to the HICSS 50 Anniversary Conference, we proposed a new mini-track on Text Mining in Big Data Analytics. This mini-track builds on the successful HICSS Workshop on Text Mining and recognizes the growing importance of unstructured text as a data source for descriptive and predictive analytics in research on collaboration systems and technologies. In this initial iteration of the mini-track, we have accepted three papers that cover conceptual issues, methodological approaches to social media, and the development of categorization models and dictionaries useful in a corporate context. The minitrack highlights the potential of an interdisciplinary research community within the HICSS collaboration systems and technologies track.", "title": "" }, { "docid": "0b0614f88f849aa5ecf135dcee55528a", "text": "This paper introduces a new statistical approach to automatically partitioning text into coherent segments. The approach is based on a technique that incrementally builds an exponential model to extract features that are correlated with the presence of boundaries in labeled training text. The models use two classes of features: topicality features that use adaptive language models in a novel way to detect broad changes of topic, and cue-word features that detect occurrences of specific words, which may be domain-specific, that tend to be used near segment boundaries. Assessment of our approach on quantitative and qualitative grounds demonstrates its effectiveness in two very different domains, Wall Street Journal news articles and television broadcast news story transcripts. Quantitative results on these domains are presented using a new probabilistically motivated error metric, which combines precision and recall in a natural and flexible way. This metric is used to make a quantitative assessment of the relative contributions of the different feature types, as well as a comparison with decision trees and previously proposed text segmentation algorithms.", "title": "" }, { "docid": "ba4f548c0543699546f1432980a7a414", "text": "Female genital mutilation (FGM) is a women's health and human rights issue attracting global interest. My purpose in this qualitative study was to report the knowledge and attitudes of Australian midwives toward FGM. Verbatim transcription and thematic analysis of semistructured interviews with 11 midwives resulted in these themes: knowledge of female genital mutilation and attitude toward female genital mutilation. Significant gaps in knowledge about FGM featured prominently. The midwives expressed anger toward FGM and empathy for affected women. Recommendations include increased information on FGM and associated legislation among midwives and other health providers in countries where FGM may be encountered.", "title": "" }, { "docid": "a870b0b347d15d8e8c788ede7ff5fa4a", "text": "On the twentieth anniversary of the original publication [10], following ten years of intense activity in the research literature, hardware support for transactional memory (TM) has finally become a commercial reality, with HTM-enabled chips currently or soon-to-be available from many hardware vendors. In this paper we describe architectural support for TM added to a future version of the Power ISA™.
Two imperatives drove the development: the desire to complement our weakly-consistent memory model with a more friendly interface to simplify the development and porting of multithreaded applications, and the need for robustness beyond that of some early implementations. In the process of commercializing the feature, we had to resolve some previously unexplored interactions between TM and existing features of the ISA, for example translation shootdown, interrupt handling, atomic read-modify-write primitives, and our weakly consistent memory model. We describe these interactions, the overall architecture, and discuss the motivation and rationale for our choices of architectural semantics, beyond what is typically found in reference manuals.", "title": "" }, { "docid": "672be163a987da17aca6ccbdbc4b9145", "text": "Clothing detection is an important step for retrieving similar clothing items, organizing fashion photos, artificial intelligence powered shopping assistants and automatic labeling of large catalogues. Training a deep learning based clothing detector requires pre-defined categories (dress, pants etc) and a high volume of annotated image data for each category. However, fashion evolves and new categories are constantly introduced in the marketplace. For example, consider the case of jeggings which is a combination of jeans and leggings. Detection of this new category will require adding annotated data specific to jegging class and subsequently relearning the weights for the deep network. In this paper, we propose a novel object detection method that can handle newer categories without the need of obtaining new labeled data and retraining the network. Our approach learns the visual similarities between various clothing categories and predicts a tree of categories. The resulting framework significantly improves the generalization capabilities of the detector to novel clothing products.", "title": "" }, { "docid": "3244dc9475ab3d4e51ce9dee3d5b46b9", "text": "Dielectric Elastomer Actuators (DEAs) are an emerging actuation technology which are inherent lightweight and compliant in nature, enabling the development of unique and versatile devices, such as the Dielectric Elastomer Minimum Energy Structure (DEMES). We present the development of a multisegment DEMES actuator for use in a deployable microsatellite gripper. The satellite, called CleanSpace One, will demonstrate active debris removal (ADR) in space using a small cost effective system. The inherent flexibility and lightweight nature of the DEMES actuator enables space efficient storage (e.g. in a rolled configuration) of the gripper prior to deployment. Multisegment DEMES have multiple open sections and are an effective way of amplifying bending deformation. We present the evolution of our DEMES actuator design from initial concepts up until the final design, describing briefly the trade-offs associated with each method. We describe the optimization of our chosen design concept and characterize this design in terms on bending angle as a function of input voltage and gripping force. Prior to the characterization the actuator was stored and subsequently deployed from a rolled state, a capability made possible thanks to the fabrication methodology and materials used. A tip angle change of approximately 60o and a gripping force of 0.8 mN (for small deflections from the actuator tip) were achieved. 
The prototype actuators (approximately 10 cm in length) weigh a maximum of 0.65 g and are robust and mechanically resilient, demonstrating over 80,000 activation cycles.", "title": "" }, { "docid": "ba9d274247f3f3da9274be52fa8a7096", "text": "Dysregulated growth hormone (GH) hypersecretion is usually caused by a GH-secreting pituitary adenoma and leads to acromegaly - a disorder of disproportionate skeletal, tissue, and organ growth. High GH and IGF1 levels lead to comorbidities including arthritis, facial changes, prognathism, and glucose intolerance. If the condition is untreated, enhanced mortality due to cardiovascular, cerebrovascular, and pulmonary dysfunction is associated with a 30% decrease in life span. This Review discusses acromegaly pathogenesis and management options. The latter include surgery, radiation, and use of novel medications. Somatostatin receptor (SSTR) ligands inhibit GH release, control tumor growth, and attenuate peripheral GH action, while GH receptor antagonists block GH action and effectively lower IGF1 levels. Novel peptides, including SSTR ligands, exhibiting polyreceptor subtype affinities and chimeric dopaminergic-somatostatinergic properties are currently in clinical trials. Effective control of GH and IGF1 hypersecretion and ablation or stabilization of the pituitary tumor mass lead to improved comorbidities and lowering of mortality rates for this hormonal disorder.", "title": "" }, { "docid": "6a2a1f6ff3fea681c37b19ac51c17fe6", "text": "The present research investigates the influence of culture on telemedicine adoption and patient information privacy, security, and policy. The results, based on the SEM analysis of the data collected in the United States, demonstrate that culture plays a significant role in telemedicine adoption. The results further show that culture also indirectly influences telemedicine adoption through information security, information privacy, and information policy. Our empirical results further indicate that information security, privacy, and policy impact telemedicine adoption.", "title": "" } ]
scidocsrr
f00be14d0d244e4a4a1d68da10e5b06c
Pix3D: Dataset and Methods for Single-Image 3D Shape Modeling
[ { "docid": "44cf90b2abb22a4a8d9cc031e154cfa0", "text": "Traditional approaches for learning 3D object categories use either synthetic data or manual supervision. In this paper, we propose a method which does not require manual annotations and is instead cued by observing objects from a moving vantage point. Our system builds on two innovations: a Siamese viewpoint factorization network that robustly aligns different videos together without explicitly comparing 3D shapes; and a 3D shape completion network that can extract the full shape of an object from partial observations. We also demonstrate the benefits of configuring networks to perform probabilistic predictions as well as of geometry-aware data augmentation schemes. We obtain state-of-the-art results on publicly-available benchmarks.", "title": "" }, { "docid": "98cc792a4fdc23819c877634489d7298", "text": "This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately. A vector is represented by a short code composed of its subspace quantization indices. The euclidean distance between two vectors can be efficiently estimated from their codes. An asymmetric version increases precision, as it computes the approximate distance between a vector and a code. Experimental results show that our approach searches for nearest neighbors efficiently, in particular in combination with an inverted file system. Results for SIFT and GIST image descriptors show excellent search accuracy, outperforming three state-of-the-art approaches. The scalability of our approach is validated on a data set of two billion vectors.", "title": "" }, { "docid": "16a5313b414be4ae740677597291d580", "text": "We contribute a large scale database for 3D object recognition, named ObjectNet3D, that consists of 100 categories, 90,127 images, 201,888 objects in these images and 44,147 3D shapes. Objects in the 2D images in our database are aligned with the 3D shapes, and the alignment provides both accurate 3D pose annotation and the closest 3D shape annotation for each 2D object. Consequently, our database is useful for recognizing the 3D pose and 3D shape of objects from 2D images. We also provide baseline experiments on four tasks: region proposal generation, 2D object detection, joint 2D detection and 3D object pose estimation, and image-based 3D shape retrieval, which can serve as baselines for future research using our database. Our database is available online at http://cvgl.stanford.edu/projects/objectnet3d.", "title": "" }, { "docid": "c6c9643816533237a29dd93fd420018f", "text": "We present an algorithm for finding a meaningful vertex-to-vertex correspondence between two 3D shapes given as triangle meshes. Our algorithm operates on embeddings of the two shapes in the spectral domain so as to normalize them with respect to uniform scaling and rigid-body transformation. Invariance to shape bending is achieved by relying on geodesic point proximities on a mesh to capture its shape. To deal with stretching, we propose to use non-rigid alignment via thin-plate splines in the spectral domain. This is combined with a refinement step based on the geodesic proximities to improve dense correspondence. 
We show empirically that our algorithm outperforms previous spectral methods, as well as schemes that compute correspondence in the spatial domain via non-rigid iterative closest points or the use of local shape descriptors, e.g., 3D shape context", "title": "" } ]
[ { "docid": "90414004f8681198328fb48431a34573", "text": "Process models play an important role in computer aided process engineering. Although the structure of these models is a priori known, model parameters should be estimated based on experiments. The accuracy of the estimated parameters largely depends on the information content of the experimental data presented to the parameter identification algorithm. Optimal experiment design (OED) can maximize the confidence in the model parameters. The paper proposes a new additive sequential evolutionary experiment design approach to maximize the information content of experiments. The main idea is to use the identified models to design new experiments to gradually improve the model accuracy while keeping the collected information from previous experiments. This scheme requires an effective optimization algorithm, hence the main contribution of the paper is the incorporation of Evolutionary Strategy (ES) into a new iterative scheme of optimal experiment design (AS-OED). This paper illustrates the applicability of AS-OED for the design of the feeding profile for a fed-batch biochemical reactor.", "title": "" }, { "docid": "2292c60d69c94f31c2831c2f21c327d8", "text": "With the abundance of raw data generated from various sources, Big Data has become a preeminent approach in acquiring, processing, and analyzing large amounts of heterogeneous data to derive valuable evidence. The size, speed, and formats in which data is generated and processed affect the overall quality of information. Therefore, Quality of Big Data (QBD) has become an important factor to ensure that the quality of data is maintained at all Big Data processing phases. This paper addresses QBD at the pre-processing phase, which includes sub-processes like cleansing, integration, filtering, and normalization. We propose a QBD model incorporating processes to support data quality profile selection and adaptation. In addition, it tracks and registers on a data provenance repository the effect of every data transformation that happened in the pre-processing phase. We evaluate the data quality selection module using a large EEG dataset. The obtained results illustrate the importance of addressing QBD at an early phase of the Big Data processing lifecycle, since it significantly saves on costs and supports accurate data analysis.", "title": "" }, { "docid": "29f1144b4f3203bab29d7cb6b24fd065", "text": "Virtual reality (VR) systems let users intuitively interact with 3D environments and have been used extensively for robotic teleoperation tasks. While more immersive than their 2D counterparts, early VR systems were expensive and required specialized hardware. Fortunately, there has been a recent proliferation of consumer-grade VR systems at affordable price points. These systems are inexpensive, relatively portable, and can be integrated into existing robotic frameworks. Our group has designed a VR teleoperation package for the Robot Operating System (ROS), ROS Reality, that can be easily integrated into such frameworks. ROS Reality is an open-source, over-the-Internet teleoperation interface between any ROS-enabled robot and any Unity-compatible VR headset. We completed a pilot study to test the efficacy of our system, with expert human users controlling a Baxter robot via ROS Reality to complete 24 dexterous manipulation tasks, compared to the same users controlling the robot via direct kinesthetic handling. 
This study provides insight into the feasibility of robotic teleoperation tasks in VR with current consumer-grade resources and exposes issues that need to be addressed in these VR systems. In addition, this paper presents a description of ROS Reality, its components, and architecture. We hope this system will be adopted by other research groups to allow for easy integration of VR teleoperated robots into future experiments.", "title": "" }, { "docid": "ea3fd6ece19949b09fd2f5f2de57e519", "text": "Multiple myeloma is the second most common hematologic malignancy. The treatment of this disease has changed considerably over the last two decades with the introduction to the clinical practice of novel agents such as proteasome inhibitors and immunomodulatory drugs. Basic research efforts towards better understanding of normal and missing immune surveillence in myeloma have led to development of new strategies and therapies that require the engagement of the immune system. Many of these treatments are under clinical development and have already started providing encouraging results. We, for the second time in the last two decades, are about to witness another shift of the paradigm in the management of this ailment. This review will summarize the major approaches in myeloma immunotherapies.", "title": "" }, { "docid": "9d84f58c0a2c8694bf2fe8d2ba0da601", "text": "Most existing Speech Emotion Recognition (SER) systems rely on turn-wise processing, which aims at recognizing emotions from complete utterances and an overly-complicated pipeline marred by many preprocessing steps and hand-engineered features. To overcome both drawbacks, we propose a real-time SER system based on end-to-end deep learning. Namely, a Deep Neural Network (DNN) that recognizes emotions from a one second frame of raw speech spectrograms is presented and investigated. This is achievable due to a deep hierarchical architecture, data augmentation, and sensible regularization. Promising results are reported on two databases which are the eNTERFACE database and the Surrey Audio-Visual Expressed Emotion (SAVEE) database.", "title": "" }, { "docid": "86cb3c072e67bed8803892b72297812c", "text": "Internet of Things (IoT) will comprise billions of devices that can sense, communicate, compute and potentially actuate. Data streams coming from these devices will challenge the traditional approaches to data management and contribute to the emerging paradigm of big data. This paper discusses emerging Internet of Things (IoT) architecture, large scale sensor network applications, federating sensor networks, sensor data and related context capturing techniques, challenges in cloud-based management, storing, archiving and processing of", "title": "" }, { "docid": "47929b2ff4aa29bf115a6728173feed7", "text": "This paper presents a metaobject protocol (MOP) for C++. This MOP was designed to bring the power of meta-programming to C++ programmers. It avoids penalties on runtime performance by adopting a new meta-architecture in which the metaobjects control the compilation of programs instead of being active during program execution. This allows the MOP to be used to implement libraries of efficient, transparent language extensions.", "title": "" }, { "docid": "f753712eed9e5c210810d2afd1366eb8", "text": "To improve FPGA performance for arithmetic circuits that are dominated by multi-input addition operations, an FPGA logic block is proposed that can be configured as a 6:2 or 7:2 compressor. 
Compressors have been used successfully in the past to realize parallel multipliers in VLSI technology; however, the peculiar structure of FPGA logic blocks, coupled with the high cost of the routing network relative to ASIC technology, renders compressors ineffective when mapped onto the general logic of an FPGA. On the other hand, current FPGA logic cells have already been enhanced with carry chains to improve arithmetic functionality, for example, to realize fast ternary carry-propagate addition. The contribution of this article is a new FPGA logic cell that is specialized to help realize efficient compressor trees on FPGAs. The new FPGA logic cell has two variants that can respectively be configured as a 6:2 or a 7:2 compressor using additional carry chains that, coupled with lookup tables, provide the necessary functionality. Experiments show that the use of these modified logic cells significantly reduces the delay of compressor trees synthesized on FPGAs compared to state-of-the-art synthesis techniques, with a moderate increase in area and power consumption.", "title": "" }, { "docid": "ebb024bbd923d35fd86adc2351073a48", "text": "Background: Depression is a chronic condition that results in considerable disability, and particularly in later life, severely impacts the life quality of the individual with this condition. The first aim of this review article was to summarize, synthesize, and evaluate the research base concerning the use of dance-based exercises on health status, in general, and secondly, specifically for reducing depressive symptoms, in older adults. A third was to provide directives for professionals who work or are likely to work with this population in the future. Methods: All English language peer reviewed publications detailing the efficacy of dance therapy as an intervention strategy for older people in general, and specifically for minimizing depression and dependence among the elderly were analyzed.", "title": "" }, { "docid": "3ab1222d051c42e400940afad76919ce", "text": "OBJECTIVES\nThe purpose of this study was to evaluate the feasibility, safety, and clinical outcomes up to 1 year in patients undergoing combined simultaneous thoracoscopic surgical and transvenous catheter atrial fibrillation (AF) ablation.\n\n\nBACKGROUND\nThe combination of the transvenous endocardial approach with the thoracoscopic epicardial approach in a single AF ablation procedure overcomes the limitations of both techniques and should result in better outcomes.\n\n\nMETHODS\nA cohort of 26 consecutive patients with AF who underwent hybrid thoracoscopic surgical and transvenous catheter ablation were followed, with follow-up of up to 1 year.\n\n\nRESULTS\nTwenty-six patients (42% with persistent AF) underwent successful hybrid procedures. There were no complications. The mean follow-up period was 470 ± 154 days. In 23% of the patients, the epicardial lesions were not transmural, and endocardial touch-up was necessary. One-year success, defined according to the Heart Rhythm Society, European Heart Rhythm Association, and European Cardiac Arrhythmia Society consensus statement for the catheter and surgical ablation of AF, was 93% for patients with paroxysmal AF and 90% for patients with persistent AF. 
Two patients underwent catheter ablation for recurrent AF or left atrial flutter after the hybrid procedure.\n\n\nCONCLUSIONS\nA combined transvenous endocardial and thoracoscopic epicardial ablation procedure for AF is feasible and safe, with a single-procedure success rate of 83% at 1 year.", "title": "" }, { "docid": "ad266da12fee45e4fbd060b56e998961", "text": "Does Child Abuse Cause Crime? Child maltreatment, which includes both child abuse and child neglect, is a major social problem. This paper focuses on measuring the effects of child maltreatment on crime using data from the National Longitudinal Study of Adolescent Health (Add Health). We focus on crime because it is one of the most socially costly potential outcomes of maltreatment, and because the proposed mechanisms linking maltreatment and crime are relatively well elucidated in the literature. Our work addresses many limitations of the existing literature on child maltreatment. First, we use a large national sample, and investigate different types of abuse in a similar framework. Second, we pay careful attention to identifying the causal impact of abuse, by using a variety of statistical methods that make differing assumptions. These methods include: Ordinary Least Squares (OLS), propensity score matching estimators, and twin fixed effects. Finally, we examine the extent to which the effects of maltreatment vary with socio-economic status (SES), gender, and the severity of the maltreatment. We find that maltreatment approximately doubles the probability of engaging in many types of crime. Low SES children are both more likely to be mistreated and suffer more damaging effects. Boys are at greater risk than girls, at least in terms of increased propensity to commit crime. Sexual abuse appears to have the largest negative effects, perhaps justifying the emphasis on this type of abuse in the literature. Finally, the probability of engaging in crime increases with the experience of multiple forms of maltreatment as well as the experience of Child Protective Services (CPS) investigation. JEL Classification: I1, K4", "title": "" }, { "docid": "01875eeb7da3676f46dd9d3f8bf3ecac", "text": "It is shown that a certain tour of 49 cities, one in each of the 48 states and Washington, D. C., has the shortest road distance. THE TRAVELING-SALESMAN PROBLEM might be described as follows: Find the shortest route (tour) for a salesman starting from a given city, visiting each of a specified group of cities, and then returning to the original point of departure. More generally, given an n by n symmetric matrix D = {d_ij}, where d_ij represents the 'distance' from I to J, arrange the points in a cyclic order in such a way that the sum of the d_ij between consecutive points is minimal. Since there are only a finite number of possibilities (at most 1/2 (n-1)!) to consider, the problem is to devise a method of picking out the optimal arrangement which is reasonably efficient for fairly large values of n. Although algorithms have been devised for problems of a similar nature, e.g., the optimal assignment problem, little is known about the traveling-salesman problem. We do not claim that this note alters the situation very much; what we shall do is outline a way of approaching the problem that sometimes, at least, enables one to find an optimal path and prove it so. In particular, it will be shown that a certain arrangement of 49 cities, one in each of the 48 states and Washington, D. C., is best, the d_ij used representing road distances as taken from an atlas. 
* HISTORICAL NOTE: The origin of this problem is somewhat obscure. It appears to have been discussed informally among mathematicians at mathematics meetings for many years. Surprisingly little in the way of results has appeared in the mathematical literature. It may be that the minimal-distance tour problem was stimulated by the so-called Hamiltonian game, which is concerned with finding the number of different tours possible over a specified network. The latter problem is cited by some as the origin of group theory and has some connections with the famous Four-Color Conjecture. Merrill Flood (Columbia University) should be credited with stimulating interest in the traveling-salesman problem in many quarters. As early as 1937, he tried to obtain near optimal solutions in reference to routing of school buses. Both Flood and A. W. Tucker (Princeton University) recall that they heard about the problem first in a seminar talk by Hassler Whitney at Princeton in 1934 (although Whitney, …", "title": "" }, { "docid": "b04ae3842293f5f81433afbaa441010a", "text": "Rootkit Trojan viruses, which can control attacked computers, delete important files and even steal passwords, are very popular now. The Interrupt Descriptor Table (IDT) hook is a kernel-level rootkit technology used by Trojans. The paper makes a deep analysis of the IDT hook handling procedure of rootkit Trojans, building on the methods of previous researchers. We compare the IDT structure and programs to find how Trojan interrupt handler code can respond to the interrupt vector request in both real address mode and protected address mode. Finally, we analyze IDT hook detection methods for rootkit Trojans using WinDbg or other professional tools.", "title": "" }, { "docid": "afd1bc554857a1857ac4be5ee37cc591", "text": "We report on an investigation into people's behaviors on information search tasks, specifically the relation between eye movement patterns and task characteristics. We conducted two independent user studies (n = 32 and n = 40), one with journalism tasks and the other with genomics tasks. The tasks were constructed to represent information needs of these two different user groups and to vary in several dimensions according to a task classification scheme. For each participant we classified eye gaze data to construct models of their reading patterns. The reading models were analyzed with respect to the effect of task types and Web page types on reading eye movement patterns. We report on relationships between tasks and individual reading behaviors at the task and page level. Specifically we show that transitions between scanning and reading behavior in eye movement patterns and the amount of text processed may be an implicit indicator of the current task type facets. This may be useful in building user and task models that can be useful in personalization of information systems and so address design demands driven by increasingly complex user actions with information systems. One of the contributions of this research is a new methodology to model information search behavior and investigate information acquisition and cognitive processing in interactive information tasks. © 2011 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "32731551289845c23452420fca121af5", "text": "This work presents the current status of the Springrobot autonomous vehicle project, whose main objective is to develop a safety-warning and driver-assistance system and an automatic pilot for rural and urban traffic environments. This system uses a high precise digital map and a combination of various sensors. The architecture and strategy for the system are briefly described and the details of lane-marking detection algorithms are presented. The R and G channels of the color image are used to form graylevel images. The size of the resulting gray image is reduced and the Sobel operator with a very low threshold is used to get a grayscale edge image. In the adaptive randomized Hough transform, pixels of the gray-edge image are sampled randomly according to their weights corresponding to their gradient magnitudes. The three-dimensional (3-D) parametric space of the curve is reduced to the two-dimensional (2-D) and the one-dimensional (1-D) space. The paired parameters in two dimensions are estimated by gradient directions and the last parameter in one dimension is used to verify the estimated parameters by histogram. The parameters are determined coarsely and quantization accuracy is increased relatively by a multiresolution strategy. Experimental results in different road scene and a comparison with other methods have proven the validity of the proposed method.", "title": "" }, { "docid": "22293b6953e2b28e1b3dc209649a7286", "text": "The Liquid State Machine (LSM) has emerged as a computational model that is more adequate than the Turing machine for describing computations in biological networks of neurons. Characteristic features of this new model are (i) that it is a model for adaptive computational systems, (ii) that it provides a method for employing randomly connected circuits, or even “found” physical objects for meaningful computations, (iii) that it provides a theoretical context where heterogeneous, rather than stereotypical, local gates or processors increase the computational power of a circuit, (iv) that it provides a method for multiplexing different computations (on a common input) within the same circuit. This chapter reviews the motivation for this model, its theoretical background, and current work on implementations of this model in innovative artificial computing devices.", "title": "" }, { "docid": "2c4c7f8dcf1681e278183525d520fc8c", "text": "In the course of studies on the isolation of bioactive compounds from Philippine plants, the seeds of Moringa oleifera Lam. were examined and from the ethanol extract were isolated the new O-ethyl-4-(alpha-L-rhamnosyloxy)benzyl carbamate (1) together with seven known compounds, 4(alpha-L-rhamnosyloxy)-benzyl isothiocyanate (2), niazimicin (3), niazirin (4), beta-sitosterol (5), glycerol-1-(9-octadecanoate) (6), 3-O-(6'-O-oleoyl-beta-D-glucopyranosyl)-beta-sitosterol (7), and beta-sitosterol-3-O-beta-D-glucopyranoside (8). Four of the isolates (2, 3, 7, and 8), which were obtained in relatively good yields, were tested for their potential antitumor promoting activity using an in vitro assay which tested their inhibitory effects on Epstein-Barr virus-early antigen (EBV-EA) activation in Raji cells induced by the tumor promoter, 12-O-tetradecanoyl-phorbol-13-acetate (TPA). All the tested compounds showed inhibitory activity against EBV-EA activation, with compounds 2, 3 and 8 having shown very significant activities. 
Based on the in vitro results, niazimicin (3) was further subjected to in vivo test and found to have potent antitumor promoting activity in the two-stage carcinogenesis in mouse skin using 7,12-dimethylbenz(a)anthracene (DMBA) as initiator and TPA as tumor promoter. From these results, niazimicin (3) is proposed to be a potent chemo-preventive agent in chemical carcinogenesis.", "title": "" }, { "docid": "29d1c63a3267501805b564613043cc89", "text": "INTRODUCTION\nOutcome data of penile traction therapy (PTT) for the acute phase (AP) of Peyronie's disease (PD) have not been specifically studied.\n\n\nAIM\nThe aim of this study was to assess the effectiveness of a penile extender device for the treatment of patients with AP of PD.\n\n\nMETHODS\nA total of 55 patients underwent PTT for 6 months and were compared with 41 patients with AP of PD who did not receive active treatment (\"no intervention group\" [NIG]).\n\n\nMAIN OUTCOMES MEASURES\nPre- and posttreatment variables included degree of curvature, penile length and girth, pain by 0-10 cm visual analog scale (VAS), erectile function (EF) domain of the International Index of Erectile Function questionnaire, Erection Hardness Scale, Sexual Encounter Profile 2 question, and penile sonographic evaluation (only patients in the intervention group).\n\n\nRESULTS\nThe mean curvature decreased from 33° at baseline to 15° at 6 months and 13° at 9 months with a mean decrease 20° (P < 0.05) in the PTT group. VAS score for pain decreased from 5.5 to 2.5 after 6 months (P < 0.05). EF and erection hardness also improved significantly. The percentage of patients who were not able to achieve penetration decreased from 62% to 20% (P < 0.03). In the NIG, deformity increased significantly, stretched flaccid penile length decreased, VAS score for pain increased, and EF and erection hardness worsened. PTT was associated with the disappearance of sonographic plaques in 48% of patients. Furthermore, the need for surgery was reduced in 40% of patients who would otherwise have been candidates for surgery and simplified the complexity of the surgical procedure (from grafting to plication) in one out of every three patients.\n\n\nCONCLUSIONS\nPTT seems an effective treatment for the AP of PD in terms of pain reduction, penile curvature decrease, and improvement in sexual function.", "title": "" }, { "docid": "27eaa5fe0c9684337ce8b6da9de9a8ed", "text": "When we observe someone performing an action, do our brains simulate making that action? Acquired motor skills offer a unique way to test this question, since people differ widely in the actions they have learned to perform. We used functional magnetic resonance imaging to study differences in brain activity between watching an action that one has learned to do and an action that one has not, in order to assess whether the brain processes of action observation are modulated by the expertise and motor repertoire of the observer. Experts in classical ballet, experts in capoeira and inexpert control subjects viewed videos of ballet or capoeira actions. Comparing the brain activity when dancers watched their own dance style versus the other style therefore reveals the influence of motor expertise on action observation. We found greater bilateral activations in premotor cortex and intraparietal sulcus, right superior parietal lobe and left posterior superior temporal sulcus when expert dancers viewed movements that they had been trained to perform compared to movements they had not. 
Our results show that this 'mirror system' integrates observed actions of others with an individual's personal motor repertoire, and suggest that the human brain understands actions by motor simulation.", "title": "" } ]
scidocsrr
1e80ee62264da24896de5947e9a5e266
"Ooh Aah... Just a Little Bit" : A Small Amount of Side Channel Can Go a Long Way
[ { "docid": "bc8b40babfc2f16144cdb75b749e3a90", "text": "The Bitcoin scheme is a rare example of a large scale global payment system in which all the transactions are publicly accessible (but in an anonymous way). We downloaded the full history of this scheme, and analyzed many statistical properties of its associated transaction graph. In this paper we answer for the first time a variety of interesting questions about the typical behavior of users, how they acquire and how they spend their bitcoins, the balance of bitcoins they keep in their accounts, and how they move bitcoins between their various accounts in order to better protect their privacy. In addition, we isolated all the large transactions in the system, and discovered that almost all of them are closely related to a single large transaction that took place in November 2010, even though the associated users apparently tried to hide this fact with many strange looking long chains and fork-merge structures in the transaction graph.", "title": "" } ]
[ { "docid": "ba79dd4818facbf0cef50bb1422f43e6", "text": "A nonlinear energy operator (NEO) gives an estimate of the energy content of a linear oscillator. This has been used to quantify the AM-FM modulating signals present in a sinusoid. Here, the authors give a new interpretation of NEO and extend its use in stochastic signals. They show that NEO accentuates the high-frequency content. This instantaneous nature of NEO and its very low computational burden make it an ideal tool for spike detection. The efficacy of the proposed method has been tested with simulated signals as well as with real electroencephalograms (EEGs).", "title": "" }, { "docid": "fa04e8e2e263d18ee821c7aa6ebed08e", "text": "In this study we examined the effect of physical activity based labels on the calorie content of meals selected from a sample fast food menu. Using a web-based survey, participants were randomly assigned to one of four menus which differed only in their labeling schemes (n=802): (1) a menu with no nutritional information, (2) a menu with calorie information, (3) a menu with calorie information and minutes to walk to burn those calories, or (4) a menu with calorie information and miles to walk to burn those calories. There was a significant difference in the mean number of calories ordered based on menu type (p=0.02), with an average of 1020 calories ordered from a menu with no nutritional information, 927 calories ordered from a menu with only calorie information, 916 calories ordered from a menu with both calorie information and minutes to walk to burn those calories, and 826 calories ordered from the menu with calorie information and the number of miles to walk to burn those calories. The menu with calories and the number of miles to walk to burn those calories appeared the most effective in influencing the selection of lower calorie meals (p=0.0007) when compared to the menu with no nutritional information provided. The majority of participants (82%) reported a preference for physical activity based menu labels over labels with calorie information alone and no nutritional information. Whether these labels are effective in real-life scenarios remains to be tested.", "title": "" }, { "docid": "ce24b783f2157fdb4365b60aa2e6163a", "text": "Geosciences is a field of great societal relevance that requires solutions to several urgent problems facing our humanity and the planet. As geosciences enters the era of big data, machine learning (ML)— that has been widely successful in commercial domains—offers immense potential to contribute to problems in geosciences. However, problems in geosciences have several unique challenges that are seldom found in traditional applications, requiring novel problem formulations and methodologies in machine learning. This article introduces researchers in the machine learning (ML) community to these challenges offered by geoscience problems and the opportunities that exist for advancing both machine learning and geosciences. We first highlight typical sources of geoscience data and describe their properties that make it challenging to use traditional machine learning techniques. We then describe some of the common categories of geoscience problems where machine learning can play a role, and discuss some of the existing efforts and promising directions for methodological development in machine learning. 
We conclude by discussing some of the emerging research themes in machine learning that are applicable across all problems in the geosciences, and the importance of a deep collaboration between machine learning and geosciences for synergistic advancements in both disciplines.", "title": "" }, { "docid": "148d0709c58111c2f703f68d348c09af", "text": "There has been tremendous growth in the use of mobile devices over the last few years. This growth has fueled the development of millions of software applications for these mobile devices often called as 'apps'. Current estimates indicate that there are hundreds of thousands of mobile app developers. As a result, in recent years, there has been an increasing amount of software engineering research conducted on mobile apps to help such mobile app developers. In this paper, we discuss current and future research trends within the framework of the various stages in the software development life-cycle: requirements (including non-functional), design and development, testing, and maintenance. While there are several non-functional requirements, we focus on the topics of energy and security in our paper, since mobile apps are not necessarily built by large companies that can afford to get experts for solving these two topics. For the same reason we also discuss the monetizing aspects of a mobile app at the end of the paper. For each topic of interest, we first present the recent advances done in these stages and then we present the challenges present in current work, followed by the future opportunities and the risks present in pursuing such research.", "title": "" }, { "docid": "881a495a8329c71a0202c3510e21b15d", "text": "We apply basic statistical reasoning to signal reconstruction by machine learning – learning to map corrupted observations to clean signals – with a simple and powerful conclusion: it is possible to learn to restore images by only looking at corrupted examples, at performance at and sometimes exceeding training using clean data, without explicit image priors or likelihood models of the corruption. In practice, we show that a single model learns photographic noise removal, denoising synthetic Monte Carlo images, and reconstruction of undersampled MRI scans – all corrupted by different processes – based on noisy data only.", "title": "" }, { "docid": "4a4a0dde01536789bd53ec180a136877", "text": "CONTEXT\nCurrent assessment formats for physicians and trainees reliably test core knowledge and basic skills. However, they may underemphasize some important domains of professional medical practice, including interpersonal skills, lifelong learning, professionalism, and integration of core knowledge into clinical practice.\n\n\nOBJECTIVES\nTo propose a definition of professional competence, to review current means for assessing it, and to suggest new approaches to assessment.\n\n\nDATA SOURCES\nWe searched the MEDLINE database from 1966 to 2001 and reference lists of relevant articles for English-language studies of reliability or validity of measures of competence of physicians, medical students, and residents.\n\n\nSTUDY SELECTION\nWe excluded articles of a purely descriptive nature, duplicate reports, reviews, and opinions and position statements, which yielded 195 relevant citations.\n\n\nDATA EXTRACTION\nData were abstracted by 1 of us (R.M.E.). 
Quality criteria for inclusion were broad, given the heterogeneity of interventions, complexity of outcome measures, and paucity of randomized or longitudinal study designs.\n\n\nDATA SYNTHESIS\nWe generated an inclusive definition of competence: the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and the community being served. Aside from protecting the public and limiting access to advanced training, assessments should foster habits of learning and self-reflection and drive institutional change. Subjective, multiple-choice, and standardized patient assessments, although reliable, underemphasize important domains of professional competence: integration of knowledge and skills, context of care, information management, teamwork, health systems, and patient-physician relationships. Few assessments observe trainees in real-life situations, incorporate the perspectives of peers and patients, or use measures that predict clinical outcomes.\n\n\nCONCLUSIONS\nIn addition to assessments of basic skills, new formats that assess clinical reasoning, expert judgment, management of ambiguity, professionalism, time management, learning strategies, and teamwork promise a multidimensional assessment while maintaining adequate reliability and validity. Institutional support, reflection, and mentoring must accompany the development of assessment programs.", "title": "" }, { "docid": "4f1111b33789e25ed896ad366f0d98de", "text": "As a ubiquitous method in natural language processing, word embeddings are extensively employed to map semantic properties of words into a dense vector representation. They capture semantic and syntactic relations among words, but the vectors corresponding to the words are only meaningful relative to each other. Neither the vector nor its dimensions have any absolute, interpretable meaning. We introduce an additive modification to the objective function of the embedding learning algorithm that encourages the embedding vectors of words that are semantically related to a predefined concept to take larger values along a specified dimension, while leaving the original semantic learning mechanism mostly unaffected. In other words, we align words that are already determined to be related, along predefined concepts. Therefore, we impart interpretability to the word embedding by assigning meaning to its vector dimensions. The predefined concepts are derived from an external lexical resource, which in this paper is chosen as Roget's Thesaurus. We observe that alignment along the chosen concepts is not limited to words in the Thesaurus and extends to other related words as well. We quantify the extent of interpretability and assignment of meaning from our experimental results. We also demonstrate the preservation of semantic coherence of the resulting vector space by using word-analogy and word-similarity tests. These tests show that the interpretability-imparted word embeddings that are obtained by the proposed framework do not sacrifice performances in common benchmark tests.", "title": "" }, { "docid": "f7de8256c3d556a298e12cb555dd50b8", "text": "Intrusion Detection Systems (IDSs) detect network attacks by self-learning, etc. (9). Using Genetic Algorithms for intrusion detection has. Cloud Computing Using Genetic Algorithm. 1. Ku. To overcome this problem we are implementing an intrusion detection system in which we use genetic. 
From Ignite at OSCON 2010, a 5 minute presentation by Bill Lavender: SNORT is popular. Based Intrusion Detection System (IDS), by applying Genetic Algorithm (GA) and Networking Using Genetic Algorithm (IDS) and Decision Tree is to identify. Intrusion Detection System Using Genetic Algorithm >>>CLICK HERE<<< Genetic algorithm (GA) has received significant attention for the design and length chromosomes (VLCs) in a GA-based network intrusion detection system. The proposed approach is tested using Defense Advanced Research Project. Abstract. Intrusion Detection System (IDS) is one of the key security components in today's networking environment. A great deal of attention has been recently. Computer security has become an important part of the day today's life. Not only single computer systems but an extensive network of the computer system. presents an overview of intrusion detection system and a hybrid technique for", "title": "" }, { "docid": "e13fc2c9f5aafc6c8eb1909592c07a70", "text": "We introduce DropAll, a generalization of DropOut [1] and DropConnect [2], for regularization of fully-connected layers within convolutional neural networks. Applying these methods amounts to subsampling a neural network by dropping units. Training with DropOut, a randomly selected subset of activations are dropped, when training with DropConnect we drop a randomly subsets of weights. With DropAll we can perform both methods. We show the validity of our proposal by improving the classification error of networks trained with DropOut and DropConnect, on a common image classification dataset. To improve the classification, we also used a new method for combining networks, which was proposed in [3].", "title": "" }, { "docid": "6427c3d11772ca84b6e1ad039d3abd33", "text": "This paper proposes an algorithm that enables robots to efficiently learn human-centric models of their environment from natural language descriptions. Typical semantic mapping approaches augment metric maps with higher-level properties of the robot’s surroundings (e.g., place type, object locations), but do not use this information to improve the metric map. The novelty of our algorithm lies in fusing high-level knowledge, conveyed by speech, with metric information from the robot’s low-level sensor streams. Our method jointly estimates a hybrid metric, topological, and semantic representation of the environment. This semantic graph provides a common framework in which we integrate concepts from natural language descriptions (e.g., labels and spatial relations) with metric observations from low-level sensors. Our algorithm efficiently maintains a factored distribution over semantic graphs based upon the stream of natural language and low-level sensor information. We evaluate the algorithm’s performance and demonstrate that the incorporation of information from natural language increases the metric, topological and semantic accuracy of the recovered environment model.", "title": "" }, { "docid": "fd317c492ed68bf14bdef38c27ed6696", "text": "The systematic study of subcellular location patterns is required to fully characterize the human proteome, as subcellular location provides critical context necessary for understanding a protein's function. The analysis of tens of thousands of expressed proteins for the many cell types and cellular conditions under which they may be found creates a need for automated subcellular pattern analysis. 
We therefore describe the application of automated methods, previously developed and validated by our laboratory on fluorescence micrographs of cultured cell lines, to analyze subcellular patterns in tissue images from the Human Protein Atlas. The Atlas currently contains images of over 3000 protein patterns in various human tissues obtained using immunohistochemistry. We chose a 16 protein subset from the Atlas that reflects the major classes of subcellular location. We then separated DNA and protein staining in the images, extracted various features from each image, and trained a support vector machine classifier to recognize the protein patterns. Our results show that our system can distinguish the patterns with 83% accuracy in 45 different tissues, and when only the most confident classifications are considered, this rises to 97%. These results are encouraging given that the tissues contain many different cell types organized in different manners, and that the Atlas images are of moderate resolution. The approach described is an important starting point for automatically assigning subcellular locations on a proteome-wide basis for collections of tissue images such as the Atlas.", "title": "" }, { "docid": "fcc434f43baae2cb1dbddd2f76fb9c7f", "text": "For medical diagnoses and treatments, it is often desirable to wirelessly trace an object that moves inside the human body. A magnetic tracing technique suggested for such applications uses a small magnet as the excitation source, which does not require the power supply and connection wire. It provides good tracing accuracy and can be easily implemented. As the magnet moves, it establishes around the human body a static magnetic field, whose intensity is related to the magnet's 3-D position and 2-D orientation parameters. With magnetic sensors, these magnetic intensities can be detected in some predetermined spatial points, and the position and orientation parameters can be computed. Typically, a nonlinear optimization algorithm is applied to such a problem, but a linear algorithm is preferable for faster, more reliable computation, and lower complexity. In this paper, we propose a linear algorithm to determine the 5-D magnet's position and orientation parameters. With the data from five (or more) three-axis magnetic sensors, this algorithm results in a solution by the matrix and algebra computations. We applied this linear algorithm on the real localization system, and the results of simulations and real experiments show that satisfactory tracing accuracy can be achieved by using a sensor array with enough three-axis magnetic sensors.", "title": "" }, { "docid": "2c14b3968aadadaa62f569acccb37d46", "text": "The main objective of this paper is to review the technologies and models used in the Automatic music transcription system. Music Information Retrieval is a key problem in the field of music signal analysis and this can be achieved with the use of music transcription systems. It has proven to be a very difficult issue because of the complex and deliberately overlapped spectral structure of musical harmonies. Generally, the music transcription systems branched as automatic and semi-automatic approaches based on the user interventions needed in the transcription system. Among these we give a close view of the automatic music transcription systems. Different models and techniques were proposed so far in the automatic music transcription systems. 
However the performance of the systems derived till now not completely matched to the performance of a human expert. In this paper we go through the techniques used previously for the music transcription and discuss the limitations with them. Also, we give some directions for the enhancement of the music transcription system and this can be useful for the researches to develop fully automatic music transcription system.", "title": "" }, { "docid": "cab8928a995b0cd2becb653155ecd8d9", "text": "Inclusive in the engineering factors of growth of the economy of any country is the construction industry, of which Malaysia as a nation is not left out. In spite of its significant contribution, the industry is known to be an accident-prone consequent upon the dangerous activities taking place at the construction stage. However, occupational accidents of diverse categories do take place on the construction sites resulting in fatal and non-fatal injuries. This study was embarked upon by giving consideration to thirty fatal cases of accident that occurred in Malaysia during a period of fourteen months (September, 2015–October, 2016), with the reports extracted from the database of Department of Safety and Health (DOSH) in Malaysia. The research was aimed at discovering the types (categories) of fatal accident on the construction sites, with attention also given to the causes of the accidents. In achieving this, thirty cases were descriptively analysed, and availing a revelation of falls from height as the leading category of accident, and electrocution as the second, while the causative factors were discovered to be lack of compliance of workers to safe work procedures and nonchalant attitude towards harnessing themselves with personal protective equipment (PPE). Consequent upon the discovery through analysis, and an effort to avert subsequent accidents in order to save lives of construction workers it is recommended that the management should enforce the compliance of workers to safe work procedures and the compulsory use of PPE during operations, while the DOSH should embark on warding round the construction sites for inspection and giving a sanction to contractors failing to enforce compliance with safety regulations. Keywords— Construction Industry, Accident, Construction Site, Injuries, Safety", "title": "" }, { "docid": "8f0276f7a902fa02b6236dfc76b882d2", "text": "Support Vector Machines (SVMs) have successfully shown efficiencies in many areas such as text categorization. Although recommendation systems share many similarities with text categorization, the performance of SVMs in recommendation systems is not acceptable due to the sparsity of the user-item matrix. In this paper, we propose a heuristic method to improve the predictive accuracy of SVMs by repeatedly correcting the missing values in the user-item matrix. The performance comparison to other algorithms has been conducted. The experimental studies show that the accurate rates of our heuristic method are the highest.", "title": "" }, { "docid": "e236a7cd184bbd09c9ffd90ad4cfd636", "text": "It has been a challenge for financial economists to explain some stylized facts observed in securities markets, among them, high levels of trading volume. The most prominent explanation of excess volume is overconfidence. High market returns make investors overconfident and as a consequence, these investors trade more subsequently. 
The aim of our paper is to study the impact of the phenomenon of overconfidence on the trading volume and its role in the formation of the excess volume on the Tunisian stock market. Based on the work of Statman, Thorley and Vorkink (2006) and by using VAR models and impulse response functions, we find little evidence of the overconfidence hypothesis when we use volume (shares traded) as proxy of trading volume.", "title": "" }, { "docid": "6c6e4e776a3860d1df1ccd7af7f587d5", "text": "We introduce new families of Integral Probability Metrics (IPM) for training Generative Adversarial Networks (GAN). Our IPMs are based on matching statistics of distributions embedded in a finite dimensional feature space. Mean and covariance feature matching IPMs allow for stable training of GANs, which we will call McGan. McGan minimizes a meaningful loss between distributions.", "title": "" }, { "docid": "8f0ac7417daf0c995263274738dcbb13", "text": "Technology platform strategies offer a novel way to orchestrate a rich portfolio of contributions made by the many independent actors who form an ecosystem of heterogeneous complementors around a stable platform core. This form of organising has been successfully used in the smartphone, gaming, commercial software, and other industrial sectors. While technology ecosystems require stability and homogeneity to leverage common investments in standard components, they also need variability and heterogeneity to meet evolving market demand. Although the required balance between stability and evolvability in the ecosystem has been addressed conceptually in the literature, we have less understanding of its underlying mechanics or appropriate governance. Through an extensive case study of a business software ecosystem consisting of a major multinational manufacturer of enterprise resource planning (ERP) software at the core, and a heterogeneous system of independent implementation partners and solution developers on the periphery, our research identifies three salient tensions that characterize the ecosystem: standard-variety; control-autonomy; and collective-individual. We then highlight the specific ecosystem governance mechanisms designed to simultaneously manage desirable and undesirable variance across each tension. Paradoxical tensions may manifest as dualisms, where actors are faced with contradictory and disabling „either/or‟ decisions. Alternatively, they may manifest as dualities, where tensions are framed as complementary and mutually-enabling. We identify conditions where latent, mutually enabling tensions become manifest as salient, disabling tensions. By identifying conditions in which complementary logics are overshadowed by contradictory logics, our study further contributes to the understanding of the dynamics of technology ecosystems, as well as the effective design of technology ecosystem governance that can explicitly embrace paradoxical tensions towards generative outcomes.", "title": "" }, { "docid": "5e8fbfec1ff5bf432dbaadaf13c9ca75", "text": "Multiple studies have illustrated the potential for dramatic societal, environmental and economic benefits from significant penetration of autonomous driving. 
However, all the current approaches to autonomous driving require the automotive manufacturers to shoulder the primary responsibility and liability associated with replacing human perception and decision making with automation, potentially slowing the penetration of autonomous vehicles, and consequently slowing the realization of the societal benefits of autonomous vehicles. We propose here a new approach to autonomous driving that will re-balance the responsibility and liabilities associated with autonomous driving between traditional automotive manufacturers, private infrastructure players, and third-party players. Our proposed distributed intelligence architecture leverages the significant advancements in connectivity and edge computing in the recent decades to partition the driving functions between the vehicle, edge computers on the road side, and specialized third-party computers that reside in the vehicle. Infrastructure becomes a critical enabler for autonomy. With this Infrastructure Enabled Autonomy (IEA) concept, the traditional automotive manufacturers will only need to shoulder responsibility and liability comparable to what they already do today, and the infrastructure and third-party players will share the added responsibility and liabilities associated with autonomous functionalities. We propose a Bayesian Network Model based framework for assessing the risk benefits of such a distributed intelligence architecture. An additional benefit of the proposed architecture is that it enables “autonomy as a service” while still allowing for private ownership of automobiles.", "title": "" }, { "docid": "af752d0de962449acd9a22608bd7baba", "text": "W4 is a real-time visual surveillance system for detecting and tracking multiple people and monitoring their activities in an outdoor environment. It operates on monocular gray-scale video imagery, or on video imagery from an infrared camera. W4 employs a combination of shape analysis and tracking to locate people and their parts (head, hands, feet, torso) and to create models of people's appearance so that they can be tracked through interactions such as occlusions. It can determine whether a foreground region contains multiple people and can segment the region into its constituent people and track them. W4 can also determine whether people are carrying objects, and can segment objects from their silhouettes, and construct appearance models for them so they can be identified in subsequent frames. W4 can recognize events between people and objects, such as depositing an object, exchanging bags, or removing an object. It runs at 25 Hz for 320×240 resolution images on a 400 MHz dual-Pentium II PC.", "title": "" } ]
scidocsrr
c13d3e96ac7a5df8c96bc0de66a33a1f
Fine-Grained Image Search
[ { "docid": "9c47b068f7645dc5464328e80be24019", "text": "In this paper we propose a highly effective and scalable framework for recognizing logos in images. At the core of our approach lays a method for encoding and indexing the relative spatial layout of local features detected in the logo images. Based on the analysis of the local features and the composition of basic spatial structures, such as edges and triangles, we can derive a quantized representation of the regions in the logos and minimize the false positive detections. Furthermore, we propose a cascaded index for scalable multi-class recognition of logos.\n For the evaluation of our system, we have constructed and released a logo recognition benchmark which consists of manually labeled logo images, complemented with non-logo images, all posted on Flickr. The dataset consists of a training, validation, and test set with 32 logo-classes. We thoroughly evaluate our system with this benchmark and show that our approach effectively recognizes different logo classes with high precision.", "title": "" } ]
[ { "docid": "24a23aff0026141d1b6970e8216347f8", "text": "Internet of Things (IoT) is a technology paradigm where millions of sensors monitor, and help inform or manage, physical, environmental and human systems in real-time. The inherent closed-loop responsiveness and decision making of IoT applications makes them ideal candidates for using low latency and scalable stream processing platforms. Distributed Stream Processing Systems (DSPS) are becoming essential components of any IoT stack, but the efficacy and performance of contemporary DSPS have not been rigorously studied for IoT data streams and applications. Here, we develop a benchmark suite and performance metrics to evaluate DSPS for streaming IoT applications. The benchmark includes 13 common IoT tasks classified across various functional categories and forming micro-benchmarks, and two IoT applications for statistical summarization and predictive analytics that leverage various dataflow compositional features of DSPS. These are coupled with stream workloads sourced from real IoT observations from smart cities. We validate the IoT benchmark for the popular Apache Storm DSPS, and present empirical results.", "title": "" }, { "docid": "f8742208fef05beb86d77f1d5b5d25ef", "text": "The latest book on Genetic Programming, Poli, Langdon and McPhee’s (with contributions from John R. Koza) A Field Guide to Genetic Programming represents an exciting landmark with the authors choosing to make their work freely available by publishing using a form of the Creative Commons License[1]. In so doing they have created a must-read resource which is, to use their words, ’aimed at both newcomers and old-timers’. The book is freely available from the authors companion website [2] and Lulu.com [3] in both pdf and html form. For those who desire the more traditional page turning exercise, inexpensive printed copies can be ordered from Lulu.com. The Field Guides companion website also provides a link to the TinyGP code printed over eight pages of Appendix B, and a Discussion Group centered around the book. The book is divided into four parts with fourteen chapters and two appendices. Part I introduces the basics of Genetic Programming, Part II overviews more advanced topics, Part III highlights some of the real world applications and discusses issues facing the GP researcher or practitioner, while Part IV contains two appendices, the first introducing some key resources and the second appendix describes the TinyGP code. The pdf and html forms of the book have an especially useful feature, providing links to the articles available on-line at the time of publication, and to bibtex entries of the GP Bibliography. Following an overview of the book in chapter 1, chapter 2 introduces the basic concepts of GP focusing on the tree representation, initialisation, selection, and the search operators. Chapter 3 is centered around the preparatory steps in applying GP to a problem, which is followed by an outline of a sample run of GP on a simple instance of symbolic regression in Chapter 4. Overall these chapters provide a compact and useful introduction to GP. The first of the Advanced GP chapters in Part II looks at alternative strategies for initialisation and the search operators for tree-based GP. An overview of Modular, Grammatical and Developmental GP is provided in Chapter 6. 
While the chapter title", "title": "" }, { "docid": "d258a14fc9e64ba612f2c8ea77f85d08", "text": "In this paper we report exploratory analyses of high-density oligonucleotide array data from the Affymetrix GeneChip system with the objective of improving upon currently used measures of gene expression. Our analyses make use of three data sets: a small experimental study consisting of five MGU74A mouse GeneChip arrays, part of the data from an extensive spike-in study conducted by Gene Logic and Wyeth's Genetics Institute involving 95 HG-U95A human GeneChip arrays; and part of a dilution study conducted by Gene Logic involving 75 HG-U95A GeneChip arrays. We display some familiar features of the perfect match and mismatch probe (PM and MM) values of these data, and examine the variance-mean relationship with probe-level data from probes believed to be defective, and so delivering noise only. We explain why we need to normalize the arrays to one another using probe level intensities. We then examine the behavior of the PM and MM using spike-in data and assess three commonly used summary measures: Affymetrix's (i) average difference (AvDiff) and (ii) MAS 5.0 signal, and (iii) the Li and Wong multiplicative model-based expression index (MBEI). The exploratory data analyses of the probe level data motivate a new summary measure that is a robust multi-array average (RMA) of background-adjusted, normalized, and log-transformed PM values. We evaluate the four expression summary measures using the dilution study data, assessing their behavior in terms of bias, variance and (for MBEI and RMA) model fit. Finally, we evaluate the algorithms in terms of their ability to detect known levels of differential expression using the spike-in data. We conclude that there is no obvious downside to using RMA and attaching a standard error (SE) to this quantity using a linear model which removes probe-specific affinities.", "title": "" }, { "docid": "2ee0eb9ab9d6c5b9bdad02b9f95c8691", "text": "Aim: To describe lower extremity injuries for badminton in New Zealand. Methods: Lower limb badminton injuries that resulted in claims accepted by the national insurance company Accident Compensation Corporation (ACC) in New Zealand between 2006 and 2011 were reviewed. Results: The estimated national injury incidence for badminton injuries in New Zealand from 2006 to 2011 was 0.66%. There were 1909 lower limb badminton injury claims which cost NZ$2,014,337 (NZ$ value over 2006 to 2011). The age-bands frequently injured were 10–19 (22%), 40–49 (22%), 30–39 (14%) and 50–59 (13%) years. Sixty five percent of lower limb injuries were knee ligament sprains/tears. Males sustained more cruciate ligament sprains than females (75 vs. 39). Movements involving turning, changing direction, shifting weight, pivoting or twisting were responsible for 34% of lower extremity injuries. Conclusion: The knee was most frequently OPEN ACCESS", "title": "" }, { "docid": "0cb3cdb1e44fd9171156ad46fdf2d2ed", "text": "In this paper, from the viewpoint of scene under standing, a three-layer Bayesian hierarchical framework (BHF) is proposed for robust vacant parking space detection. In practice, the challenges of vacant parking space inference come from dramatic luminance variations, shadow effect, perspective distortion, and the inter-occlusion among vehicles. By using a hidden labeling layer between an observation layer and a scene layer, the BHF provides a systematic generative structure to model these variations. 
In the proposed BHF, the problem of luminance variations is treated as a color classification problem and is tackled via a classification process from the observation layer to the labeling layer, while the occlusion pattern, perspective distortion, and shadow effect are well modeled by the relationships between the scene layer and the labeling layer. With the BHF scheme, the detection of vacant parking spaces and the labeling of scene status are regarded as a unified Bayesian optimization problem subject to a shadow generation model, an occlusion generation model, and an object classification model. The system accuracy was evaluated by using outdoor parking lot videos captured from morning to evening. Experimental results showed that the proposed framework can systematically determine the vacant space number, efficiently label ground and car regions, precisely locate the shadowed regions, and effectively tackle the problem of luminance variations.", "title": "" }, { "docid": "8fe5ad58edf4a1c468fd0b6a303729ee", "text": "The CDISC Operational Data Model (ODM) is a popular standard in clinical data management systems (CDMS). It describes both the structure of a clinical trial, including visits, forms, data elements, and code lists, as well as administrative information such as valid user accounts. It also contains all of the collected clinical facts about the study subjects. Its original purpose is the archiving of study databases and the exchange of clinical data between different CDMS. Owing to its rich structure, however, it is also suited to further use cases. In the context of student projects, various scenarios for functional extensions of the free CDMS OpenClinica were investigated and implemented, including the generation of an Annotated CRF, the import of study data via web service, the semi-automated creation of studies, and the export of study data into a relational data mart and into a research data warehouse based on i2b2.", "title": "" }, { "docid": "81b2a039a391b5f2c1a9a15c94f1f67e", "text": "Evolution of resistance in pests can reduce the effectiveness of insecticidal proteins from Bacillus thuringiensis (Bt) produced by transgenic crops. We analyzed results of 77 studies from five continents reporting field monitoring data for resistance to Bt crops, empirical evaluation of factors affecting resistance or both. Although most pest populations remained susceptible, reduced efficacy of Bt crops caused by field-evolved resistance has been reported now for some populations of 5 of 13 major pest species examined, compared with resistant populations of only one pest species in 2005. Field outcomes support theoretical predictions that factors delaying resistance include recessive inheritance of resistance, low initial frequency of resistance alleles, abundant refuges of non-Bt host plants and two-toxin Bt crops deployed separately from one-toxin Bt crops. 
The results imply that proactive evaluation of the inheritance and initial frequency of resistance are useful for predicting the risk of resistance and improving strategies to sustain the effectiveness of Bt crops.", "title": "" }, { "docid": "596bb1265a375c68f0498df90f57338e", "text": "The concept of unintended pregnancy has been essential to demographers in seeking to understand fertility, to public health practitioners in preventing unwanted childbear-ing and to both groups in promoting a woman's ability to determine whether and when to have children. Accurate measurement of pregnancy intentions is important in understanding fertility-related behaviors, forecasting fertility, estimating unmet need for contraception, understanding the impact of pregnancy intentions on maternal and child health, designing family planning programs and evaluating their effectiveness, and creating and evaluating community-based programs that prevent unintended pregnancy. 1 Pregnancy unintendedness is a complex concept, and has been the subject of recent conceptual and method-ological critiques. 2 Pregnancy intentions are increasingly viewed as encompassing affective, cognitive, cultural and contextual dimensions. Developing a more complete understanding of pregnancy intentions should advance efforts to increase contraceptive use, to prevent unintended pregnancies and to improve the health of women and their children. To provide a scientific foundation for public health efforts to prevent unintended pregnancy, we conducted a review of unintended pregnancy between the fall of 1999 and the spring of 2001 as part of strategic planning activities within the Division of Reproductive Health at the Centers for Disease Control and Prevention (CDC). We reviewed the published and unpublished literature, consulted with experts in reproductive health and held several joint meetings with the Demographic and Behavioral Research Branch of the National Institute of Child Health and Human Development , and the Office of Population Affairs of the Department of Health and Human Services. We used standard scientific search engines, such as Medline, to find relevant articles published since 1975, and identified older references from bibliographies contained in recent articles; academic experts and federal officials helped to identify unpublished reports. This comment summarizes our findings and incorporates insights gained from the joint meetings and the strategic planning process. CURRENT DEFINITIONS AND MEASURES Conventional measures of unintended pregnancy are designed to reflect a woman's intentions before she became pregnant. 3 Unintended pregnancies are pregnancies that are reported to have been either unwanted (i.e., they occurred when no children, or no more children, were desired) or mistimed (i.e., they occurred earlier than desired). In contrast, pregnancies are described as intended if they are reported to have happened at the \" right time \" 4 or later than desired (because of infertility or difficulties in conceiving). A concept related to unintended pregnancy is unplanned pregnancy—one that occurred when …", "title": "" }, { "docid": "0e53caa9c6464038015a6e83b8953d92", "text": "Many interactive rendering algorithms require operations on multiple fragments (i.e., ray intersections) at the same pixel location: however, current Graphics Processing Units (GPUs) capture only a single fragment per pixel. 
Example effects include transparency, translucency, constructive solid geometry, depth-of-field, direct volume rendering, and isosurface visualization. With current GPUs, programmers implement these effects using multiple passes over the scene geometry, often substantially limiting performance. This paper introduces a generalization of the Z-buffer, called the k-buffer, that makes it possible to efficiently implement such algorithms with only a single geometry pass, yet requires only a small, fixed amount of additional memory. The k-buffer uses framebuffer memory as a read-modify-write (RMW) pool of k entries whose use is programmatically defined by a small k-buffer program. We present two proposals for adding k-buffer support to future GPUs and demonstrate numerous multiple-fragment, single-pass graphics algorithms running on both a software-simulated k-buffer and a k-buffer implemented with current GPUs. The goal of this work is to demonstrate the large number of graphics algorithms that the k-buffer enables and that the efficiency is superior to current multipass approaches.", "title": "" }, { "docid": "b64a2e6bb533043a48b7840b72f71331", "text": "Autonomous long range navigation in partially known planetary-like terrain is an open challenge for robotics. Navigating several hundreds of meters without any human intervention requires the robot to be able to build various representations of its environment, to plan and execute trajectories according to the kind of terrain traversed, to localize itself as it moves, and to schedule, start, control and interrupt these various activities. In this paper, we briefly describe some functionalities that are currently running on board the Marsokhod model robot Lama at LAAS/CNRS. We then focus on the necessity to integrate various instances of the perception and decision functionalities, and on the difficulties raised by this integration.", "title": "" }, { "docid": "b990e62cb73c0f6c9dd9d945f72bb047", "text": "Admissible heuristics are an important class of heuristics worth discovering: they guarantee shortest path solutions in search algorithms such as A* and they guarantee less expensively produced, but boundedly longer solutions in search algorithms such as dynamic weighting. Unfortunately, effective (accurate and cheap to compute) admissible heuristics can take years for people to discover. Several researchers have suggested that certain transformations of a problem can be used to generate admissible heuristics. This article defines a more general class of transformations, called abstractions, that are guaranteed to generate only admissible heuristics. It also describes and evaluates an implemented program (Absolver II) that uses a means-ends analysis search control strategy to discover abstracted problems that result in effective admissible heuristics. Absolver II discovered several well-known and a few novel admissible heuristics, including the first known effective one for Rubik's Cube, thus concretely demonstrating that effective admissible heuristics can be tractably discovered by a machine.", "title": "" }, { "docid": "bee4b2dfab47848e8429d4b4617ec9e5", "text": "Benefiting from the rapid development of deep learning techniques, salient object detection has achieved remarkable progress recently. However, two major challenges still hinder its application in embedded devices: low-resolution output and heavy model weight. 
To this end, this paper presents an accurate yet compact deep network for efficient salient object detection. More specifically, given a coarse saliency prediction in the deepest layer, we first employ residual learning to learn side-output residual features for saliency refinement, which can be achieved with very limited convolutional parameters while keep accuracy. Secondly, we further propose reverse attention to guide such side-output residual learning in a top-down manner. By erasing the current predicted salient regions from side-output features, the network can eventually explore the missing object parts and details which results in high resolution and accuracy. Experiments on six benchmark datasets demonstrate that the proposed approach compares favorably against state-of-the-art methods, and with advantages in terms of simplicity, efficiency (45 FPS) and model size (81 MB).", "title": "" }, { "docid": "e840e1e77a8a5c2c187c79eda9143ade", "text": "The aim of this study is to find out the customer’s satisfaction with Yemeni Mobile service providers. Th is study examined the relationship between perceived quality, perceived value, customer expectation, and corporate image with customer satisfaction. The result of this study is based on data gathered online from 118 academic staff in public universit ies in Yemen. The study found that the relationship between perceived value, perceived quality and corporate image have a significant positive influence on customer satisfaction, whereas customer expectation has positive but without statistical significance.", "title": "" }, { "docid": "41fe7d2febb05a48daf69b4a41c77251", "text": "Multi-objective evolutionary algorithms for the construction of neural ensembles is a relatively new area of research. We recently proposed an ensemble learning algorithm called DIVACE (DIVerse and ACcurate Ensemble learning algorithm). It was shown that DIVACE tries to find an optimal trade-off between diversity and accuracy as it searches for an ensemble for some particular pattern recognition task by treating these two objectives explicitly separately. A detailed discussion of DIVACE together with further experimental studies form the essence of this paper. A new diversity measure which we call Pairwise Failure Crediting (PFC) is proposed. This measure forms one of the two evolutionary pressures being exerted explicitly in DIVACE. Experiments with this diversity measure as well as comparisons with previously studied approaches are hence considered. Detailed analysis of the results show that DIVACE, as a concept, has promise. Mathematical Subject Classification (2000): 68T05, 68Q32, 68Q10.", "title": "" }, { "docid": "3172304147c13068b6cec8fd252cda5e", "text": "Widespread growth of open wireless hotspots has made it easy to carry out man-in-the-middle attacks and impersonate web sites. Although HTTPS can be used to prevent such attacks, its universal adoption is hindered by its performance cost and its inability to leverage caching at intermediate servers (such as CDN servers and caching proxies) while maintaining end-to-end security. To complement HTTPS, we revive an old idea from SHTTP, a protocol that offers end-to-end web integrity without confidentiality. We name the protocol HTTPi and give it an efficient design that is easy to deploy for today’s web. 
In particular, we tackle several previously-unidentified challenges, such as supporting progressive page loading on the client’s browser, handling mixed content, and defining access control policies among HTTP, HTTPi, and HTTPS content from the same domain. Our prototyping and evaluation experience show that HTTPi incurs negligible performance overhead over HTTP, can leverage existing web infrastructure such as CDNs or caching proxies without any modifications to them, and can make many of the mixed-content problems in existing HTTPS web sites easily go away. Based on this experience, we advocate browser and web server vendors to adopt HTTPi.", "title": "" }, { "docid": "d7ebfe6e0f0fa07c5e22d24c69aca13e", "text": "Malware programs that incorporate trigger-based behavior initiate malicious activities based on conditions satisfied only by specific inputs. State-of-the-art malware analyzers discover code guarded by triggers via multiple path exploration, symbolic execution, or forced conditional execution, all without knowing the trigger inputs. We present a malware obfuscation technique that automatically conceals specific trigger-based behavior from these malware analyzers. Our technique automatically transforms a program by encrypting code that is conditionally dependent on an input value with a key derived from the input and then removing the key from the program. We have implemented a compiler-level tool that takes a malware source program and automatically generates an obfuscated binary. Experiments on various existing malware samples show that our tool can hide a significant portion of trigger based code. We provide insight into the strengths, weaknesses, and possible ways to strengthen current analysis approaches in order to defeat this malware obfuscation technique.", "title": "" }, { "docid": "1e306a31f5a9becadc267a895be40335", "text": "Knowledge has been lately recognized as one of the most important assets of organizations. Can information technology help the growth and the sustainment of organizational knowledge? The answer is yes, if care is taken to remember that IT here is just a part of the story (corporate culture and work practices being equally relevant) and that the information technologies best suited for this purpose should be expressly designed with knowledge management in view. This special issue of the Journal of Universal Computer Science contains a selection f papers from the First Conference on Practical Applications of Knowledge Management. Each paper describes a specific type of information technology suitable for the support of different aspects of knowledge management.", "title": "" }, { "docid": "3160dea1a6ebd67d57c0d304e17f4882", "text": "A Concept Inventory (CI) is a set of multiple choice questions used to reveal student's misconceptions related to some topic. Each available choice (besides the correct choice) is a distractor that is carefully developed to address a specific misunderstanding, a student wrong thought. In computer science introductory programming courses, the development of CIs is still beginning, with many topics requiring further study and analysis. We identify, through analysis of open-ended exams and instructor interviews, introductory programming course misconceptions related to function parameter use and scope, variables, recursion, iteration, structures, pointers and boolean expressions. We categorize these misconceptions and define high-quality distractors founded in words used by students in their responses to exam questions. 
We discuss the difficulty of assessing introductory programming misconceptions independent of the syntax of a language and we present a detailed discussion of two pilot CIs related to parameters: an open-ended question (to help identify new misunderstandings) and a multiple choice question with suggested distractors that we identified.", "title": "" }, { "docid": "8d890dba24fc248ee37653aad471713f", "text": "We consider the problem of constructing a spanning tree for a graph G = (V,E) with n vertices whose maximal degree is the smallest among all spanning trees of G. This problem is easily shown to be NP-hard. We describe an iterative polynomial time approximation algorithm for this problem. This algorithm computes a spanning tree whose maximal degree is at most O(Δ + log n), where Δ is the degree of some optimal tree. The result is generalized to the case where only some vertices need to be connected (Steiner case) and to the case of directed graphs. It is then shown that our algorithm can be refined to produce a spanning tree of degree at most Δ + 1. Unless P = NP, this is the best bound achievable in polynomial time.", "title": "" }, { "docid": "87eb69d6404bf42612806a5e6d67e7bb", "text": "In this paper we present an analysis of an AltaVista Search Engine query log consisting of approximately 1 billion entries for search requests over a period of six weeks. This represents almost 285 million user sessions, each an attempt to fill a single information need. We present an analysis of individual queries, query duplication, and query sessions. We also present results of a correlation analysis of the log entries, studying the interaction of terms within queries. Our data supports the conjecture that web users differ significantly from the user assumed in the standard information retrieval literature. Specifically, we show that web users type in short queries, mostly look at the first 10 results only, and seldom modify the query. This suggests that traditional information retrieval techniques may not work well for answering web search requests. The correlation analysis showed that the most highly correlated items are constituents of phrases. This result indicates it may be useful for search engines to consider search terms as parts of phrases even if the user did not explicitly specify them as such.", "title": "" } ]
scidocsrr
bb02d18b5e6d6ed00e90bfd82d79ce56
Deep Video Color Propagation
[ { "docid": "0d2e9d514586f083007f5e93d8bb9844", "text": "Recovering Matches: Analysis-by-Synthesis Results Starting point: Unsupervised learning of image matching Applications: Feature matching, structure from motion, dense optical flow, recognition, motion segmentation, image alignment Problem: Difficult to do directly (e.g. based on video) Insights: Image matching is a sub-problem of frame interpolation Frame interpolation can be learned from natural video sequences", "title": "" }, { "docid": "b401c0a7209d98aea517cf0e28101689", "text": "This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.", "title": "" } ]
[ { "docid": "4ea81c5e995d074998ba34a820c3de1c", "text": "We address the delicate problem of offsetting polygonal meshes. Offsetting is important for stereolithography, NC machining, rounding corners, collision avoidance, and Hausdorff error calculation. We introduce a new fast, and very simple method for offsetting (growing and shrinking) a solid model by arbitrary distance r. Our approach is based on a hybrid data structure combining point samples, voxels, and continuous surfaces. Each face, edge, and vertex of the original solid generate a set of offset points spaced along the (pencil of) normals associated with it. The offset points and normals are sufficiently dense to ensure that all voxels between the original and the offset surfaces are properly labeled as either too close to the original solid or possibly containing the offset surface. Then the offset boundary is generated as the isosurface using these voxels and the associated offset points. We provide a tight error bound on the resulting surface and report experimental results on a variety of CAD models.", "title": "" }, { "docid": "6b9d5cbdf91d792d60621da0bb45a303", "text": "AR systems pose potential security concerns that should be addressed before the systems become widespread.", "title": "" }, { "docid": "2a76205b80c90ff9a4ca3ccb0434bb03", "text": "Finding out which e-shops offer a specific product is a central challenge for building integrated product catalogs and comparison shopping portals. Determining whether two offers refer to the same product involves extracting a set of features (product attributes) from the web pages containing the offers and comparing these features using a matching function. The existing gold standards for product matching have two shortcomings: (i) they only contain offers from a small number of e-shops and thus do not properly cover the heterogeneity that is found on the Web. (ii) they only provide a small number of generic product attributes and therefore cannot be used to evaluate whether detailed product attributes have been correctly extracted from textual product descriptions. To overcome these shortcomings, we have created two public gold standards: The WDC Product Feature Extraction Gold Standard consists of over 500 product web pages originating from 32 different websites on which we have annotated all product attributes (338 distinct attributes) which appear in product titles, product descriptions, as well as tables and lists. The WDC Product Matching Gold Standard consists of over 75 000 correspondences between 150 products (mobile phones, TVs, and headphones) in a central catalog and offers for these products on the 32 web sites. To verify that the gold standards are challenging enough, we ran several baseline feature extraction and matching methods, resulting in F-score values in the range 0.39 to 0.67. In addition to the gold standards, we also provide a corpus consisting of 13 million product pages from the same websites which might be useful as background knowledge for training feature extraction and matching methods.", "title": "" }, { "docid": "6eed03674521ecf9a558ab0059fc167f", "text": "University professors traditionally struggle to incorporate software testing into their course curriculum. Worries include double-grading for correctness of both source and test code and finding time to teach testing as a topic. Test-driven development (TDD) has been suggested as a possible solution to improve student software testing skills and to realize the benefits of testing. 
According to most existing studies, TDD improves software quality and student productivity. This paper surveys the current state of TDD experiments conducted exclusively at universities. Similar surveys compare experiments in both the classroom and industry, but none have focused strictly on academia.", "title": "" }, { "docid": "1eb30a6cf31e5c256b9d1ca091e532cc", "text": "The aim of this study was to evaluate the range of techniques used by radiologists performing shoulder, hip, and knee arthrography using fluoroscopic guidance. Questionnaires on shoulder, hip, and knee arthrography were distributed to radiologists at a national radiology meeting. We enquired regarding years of experience, preferred approaches, needle gauge, gadolinium dilution, and volume injected. For each approach, the radiologist was asked their starting and end needle position based on a numbered and lettered grid superimposed on a radiograph. Sixty-eight questionnaires were returned. Sixty-eight radiologists performed shoulder and hip arthrography, and 65 performed knee arthrograms. Mean experience was 13.5 and 12.8 years, respectively. For magnetic resonance arthrography, a gadolinium dilution of 1/200 was used by 69–71%. For shoulder arthrography, an anterior approach was preferred by 65/68 (96%). The most common site of needle end position, for anterior and posterior approaches, was immediately lateral to the humeral cortex. A 22-gauge needle was used by 46/66 (70%). Mean injected volume was 12.7 ml (5–30). For hip arthrography, an anterior approach was preferred by 51/68 (75%). The most common site of needle end position, for anterior and lateral approaches, was along the lateral femoral head/neck junction. A 22-gauge needle was used by 53/68 (78%). Mean injected volume was 11.5 ml (5–20). For knee arthrography, a lateral approach was preferred by 41/64 (64%). The most common site of needle end position, for lateral and medial approaches, was mid-patellofemoral joint level. A 22-gauge needle was used by 36/65 (56%). Mean injected volume was 28.2 ml (5–60). Arthrographic approaches for the shoulder, hip, and knee vary among radiologists over a wide range of experience levels.", "title": "" }, { "docid": "a1118a6310736fc36dbc70bd25bd5f28", "text": "Many studies have documented large and persistent productivity differences across producers, even within narrowly defined industries. This paper both extends and departs from the past literature, which focused on technological explanations for these differences, by proposing that demand-side features also play a role in creating the observed productivity variation. The specific mechanism investigated here is the effect of spatial substitutability in the product market. When producers are densely clustered in a market, it is easier for consumers to switch between suppliers (making the market in a certain sense more competitive). Relatively inefficient producers find it more difficult to operate profitably as a result. Substitutability increases truncate the productivity distribution from below, resulting in higher minimum and average productivity levels as well as less productivity dispersion. The paper presents a model that makes this process explicit and empirically tests it using data from U.S. ready-mixed concrete plants, taking advantage of geographic variation in substitutability created by the industry’s high transport costs. The results support the model’s predictions and appear robust. 
Markets with high demand density for ready-mixed concrete—and thus high concrete plant densities—have higher lower-bound and average productivity levels and exhibit less productivity dispersion among their producers.", "title": "" }, { "docid": "8b156fb8ced52d0135e8a80361d93757", "text": "Memcached is one of the world's largest key-value deployments. This article analyzes the Memcached workload at Facebook, looking at server-side performance, request composition, caching efficacy, and key locality. The observations presented here lead to several design insights and new research directions for key-value caches, such as the relative inadequacy of the least recently used (LRU) replacement policy.", "title": "" }, { "docid": "f83f9bb497ffdb8e09211e6058bd4d87", "text": "For monitoring the conditions of railway infrastructures, axle box acceleration (ABA) measurements on board of trains is used. In this paper, the focus is on the early detection of short surface defects called squats. Different classes of squats are classified based on the response in the frequency domain of the ABA signal, using the wavelet power spectrum. For the investigated Dutch tracks, the power spectrum in the frequencies between 1060-1160Hz and around 300Hz indicate existence of a squat and also provide information of whether a squat is light, moderate or severe. The detection procedure is then validated relying on real-life measurements of ABA signals from measuring trains, and data of severity and location of squats obtained via a visual inspection of the tracks. Based on the real-life tests in the Netherlands, the hit rate of the system for light squats is higher than 78%, with a false alarm rate of 15%. In the case of severe squats the hit rate was 100% and zero false alarms.", "title": "" }, { "docid": "f8ac5a0dbd0bf8228b8304c1576189b9", "text": "The importance of cost planning for solid waste management (SWM) in industrialising regions (IR) is not well recognised. The approaches used to estimate costs of SWM can broadly be classified into three categories - the unit cost method, benchmarking techniques and developing cost models using sub-approaches such as cost and production function analysis. These methods have been developed into computer programmes with varying functionality and utility. IR mostly use the unit cost and benchmarking approach to estimate their SWM costs. The models for cost estimation, on the other hand, are used at times in industrialised countries, but not in IR. Taken together, these approaches could be viewed as precedents that can be modified appropriately to suit waste management systems in IR. The main challenges (or problems) one might face while attempting to do so are a lack of cost data, and a lack of quality for what data do exist. There are practical benefits to planners in IR where solid waste problems are critical and budgets are limited.", "title": "" }, { "docid": "2ca40fc7cf2cb7377b9b89be2606b096", "text": "By “elementary” plane geometry I mean the geometry of lines and circles—straightedge and compass constructions—in both Euclidean and non-Euclidean planes. An axiomatic description of it is in Sections 1.1, 1.2, and 1.6. 
This survey highlights some foundational history and some interesting recent discoveries that deserve to be better known, such as the hierarchies of axiom systems, Aristotle’s axiom as a “missing link,” Bolyai’s discovery—proved and generalized by William Jagy—of the relationship of “circle-squaring” in a hyperbolic plane to Fermat primes, the undecidability, incompleteness, and consistency of elementary Euclidean geometry, and much more. A main theme is what Hilbert called “the purity of methods of proof,” exemplified in his and his early twentieth century successors’ works on foundations of geometry.", "title": "" }, { "docid": "b3fce50260d7f77e8ca294db9c6666f6", "text": "Nanotechnology is enabling the development of devices in a scale ranging from one to a few hundred nanometers. Coordination and information sharing among these nano-devices will lead towards the development of future nanonetworks, boosting the range of applications of nanotechnology in the biomédical, environmental and military fields. Despite the major progress in nano-device design and fabrication, it is still not clear how these atomically precise machines will communicate. Recently, the advancements in graphene-based electronics have opened the door to electromagnetic communications in the nano-scale. In this paper, a new quantum mechanical framework is used to analyze the properties of Carbon Nanotubes (CNTs) as nano-dipole antennas. For this, first the transmission line properties of CNTs are obtained using the tight-binding model as functions of the CNT length, diameter, and edge geometry. Then, relevant antenna parameters such as the fundamental resonant frequency and the input impedance are calculated and compared to those of a nano-patch antenna based on a Graphene Nanoribbon (GNR) with similar dimensions. The results show that for a maximum antenna size in the order of several hundred nanometers (the expected maximum size for a nano-device), both a nano-dipole and a nano-patch antenna will be able to radiate electromagnetic waves in the terahertz band (0.1–10.0 THz).", "title": "" }, { "docid": "8cbc15b5e5c957f464573e52f00f2924", "text": "Tennis is one of the most popular sports in the world. Many researchers have studied in tennis model to find out whose player will be the winner of the match by using the statistical data. This paper proposes a powerful technique to predict the winner of the tennis match. The proposed method provides more accurate prediction results by using the statistical data and environmental data based on Multi-Layer Perceptron (MLP) with back-propagation learning algorithm.", "title": "" }, { "docid": "7620ed24b84b741be8800b1b52f54807", "text": "JASVINDER A. SINGH, KENNETH G. SAAG, S. LOUIS BRIDGES JR., ELIE A. AKL, RAVEENDHARA R. BANNURU, MATTHEW C. SULLIVAN, ELIZAVETA VAYSBROT, CHRISTINE MCNAUGHTON, MIKALA OSANI, ROBERT H. SHMERLING, JEFFREY R. CURTIS, DANIEL E. FURST, DEBORAH PARKS, ARTHUR KAVANAUGH, JAMES O’DELL, CHARLES KING, AMYE LEONG, ERIC L. MATTESON, JOHN T. SCHOUSBOE, BARBARA DREVLOW, SETH GINSBERG, JAMES GROBER, E. WILLIAM ST.CLAIR, ELIZABETH TINDALL, AMY S. 
MILLER, AND TIMOTHY MCALINDON", "title": "" }, { "docid": "4ddb0d4bf09dc9244ee51d4b843db5f2", "text": "BACKGROUND\nMobile applications (apps) have potential for helping people increase their physical activity, but little is known about the behavior change techniques marketed in these apps.\n\n\nPURPOSE\nThe aim of this study was to characterize the behavior change techniques represented in online descriptions of top-ranked apps for physical activity.\n\n\nMETHODS\nTop-ranked apps (n=167) were identified on August 28, 2013, and coded using the Coventry, Aberdeen and London-Revised (CALO-RE) taxonomy of behavior change techniques during the following month. Analyses were conducted during 2013.\n\n\nRESULTS\nMost descriptions of apps incorporated fewer than four behavior change techniques. The most common techniques involved providing instruction on how to perform exercises, modeling how to perform exercises, providing feedback on performance, goal-setting for physical activity, and planning social support/change. A latent class analysis revealed the existence of two types of apps, educational and motivational, based on their configurations of behavior change techniques.\n\n\nCONCLUSIONS\nBehavior change techniques are not widely marketed in contemporary physical activity apps. Based on the available descriptions and functions of the observed techniques in contemporary health behavior theories, people may need multiple apps to initiate and maintain behavior change. This audit provides a starting point for scientists, developers, clinicians, and consumers to evaluate and enhance apps in this market.", "title": "" }, { "docid": "8ca3fe42e8a59262f319b995309cbd60", "text": "Deep neural networks (DNNs) are used by different applications that are executed on a range of computer architectures, from IoT devices to supercomputers. The footprint of these networks is huge as well as their computational and communication needs. In order to ease the pressure on resources, research indicates that in many cases a low precision representation (1-2 bit per parameter) of weights and other parameters can achieve similar accuracy while requiring less resources. Using quantized values enables the use of FPGAs to run NNs, since FPGAs are well fitted to these primitives; e.g., FPGAs provide efficient support for bitwise operations and can work with arbitrary-precision representation of numbers. This paper presents a new streaming architecture for running QNNs on FPGAs. The proposed architecture scales out better than alternatives, allowing us to take advantage of systems with multiple FPGAs. We also included support for skip connections, that are used in state-of-the art NNs, and shown that our architecture allows to add those connections almost for free. All this allowed us to implement an 18-layer ResNet for 224×224 images classification, achieving 57.5% top-1 accuracy. In addition, we implemented a full-sized quantized AlexNet. In contrast to previous works, we use 2-bit activations instead of 1-bit ones, which improves AlexNet's top-1 accuracy from 41.8% to 51.03% for the ImageNet classification. Both AlexNet and ResNet can handle 1000-class real-time classification on an FPGA. Our implementation of ResNet-18 consumes 5× less power and is 4× slower for ImageNet, when compared to the same NN on the latest Nvidia GPUs. 
Smaller NNs, that fit a single FPGA, are running faster then on GPUs on small (32×32) inputs, while consuming up to 20× less energy and power.", "title": "" }, { "docid": "43975c43de57d889b038cdee8b35e786", "text": "We present an algorithm for computing rigorous solutions to a large class of ordinary differential equations. The main algorithm is based on a partitioning process and the use of interval arithmetic with directed rounding. As an application, we prove that the Lorenz equations support a strange attractor, as conjectured by Edward Lorenz in 1963. This conjecture was recently listed by Steven Smale as one of several challenging problems for the twenty-first century. We also prove that the attractor is robust, i.e., it persists under small perturbations of the coefficients in the underlying differential equations. Furthermore, the flow of the equations admits a unique SRB measure, whose support coincides with the attractor. The proof is based on a combination of normal form theory and rigorous computations.", "title": "" }, { "docid": "b44600830a6aacd0a1b7ec199cba5859", "text": "Existing e-service quality scales mainly focus on goal-oriented e-shopping behavior excluding hedonic quality aspects. As a consequence, these scales do not fully cover all aspects of consumer's quality evaluation. In order to integrate both utilitarian and hedonic e-service quality elements, we apply a transaction process model to electronic service encounters. Based on this general framework capturing all stages of the electronic service delivery process, we develop a transaction process-based scale for measuring service quality (eTransQual). After conducting exploratory and confirmatory factor analysis, we identify five discriminant quality dimensions: functionality/design, enjoyment, process, reliability and responsiveness. All extracted dimensions of eTransQual show a significant positive impact on important outcome variables like perceived value and customer satisfaction. Moreover, enjoyment is a dominant factor in influencing both relationship duration and repurchase intention as major drivers of customer lifetime value. As a result, we present conceptual and empirical evidence for the need to integrate both utilitarian and hedonic e-service quality elements into one measurement scale. © 2006 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "a7765d68c277dbc712376a46a377d5d4", "text": "The trend of currency rates can be predicted with supporting from supervised machine learning in the transaction systems such as support vector machine. Not only representing models in use of machine learning techniques in learning, the support vector machine (SVM) model also is implemented with actual FoRex transactions. This might help automatically to make the transaction decisions of Bid/Ask in Foreign Exchange Market by using Expert Advisor (Robotics). The experimental results show the advantages of use SVM compared to the transactions without use SVM ones.", "title": "" }, { "docid": "1ab0308539bc6508b924316b39a963ca", "text": "Daily wafer fabrication in semiconductor foundry depends on considerable metrology operations for tool-quality and process-quality assurance. The metrology operations required a lot of metrology tools, which increase FAB's investment. Also, these metrology operations will increase cycle time of wafer process. Metrology operations do not bring any value added to wafer but only quality assurance. 
This article provides a new method denoted virtual metrology (VM) to utilize sensor data collected from 300 mm FAB's tools to forecast quality data of wafers and tools. This proposed method designs key steps to establish a VM control model based on neural networks and to develop and deploy applications following SEMI EDA (equipment data acquisition) standards.", "title": "" }, { "docid": "b03b34dc9708693f06ee4786c48ce9b5", "text": "Mobile Cloud Computing (MCC) enables smartphones to offload compute-intensive codes and data to clouds or cloudlets for energy conservation. Thus, MCC liberates smartphones from battery shortage and embraces more versatile mobile applications. Most pioneering MCC research work requires a consistent network performance for offloading. However, such consistency is challenged by frequent mobile user movements and unstable network quality, thereby resulting in a suboptimal offloading decision. To embrace network inconsistency, we propose ENDA, a three-tier architecture that leverages user track prediction, realtime network performance and server loads to optimize offloading decisions. On cloud tier, we first design a greedy searching algorithm to predict user track using historical user traces stored in database servers. We then design a cloud-enabled Wi-Fi access point (AP) selection scheme to find the most energy efficient AP for smartphone offloading. We evaluate the performance of ENDA through simulations under a real-world scenario. The results demonstrate that ENDA can generate offloading decisions with optimized energy efficiency, desirable response time, and potential adaptability to a variety of scenarios. ENDA outperforms existing offloading techniques that do not consider user mobility and server workload balance management.", "title": "" } ]
scidocsrr
3743db53598a7771508150db2f4a34a1
Towards a Robust Solution to People Counting
[ { "docid": "0a75a45141a7f870bba32bed890da782", "text": "Surveillance systems for public security are going beyond the conventional CCTV. A new generation of systems relies on image processing and computer vision techniques, deliver more ready-to-use information, and provide assistance for early detection of unusual events. Crowd density is a useful source of information because unusual crowdedness is often related to unusual events. Previous works on crowd density estimation either ignore perspective distortion or perform the correction based on incorrect formulation. Also there is no investigation on whether the geometric correction derived for the ground plane can be applied to human objects standing upright to the plane. This paper derives the relation for geometric correction for the ground plane and proves formally that it can be directly applied to all the foreground pixels. We also propose a very efficient implementation because it is important for a real-time application. Finally a time-adaptive criterion for unusual crowdedness detection is described.", "title": "" }, { "docid": "af752d0de962449acd9a22608bd7baba", "text": "Ї R is a real time visual surveillance system for detecting and tracking multiple people and monitoring their activities in an outdoor environment. It operates on monocular gray-scale video imagery, or on video imagery from an infrared camera. ‡ R employs a combination of shape analysis and tracking to locate people and their parts (head, hands, feet, torso) and to create models of people's appearance so that they can be tracked through interactions such as occlusions. It can determine whether a foreground region contains multiple people and can segment the region into its constituent people and track them. ‡ R can also determine whether people are carrying objects, and can segment objects from their silhouettes, and construct appearance models for them so they can be identified in subsequent frames. ‡ R can recognize events between people and objects, such as depositing an object, exchanging bags, or removing an object. It runs at 25 Hz for 320Â240 resolution images on a 400 Mhz dual-Pentium II PC.", "title": "" } ]
[ { "docid": "34bd9a54a1aeaf82f7c4b27047cb2f49", "text": "Choosing a good location when opening a new store is crucial for the future success of a business. Traditional methods include offline manual survey, which is very time consuming, and analytic models based on census data, which are unable to adapt to the dynamic market. The rapid increase of the availability of big data from various types of mobile devices, such as online query data and offline positioning data, provides us with the possibility to develop automatic and accurate data-driven prediction models for business store placement. In this paper, we propose a Demand Distribution Driven Store Placement (D3SP) framework for business store placement by mining search query data from Baidu Maps. D3SP first detects the spatial-temporal distributions of customer demands on different business services via query data from Baidu Maps, the largest online map search engine in China, and detects the gaps between demand and supply. Then we determine candidate locations via clustering such gaps. In the final stage, we solve the location optimization problem by predicting and ranking the number of customers. We not only deploy supervised regression models to predict the number of customers, but also learn to rank models to directly rank the locations. We evaluate our framework on various types of businesses in real-world cases, and the experiments results demonstrate the effectiveness of our methods. D3SP as the core function for store placement has already been implemented as a core component of our business analytics platform and could be potentially used by chain store merchants on Baidu Nuomi.", "title": "" }, { "docid": "ea50fcb63d7eeb37a3acd47ce4a7a572", "text": "Automated polyp detection in colonoscopy videos has been demonstrated to be a promising way for colorectal cancer prevention and diagnosis. Traditional manual screening is time consuming, operator dependent, and error prone; hence, automated detection approach is highly demanded in clinical practice. However, automated polyp detection is very challenging due to high intraclass variations in polyp size, color, shape, and texture, and low interclass variations between polyps and hard mimics. In this paper, we propose a novel offline and online three-dimensional (3-D) deep learning integration framework by leveraging the 3-D fully convolutional network (3D-FCN) to tackle this challenging problem. Compared with the previous methods employing hand-crafted features or 2-D convolutional neural network, the 3D-FCN is capable of learning more representative spatio-temporal features from colonoscopy videos, and hence has more powerful discrimination capability. More importantly, we propose a novel online learning scheme to deal with the problem of limited training data by harnessing the specific information of an input video in the learning process. We integrate offline and online learning to effectively reduce the number of false positives generated by the offline network and further improve the detection performance. Extensive experiments on the dataset of MICCAI 2015 Challenge on Polyp Detection demonstrated the better performance of our method when compared with other competitors.", "title": "" }, { "docid": "4ad535f3b4f1afba4497a4026236424e", "text": "We study the problem of noninvasively estimating Blood Pressure (BP) without using a cuff, which is attractive for continuous monitoring of BP over Body Area Networks. 
It has been shown that the Pulse Arrival Time (PAT) measured as the delay between the ECG peak and a point in the finger PPG waveform can be used to estimate systolic and diastolic BP. Our aim is to evaluate the performance of such a method using the available MIMIC database, while at the same time improve the performance of existing techniques. We propose an algorithm to estimate BP from a combination of PAT and heart rate, showing improvement over PAT alone. We also show how the method achieves recalibration using an RLS adaptive algorithm. Finally, we address the use case of ECG and PPG sensors wirelessly communicating to an aggregator and study the effect of skew and jitter on BP estimation.", "title": "" }, { "docid": "766b726231f9d9540deb40183b49a655", "text": "This paper presents a survey of georeferenced point clouds. Concentration is, on the one hand, put on features, which originate in the measurement process themselves, and features derived by processing the point cloud. On the other hand, approaches for the processing of georeferenced point clouds are reviewed. This includes the data structures, but also spatial processing concepts. We suggest a categorization of features into levels that reflect the amount of processing. Point clouds are found across many disciplines, which is reflected in the versatility of the literature suggesting specific features.", "title": "" }, { "docid": "4be9ae4bc6fb01e78d550bedf199d0b0", "text": "Protein timing is a popular dietary strategy designed to optimize the adaptive response to exercise. The strategy involves consuming protein in and around a training session in an effort to facilitate muscular repair and remodeling, and thereby enhance post-exercise strength- and hypertrophy-related adaptations. Despite the apparent biological plausibility of the strategy, however, the effectiveness of protein timing in chronic training studies has been decidedly mixed. The purpose of this paper therefore was to conduct a multi-level meta-regression of randomized controlled trials to determine whether protein timing is a viable strategy for enhancing post-exercise muscular adaptations. The strength analysis comprised 478 subjects and 96 ESs, nested within 41 treatment or control groups and 20 studies. The hypertrophy analysis comprised 525 subjects and 132 ESs, nested with 47 treatment or control groups and 23 studies. A simple pooled analysis of protein timing without controlling for covariates showed a small to moderate effect on muscle hypertrophy with no significant effect found on muscle strength. In the full meta-regression model controlling for all covariates, however, no significant differences were found between treatment and control for strength or hypertrophy. The reduced model was not significantly different from the full model for either strength or hypertrophy. With respect to hypertrophy, total protein intake was the strongest predictor of ES magnitude. These results refute the commonly held belief that the timing of protein intake in and around a training session is critical to muscular adaptations and indicate that consuming adequate protein in combination with resistance exercise is the key factor for maximizing muscle protein accretion.", "title": "" }, { "docid": "7d82c8d8fae92b9ac2a3d63f74e0b973", "text": "The security of sensitive data and the safety of control signal are two core issues in industrial control system (ICS). However, the prevalence of USB storage devices brings a great challenge on protecting ICS in those respects. 
Unfortunately, there is currently no solution especially for ICS to provide a complete defense against data transmission between untrusted USB storage devices and critical equipment without forbidding normal USB device function. This paper proposes a trust management scheme of USB storage devices for ICS (TMSUI). By fully considering the background of application scenarios, TMSUI is designed based on security chip to achieve authorizing a certain USB storage device to only access some exact protected terminals in ICS for a particular period of time. The issues about digital forensics and revocation of authorization are discussed. The prototype system is finally implemented and the evaluation on it indicates that TMSUI effectively meets the security goals with high compatibility and good performance.", "title": "" }, { "docid": "c063474634eb427cf0215b4500182f8c", "text": "Factorization Machines offer good performance and useful embeddings of data. However, they are costly to scale to large amounts of data and large numbers of features. In this paper we describe DiFacto, which uses a refined Factorization Machine model with sparse memory adaptive constraints and frequency adaptive regularization. We show how to distribute DiFacto over multiple machines using the Parameter Server framework by computing distributed subgradients on minibatches asynchronously. We analyze its convergence and demonstrate its efficiency in computational advertising datasets with billions of examples and features.", "title": "" }, { "docid": "cfb06477edaa39f53b1b892cdfc1621a", "text": "This paper presents ray casting as the methodological basis for a CAD/CAM solid modeling system. Solid objects are modeled by combining primitive solids, such as blocks and cylinders, using the set operators union, intersection, and difference. To visualize and analyze the composite solids modeled, virtual light rays are cast as probes. By virtue of its simplicity, ray casting is reliable and extensible. The most difficult mathematical problem is finding line-surface intersection points. So surfaces such as planes, quadrics, tori, and probably even parametric surface patches may bound the primitive solids. The adequacy and efficiency of ray casting are issues addressed here. A fast picture generation capability for interactive modeling is the biggest challenge. New methods are presented, accompanied by sample pictures and CPU times, to meet the challenge.", "title": "" }, { "docid": "3d332b3ae4487a7272ae1e2204965f98", "text": "Robots are increasingly present in modern industry and also in everyday life. Their applications range from health-related situations, for assistance to elderly people or in surgical operations, to automatic and driver-less vehicles (on wheels or flying) or for driving assistance. Recently, an interest towards robotics applied in agriculture and gardening has arisen, with applications to automatic seeding and cropping or to plant disease control, etc. Autonomous lawn mowers are successful market applications of gardening robotics. In this paper, we present a novel robot that is developed within the TrimBot2020 project, funded by the EU H2020 program. 
The project aims at prototyping the first outdoor robot for automatic bush trimming and rose pruning.", "title": "" }, { "docid": "eafe4aa1aada03bad956d8bed16546dd", "text": "The increasing prevalence of male-to-female (MtF) transsexualism in Western countries is largely due to the growing number of MtF transsexuals who have a history of sexual arousal with cross-dressing or cross-gender fantasy. Ray Blanchard proposed that these transsexuals have a paraphilia he called autogynephilia, which is the propensity to be sexually aroused by the thought or image of oneself as female. Autogynephilia defines a transsexual typology and provides a theory of transsexual motivation, in that Blanchard proposed that MtF transsexuals are either sexually attracted exclusively to men (homosexual) or are sexually attracted primarily to the thought or image of themselves as female (autogynephilic), and that autogynephilic transsexuals seek sex reassignment to actualize their autogynephilic desires. Despite growing professional acceptance, Blanchard's formulation is rejected by some MtF transsexuals as inconsistent with their experience. This rejection, I argue, results largely from the misconception that autogynephilia is a purely erotic phenomenon. Autogynephilia can more accurately be conceptualized as a type of sexual orientation and as a variety of romantic love, involving both erotic and affectional or attachment-based elements. This broader conception of autogynephilia addresses many of the objections to Blanchard's theory and is consistent with a variety of clinical observations concerning autogynephilic MtF transsexualism.", "title": "" }, { "docid": "5906d20bea1c95399395d045f84f11c9", "text": "Constructive interference (CI) enables concurrent transmissions to interfere non-destructively, so as to enhance network concurrency. In this paper, we propose deliberate synchronized constructive interference (Disco), which ensures concurrent transmissions of an identical packet to synchronize more precisely than traditional CI. Disco envisions concurrent transmissions to positively interfere at the receiver, and potentially allows orders of magnitude reductions in energy consumption and improvements in link quality. We also theoretically introduce a sufficient condition to construct Disco with IEEE 802.15.4 radio for the first time. Moreover, we propose Triggercast, a distributed middleware service, and show it is feasible to generate Disco on real sensor network platforms like TMote Sky. To synchronize transmissions of multiple senders at the chip level, Triggercast effectively compensates propagation and radio processing delays, and has 95th percentile synchronization errors of at most 250 ns. Triggercast also intelligently decides which co-senders participate in simultaneous transmissions, and aligns their transmission time to maximize the overall link Packet Reception Ratio (PRR), under the condition of maximal system robustness. Extensive experiments in real testbeds demonstrate that Triggercast significantly improves PRR from 5 to 70 percent with seven concurrent senders. We also demonstrate that Triggercast provides 1.3× PRR performance gains on average, when it is integrated with existing data forwarding protocols.", "title": "" }, { "docid": "7a6d32d50e3b1be70889fc85ffdcac45", "text": "Any image can be represented as a function defined on a weighted graph, in which the underlying structure of the image is encoded in kernel similarity and associated Laplacian matrices. 
In this paper, we develop an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. We propose a cost function, which consists of a new data fidelity term and regularization term derived from the specific definition of the normalized graph Laplacian. The normalizing coefficients used in the definition of the Laplacian and associated regularization term are obtained using fast symmetry preserving matrix balancing. This results in some desired spectral properties for the normalized Laplacian such as being symmetric, positive semidefinite, and returning a zero vector when applied to a constant image. Our algorithm comprises outer and inner iterations, where in each outer iteration, the similarity weights are recomputed using the previous estimate and the updated objective function is minimized using inner conjugate gradient iterations. This procedure improves the performance of the algorithm for image deblurring, where we do not have access to a good initial estimate of the underlying image. In addition, the specific form of the cost function allows us to render the spectral analysis for the solutions of the corresponding linear equations. In addition, the proposed approach is general in the sense that we have shown its effectiveness for different restoration problems, including deblurring, denoising, and sharpening. Experimental results verify the effectiveness of the proposed algorithm on both synthetic and real examples.", "title": "" }, { "docid": "2c5f0763b6c4888babc04af50bb89aaf", "text": "A 1.8-V 14-b 12-MS/s pseudo-differential pipeline analog-to-digital converter (ADC) using a passive capacitor error-averaging technique and a nested CMOS gain-boosting technique is described. The converter is optimized for low-voltage low-power applications by applying an optimum stage-scaling algorithm at the architectural level and an opamp and comparator sharing technique at the circuit level. Prototyped in a 0.18-μm 6M-1P CMOS process, this converter achieves a peak signal-to-noise plus distortion ratio (SNDR) of 75.5 dB and a 103-dB spurious-free dynamic range (SFDR) without trimming, calibration, or dithering. With a 1-MHz analog input, the maximum differential nonlinearity is 0.47 LSB and the maximum integral nonlinearity is 0.54 LSB. The large analog bandwidth of the front-end sample-and-hold circuit is achieved using bootstrapped thin-oxide transistors as switches, resulting in an SFDR of 97 dB when a 40-MHz full-scale input is digitized. The ADC occupies an active area of 10 mm² and dissipates 98 mW.", "title": "" }, { "docid": "79465d290ab299b9d75e9fa617d30513", "text": "In this paper we describe computational experience in solving unconstrained quadratic zero-one problems using a branch and bound algorithm. The algorithm incorporates dynamic preprocessing techniques for forcing variables and heuristics to obtain good starting points. Computational results and comparisons with previous studies on several hundred test problems with dimensions up to 200 demonstrate the efficiency of our algorithm. In dieser Arbeit beschreiben wir rechnerische Erfahrungen bei der Lösung von unbeschränkten quadratischen Null-Eins-Problemen mit einem “Branch and Bound”-Algorithmus. Der Algorithmus erlaubt dynamische Vorbereitungs-Techniken zur Erzwingung ausgewählter Variablen und Heuristiken zur Wahl von guten Startpunkten. 
Resultate von Berechnungen und Vergleiche mit früheren Arbeiten mit mehreren hundert Testproblemen mit Dimensionen bis 200 zeigen die Effizienz unseres Algorithmus.", "title": "" }, { "docid": "047c36e2650b8abde75cccaeb0368c88", "text": "Pancreas segmentation in computed tomography imaging has been historically difficult for automated methods because of the large shape and size variations between patients. In this work, we describe a custom-build 3D fully convolutional network (FCN) that can process a 3D image including the whole pancreas and produce an automatic segmentation. We investigate two variations of the 3D FCN architecture; one with concatenation and one with summation skip connections to the decoder part of the network. We evaluate our methods on a dataset from a clinical trial with gastric cancer patients, including 147 contrast enhanced abdominal CT scans acquired in the portal venous phase. Using the summation architecture, we achieve an average Dice score of 89.7 ± 3.8 (range [79.8, 94.8])% in testing, achieving the new state-of-the-art performance in pancreas segmentation on this dataset.", "title": "" }, { "docid": "3621dd85dc4ba3007cfa8ec1017b4e96", "text": "The current lack of knowledge about the effect of maternally administered drugs on the developing fetus is a major public health concern worldwide. The first critical step toward predicting the safety of medications in pregnancy is to screen drug compounds for their ability to cross the placenta. However, this type of preclinical study has been hampered by the limited capacity of existing in vitro and ex vivo models to mimic physiological drug transport across the maternal-fetal interface in the human placenta. Here the proof-of-principle for utilizing a microengineered model of the human placental barrier to simulate and investigate drug transfer from the maternal to the fetal circulation is demonstrated. Using the gestational diabetes drug glyburide as a model compound, it is shown that the microphysiological system is capable of reconstituting efflux transporter-mediated active transport function of the human placental barrier to limit fetal exposure to maternally administered drugs. The data provide evidence that the placenta-on-a-chip may serve as a new screening platform to enable more accurate prediction of drug transport in the human placenta.", "title": "" }, { "docid": "29cbdeb95a221820a6425e1249763078", "text": "The concept of “Industry 4.0” that covers the topics of Internet of Things, cyber-physical system, and smart manufacturing, is a result of increasing demand of mass customized manufacturing. In this paper, a smart manufacturing framework of Industry 4.0 is presented. In the proposed framework, the shop-floor entities (machines, conveyers, etc.), the smart products and the cloud can communicate and negotiate interactively through networks. The shop-floor entities can be considered as agents based on the theory of multi-agent system. These agents implement dynamic reconfiguration in a collaborative manner to achieve agility and flexibility. However, without global coordination, problems such as load-unbalance and inefficiency may occur due to different abilities and performances of agents. Therefore, the intelligent evaluation and control algorithms are proposed to reduce the load-unbalance with the assistance of big data feedback. 
The experimental results indicate that the presented algorithms can easily be deployed in smart manufacturing system and can improve both load-balance and efficiency.", "title": "" }, { "docid": "5744f6f5d6b2f0f5f150ec939d1f8c74", "text": "We introduce a novel active learning framework for video annotation. By judiciously choosing which frames a user should annotate, we can obtain highly accurate tracks with minimal user effort. We cast this problem as one of active learning, and show that we can obtain excellent performance by querying frames that, if annotated, would produce a large expected change in the estimated object track. We implement a constrained tracker and compute the expected change for putative annotations with efficient dynamic programming algorithms. We demonstrate our framework on four datasets, including two benchmark datasets constructed with key frame annotations obtained by Amazon Mechanical Turk. Our results indicate that we could obtain equivalent labels for a small fraction of the original cost.", "title": "" }, { "docid": "db8d146ad8e62fd7a558703ef20a6330", "text": "In this paper, we focus on the problem of completion of multidimensional arrays (also referred to as tensors), in particular three-dimensional (3-D) arrays, from limited sampling. Our approach is based on a recently proposed tensor algebraic framework where 3-D tensors are treated as linear operators over the set of 2-D tensors. In this framework, one can obtain a factorization for 3-D data, referred to as the tensor singular value decomposition (t-SVD), which is similar to the SVD for matrices. t-SVD results in a notion of rank referred to as the tubal-rank. Using this approach we consider the problem of sampling and recovery of 3-D arrays with low tubal-rank. We show that by solving a convex optimization problem, which minimizes a convex surrogate to the tubal-rank, one can guarantee exact recovery with high probability as long as number of samples is of the order <inline-formula><tex-math notation=\"LaTeX\">$O(rnk \\log (nk))$ </tex-math></inline-formula>, given a tensor of size <inline-formula><tex-math notation=\"LaTeX\">$n\\times n\\times k$ </tex-math></inline-formula> with tubal-rank <inline-formula><tex-math notation=\"LaTeX\">$r$</tex-math></inline-formula> . The conditions under which this result holds are similar to the incoherence conditions for low-rank matrix completion under random sampling. The difference is that we define incoherence under the algebraic setup of t-SVD, which is different from the standard matrix incoherence conditions. We also compare the numerical performance of the proposed algorithm with some state-of-the-art approaches on real-world datasets.", "title": "" }, { "docid": "e5cd0bdffd94215aa19a5fc29a1b6753", "text": "Anhedonia is a core symptom of major depressive disorder (MDD), long thought to be associated with reduced dopaminergic function. However, most antidepressants do not act directly on the dopamine system and all antidepressants have a delayed full therapeutic effect. Recently, it has been proposed that antidepressants fail to alter dopamine function in antidepressant unresponsive MDD. There is compelling evidence that dopamine neurons code a specific phasic (short duration) reward-learning signal, described by temporal difference (TD) theory. There is no current evidence for other neurons coding a TD reward-learning signal, although such evidence may be found in time. The neuronal substrates of the TD signal were not explored in this study. 
Phasic signals are believed to have quite different properties to tonic (long duration) signals. No studies have investigated phasic reward-learning signals in MDD. Therefore, adults with MDD receiving long-term antidepressant medication, and comparison controls both unmedicated and acutely medicated with the antidepressant citalopram, were scanned using fMRI during a reward-learning task. Three hypotheses were tested: first, patients with MDD have blunted TD reward-learning signals; second, controls given an antidepressant acutely have blunted TD reward-learning signals; third, the extent of alteration in TD signals in major depression correlates with illness severity ratings. The results supported the hypotheses. Patients with MDD had significantly reduced reward-learning signals in many non-brainstem regions: ventral striatum (VS), rostral and dorsal anterior cingulate, retrosplenial cortex (RC), midbrain and hippocampus. However, the TD signal was increased in the brainstem of patients. As predicted, acute antidepressant administration to controls was associated with a blunted TD signal, and the brainstem TD signal was not increased by acute citalopram administration. In a number of regions, the magnitude of the abnormal signals in MDD correlated with illness severity ratings. The findings highlight the importance of phasic reward-learning signals, and are consistent with the hypothesis that antidepressants fail to normalize reward-learning function in antidepressant-unresponsive MDD. Whilst there is evidence that some antidepressants acutely suppress dopamine function, the long-term action of virtually all antidepressants is enhanced dopamine agonist responsiveness. This distinction might help to elucidate the delayed action of antidepressants. Finally, analogous to recent work in schizophrenia, the finding of abnormal phasic reward-learning signals in MDD implies that an integrated understanding of symptoms and treatment mechanisms is possible, spanning physiology, phenomenology and pharmacology.", "title": "" } ]
scidocsrr
245bd7b11aef57b7c19cb348a24cd9dd
3D texture analysis on MRI images of Alzheimer’s disease
[ { "docid": "edd6d9843c8c24497efa336d1a26be9d", "text": "Alzheimer's disease (AD) can be diagnosed with a considerable degree of accuracy. In some centers, clinical diagnosis predicts the autopsy diagnosis with 90% certainty in series reported from academic centers. The characteristic histopathologic changes at autopsy include neurofibrillary tangles, neuritic plaques, neuronal loss, and amyloid angiopathy. Mutations on chromosomes 21, 14, and 1 cause familial AD. Risk factors for AD include advanced age, lower intelligence, small head size, and history of head trauma; female gender may confer additional risks. Susceptibility genes do not cause the disease by themselves but, in combination with other genes or epigenetic factors, modulate the age of onset and increase the probability of developing AD. Among several putative susceptibility genes (on chromosomes 19, 12, and 6), the role of apolipoprotein E (ApoE) on chromosome 19 has been repeatedly confirmed. Protective factors include ApoE-2 genotype, history of estrogen replacement therapy in postmenopausal women, higher educational level, and history of use of nonsteroidal anti-inflammatory agents. The most proximal brain events associated with the clinical expression of dementia are progressive neuronal dysfunction and loss of neurons in specific regions of the brain. Although the cascade of antecedent events leading to the final common path of neurodegeneration must be determined in greater detail, the accumulation of stable amyloid is increasingly widely accepted as a central pathogenetic event. All mutations known to cause AD increase the production of beta-amyloid peptide. This protein is derived from amyloid precursor protein and, when aggregated in a beta-pleated sheet configuration, is neurotoxic and forms the core of neuritic plaques. Nerve cell loss in selected nuclei leads to neurochemical deficiencies, and the combination of neuronal loss and neurotransmitter deficits leads to the appearance of the dementia syndrome. The destructive aspects include neurochemical deficits that disrupt cell-to-cell communications, abnormal synthesis and accumulation of cytoskeletal proteins (e.g., tau), loss of synapses, pruning of dendrites, damage through oxidative metabolism, and cell death. The concepts of cognitive reserve and symptom thresholds may explain the effects of education, intelligence, and brain size on the occurrence and timing of AD symptoms. Advances in understanding the pathogenetic cascade of events that characterize AD provide a framework for early detection and therapeutic interventions, including transmitter replacement therapies, antioxidants, anti-inflammatory agents, estrogens, nerve growth factor, and drugs that prevent amyloid formation in the brain.", "title": "" } ]
[ { "docid": "e7a260bfb238d8b4f147ac9c2a029d1d", "text": "The full-text may be used and/or reproduced, and given to third parties in any format or medium, without prior permission or charge, for personal research or study, educational, or not-for-pro t purposes provided that: • a full bibliographic reference is made to the original source • a link is made to the metadata record in DRO • the full-text is not changed in any way The full-text must not be sold in any format or medium without the formal permission of the copyright holders. Please consult the full DRO policy for further details.", "title": "" }, { "docid": "81476f837dd763301ba065ac78c5bb65", "text": "Background: The ideal lip augmentation technique provides the longest period of efficacy, lowest complication rate, and best aesthetic results. A myriad of techniques have been described for lip augmentation, but the optimal approach has not yet been established. This systematic review with metaregression will focus on the various filling procedures for lip augmentation (FPLA), with the goal of determining the optimal approach. Methods: A systematic search for all English, French, Spanish, German, Italian, Portuguese and Dutch language studies involving FPLA was performed using these databases: Elsevier Science Direct, PubMed, Highwire Press, Springer Standard Collection, SAGE, DOAJ, Sweetswise, Free E-Journals, Ovid Lippincott Williams & Wilkins, Willey Online Library Journals, and Cochrane Plus. The reference section of every study selected through this database search was subsequently examined to identify additional relevant studies. Results: The database search yielded 29 studies. Nine more studies were retrieved from the reference sections of these 29 studies. The level of evidence ratings of these 38 studies were as follows: level Ib, four studies; level IIb, four studies; level IIIb, one study; and level IV, 29 studies. Ten studies were prospective. Conclusions: This systematic review sought to highlight all the quality data currently available regarding FPLA. Because of the considerable diversity of procedures, no definitive comparisons or conclusions were possible. Additional prospective studies and clinical trials are required to more conclusively determine the most appropriate approach for this procedure. Level of evidence: IV. © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "fb02f47ab50ebe817175f21f7192ae6b", "text": "Generative Adversarial Network (GAN) is a prominent generative model that are widely used in various applications. Recent studies have indicated that it is possible to obtain fake face images with a high visual quality based on this novel model. If those fake faces are abused in image tampering, it would cause some potential moral, ethical and legal problems. In this paper, therefore, we first propose a Convolutional Neural Network (CNN) based method to identify fake face images generated by the current best method [20], and provide experimental evidences to show that the proposed method can achieve satisfactory results with an average accuracy over 99.4%. 
In addition, we provide comparative results evaluated on some variants of the proposed CNN architecture, including the high pass filter, the number of the layer groups and the activation function, to further verify the rationality of our method.", "title": "" }, { "docid": "8d292592202c948c439f055ca5df9d56", "text": "This paper provides an overview of the current state of the art in persuasive systems design. All peer-reviewed full papers published at the first three International Conferences on Persuasive Technology were analyzed employing a literature review framework. Results from this analysis are discussed and directions for future research are suggested. Most research papers so far have been experimental. Five out of six of these papers (84.4%) have addressed behavioral change rather than an attitude change. Tailoring, tunneling, reduction and social comparison have been the most studied methods for persuasion. Quite, surprisingly ethical considerations have remained largely unaddressed in these papers. In general, many of the research papers seem to describe the investigated persuasive systems in a relatively vague manner leaving room for some improvement.", "title": "" }, { "docid": "c21280fa617bcf55991702211f1fde8b", "text": "How useful can machine learning be in a quantum laboratory? Here we raise the question of the potential of intelligent machines in the context of scientific research. A major motivation for the present work is the unknown reachability of various entanglement classes in quantum experiments. We investigate this question by using the projective simulation model, a physics-oriented approach to artificial intelligence. In our approach, the projective simulation system is challenged to design complex photonic quantum experiments that produce high-dimensional entangled multiphoton states, which are of high interest in modern quantum experiments. The artificial intelligence system learns to create a variety of entangled states and improves the efficiency of their realization. In the process, the system autonomously (re)discovers experimental techniques which are only now becoming standard in modern quantum optical experiments-a trait which was not explicitly demanded from the system but emerged through the process of learning. Such features highlight the possibility that machines could have a significantly more creative role in future research.", "title": "" }, { "docid": "df6d4e6d74d96b7ab1951cc869caad59", "text": "A broadband commonly fed antenna with dual polarization is proposed in this letter. The main radiator of the antenna is designed as a loop formed by four staircase-like branches. In this structure, the 0° polarization and 90° polarization share the same radiator and reflector. Measurement shows that the proposed antenna obtains a broad impedance bandwidth of 70% (1.5–3.1 GHz) with <inline-formula><tex-math notation=\"LaTeX\">$\\vert {{S}}_{11}\\vert < -{\\text{10 dB}}$</tex-math></inline-formula> and a high port-to-port isolation of 35 dB. The antenna gain within the operating frequency band is between 7.2 and 9.5 dBi, which indicates a stable broadband radiation performance. Moreover, a high cross-polarization discrimination of 25 dB is achieved across the whole operating frequency band.", "title": "" }, { "docid": "5c112eb4be8321d79b63790e84de278f", "text": "Service-dominant logic continues its evolution, facilitated by an active community of scholars throughout the world. 
Along its evolutionary path, there has been increased recognition of the need for a crisper and more precise delineation of the foundational premises and specification of the axioms of S-D logic. It also has become apparent that a limitation of the current foundational premises/axioms is the absence of a clearly articulated specification of the mechanisms of (often massive-scale) coordination and cooperation involved in the cocreation of value through markets and, more broadly, in society. This is especially important because markets are even more about cooperation than about the competition that is more frequently discussed. To alleviate this limitation and facilitate a better understanding of cooperation (and coordination), an eleventh foundational premise (fifth axiom) is introduced, focusing on the role of institutions and institutional arrangements in systems of value cocreation: service ecosystems. Literature on institutions across multiple social disciplines, including marketing, is briefly reviewed and offered as further support for this fifth axiom.", "title": "" }, { "docid": "080faec9dff683610f2e98609d53d044", "text": "We present a system which is able to reconstruct human faces on mobile devices with only on-device processing using the sensors which are typically built into a current commodity smart phone. Such technology can for example be used for facial authentication purposes or as a fast preview for further post-processing. Our method uses recently proposed techniques which compute depth maps by passive multi-view stereo directly on the device. We propose an efficient method which recovers the geometry of the face from the typically noisy point cloud. First, we show that we can safely restrict the reconstruction to a 2.5D height map representation. Therefore we then propose a novel low dimensional height map shape model for faces which can be fitted to the input data efficiently even on a mobile phone. In order to be able to represent instance specific shape details, such as moles, we augment the reconstruction from the shape model with a distance map which can be regularized efficiently. We thoroughly evaluate our approach on synthetic and real data, thereby we use both high resolution depth data acquired using high quality multi-view stereo and depth data directly computed on mobile phones.", "title": "" }, { "docid": "c694936a9b8f13654d06b72c077ed8f4", "text": "Druid is an open source data store designed for real-time exploratory analytics on large data sets. The system combines a column-oriented storage layout, a distributed, shared-nothing architecture, and an advanced indexing structure to allow for the arbitrary exploration of billion-row tables with sub-second latencies. In this paper, we describe Druid’s architecture, and detail how it supports fast aggregations, flexible filters, and low latency data ingestion.", "title": "" }, { "docid": "6e7098f39a8b860307dba52dcc7e0d42", "text": "The paper presents an experimental algorithm to detect conventionalized metaphors implicit in the lexical data in a resource like WordNet, where metaphors are coded into the senses and so would never be detected by any algorithm based on the violation of preferences, since there would always be a constraint satisfied by such senses. We report an implementation of this algorithm, which was implemented first the preference constraints in VerbNet. 
We then derived in a systematic way a far more extensive set of constraints based on WordNet glosses, and with this data we reimplemented the detection algorithm and got a substantial improvement in recall. We suggest that this technique could contribute to improve the performance of existing metaphor detection strategies that do not attempt to detect conventionalized metaphors. The new WordNet-derived data is of wider significance because it also contains adjective constraints, unlike any existing lexical resource, and can be applied to any language with a semantic parser (and", "title": "" }, { "docid": "fcea869f6aafdc0d341c87073422256f", "text": "Table A1 summarizes the various characteristics of the synthetic models used in the experiments, including the number of event types, the size of the state space, whether a challenging construct is contained (loops, duplicates, nonlocal choice, and concurrency), and the entropy of the process defined by the model (estimated based on a sample of size 10,000). The original models may contain either duplicate tasks (two conceptually different transitions with the same label) or invisible tasks (transitions that have no label, as their firing is not recorded in the event log). We transformed all invisible transitions to duplicates such that, when there was an invisible task i in the original model, we added duplicates for all transitions t that, when fired, enable the invisible transition. These duplicates emulate the combined firing of t and i. Since we do not distinguish between duplicates and invisible tasks, we combined this category.", "title": "" }, { "docid": "87748bcc07ab498218233645bdd4dd0c", "text": "This paper proposes a method of recognizing and classifying the basic activities such as forward and backward motions by applying a deep learning framework on passive radio frequency (RF) signals. The echoes from the moving body possess unique pattern which can be used to recognize and classify the activity. A passive RF sensing test- bed is set up with two channels where the first one is the reference channel providing the un- altered echoes of the transmitter signals and the other one is the surveillance channel providing the echoes of the transmitter signals reflecting from the moving body in the area of interest. The echoes of the transmitter signals are eliminated from the surveillance signals by performing adaptive filtering. The resultant time series signal is classified into different motions as predicted by proposed novel method of convolutional neural network (CNN). Extensive amount of training data has been collected to train the model, which serves as a reference benchmark for the later studies in this field.", "title": "" }, { "docid": "ebea79abc60a5d55d0397d21f54cc85e", "text": "The increasing availability of large-scale location traces creates unprecedent opportunities to change the paradigm for knowledge discovery in transportation systems. A particularly promising area is to extract useful business intelligence, which can be used as guidance for reducing inefficiencies in energy consumption of transportation sectors, improving customer experiences, and increasing business performances. However, extracting business intelligence from location traces is not a trivial task. Conventional data analytic tools are usually not customized for handling large, complex, dynamic, and distributed nature of location traces. 
To that end, we develop a taxi business intelligence system to explore the massive taxi location traces from different business perspectives with various data mining functions. Since we implement the system using the real-world taxi GPS data, this demonstration will help taxi companies to improve their business performances by understanding the behaviors of both drivers and customers. In addition, several identified technical challenges also motivate data mining people to develop more sophisticate techniques in the future.", "title": "" }, { "docid": "40ca946c3cd4c8617585c648de5ce883", "text": "Investigating the incidence, type, and preventability of adverse drug events (ADEs) and medication errors is crucial to improving the quality of health care delivery. ADEs, potential ADEs, and medication errors can be collected by extraction from practice data, solicitation of incidents from health professionals, and patient surveys. Practice data include charts, laboratory, prescription data, and administrative databases, and can be reviewed manually or screened by computer systems to identify signals. Research nurses, pharmacists, or research assistants review these signals, and those that are likely to represent an ADE or medication error are presented to reviewers who independently categorize them into ADEs, potential ADEs, medication errors, or exclusions. These incidents are also classified according to preventability, ameliorability, disability, severity, stage, and responsible person. These classifications, as well as the initial selection of incidents, have been evaluated for agreement between reviewers and the level of agreement found ranged from satisfactory to excellent (kappa = 0.32-0.98). The method of ADE and medication error detection and classification described is feasible and has good reliability. It can be used in various clinical settings to measure and improve medication safety.", "title": "" }, { "docid": "2d5464b91c5e8338c9bc697d89135b49", "text": "A new phototrophic sulfur bacterium has been isolated from a red layer in a laminated mat occurring underneath a gypsum crust in the mediterranean salterns of Salin-de-Giraud (Camargue, France). Single cells were coccus-shaped, non motile, without gas vacuoles and contained sulfur globules. Bacteriochlorophyll a and okenone were present as major photosynthetic pigments. These properties and the G+C content of DNA (65.9–66.6 mol% G+C) are typical characteristics of the genus Thiocapsa. However, the new isolate differs from known species in the genus, particularly in NaCl requirement (optimum, 7% NaCl; range, 3–20% NaCl) and some physiological characteristics. Therefore, a new species is proposed, Thiocapsa halophila, sp. nov.", "title": "" }, { "docid": "477be87ed75b8245de5e084a366b7a6d", "text": "This paper addresses the problem of using unmanned aerial vehicles for the transportation of suspended loads. The proposed solution introduces a novel control law capable of steering the aerial robot to a desired reference while simultaneously limiting the sway of the payload. The stability of the equilibrium is proven rigorously through the application of the nested saturation formalism. Numerical simulations demonstrating the effectiveness of the controller are provided.", "title": "" }, { "docid": "790c3da6f3f1d5aa1cc478db5a4ac0b8", "text": "The present article investigated the performance and corrosion behavior between Ag alloy wire bond and Al pad under molding compounds of different chlorine contents. 
The epoxy molding compounds (EMCs) were categorized as ultra-high chlorine, high chlorine and low chlorine, respectively, with 18.3 and 4.1 ppm chlorine contents. The ball bonds were stressed under 130°C/85%RH with biased voltage of 10V. The interfacial evolution between Ag alloy wire bond and Al pad was investigated in EMC of three chlorine contents after the biased-HAST test. The Ag bonding wires used in the plastic ball grid array (PBGA) package include low Ag wire (89wt%) and high Ag alloy wire (97wt%). The as-bonded wire bond exhibits an average Ag-Al IMC thickness of ~0.56 μm in both types of Ag alloy wire. Two Ag-Al IMC layers, AgAl2 and Ag4Al, analyzed by EDX were formed after 96h of biased-HAST test. The joint failed in 96h and 480h, respectively, under high chlorine content EMC. The joint lasts longer than 1056h with low chlorine content EMC. The corrosion of IMC formed between Ag alloy wire and Al pad occurs in the high Ag content alloy wire. The results of EDX analysis indicate that the chlorine ion diffuses from molding compound to IMC through the crack formed between IMC and Al pad. Al2O3 was formed within the IMC layer. It is believed that the existence of Al2O3 accelerates the penetration of the chlorine ion and thus the corrosion.", "title": "" }, { "docid": "07ff0274408e9ba5d6cd2b1a2cb7cbf8", "text": "Though tremendous strides have been made in object recognition, one of the remaining open challenges is detecting small objects. We explore three aspects of the problem in the context of finding small faces: the role of scale invariance, image resolution, and contextual reasoning. While most recognition approaches aim to be scale-invariant, the cues for recognizing a 3px tall face are fundamentally different than those for recognizing a 300px tall face. We take a different approach and train separate detectors for different scales. To maintain efficiency, detectors are trained in a multi-task fashion: they make use of features extracted from multiple layers of a single (deep) feature hierarchy. While training detectors for large objects is straightforward, the crucial challenge remains training detectors for small objects. We show that context is crucial, and define templates that make use of massively-large receptive fields (where 99% of the template extends beyond the object of interest). Finally, we explore the role of scale in pre-trained deep networks, providing ways to extrapolate networks tuned for limited scales to rather extreme ranges. We demonstrate state-of-the-art results on massively-benchmarked face datasets (FDDB and WIDER FACE). In particular, when compared to prior art on WIDER FACE, our results reduce error by a factor of 2 (our models produce an AP of 82% while prior art ranges from 29-64%).", "title": "" }, { "docid": "af752d0de962449acd9a22608bd7baba", "text": "W4 is a real time visual surveillance system for detecting and tracking multiple people and monitoring their activities in an outdoor environment. It operates on monocular gray-scale video imagery, or on video imagery from an infrared camera. W4 employs a combination of shape analysis and tracking to locate people and their parts (head, hands, feet, torso) and to create models of people's appearance so that they can be tracked through interactions such as occlusions. It can determine whether a foreground region contains multiple people and can segment the region into its constituent people and track them. 
W4 can also determine whether people are carrying objects, and can segment objects from their silhouettes, and construct appearance models for them so they can be identified in subsequent frames. W4 can recognize events between people and objects, such as depositing an object, exchanging bags, or removing an object. It runs at 25 Hz for 320×240 resolution images on a 400 MHz dual-Pentium II PC.", "title": "" } ]
scidocsrr
69b81381861b35978d1138fab5be99ea
An interactive graph cut method for brain tumor segmentation
[ { "docid": "85a076e58f4d117a37dfe6b3d68f5933", "text": "We propose a new model for active contours to detect objects in a given image, based on techniques of curve evolution, Mumford-Shah (1989) functional for segmentation and level sets. Our model can detect objects whose boundaries are not necessarily defined by the gradient. We minimize an energy which can be seen as a particular case of the minimal partition problem. In the level set formulation, the problem becomes a \"mean-curvature flow\"-like evolving the active contour, which will stop on the desired boundary. However, the stopping term does not depend on the gradient of the image, as in the classical active contour models, but is instead related to a particular segmentation of the image. We give a numerical algorithm using finite differences. Finally, we present various experimental results and in particular some examples for which the classical snakes methods based on the gradient are not applicable. Also, the initial curve can be anywhere in the image, and interior contours are automatically detected.", "title": "" } ]
[ { "docid": "a73b9ce3d0808177c9f0739b67a1a3f3", "text": "Multiword expressions (MWEs) are lexical items that can be decomposed into multiple component words, but have properties that are unpredictable with respect to their component words. In this paper we propose the first deep learning models for token-level identification of MWEs. Specifically, we consider a layered feedforward network, a recurrent neural network, and convolutional neural networks. In experimental results we show that convolutional neural networks are able to outperform the previous state-of-the-art for MWE identification, with a convolutional neural network with three hidden layers giving the best performance.", "title": "" }, { "docid": "308e06ce00b1dfaf731b1a91e7c56836", "text": "OBJECTIVE\nTo systematically review the literature regarding how statistical process control--with control charts as a core tool--has been applied to healthcare quality improvement, and to examine the benefits, limitations, barriers and facilitating factors related to such application.\n\n\nDATA SOURCES\nOriginal articles found in relevant databases, including Web of Science and Medline, covering the period 1966 to June 2004.\n\n\nSTUDY SELECTION\nFrom 311 articles, 57 empirical studies, published between 1990 and 2004, met the inclusion criteria.\n\n\nMETHODS\nA standardised data abstraction form was used for extracting data relevant to the review questions, and the data were analysed thematically.\n\n\nRESULTS\nStatistical process control was applied in a wide range of settings and specialties, at diverse levels of organisation and directly by patients, using 97 different variables. The review revealed 12 categories of benefits, 6 categories of limitations, 10 categories of barriers, and 23 factors that facilitate its application and all are fully referenced in this report. Statistical process control helped different actors manage change and improve healthcare processes. It also enabled patients with, for example asthma or diabetes mellitus, to manage their own health, and thus has therapeutic qualities. Its power hinges on correct and smart application, which is not necessarily a trivial task. This review catalogs 11 approaches to such smart application, including risk adjustment and data stratification.\n\n\nCONCLUSION\nStatistical process control is a versatile tool which can help diverse stakeholders to manage change in healthcare and improve patients' health.", "title": "" }, { "docid": "e2c4f9cfce1db6282fe3a23fd5d6f3a4", "text": "In semi-structured case-oriented business processes, the sequence of process steps is determined by case workers based on available document content associated with a case. Transitions between process execution steps are therefore case specific and depend on independent judgment of case workers. In this paper, we propose an instance-specific probabilistic process model (PPM) whose transition probabilities are customized to the semi-structured business process instance it represents. An instance-specific PPM serves as a powerful representation to predict the likelihood of different outcomes. We also show that certain instance-specific PPMs can be transformed into a Markov chain under some non-restrictive assumptions. For instance-specific PPMs that contain parallel execution of tasks, we provide an algorithm to map them to an extended space Markov chain. This way existing Markov techniques can be leveraged to make predictions about the likelihood of executing future tasks. 
Predictions provided by our technique could generate early alerts for case workers about the likelihood of important or undesired outcomes in an executing case instance. We have implemented and validated our approach on a simulated automobile insurance claims handling semi-structured business process. Results indicate that an instance-specific PPM provides more accurate predictions than other methods such as conditional probability. We also show that as more document data become available, the prediction accuracy of an instance-specific PPM increases.", "title": "" }, { "docid": "186c928bf9f3639294bddc1b85c8c358", "text": "Domain adaptation methods aim to learn a good prediction model in a label-scarce target domain by leveraging labeled patterns from a related source domain where there is a large amount of labeled data. However, in many practical domain adaptation learning scenarios, the feature distribution in the source domain is different from that in the target domain. In the extreme, the two distributions could differ completely when the feature representation of the source domain is totally different from that of the target domain. To address the problems of substantial feature distribution divergence across domains and heterogeneous feature representations of different domains, we propose a novel feature space independent semi-supervised kernel matching method for domain adaptation in this work. Our approach learns a prediction function on the labeled source data while mapping the target data points to similar source data points by matching the target kernel matrix to a submatrix of the source kernel matrix based on a Hilbert Schmidt Independence Criterion. We formulate this simultaneous learning and mapping process as a non-convex integer optimization problem and present a local minimization procedure for its relaxed continuous form. We evaluate the proposed kernel matching method using both cross domain sentiment classification tasks of Amazon product reviews and cross language text classification tasks of Reuters multilingual newswire stories. Our empirical results demonstrate that the proposed kernel matching method consistently and significantly outperforms comparison methods on both cross domain classification problems with homogeneous feature spaces and cross domain classification problems with heterogeneous feature spaces.", "title": "" }, { "docid": "129a42c825850acd12b2f90a0c65f4ea", "text": "Vertical fractures in teeth can present difficulties in diagnosis. There are, however, many specific clinical and radiographical signs which, when present, can alert clinicians to the existence of a fracture. In this review, the diagnosis of vertical root fractures is discussed in detail, and examples are presented of clinical and radiographic signs associated with these fractured teeth. Treatment alternatives are discussed for both posterior and anterior teeth.", "title": "" }, { "docid": "209d202fd4b0e2376894345e3806bb70", "text": "Support vector data description (SVDD) is a useful method for outlier detection and has been applied to a variety of applications. However, in the existing optimization procedure of SVDD, there are some issues which may lead to improper usage of SVDD. Some of the issues might already be known in practice, but the theoretical discussion, justification and correction are still lacking. Given the wide use of SVDD, these issues inspire us to carefully study SVDD in the view of convex optimization. 
In particular, we derive the dual problem with strong duality, prove theorems to handle theoretical insufficiency in the literature of SVDD, investigate some novel extensions of SVDD, and come up with an implementation of training SVDD with theoretical guarantee.", "title": "" }, { "docid": "1f9d552c63f35b696d8fa0bc7d0cfc64", "text": "Using Argumentation to Control Lexical Choice: A Functional Unification Implementation", "title": "" }, { "docid": "2ae3a8bf304cfce89e8fcd331d1ec733", "text": "Linear Discriminant Analysis (LDA) is among the most optimal dimension reduction methods for classification, which provides a high degree of class separability for numerous applications from science and engineering. However, problems arise with this classical method when one or both of the scatter matrices is singular. Singular scatter matrices are not unusual in many applications, especially for highdimensional data. For high-dimensional undersampled and oversampled problems, the classical LDA requires modification in order to solve a wider range of problems. In recent work the generalized singular value decomposition (GSVD) has been shown to mitigate the issue of singular scatter matrices, and a new algorithm, LDA/GSVD, has been shown to be very robust for many applications in machine learning. However, the GSVD inherently has a considerable computational overhead. In this paper, we propose fast algorithms based on the QR decomposition and regularization that solve the LDA/GSVD computational bottleneck. In addition, we present fast algorithms for classical LDA and regularized LDA utilizing the framework based on LDA/GSVD and preprocessing by the Cholesky decomposition. Experimental results are presented that demonstrate substantial speedup in all of classical LDA, regularized LDA, and LDA/GSVD algorithms without any sacrifice in classification performance for a wide range of machine learning applications.", "title": "" }, { "docid": "71dedfe6f0df1ab1c8f44f28791db66c", "text": "Summarizing a document requires identifying the important parts of the document with an objective of providing a quick overview to a reader. However, a long article can span several topics and a single summary cannot do justice to all the topics. Further, the interests of readers can vary and the notion of importance can change across them. Existing summarization algorithms generate a single summary and are not capable of generating multiple summaries tuned to the interests of the readers. In this paper, we propose an attention based RNN framework to generate multiple summaries of a single document tuned to different topics of interest. Our method outperforms existing baselines and our results suggest that the attention of generative networks can be successfully biased to look at sentences relevant to a topic and effectively used to generate topic-tuned summaries.", "title": "" }, { "docid": "cbeaacd304c0fcb1bce3decfb8e76e33", "text": "One of the main problems with virtual reality as a learning tool is that there are hardly any theories or models upon which to found and justify the application development. This paper presents a model that defends the metaphorical design of educational virtual reality systems. The goal is to build virtual worlds capable of embodying the knowledge to be taught: the metaphorical structuring of abstract concepts looks for bodily forms of expression in order to make knowledge accessible to students. 
The description of a case study aimed at learning scientific categorization serves to explain and implement the process of metaphorical projection. Our proposals are based on Lakoff and Johnson's theory of cognition, which defends the conception of the embodied mind, according to which most of our knowledge relies on basic metaphors derived from our bodily experience.", "title": "" }, { "docid": "40f8240220dad82a7a2da33932fb0e73", "text": "The incidence of clinically evident Curling's ulcer among 109 potentially salvageable severely burned patients was reviewed. These patients, who had greater than a 40 per cent body surface area burn, received one of these three treatment regimens: antacids hourly until autografting was complete, antacids hourly during the early postburn period followed by nutritional supplementation with Vivonex until autografting was complete or no antacids during the early postburn period but subsequent nutritional supplementation with Vivonex until autografting was complete. Clinically evident Curling's ulcer occurred in three patients. This incidence approximates the lowest reported among severely burned patients treated prophylactically with acid-reducing regimens to minimize clinically evident Curling's ulcer. In addition to its protective effect on Curling's ulcer, Vivonex, when used in combination with a high protein, high caloric diet, meets the caloric needs of the severely burned patient. Probably, Vivonex, which has a pH range of 4.5 to 5.4 protects against clinically evident Curling's ulcer by a dilutional alkalinization of gastric secretion.", "title": "" }, { "docid": "7350c0433fe1330803403e6aa03a2f26", "text": "An introduction is provided to Multi-Entity Bayesian Networks (MEBN), a logic system that integrates First Order Logic (FOL) with Bayesian probability theory. MEBN extends ordinary Bayesian networks to allow representation of graphical models with repeated sub-structures. Knowledge is encoded as a collection of Bayesian network fragments (MFrags) that can be instantiated and combined to form highly complex situation-specific Bayesian networks. A MEBN theory (MTheory) implicitly represents a joint probability distribution over possibly unbounded numbers of hypotheses, and uses Bayesian learning to refine a knowledge base as observations accrue. MEBN provides a logical foundation for the emerging collection of highly expressive probability-based languages. A running example illustrates the representation and reasoning power of the MEBN formalism.", "title": "" }, { "docid": "1c7457ef393a604447b0478451ef0c62", "text": "Melasma is an acquired increased pigmentation of the skin [1], a symmetric hypermelanosis, characterized by irregular light to gray brown macules. Melasma comes from the Greek word melas [= black color), formerly known as Chloasma, another Greek word meaning green color, even though the term was more often used for melasma cases during pregnancy. It is considered to be part of a large group of facial melanosis, such as Riehl’s melanosis, Lichen planuspigmentous, erythema dyschromicumperstans, erythrosis and poikiloderma of Civatte [2]. Hyperpigmented macules and patches are most commonly developed in the sun-exposed areas of the skin [3]. Melasma is considered to be a chronic acquired hypermelanosis of the skin [4], with poorly understood pathogenesis [5]. 
The increased pigmentation and the photo damaged features that characterize melasma include solar elastosis, even though the main pathogenesis still remains unknown [6].", "title": "" }, { "docid": "65a143978c3b1980f512cfb22f176568", "text": "Recognizing textual entailment (RTE) has been proposed as a task in computational linguistics under a successful series of annual evaluation campaigns started in 2005 with the Pascal RTE-1 shared task. RTE is defined as the capability of a system to recognize that the meaning of a portion of text (usually one or few sentences) entails the meaning of another portion of text. Subsequently, the task has also been extended to recognizing specific cases of non-entailment, as when the meaning of the first text contradicts the meaning of the second text. Although the study of entailment phenomena in natural language was addressed much earlier, the novelty of the RTE evaluation was to propose a simple text-to-text task to compare human and system judgments, making it possible to build data sets and to experiment with a variety of approaches. Two main reasons likely contributed to the success of the initiative: First, the possibility to address, for the first time, the complexity of entailment phenomena under a data-driven perspective; second, the text-to-text approach allows one to easily incorporate a textual entailment engine into applications (e.g., question answering, summarization, information extraction) as a core inferential component.", "title": "" }, { "docid": "97968acf486f3f4bcdbccdfcd116dabb", "text": "Disruption of electric power operations can be catastrophic on national security and the economy. Due to the complexity of widely dispersed assets and the interdependences among computer, communication, and power infrastructures, the requirement to meet security and quality compliance on operations is a challenging issue. In recent years, the North American Electric Reliability Corporation (NERC) established a cybersecurity standard that requires utilities' compliance on cybersecurity of control systems. This standard identifies several cyber-related vulnerabilities that exist in control systems and recommends several remedial actions (e.g., best practices). In this paper, a comprehensive survey on cybersecurity of critical infrastructures is reported. A supervisory control and data acquisition security framework with the following four major components is proposed: (1) real-time monitoring; (2) anomaly detection; (3) impact analysis; and (4) mitigation strategies. In addition, an attack-tree-based methodology for impact analysis is developed. The attack-tree formulation based on power system control networks is used to evaluate system-, scenario -, and leaf-level vulnerabilities by identifying the system's adversary objectives. The leaf vulnerability is fundamental to the methodology that involves port auditing or password strength evaluation. The measure of vulnerabilities in the power system control framework is determined based on existing cybersecurity conditions, and then, the vulnerability indices are evaluated.", "title": "" }, { "docid": "e6e6eb1f1c0613a291c62064144ff0ba", "text": "Mobile phones have become the most popular way to communicate with other individuals. While cell phones have become less of a status symbol and more of a fashion statement, they have created an unspoken social dependency. 
Adolescents and young adults are more likely to engage in SMS messaging, making phone calls, accessing the internet from their phones or playing mobile-driven games. Once pervaded by boredom, teenagers resort to instant connection, to someone, somewhere. Sensation-seeking behavior has also been linked to the desire of adolescents and young adults to take risks with relationships, rules and roles. Individuals seek out entertainment and avoid boredom at all times, be it appropriate or inappropriate. Cell phones are used for entertainment, information and social connectivity. It has been demonstrated that individuals with low self-esteem use cell phones to form and maintain social relationships. They form an attachment to their cell phones and come to believe that they cannot function without them on a day-to-day basis. In this context, the study attempts to examine the extent of mobile phone use and its influence on the academic performance of the students. A face-to-face survey using a structured questionnaire was the method used to elicit the opinions of students in the age group of 18-25 years in three cities covering all three regions of the State of Andhra Pradesh in India. The survey was administered among 1200 young adults through two-stage random sampling to select the colleges and respondents from the selected colleges, with 400 from each city. In Hyderabad, 201 males and 199 females participated in the survey. In Visakhapatnam, 192 males and 208 females participated. In Tirupati, 220 males and 180 females completed the survey. Two criteria were taken into consideration while choosing the participants for the survey. The participants were college-going students and mobile phone users. Each of the survey responses was entered and analyzed using SPSS software. The Statistical Package for the Social Sciences (SPSS 16) was used to work out the distribution of samples in terms of percentages for each specified parameter.", "title": "" }, { "docid": "00f9290840ba201e23d0ea6149f344e4", "text": "Despite the plethora of security advice and online education materials offered to end-users, there exists no standard measurement tool for end-user security behaviors. We present the creation of such a tool. We surveyed the most common computer security advice that experts offer to end-users in order to construct a set of Likert scale questions to probe the extent to which respondents claim to follow this advice. Using these questions, we iteratively surveyed a pool of 3,619 computer users to refine our question set such that each question was applicable to a large percentage of the population, exhibited adequate variance between respondents, and had high reliability (i.e., desirable psychometric properties). After performing both exploratory and confirmatory factor analysis, we identified a 16-item scale consisting of four sub-scales that measures attitudes towards choosing passwords, device securement, staying up-to-date, and proactive awareness.", "title": "" }, { "docid": "25b16e9fa168a58ea813110ea46c6ce8", "text": "In many graph–mining problems, two networks from different domains have to be matched. In the absence of reliable node attributes, graph matching has to rely on only the link structures of the two networks, which amounts to a generalization of the classic graph isomorphism problem. Graph matching has applications in social–network reconciliation and de-anonymization, protein–network alignment in biology, and computer vision. 
The most scalable graph–matching approaches use ideas from percolation theory, where a matched node pair “infects” neighbouring pairs as additional potential matches. This class of matching algorithm requires an initial seed set of known matches to start the percolation. The size and correctness of the matching is very sensitive to the size of the seed set. In this paper, we give a new graph–matching algorithm that can operate with a much smaller seed set than previous approaches, with only a small increase in matching errors. We characterize a phase transition in matching performance as a function of the seed set size, using a random bigraph model and ideas from bootstrap percolation theory. We also show the excellent performance in matching several real large-scale social networks, using only a handful of seeds.", "title": "" }, { "docid": "4b546f3bc34237d31c862576ecf63f9a", "text": "Optimizing the internal supply chain for direct or production goods was a major element during the implementation of enterprise resource planning systems (ERP) which has taken place since the late 1980s. However, supply chains to the suppliers of indirect materials were not usually included due to low transaction volumes, low product values and low strategic importance of these goods. With the advent of the Internet, systems for streamlining indirect goods supply chains emerged and were adopted by many companies. In view of the paperprone processes in many companies, the implementation of these electronic procurement systems led to substantial improvement potentials. This research reports the quantitative and qualitative results of a benchmarking study which explores the use of the Internet in procurement (eProcurement). Among the major goals are to obtain more insight on how European and North American companies used and introduced eProcurement solutions as well as how these systems enhanced the procurement function. The analysis presents a heterogeneous picture and shows that all analyzed solutions emphasize different parts of the procurement and coordination process. Based on interviews and case studies the research proposes an initial set of generalized success factors which may improve future implementations and stimulate further success factor research.", "title": "" } ]
scidocsrr
30299681fe7d92626a84c1b1a6b7deac
Deep learning for tactile understanding from visual and haptic data
[ { "docid": "14658e1be562a01c1ba8338f5e87020b", "text": "This paper discusses a novel approach in developing a texture sensor emulating the major features of a human finger. The aim of this study is to realize precise and quantitative texture sensing. Three physical properties, roughness, softness, and friction are known to constitute texture perception of humans. The sensor is designed to measure the three specific types of information by adopting the mechanism of human texture perception. First, four features of the human finger that were focused on in designing the novel sensor are introduced. Each feature is considered to play an important role in texture perception; the existence of nails and bone, the multiple layered structure of soft tissue, the distribution of mechanoreceptors, and the deployment of epidermal ridges. Next, detailed design of the texture sensor based on the design concept is explained, followed by evaluating experiments and analysis of the results. Finally, we conducted texture perceptive experiments of actual material using the developed sensor, thus achieving the information expected. Results show the potential of our approach.", "title": "" } ]
[ { "docid": "2f110c5f312ceefdf6c1ea1fd78a361f", "text": "Enrollments in introductory computer science courses are growing rapidly, thereby taxing scarce teaching resources and motivating the increased use of automated tools for program grading. Such tools commonly rely on regression testing methods from industry. However, the goals of automated grading differ from those of testing for software production. In academia, a primary motivation for testing is to provide timely and accurate feedback to students so that they can understand and fix defects in their programs. Testing strategies for program grading are therefore distinct from those of traditional software testing. This paper enumerates and describes a number of testing strategies that improve the quality of feedback for different types of programming assignments.", "title": "" }, { "docid": "5b3ca1cc607d2e8f0394371f30d9e83a", "text": "We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point.", "title": "" }, { "docid": "bb8115f8c172e22bd0ff70bd079dfa98", "text": "This paper reports on the second generation of the Pleated Pneumatic Artificial Muscle (PPAM) which has been developed to extend the life span of its first prototype. This type of artificial was developed to overcome dry friction and material deformation which is present in the widely used McKibben type of artificial muscle. The essence of the PPAM is its pleated membrane structure which enables the muscle to work at low pressures and at large contractions. There is a growing interest in this kind of actuation for robotics applications due to its high power to weight ratio and the adaptable compliance, especially for legged locomotion and robot applications in direct contact with a human. This paper describes the design of the second generation PPAM, for which specifically the membrane layout has been changed. In function of this new layout the mathematical model, developed for the first prototype, has been reformulated. This paper gives an elaborate discussion on this mathematical model which represents the force generation and enclosed muscle volume. Static load tests on some real muscles, which have been carried out in order to validate the mathematical model, are then discussed. Furthermore are given two robotic applications which currently use these pneumatic artificial muscles. One is the biped Lucy and the another one is a manipulator application which works in direct contact with an operator.", "title": "" }, { "docid": "2d22631dcbbae408e0856b414c2f7d8e", "text": "During the past few years, interest in convolutional neural networks (CNNs) has risen constantly, thanks to their excellent performance on a wide range of recognition and classification tasks. 
However, they suffer from the high level of complexity imposed by the high-dimensional convolutions in convolutional layers. Within scenarios with limited hardware resources and tight power and latency constraints, the high computational complexity of CNNs makes them difficult to be exploited. Hardware solutions have striven to reduce the power consumption using low-power techniques, and to limit the processing time by increasing the number of processing elements (PEs). While most of ASIC designs claim a peak performance of a few hundred giga operations per seconds, their average performance is substantially lower when applied to state-of-the-art CNNs such as AlexNet, VGGNet and ResNet, leading to low resource utilization. Their performance efficiency is limited to less than 55% on average, which leads to unnecessarily high processing latency and silicon area. In this paper, we propose a dataflow which enables to perform both the fully-connected and convolutional computations for any filter/layer size using the same PEs. We then introduce a multi-mode inference engine (MMIE) based on the proposed dataflow. Finally, we show that the proposed MMIE achieves a performance efficiency of more than 84% when performing the computations of the three renown CNNs (i.e., AlexNet, VGGNet and ResNet), outperforming the best architecture in the state-of-the-art in terms of energy consumption, processing latency and silicon area.", "title": "" }, { "docid": "22e3a0e31a70669f311fb51663a76f9c", "text": "A communication infrastructure is an essential part to the success of the emerging smart grid. A scalable and pervasive communication infrastructure is crucial in both construction and operation of a smart grid. In this paper, we present the background and motivation of communication infrastructures in smart grid systems. We also summarize major requirements that smart grid communications must meet. From the experience of several industrial trials on smart grid with communication infrastructures, we expect that the traditional carbon fuel based power plants can cooperate with emerging distributed renewable energy such as wind, solar, etc, to reduce the carbon fuel consumption and consequent green house gas such as carbon dioxide emission. The consumers can minimize their expense on energy by adjusting their intelligent home appliance operations to avoid the peak hours and utilize the renewable energy instead. We further explore the challenges for a communication infrastructure as the part of a complex smart grid system. Since a smart grid system might have over millions of consumers and devices, the demand of its reliability and security is extremely critical. Through a communication infrastructure, a smart grid can improve power reliability and quality to eliminate electricity blackout. Security is a challenging issue since the on-going smart grid systems facing increasing vulnerabilities as more and more automation, remote monitoring/controlling and supervision entities are interconnected.", "title": "" }, { "docid": "c6b656cdec127997a5baf7228e530b02", "text": "There are many scholarly articles in the literature sources that refer to the employee performance evaluation topic. Many scholars, for example, describe relations between employees’ job satisfaction, or motivation, and their performance. Others deal with the performance evaluation of the whole organization where they include tangible and intangible metrics. 
However, only few of them provide with such an employee performance evaluation model that could be practically applied in the companies as a reference. The main purpose of this paper is to explain one such practical model in the form of a standard document procedure which can serve as an example to follow it in the companies of different types. The model incorporates employee performance and compensation policy and is based on the five questions that represent the guiding principles, as well. The practical employee performance evaluation model and standard procedure will be explained based on the information and experience from a middle-sized industrial organization located in the Slovak republic.", "title": "" }, { "docid": "fba55845801b1d145ff45b47efce8155", "text": "This paper presents a technique for substantially reducing the noise of a CMOS low noise amplifier implemented in the inductive source degeneration topology. The effects of the gate induced current noise on the noise performance are taken into account, and the total output noise is strongly reduced by inserting a capacitance of appropriate value in parallel with the amplifying MOS transistor of the LNA. As a result, very low noise figures become possible already at very low power consumption levels.", "title": "" }, { "docid": "b200a40d95e184e486a937901c606e12", "text": "0749-5978/$ see front matter 2008 Elsevier Inc. A doi:10.1016/j.obhdp.2008.06.003 * Corresponding author. E-mail address: sthau@london.edu (S. Thau). Based on uncertainty management theory [Lind, E. A., & Van den Bos, K., (2002). When fairness works: Toward a general theory of uncertainty management. In Staw, B. M., & Kramer, R. M. (Eds.), Research in organizational behavior (Vol. 24, pp. 181–223). Greenwich, CT: JAI Press.], two studies tested whether a management style depicting situational uncertainty moderates the relationship between abusive supervision and workplace deviance. Study 1, using survey data from 379 subordinates of various industries, found that the positive relationship between abusive supervision and organizational deviance was stronger when authoritarian management style was low (high situational uncertainty) rather than high (low situational uncertainty). No significant interaction effect was found on interpersonal deviance. Study 2, using survey data from 1477 subordinates of various industries, found that the positive relationship between abusive supervision and supervisor-directed and organizational deviance was stronger when employees’ perceptions of their organization’s management style reflected high rather than low situational uncertainty. 2008 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "34149311075a7f564abe632adbbed521", "text": "This paper presents a high-gain broadband suspended plate antenna for indoor wireless access point applications. This antenna consists of two layers operating at two adjacent bands. The bottom plate is fed by a tapered down strip excited by a probe through an SMA RF connector. The top plate is shorted to a ground plane by a strip electromagnetically coupled with the feed-strip. The design is carried out using a commercial EM software package, and validated experimentally. The measured result shows that the antenna achieves a broad operational bandwidth of 66%, suitable for access points in WiFi (2.4-2.485 GHz) and WiMAX (2.3-2.7 GHz and the 3.4-3.6 GHz) systems (IEEE 802.11b/g and IEEE 802.16-2004/e). 
The measured antenna gain varies from 7.7-9.5 dBi across the frequency bands of interest. A parametric study of this antenna is also conducted.", "title": "" }, { "docid": "698fb992c5ff7ecc8d2e153f6b385522", "text": "We investigate bag-of-visual-words (BOVW) approaches to land-use classification in high-resolution overhead imagery. We consider a standard non-spatial representation in which the frequencies but not the locations of quantized image features are used to discriminate between classes analogous to how words are used for text document classification without regard to their order of occurrence. We also consider two spatial extensions, the established spatial pyramid match kernel which considers the absolute spatial arrangement of the image features, as well as a novel method which we term the spatial co-occurrence kernel that considers the relative arrangement. These extensions are motivated by the importance of spatial structure in geographic data.\n The methods are evaluated using a large ground truth image dataset of 21 land-use classes. In addition to comparisons with standard approaches, we perform extensive evaluation of different configurations such as the size of the visual dictionaries used to derive the BOVW representations and the scale at which the spatial relationships are considered.\n We show that even though BOVW approaches do not necessarily perform better than the best standard approaches overall, they represent a robust alternative that is more effective for certain land-use classes. We also show that extending the BOVW approach with our proposed spatial co-occurrence kernel consistently improves performance.", "title": "" }, { "docid": "f5d58660137891111a009bc841950ad2", "text": "Lateral brow ptosis is a common aging phenomenon, contributing to the lateral upper eyelid hooding, in addition to dermatochalasis. Lateral brow lift complements upper blepharoplasty in achieving a youthful periorbital appearance. In this study, the author reports his experience in utilizing a temporal (pretrichial) subcutaneous lateral brow lift technique under local anesthesia. A retrospective analysis of all patients undergoing the proposed technique by one surgeon from 2009 to 2016 was conducted. Additional procedures were recorded. Preoperative and postoperative photographs at the longest follow-up visit were used for analysis. Operation was performed under local anesthesia. The surgical technique included a temporal (pretrichial) incision with subcutaneous dissection toward the lateral brow, with superolateral lift and closure. Total of 45 patients (44 females, 1 male; mean age: 58 years) underwent the temporal (pretrichial) subcutaneous lateral brow lift technique under local anesthesia in office setting. The procedure was unilateral in 4 cases. Additional procedures included upper blepharoplasty (38), ptosis surgery (16), and lower blepharoplasty (24). Average follow-up time was 1 year (range, 6 months to 5 years). All patients were satisfied with the eyebrow contour and scar appearance. One patient required additional brow lift on one side for asymmetry. There were no cases of frontal nerve paralysis. In conclusion, the temporal (pretrichial) subcutaneous approach is an effective, safe technique for lateral brow lift/contouring, which can be performed under local anesthesia. It is ideal for women. 
Additional advantages include ease of operation, cost, and shortening the hairline (if necessary).", "title": "" }, { "docid": "a3be253034ffcf61a25ad265fda1d4ff", "text": "With the development of automated logistics systems, flexible manufacture systems (FMS) and unmanned automated factories, the application of automated guided vehicle (AGV) gradually become more important to improve production efficiency and logistics automatism for enterprises. The development of the AGV systems play an important role in reducing labor cost, improving working conditions, unifying information flow and logistics. Path planning has been a key issue in AGV control system. In this paper, two key problems, shortest time path planning and collision in multi AGV have been solved. An improved A-Star (A*) algorithm is proposed, which introduces factors of turning, and edge removal based on the improved A* algorithm is adopted to solve k shortest path problem. Meanwhile, a dynamic path planning method based on A* algorithm which searches effectively the shortest-time path and avoids collision has been presented. Finally, simulation and experiment have been conducted to prove the feasibility of the algorithm.", "title": "" }, { "docid": "9be50791156572e6e1a579952073d810", "text": "A synthetic aperture radar (SAR) raw data simulator is an important tool for testing the system parameters and the imaging algorithms. In this paper, a scene raw data simulator based on an inverse ω-k algorithm for bistatic SAR of a translational invariant case is proposed. The differences between simulations of monostatic and bistatic SAR are also described. The algorithm proposed has high precision and can be used in long-baseline configuration and for single-pass interferometry. Implementation details are described, and plenty of simulation results are provided to validate the algorithm.", "title": "" }, { "docid": "791294c45e63b104b289b52b58512877", "text": "Open source software (OSS) development teams use electronic means, such as emails, instant messaging, or forums, to conduct open and public discussions. Researchers investigated mailing lists considering them as a hub for project communication. Prior work focused on specific aspects of emails, for example the handling of patches, traceability concerns, or social networks. This led to insights pertaining to the investigated aspects, but not to a comprehensive view of what developers communicate about. Our objective is to increase the understanding of development mailing lists communication. We quantitatively and qualitatively analyzed a sample of 506 email threads from the development mailing list of a major OSS project, Lucene. Our investigation reveals that implementation details are discussed only in about 35% of the threads, and that a range of other topics is discussed. Moreover, core developers participate in less than 75% of the threads. We observed that the development mailing list is not the main player in OSS project communication, as it also includes other channels such as the issue repository.", "title": "" }, { "docid": "539c3b253a18f32064935217f6b0ea67", "text": "Salient object detection is not a pure low-level, bottom-up process. Higher-level knowledge is important even for task-independent image saliency. We propose a unified model to incorporate traditional low-level features with higher-level guidance to detect salient objects. 
In our model, an image is represented as a low-rank matrix plus sparse noises in a certain feature space, where the non-salient regions (or background) can be explained by the low-rank matrix, and the salient regions are indicated by the sparse noises. To ensure the validity of this model, a linear transform for the feature space is introduced and needs to be learned. Given an image, its low-level saliency is then extracted by identifying those sparse noises when recovering the low-rank matrix. Furthermore, higher-level knowledge is fused to compose a prior map, and is treated as a prior term in the objective function to improve the performance. Extensive experiments show that our model can comfortably achieve performance comparable to existing methods even without help from high-level knowledge. The integration of top-down priors further improves the performance and achieves state-of-the-art results. Moreover, the proposed model can be considered as a prototype framework not only for general salient object detection, but also for potential task-dependent saliency applications.", "title": "" }, { "docid": "24bb26da0ce658ff075fc89b73cad5af", "text": "Recent trends in robot learning are to use trajectory-based optimal control techniques and reinforcement learning to scale complex robotic systems. On the one hand, increased computational power and multiprocessing, and on the other hand, probabilistic reinforcement learning methods and function approximation, have contributed to a steadily increasing interest in robot learning. Imitation learning has helped significantly to start learning with reasonable initial behavior. However, many applications are still restricted to rather low-dimensional domains and toy applications. Future work will have to demonstrate the continual and autonomous learning abilities, which were alluded to in the introduction.", "title": "" }, { "docid": "8fd28fb7c30c3dc30d4a92f95d38c966", "text": "In recent years, iris recognition is becoming a very active topic in both research and practical applications. However, a fake iris is a potential threat for iris-based systems. This paper presents a novel fake iris detection method based on the analysis of 2-D Fourier spectra together with iris image quality assessment. First, an image quality assessment method is used to exclude defocused or motion-blurred fake irises. Then statistical properties of the Fourier spectra of fake irises are used for clear fake iris detection. Experimental results show that the proposed method can detect photo irises and printed irises effectively.", "title": "" }, { "docid": "408f58b7dd6cb1e6be9060f112773888", "text": "Semantic hashing has become a powerful paradigm for fast similarity search in many information retrieval systems. While fairly successful, previous techniques generally require two-stage training, and the binary constraints are handled ad-hoc. In this paper, we present an end-to-end Neural Architecture for Semantic Hashing (NASH), where the binary hashing codes are treated as Bernoulli latent variables. A neural variational inference framework is proposed for training, where gradients are directly backpropagated through the discrete latent variable to optimize the hash function. We also draw connections between the proposed method and rate-distortion theory, which provides a theoretical foundation for the effectiveness of the proposed framework. 
Experimental results on three public datasets demonstrate that our method significantly outperforms several state-of-the-art models on both unsupervised and supervised scenarios.", "title": "" }, { "docid": "011332e3d331d461e786fd2827b0434d", "text": "In this manuscript we present various robust statistical methods popular in the social sciences, and show how to apply them in R using the WRS2 package available on CRAN. We elaborate on robust location measures, and present robust t-test and ANOVA versions for independent and dependent samples, including quantile ANOVA. Furthermore, we present on running interval smoothers as used in robust ANCOVA, strategies for comparing discrete distributions, robust correlation measures and tests, and robust mediator models.", "title": "" }, { "docid": "7b552767a37a7d63591471195b2e002b", "text": "Point-of-interest (POI) recommendation, which helps mobile users explore new places, has become an important location-based service. Existing approaches for POI recommendation have been mainly focused on exploiting the information about user preferences, social influence, and geographical influence. However, these approaches cannot handle the scenario where users are expecting to have POI recommendation for a specific time period. To this end, in this paper, we propose a unified recommender system, named the 'Where and When to gO' (WWO) recommender system, to integrate the user interests and their evolving sequential preferences with temporal interval assessment. As a result, the WWO system can make recommendations dynamically for a specific time period and the traditional POI recommender system can be treated as the special case of the WWO system by setting this time period long enough. Specifically, to quantify users' sequential preferences, we consider the distributions of the temporal intervals between dependent POIs in the historical check-in sequences. Then, to estimate the distributions with only sparse observations, we develop the low-rank graph construction model, which identifies a set of bi-weighted graph bases so as to learn the static user preferences and the dynamic sequential preferences in a coherent way. Finally, we evaluate the proposed approach using real-world data sets from several location-based social networks (LBSNs). The experimental results show that our method outperforms the state-of-the-art approaches for POI recommendation in terms of various metrics, such as F-measure and NDCG, with a significant margin.", "title": "" } ]
scidocsrr
4161d52a643d1366f0606add5d1cb4ea
Exhaustive search algorithms to mine subgroups on Big Data using Apache Spark
[ { "docid": "55b405991dc250cd56be709d53166dca", "text": "In Data Mining, the usefulness of association rules is strongly limited by the huge amount of delivered rules. To overcome this drawback, several methods were proposed in the literature such as item set concise representations, redundancy reduction, and post processing. However, being generally based on statistical information, most of these methods do not guarantee that the extracted rules are interesting for the user. Thus, it is crucial to help the decision-maker with an efficient post processing step in order to reduce the number of rules. This paper proposes a new interactive approach to prune and filter discovered rules. First, we propose to use ontologies in order to improve the integration of user knowledge in the post processing task. Second, we propose the Rule Schema formalism extending the specification language proposed by Liu et al. for user expectations. Furthermore, an interactive framework is designed to assist the user throughout the analyzing task. Applying our new approach over voluminous sets of rules, we were able, by integrating domain expert knowledge in the post processing step, to reduce the number of rules to several dozens or less. Moreover, the quality of the filtered rules was validated by the domain expert at various points in the interactive process. KeywordsClustering, classification, and association rules, interactive data exploration and discovery, knowledge management applications.", "title": "" } ]
[ { "docid": "172a35c941407bb09c8d41953dfc6d37", "text": "Multi-task learning (MTL) is a machine learning paradigm that improves the performance of each task by exploiting useful information contained in multiple related tasks. However, the relatedness of tasks can be exploited by attackers to launch data poisoning attacks, which has been demonstrated a big threat to single-task learning. In this paper, we provide the first study on the vulnerability of MTL. Specifically, we focus on multi-task relationship learning (MTRL) models, a popular subclass of MTL models where task relationships are quantized and are learned directly from training data. We formulate the problem of computing optimal poisoning attacks on MTRL as a bilevel program that is adaptive to arbitrary choice of target tasks and attacking tasks. We propose an efficient algorithm called PATOM for computing optimal attack strategies. PATOM leverages the optimality conditions of the subproblem of MTRL to compute the implicit gradients of the upper level objective function. Experimental results on realworld datasets show that MTRL models are very sensitive to poisoning attacks and the attacker can significantly degrade the performance of target tasks, by either directly poisoning the target tasks or indirectly poisoning the related tasks exploiting the task relatedness. We also found that the tasks being attacked are always strongly correlated, which provides a clue for defending against such attacks.", "title": "" }, { "docid": "164fd7be21190314a27bacb4dec522c5", "text": "The relative ineffectiveness of information retrieval systems is largely caused by the inaccuracy with which a query formed by a few keywords models the actual user information need. One well known method to overcome this limitation is automatic query expansion (AQE), whereby the user’s original query is augmented by new features with a similar meaning. AQE has a long history in the information retrieval community but it is only in the last years that it has reached a level of scientific and experimental maturity, especially in laboratory settings such as TREC. This survey presents a unified view of a large number of recent approaches to AQE that leverage various data sources and employ very different principles and techniques. The following questions are addressed. Why is query expansion so important to improve search effectiveness? What are the main steps involved in the design and implementation of an AQE component? What approaches to AQE are available and how do they compare? Which issues must still be resolved before AQE becomes a standard component of large operational information retrieval systems (e.g., search engines)?", "title": "" }, { "docid": "4b6a4f9d91bc76c541f4879a1a684a3f", "text": "Query auto-completion (QAC) is one of the most prominent features of modern search engines. The list of query candidates is generated according to the prefix entered by the user in the search box and is updated on each new key stroke. Query prefixes tend to be short and ambiguous, and existing models mostly rely on the past popularity of matching candidates for ranking. However, the popularity of certain queries may vary drastically across different demographics and users. 
For instance, while instagram and imdb have comparable popularities overall and are both legitimate candidates to show for prefix i, the former is noticeably more popular among young female users, and the latter is more likely to be issued by men.\n In this paper, we present a supervised framework for personalizing auto-completion ranking. We introduce a novel labelling strategy for generating offline training labels that can be used for learning personalized rankers. We compare the effectiveness of several user-specific and demographic-based features and show that among them, the user's long-term search history and location are the most effective for personalizing auto-completion rankers. We perform our experiments on the publicly available AOL query logs, and also on the larger-scale logs of Bing. The results suggest that supervised rankers enhanced by personalization features can significantly outperform the existing popularity-based base-lines, in terms of mean reciprocal rank (MRR) by up to 9%.", "title": "" }, { "docid": "2c8dc61a5dbdfcf8f086a5e6a0d920c1", "text": "This work achieves a two-and-a-half-dimensional (2.5D) wafer-level radio frequency (RF) energy harvesting rectenna module with a compact size and high power conversion efficiency (PCE) that integrates a 2.45 GHz antenna in an integrated passive device (IPD) and a rectifier in a tsmcTM 0.18 μm CMOS process. The proposed rectifier provides a master-slave voltage doubling full-wave topology which can reach relatively high PCE by means of a relatively simple circuitry. The IPD antenna was stacked on top of the CMOS rectifier. The rectenna (including an antenna and rectifier) achieves an output voltage of 1.2 V and PCE of 47 % when the operation frequency is 2.45 GHz, with −12 dBm input power. The peak efficiency of the circuit is 83 % with −4 dBm input power. The die size of the RF harvesting module is less than 1 cm2. The performance of this module makes it possible to energy mobile device and it is also very suitable for wearable and implantable wireless sensor networks (WSN).", "title": "" }, { "docid": "de24242bef4464a0126ce3806b795ac8", "text": "Music must first be defined and distinguished from speech, and from animal and bird cries. We discuss the stages of hominid anatomy that permit music to be perceived and created, with the likelihood of both Homo neanderthalensis and Homo sapiens both being capable. The earlier hominid ability to emit sounds of variable pitch with some meaning shows that music at its simplest level must have predated speech. The possibilities of anthropoid motor impulse suggest that rhythm may have preceded melody, though full control of rhythm may well not have come any earlier than the perception of music above. There are four evident purposes for music: dance, ritual, entertainment personal, and communal, and above all social cohesion, again on both personal and communal levels. We then proceed to how instruments began, with a brief survey of the surviving examples from the Mousterian period onward, including the possible Neanderthal evidence and the extent to which they showed “artistic” potential in other fields. We warn that our performance on replicas of surviving instruments may bear little or no resemblance to that of the original players. We continue with how later instruments, strings, and skin-drums began and developed into instruments we know in worldwide cultures today. 
The sound of music is then discussed, scales and intervals, and the lack of any consistency of consonant tonality around the world. This is followed by iconographic evidence of the instruments of later antiquity into the European Middle Ages, and finally, the history of public performance, again from the possibilities of early humanity into more modern times. This paper draws the ethnomusicological perspective on the entire development of music, instruments, and performance, from the times of H. neanderthalensis and H. sapiens into those of modern musical history, and it is written with the deliberate intention of informing readers who are without special education in music, and providing necessary information for inquiries into the origin of music by cognitive scientists.", "title": "" }, { "docid": "5c690df3977b078243b9cb61e5e712a6", "text": "Computing indirect illumination is a challenging and complex problem for real-time rendering in 3D applications. We present a global illumination approach that computes indirect lighting in real time using a simplified version of the outgoing radiance and the scene stored in voxels. This approach comprehends two-bounce indirect lighting for diffuse, specular and emissive materials. Our voxel structure is based on a directional hierarchical structure stored in 3D textures with mipmapping, the structure is updated in real time utilizing the GPU which enables us to approximate indirect lighting for dynamic scenes. Our algorithm employs a voxel-light pass which calculates voxel direct and global illumination for the simplified outgoing radiance. We perform voxel cone tracing within this voxel structure to approximate different lighting phenomena such as ambient occlusion, soft shadows and indirect lighting. We demonstrate with different tests that our developed approach is capable to compute global illumination of complex scenes on interactive times.", "title": "" }, { "docid": "2dd4a6736fcbd3bbb5b126f3ffcdda10", "text": "Recent research leverages results from the continuous-armed bandit literature to create a reinforcement-learning algorithm for continuous state and action spaces. Initially proposed in a theoretical setting, we provide the first examination of the empirical properties of the algorithm. Through experimentation, we demonstrate the effectiveness of this planning method when coupled with exploration and model learning and show that, in addition to its formal guarantees, the approach is very competitive with other continuous-action reinforcement", "title": "" }, { "docid": "38c5b9a1f696e060c4cda4cc19b6fa96", "text": "This study aims to give information about the effect of green marketing on customers purchasing behaviors. First of all, environment and environmental problems, one of the reason why the green marketing emerged, are mentioned, and then the concepts of green marketing and green consumer are explained. Then together with the hypothesis developed literature review has been continued and studies conducted on this subject until now were mentioned. In the last section, moreover, questionnaire results conducted on 540 consumers in Istanbul are evaluated statistically. According to the results of the analysis, environmental awareness, green product features, green promotion activities and green price affect green purchasing behaviors of the consumers in positive way. 
Demographic characteristics have a moderate effect on the model.", "title": "" }, { "docid": "ec7931f1a56bf7d4dd6cc1a5cb2d0625", "text": "Modern life is intimately linked to the availability of fossil fuels, which continue to meet the world's growing energy needs even though their use drives climate change, exhausts finite reserves and contributes to global political strife. Biofuels made from renewable resources could be a more sustainable alternative, particularly if sourced from organisms, such as algae, that can be farmed without using valuable arable land. Strain development and process engineering are needed to make algal biofuels practical and economically viable.", "title": "" }, { "docid": "f88b8c7cbabda618f59e75357c1d8262", "text": "A security sandbox is a technology that is often used to detect advanced malware. However, current sandboxes are highly dependent on VM hypervisor types and versions. Thus, in this paper, we introduce a new sandbox design, using memory forensics techniques, to provide an agentless sandbox solution that is independent of the VM hypervisor. In particular, we leverage the VM introspection method to monitor malware running memory data outside the VM and analyze its system behaviors, such as process, file, registry, and network activities. We evaluate the feasibility of this method using 20 advanced and 8 script-based malware samples. We furthermore demonstrate how to analyze malware behavior from memory and verify the results with three different sandbox types. The results show that we can analyze suspicious malware activities, which is also helpful for cyber security defense.", "title": "" }, { "docid": "7624a6ca581c0096c6e5bc484a3d772e", "text": "We describe two systems for text simplification using typed dependency structures, one that performs lexical and syntactic simplification, and another that performs sentence compression optimised to satisfy global text constraints such as lexical density, the ratio of difficult words, and text length. We report a substantial evaluation that demonstrates the superiority of our systems, individually and in combination, over the state of the art, and also report a comprehension based evaluation of contemporary automatic text simplification systems with target non-native readers.", "title": "" }, { "docid": "f4720df58360b726bf2a128547f6d9d1", "text": "Iris texture is commonly thought to be highly discriminative between eyes and stable over an individual's lifetime, which makes the iris particularly suitable for personal identification. However, iris texture also contains more information related to genes, which has been demonstrated by the successful use of ethnic and gender classification based on the iris. In this paper, we propose a novel ethnic classification method based on supervised codebook optimizing and Locality-constrained Linear Coding (LLC). The optimized codebook is composed of codes which are distinctive or mutual. Iris images from Asian and non-Asian subjects are classified into two classes in the experiments. Extensive experimental results show that the proposed method achieves an encouraging classification rate and largely improves ethnic classification performance compared to existing algorithms.", "title": "" }, { "docid": "55eec4fc4a211cee6b735d1884310cc0", "text": "Understanding driving behaviors is essential for improving safety and mobility of our transportation systems. Data is usually collected via simulator-based studies or naturalistic driving studies. 
Those techniques allow for understanding relations between demographics, road conditions and safety. On the other hand, they are very costly and time consuming. Thanks to the ubiquity of smartphones, we have an opportunity to substantially complement more traditional data collection techniques with data extracted from phone sensors, such as GPS, accelerometer gyroscope and camera. We developed statistical models that provided insight into driver behavior in the San Francisco metro area based on tens of thousands of driver logs. We used novel data sources to support our work. We used cell phone sensor data drawn from five hundred drivers in San Francisco to understand the speed of traffic across the city as well as the maneuvers of drivers in different areas. Specifically, we clustered drivers based on their driving behavior. We looked at driver norms by street and flagged driving behaviors that deviated from the norm.", "title": "" }, { "docid": "c9b7ddb6eb1431fcc508d29a1f25104b", "text": "The problem of finding the missing values of a matrix given a few of its entries, called matrix completion, has gathered a lot of attention in the recent years. Although the problem under the standard low rank assumption is NP-hard, Candès and Recht showed that it can be exactly relaxed if the number of observed entries is sufficiently large. In this work, we introduce a novel matrix completion model that makes use of proximity information about rows and columns by assuming they form communities. This assumption makes sense in several real-world problems like in recommender systems, where there are communities of people sharing preferences, while products form clusters that receive similar ratings. Our main goal is thus to find a low-rank solution that is structured by the proximities of rows and columns encoded by graphs. We borrow ideas from manifold learning to constrain our solution to be smooth on these graphs, in order to implicitly force row and column proximities. Our matrix recovery model is formulated as a convex non-smooth optimization problem, for which a well-posed iterative scheme is provided. We study and evaluate the proposed matrix completion on synthetic and real data, showing that the proposed structured low-rank recovery model outperforms the standard matrix completion model in many situations.", "title": "" }, { "docid": "fdb9da0c4b6225c69de16411c79ac9dc", "text": "Phylogenetic analyses reveal the evolutionary derivation of species. A phylogenetic tree can be inferred from multiple sequence alignments of proteins or genes. The alignment of whole genome sequences of higher eukaryotes is a computational intensive and ambitious task as is the computation of phylogenetic trees based on these alignments. To overcome these limitations, we here used an alignment-free method to compare genomes of the Brassicales clade. For each nucleotide sequence a Chaos Game Representation (CGR) can be computed, which represents each nucleotide of the sequence as a point in a square defined by the four nucleotides as vertices. Each CGR is therefore a unique fingerprint of the underlying sequence. If the CGRs are divided by grid lines each grid square denotes the occurrence of oligonucleotides of a specific length in the sequence (Frequency Chaos Game Representation, FCGR). Here, we used distance measures between FCGRs to infer phylogenetic trees of Brassicales species. 
Three types of data were analyzed because of their different characteristics: (A) Whole genome assemblies as far as available for species belonging to the Malvidae taxon. (B) EST data of species of the Brassicales clade. (C) Mitochondrial genomes of the Rosids branch, a supergroup of the Malvidae. The trees reconstructed based on the Euclidean distance method are in general agreement with single gene trees. The Fitch-Margoliash and Neighbor joining algorithms resulted in similar to identical trees. Here, for the first time we have applied the bootstrap re-sampling concept to trees based on FCGRs to determine the support of the branchings. FCGRs have the advantage that they are fast to calculate, and can be used as additional information to alignment based data and morphological characteristics to improve the phylogenetic classification of species in ambiguous cases.", "title": "" }, { "docid": "898efbe8e80d29b1a10e1bed90852dbc", "text": "The aim of this work is to investigate the effectiveness of novel human-machine interaction paradigms for eHealth applications. In particular, we propose to replace usual human-machine interaction mechanisms with an approach that leverages a chat-bot program, opportunely designed and trained in order to act and interact with patients as a human being. Moreover, we have validated the proposed interaction paradigm in a real clinical context, where the chat-bot has been employed within a medical decision support system having the goal of providing useful recommendations concerning several disease prevention pathways. More in details, the chat-bot has been realized to help patients in choosing the most proper disease prevention pathway by asking for different information (starting from a general level up to specific pathways questions) and to support the related prevention check-up and the final diagnosis. Preliminary experiments about the effectiveness of the proposed approach are reported.", "title": "" }, { "docid": "9e5c123b6f744037436e0d5c917e8640", "text": "Relational databases have limited support for data collaboration, where teams collaboratively curate and analyze large datasets. Inspired by software version control systems like git, we propose (a) a dataset version control system, giving users the ability to create, branch, merge, difference and search large, divergent collections of datasets, and (b) a platform, DATAHUB, that gives users the ability to perform collaborative data analysis building on this version control system. We outline the challenges in providing dataset version control at scale.", "title": "" }, { "docid": "a62c1426e09ab304075e70b61773914f", "text": "Converting a scanned or shot line drawing image into a vector graph can facilitate further editand reuse, making it a hot research topic in computer animation and image processing. Besides avoiding noiseinfluence, its main challenge is to preserve the topological structures of the original line drawings, such as linejunctions, in the procedure of obtaining a smooth vector graph from a rough line drawing. In this paper, wepropose a vectorization method of line drawings based on junction analysis, which retains the original structureunlike done by existing methods. We first combine central line tracking and contour tracking, which allowsus to detect the encounter of line junctions when tracing a single path. 
Then, a junction analysis approachbased on intensity polar mapping is proposed to compute the number and orientations of junction branches.Finally, we make use of bending degrees of contour paths to compute the smoothness between adjacent branches,which allows us to obtain the topological structures corresponding to the respective ones in the input image.We also introduce a correction mechanism for line tracking based on a quadratic surface fitting, which avoidsaccumulating errors of traditional line tracking and improves the robustness for vectorizing rough line drawings.We demonstrate the validity of our method through comparisons with existing methods, and a large amount ofexperiments on both professional and amateurish line drawing images. 本文提出一种基于交叉点分析的线条矢量化方法, 克服了现有方法难以保持拓扑结构的不足。通过中心路径跟踪和轮廓路径跟踪相结合的方式, 准确检测交叉点的出现提出一种基于极坐标亮度映射的交叉点分析方法, 计算交叉点的分支数量和朝向; 利用轮廓路径的弯曲角度判断交叉点相邻分支间的光顺度, 从而获得与原图一致的拓扑结构。", "title": "" }, { "docid": "aa83af152739ac01ba899d186832ee62", "text": "Predicting user \"ratings\" on items is a crucial task in recommender systems. Matrix factorization methods that computes a low-rank approximation of the incomplete user-item rating matrix provide state-of-the-art performance, especially for users and items with several past ratings (warm starts). However, it is a challenge to generalize such methods to users and items with few or no past ratings (cold starts). Prior work [4][32] have generalized matrix factorization to include both user and item features for performing better regularization of factors as well as provide a model for smooth transition from cold starts to warm starts. However, the features were incorporated via linear regression on factor estimates. In this paper, we generalize this process to allow for arbitrary regression models like decision trees, boosting, LASSO, etc. The key advantage of our approach is the ease of computing --- any new regression procedure can be incorporated by \"plugging\" in a standard regression routine into a few intermediate steps of our model fitting procedure. With this flexibility, one can leverage a large body of work on regression modeling, variable selection, and model interpretation. We demonstrate the usefulness of this generalization using the MovieLens and Yahoo! Buzz datasets.", "title": "" }, { "docid": "7adf46bb0a4ba677e58aee9968d06293", "text": "BACKGROUND\nWork-family conflict is a type of interrole conflict that occurs as a result of incompatible role pressures from the work and family domains. Work role characteristics that are associated with work demands refer to pressures arising from excessive workload and time pressures. Literature suggests that work demands such as number of hours worked, workload, shift work are positively associated with work-family conflict, which, in turn is related to poor mental health and negative organizational attitudes. The role of social support has been an issue of debate in the literature. This study examined social support both as a moderator and a main effect in the relationship among work demands, work-to-family conflict, and satisfaction with job and life.\n\n\nOBJECTIVES\nThis study examined the extent to which work demands (i.e., work overload, irregular work schedules, long hours of work, and overtime work) were related to work-to-family conflict as well as life and job satisfaction of nurses in Turkey. 
The role of supervisory support in the relationship among work demands, work-to-family conflict, and satisfaction with job and life was also investigated.\n\n\nDESIGN AND METHODS\nThe sample was comprised of 243 participants: 106 academic nurses (43.6%) and 137 clinical nurses (56.4%). All of the respondents were female. The research instrument was a questionnaire comprising nine parts. The variables were measured under four categories: work demands, work support (i.e., supervisory support), work-to-family conflict and its outcomes (i.e., life and job satisfaction).\n\n\nRESULTS\nThe structural equation modeling results showed that work overload and irregular work schedules were the significant predictors of work-to-family conflict and that work-to-family conflict was associated with lower job and life satisfaction. Moderated multiple regression analyses showed that social support from the supervisor did not moderate the relationships among work demands, work-to-family conflict, and satisfaction with job and life. Exploratory analyses suggested that social support could be best conceptualized as the main effect directly influencing work-to-family conflict and job satisfaction.\n\n\nCONCLUSION\nNurses' psychological well-being and organizational attitudes could be enhanced by rearranging work conditions to reduce excessive workload and irregular work schedule. Also, leadership development programs should be implemented to increase the instrumental and emotional support of the supervisors.", "title": "" } ]
scidocsrr
d8f86e5b201d06d07ec0bf34237f298e
Wnt signaling pathway participates in valproic acid-induced neuronal differentiation of neural stem cells.
[ { "docid": "a797ab99ed7983bd7372de56d34caca1", "text": "The discovery of stem cells that can generate neural tissue has raised new possibilities for repairing the nervous system. A rush of papers proclaiming adult stem cell plasticity has fostered the notion that there is essentially one stem cell type that, with the right impetus, can create whatever progeny our heart, liver or other vital organ desires. But studies aimed at understanding the role of stem cells during development have led to a different view — that stem cells are restricted regionally and temporally, and thus not all stem cells are equivalent. Can these views be reconciled?", "title": "" } ]
[ { "docid": "d70a74e37f625f542f8b16e3b0b0e647", "text": "Word segmentation is the first step of any tasks in Vietnamese language processing. This paper reviews state-of-the-art approaches and systems for word segmentation in Vietnamese. To have an overview of all stages from building corpora to developing toolkits, we discuss building the corpus stage, approaches applied to solve the word segmentation and existing toolkits to segment words in Vietnamese sentences. In addition, this study shows clearly the motivations on building corpus and implementing machine learning techniques to improve the accuracy for Vietnamese word segmentation. According to our observation, this study also reports a few of achievements and limitations in existing Vietnamese word segmentation systems.", "title": "" }, { "docid": "b59a2c49364f3e95a2c030d800d5f9ce", "text": "An algorithm with linear filters and morphological operations has been proposed for automatic fabric defect detection. The algorithm is applied off-line and real-time to denim fabric samples for five types of defects. All defect types have been detected successfully and the defective regions are labeled. The defective fabric samples are then classified by using feed forward neural network method. Both defect detection and classification application performances are evaluated statistically. Defect detection performance of real time and off-line applications are obtained as 88% and 83% respectively. The defective images are classified with an average accuracy rate of 96.3%.", "title": "" }, { "docid": "4a89f20c4b892203be71e3534b32449c", "text": "This paper draws together knowledge from a variety of fields to propose that innovation management can be viewed as a form of organisational capability. Excellent companies invest and nurture this capability, from which they execute effective innovation processes, leading to innovations in new product, services and processes, and superior business performance results. An extensive review of the literature on innovation management, along with a case study of Cisco Systems, develops a conceptual model of the firm as an innovation engine. This new operating model sees substantial investment in innovation capability as the primary engine for wealth creation, rather than the possession of physical assets. Building on the dynamic capabilities literature, an “innovation capability” construct is proposed with seven elements. These are vision and strategy, harnessing the competence base, organisational intelligence, creativity and idea management, organisational structures and systems, culture and climate, and management of technology.", "title": "" }, { "docid": "381a11fe3d56d5850ec69e2e9427e03f", "text": "We present an approximation algorithm that takes a pool of pre-trained models as input and produces from it a cascaded model with similar accuracy but lower average-case cost. Applied to state-of-the-art ImageNet classification models, this yields up to a 2x reduction in floating point multiplications, and up to a 6x reduction in average-case memory I/O. The auto-generated cascades exhibit intuitive properties, such as using lower-resolution input for easier images and requiring higher prediction confidence when using a computationally cheaper model.", "title": "" }, { "docid": "b8ce74fc2a02a1a5c2d93e2922529bb0", "text": "The basic evolution of direct torque control from other drive types is explained. Qualitative comparisons with other drives are included. 
The basic concepts behind direct torque control are clarified. An explanation of direct self-control and the field orientation concepts implemented in the adaptive motor model block is presented. The reliance of the control method on fast processing techniques is stressed. The theoretical foundations for the control concept are provided in summary format. Information on the ancillary control blocks outside the basic direct torque control is given. The implementation of special functions directly related to the control approach is described. Finally, performance data from an actual system is presented.", "title": "" }, { "docid": "4579075f2afcd26058abb875e71ee4c3", "text": "So, what comes next? Where are we headed with AI, and what level of responsibility do the designers and providers have with managing AI technology? Will we control AI technology or will it control us? How do we handle the economic Curated by Andrew Boyarsky, MSM, PMP Clinical Associate Professor, and Academic Director of the MS in Enterprise Risk Management, Katz School of Graduate and Professional Studies, Yeshiva University “Every major player is working on this technology of artificial intelligence. As of now, it's benign... but I would say that the day is not far off when artificial intelligence as applied to cyber warfare becomes a threat to everybody.”", "title": "" }, { "docid": "20f2c4ca66ff81fee8092e159bb00d94", "text": "Understanding procedural language requires anticipating the causal effects of actions, even when they are not explicitly stated. In this work, we introduce Neural Process Networks to understand procedural text through (neural) simulation of action dynamics. Our model complements existing memory architectures with dynamic entity tracking by explicitly modeling actions as state transformers. The model updates the states of the entities by executing learned action operators. Empirical results demonstrate that our proposed model can reason about the unstated causal effects of actions, allowing it to provide more accurate contextual information for understanding and generating procedural text, all while offering more interpretable internal representations than existing alternatives.", "title": "" }, { "docid": "4f400f8e774ebd050ba914011da73514", "text": "This paper summarizes the method of polyp detection in colonoscopy images and provides preliminary results to participate in ISBI 2015 Grand Challenge on Automatic Polyp Detection in Colonoscopy videos. The key aspect of the proposed method is to learn hierarchical features using convolutional neural network. The features are learned in different scales to provide scale-invariant features through the convolutional neural network, and then each pixel in the colonoscopy image is classified as polyp pixel or non-polyp pixel through fully connected network. The result is refined via smooth filtering and thresholding step. Experimental result shows that the proposed neural network can classify patches of polyp and non-polyp region with an accuracy of about 90%.", "title": "" }, { "docid": "0eb3d3c33b62c04ed5d34fc3a38b5182", "text": "We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. 
We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.", "title": "" }, { "docid": "19bcbc9630cf5d8c6d033751ad268a16", "text": "The economic literature on standards has focused recently on the possibility of market failure with respect to the choice of a standard. In its strongest form, the argument is essentially this: an established standard can persist over a challenger, even where all users prefer a world dominated by the challenger, if users are unable to coordinate their choices. For example, each of us might prefer to have Beta-format videocassette recorders as long as prerecorded Beta tapes continue to be produced, but individually we do not buy Beta machines because we don't think enough others will buy Beta machines to sustain the prerecorded tape supply. I don't buy a Beta format machine because I think that you won't you don't buy one because you think that I won't. In the end, we both turn out to be correct, but we are both worse off than we might have been. This, of course, is a catch-22 that we might suppose to be common in the economy. There will be no cars until there are gas stations there will be no gas stations until there are cars. Without some way out of this conundrum, joyriding can never become a favorite activity of teenagers.1", "title": "" }, { "docid": "efb24b1c6128ecc10b18f35168433b80", "text": "Automatic vehicle classification is crucial to intelligent transportation system, especially for vehicle-tracking by police. Due to the complex lighting and image capture conditions, image-based vehicle classification in real-world environments is still a challenging task and the performance is far from being satisfactory. However, owing to the mechanism of visual attention, the human vision system shows remarkable capability compared with the computer vision system, especially in distinguishing nuances processing. Inspired by this mechanism, we propose a convolutional neural network (CNN) model of visual attention for image classification. A visual attention-based image processing module is used to highlight one part of an image and weaken the others, generating a focused image. Then the focused image is input into the CNN to be classified. According to the classification probability distribution, we compute the information entropy to guide a reinforcement learning agent to achieve a better policy for image classification to select the key parts of an image. 
Systematic experiments on a surveillance-nature dataset which contains images captured by surveillance cameras in the front view, demonstrate that the proposed model is more competitive than the large-scale CNN in vehicle classification tasks.", "title": "" }, { "docid": "e43056aad827cd5eea146418aa89ec09", "text": "The detection and analysis of clusters has become commonplace within geographic information science and has been applied in epidemiology, crime prevention, ecology, demography and other fields. One of the many methods for detecting and analyzing these clusters involves searching the dataset with a flock of boids (bird objects). While boids are effective at searching the dataset once their behaviors are properly configured, it can be difficult to find the proper configuration. Since genetic algorithms have been successfully used to configure neural networks, they may also be useful for configuring parameters guiding boid behaviors. In this paper, we develop a genetic algorithm to evolve the ideal boid behaviors. Preliminary results indicate that, even though the genetic algorithm does not return the same configuration each time, it does converge on configurations that improve over the parameters used when boids were initially proposed for geographic cluster detection. Also, once configured, the boids perform as well as other cluster detection methods. Continued work with this system could determine which parameters have a greater effect on the results of the boid system and could also discover rules for configuring a flock of boids directly from properties of the dataset, such as point density, rather than requiring the time-consuming process of optimizing the parameters for each new dataset.", "title": "" }, { "docid": "96db04b5f86b137328b21471fca221d0", "text": "Web frameworks involve many aspects, e.g., forms, model, testing, and migration. Developers differ in terms of their per-aspect experience. We describe a methodology for the identification of relevant aspects of a web app framework, measurement of experience atoms per developer and per aspect based on the commit history of actual projects, and the compilation of developer profiles for summarizing the relevance of different aspects and the developers’ contribution to the project. Measurement relies on a rule-based language. Our case study is concerned with the Pythonbased Django web app framework and the open source Django-Oscar project from which experience atoms were extracted.", "title": "" }, { "docid": "d1ec971608eda914e74f9ffc181c9b9f", "text": "The steady increase in photovoltaic (PV) installations calls for new and better control methods in respect to the utility grid connection. Limiting the harmonic distortion is essential to the power quality, but other requirements also contribute to a more safe grid-operation, especially in dispersed power generation networks. For instance, the knowledge of the utility impedance at the fundamental frequency can be used to detect a utility failure. A PV-inverter with this feature can anticipate a possible network problem and decouple it in time. This paper describes the digital implementation of a PV-inverter with different advanced, robust control strategies and an embedded online technique to determine the utility grid impedance. By injecting an interharmonic current and measuring the voltage response it is possible to estimate the grid impedance at the fundamental frequency. 
The presented technique, which is implemented with the existing sensors and the CPU of the PV-inverter, provides a fast and low cost approach for online impedance measurement, which may be used for detection of islanding operation. Practical tests on an existing PV-inverter validate the control methods, the impedance measurement, and the islanding detection.", "title": "" }, { "docid": "27bcbde431c340db7544b58faa597fb7", "text": "Face and eye detection algorithms are deployed in a wide variety of applications. Unfortunately, there has been no quantitative comparison of how these detectors perform under difficult circumstances. We created a dataset of low light and long distance images which possess some of the problems encountered by face and eye detectors solving real world problems. The dataset we created is composed of reimaged images (photohead) and semi-synthetic heads imaged under varying conditions of low light, atmospheric blur, and distances of 3m, 50m, 80m, and 200m. This paper analyzes the detection and localization performance of the participating face and eye algorithms compared with the Viola Jones detector and four leading commercial face detectors. Performance is characterized under the different conditions and parameterized by per-image brightness and contrast. In localization accuracy for eyes, the groups/companies focusing on long-range face detection outperform leading commercial applications.", "title": "" }, { "docid": "2c48e9908078ca192ff191121ce90e21", "text": "In the hierarchy of data, information and knowledge, computational methods play a major role in the initial processing of data to extract information, but they alone become less effective to compile knowledge from information. The Kyoto Encyclopedia of Genes and Genomes (KEGG) resource (http://www.kegg.jp/ or http://www.genome.jp/kegg/) has been developed as a reference knowledge base to assist this latter process. In particular, the KEGG pathway maps are widely used for biological interpretation of genome sequences and other high-throughput data. The link from genomes to pathways is made through the KEGG Orthology system, a collection of manually defined ortholog groups identified by K numbers. To better automate this interpretation process the KEGG modules defined by Boolean expressions of K numbers have been expanded and improved. Once genes in a genome are annotated with K numbers, the KEGG modules can be computationally evaluated revealing metabolic capacities and other phenotypic features. The reaction modules, which represent chemical units of reactions, have been used to analyze design principles of metabolic networks and also to improve the definition of K numbers and associated annotations. For translational bioinformatics, the KEGG MEDICUS resource has been developed by integrating drug labels (package inserts) used in society.", "title": "" }, { "docid": "a60128a5b5616added12f62e801671f0", "text": "Research shows that many organizations overlook needs and opportunities to strengthen ethics. Barriers can make it hard to see the need for stronger ethics and even harder to take effective action. These barriers include the organization's misleading use of language, misuse of an ethics code, culture of silence, strategies of justification, institutional betrayal, and ethical fallacies. Ethics placebos tend to take the place of steps to see, solve, and prevent problems. 
This article reviews relevant research and specific steps that create change.", "title": "" }, { "docid": "1977e7813b15ffb3a4238f3ed40f0e1f", "text": "Despite the existence of standard protocol, many stabilization centers (SCs) continue to experience high mortality of children receiving treatment for severe acute malnutrition. Assessing treatment outcomes and identifying predictors may help to overcome this problem. Therefore, a 30-month retrospective cohort study was conducted among 545 randomly selected medical records of children <5 years of age admitted to SCs in Gedeo Zone. Data was entered by Epi Info version 7 and analyzed by STATA version 11. Cox proportional hazards model was built by forward stepwise procedure and compared by the likelihood ratio test and Harrell's concordance, and fitness was checked by Cox-Snell residual plot. During follow-up, 51 (9.3%) children had died, and 414 (76%) and 26 (4.8%) children had recovered and defaulted (missed follow-up for 2 consecutive days), respectively. The survival rates at the end of the first, second and third weeks were 95.3%, 90% and 85%, respectively, and the overall mean survival time was 79.6 days. Age <24 months (adjusted hazard ratio [AHR] =2.841, 95% confidence interval [CI] =1.101-7.329), altered pulse rate (AHR =3.926, 95% CI =1.579-9.763), altered temperature (AHR =7.173, 95% CI =3.05-16.867), shock (AHR =3.805, 95% CI =1.829-7.919), anemia (AHR =2.618, 95% CI =1.148-5.97), nasogastric tube feeding (AHR =3.181, 95% CI =1.18-8.575), hypoglycemia (AHR =2.74, 95% CI =1.279-5.87) and treatment at hospital stabilization center (AHR =4.772, 95% CI =1.638-13.9) were independent predictors of mortality. The treatment outcomes and incidence of death were in the acceptable ranges of national and international standards. Intervention to further reduce deaths has to focus on young children with comorbidities and altered general conditions.", "title": "" }, { "docid": "061face2272a6c5a31c6fca850790930", "text": "Antibiotic feeding studies were conducted on the firebrat,Thermobia domestica (Zygentoma, Lepismatidae) to determine if the insect's gut cellulases were of insect or microbial origin. Firebrats were fed diets containing either nystatin, metronidazole, streptomycin, tetracycline, or an antibiotic cocktail consisting of all four antibiotics, and then their gut microbial populations and gut cellulase levels were monitored and compared with the gut microbial populations and gut cellulase levels in firebrats feeding on antibiotic-free diets. Each antibiotic significantly reduced the firebrat's gut micro-flora. Nystatin reduced the firebrat's viable gut fungi by 89%. Tetracycline and the antibiotic cocktail reduced the firebrat's viable gut bacteria by 81% and 67%, respectively, and metronidazole, streptomycin, tetracycline, and the antibiotic cocktail reduced the firebrat's total gut flora by 35%, 32%, 55%, and 64%, respectively. Although antibiotics significantly reduced the firebrat's viable and total gut flora, gut cellulase levels in firebrats fed antibiotics were not significantly different from those in firebrats on an antibiotic-free diet. Furthermore, microbial populations in the firebrat's gut decreased significantly over time, even in firebrats feeding on the antibiotic-free diet, without corresponding decreases in gut cellulase levels. Based on this evidence, we conclude that the gut cellulases of firebrats are of insect origin. 
This conclusion implies that symbiont-independent cellulose digestion is a primitive trait in insects and that symbiont-mediated cellulose digestion is a derived condition.", "title": "" } ]
scidocsrr