| query_id (string, 32 chars) | query (string, 6–5.38k chars) | positive_passages (list, 1–22 items) | negative_passages (list, 9–100 items) | subset (string, 7 classes) |
|---|---|---|---|---|
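Each row pairs a free-text query with annotated `positive_passages` and `negative_passages`, the shape typically used to train or evaluate passage rerankers. As a rough illustration only, the sketch below assumes the rows are available locally as JSON Lines with exactly the columns above; the file name `rows.jsonl`, the `max_negatives` cap, and the triple format are assumptions for the example, not part of the dataset itself.

```python
import json
from itertools import product

def load_rows(path="rows.jsonl"):
    """Yield one record per line. Assumes each line is a JSON object with the
    columns shown above (query_id, query, positive_passages, negative_passages,
    subset). The file name is hypothetical."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

def make_triples(row, max_negatives=4):
    """Pair every positive passage text with a few negatives from the same row.
    The cap of 4 negatives per positive is an arbitrary illustrative choice."""
    positives = [p["text"] for p in row["positive_passages"]]
    negatives = [n["text"] for n in row["negative_passages"]][:max_negatives]
    for pos, neg in product(positives, negatives):
        yield {"query": row["query"], "positive": pos, "negative": neg}

if __name__ == "__main__":
    for row in load_rows():
        for triple in make_triples(row):
            print(triple["query"][:60], "|", triple["positive"][:60], "|", triple["negative"][:60])
        break  # inspect only the first record
```

If the data is instead hosted as a hub dataset, loading it with the usual dataset-loading utilities would return the same columns; since no repository name appears here, the plain-JSON reading above is the safer sketch.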
618d6f15c4294a7516991873efc44893
|
Field Mice: Extracting Hand Geometry from Electric Field Measurements
|
[
{
"docid": "24f141bd7a29bb8922fa010dd63181a6",
"text": "This paper reports on the development of a hand to machine interface device that provides real-time gesture, position and orientation information. The key element is a glove and the device as a whole incorporates a collection of technologies. Analog flex sensors on the glove measure finger bending. Hand position and orientation are measured either by ultrasonics, providing five degrees of freedom, or magnetic flux sensors, which provide six degrees of freedom. Piezoceramic benders provide the wearer of the glove with tactile feedback. These sensors are mounted on the light-weight glove and connected to the driving hardware via a small cable.\nApplications of the glove and its component technologies include its use in conjunction with a host computer which drives a real-time 3-dimensional model of the hand allowing the glove wearer to manipulate computer-generated objects as if they were real, interpretation of finger-spelling, evaluation of hand impairment in addition to providing an interface to a visual programming language.",
"title": ""
}
] |
[
{
"docid": "18faba65741b6871517c8050aa6f3a45",
"text": "Individuals differ in the manner they approach decision making, namely their decision-making styles. While some people typically make all decisions fast and without hesitation, others invest more effort into deciding even about small things and evaluate their decisions with much more scrutiny. The goal of the present study was to explore the relationship between decision-making styles, perfectionism and emotional processing in more detail. Specifically, 300 college students majoring in social studies and humanities completed instruments designed for assessing maximizing, decision commitment, perfectionism, as well as emotional regulation and control. The obtained results indicate that maximizing is primarily related to one dimension of perfectionism, namely the concern over mistakes and doubts, as well as emotional regulation and control. Furthermore, together with the concern over mistakes and doubts, maximizing was revealed as a significant predictor of individuals' decision commitment. The obtained findings extend previous reports regarding the association between maximizing and perfectionism and provide relevant insights into their relationship with emotional regulation and control. They also suggest a need to further explore these constructs that are, despite their complex interdependence, typically investigated in separate contexts and domains.",
"title": ""
},
{
"docid": "accad42ca98cd758fd1132e51942cba8",
"text": "The accuracy of face alignment affects the performance of a face recognition system. Since face alignment is usually conducted using eye positions, an accurate eye localization algorithm is therefore essential for accurate face recognition. In this paper, we first study the impact of eye locations on face recognition accuracy, and then introduce an automatic technique for eye detection. The performance of our automatic eye detection technique is subsequently validated using FRGC 1.0 database. The validation shows that our eye detector has an overall 94.5% eye detection rate, with the detected eyes very close to the manually provided eye positions. In addition, the face recognition performance based on the automatic eye detection is shown to be comparable to that of using manually given eye positions.",
"title": ""
},
{
"docid": "1f5c52945d83872a93749adc0e1a0909",
"text": "Turmeric, derived from the plant Curcuma longa, is a gold-colored spice commonly used in the Indian subcontinent, not only for health care but also for the preservation of food and as a yellow dye for textiles. Curcumin, which gives the yellow color to turmeric, was first isolated almost two centuries ago, and its structure as diferuloylmethane was determined in 1910. Since the time of Ayurveda (1900 B.C) numerous therapeutic activities have been assigned to turmeric for a wide variety of diseases and conditions, including those of the skin, pulmonary, and gastrointestinal systems, aches, pains, wounds, sprains, and liver disorders. Extensive research within the last half century has proven that most of these activities, once associated with turmeric, are due to curcumin. Curcumin has been shown to exhibit antioxidant, antiinflammatory, antiviral, antibacterial, antifungal, and anticancer activities and thus has a potential against various malignant diseases, diabetes, allergies, arthritis, Alzheimer’s disease, and other chronic illnesses. Curcumin can be considered an ideal “Spice for Life”. Curcumin is the most important fraction of turmeric which is responsible for its biological activity. In the present work we have investigated the qualitative and quantitative determination of curcumin in the ethanolic extract of C.longa. Qualitative estimation was carried out by thin layer chromatographic (TLC) method. The total phenolic content of the ethanolic extract of C.longa was found to be 11.24 as mg GAE/g. The simultaneous determination of the pharmacologically important active curcuminoids viz. curcumin, demethoxycurcumin and bisdemethoxycurcumin in Curcuma longa was carried out by spectrophotometric and HPLC techniques. HPLC separation was performed on a Cyber Lab C-18 column (250 x 4.0 mm, 5μ) using acetonitrile and 0.1 % orthophosphoric acid solution in water in the ratio 60 : 40 (v/v) at flow rate of 0.5 mL/min. Detection of curcuminoids were performed at 425 nm.",
"title": ""
},
{
"docid": "6d8a413767d9fab8ef3ca22daaa0e921",
"text": "Query-oriented summarization addresses the problem of information overload and help people get the main ideas within a short time. Summaries are composed by sentences. So, the basic idea of composing a salient summary is to construct quality sentences both for user specific queries and multiple documents. Sentence embedding has been shown effective in summarization tasks. However, these methods lack of the latent topic structure of contents. Hence, the summary lies only on vector space can hardly capture multi-topical content. In this paper, our proposed model incorporates the topical aspects and continuous vector representations, which jointly learns semantic rich representations encoded by vectors. Then, leveraged by topic filtering and embedding ranking model, the summarization can select desirable salient sentences. Experiments demonstrate outstanding performance of our proposed model from the perspectives of prominent topics and semantic coherence.",
"title": ""
},
{
"docid": "88d9c077f588e9e02453bd0ea40cfcae",
"text": "This study explored the prevalence of and motivations behind 'drunkorexia' – restricting food intake prior to drinking alcohol. For both male and female university students (N = 3409), intentionally changing eating behaviour prior to drinking alcohol was common practice (46%). Analyses performed on a targeted sample of women (n = 226) revealed that food restriction prior to alcohol use was associated with greater symptomology than eating more food. Those who restrict eating prior to drinking to avoid weight gain scored higher on measures of disordered eating, whereas those who restrict to get intoxicated faster scored higher on measures of alcohol abuse.",
"title": ""
},
{
"docid": "e5691e6bb32f06a34fab7b692539d933",
"text": "Öz Supplier evaluation and selection includes both qualitative and quantitative criteria and it is considered as a complex Multi Criteria Decision Making (MCDM) problem. Uncertainty and impreciseness of data is an integral part of decision making process for a real life application. The fuzzy set theory allows making decisions under uncertain environment. In this paper, a trapezoidal type 2 fuzzy multicriteria decision making methods based on TOPSIS is proposed to select convenient supplier under vague information. The proposed method is applied to the supplier selection process of a textile firm in Turkey. In addition, the same problem is solved with type 1 fuzzy TOPSIS to confirm the findings of type 2 fuzzy TOPSIS. A sensitivity analysis is conducted to observe how the decision changes under different scenarios. Results show that the presented type 2 fuzzy TOPSIS method is more appropriate and effective to handle the supplier selection in uncertain environment. Tedarikçi değerlendirme ve seçimi, nitel ve nicel çok sayıda faktörün değerlendirilmesini gerektiren karmaşık birçok kriterli karar verme problemi olarak görülmektedir. Gerçek hayatta, belirsizlikler ve muğlaklık bir karar verme sürecinin ayrılmaz bir parçası olarak karşımıza çıkmaktadır. Bulanık küme teorisi, belirsizlik durumunda karar vermemize imkân sağlayan metotlardan bir tanesidir. Bu çalışmada, ikizkenar yamuk tip 2 bulanık TOPSIS yöntemi kısaca tanıtılmıştır. Tanıtılan yöntem, Türkiye’de bir tekstil firmasının tedarikçi seçimi problemine uygulanmıştır. Ayrıca, tip 2 bulanık TOPSIS yönteminin sonuçlarını desteklemek için aynı problem tip 1 bulanık TOPSIS ile de çözülmüştür. Duyarlılık analizi yapılarak önerilen çözümler farklı senaryolar altında incelenmiştir. Duyarlılık analizi sonuçlarına göre tip 2 bulanık TOPSIS daha efektif ve uygun çözümler üretmektedir.",
"title": ""
},
{
"docid": "abb54a0c155805e7be2602265f78ae79",
"text": "In this paper we sketch out a computational theory of spatial cognition motivated by navigational behaviours, ecological requirements, and neural mechanisms as identified in animals and man. Spatial cognition is considered in the context of a cognitive agent built around the action-perception cycle. Besides sensors and effectors, the agent comprises multiple memory structures including a working memory and a longterm memory stage. Spatial longterm memory is modeled along the graph approach, treating recognizable places or poses as nodes and navigational actions as links. Models of working memory and its interaction with reference memory are discussed. The model provides an overall framework of spatial cognition which can be adapted to model different levels of behavioural complexity as well as interactions between working and longterm memory. A number of design questions for building cognitive robots are derived from comparison with biological systems and discussed in the paper.",
"title": ""
},
{
"docid": "85fe68b957a8daa69235ef65d92b1990",
"text": "Although Neural Machine Translation (NMT) models have advanced state-of-the-art performance in machine translation, they face problems like the inadequate translation. We attribute this to that the standard Maximum Likelihood Estimation (MLE) cannot judge the real translation quality due to its several limitations. In this work, we propose an adequacyoriented learning mechanism for NMT by casting translation as a stochastic policy in Reinforcement Learning (RL), where the reward is estimated by explicitly measuring translation adequacy. Benefiting from the sequence-level training of RL strategy and a more accurate reward designed specifically for translation, our model outperforms multiple strong baselines, including (1) standard and coverage-augmented attention models with MLE-based training, and (2) advanced reinforcement and adversarial training strategies with rewards based on both word-level BLEU and character-level CHRF3. Quantitative and qualitative analyses on different language pairs and NMT architectures demonstrate the effectiveness and universality of the proposed approach.",
"title": ""
},
{
"docid": "bb0f1e1384d91412fe3f0f0a51e91b8a",
"text": "This paper reports on an integrated navigation algorithm for the visual simultaneous localization and mapping (SLAM) robotic area coverage problem. In the robotic area coverage problem, the goal is to explore and map a given target area within a reasonable amount of time. This goal necessitates the use of minimally redundant overlap trajectories for coverage efficiency; however, visual SLAM’s navigation estimate will inevitably drift over time in the absence of loop-closures. Therefore, efficient area coverage and good SLAM navigation performance represent competing objectives. To solve this decision-making problem, we introduce perception-driven navigation, an integrated navigation algorithm that automatically balances between exploration and revisitation using a reward framework. This framework accounts for SLAM localization uncertainty, area coverage performance, and the identification of good candidate regions in the environment for visual perception. Results are shown for both a hybrid simulation and real-world demonstration of a visual SLAM system for autonomous underwater ship hull inspection.",
"title": ""
},
{
"docid": "83bec63fb2932aec5840a9323cc290b4",
"text": "This paper extends fully-convolutional neural networks (FCN) for the clothing parsing problem. Clothing parsing requires higher-level knowledge on clothing semantics and contextual cues to disambiguate fine-grained categories. We extend FCN architecture with a side-branch network which we refer outfit encoder to predict a consistent set of clothing labels to encourage combinatorial preference, and with conditional random field (CRF) to explicitly consider coherent label assignment to the given image. The empirical results using Fashionista and CFPD datasets show that our model achieves state-of-the-art performance in clothing parsing, without additional supervision during training. We also study the qualitative influence of annotation on the current clothing parsing benchmarks, with our Web-based tool for multi-scale pixel-wise annotation and manual refinement effort to the Fashionista dataset. Finally, we show that the image representation of the outfit encoder is useful for dress-up image retrieval application.",
"title": ""
},
{
"docid": "8fac18c1285875aee8e7a366555a4ca3",
"text": "Automatic speech recognition (ASR) has been under the scrutiny of researchers for many years. Speech Recognition System is the ability to listen what we speak, interpreter and perform actions according to spoken information. After so many detailed study and optimization of ASR and various techniques of features extraction, accuracy of the system is still a big challenge. The selection of feature extraction techniques is completely based on the area of study. In this paper, a detailed theory about features extraction techniques like LPC and LPCC is examined. The goal of this paper is to study the comparative analysis of features extraction techniques like LPC and LPCC.",
"title": ""
},
{
"docid": "1cdcb24b61926f37037fbb43e6d379b7",
"text": "The Internet has undergone dramatic changes in the past 2 decades and now forms a global communication platform that billions of users rely on for their daily activities. While this transformation has brought tremendous benefits to society, it has also created new threats to online privacy, such as omnipotent governmental surveillance. As a result, public interest in systems for anonymous communication has drastically increased. In this work, we survey previous research on designing, developing, and deploying systems for anonymous communication. Our taxonomy and comparative assessment provide important insights about the differences between the existing classes of anonymous communication protocols.",
"title": ""
},
{
"docid": "681f36fde6ec060baa76a6722a62ccbc",
"text": "This study determined if any of six endodontic solutions would have a softening effect on resorcinol-formalin paste in extracted teeth, and if there were any differences in the solvent action between these solutions. Forty-nine single-rooted extracted teeth were decoronated 2 mm coronal to the CEJ, and the roots sectioned apically to a standard length of 15 mm. Canals were prepared to a 12 mm WL and a uniform size with a #7 Parapost drill. Teeth were then mounted in a cylinder ring with acrylic. The resorcinol-formalin mixture was placed into the canals and was allowed to set for 60 days in a humidor. The solutions tested were 0.9% sodium chloride, 5.25% sodium hypochlorite, chloroform, Endosolv R (Endosolv R), 3% hydrogen peroxide, and 70% isopropyl alcohol. Seven samples per solution were tested and seven samples using water served as controls. One drop of the solution was placed over the set mixture in the canal, and the depth of penetration of a 1.5-mm probe was measured at 2, 5, 10, and 20 min using a dial micrometer gauge. A repeated-measures ANOVA showed a difference in penetration between the solutions at 10 min (p = 0.04) and at 20 min (p = 0.0004). At 20 min, Endosolv R, had significantly greater penetration than 5.25% sodium hypochlorite (p = 0.0033) and chloroform (p = 0.0018); however, it was not significantly better than the control (p = 0.0812). Although Endosolv R, had statistically superior probe penetration at 20 min, the softening effect could not be detected clinically at this time.",
"title": ""
},
{
"docid": "d2b545b4f9c0e7323760632c65206480",
"text": "This brief presents a quantitative analysis of the operating characteristics of three-phase diode bridge rectifiers with ac-side reactance and constant-voltage loads. We focus on the case where the ac-side currents vary continuously (continuous ac-side conduction mode). This operating mode is of particular importance in alternators and generators, for example. Simple approximate expressions are derived for the line and output current characteristics as well as the input power factor. Expressions describing the necessary operating conditions for continuous ac-side conduction are also developed. The derived analytical expressions are applied to practical examples and both simulations and experimental results are utilized to validate the analytical results. It is shown that the derived expressions are far more accurate than calculations based on traditional constant-current models.",
"title": ""
},
{
"docid": "ec641ace6df07156891f2bf40ea5d072",
"text": "This paper addresses deep face recognition (FR) problem under open-set protocol, where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of angular margin can be quantitatively adjusted by a parameter m. We further derive specific m to approximate the ideal feature criterion. Extensive analysis and experiments on Labeled Face in the Wild (LFW), Youtube Faces (YTF) and MegaFace Challenge 1 show the superiority of A-Softmax loss in FR tasks.",
"title": ""
},
{
"docid": "91eecde9d0e3b67d7af0194782923ead",
"text": "The burden of entry into mobile crowdsensing (MCS) is prohibitively high for human-subject researchers who lack a technical orientation. As a result, the benefits of MCS remain beyond the reach of research communities (e.g., psychologists) whose expertise in the study of human behavior might advance applications and understanding of MCS systems. This paper presents Sensus, a new MCS system for human-subject studies that bridges the gap between human-subject researchers and MCS methods. Sensus alleviates technical burdens with on-device, GUI-based design of sensing plans, simple and efficient distribution of sensing plans to study participants, and uniform participant experience across iOS and Android devices. Sensing plans support many hardware and software sensors, automatic deployment of sensor-triggered surveys, and double-blind assignment of participants within randomized controlled trials. Sensus offers these features to study designers without requiring knowledge of markup and programming languages. We demonstrate the feasibility of using Sensus within two human-subject studies, one in psychology and one in engineering. Feedback from non-technical users indicates that Sensus is an effective and low-burden system for MCS-based data collection and analysis.",
"title": ""
},
{
"docid": "44d4114280e3ab9f6bfa0f0b347114b7",
"text": "Dozens of Electronic Control Units (ECUs) can be found on modern vehicles for safety and driving assistance. These ECUs also introduce new security vulnerabilities as recent attacks have been reported by plugging the in-vehicle system or through wireless access. In this paper, we focus on the security of the Controller Area Network (CAN), which is a standard for communication among ECUs. CAN bus by design does not have sufficient security features to protect it from insider or outsider attacks. Intrusion detection system (IDS) is one of the most effective ways to enhance vehicle security on the insecure CAN bus protocol. We propose a new IDS based on the entropy of the identifier bits in CAN messages. The key observation is that all the known CAN message injection attacks need to alter the CAN ID bits and analyzing the entropy of such bits can be an effective way to detect those attacks. We collected real CAN messages from a vehicle (2016 Ford Fusion) and performed simulated message injection attacks. The experimental results showed that our entropy based IDS can successfully detect all the injection attacks without disrupting the communication on CAN.",
"title": ""
},
{
"docid": "b8e90e97e8522ed45788025ca97ec720",
"text": "The use of Business Intelligence (BI) and Business Analytics for supporting decision-making is widespread in the world of praxis and their relevance for Management Accounting (MA) has been outlined in non-academic literature. Nonetheless, current research on Business Intelligence systems’ implications for the Management Accounting System is still limited. The purpose of this study is to contribute to understanding how BI system implementation and use affect MA techniques and Management Accountants’ role. An explorative field study, which involved BI consultants from Italian consulting companies, was carried out. We used the qualitative field study method since it permits dealing with complex “how” questions and, at the same time, taking into consideration multiple sites thus offering a comprehensive picture of the phenomenon. We found that BI implementation can affect Management Accountants’ expertise and can bring about not only incremental changes in existing Management Accounting techniques but also more relevant ones, by supporting the introduction of new and advanced MA techniques. By identifying changes in the Management Accounting System as well as factors which can prevent or favor a virtuous relationship between BI and Management Accounting Systems this research can be useful both for consultants and for client-companies in effectively managing BI projects.",
"title": ""
},
{
"docid": "9f46ec6dad4a1ebeeabb38f77ad4b1d7",
"text": "This paper proposes a fast and reliable method for anomaly detection and localization in video data showing crowded scenes. Time-efficient anomaly localization is an ongoing challenge and subject of this paper. We propose a cubic-patch-based method, characterised by a cascade of classifiers, which makes use of an advanced feature-learning approach. Our cascade of classifiers has two main stages. First, a light but deep 3D auto-encoder is used for early identification of “many” normal cubic patches. This deep network operates on small cubic patches as being the first stage, before carefully resizing the remaining candidates of interest, and evaluating those at the second stage using a more complex and deeper 3D convolutional neural network (CNN). We divide the deep auto-encoder and the CNN into multiple sub-stages, which operate as cascaded classifiers. Shallow layers of the cascaded deep networks (designed as Gaussian classifiers, acting as weak single-class classifiers) detect “simple” normal patches, such as background patches and more complex normal patches, are detected at deeper layers. It is shown that the proposed novel technique (a cascade of two cascaded classifiers) performs comparable to current top-performing detection and localization methods on standard benchmarks, but outperforms those in general with respect to required computation time.",
"title": ""
},
{
"docid": "9b4ffbbcd97e94524d2598cd862a400a",
"text": "Head pose monitoring is an important task for driver assistance systems, since it is a key indicator for human attention and behavior. However, current head pose datasets either lack complexity or do not adequately represent the conditions that occur while driving. Therefore, we introduce DriveAHead, a novel dataset designed to develop and evaluate head pose monitoring algorithms in real driving conditions. We provide frame-by-frame head pose labels obtained from a motion-capture system, as well as annotations about occlusions of the driver's face. To the best of our knowledge, DriveAHead is the largest publicly available driver head pose dataset, and also the only one that provides 2D and 3D data aligned at the pixel level using the Kinect v2. Existing performance metrics are based on the mean error without any consideration of the bias towards one position or another. Here, we suggest a new performance metric, named Balanced Mean Angular Error, that addresses the bias towards the forward looking position existing in driving datasets. Finally, we present the Head Pose Network, a deep learning model that achieves better performance than current state-of-the-art algorithms, and we analyze its performance when using our dataset.",
"title": ""
}
] |
scidocsrr
|
6d2c442c6322bc105b621d85a99c4fc8
|
"And We Will Fight For Our Race!" A Measurement Study of Genetic Testing Conversations on Reddit and 4chan
|
[
{
"docid": "61d6400d7c9cb1979becffd2b8c3e8ec",
"text": "Since its earliest days, harassment and abuse have plagued the Internet. Recent research has focused on in-domain methods to detect abusive content and faces several challenges, most notably the need to obtain large training corpora. In this paper, we introduce a novel computational approach to address this problem called Bag of Communities (BoC)---a technique that leverages large-scale, preexisting data from other Internet communities. We then apply BoC toward identifying abusive behavior within a major Internet community. Specifically, we compute a post's similarity to 9 other communities from 4chan, Reddit, Voat and MetaFilter. We show that a BoC model can be used on communities \"off the shelf\" with roughly 75% accuracy---no training examples are needed from the target community. A dynamic BoC model achieves 91.18% accuracy after seeing 100,000 human-moderated posts, and uniformly outperforms in-domain methods. Using this conceptual and empirical work, we argue that the BoC approach may allow communities to deal with a range of common problems, like abusive behavior, faster and with fewer engineering resources.",
"title": ""
},
{
"docid": "4a8b622eef99f13b8c4f023824688153",
"text": "Internet memes are increasingly used to sway and manipulate public opinion. This prompts the need to study their propagation, evolution, and influence across the Web. In this paper, we detect and measure the propagation of memes across multiple Web communities, using a processing pipeline based on perceptual hashing and clustering techniques, and a dataset of 160M images from 2.6B posts gathered from Twitter, Reddit, 4chan's Politically Incorrect board (/pol/), and Gab, over the course of 13 months. We group the images posted on fringe Web communities (/pol/, Gab, and The_Donald subreddit) into clusters, annotate them using meme metadata obtained from Know Your Meme, and also map images from mainstream communities (Twitter and Reddit) to the clusters.\n Our analysis provides an assessment of the popularity and diversity of memes in the context of each community, showing, e.g., that racist memes are extremely common in fringe Web communities. We also find a substantial number of politics-related memes on both mainstream and fringe Web communities, supporting media reports that memes might be used to enhance or harm politicians. Finally, we use Hawkes processes to model the interplay between Web communities and quantify their reciprocal influence, finding that /pol/ substantially influences the meme ecosystem with the number of memes it produces, while The_Donald has a higher success rate in pushing them to other communities.",
"title": ""
}
] |
[
{
"docid": "3925371ff139ca9cd23222db78f8694a",
"text": "In this paper, we investigate how the Gauss–Newton Hessian matrix affects the basin of convergence in Newton-type methods. Although the Newton algorithm is theoretically superior to the Gauss–Newton algorithm and the Levenberg–Marquardt (LM) method as far as their asymptotic convergence rate is concerned, the LM method is often preferred in nonlinear least squares problems in practice. This paper presents a theoretical analysis of the advantage of the Gauss–Newton Hessian matrix. It is proved that the Gauss–Newton approximation function is the only nonnegative convex quadratic approximation that retains a critical property of the original objective function: taking the minimal value of zero on an (n − 1)-dimensional manifold (or affine subspace). Due to this property, the Gauss–Newton approximation does not change the zero-on-(n − 1)-D “structure” of the original problem, explaining the reason why the Gauss–Newton Hessian matrix is preferred for nonlinear least squares problems, especially when the initial point is far from the solution.",
"title": ""
},
{
"docid": "1c0e441afd88f00b690900c42b40841a",
"text": "Convergence problems occur abundantly in all branches of mathematics or in the mathematical treatment of the sciences. Sequence transformations are principal tools to overcome convergence problems of the kind. They accomplish this by converting a slowly converging or diverging input sequence {sn} ∞ n=0 into another sequence {s ′ n }∞ n=0 with hopefully better numerical properties. Padé approximants, which convert the partial sums of a power series to a doubly indexed sequence of rational functions, are the best known sequence transformations, but the emphasis of the review will be on alternative sequence transformations which for some problems provide better results than Padé approximants.",
"title": ""
},
{
"docid": "d4b9d294d60ef001bee3a872b17a75b1",
"text": "Real-time formative assessment of student learning has become the subject of increasing attention. Students' textual responses to short answer questions offer a rich source of data for formative assessment. However, automatically analyzing textual constructed responses poses significant computational challenges, and the difficulty of generating accurate assessments is exacerbated by the disfluencies that occur prominently in elementary students' writing. With robust text analytics, there is the potential to accurately analyze students' text responses and predict students' future success. In this paper, we present WriteEval, a hybrid text analytics method for analyzing student-composed text written in response to constructed response questions. Based on a model integrating a text similarity technique with a semantic analysis technique, WriteEval performs well on responses written by fourth graders in response to short-text science questions. Further, it was found that WriteEval's assessments correlate with summative analyses of student performance.",
"title": ""
},
{
"docid": "a11c2a1522ae4c4df55467d62e4bbc51",
"text": "In this paper, a new design method considering a desired workspace and swing range of spherical joints of a DELTA robot is presented. The design is based on a new concept, which is the maximum inscribed workspace proposed in this paper. Firstly, the geometric description of the workspace for a DELTA robot is discussed, especially, the concept of the maximum inscribed workspace for the robot is proposed. The inscribed radius of the workspace on a workspace section is illustrated. As an applying example, a design result of the DELTA robot with a given workspace is presented and the reasonability is checked with the conditioning index. The results of the paper are very useful for the design and application of the parallel robot.",
"title": ""
},
{
"docid": "e7946956e8195f9b596d90efe6d6fd09",
"text": "In this paper we present a new biologically inspired approach to the part-of-speech tagging problem, based on particle swarm optimization. As far as we know this is the first attempt of solving this problem using swarm intelligence. We divided the part-of-speech problem into two subproblems. The first concerns the way of automatically extracting disambiguation rules from an annotated corpus. The second is related with how to apply these rules to perform the automatic tagging. We tackled both problems with particle swarm optimization. We tested our approach using two different corpora of English language and also a Portuguese corpus. The accuracy obtained on both languages is comparable to the best results previously published, including other evolutionary approaches.",
"title": ""
},
{
"docid": "8acd9e04cde88eea0965f49036eecd30",
"text": "Facial expressions are crucial to human social communication, but the extent to which they are innate and universal versus learned and culture dependent is a subject of debate. Two studies explored the effect of culture and learning on facial expression understanding. In Experiment 1, Japanese and U.S. participants interpreted facial expressions of emotion. Each group was better than the other at classifying facial expressions posed by members of the same culture. In Experiment 2, this reciprocal in-group advantage was reproduced by a neurocomputational model trained in either a Japanese cultural context or an American cultural context. The model demonstrates how each of us, interacting with others in a particular cultural context, learns to recognize a culture-specific facial expression dialect.",
"title": ""
},
{
"docid": "ecd67367aed0f3f7e3218cdec8a392b4",
"text": "OBJECTIVE\nTo investigate the efficacy of home-based specific stabilizing exercises focusing on the local stabilizing muscles as the only intervention in the treatment of persistent postpartum pelvic girdle pain.\n\n\nDESIGN\nA prospective, randomized, single-blinded, clinically controlled study.\n\n\nSUBJECTS\nEighty-eight women with pelvic girdle pain were recruited 3 months after delivery.\n\n\nMETHODS\nThe treatment consisted of specific stabilizing exercises targeting the local trunk muscles. The reference group had a single telephone contact with a physiotherapist. Primary outcome was disability measured with Oswestry Disability Index. Secondary outcomes were pain, health-related quality of life (EQ-5D), symptom satisfaction, and muscle function.\n\n\nRESULTS\nNo significant differences between groups could be found at 3- or 6-month follow-up regarding primary outcome in disability. Within-group comparisons showed some improvement in both groups in terms of disability, pain, symptom satisfaction and muscle function compared with baseline, although the majority still experienced pelvic girdle pain.\n\n\nCONCLUSION\nTreatment with this home-training concept of specific stabilizing exercises targeting the local muscles was no more effective in improving consequences of persistent postpartum pelvic girdle pain than the clinically natural course. Regardless of whether treatment with specific stabilizing exercises was carried out, the majority of women still experienced some back pain almost one year after pregnancy.",
"title": ""
},
{
"docid": "e73e30e989d47bb1a68bb8613b8a1547",
"text": "Flexibility is an important property for general access control system and especially in the Internet of Things (IoT), which can be achieved by access or authority delegation. Delegation mechanisms in access control that have been studied until now have been intended mainly for a system that has no resource constraint, such as a web-based system, which is not very suitable for a highly pervasive system such as IoT. To this end, this paper presents an access delegation method with security considerations based on Capability-based Context Aware Access Control (CCAAC) model intended for federated machine-to-machine communication or IoT networks. The main idea of our proposed model is that the access delegation is realized by means of a capability propagation mechanism, and incorporating the context information as well as secure capability propagation under federated IoT environments. By using the identity-based capability-based access control approach as well as contextual information and secure federated IoT, this proposed model provides scalability and flexibility as well as secure authority delegation for highly distributed system.",
"title": ""
},
{
"docid": "d8b894fb9dfe1373c790f0eeb8822016",
"text": "Many soft actuators have been studied for use in robots that come into contact with humans, including communication, entertainment, and medical/health care robots. One reason for this is that soft robots are expected to exhibit intrinsic safety in case an accident occurs. This paper proposes a plastic-film pneumatic actuator with a pleated structure that do not undergo the elastic deformation typical of rubber materials. By utilizing thin plastic films, the mass of an actuator can be significantly reduced, even if the actuators are the same size as a human arm. If the mass of the actuator is reduced, the kinetic energy when contacts with humans mechanically can be reduced considerably without reducing the working speed. More specifically, we propose a pleated structure made of plastic to achieve structural deformation generated from a two-dimensional pleated film. The pleated structure easily generates various bending motions. In this paper, a design method for determining the shape parameters of the pleated actuator structure using approximate models with considering measurement results of generating force is presented. We evaluated the adequacy of our approach in experiments using sample actuators. Furthermore, we show the constraints required to determine the necessary parameters. Thus, this paper provides an easy method for designing a lightweight and flexible plastic-film actuator.",
"title": ""
},
{
"docid": "e644b698d2977a2c767fe86a1445e23c",
"text": "This paper describes the E2E data, a new dataset for training end-to-end, datadriven natural language generation systems in the restaurant domain, which is ten times bigger than existing, frequently used datasets in this area. The E2E dataset poses new challenges: (1) its human reference texts show more lexical richness and syntactic variation, including discourse phenomena; (2) generating from this set requires content selection. As such, learning from this dataset promises more natural, varied and less template-like system utterances. We also establish a baseline on this dataset, which illustrates some of the difficulties associated with this data.",
"title": ""
},
{
"docid": "68191b71a4f944178ffcf5e8317e9725",
"text": "There is a wide inter-individual response to statin therapy including rosuvastatin calcium (RC), and it has been hypothesized that genetic differences may contribute to these variations. In fact, several studies have shown that pharmacokinetic (PK) parameters for RC are affected by race. The aim of this study is to demonstrate the interchangeability between two generic RC 20 mg film-coated tablets under fasting conditions among Mediterranean Arabs and to compare the pharmacokinetic results with Asian and Caucasian subjects from other studies. A single oral RC 20 mg dose, randomized, open-label, two-way crossover design study was conducted in 30 healthy Mediterranean Arab volunteers. Blood samples were collected prior to dosing and over a 72-h period. Concentrations in plasma were quantified using a validated liquid chromatography tandem mass spectrometry method. Twenty-six volunteers completed the study. Statistical comparison of the main PK parameters showed no significant difference between the generic and branded products. The point estimates (ratios of geometric mean %) were 107.73 (96.57-120.17), 103.61 (94.03-114.16), and 104.23 (94.84-114.54) for peak plasma concentration (Cmax), Area Under the Curve (AUC)0→last, and AUC0→∞, respectively. The 90% confidence intervals were within the pre-defined limits of 80%-125% as specified by the Food and Drug Administration and European Medicines Agency for bioequivalence studies. Both formulations were well-tolerated and no serious adverse events were reported. The PK results (AUC0→last and Cmax) were close to those of the Caucasian subjects. This study showed that the test and reference products met the regulatory criteria for bioequivalence following a 20 mg oral dose of RC under fasting conditions. Both formulations also showed comparable safety results. The PK results of the test and reference in the study subjects fall within the acceptable interval of 80%-125% and they were very close to the results among Caucasians. These PK results may be useful in order to determine the suitable RC dose among Arab Mediterranean patients.",
"title": ""
},
{
"docid": "fca805a46323a054d6cbe75fcff9deb3",
"text": "This study investigates the effectiveness of digital nudging for users’ social sharing of online platform content. In collaboration with a leading career and education online platform, we conducted a large-scale randomized experiment of digital nudging using website popups. Grounding on the Social Capital Theory and the individual motivation mechanism, we proposed and tested four kinds of nudging messages: simple request, monetary incentive, relational capital, and cognitive capital. We find that nudging messages with monetary incentive, relational and cognitive capital framings lead to increase in social sharing behavior, while nudging message with simple request decreases social sharing, comparing to the control group without nudging. This study contributes to the prior research on digital nudging by providing causal evidence of effective nudging for online social sharing behavior. The findings of this study also provide valuable guidelines for the optimal design of online platforms to effectively nudge/encourage social sharing in practice.",
"title": ""
},
{
"docid": "be2e96a37e48c0ca187639c8a6d6a15b",
"text": "Human beings are a marvel of evolved complexity. Such systems can be difficult to enhance. When we manipulate complex evolved systems, which are poorly understood, our interventions often fail or backfire. It can appear as if there is a ‘‘wisdom of nature’’ which we ignore at our peril. Sometimes the belief in nature’s wisdom—and corresponding doubts about the prudence of tampering with nature, especially human nature—manifest as diffusely moral objections against enhancement. Such objections may be expressed as intuitions about the superiority of the natural or the troublesomeness of hubris, or as an evaluative bias in favor of the status quo. This chapter explores the extent to which such prudence-derived anti-enhancement sentiments are justified. We develop a heuristic, inspired by the field of evolutionary medicine, for identifying promising human enhancement interventions. The heuristic incorporates the grains of truth contained in ‘‘nature knows best’’ attitudes while providing criteria for the special cases where we have reason to believe that it is feasible for us to improve on nature.",
"title": ""
},
{
"docid": "47da8530df2160ee29ff05aee4ab0342",
"text": "The objective of this review was to update Sobal and Stunkard's exhaustive review of the literature on the relation between socioeconomic status (SES) and obesity (Psychol Bull 1989;105:260-75). Diverse research databases (including CINAHL, ERIC, MEDLINE, and Social Science Abstracts) were comprehensively searched during the years 1988-2004 inclusive, using \"obesity,\" \"socioeconomic status,\" and synonyms as search terms. A total of 333 published studies, representing 1,914 primarily cross-sectional associations, were included in the review. The overall pattern of results, for both men and women, was of an increasing proportion of positive associations and a decreasing proportion of negative associations as one moved from countries with high levels of socioeconomic development to countries with medium and low levels of development. Findings varied by SES indicator; for example, negative associations (lower SES associated with larger body size) for women in highly developed countries were most common with education and occupation, while positive associations for women in medium- and low-development countries were most common with income and material possessions. Patterns for women in higher- versus lower-development countries were generally less striking than those observed by Sobal and Stunkard; this finding is interpreted in light of trends related to globalization. Results underscore a view of obesity as a social phenomenon, for which appropriate action includes targeting both economic and sociocultural factors.",
"title": ""
},
{
"docid": "879789ac6eb806fe9a68115aa358b3be",
"text": "The performance of any fingerprint recognizer highly depends on the fingerprint image quality. Different types of noises in the fingerprint images pose greater difficulty for recognizers. Most Automatic Fingerprint Identification Systems (AFIS) use some form of image enhancement. Although several methods have been described in the literature, there is still scope for improvement. In particular, effective methodology of cleaning the valleys between the ridge contours are lacking. We observe that noisy valley pixels and the pixels in the interrupted ridge flow gap are “impulse noises”. Therefore, this paper describes a new approach to fingerprint image enhancement, which is based on integration of Anisotropic Filter and directional median filter(DMF). Gaussian-distributed noises are reduced effectively by Anisotropic Filter, “impulse noises” are reduced efficiently by DMF. Usually, traditional median filter is the most effective method to remove pepper-and-salt noise and other small artifacts, the proposed DMF can not only finish its original tasks, it can also join broken fingerprint ridges, fill out the holes of fingerprint images, smooth irregular ridges as well as remove some annoying small artifacts between ridges. The enhancement algorithm has been implemented and tested on fingerprint images from FVC2002. Images of varying quality have been used to evaluate the performance of our approach. We have compared our method with other methods described in the literature in terms of matched minutiae, missed minutiae, spurious minutiae, and flipped minutiae(between end points and bifurcation points). Experimental results show our method to be superior to those described in the literature.",
"title": ""
},
{
"docid": "23b0756f3ad63157cff70d4973c9e6bd",
"text": "A robot that can carry out a natural-language instruction has been a dream since before the Jetsons cartoon series imagined a life of leisure mediated by a fleet of attentive robot helpers. It is a dream that remains stubbornly distant. However, recent advances in vision and language methods have made incredible progress in closely related areas. This is significant because a robot interpreting a natural-language navigation instruction on the basis of what it sees is carrying out a vision and language process that is similar to Visual Question Answering. Both tasks can be interpreted as visually grounded sequence-to-sequence translation problems, and many of the same methods are applicable. To enable and encourage the application of vision and language methods to the problem of interpreting visually-grounded navigation instructions, we present the Matter-port3D Simulator - a large-scale reinforcement learning environment based on real imagery [11]. Using this simulator, which can in future support a range of embodied vision and language tasks, we provide the first benchmark dataset for visually-grounded natural language navigation in real buildings - the Room-to-Room (R2R) dataset1.",
"title": ""
},
{
"docid": "dade322206eeab84bfdae7d45fe043ca",
"text": "Lung cancer has the highest death rate among all cancers in the USA. In this work we focus on improving the ability of computer-aided diagnosis (CAD) systems to predict the malignancy of nodules from cropped CT images of lung nodules. We evaluate the effectiveness of very deep convolutional neural networks at the task of expert-level lung nodule malignancy classification. Using the state-of-the-art ResNet architecture as our basis, we explore the effect of curriculum learning, transfer learning, and varying network depth on the accuracy of malignancy classification. Due to a lack of public datasets with standardized problem definitions and train/test splits, studies in this area tend to not compare directly against other existing work. This makes it hard to know the relative improvement in the new solution. In contrast, we directly compare our system against two state-of-the-art deep learning systems for nodule classification on the LIDC/IDRI dataset using the same experimental setup and data set. The results show that our system achieves the highest performance in terms of all metrics measured including sensitivity, specificity, precision, AUROC, and accuracy. The proposed method of combining deep residual learning, curriculum learning, and transfer learning translates to high nodule classification accuracy. This reveals a promising new direction for effective pulmonary nodule CAD systems that mirrors the success of recent deep learning advances in other image-based application domains.",
"title": ""
},
{
"docid": "42d3f666325c3c9e2d61fcbad3c6659a",
"text": "Supernumerary or accessory nostrils are a very rare type of congenital nasal anomaly, with only a few cases reported in the literature. They can be associated with such malformations as facial clefts and they can be unilateral or bilateral, with most cases reported being unilateral. The accessory nostril may or may not communicate with the ipsilateral nasal cavity, probably depending on the degree of embryological progression of the anomaly. A case of simple supernumerary left nostril with no nasal cavity communication and with a normally developed nose is presented. The surgical treatment is described and the different speculative theories related to the embryogenesis of supernumerary nostrils are also reviewed.",
"title": ""
},
{
"docid": "a93351d3fb9dc69868a11c8655ec1541",
"text": "Dry powder inhaler formulations comprising commercial lactose–drug blends can show restricted detachment of drug from lactose during aerosolisation, which can lead to poor fine particle fractions (FPFs) which are suboptimal. The aim of the present study was to investigate whether the crystallisation of lactose from different ethanol/butanol co-solvent mixtures could be employed as a method of altering the FPF of salbutamol sulphate from powder blends. Lactose particles were prepared by an anti-solvent recrystallisation process using various ratios of the two solvents. Crystallised lactose or commercial lactose was mixed with salbutamol sulphate and in vitro deposition studies were performed using a multistage liquid impinger. Solid-state characterisation results showed that commercial lactose was primarily composed of the α-anomer whilst the crystallised lactose samples comprised a α/β mixture containing a lower number of moles of water per mole of lactose compared to the commercial lactose. The crystallised lactose particles were also less elongated and more irregular in shape with rougher surfaces. Formulation blends containing crystallised lactose showed better aerosolisation performance and dose uniformity when compared to commercial lactose. The highest FPF of salbutamol sulphate (38.0 ± 2.5%) was obtained for the lactose samples that were crystallised from a mixture of ethanol/butanol (20:60) compared to a FPF of 19.7 ± 1.9% obtained for commercial lactose. Engineered lactose carriers with modified anomer content and physicochemical properties, when compared to the commercial grade, produced formulations which generated a high FPF.",
"title": ""
}
] |
scidocsrr
|
bc56e4984a10c8d9f091a639c2692ec1
|
Fast scale invariant feature detection and matching on programmable graphics hardware
|
[
{
"docid": "3d0103c34fcc6a65ad56c85a9fe10bad",
"text": "This paper approaches the problem of finding correspondences between images in which there are large changes in viewpoint, scale and illumination. Recent work has shown that scale-space ‘interest points’ may be found with good repeatability in spite of such changes. Furthermore, the high entropy of the surrounding image regions means that local descriptors are highly discriminative for matching. For descriptors at interest points to be robustly matched between images, they must be as far as possible invariant to the imaging process. In this work we introduce a family of features which use groups of interest points to form geometrically invariant descriptors of image regions. Feature descriptors are formed by resampling the image relative to canonical frames defined by the points. In addition to robust matching, a key advantage of this approach is that each match implies a hypothesis of the local 2D (projective) transformation. This allows us to immediately reject most of the false matches using a Hough transform. We reject remaining outliers using RANSAC and the epipolar constraint. Results show that dense feature matching can be achieved in a few seconds of computation on 1GHz Pentium III machines.",
"title": ""
}
] |
[
{
"docid": "fe74692a16c5e50bc40f1d379457d643",
"text": "To carry out the motion control of CNC machine and robot, this paper introduces an approach to implement 4-axis motion controller based on field programmable gate array (FPGA). Starting with introduction to existing excellent 4-axis motion controller MCX314, the fundamental structure of controller is discussed. Since the straight-line motion is a fundamental motion of CNC machine and robot, this paper introduces a linear interpolation method to do approximate straight-line motion within any 3-axis space. As Interpolation calculation of hardware interpolation is implemented by hardware logic circuit such as ASIC or FPGA in the controller, therefore this method can avoid a large amount of complex mathematical calculation, which hints that this controller has high real-time performance. The simulation of straight-line motion within 3D space verifies the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "273bb44ed02076008d5d2835baed9494",
"text": "Modeling informal inference in natural language is very challenging. With the recent availability of large annotated data, it has become feasible to train complex models such as neural networks to perform natural language inference (NLI), which have achieved state-of-the-art performance. Although there exist relatively large annotated data, can machines learn all knowledge needed to perform NLI from the data? If not, how can NLI models benefit from external knowledge and how to build NLI models to leverage it? In this paper, we aim to answer these questions by enriching the state-of-the-art neural natural language inference models with external knowledge. We demonstrate that the proposed models with external knowledge further improve the state of the art on the Stanford Natural Language Inference (SNLI) dataset.",
"title": ""
},
{
"docid": "72c0fecdbcc27b6af98373dc3c03333b",
"text": "The amino acid sequence of the heavy chain of Bombyx mori silk fibroin was derived from the gene sequence. The 5,263-residue (391-kDa) polypeptide chain comprises 12 low-complexity \"crystalline\" domains made up of Gly-X repeats and covering 94% of the sequence; X is Ala in 65%, Ser in 23%, and Tyr in 9% of the repeats. The remainder includes a nonrepetitive 151-residue header sequence, 11 nearly identical copies of a 43-residue spacer sequence, and a 58-residue C-terminal sequence. The header sequence is homologous to the N-terminal sequence of other fibroins with a completely different crystalline region. In Bombyx mori, each crystalline domain is made up of subdomains of approximately 70 residues, which in most cases begin with repeats of the GAGAGS hexapeptide and terminate with the GAAS tetrapeptide. Within the subdomains, the Gly-X alternance is strict, which strongly supports the classic Pauling-Corey model, in which beta-sheets pack on each other in alternating layers of Gly/Gly and X/X contacts. When fitting the actual sequence to that model, we propose that each subdomain forms a beta-strand and each crystalline domain a two-layered beta-sandwich, and we suggest that the beta-sheets may be parallel, rather than antiparallel, as has been assumed up to now.",
"title": ""
},
{
"docid": "f3ec87229acd0ec98c044ad42fd9fec1",
"text": "Increasingly, Internet users trade privacy for service. Facebook, Google, and others mine personal information to target advertising. This paper presents a preliminary and partial answer to the general question \"Can users retain their privacy while still benefiting from these web services?\". We propose NOYB, a novel approach that provides privacy while preserving some of the functionality provided by online services. We apply our approach to the Facebook online social networking website. Through a proof-of-concept implementation we demonstrate that NOYB is practical and incrementally deployable, requires no changes to or cooperation from an existing online service, and indeed can be non-trivial for the online service to detect.",
"title": ""
},
{
"docid": "c70ff7ed949cd6d96c1bd62331649257",
"text": "Bitcoin is a popular alternative to fiat money, widely used for its perceived anonymity properties. However, recent attacks on Bitcoin’s peer-to-peer (P2P) network demonstrated that its gossip-based flooding protocols, which are used to ensure global network consistency, may enable user deanonymization— the linkage of a user’s IP address with her pseudonym in the Bitcoin network. In 2015, the Bitcoin community responded to these attacks by changing the network’s flooding mechanism to a different protocol, known as diffusion. However, no systematic justification was provided for the change, and it is unclear if diffusion actually improves the system’s anonymity. In this paper, we model the Bitcoin networking stack and analyze its anonymity properties, both preand post-2015. In doing so, we consider new adversarial models and spreading mechanisms that have not been previously studied in the source-finding literature. We theoretically prove that Bitcoin’s networking protocols (both preand post-2015) offer poor anonymity properties on networks with a regular-tree topology. We validate this claim in simulation on a 2015 snapshot of the real Bitcoin P2P network topology.",
"title": ""
},
{
"docid": "a333e0e08d7c5b52e08c2e88bdeb1cd1",
"text": "Money laundering (ML) involves moving illicit funds, which may be linked to drug trafficking or organized crime, through a series of transactions or accounts to disguise origin or ownership. China is facing severe challenge on money laundering with an estimated 200 billion RMB laundered annually. Decision tree method is used in this paper to create the determination rules of the money laundering risk by customer profiles of a commercial bank in China. A sample of twenty-eight customers with four attributes is used to induced and validate a decision tree method. The result indicates the effectiveness of decision tree in generating AML rules from companies' customer profiles. The anti-money laundering system in small and middle commerical bank in China is highly needed.",
"title": ""
},
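A minimal sketch of the kind of rule induction described above, using scikit-learn on a made-up customer-profile table; the four attributes, labels and resulting thresholds here are hypothetical placeholders, not the bank data or rules from the paper.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical customer profiles (not the bank's real attributes):
# [account_age_months, daily_txn_count, cross_border_ratio, cash_intensity]
X = [
    [48, 3, 0.05, 0.10], [60, 5, 0.02, 0.20], [36, 4, 0.10, 0.15],
    [24, 2, 0.01, 0.05], [55, 6, 0.08, 0.25], [40, 3, 0.03, 0.10],
    [3, 40, 0.70, 0.90], [6, 35, 0.65, 0.85], [2, 50, 0.80, 0.95],
    [5, 30, 0.60, 0.80], [4, 45, 0.75, 0.88], [7, 28, 0.55, 0.75],
]
y = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]    # 1 = suspected money-laundering risk

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The induced tree doubles as a set of human-readable AML determination rules.
print(export_text(clf, feature_names=[
    "account_age_months", "daily_txn_count", "cross_border_ratio", "cash_intensity"]))
print(clf.predict([[4, 38, 0.7, 0.9]]))     # -> [1]
```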
{
"docid": "8b3962dc5895a46c913816f208aa8e60",
"text": "Glaucoma is the second leading cause of blindness worldwide. It is a disease in which fluid pressure in the eye increases continuously, damaging the optic nerve and causing vision loss. Computational decision support systems for the early detection of glaucoma can help prevent this complication. The retinal optic nerve fiber layer can be assessed using optical coherence tomography, scanning laser polarimetry, and Heidelberg retina tomography scanning methods. In this paper, we present a novel method for glaucoma detection using a combination of texture and higher order spectra (HOS) features from digital fundus images. Support vector machine, sequential minimal optimization, naive Bayesian, and random-forest classifiers are used to perform supervised classification. Our results demonstrate that the texture and HOS features after z-score normalization and feature selection, and when combined with a random-forest classifier, performs better than the other classifiers and correctly identifies the glaucoma images with an accuracy of more than 91%. The impact of feature ranking and normalization is also studied to improve results. Our proposed novel features are clinically significant and can be used to detect glaucoma accurately.",
"title": ""
},
{
"docid": "bff6e87727db20562091a6c8c08f3667",
"text": "Many trust-aware recommender systems have explored the value of explicit trust, which is specified by users with binary values and simply treated as a concept with a single aspect. However, in social science, trust is known as a complex term with multiple facets, which have not been well exploited in prior recommender systems. In this paper, we attempt to address this issue by proposing a (dis)trust framework with considerations of both interpersonal and impersonal aspects of trust and distrust. Specifically, four interpersonal aspects (benevolence, competence, integrity and predictability) are computationally modelled based on users’ historic ratings, while impersonal aspects are formulated from the perspective of user connections in trust networks. Two logistic regression models are developed and trained by accommodating these factors, and then applied to predict continuous values of users’ trust and distrust, respectively. Trust information is further refined by corresponding predicted distrust information. The experimental results on real-world data sets demonstrate the effectiveness of our proposed model in further improving the performance of existing state-of-the-art trust-aware recommendation approaches.",
"title": ""
},
{
"docid": "5b021c0223ee25535508eb1d6f63ff55",
"text": "A 32-KB standard CMOS antifuse one-time programmable (OTP) ROM embedded in a 16-bit microcontroller as its program memory is designed and implemented in 0.18-mum standard CMOS technology. The proposed 32-KB OTP ROM cell array consists of 4.2 mum2 three-transistor (3T) OTP cells where each cell utilizes a thin gate-oxide antifuse, a high-voltage blocking transistor, and an access transistor, which are all compatible with standard CMOS process. In order for high density implementation, the size of the 3T cell has been reduced by 80% in comparison to previous work. The fabricated total chip size, including 32-KB OTP ROM, which can be programmed via external I 2C master device such as universal I2C serial EEPROM programmer, 16-bit microcontroller with 16-KB program SRAM and 8-KB data SRAM, peripheral circuits to interface other system building blocks, and bonding pads, is 9.9 mm2. This paper describes the cell, design, and implementation of high-density CMOS OTP ROM, and shows its promising possibilities in embedded applications",
"title": ""
},
{
"docid": "562bd637873f416f5b284afc49401bf6",
"text": "Avionics Full Duplex Ethernet (AFDX) is a well-established backbone network with wide usage in all current aircrafts, with its first usage onboard the Airbus A380. It was initially defined by Airbus and internationally standardized by ARINC (664 Part 7). The key features are 802.3 Ethernet compatibility, static routing based on a virtual link concept, reserved bandwidth with granted bandwidth allocation gaps, transparent redundancy and — most importantly — deterministic latency and jitter bounds. Beside several proprietary real time Ethernet solutions, an emerging open standard with support of major semiconductor companies, currently introduces guaranteed timing behavior with the focus on transportation of video and audio streams. Audio Video Bridging (AVB) is a set of IEEE standards (802.1AS, 802.1Qat, 802.1Qav, 802.1BA, 1722) developed under the Audio/Video Bridging Task group as part of 802.1, which defines protocol layers above the Ethernet MAC & LLC layers. AVB addresses low latency and highly synchronized playback of audio and video with low packet loss, low jitter and low latency. By using priority queuing (PQ) and credit-based shaper algorithms (CBQ), it aims at guaranteeing a maximum latency of 2 ms over 7 hops for the highest priority class, with the transmission of several time sensitive and best effort streams in parallel. This work introduces the basic principles of the new AVB standard in details, compares it with the properties of the AFDX standard, and determines if the latency determinism of AVB is suitable for Avionic's requirements and if the new mass market controllers and switches developed for AVB could be a cost-effective alternative for Commercial off-the-shelf (COTS) based low DAL (Design assurance Level) systems.",
"title": ""
},
{
"docid": "d141c13cea52e72bb7b84d3546496afb",
"text": "A number of resource-intensive applications, such as augmented reality, natural language processing, object recognition, and multimedia-based software are pushing the computational and energy boundaries of smartphones. Cloud-based services augment the resource-scare capabilities of smartphones while offloading compute-intensive methods to resource-rich cloud servers. The amalgam of cloud and mobile computing technologies has ushered the rise of Mobile Cloud Computing (MCC) paradigm which envisions operating smartphones and modern mobile devices beyond their intrinsic capabilities. System virtualization, application virtualization, and dynamic binary translation (DBT) techniques are required to address the heterogeneity of smartphone and cloud architectures. However, most of the current research work has only focused on the offloading of virtualized applications while giving limited consideration to native code offloading. Moreover, researchers have not attended to the requirements of multimedia based applications in MCC offloading frameworks. In this study, we present a survey and taxonomy of state-of-the-art MCC frameworks, DBT techniques for native offloading, and cross-platform execution techniques for multimedia based applications. We survey the MCC frameworks from the perspective of offload enabling techniques. We focus on native code offloading frameworks and analyze the DBT and emulation techniques of smartphones (ARM) on a cloud server (x86) architectures. Furthermore, we debate the open research issues and challenges to native offloading of multimedia based smartphone applications.",
"title": ""
},
{
"docid": "28ba1eddc74c930350e1b2df5931fa39",
"text": "In this paper, the problem of how to implement the MTPA/MTPV control for an energy efficient operation of a high speed Interior Permanent Magnet Synchronous Motor (IPMSM) used as traction drive is considered. This control method depends on the inductances Ld, Lq, the flux linkage ΨPM and the stator resistance Rs which might vary during operation. The parameter variation causes miscalculation of the set point currents Id and Iq for the inner current control system and thus a wrong torque will be set. Consequently the IPMSM will not be operating in the optimal operation point which yields to a reduction of the total energy efficiency and the performance. As a consequence, this paper proposes the implementation of the the Recursive Least Square Estimation (RLS) for a high speed and high performance IPMSM. With this online identification method the variable parameters are estimated and adapted to the MTPA and MTPV control strategy.",
"title": ""
},
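The online identification referred to above can be illustrated with a textbook recursive least squares (RLS) update. The regressor construction for an IPMSM (built from measured currents, voltages and speed) is not shown, so this is only the generic estimator such a scheme would build on, with an arbitrarily chosen forgetting factor.

```python
import numpy as np

class RLS:
    """Textbook recursive least squares for y_k = phi_k^T theta + noise."""
    def __init__(self, n_params, lam=0.98, p0=1e3):
        self.theta = np.zeros(n_params)         # parameter estimate
        self.P = np.eye(n_params) * p0          # covariance of the estimate
        self.lam = lam                          # forgetting factor (illustrative)

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)    # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta)  # correct estimate
        self.P = (self.P - np.outer(k, phi) @ self.P) / self.lam
        return self.theta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_theta = np.array([0.8, -0.3, 2.0])     # stands in for machine parameters
    est = RLS(3)
    for _ in range(500):
        phi = rng.normal(size=3)                # regressor (placeholder signals)
        y = phi @ true_theta + 0.01 * rng.normal()
        est.update(phi, y)
    print(np.round(est.theta, 3))               # close to [0.8, -0.3, 2.0]
```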
{
"docid": "3eff4654a3bbf9aa3fbfe15033383e67",
"text": "Pizza is a strict superset of Java that incorporates three ideas from the academic community: parametric polymorphism, higher-order functions, and algebraic data types. Pizza is defined by translation into Java and compiles into the Java Virtual Machine, requirements which strongly constrain the design space. Nonetheless, Pizza fits smoothly to Java, with only a few rough edges.",
"title": ""
},
{
"docid": "479fe61e0b738cb0a0284da1bda7c36d",
"text": "In urban areas, congestion creates a substantial variation in travel speeds during peak morning and evening hours. This research presents a new solution approach, an iterative route construction and improvement algorithm (IRCI), for the time dependent vehicle routing problem (TDVRP) with hard or soft time windows. Improvements are obtained at a route level; hence the proposed approach does not rely on any type of local improvement procedure. Further, the solution algorithms can tackle constant speed or time-dependent speed problems without any alteration in their structure. A new formulation for the TDVRP with soft and hard time windows is presented. Leveraging on the well known Solomon instances, new test problems that capture the typical speed variations of congested urban settings are proposed. Results in terms of solution quality as well as computational time are presented and discussed. The computational complexity of the IRCI is analyzed and experimental results indicate that average computational time increases proportionally to the square of the number of customers.",
"title": ""
},
{
"docid": "c65fa2f20f6f175ced873cfe5915a296",
"text": "The OntoNotes project is creating a corpus of large-scale, accurate, and integrated annotation of multiple levels of the shallow semantic structure in text. Such rich, integrated annotation covering many levels will allow for richer, cross-level models enabling significantly better automatic semantic analysis. At the same time, it demands a robust, efficient, scalable mechanism for storing and accessing these complex inter-dependent annotations. We describe a relational database representation that captures both the inter- and intra-layer dependencies and provide details of an object-oriented API for efficient, multi-tiered access to this data.",
"title": ""
},
{
"docid": "6bca70ccf17fd4380502b7b4e2e7e550",
"text": "A consistent UI leaves an overall impression on user’s psychology, aesthetics and taste. Human–computer interaction (HCI) is the study of how humans interact with computer systems. Many disciplines contribute to HCI, including computer science, psychology, ergonomics, engineering, and graphic design. HCI is a broad term that covers all aspects of the way in which people interact with computers. In their daily lives, people are coming into contact with an increasing number of computer-based technologies. Some of these computer systems, such as personal computers, we use directly. We come into contact with other systems less directly — for example, we have all seen cashiers use laser scanners and digital cash registers when we shop. We have taken the same but in extensible line and made more solid justified by linking with other scientific pillars and concluded some of the best holistic base work for future innovations. It is done by inspecting various theories of Colour, Shape, Wave, Fonts, Design language and other miscellaneous theories in detail. Keywords— Karamvir Singh Rajpal, Mandeep Singh Rajpal, User Interface, User Experience, Design, Frontend, Neonex Technology,",
"title": ""
},
{
"docid": "e0490f48724e2bd8895328d6ace75ee8",
"text": "Dispersion and radiation properties for bound and leaky modes supported by 1-D printed periodic structures are investigated. A new type of Brillouin diagram is presented that accounts for different types of physical leakage, namely, leakage into one or more surface waves or also simultaneously into space. This new Brillouin diagram not only provides a physical insight into the dispersive behavior of such periodic structures, but it also provides a simple and convenient way to correctly choose the integration paths that arise from a spectral-domain moment-method analysis. Numerical results illustrate the usefulness of this new Brillouin diagram in explaining the leakage and stopband behavior for these types of periodic structures.",
"title": ""
},
{
"docid": "d300119f7e25b4252d7212ca42b32fb3",
"text": "Various computational procedures or constraint-based methods for data repairing have been proposed over the last decades to identify errors and, when possible, correct them. However, these approaches have several limitations including the scalability and quality of the values to be used in replacement of the errors. In this paper, we propose a new data repairing approach that is based on maximizing the likelihood of replacement data given the data distribution, which can be modeled using statistical machine learning techniques. This is a novel approach combining machine learning and likelihood methods for cleaning dirty databases by value modification. We develop a quality measure of the repairing updates based on the likelihood benefit and the amount of changes applied to the database. We propose SCARE (SCalable Automatic REpairing), a systematic scalable framework that follows our approach. SCARE relies on a robust mechanism for horizontal data partitioning and a combination of machine learning techniques to predict the set of possible updates. Due to data partitioning, several updates can be predicted for a single record based on local views on each data partition. Therefore, we propose a mechanism to combine the local predictions and obtain accurate final predictions. Finally, we experimentally demonstrate the effectiveness, efficiency, and scalability of our approach on real-world datasets in comparison to recent data cleaning approaches.",
"title": ""
},
{
"docid": "d4e5cff61b1b3a9afe1bfe4d2255fc97",
"text": "For a large class of piecewise expanding C1,1 maps of the interval we prove the Lasota-Yorke inequality with a constant smaller than the previously known 2/ inf |τ ′|. Consequently, the stability results of Keller-Liverani [7] apply to this class and in particular to maps with periodic turning points. One of the applications is the stability of acim’s for a class of W-shaped maps. Another application is an affirmative answer to a conjecture of Eslami-Misiurewicz [2] regarding acim-stability of a family of unimodal maps.",
"title": ""
},
{
"docid": "3eb8a99236905f59af8a32e281189925",
"text": "F2FS is a Linux file system designed to perform well on modern flash storage devices. The file system builds on append-only logging and its key design decisions were made with the characteristics of flash storage in mind. This paper describes the main design ideas, data structures, algorithms and the resulting performance of F2FS. Experimental results highlight the desirable performance of F2FS; on a state-of-the-art mobile system, it outperforms EXT4 under synthetic workloads by up to 3.1× (iozone) and 2× (SQLite). It reduces elapsed time of several realistic workloads by up to 40%. On a server system, F2FS is shown to perform better than EXT4 by up to 2.5× (SATA SSD) and 1.8× (PCIe SSD).",
"title": ""
}
] |
scidocsrr
|
03d4a0b5fb759fcf30cd3a97765e9180
|
Trajectory generation and control for four wheeled omnidirectional vehicles
|
[
{
"docid": "53b43126d066f5e91d7514f5da754ef3",
"text": "This paper describes a computationally inexpensive, yet high performance trajectory generation algorithm for omnidirectional vehicles. It is shown that the associated nonlinear control problem can be made tractable by restricting the set of admissible control functions. The resulting problem is linear with coupled control efforts and a near-optimal control strategy is shown to be piecewise constant (bang-bang type). A very favorable trade-off between optimality and computational efficiency is achieved. The proposed algorithm is based on a small number of evaluations of simple closed-form expressions and is thus extremely efficient. The low computational cost makes this method ideal for path planning in dynamic environments.",
"title": ""
}
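The piecewise-constant (bang-bang) control identified as near-optimal above can be illustrated on a single axis: with acceleration saturated at +a_max for the first half of a rest-to-rest move and -a_max for the second, the minimum-time profile has a closed-form switch time. This one-dimensional sketch ignores the coupled control efforts of the full omnidirectional problem.

```python
import math

def bang_bang_profile(distance, a_max):
    """Rest-to-rest, single-axis minimum-time move of `distance` with
    acceleration bounded by a_max: accelerate for t_s, then decelerate for t_s."""
    t_s = math.sqrt(abs(distance) / a_max)       # switch time
    sign = 1.0 if distance >= 0 else -1.0

    def state(t):
        """Position and velocity at time t along the profile."""
        if t <= t_s:                             # bang: +a_max
            return sign * 0.5 * a_max * t**2, sign * a_max * t
        if t <= 2 * t_s:                         # bang: -a_max
            tau = t - t_s
            v_peak = a_max * t_s
            pos = 0.5 * a_max * t_s**2 + v_peak * tau - 0.5 * a_max * tau**2
            return sign * pos, sign * (v_peak - a_max * tau)
        return distance, 0.0                     # move finished

    return 2 * t_s, state

if __name__ == "__main__":
    total_time, state = bang_bang_profile(distance=2.0, a_max=1.5)
    print(f"total time: {total_time:.3f} s")
    print("end state:", state(total_time))       # -> (~2.0, ~0.0)
```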
] |
[
{
"docid": "124872419ff2c7d96215f1adf3b68aa4",
"text": "The increased use of domain-specific languages (DSLs) and the absence of adequate tooling to take advantage of commonalities among DSLs has led to a situation where the same structure is duplicated in multiple DSLs. This observation has lead to the work described in this paper: an investigation of methods and tools for pattern specification and application and two extensions of a state-of-the-art tool for patterns in DSLs, DSL-tao. The extensions make patterns more understandable and they also make the tool suitable for more complex pattern applications. The first extension introduces a literal specification for patterns and the second extension introduces a merge function for the application of patterns. These two extensions are demonstrated on an often-occurring pattern in DSLs.",
"title": ""
},
{
"docid": "8a9cf6b4d7d6d2be1d407ef41ceb23e5",
"text": "A highly discriminative and computationally efficient descriptor is needed in many computer vision applications involving human action recognition. This paper proposes a hand-crafted skeleton-based descriptor for human action recognition. It is constructed from five fixed size covariance matrices calculated using strongly related joints coordinates over five body parts (spine, left/ right arms, and left/ right legs). Since covariance matrices are symmetric, the lower/ upper triangular parts of these matrices are concatenated to generate an efficient descriptor. It achieves a saving from 78.26 % to 80.35 % in storage space and from 75 % to 90 % in processing time (depending on the dataset) relative to techniques adopting a covariance descriptor based on all the skeleton joints. To show the effectiveness of the proposed method, its performance is evaluated on five public datasets: MSR-Action3D, MSRC-12 Kinect Gesture, UTKinect-Action, Florence3D-Action, and NTU RGB+D. The obtained recognition rates on all datasets outperform many existing methods and compete with the current state of the art techniques.",
"title": ""
},
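The descriptor construction above (one covariance matrix per body part, keeping only its lower-triangular part) can be sketched as follows; the joint groupings and skeleton layout are simplified placeholders rather than the exact joint sets used in the paper.

```python
import numpy as np

# Hypothetical joint-index groups for the five body parts (placeholder values;
# the actual joint sets depend on the skeleton format, e.g. a 20-joint Kinect rig).
BODY_PARTS = {
    "spine":     [0, 1, 2, 3],
    "left_arm":  [4, 5, 6, 7],
    "right_arm": [8, 9, 10, 11],
    "left_leg":  [12, 13, 14, 15],
    "right_leg": [16, 17, 18, 19],
}

def covariance_descriptor(seq):
    """seq: (T, J, 3) array of J 3-D joint positions over T frames.
    Returns the concatenated lower-triangular parts of one covariance
    matrix per body part."""
    parts = []
    for joints in BODY_PARTS.values():
        # Flatten the part's joint coordinates per frame: (T, 3*len(joints)).
        feats = seq[:, joints, :].reshape(seq.shape[0], -1)
        cov = np.cov(feats, rowvar=False)           # (d, d), d = 3*len(joints)
        tril = cov[np.tril_indices(cov.shape[0])]   # keep lower triangle only
        parts.append(tril)
    return np.concatenate(parts)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq = rng.normal(size=(60, 20, 3))              # 60 frames, 20 joints
    print(covariance_descriptor(seq).shape)         # 5 parts * 78 = (390,)
```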
{
"docid": "0573cb8c7eb10c5acfe59fc2d0de08e9",
"text": "Players in the online ad ecosystem are struggling to acquire the user data required for precise targeting. Audience look-alike modeling has the potential to alleviate this issue, but models’ performance strongly depends on quantity and quality of available data. In order to maximize the predictive performance of our look-alike modeling algorithms, we propose two novel hybrid filtering techniques that utilize the recent neural probabilistic language model algorithm doc2vec. We apply these methods to data from a large mobile ad exchange and additional app metadata acquired from the Apple App store and Google Play store. First, we model mobile app users through their app usage histories and app descriptions (user2vec). Second, we introduce context awareness to that model by incorporating additional user and app-related metadata in model training (context2vec). Our findings are threefold: (1) the quality of recommendations provided by user2vec is notably higher than current state-of-the-art techniques. (2) User representations generated through hybrid filtering using doc2vec prove to be highly valuable features in supervised machine learning models for look-alike modeling. This represents the first application of hybrid filtering user models using neural probabilistic language models, specifically doc2vec, in look-alike modeling. (3) Incorporating context metadata in the doc2vec model training process to introduce context awareness has positive effects on performance and is superior to directly including the data as features in the downstream supervised models.",
"title": ""
},
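A minimal sketch of the user2vec idea above, representing each mobile user by a doc2vec embedding of their app-usage "document", using gensim. The app-ID tokens and hyperparameters are placeholders, and gensim 4.x is assumed for the model.dv accessor.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Hypothetical app-usage histories: each user is a "document" of app IDs.
usage = {
    "user_0": ["com.maps", "com.weather", "com.news", "com.maps"],
    "user_1": ["com.game.puzzle", "com.game.cards", "com.chat"],
    "user_2": ["com.maps", "com.news", "com.weather"],
    "user_3": ["com.game.cards", "com.game.puzzle", "com.chat", "com.chat"],
}
corpus = [TaggedDocument(words=apps, tags=[uid]) for uid, apps in usage.items()]

# Train the paragraph-vector model; vector_size/epochs are illustrative only.
model = Doc2Vec(corpus, vector_size=16, window=3, min_count=1, epochs=200, seed=1)

# Look-alike retrieval: users whose embeddings are closest to a seed user.
print(model.dv.most_similar("user_0", topn=2))
```

In a downstream look-alike model, these user vectors would be fed as features to a supervised classifier, as the abstract describes.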
{
"docid": "b9a440c317ac4f6b7e11d594eb995600",
"text": "New wave of the technology revolution, often referred to as the fourth industrial revolution, is changing the way we live, work, and communicate with each other. These days, we are witnessing the emergence of unprecedented services and applications requiring lower latency, better reliability massive connection density, and improved energy efficiency. In accordance with this trend and change, international telecommunication union (ITU) defined three representative service categories, viz., enhanced mobile broadband (eMBB), massive machine-type communication (mMTC), and ultra-reliable and low latency communication (uRLLC). Among three service categories, physical-layer design of the uRLLC service is arguably the most challenging and problematic. This is mainly because uRLLC should satisfy two conflicting requirements: low latency and ultra-high reliability. In this article, we provide the stateof-the-art overview of uRLLC communications with an emphasis on technical challenges and solutions. We highlight key requirements of uRLLC service and then discuss the physical-layer issues and enabling technologies including packet and frame structure, multiplexing schemes, and reliability improvement",
"title": ""
},
{
"docid": "cb1048d4bffb141074a4011279054724",
"text": "Question Generation (QG) is the task of generating reasonable questions from a text. It is a relatively new research topic and has its potential usage in intelligent tutoring systems and closed-domain question answering systems. Current approaches include template or syntax based methods. This thesis proposes a novel approach based entirely on semantics. Minimal Recursion Semantics (MRS) is a meta-level semantic representation with emphasis on scope underspecification. With the English Resource Grammar and various tools from the DELPH-IN community, a natural language sentence can be interpreted as an MRS structure by parsing, and an MRS structure can be realized as a natural language sentence through generation. There are three issues emerging from semantics-based QG: (1) sentence simplification for complex sentences, (2) question transformation for declarative sentences, and (3) generation ranking. Three solutions are also proposed: (1) MRS decomposition through a Connected Dependency MRS Graph, (2) MRS transformation from declarative sentences to interrogative sentences, and (3) question ranking by simple language models atop a MaxEnt-based model. The evaluation is conducted in the context of the Question Generation Shared Task and Generation Challenge 2010. The performance of proposed method is compared against other syntax and rule based systems. The result also reveals the challenges of current research on question generation and indicates direction for future work.",
"title": ""
},
{
"docid": "17c9a72c46f63a7121ea9c9b6b893a2f",
"text": "This paper presents the artificial neural network approach namely Back propagation network (BPNs) and probabilistic neural network (PNN). It is used to classify the type of tumor in MRI images of different patients with Astrocytoma type of brain tumor. The image processing techniques have been developed for detection of the tumor in the MRI images. Gray Level Co-occurrence Matrix (GLCM) is used to achieve the feature extraction. The whole system worked in two modes firstly Training/Learning mode and secondly Testing/Recognition mode.",
"title": ""
},
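A small sketch of GLCM-based texture feature extraction of the kind described above, using scikit-image; the distances, angles, gray-level quantization and property list are illustrative choices, the classifier stage is omitted, and scikit-image 0.19+ is assumed for the graycomatrix/graycoprops spelling.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(img, levels=32):
    """Texture features from a gray-level co-occurrence matrix.
    `img` is a 2-D uint8 slice (e.g. one MRI slice); it is re-quantized to
    `levels` gray levels to keep the GLCM small."""
    q = (img.astype(np.float64) / 256.0 * levels).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p).mean() for p in props])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    slice_ = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
    print(glcm_features(slice_))   # 4-dimensional texture feature vector
```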
{
"docid": "ee58216dd7e3a0d8df8066703b763187",
"text": "Extraction of discriminative features from salient facial patches plays a vital role in effective facial expression recognition. The accurate detection of facial landmarks improves the localization of the salient patches on face images. This paper proposes a novel framework for expression recognition by using appearance features of selected facial patches. A few prominent facial patches, depending on the position of facial landmarks, are extracted which are active during emotion elicitation. These active patches are further processed to obtain the salient patches which contain discriminative features for classification of each pair of expressions, thereby selecting different facial patches as salient for different pair of expression classes. One-against-one classification method is adopted using these features. In addition, an automated learning-free facial landmark detection technique has been proposed, which achieves similar performances as that of other state-of-art landmark detection methods, yet requires significantly less execution time. The proposed method is found to perform well consistently in different resolutions, hence, providing a solution for expression recognition in low resolution images. Experiments on CK+ and JAFFE facial expression databases show the effectiveness of the proposed system.",
"title": ""
},
{
"docid": "e35d304b73fc8e7b848a154f547c976d",
"text": "While neural machine translation (NMT) provides high-quality translation, it is still hard to interpret and analyze its behavior. We present an interactive interface for visualizing and intervening behavior of NMT, specifically concentrating on the behavior of beam search mechanism and attention component. The tool (1) visualizes search tree and attention and (2) provides interface to adjust search tree and attention weight (manually or automatically) at real-time. We show the tool help users understand NMT in various ways.",
"title": ""
},
{
"docid": "7e0c7042c7bc4d1084234f48dd2e0333",
"text": "Many interesting large-scale systems are distributed systems of multiple communicating components. Such systems can be very hard to debug, especially when they exhibit poor performance. The problem becomes much harder when systems are composed of \"black-box\" components: software from many different (perhaps competing) vendors, usually without source code available. Typical solutions-provider employees are not always skilled or experienced enough to debug these systems efficiently. Our goal is to design tools that enable modestly-skilled programmers (and experts, too) to isolate performance bottlenecks in distributed systems composed of black-box nodes.We approach this problem by obtaining message-level traces of system activity, as passively as possible and without any knowledge of node internals or message semantics. We have developed two very different algorithms for inferring the dominant causal paths through a distributed system from these traces. One uses timing information from RPC messages to infer inter-call causality; the other uses signal-processing techniques. Our algorithms can ascribe delay to specific nodes on specific causal paths. Unlike previous approaches to similar problems, our approach requires no modifications to applications, middleware, or messages.",
"title": ""
},
{
"docid": "40e0d6e93c426107cbefbdf3d4ca85b9",
"text": "H.264/MPEG-4 AVC is the latest international video coding standard. It was jointly developed by the Video Coding Experts Group (VCEG) of the ITU-T and the Moving Picture Experts Group (MPEG) of ISO/IEC. It uses state-of-the-art coding tools and provides enhanced coding efficiency for a wide range of applications, including video telephony, video conferencing, TV, storage (DVD and/or hard disk based, especially high-definition DVD), streaming video, digital video authoring, digital cinema, and many others. The work on a new set of extensions to this standard has recently been completed. These extensions, known as the Fidelity Range Extensions (FRExt), provide a number of enhanced capabilities relative to the base specification as approved in the Spring of 2003. In this paper, an overview of this standard is provided, including the highlights of the capabilities of the new FRExt features. Some comparisons with the existing MPEG-2 and MPEG-4 Part 2 standards are also provided.",
"title": ""
},
{
"docid": "66e2128ebdbd5c348b775d70de1f7127",
"text": "With the rapid development of various online video sharing platforms, large numbers of videos are produced every day. Video affective content analysis has become an active research area in recent years, since emotion plays an important role in the classification and retrieval of videos. In this work, we explore to train very deep convolutional networks using ConvLSTM layers to add more expressive power for video affective content analysis models. Network-in-network principles, batch normalization, and convolution auto-encoder are applied to ensure the effectiveness of the model. Then an extended emotional representation model is used as an emotional annotation. In addition, we set up a database containing two thousand fragments to validate the effectiveness of the proposed model. Experimental results on the proposed data set show that deep learning approach based on ConvLSTM outperforms the traditional baseline and reaches the state-of-the-art system.",
"title": ""
},
{
"docid": "4b5ac4095cb2695a1e5282e1afca80a4",
"text": "Threeexperimentsdocument that14-month-old infants’construalofobjects (e.g.,purple animals) is influenced by naming, that they can distinguish between the grammatical form noun and adjective, and that they treat this distinction as relevant to meaning. In each experiment, infants extended novel nouns (e.g., “This one is a blicket”) specifically to object categories (e.g., animal), and not to object properties (e.g., purple things). This robust noun–category link is related to grammatical form and not to surface differences in the presentation of novel words (Experiment 3). Infants’extensions of novel adjectives (e.g., “This one is blickish”) were more fragile: They extended adjectives specifically to object properties when the property was color (Experiment 1), but revealed a less precise mapping when the property was texture (Experiment 2). These results reveal that by 14 months, infants distinguish between grammatical forms and utilize these distinctions in determining the meaning of novel words.",
"title": ""
},
{
"docid": "32968e136a4a97a42b87ee9f6367b949",
"text": "In this paper we are interested in exploiting geographic priors to help outdoor scene understanding. Towards this goal we propose a holistic approach that reasons jointly about 3D object detection, pose estimation, semantic segmentation as well as depth reconstruction from a single image. Our approach takes advantage of large-scale crowd-sourced maps to generate dense geographic, geometric and semantic priors by rendering the 3D world. We demonstrate the effectiveness of our holistic model on the challenging KITTI dataset [13], and show significant improvements over the baselines in all metrics and tasks.",
"title": ""
},
{
"docid": "faa4f113034ace1ed6682bfb6463d11e",
"text": "Extracting high-quality dynamic foreground layers from a video sequence is a challenging problem due to the coupling of color, motion, and occlusion. Many approaches assume that the background scene is static or undergoes the planar perspective transformation. In this paper, we relax these restrictions and present a comprehensive system for accurately computing object motion, layer, and depth information. A novel algorithm that combines different clues to extract the foreground layer is proposed, where a voting-like scheme robust to outliers is employed in optimization. The system is capable of handling difficult examples in which the background is nonplanar and the camera freely moves during video capturing. Our work finds several applications, such as high-quality view interpolation and video editing.",
"title": ""
},
{
"docid": "d229c679dcd4fa3dd84c6040b95fc99c",
"text": "This paper reviews the supervised learning versions of the no-free-lunch theorems in a simpli ed form. It also discusses the signi cance of those theorems, and their relation to other aspects of supervised learning.",
"title": ""
},
{
"docid": "051fc43d9e32d8b9d8096838b53c47cb",
"text": "Median filtering is a cornerstone of modern image processing and is used extensively in smoothing and de-noising applications. The fastest commercial implementations (e.g. in Adobe® Photoshop® CS2) exhibit O(r) runtime in the radius of the filter, which limits their usefulness in realtime or resolution-independent contexts. We introduce a CPU-based, vectorizable O(log r) algorithm for median filtering, to our knowledge the most efficient yet developed. Our algorithm extends to images of any bit-depth, and can also be adapted to perform bilateral filtering. On 8-bit data our median filter outperforms Photoshop's implementation by up to a factor of fifty.",
"title": ""
},
{
"docid": "643e083415859324c1fdd58e050d30b5",
"text": "In this work we propose a simple unsupervised approach for next frame prediction in video. Instead of directly predicting the pixels in a frame given past frames, we predict the transformations needed for generating the next frame in a sequence, given the transformations of the past frames. This leads to sharper results, while using a smaller prediction model. In order to enable a fair comparison between different video frame prediction models, we also propose a new evaluation protocol. We use generated frames as input to a classifier trained with ground truth sequences. This criterion guarantees that models scoring high are those producing sequences which preserve discriminative features, as opposed to merely penalizing any deviation, plausible or not, from the ground truth. Our proposed approach compares favourably against more sophisticated ones on the UCF-101 data set, while also being more efficient in terms of the number of parameters and computational cost.",
"title": ""
},
{
"docid": "c10dd691e79d211ab02f2239198af45c",
"text": "Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.84, which is only 0.1 percent worse and 1.2x faster than the current state-of-the-art model. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-ofthe-art.",
"title": ""
},
{
"docid": "70221b4a688c01e9093e8f35d68ec982",
"text": "A dominant paradigm for learning-based approaches in computer vision is training generic models, such as ResNet for image recognition, or I3D for video understanding, on large datasets and allowing them to discover the optimal representation for the problem at hand. While this is an obviously attractive approach, it is not applicable in all scenarios. We claim that action detection is one such challenging problem the models that need to be trained are large, and the labeled data is expensive to obtain. To address this limitation, we propose to incorporate domain knowledge into the structure of the model to simplify optimization. In particular, we augment a standard I3D network with a tracking module to aggregate long term motion patterns, and use a graph convolutional network to reason about interactions between actors and objects. Evaluated on the challenging AVA dataset, the proposed approach improves over the I3D baseline by 5.5% mAP and over the state-ofthe-art by 4.8% mAP.",
"title": ""
},
{
"docid": "01f08a2710177959bf37698577fefd4f",
"text": "Glaucoma is a chronic eye disease that leads to vision loss. As it cannot be cured, detecting the disease in time is important. Current tests using intraocular pressure (IOP) are not sensitive enough for population based glaucoma screening. Optic nerve head assessment in retinal fundus images is both more promising and superior. This paper proposes optic disc and optic cup segmentation using superpixel classification for glaucoma screening. In optic disc segmentation, histograms, and center surround statistics are used to classify each superpixel as disc or non-disc. A self-assessment reliability score is computed to evaluate the quality of the automated optic disc segmentation. For optic cup segmentation, in addition to the histograms and center surround statistics, the location information is also included into the feature space to boost the performance. The proposed segmentation methods have been evaluated in a database of 650 images with optic disc and optic cup boundaries manually marked by trained professionals. Experimental results show an average overlapping error of 9.5% and 24.1% in optic disc and optic cup segmentation, respectively. The results also show an increase in overlapping error as the reliability score is reduced, which justifies the effectiveness of the self-assessment. The segmented optic disc and optic cup are then used to compute the cup to disc ratio for glaucoma screening. Our proposed method achieves areas under curve of 0.800 and 0.822 in two data sets, which is higher than other methods. The methods can be used for segmentation and glaucoma screening. The self-assessment will be used as an indicator of cases with large errors and enhance the clinical deployment of the automatic segmentation and screening.",
"title": ""
}
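Once the optic disc and cup have been segmented as described above, the screening statistic is a cup-to-disc ratio (CDR). A minimal sketch of computing a vertical CDR from two binary masks follows; the exact ratio definition and threshold used for screening may differ from this simplification.

```python
import numpy as np

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from binary segmentation masks of equal shape:
    vertical extent of the cup divided by vertical extent of the disc."""
    def vertical_extent(mask):
        rows = np.where(mask.any(axis=1))[0]
        return 0 if rows.size == 0 else rows.max() - rows.min() + 1
    disc_h = vertical_extent(disc_mask)
    return vertical_extent(cup_mask) / disc_h if disc_h else float("nan")

if __name__ == "__main__":
    disc = np.zeros((100, 100), bool)
    cup = np.zeros((100, 100), bool)
    disc[20:80, 30:70] = True          # toy disc: 60 px tall
    cup[35:65, 40:60] = True           # toy cup: 30 px tall
    print(vertical_cdr(cup, disc))     # -> 0.5; a large CDR suggests glaucoma risk
```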
] |
scidocsrr
|
987e0266c73109191ccbacf73747a6b3
|
Performance optimization of Hadoop cluster using linux services
|
[
{
"docid": "b104337e30aa30db3dadc4e254ed2ad4",
"text": "We live in on-demand, on-command Digital universe with data prolifering by Institutions, Individuals and Machines at a very high rate. This data is categories as \"Big Data\" due to its sheer Volume, Variety and Velocity. Most of this data is unstructured, quasi structured or semi structured and it is heterogeneous in nature. The volume and the heterogeneity of data with the speed it is generated, makes it difficult for the present computing infrastructure to manage Big Data. Traditional data management, warehousing and analysis systems fall short of tools to analyze this data. Due to its specific nature of Big Data, it is stored in distributed file system architectures. Hadoop and HDFS by Apache is widely used for storing and managing Big Data. Analyzing Big Data is a challenging task as it involves large distributed file systems which should be fault tolerant, flexible and scalable. Map Reduce is widely been used for the efficient analysis of Big Data. Traditional DBMS techniques like Joins and Indexing and other techniques like graph search is used for classification and clustering of Big Data. These techniques are being adopted to be used in Map Reduce. In this paper we suggest various methods for catering to the problems in hand through Map Reduce framework over Hadoop Distributed File System (HDFS). Map Reduce is a Minimization technique which makes use of file indexing with mapping, sorting, shuffling and finally reducing. Map Reduce techniques have been studied in this paper which is implemented for Big Data analysis using HDFS.",
"title": ""
}
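The map/sort/shuffle/reduce pipeline the passage describes can be illustrated, far below Hadoop scale, with a single-process word-count sketch; the real framework distributes these same phases across HDFS blocks and cluster nodes.

```python
from collections import defaultdict
from itertools import chain

def map_phase(record):
    """Mapper: emit (key, 1) for every word in one input record (line)."""
    return [(word.lower(), 1) for word in record.split()]

def shuffle_phase(pairs):
    """Shuffle/sort: group intermediate values by key, as the framework does
    between the map and reduce stages."""
    groups = defaultdict(list)
    for key, value in sorted(pairs):
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reducer: aggregate all values for one key."""
    return key, sum(values)

if __name__ == "__main__":
    records = ["big data needs big clusters", "hadoop stores big data in HDFS"]
    pairs = chain.from_iterable(map_phase(r) for r in records)
    results = dict(reduce_phase(k, v) for k, v in shuffle_phase(pairs).items())
    print(results["big"], results["data"])   # -> 3 2
```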
] |
[
{
"docid": "5ea560095b752ca8e7fb6672f4092980",
"text": "Access control is a security aspect whose requirements evolve with technology advances and, at the same time, contemporary social contexts. Multitudes of access control models grow out of their respective application domains such as healthcare and collaborative enterprises; and even then, further administering means, human factor considerations, and infringement management are required to effectively deploy the model in the particular usage environment. This paper presents a survey of access control mechanisms along with their deployment issues and solutions available today. We aim to give a comprehensive big picture as well as pragmatic deployment details to guide in understanding, setting up and enforcing access control in its real world application.",
"title": ""
},
{
"docid": "bf08bc98eb9ef7a18163fc310b10bcf6",
"text": "An ultra-low voltage, low power, low line sensitivity MOSFET-only sub-threshold voltage reference with no amplifiers is presented. The low sensitivity is realized by the difference between two complementary currents and second-order compensation improves the temperature stability. The bulk-driven technique is used and most of the transistors work in the sub-threshold region, which allow a remarkable reduction in the minimum supply voltage and power consumption. Moreover, a trimming circuit is adopted to compensate the process-related reference voltage variation while the line sensitivity is not affected. The proposed voltage reference has been fabricated in the 0.18 μm 1.8 V CMOS process. The measurement results show that the reference could operate on a 0.45 V supply voltage. For supply voltages ranging from 0.45 to 1.8 V the power consumption is 15.6 nW, and the average temperature coefficient is 59.4 ppm/°C across a temperature range of -40 to 85 °C and a mean line sensitivity of 0.033%. The power supply rejection ratio measured at 100 Hz is -50.3 dB. In addition, the chip area is 0.013 mm2.",
"title": ""
},
{
"docid": "443a4fe9e7484a18aa53a4b142d93956",
"text": "BACKGROUND AND PURPOSE\nFrequency and duration of static stretching have not been extensively examined. Additionally, the effect of multiple stretches per day has not been evaluated. The purpose of this study was to determine the optimal time and frequency of static stretching to increase flexibility of the hamstring muscles, as measured by knee extension range of motion (ROM).\n\n\nSUBJECTS\nNinety-three subjects (61 men, 32 women) ranging in age from 21 to 39 years and who had limited hamstring muscle flexibility were randomly assigned to one of five groups. The four stretching groups stretched 5 days per week for 6 weeks. The fifth group, which served as a control, did not stretch.\n\n\nMETHODS\nData were analyzed with a 5 x 2 (group x test) two-way analysis of variance for repeated measures on one variable (test).\n\n\nRESULTS\nThe change in flexibility appeared to be dependent on the duration and frequency of stretching. Further statistical analysis of the data indicated that the groups that stretched had more ROM than did the control group, but no differences were found among the stretching groups.\n\n\nCONCLUSION AND DISCUSSION\nThe results of this study suggest that a 30-second duration is an effective amount of time to sustain a hamstring muscle stretch in order to increase ROM. No increase in flexibility occurred when the duration of stretching was increased from 30 to 60 seconds or when the frequency of stretching was increased from one to three times per day.",
"title": ""
},
{
"docid": "8709706ffafdadfc2fb9210794dfa782",
"text": "The increasing availability and affordability of wireless building and home automation networks has increased interest in residential and commercial building energy management. This interest has been coupled with an increased awareness of the environmental impact of energy generation and usage. Residential appliances and equipment account for 30% of all energy consumption in OECD countries and indirectly contribute to 12% of energy generation related carbon dioxide (CO2) emissions (International Energy Agency, 2003). The International Energy Association also predicts that electricity usage for residential appliances would grow by 12% between 2000 and 2010, eventually reaching 25% by 2020. These figures highlight the importance of managing energy use in order to improve stewardship of the environment. They also hint at the potential gains that are available through smart consumption strategies targeted at residential and commercial buildings. The challenge is how to achieve this objective without negatively impacting people’s standard of living or their productivity. The three primary purposes of building energy management are the reduction/management of building energy use; the reduction of electricity bills while increasing occupant comfort and productivity; and the improvement of environmental stewardship without adversely affecting standards of living. Building energy management systems provide a centralized platform for managing building energy usage. They detect and eliminate waste, and enable the efficient use electricity resources. The use of widely dispersed sensors enables the monitoring of ambient temperature, lighting, room occupancy and other inputs required for efficient management of climate control (heating, ventilation and air conditioning), security and lighting systems. Lighting and HVAC account for 50% of commercial and 40% of residential building electricity expenditure respectively, indicating that efficiency improvements in these two areas can significantly reduce energy expenditure. These savings can be made through two avenues: the first is through the use of energy-efficient lighting and HVAC systems; and the second is through the deployment of energy management systems which utilize real time price information to schedule loads to minimize energy bills. The latter scheme requires an intelligent power grid or smart grid which can provide bidirectional data flows between customers and utility companies. The smart grid is characterized by the incorporation of intelligenceand bidirectional flows of information and electricity throughout the power grid. These enhancements promise to revolutionize the grid by enabling customers to not only consume but also supply power.",
"title": ""
},
{
"docid": "80fd067dd6cf2fe85ade3c632e82c04c",
"text": "0957-4174/$ see front matter 2009 Elsevier Ltd. A doi:10.1016/j.eswa.2009.03.046 * Corresponding author. Tel.: +98 09126121921. E-mail address: shahbazi_mo@yahoo.com (M. Sha Recommender systems are powerful tools that allow companies to present personalized offers to their customers and defined as a system which recommends an appropriate product or service after learning the customers’ preferences and desires. Extracting users’ preferences through their buying behavior and history of purchased products is the most important element of such systems. Due to users’ unlimited and unpredictable desires, identifying their preferences is very complicated process. In most researches, less attention has been paid to user’s preferences varieties in different product categories. This may decrease quality of recommended items. In this paper, we introduced a technique of recommendation in the context of online retail store which extracts user preferences in each product category separately and provides more personalized recommendations through employing product taxonomy, attributes of product categories, web usage mining and combination of two well-known filtering methods: collaborative and content-based filtering. Experimental results show that proposed technique improves quality, as compared to similar approaches. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c7a96129484bbedd063a0b322d9ae3d3",
"text": "BACKGROUND\nNon-invasive detection of aneuploidies in a fetal genome through analysis of cell-free DNA circulating in the maternal plasma is becoming a routine clinical test. Such tests, which rely on analyzing the read coverage or the allelic ratios at single-nucleotide polymorphism (SNP) loci, are not sensitive enough for smaller sub-chromosomal abnormalities due to sequencing biases and paucity of SNPs in a genome.\n\n\nRESULTS\nWe have developed an alternative framework for identifying sub-chromosomal copy number variations in a fetal genome. This framework relies on the size distribution of fragments in a sample, as fetal-origin fragments tend to be smaller than those of maternal origin. By analyzing the local distribution of the cell-free DNA fragment sizes in each region, our method allows for the identification of sub-megabase CNVs, even in the absence of SNP positions. To evaluate the accuracy of our method, we used a plasma sample with the fetal fraction of 13%, down-sampled it to samples with coverage of 10X-40X and simulated samples with CNVs based on it. Our method had a perfect accuracy (both specificity and sensitivity) for detecting 5 Mb CNVs, and after reducing the fetal fraction (to 11%, 9% and 7%), it could correctly identify 98.82-100% of the 5 Mb CNVs and had a true-negative rate of 95.29-99.76%.\n\n\nAVAILABILITY AND IMPLEMENTATION\nOur source code is available on GitHub at https://github.com/compbio-UofT/FSDA CONTACT: : brudno@cs.toronto.edu.",
"title": ""
},
{
"docid": "503ddcf57b4e7c1ddc4f4646fb6ca3db",
"text": "Merging the virtual World Wide Web with nearby physical devices that are part of the Internet of Things gives anyone with a mobile device and the appropriate authorization the power to monitor or control anything.",
"title": ""
},
{
"docid": "88302ac0c35e991b9db407f268fdb064",
"text": "We propose a novel memory architecture for in-memory computation called McDRAM, where DRAM dies are equipped with a large number of multiply accumulate (MAC) units to perform matrix computation for neural networks. By exploiting high internal memory bandwidth and reducing off-chip memory accesses, McDRAM realizes both low latency and energy efficient computation. In our experiments, we obtained the chip layout based on the state-of-the-art memory, LPDDR4 where McDRAM is equipped with 2048 MACs in a single chip package with a small area overhead (4.7%). Compared with the state-of-the-art accelerator, TPU and the power-efficient GPU, Nvidia P4, McDRAM offers <inline-formula> <tex-math notation=\"LaTeX\">$9.5{\\times }$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$14.4{\\times }$ </tex-math></inline-formula> speedup, respectively, in the case that the large-scale MLPs and RNNs adopt the batch size of 1. McDRAM also gives <inline-formula> <tex-math notation=\"LaTeX\">$2.1{\\times }$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$3.7{\\times }$ </tex-math></inline-formula> better computational efficiency in TOPS/W than TPU and P4, respectively, for the large batches.",
"title": ""
},
{
"docid": "3a5dacb4b43f663539108ed1524f0c06",
"text": "This paper describes the design of CMOS receiver electronics for monolithic integration with capacitive micromachined ultrasonic transducer (CMUT) arrays for high-frequency intravascular ultrasound imaging. A custom 8-inch (20-cm) wafer is fabricated in a 0.35-μm two-poly, four-metal CMOS process and then CMUT arrays are built on top of the application specific integrated circuits (ASICs) on the wafer. We discuss advantages of the single-chip CMUT-on-CMOS approach in terms of receive sensitivity and SNR. Low-noise and high-gain design of a transimpedance amplifier (TIA) optimized for a forward-looking volumetric-imaging CMUT array element is discussed as a challenging design example. Amplifier gain, bandwidth, dynamic range, and power consumption trade-offs are discussed in detail. With minimized parasitics provided by the CMUT-on-CMOS approach, the optimized TIA design achieves a 90 fA/√Hz input-referred current noise, which is less than the thermal-mechanical noise of the CMUT element. We show successful system operation with a pulseecho measurement. Transducer-noise-dominated detection in immersion is also demonstrated through output noise spectrum measurement of the integrated system at different CMUT bias voltages. A noise figure of 1.8 dB is obtained in the designed CMUT bandwidth of 10 to 20 MHz.",
"title": ""
},
{
"docid": "59a69e5d33d650ef3e4afc053a98abe6",
"text": "Three-dimensional television (3D-TV) is the next major revolution in television. A successful rollout of 3D-TV will require a backward-compatible transmission/distribution system, inexpensive 3D displays, and an adequate supply of high-quality 3D program material. With respect to the last factor, the conversion of 2D images/videos to 3D will play an important role. This paper provides an overview of automatic 2D-to-3D video conversion with a specific look at a number of approaches for both the extraction of depth information from monoscopic images and the generation of stereoscopic images. Some challenging issues for the success of automatic 2D-to-3D video conversion are pointed out as possible research topics for the future.",
"title": ""
},
{
"docid": "8f360c907e197beb5e6fc82b081c908f",
"text": "This paper describes a 3D object-space paint program. This program allows the user to directly manipulate the parameters used to shade the surface of the 3D shape by applying pigment to its surface. The pigment has all the properties normally associated with material shading models. This includes, but is not limited to, the diffuse color, the specular color, and the surface roughness. The pigment also can have thickness, which is modeled by simultaneously creating a bump map attached to the shape. The output of the paint program is a 3D model with associated texture maps. This information can be used with any rendering program with texture mapping capabilities. Almost all traditional techniques of 2D computer image painting have analogues in 3D object painting, but there are also many new techniques unique to 3D. One example is the use of solid textures to pattern the surface.",
"title": ""
},
{
"docid": "b723616272d078bdbaaae1cf650ace20",
"text": "Most of industrial robots are still programmed using the typical teaching process, through the use of the robot teach pendant. In this paper is proposed an accelerometer-based system to control an industrial robot using two low-cost and small 3-axis wireless accelerometers. These accelerometers are attached to the human arms, capturing its behavior (gestures and postures). An Artificial Neural Network (ANN) trained with a back-propagation algorithm was used to recognize arm gestures and postures, which then will be used as input in the control of the robot. The aim is that the robot starts the movement almost at the same time as the user starts to perform a gesture or posture (low response time). The results show that the system allows the control of an industrial robot in an intuitive way. However, the achieved recognition rate of gestures and postures (92%) should be improved in future, keeping the compromise with the system response time (160 milliseconds). Finally, the results of some tests performed with an industrial robot are presented and discussed.",
"title": ""
},
{
"docid": "d469d31d26d8bc07b9d8dfa8ce277e47",
"text": "BACKGROUND/PURPOSE\nMorbidity in children treated with appendicitis results either from late diagnosis or negative appendectomy. A Prospective analysis of efficacy of Pediatric Appendicitis Score for early diagnosis of appendicitis in children was conducted.\n\n\nMETHODS\nIn the last 5 years, 1,170 children aged 4 to 15 years with abdominal pain suggestive of acute appendicitis were evaluated prospectively. Group 1 (734) were patients with appendicitis and group 2 (436) nonappendicitis. Multiple linear logistic regression analysis of all clinical and investigative parameters was performed for a model comprising 8 variables to form a diagnostic score.\n\n\nRESULTS\nLogistic regression analysis yielded a model comprising 8 variables, all statistically significant, P <.001. These variables in order of their diagnostic index were (1) cough/percussion/hopping tenderness in the right lower quadrant of the abdomen (0.96), (2) anorexia (0.88), (3) pyrexia (0.87), (4) nausea/emesis (0.86), (5) tenderness over the right iliac fossa (0.84), (6) leukocytosis (0.81), (7) polymorphonuclear neutrophilia (0.80) and (8) migration of pain (0.80). Each of these variables was assigned a score of 1, except for physical signs (1 and 5), which were scored 2 to obtain a total of 10. The Pediatric Appendicitis Score had a sensitivity of 1, specificity of 0.92, positive predictive value of 0.96, and negative predictive value of 0.99.\n\n\nCONCLUSION\nPediatric appendicitis score is a simple, relatively accurate diagnostic tool for accessing an acute abdomen and diagnosing appendicitis in children.",
"title": ""
},
{
"docid": "51c0cdb22056a3dc3f2f9b95811ca1ca",
"text": "Technology plays the major role in healthcare not only for sensory devices but also in communication, recording and display device. It is very important to monitor various medical parameters and post operational days. Hence the latest trend in Healthcare communication method using IOT is adapted. Internet of things serves as a catalyst for the healthcare and plays prominent role in wide range of healthcare applications. In this project the PIC18F46K22 microcontroller is used as a gateway to communicate to the various sensors such as temperature sensor and pulse oximeter sensor. The microcontroller picks up the sensor data and sends it to the network through Wi-Fi and hence provides real time monitoring of the health care parameters for doctors. The data can be accessed anytime by the doctor. The controller is also connected with buzzer to alert the caretaker about variation in sensor output. But the major issue in remote patient monitoring system is that the data as to be securely transmitted to the destination end and provision is made to allow only authorized user to access the data. The security issue is been addressed by transmitting the data through the password protected Wi-Fi module ESP8266 which will be encrypted by standard AES128 and the users/doctor can access the data by logging to the html webpage. At the time of extremity situation alert message is sent to the doctor through GSM module connected to the controller. Hence quick provisional medication can be easily done by this system. This system is efficient with low power consumption capability, easy setup, high performance and time to time response.",
"title": ""
},
{
"docid": "d07d6fe33b01fbfb21ba5adc76ec786f",
"text": "Dunaliella salina (Dunal) Teod, a unicellular eukaryotic green alga, is a highly salt-tolerant organism. To identify novel genes with potential roles in salinity tolerance, a salt stress-induced D. salina cDNA library was screened based on the expression in Haematococcus pluvialis, an alga also from Volvocales but one that is hypersensitive to salt. Five novel salt-tolerant clones were obtained from the library. Among them, Ds-26-16 and Ds-A3-3 contained the same open reading frame (ORF) and encoded a 6.1 kDa protein. Transgenic tobacco overexpressing Ds-26-16 and Ds-A3-3 exhibited increased leaf area, stem height, root length, total chlorophyll, and glucose content, but decreased proline content, peroxidase activity, and ascorbate content, and enhanced transcript level of Na+/H+ antiporter salt overly sensitive 1 gene (NtSOS1) expression, compared to those in the control plants under salt condition, indicating that Ds-26-16 enhanced the salt tolerance of tobacco plants. The transcript of Ds-26-16 in D. salina was upregulated in response to salt stress. The expression of Ds-26-16 in Escherichia coli showed that the ORF contained the functional region and changed the protein(s) expression profile. A mass spectrometry assay suggested that the most abundant and smallest protein that changed is possibly a DNA-binding protein or Cold shock-like protein. Subcellular localization analysis revealed that Ds-26-16 was located in the nuclei of onion epidermal cells or nucleoid of E. coli cells. In addition, the possible use of shoots regenerated from leaf discs to quantify the salt tolerance of the transgene at the initial stage of tobacco transformation was also discussed.",
"title": ""
},
{
"docid": "ca32fb4df9c03951e14ce9e06f7d90a0",
"text": "Future wireless local area networks (WLANs) are expected to serve thousands of users in diverse environments. To address the new challenges that WLANs will face, and to overcome the limitations that previous IEEE standards introduced, a new IEEE 802.11 amendment is under development. IEEE 802.11ax aims to enhance spectrum efficiency in a dense deployment; hence system throughput improves. Dynamic Sensitivity Control (DSC) and BSS Color are the main schemes under consideration in IEEE 802.11ax for improving spectrum efficiency In this paper, we evaluate DSC and BSS Color schemes when physical layer capture (PLC) is modelled. PLC refers to the case that a receiver successfully decodes the stronger frame when collision occurs. It is shown, that PLC could potentially lead to fairness issues and higher throughput in specific cases. We study PLC in a small and large scale scenario, and show that PLC could also improve fairness in specific scenarios.",
"title": ""
},
{
"docid": "0acf9ef6e025805a76279d1c6c6c55e7",
"text": "Android mobile devices are enjoying a lion's market share in smartphones and mobile devices. This also attracts malware writers to target the Android platform. Recently, we have discovered a new Android malware distribution channel: releasing malicious firmwares with pre-installed malware to the wild. This poses significant risk since users of mobile devices cannot change the content of the malicious firmwares. Furthermore, pre-installed applications have \" more permissions\" (i.e., silent installation) than other legitimate mobile apps, so they can download more malware or access users' confidential information. To understand and address this new form of malware distribution channel, we design and implement \"DroidRay\": a security evaluation system for customized Android firmwares. DroidRay uses both static and dynamic analyses to evaluate the firmware security on both the application and system levels. To understand the impact of this new malware distribution channel, we analyze 250 Android firmwares and 24,009 pre-installed applications. We reveal how the malicious firmware and pre-installed malware are injected, and discovered 1,947 (8.1%) pre-installed applications have signature vulnerability and 19 (7.6%) firmwares contain pre-installed malware. In addition, 142 (56.8%) firmwares have the default signature vulnerability, five (2.0%) firmwares contain malicious hosts file, at most 40 (16.0%) firmwares have the native level privilege escalation vulnerability and at least 249 (99.6%) firmwares have the Java level privilege escalation vulnerability. Lastly, we investigate a real-world case of a pre-installed zero-day malware known as CEPlugnew, which involves 348,018 infected Android smartphones, and we show its degree and geographical penetration. This shows the significance of this new malware distribution channel, and DroidRay is an effective tool to combat this new form of malware spreading.",
"title": ""
},
{
"docid": "a31287791b12f55adebacbb93a03c8bc",
"text": "Emotional adaptation increases pro-social behavior of humans towards robotic interaction partners. Social cues are an important factor in this context. This work investigates, if emotional adaptation still works under absence of human-like facial Action Units. A human-robot dialog scenario is chosen using NAO pretending to work for a supermarket and involving humans providing object names to the robot for training purposes. In a user study, two conditions are implemented with or without explicit emotional adaptation of NAO to the human user in a between-subjects design. Evaluations of user experience and acceptance are conducted based on evaluated measures of human-robot interaction (HRI). The results of the user study reveal a significant increase of helpfulness (number of named objects), anthropomorphism, and empathy in the explicit emotional adaptation condition even without social cues of facial Action Units, but only in case of prior robot contact of the test persons. Otherwise, an opposite effect is found. These findings suggest, that reduction of these social cues can be overcome by robot experience prior to the interaction task, e.g. realizable by an additional bonding phase, confirming the importance of such from previous work. Additionally, an interaction with academic background of the participants is found.",
"title": ""
},
{
"docid": "6e46fd2a8370bc42d245ca128c9f537b",
"text": "A literature review of the associations between involvement in bullying and depression is presented. Many studies have demonstrated a concurrent association between involvement in bullying and depression in adolescent population samples. Not only victims but also bullies display increased risk of depression, although not all studies have confirmed this for the bullies. Retrospective studies among adults support the notion that victimization is followed by depression. Prospective follow-up studies have suggested both that victimization from bullying may be a risk factor for depression and that depression may predispose adolescents to bullying. Research among clinically referred adolescents is scarce but suggests that correlations between victimization from bullying and depression are likely to be similar in clinical and population samples. Adolescents who bully present with elevated numbers of psychiatric symptoms and psychiatric and social welfare treatment contacts.",
"title": ""
},
{
"docid": "d00f7e5085d5aa9d8ac38f2abc7b5237",
"text": "Data-driven machine learning, in particular deep learning, is improving state-ofthe-art in many healthcare prediction tasks. A current standard protocol is to collect patient data to build, evaluate, and deploy machine learning algorithms for specific age groups (say source domain), which, if not properly trained, can perform poorly on data from other age groups (target domains). In this paper, we address the question of whether it is possible to adapt machine learning models built for one age group to also perform well on other age groups. Additionally, healthcare time series data is also challenging in that it is usually longitudinal and episodic with the potential of having complex temporal relationships. We address these problems with our proposed adversarially trained Variational Adversarial Deep Domain Adaptation (VADDA) model built atop a variational recurrent neural network, which has been shown to be capable of capturing complex temporal latent relationships. We assume and empirically justify that patient data from different age groups can be treated as being similar but different enough to be classified as coming from different domains, requiring the use of domain-adaptive approaches. Through experiments on the MIMIC-III dataset we demonstrate that our model outperforms current state-of-the-art domain adaptation approaches, being (as far as we know) the first to accomplish this for healthcare time-series data.",
"title": ""
}
] |
scidocsrr
|
bf8750d8a31efb7e984834900f9ed872
|
Event Detection and Retrieval on Social Media
|
[
{
"docid": "2d22bb2c565fa716845f7b3065361200",
"text": "Despite the popularity of Twitter for research, there are very few publicly available corpora, and those which are available are either too small or unsuitable for tasks such as event detection. This is partially due to a number of issues associated with the creation of Twitter corpora, including restrictions on the distribution of the tweets and the difficultly of creating relevance judgements at such a large scale. The difficulty of creating relevance judgements for the task of event detection is further hampered by ambiguity in the definition of event. In this paper, we propose a methodology for the creation of an event detection corpus. Specifically, we first create a new corpus that covers a period of 4 weeks and contains over 120 million tweets, which we make available for research. We then propose a definition of event which fits the characteristics of Twitter, and using this definition, we generate a set of relevance judgements aimed specifically at the task of event detection. To do so, we make use of existing state-of-the-art event detection approaches and Wikipedia to generate a set of candidate events with associated tweets. We then use crowdsourcing to gather relevance judgements, and discuss the quality of results, including how we ensured integrity and prevented spam. As a result of this process, along with our Twitter corpus, we release relevance judgements containing over 150,000 tweets, covering more than 500 events, which can be used for the evaluation of event detection approaches.",
"title": ""
}
] |
[
{
"docid": "6a1d534737dcbe75ff7a7ac975bcc5ec",
"text": "Crime is one of the most important social problems in the country, affecting public safety, children development, and adult socioeconomic status. Understanding what factors cause higher crime is critical for policy makers in their efforts to reduce crime and increase citizens' life quality. We tackle a fundamental problem in our paper: crime rate inference at the neighborhood level. Traditional approaches have used demographics and geographical influences to estimate crime rates in a region. With the fast development of positioning technology and prevalence of mobile devices, a large amount of modern urban data have been collected and such big data can provide new perspectives for understanding crime. In this paper, we used large-scale Point-Of-Interest data and taxi flow data in the city of Chicago, IL in the USA. We observed significantly improved performance in crime rate inference compared to using traditional features. Such an improvement is consistent over multiple years. We also show that these new features are significant in the feature importance analysis.",
"title": ""
},
{
"docid": "cc8766fc94cf9865c9035c7b3d3ce4a6",
"text": "Image features known as “gist descriptors” have recently been applied to the malware classification problem. In this research, we implement, test, and analyze a malware score based on gist descriptors, and verify that the resulting score yields very strong classification results. We also analyze the robustness of this gist-based scoring technique when applied to obfuscated malware, and we perform feature reduction to determine a minimal set of gist features. Then we compare the effectiveness of a deep learning technique to this gist-based approach. While scoring based on gist descriptors is effective, we show that our deep learning technique performs equally well. A potential advantage of the deep learning approach is that there is no need to extract the gist features when training or scoring.",
"title": ""
},
{
"docid": "dfcc8dc65e24d70b7068ae8cfc41822a",
"text": "Purpose Businesses are always seeking resilient strategies so they can weather unpredictable competitive environments. One source of unpredictability is the unsustainability of commerce’s environmental, economic or social impacts and the limitations this places on businesses. Another is poor resilience causing erroneous and unexpected outputs. Companies prospering long-term must have both resilience and sustainability, existing in a symbiotic state. This paper explores the two concepts and their relationship, their combined benefits and proposes an approach for supporting decision-makers to proactively build both characteristics. Design/methodology/approach The paper looks at businesses as complex adaptive systems, how their resilience and sustainability can be defined and how these might be exhibited. It then explores how they can be combined in practice. Findings The two qualities are related but have different purposes, moreover resilience has two major forms related to timescales. Both kinds of resilience are identified as key for delivering sustainability, yet the reverse is also found to be true. Both are needed to deliver either and to let businesses flourish. Practical implications Although the ideal state of resilient sustainability is difficult to define or achieve, pragmatic ways exist to deliver the right direction of change in organisational decisions. A novel approach to this is explored based on Transition Engineering and Robustness Engineering. Originality/value This paper links resilience and sustainability explicitly and develops a holistic pragmatic approach for working through their implications in strategic decision-making.",
"title": ""
},
{
"docid": "87eed2ab66bd9bda90cf2a838b990207",
"text": "We present a new framework for compositional distributional semantics in which the distributional contexts of lexemes are expressed in terms of anchored packed dependency trees. We show that these structures have the potential to capture the full sentential contexts of a lexeme and provide a uniform basis for the composition of distributional knowledge in a way that captures both mutual disambiguation and generalization.",
"title": ""
},
{
"docid": "ab7db4c786d2f5b084bf9dd2529baed6",
"text": "New protocols for Internet inter-domain routing struggle to get widely adopted. Because the Internet consists of more than 50,000 autonomous systems (ASes), deployment of a new routing protocol has to be incremental. In this work, we study such incremental deployment. We first formulate the routing problem in regard to a metric of routing cost. Then, the paper proposes and rigorously defines a statistical notion of protocol ignorance that quantifies the inability of a routing protocol to accurately determine routing prices with respect to the metric of interest. The proposed protocol-ignorance model of a routing protocol is fairly generic and can be applied to routing in both inter-domain and intra-domain settings, as well as to transportation and other types of networks. Our model of protocol deployment makes our study specific to Internet interdomain routing. Through a combination of mathematical analysis and simulation, we demonstrate that the benefits from adopting a new inter-domain protocol accumulate smoothly during its incremental deployment. In particular, the simulation shows that decreasing the routing price by 25% requires between 43% and 53% of all nodes to adopt the new protocol. Our findings elucidate the deployment struggle of new inter-domain routing protocols and indicate that wide deployment of such a protocol necessitates involving a large number of relevant ASes into a coordinated effort to adopt the new protocol.",
"title": ""
},
{
"docid": "62938eb6d3b523affbe0b7eb72b423ca",
"text": "Principal component analysis (PCA) is a mainstay of modern data analysis a black box that is widely used but poorly understood. The goal of this paper is to dispel the magic behind this black box. This tutorial focuses on building a solid intuition for how and why principal component analysis works; furthermore, it crystallizes this knowledge by deriving from simple intuitions, the mathematics behind PCA . This tutorial does not shy away from explaining the ideas informally, nor does it shy away from the mathematics. The hope is that by addressing both aspects, readers of all levels will be able to gain a better understanding of PCA as well as the when, the how and the why of applying this technique.",
"title": ""
},
{
"docid": "47c5fd58d6fdbb5003cb907aa1c0bee8",
"text": "OBJECTIVES\nTo review the effects of physical activity on health and behavior outcomes and develop evidence-based recommendations for physical activity in youth.\n\n\nSTUDY DESIGN\nA systematic literature review identified 850 articles; additional papers were identified by the expert panelists. Articles in the identified outcome areas were reviewed, evaluated and summarized by an expert panelist. The strength of the evidence, conclusions, key issues, and gaps in the evidence were abstracted in a standardized format and presented and discussed by panelists and organizational representatives.\n\n\nRESULTS\nMost intervention studies used supervised programs of moderate to vigorous physical activity of 30 to 45 minutes duration 3 to 5 days per week. The panel believed that a greater amount of physical activity would be necessary to achieve similar beneficial effects on health and behavioral outcomes in ordinary daily circumstances (typically intermittent and unsupervised activity).\n\n\nCONCLUSION\nSchool-age youth should participate daily in 60 minutes or more of moderate to vigorous physical activity that is developmentally appropriate, enjoyable, and involves a variety of activities.",
"title": ""
},
{
"docid": "8c9b360309da686a832cbf6eaee42db8",
"text": "System-level design issues become critical as implementation technology evolves toward increasingly complex integrated circuits and the time-to-market pressure continues relentlessly. To cope with these issues, new methodologies that emphasize re-use at all levels of abstraction are a “must”, and this is a major focus of our work in the Gigascale Silicon Research Center. We present some important concepts for system design that are likely to provide at least some of the gains in productivity postulated above. In particular, we focus on a method that separates parts of the design process and makes them nearly independent so that complexity could be mastered. In this domain, architecture-function co-design and communication-based design are introduced and motivated. Platforms are essential elements of this design paradigm. We define system platforms and we argue about their use and relevance. Then we present an application of the design methodology to the design of wireless systems. Finally, we present a new approach to platform-based design called modern embedded systems, compilers, architectures and languages, based on highly concurrent and software-programmable architectures and associated design tools.",
"title": ""
},
{
"docid": "8e37d0612617061b539c2f4463b7b571",
"text": "Software engineers often use record/replay tools to enable the automated testing of web applications. Tests created in this man- ner can then be used to regression test new versions of the web applications as they evolve. Web application tests recorded by record/replay tools, however, can be quite brittle; they can easily break as applications change. For this reason, researchers have be- gun to seek approaches for automatically repairing record/replay tests. This research investigates different aspects in relation to test- ing web applications using record/replay tools. The areas that we are interested in include taxonomizing the causes behind breakages and developing automated techniques to repair breakages, creating prevention techniques to stop the occurrence of breakages and de- veloping automated frameworks for root cause analysis. Finally, we intend to evaluate all of these activities via controlled studies involving software engineers and real web application tests.",
"title": ""
},
{
"docid": "b75e9077cc745b15fa70267c3b0eba45",
"text": "This study explored the relation of shame proneness and guilt proneness to constructive versus destructive responses to anger among 302 children (Grades 4-6), adolescents (Grades 7-11), 176 college students, and 194 adults. Across all ages, shame proneness was clearly related to maladaptive response to anger, including malevolent intentions; direct, indirect, and displaced aggression; self-directed hostility; and negative long-term consequences. In contrast, guilt proneness was associated with constructive means of handling anger, including constructive intentions, corrective action and non-hostile discussion with the target of the anger, cognitive reappraisals of the target's role, and positive long-term consequences. Escapist-diffusing responses showed some interesting developmental trends. Among children, these dimensions were positively correlated with guilt and largely unrelated to shame; among older participants, the results were mixed.",
"title": ""
},
{
"docid": "2d43c36d19c3a90da90921cedf4ba8ca",
"text": "ABSTRACT The offset voltage of the dynamic latched comparator is analyzed in detailed and dynamic latched comparator design is optimized for the minimal offsetvoltage based on the analysis in this paper. As a result offset-voltage was reduced from 0.87μV (in conventional double tail latched comparator) to 0.3μV (in case of proposed comparator. The simulated results of the conventional as well as proposed comparator have been shown on pspice orcad 9.2 versions.",
"title": ""
},
{
"docid": "a16be992aa947c8c5d2a7c9899dfbcd8",
"text": "The effect of the Eureka Spring (ES) appliance was investigated on 37 consecutively treated, noncompliant patients with bilateral Class II malocclusions. Lateral cephalographs were taken at the start of orthodontic treatment (T1), at insertion of the ES (T2), and at removal of the ES (T3). The average treatment interval between T2 and T3 was four months. The Class II correction occurred almost entirely by dentoalveolar movement and was almost equally distributed between the maxillary and mandibular dentitions. The rate of molar correction was 0.7 mm/mo. There was no change in anterior face height, mandibular plane angle, palatal plane angle, or gonial angle with treatment. There was a 2 degrees change in the occlusal plane resulting from intrusion of the maxillary molar and the mandibular incisor. Based on the results in this sample, the ES appliance was very effective in correcting Class II malocclusions in noncompliant patients without increasing the vertical dimension.",
"title": ""
},
{
"docid": "4445f128f31d6f42750049002cb86a29",
"text": "Convolutional neural networks are a popular choice for current object detection and classification systems. Their performance improves constantly but for effective training, large, hand-labeled datasets are required. We address the problem of obtaining customized, yet large enough datasets for CNN training by synthesizing them in a virtual world, thus eliminating the need for tedious human interaction for ground truth creation. We developed a CNN-based multi-class detection system that was trained solely on virtual world data and achieves competitive results compared to state-of-the-art detection systems.",
"title": ""
},
{
"docid": "49af355cfc9e13234a2a3b115f225c1b",
"text": "Tattoos play an important role in many religions. Tattoos have been used for thousands of years as important tools in ritual and tradition. Judaism, Christianity, and Islam have been hostile to the use of tattoos, but many religions, in particular Buddhism and Hinduism, make extensive use of them. This article examines their use as tools for protection and devotion.",
"title": ""
},
{
"docid": "2e3dcd4ba0dbcabb86c8716d73760028",
"text": "Power transformers are one of the most critical devices in power systems. It is responsible for voltage conversion, power distribution and transmission, and provides power services. Therefore, the normal operation of the transformer is an important guarantee for the safe, reliable, high quality and economical operation of the power system. It is necessary to minimize and reduce the occurrence of transformer failure and accident. The on-line monitoring and fault diagnosis of power equipment is not only the prerequisite for realizing the predictive maintenance of equipment, but also the key to ensure the safe operation of equipment. Although the analysis of dissolved gas in transformer oil is an important means of transformer insulation monitoring, the coexistence of two kinds of faults, such as discharge and overheat, can lead to a lower positive rate of diagnosis. In this paper, we use the basic particle swarm optimization algorithm to optimize the BP neural network DGA method, select the typical oil in the oil as a neural network input, and then use the trained particle swarm algorithm to optimize the neural network for transformer fault type diagnosis. The results show that the method has a good classification effect, which can solve the problem of difficult to distinguish the faults of the transformer when the discharge and overheat coexist. The positive rate of fault diagnosis is high.",
"title": ""
},
{
"docid": "f3e382102c57e9d8f5349e374d1e6907",
"text": "In SCARA robots, which are often used in industrial applications, all joint axes are parallel, covering three degrees of freedom in translation and one degree of freedom in rotation. Therefore, conventional approaches for the handeye calibration of articulated robots cannot be used for SCARA robots. In this paper, we present a new linear method that is based on dual quaternions and extends the work of [1] for SCARA robots. To improve the accuracy, a subsequent nonlinear optimization is proposed. We address several practical implementation issues and show the effectiveness of the method by evaluating it on synthetic and real data.",
"title": ""
},
{
"docid": "062a575f7b519aa8a6aee4ec5e67955b",
"text": "This document provides a survey of the mathematical methods currently used for position estimation in indoor local positioning systems (LPS), particularly those based on radiofrequency signals. The techniques are grouped into four categories: geometry-based methods, minimization of the cost function, fingerprinting, and Bayesian techniques. Comments on the applicability, requirements, and immunity to nonline-of-sight (NLOS) propagation of the signals of each method are provided.",
"title": ""
},
{
"docid": "d5fc7535bcf4bfc55da11d5c569950b3",
"text": "The way information spreads through society has changed significantly over the past decade with the advent of online social networking. Twitter, one of the most widely used social networking websites, is known as the real-time, public microblogging network where news breaks first. Most users love it for its iconic 140-character limitation and unfiltered feed that show them news and opinions in the form of tweets. Tweets are usually multilingual in nature and of varying quality. However, machine translation (MT) of twitter data is a challenging task especially due to the following two reasons: (i) tweets are informal in nature (i.e., violates linguistic norms), and (ii) parallel resource for twitter data is scarcely available on the Internet. In this paper, we develop FooTweets, a first parallel corpus of tweets for English–German language pair. We extract 4, 000 English tweets from the FIFA 2014 world cup and manually translate them into German with a special focus on the informal nature of the tweets. In addition to this, we also annotate sentiment scores between 0 and 1 to all the tweets depending upon the degree of sentiment associated with them. This data has recently been used to build sentiment translation engines and an extensive evaluation revealed that such a resource is very useful in machine translation of user generated content.",
"title": ""
},
{
"docid": "8d7e63dcb792a2b61dd708475117dac7",
"text": "Nanotechnology has played a crucial role in the development of biosensors over the past decade. The development, testing, optimization, and validation of new biosensors has become a highly interdisciplinary effort involving experts in chemistry, biology, physics, engineering, and medicine. The sensitivity, the specificity and the reproducibility of biosensors have improved tremendously as a result of incorporating nanomaterials in their design. In general, nanomaterials-based electrochemical immunosensors amplify the sensitivity by facilitating greater loading of the larger sensing surface with biorecognition molecules as well as improving the electrochemical properties of the transducer. The most common types of nanomaterials and their properties will be described. In addition, the utilization of nanomaterials in immunosensors for biomarker detection will be discussed since these biosensors have enormous potential for a myriad of clinical uses. Electrochemical immunosensors provide a specific and simple analytical alternative as evidenced by their brief analysis times, inexpensive instrumentation, lower assay cost as well as good portability and amenability to miniaturization. The role nanomaterials play in biosensors, their ability to improve detection capabilities in low concentration analytes yielding clinically useful data and their impact on other biosensor performance properties will be discussed. Finally, the most common types of electroanalytical detection methods will be briefly touched upon.",
"title": ""
},
{
"docid": "3c8b9a015157a7dd7ce4a6b0b35847d9",
"text": "While more and more people are relying on social media for news feeds, serious news consumers still resort to well-established news outlets for more accurate and in-depth reporting and analyses. They may also look for reports on related events that have happened before and other background information in order to better understand the event being reported. Many news outlets already create sidebars and embed hyperlinks to help news readers, often with manual efforts. Technologies in IR and NLP already exist to support those features, but standard test collections do not address the tasks of modern news consumption. To help advance such technologies and transfer them to news reporting, NIST, in partnership with the Washington Post, is starting a new TREC track in 2018 known as the News Track.",
"title": ""
}
] |
scidocsrr
|
8ad596984c7392fae304edc4b4977259
|
Procrastination, Academic Success and the Effectiveness of a Remedial Program
|
[
{
"docid": "2e2f54243e0ec8af2308346560afc26a",
"text": "Procrastination is all too familiar to most people. People delay writing up their research (so we hear!), repeatedly declare they will start their diets tomorrow, or postpone until next week doing odd jobs around the house. Yet people also sometimes attempt to control their procrastination by setting deadlines for themselves. In this article, we pose three questions: (a) Are people willing to self-impose meaningful (i.e., costly) deadlines to overcome procrastination? (b) Are self-imposed deadlines effective in improving task performance? (c) When self-imposing deadlines, do people set them optimally, for maximum performance enhancement? A set of studies examined these issues experimentally, showing that the answer is \"yes\" to the first two questions, and \"no\" to the third. People have self-control problems, they recognize them, and they try to control them by self-imposing costly deadlines. These deadlines help people control procrastination, hit they are not as effective as some externally imposed deadlines in improving task performance.",
"title": ""
},
{
"docid": "6627a1d89adf1389959983d04c8c26dd",
"text": "Recent models of procrastination due to self-control problems assume that a procrastinator considers just one option and is unaware of her self-control problems. We develop a model where a person chooses from a menu of options and is partially aware of her self-control problems. This menu model replicates earlier results and generates new ones. A person might forego completing an attractive option because she plans to complete a more attractive but never-to-be-completed option. Hence, providing a non-procrastinator additional options can induce procrastination, and a person may procrastinate worse pursuing important goals than unimportant ones.",
"title": ""
},
{
"docid": "6200e3a50d2e578d56ef9015149dd5fb",
"text": "This study investigated the frequency of college students' procrastination on academic tasks and the reasons for procrastination behavior. A high percentage of students reported problems with procrastination on several specific academic tasks. Self-reported procrastination was positively correlated with the number of self-paced quizzes students took late in the semester and with participation in an experimental session offered late in the semester. A factor analysis of the reasons for procrastination indicated that the factors Fear of Failure and Aversiveness of the Task accounted for most of the variance. A small but very homogeneous group of subjects endorsed items on the Fear of Failure factor that correlated significantly with self-report measures of depression, irrational cognitions, low self-esteem, delayed study behavior, anxiety, and lack of assertion. A larger and relatively heterogeneous group of subjects reported procrastinating as a result of aversiveness of the task. The Aversiveness of the Task factor did not correlate significantly with anxiety or assertion, but it did correlate significantly with'depression, irrational cognitions, low self-esteem, and delayed study behavior. These results indicate that procrastination is not solely a deficit in study habits or time management, but involves a complex interaction of behavioral, cognitive, and affective components;",
"title": ""
}
] |
[
{
"docid": "163cee9000ecd421334a507958491a25",
"text": "It has been assumed that the physical separation ('air-gap') of computers provides a reliable level of security, such that should two adjacent computers become compromised, the covert exchange of data between them would be impossible. In this paper, we demonstrate BitWhisper, a method of bridging the air-gap between adjacent compromised computers by using their heat emissions and built-in thermal sensors to create a covert communication channel. Our method is unique in two respects: it supports bidirectional communication, and it requires no additional dedicated peripheral hardware. We provide experimental results based on the implementation of the Bit-Whisper prototype, and examine the channel's properties and limitations. Our experiments included different layouts, with computers positioned at varying distances from one another, and several sensor types and CPU configurations (e.g., Virtual Machines). We also discuss signal modulation and communication protocols, showing how BitWhisper can be used for the exchange of data between two computers in a close proximity (positioned 0-40 cm apart) at an effective rate of 1-8 bits per hour, a rate which makes it possible to infiltrate brief commands and exfiltrate small amount of data (e.g., passwords) over the covert channel.",
"title": ""
},
{
"docid": "447c008d30a6f86830d49bd74bd7a551",
"text": "OBJECTIVES\nTo investigate the effects of 24 weeks of whole-body-vibration (WBV) training on knee-extension strength and speed of movement and on counter-movement jump performance in older women.\n\n\nDESIGN\nA randomized, controlled trial.\n\n\nSETTING\nExercise Physiology and Biomechanics Laboratory, Leuven, Belgium.\n\n\nPARTICIPANTS\nEighty-nine postmenopausal women, off hormone replacement therapy, aged 58 to 74, were randomly assigned to a WBV group (n=30), a resistance-training group (RES, n=30), or a control group (n=29).\n\n\nINTERVENTION\nThe WBV group and the RES group trained three times a week for 24 weeks. The WBV group performed unloaded static and dynamic knee-extensor exercises on a vibration platform, which provokes reflexive muscle activity. The RES group trained knee-extensors by performing dynamic leg-press and leg-extension exercises increasing from low (20 repetitions maximum (RM)) to high (8RM) resistance. The control group did not participate in any training.\n\n\nMEASUREMENTS\nPre-, mid- (12 weeks), and post- (24 weeks) isometric strength and dynamic strength of knee extensors were measured using a motor-driven dynamometer. Speed of movement of knee extension was assessed using an external resistance equivalent to 1%, 20%, 40%, and 60% of isometric maximum. Counter-movement jump performance was determined using a contact mat.\n\n\nRESULTS\nIsometric and dynamic knee extensor strength increased significantly (P<.001) in the WBV group (mean+/-standard error 15.0+/-2.1% and 16.1+/-3.1%, respectively) and the RES group (18.4+/-2.8% and 13.9+/-2.7%, respectively) after 24 weeks of training, with the training effects not significantly different between the groups (P=.558). Speed of movement of knee extension significantly increased at low resistance (1% or 20% of isometric maximum) in the WBV group only (7.4+/-1.8% and 6.3+/-2.0%, respectively) after 24 weeks of training, with no significant differences in training effect between the WBV and the RES groups (P=.391; P=.142). Counter-movement jump height enhanced significantly (P<.001) in the WBV group (19.4+/-2.8%) and the RES group (12.9+/-2.9%) after 24 weeks of training. Most of the gain in knee-extension strength and speed of movement and in counter-movement jump performance had been realized after 12 weeks of training.\n\n\nCONCLUSION\nWBV is a suitable training method and is as efficient as conventional RES training to improve knee-extension strength and speed of movement and counter-movement jump performance in older women. As previously shown in young women, it is suggested that the strength gain in older women is mainly due to the vibration stimulus and not only to the unloaded exercises performed on the WBV platform.",
"title": ""
},
{
"docid": "d16f126a07ad5fa41acfa9da7b180898",
"text": "Matrix metalloproteinases (MMPs) are a family of endopeptidases that function to remodel tissue during both normal physiology and pathology. Research performed by the Medical University of South Carolina found an increased release of several MMP species during cardiopulmonary bypass (CPB), including the subtype MMP-9, but whether and to what degree the extracorporeal circulation circuit (ECC) induces the release of MMPs has yet to be determined. Human bank whole blood scheduled for discard was obtained and exposed to an ECC. The first set of studies (N = 8) was performed with a loop circuit using a standard arterial line filter. A leukoreduction filter was incorporated during the first 30 min of the pump run for the second set of trials; the leukoreduction filter was then bypassed and a standard arterial filter used for the remaining 60 minutes on pump (N = 8). Blood samples were drawn at four time points for analysis (baseline, 30, 60, and 90 min). Data were analyzed using repeated measures analysis of variance with between-subjects factors, and a p value of less than .1 was considered statistically significant. The MMP-9 level increased by 130.44% at 90 min on pump in the standard arterial filter group and decreased by 34.62% at 90 min on pump in the leukoreduction group. There was a significant difference between the baseline MMP-9 level and the MMP-9 concentrations at 30, 60, and 90 min for both groups (p = .0348); there was a significant difference in MMP-9 levels between the two filter groups (p = .0611). The present study found a significant increase in MMP-9 levels when blood was exposed to an ECC with a standard arterial filter. The use of a leukoreduction filter significantly reduced MMP-9 concentrations as compared to baseline levels in this study. Leukocyte depletion filtration may serve to benefit CPB patients by attenuating the inflammatory response and disrupting pathways that govern such mediators as the MMPs.",
"title": ""
},
{
"docid": "86a97f1fd99bc7c96716da40ecb94f13",
"text": "Recommender systems help users more easily and quickly find products that they truly prefer amidst the enormous volume of information available to them. Collaborative filtering (CF) methods, making recommendations based on opinions from “most similar” users, have been widely adopted in various applications. In spite of the overall success of CF systems, they encounter one crucial issue remaining to be solved, namely the cold-start problem. In this paper, we propose a method that combines human personality characteristics into the traditional rating-based similarity computation in the framework of user-based collaborative filtering systems with the motivation to make good recommendations for new users who have rated few items. This technique can be especially useful for recommenders that are embedded in social networks where personality data can be more easily obtained. We first analyze our method in terms of the influence of the parameters such as the number of neighbors and the weight of rating-based similarity. We further compare our method with pure traditional ratings-based similarity in several experimental conditions. Our results show that applying personality information into traditional user-based collaborative filtering systems can efficiently address the new user problem.",
"title": ""
},
{
"docid": "ca70ba5ad592708e1681f823d09bcd52",
"text": "The causal discovery of Bayesian networks is an active and important research area, and it is based upon searching the space of causal models for those which can best explain a pattern of probabilistic dependencies shown in the data. However, some of those dependencies are generated by causal structures involving variables which have not been measured, i.e., latent variables. Some such patterns of dependency “reveal” themselves, in that no model based solely upon the observed variables can explain them as well as a model using a latent variable. That is what latent variable discovery is based upon. Here we did a search for finding them systematically, so that they may be applied in latent variable discovery in a more rigorous fashion.",
"title": ""
},
{
"docid": "7411ae149016be794566261d7362f7d3",
"text": "BACKGROUND\nProcrastination, to voluntarily delay an intended course of action despite expecting to be worse-off for the delay, is a persistent behavior pattern that can cause major psychological suffering. Approximately half of the student population and 15%-20% of the adult population are presumed having substantial difficulties due to chronic and recurrent procrastination in their everyday life. However, preconceptions and a lack of knowledge restrict the availability of adequate care. Cognitive behavior therapy (CBT) is often considered treatment of choice, although no clinical trials have previously been carried out.\n\n\nOBJECTIVE\nThe aim of this study will be to test the effects of CBT for procrastination, and to investigate whether it can be delivered via the Internet.\n\n\nMETHODS\nParticipants will be recruited through advertisements in newspapers, other media, and the Internet. Only people residing in Sweden with access to the Internet and suffering from procrastination will be included in the study. A randomized controlled trial with a sample size of 150 participants divided into three groups will be utilized. The treatment group will consist of 50 participants receiving a 10-week CBT intervention with weekly therapist contact. A second treatment group with 50 participants receiving the same treatment, but without therapist contact, will also be employed. The intervention being used for the current study is derived from a self-help book for procrastination written by one of the authors (AR). It includes several CBT techniques commonly used for the treatment of procrastination (eg, behavioral activation, behavioral experiments, stimulus control, and psychoeducation on motivation and different work methods). A control group consisting of 50 participants on a wait-list control will be used to evaluate the effects of the CBT intervention. For ethical reasons, the participants in the control group will gain access to the same intervention following the 10-week treatment period, albeit without therapist contact.\n\n\nRESULTS\nThe current study is believed to result in three important findings. First, a CBT intervention is assumed to be beneficial for people suffering from problems caused by procrastination. Second, the degree of therapist contact will have a positive effect on treatment outcome as procrastination can be partially explained as a self-regulatory failure. Third, an Internet based CBT intervention is presumed to be an effective way to administer treatment for procrastination, which is considered highly important, as the availability of adequate care is limited. The current study is therefore believed to render significant knowledge on the treatment of procrastination, as well as providing support for the use of Internet based CBT for difficulties due to delayed tasks and commitments.\n\n\nCONCLUSIONS\nTo our knowledge, the current study is the first clinical trial to examine the effects of CBT for procrastination, and is assumed to render significant knowledge on the treatment of procrastination, as well as investigating whether it can be delivered via the Internet.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov: NCT01842945; http://clinicaltrials.gov/show/NCT01842945 (Archived by WebCite at http://www.webcitation.org/6KSmaXewC).",
"title": ""
},
{
"docid": "ddeb76fa4315ee274bf1aa7ac014b6a2",
"text": "Linked Data offers new opportunities for Semantic Web-based application development by connecting structured information from various domains. These technologies allow machines and software agents to automatically interpret and consume Linked Data and provide users with intelligent query answering services. In order to enable advanced and innovative semantic applications of Linked Data such as recommendation, social network analysis, and information clustering, a fundamental requirement is systematic metrics that allow comparison between resources. In this research, we develop a hybrid similarity metric based on the characteristics of Linked Data. In particular, we develop and demonstrate metrics for providing recommendations of closely related resources. The results of our preliminary experiments and future directions are also presented.",
"title": ""
},
{
"docid": "b7fc526150d689dc751cfc9649bab32c",
"text": "AIC Akaike information criterion AUC area under the curve COV coefficient of variation GIS geographic information systems GPS Global Positioning System HYV high-yielding variety LUCC land use and land cover change NDVI normalized difference vegetation index OLS ordinary least squares PCA principal component analysis ROC relative or receiver operating characteristic SC Schwartz criterion SSE error sum of squares SSR regression sum of squares SST total sum of squares TLU tropical livestock unit 8 9 Acknowledgements This report is written as part of the project 'Transregional analysis of crop-livestock systems: understanding intensification and evolution across three continents' commissioned by the Ecoregional Fund and implemented by the International Livestock Research Institute (ILRI) in collaboration with the Department of Environmental Sciences of Wageningen University, and the Kenya Agricultural Research Institute. This report reviews a large amount of knowledge from the Land Use and Land Cover Change research community (LUCC, a joint IGBP/IHDP project) and is one of the activities of the LUCC Focus 3 Office hosted by the Department of Environmental Sciences, Wageningen University. Part of the writing was funded by the Foundation for the Advancement of Tropical Research (WOTRO) of the Netherlands Organization for Scientific Research (NWO) within the project 'Integrating macro-modelling and actor-oriented research in studying the dynamics of land use change in NorthEast Luzon, Philippines'. The authors would like to thank all who contributed to this report, especially the contributions of Isabelle Baltenweck, Jeannette van de Steeg and Koen Overmars and the thoughtful reviews of two anonymous referees. 1 Introduction Land use and land cover change (LUCC) has important impacts on the functioning of socioeconomic and environmental systems with important tradeoffs for sustainability, food security, biodiversity and the vulnerability of people and ecosystems to global change impacts. Land cover change refers to the complete replacement of one cover type by another, e.g. deforestation. Land use change includes the modification of land cover types, e.g. intensification of agricultural management or other changes in the farming system. Land use and land cover changes are the result of the interplay between socioeconomic , institutional and environmental factors. Key to understanding LUCC is to recognize the role of individual decision makers bringing about change, through their choices, on land resources and technologies. A unifying hypothesis that links the ecological and social realms, and an important reason for pursuing integrated modelling of LUCC, is that humans respond to cues both from the physical environment and …",
"title": ""
},
{
"docid": "aa3c4e267122b636eae557513900dd85",
"text": "At their core, Intelligent Tutoring Systems consist of a student model and a policy. The student model captures the state of the student and the policy uses the student model to individualize instruction. Policies require different properties from the student model. For example, a mastery threshold policy requires the student model to have a way to quantify whether the student has mastered a skill. A large amount of work has been done on building student models that can predict student performance on the next question. In this paper, we leverage this prior work with a new whento-stop policy that is compatible with any such predictive student model. Our results suggest that, when employed as part of our new predictive similarity policy, student models with similar predictive accuracies can suggest that substantially different amounts of practice are necessary. This suggests that predictive accuracy may not be a sufficient metric by itself when choosing which student model to use in intelligent tutoring systems.",
"title": ""
},
{
"docid": "22418c06e09887d5994aee27ea05691d",
"text": "About a decade ago, psychology of the arts started to gain momentum owing to a number of drives: technological progress improved the conditions under which art could be studied in the laboratory, neuroscience discovered the arts as an area of interest, and new theories offered a more comprehensive look at aesthetic experiences. Ten years ago, Leder, Belke, Oeberst, and Augustin (2004) proposed a descriptive information-processing model of the components that integrate an aesthetic episode. This theory offered explanations for modern art's large number of individualized styles, innovativeness, and for the diverse aesthetic experiences it can stimulate. In addition, it described how information is processed over the time course of an aesthetic episode, within and over perceptual, cognitive and emotional components. Here, we review the current state of the model, and its relation to the major topics in empirical aesthetics today, including the nature of aesthetic emotions, the role of context, and the neural and evolutionary foundations of art and aesthetics.",
"title": ""
},
{
"docid": "4c6c369a0a209837159407a792452835",
"text": "We present e cient new randomized and deterministic methods for transforming optimal solutions for a type of relaxed integer linear program into provably good solutions for the corresponding NP-hard discrete optimization problem. Without any constraint violation, the -approximation problem for many problems of this type is itself NP-hard. Our methods provide polynomial-time -approximations while attempting to minimize the packing constraint violation. Our methods lead to the rst known approximation algorithms with provable performance guarantees for the s-median problem, the tree pruning problem, and the generalized assignment problem. These important problems have numerous applications to data compression, vector quantization, memory-based learning, computer graphics, image processing, clustering, regression, network location, scheduling, and communication. We provide evidence via reductions that our approximation algorithms are nearly optimal in terms of the packing constraint violation. We also discuss some recent applications of our techniques to scheduling problems. Support was provided in part by an National Science Foundation PresidentialYoung InvestigatorAward CCR{9047466 with matching funds from IBM, by NSF research grant CCR{9007851, by Army Research O ce grant DAAL03{91{G{0035, and by the O ce of Naval Research and the Defense Advanced Research Projects Agency under contract N00014{91{J{4052, ARPA order 8225. The authors can be reached by electronic mail at jhl@cs.brown.edu and jsv@cs.brown.edu, respectively.",
"title": ""
},
{
"docid": "a27c96091d6d806b05730e76377927e0",
"text": "Visual priming is known to affect the human visual system to allow detection of scene elements, even those that may have been near unnoticeable before, such as the presence of camouflaged animals. This process has been shown to be an effect of top-down signaling in the visual system triggered by the said cue. In this paper, we propose a mechanism to mimic the process of priming in the context of object detection and segmentation. We view priming as having a modulatory, cue dependent effect on layers of features within a network. Our results show how such a process can be complementary to, and at times more effective than simple post-processing applied to the output of the network, notably so in cases where the object is hard to detect such as in severe noise, small size or atypical appearance. Moreover, we find the effects of priming are sometimes stronger when early visual layers are affected. Overall, our experiments confirm that top-down signals can go a long way in improving object detection and segmentation.",
"title": ""
},
{
"docid": "8fc0d896dfb5411079068f11800aac93",
"text": "This paper is concerned with estimating a probability density function of human skin color using a nite Gaussian mixture model whose parameters are estimated through the EM algorithm Hawkins statistical test on the normality and homoscedasticity common covariance matrix of the estimated Gaussian mixture models is performed and McLachlan s bootstrap method is used to test the number of components in a mixture Experimental results show that the estimated Gaussian mixture model ts skin images from a large database Applications of the estimated density function in image and video databases are presented",
"title": ""
},
{
"docid": "76ce07d1086aa0b6a8d85a349c10df54",
"text": "Rankings are a popular and universal approach to structuring otherwise unorganized collections of items by computing a rank for each item based on the value of one or more of its attributes. This allows us, for example, to prioritize tasks or to evaluate the performance of products relative to each other. While the visualization of a ranking itself is straightforward, its interpretation is not, because the rank of an item represents only a summary of a potentially complicated relationship between its attributes and those of the other items. It is also common that alternative rankings exist which need to be compared and analyzed to gain insight into how multiple heterogeneous attributes affect the rankings. Advanced visual exploration tools are needed to make this process efficient. In this paper we present a comprehensive analysis of requirements for the visualization of multi-attribute rankings. Based on these considerations, we propose LineUp - a novel and scalable visualization technique that uses bar charts. This interactive technique supports the ranking of items based on multiple heterogeneous attributes with different scales and semantics. It enables users to interactively combine attributes and flexibly refine parameters to explore the effect of changes in the attribute combination. This process can be employed to derive actionable insights as to which attributes of an item need to be modified in order for its rank to change. Additionally, through integration of slope graphs, LineUp can also be used to compare multiple alternative rankings on the same set of items, for example, over time or across different attribute combinations. We evaluate the effectiveness of the proposed multi-attribute visualization technique in a qualitative study. The study shows that users are able to successfully solve complex ranking tasks in a short period of time.",
"title": ""
},
{
"docid": "556013f32d362413ca54483f75dd401c",
"text": "Existing shape-from-shading algorithms assume constant reflectance across the shaded surface. Multi-colored surfaces are excluded because both shading and reflectance affect the measured image intensity. Given a standard RGB color image, we describe a method of eliminating the reflectance effects in order to calculate a shading field that depends only on the relative positions of the illuminant and surface. Of course, shading recovery is closely tied to lightness recovery and our method follows from the work of Land [10, 9], Horn [7] and Blake [1]. In the luminance image, R+G+B, shading and reflectance are confounded. Reflectance changes are located and removed from the luminance image by thresholding the gradient of its logarithm at locations of abrupt chromaticity change. Thresholding can lead to gradient fields which are not conservative (do not have zero curl everywhere and are not integrable) and therefore do not represent realizable shading fields. By applying a new curl-correction technique at the thresholded locations, the thresholding is improved and the gradient fields are forced to be conservative. The resulting Poisson equation is solved directly by the Fourier transform method. Experiments with real images are presented.",
"title": ""
},
{
"docid": "b4efebd49c8dd2756a4c2fb86b854798",
"text": "Mobile technologies (including handheld and wearable devices) have the potential to enhance learning activities from basic medical undergraduate education through residency and beyond. In order to use these technologies successfully, medical educators need to be aware of the underpinning socio-theoretical concepts that influence their usage, the pre-clinical and clinical educational environment in which the educational activities occur, and the practical possibilities and limitations of their usage. This Guide builds upon the previous AMEE Guide to e-Learning in medical education by providing medical teachers with conceptual frameworks and practical examples of using mobile technologies in medical education. The goal is to help medical teachers to use these concepts and technologies at all levels of medical education to improve the education of medical and healthcare personnel, and ultimately contribute to improved patient healthcare. This Guide begins by reviewing some of the technological changes that have occurred in recent years, and then examines the theoretical basis (both social and educational) for understanding mobile technology usage. From there, the Guide progresses through a hierarchy of institutional, teacher and learner needs, identifying issues, problems and solutions for the effective use of mobile technology in medical education. This Guide ends with a brief look to the future.",
"title": ""
},
{
"docid": "4e8131e177330af2fb8999c799508b58",
"text": "Unmanned aerial vehicles (UAVs) such as multi-copters are expected to be used for inspection of aged infrastructure or for searching damaged buildings in the event of a disaster. However, in a confined space in such environments, UAVs suffer a high risk of falling as a result of contact with an obstacle. To ensure an aerial inspection in the confined space, we have proposed a UAV with a passive rotating spherical shell (PRSS UAV); The UAV and the spherical shell are connected by a 3DOF gimbal mechanism to allow them to rotate in all directions independently, so that the UAV can maintain its flight stability during a collision with an obstacle because only the shell is disturbed and rotated. To apply the PRSS UAV into real-world missions, we have to carefully choose many design parameters such as weight, structure, diameter, strength of the spherical shell, axis configuration of the gimbal, and model of the UAV. In this paper, we propose a design strategy for applying the concept of the PRSS mechanism, focusing on disaster response and infrastructure inspection. We also demonstrate the validity of this approach by the successful result of quantitative experiments and practical field tests.",
"title": ""
},
{
"docid": "c9d1b064e140601eceea2803037e4b81",
"text": "This paper presents a simulated design of millimeter wave square patch antenna 1×6 array on silicon and Roger RO4003 substrate for prominent multiple bands i.e. 58GHz-60GHz, 65GHz-68GHz, 72GHz-77GHz. Designed antenna can serve 5G cellular network as well as advance device-to-device (D2D) network which is special feature of 5G communication system to reduce end-to-end latency and to implement Mission Critical Push-To-Talk Communication (MCPTT) and Vehicle-to-Anything (V2X) Communication. Designed antenna has peakgain of 9 dB and very high efficiency. Return loss for given bands at their resonant frequencies are as low as -35dB and total bandwidth of 9.57 GHz. Silicon is used under feeding network to enhance the bandwidth and reduce the size of feeding network and low dielectric material under patch to reduce dielectric loss thus maintaining the efficiency. Symmetrical parallel feeding network is used to enhance gain. Inset fed with quarter wave transformers are used for feeding and matching, along with maintaining the conformity. A novel design is used to kill the spurious radiation due to feed network, thus shaping the radiation pattern for cellular application. Overall size of antenna is 6.7mm×30mm×1.2mm compatible with miniaturized devices and is printable.",
"title": ""
},
{
"docid": "f1c57270a908155954049ff06d33918b",
"text": "Volume 40 Number 11 November 2014 484 Health care organizations today are finding that simply providing a “good” health care experience is insufficient to meet patient expectations. Organizations must train staff to provide excellent customer service to all patients. Many patients have become savvy health care “consumers” and consider customer service when they evaluate the quality of care they receive. The challenge for health care organizations is that patients want and expect not only outstanding clinical interventions but also excellent customer service—on every single visit.1(p. 25) A growing body of evidence suggests that patient (including family) feedback can provide compelling opportunities for developing risk management and quality improvement strategies, as well as improving customer satisfaction.2–5 Research links patient dissatisfaction with malpractice claims and unnecessary expenses.6–10 Cold food, rude behavior, long waiting periods, and quality of care concerns should be taken seriously by hospital leadership not only because such attention is addressed in Joint Commission accreditation standards or required by the Centers for Medicare & Medicaid Services (CMS) but because it is the right thing to do. The Joint Commission standards speak to the collection of, response to, and documentation of complaints from hospital patients and their families,*11 and CMS deems a time frame of 7 days appropriate for resolution for most complaints, with 21 days for complex complaints.†12 In addition, in July 2008 Joint Commission Sentinel Event Alert 40 stated that disruptive and intimidating physician behavior toward patients and colleagues may lead to medical errors, poor patient satisfaction, preventable adverse outcomes Patient-Centered Care",
"title": ""
},
{
"docid": "fb66a74a7cb4aa27556b428e378353a8",
"text": "This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. Abstract—High-resolution radar sensors are able to resolve multiple measurements per object and therefore provide valuable information for vehicle environment perception. For instance, multiple measurements allow to infer the size of an object or to more precisely measure the object’s motion. Yet, the increased amount of data raises the demands on tracking modules: measurement models that are able to process multiple measurements for an object are necessary and measurement-toobject associations become more complex. This paper presents a new variational radar model for vehicles and demonstrates how this model can be incorporated in a Random-Finite-Setbased multi-object tracker. The measurement model is learned from actual data using variational Gaussian mixtures and avoids excessive manual engineering. In combination with the multiobject tracker, the entire process chain from the raw measurements to the resulting tracks is formulated probabilistically. The presented approach is evaluated on experimental data and it is demonstrated that data-driven measurement model outperforms a manually designed model.",
"title": ""
}
] |
scidocsrr
|
e928e50e7191ad2b7de5ae53d23205fe
|
Relational dynamic memory networks
|
[
{
"docid": "a32d6897d74397f5874cc116221af207",
"text": "A plausible definition of “reasoning” could be “algebraically manipulating previously acquired knowledge in order to answer a new question”. This definition covers first-order logical inference or probabilistic inference. It also includes much simpler manipulations commonly used to build large learning systems. For instance, we can build an optical character recognition system by first training a character segmenter, an isolated character recognizer, and a language model, using appropriate labelled training sets. Adequately concatenating these modules and fine tuning the resulting system can be viewed as an algebraic operation in a space of models. The resulting model answers a new question, that is, converting the image of a text page into a computer readable text. This observation suggests a conceptual continuity between algebraically rich inference systems, such as logical or probabilistic inference, and simple manipulations, such as the mere concatenation of trainable learning systems. Therefore, instead of trying to bridge the gap between machine learning systems and sophisticated “all-purpose” inference mechanisms, we can instead algebraically enrich the set of manipulations applicable to training systems, and build reasoning capabilities from the ground up.",
"title": ""
}
] |
[
{
"docid": "36d261d49f898664a6f42a84911a8b7c",
"text": "Items in real-world recommender systems exhibit certain hierarchical structures. Similarly, user preferences also present hierarchical structures. Recent studies show that incorporating the hierarchy of items or user preferences can improve the performance of recommender systems. However, hierarchical structures are often not explicitly available, especially those of user preferences. Thus, there's a gap between the importance of hierarchies and their availability. In this paper, we investigate the problem of exploring the implicit hierarchical structures for recommender systems when they are not explicitly available. We propose a novel recommendation framework to bridge the gap, which enables us to explore the implicit hierarchies of users and items simultaneously. We then extend the framework to integrate explicit hierarchies when they are available, which gives a unified framework for both explicit and implicit hierarchical structures. Experimental results on real-world datasets demonstrate the effectiveness of the proposed framework by incorporating implicit and explicit structures.",
"title": ""
},
{
"docid": "bb5f748fa34ddc91389fb22ad8c1d163",
"text": "Hate speech detection on Twitter is critical for applications like controversial event extraction, building AI chatterbots, content recommendation, and sentiment analysis. We define this task as being able to classify a tweet as racist, sexist or neither. The complexity of the natural language constructs makes this task very challenging. We perform extensive experiments with multiple deep learning architectures to learn semantic word embeddings to handle this complexity. Our experiments on a benchmark dataset of 16K annotated tweets show that such deep learning methods outperform state-of-the-art char/word n-gram methods by ∼18 F1 points.",
"title": ""
},
{
"docid": "bfd946e8b668377295a1672a7bb915a3",
"text": "Code-Mixing is a frequently observed phenomenon in social media content generated by multi-lingual users. The processing of such data for linguistic analysis as well as computational modelling is challenging due to the linguistic complexity resulting from the nature of the mixing as well as the presence of non-standard variations in spellings and grammar, and transliteration. Our analysis shows the extent of Code-Mixing in English-Hindi data. The classification of Code-Mixed words based on frequency and linguistic typology underline the fact that while there are easily identifiable cases of borrowing and mixing at the two ends, a large majority of the words form a continuum in the middle, emphasizing the need to handle these at different levels for automatic processing of the data.",
"title": ""
},
{
"docid": "6514ddb39c465a8ca207e24e60071e7f",
"text": "The psychometric properties and clinical utility of the Separation Anxiety Avoidance Inventory, child and parent version (SAAI-C/P) were examined in two studies. The aim of the SAAI, a self- and parent-report measure, is to evaluate the avoidance relating to separation anxiety disorder (SAD) situations. In the first study, a school sample of 384 children and their parents (n = 279) participated. In the second study, 102 children with SAD and 35 children with other anxiety disorders (AD) were investigated. In addition, 93 parents of children with SAD, and 35 parents of children with other AD participated. A two-factor structure was confirmed by confirmatory factor analysis. The SAAI-C and SAAI-P demonstrated good internal consistency, test-retest reliability, as well as construct and discriminant validity. Furthermore, the SAAI was sensitive to treatment change. The parent-child agreement was substantial. Overall, these results provide support for the use of the SAAI-C/P version in clinical and research settings.",
"title": ""
},
{
"docid": "8f29de514e2a266a02be4b75d62be44f",
"text": "In this work, we apply word embeddings and neural networks with Long Short-Term Memory (LSTM) to text classification problems, where the classification criteria are decided by the context of the application. We examine two applications in particular. The first is that of Actionability, where we build models to classify social media messages from customers of service providers as Actionable or Non-Actionable. We build models for over 30 different languages for actionability, and most of the models achieve accuracy around 85%, with some reaching over 90% accuracy. We also show that using LSTM neural networks with word embeddings vastly outperform traditional techniques. Second, we explore classification of messages with respect to political leaning, where social media messages are classified as Democratic or Republican. The model is able to classify messages with a high accuracy of 87.57%. As part of our experiments, we vary different hyperparameters of the neural networks, and report the effect of such variation on the accuracy. These actionability models have been deployed to production and help company agents provide customer support by prioritizing which messages to respond to. The model for political leaning has been opened and made available for wider use.",
"title": ""
},
{
"docid": "2afbb4e8963b9e6953fd6f7f8c595c06",
"text": "Large-scale linguistically annotated corpora have played a crucial role in advancing the state of the art of key natural language technologies such as syntactic, semantic and discourse analyzers, and they serve as training data as well as evaluation benchmarks. Up till now, however, most of the evaluation has been done on monolithic corpora such as the Penn Treebank, the Proposition Bank. As a result, it is still unclear how the state-of-the-art analyzers perform in general on data from a variety of genres or domains. The completion of the OntoNotes corpus, a large-scale, multi-genre, multilingual corpus manually annotated with syntactic, semantic and discourse information, makes it possible to perform such an evaluation. This paper presents an analysis of the performance of publicly available, state-of-the-art tools on all layers and languages in the OntoNotes v5.0 corpus. This should set the benchmark for future development of various NLP components in syntax and semantics, and possibly encourage research towards an integrated system that makes use of the various layers jointly to improve overall performance.",
"title": ""
},
{
"docid": "12a34678fa46825e11944f317fdd4977",
"text": "The purpose of a distributed file system (DFS) is to allow users of physically distributed computers to share data and storage resources by using a common file system. A typical configuration for a DFS is a collection of workstations and mainframes connected by a local area network (LAN). A DFS is implemented as part of the operating system of each of the connected computers. This paper establishes a viewpoint that emphasizes the dispersed structure and decentralization of both data and control in the design of such systems. It defines the concepts of transparency, fault tolerance, and scalability and discusses them in the context of DFSs. The paper claims that the principle of distributed operation is fundamental for a fault tolerant and scalable DFS design. It also presents alternatives for the semantics of sharing and methods for providing access to remote files. A survey of contemporary UNIX-based systems, namely, UNIX United, Locus, Sprite, Sun's Network File System, and ITC's Andrew, illustrates the concepts and demonstrates various implementations and design alternatives. Based on the assessment of these systems, the paper makes the point that a departure from the extending centralized file systems over a communication network is necessary to accomplish sound distributed file system design.",
"title": ""
},
{
"docid": "21025b37c5c172399c63148f1bfa49ab",
"text": "Buffer overflows belong to the most common class of attacks on today’s Internet. Although stack-based variants are still by far more frequent and well-understood, heap-based overflows have recently gained more attention. Several real-world exploits have been published that corrupt heap management information and allow arbitrary code execution with the privileges of the victim process. This paper presents a technique that protects the heap management information and allows for run-time detection of heap-based overflows. We discuss the structure of these attacks and our proposed detection scheme that has been implemented as a patch to the GNU Lib C. We report the results of our experiments, which demonstrate the detection effectiveness and performance impact of our approach. In addition, we discuss different mechanisms to deploy the memory protection.",
"title": ""
},
{
"docid": "2363f0f9b50bc2ebbccb0746bb6b1080",
"text": "This communication presents a wideband, dual-polarized Vivaldi antenna or tapered slot antenna with over a decade (10.7:1) of bandwidth. The dual-polarized antenna structure is achieved by inserting two orthogonal Vivaldi antennas in a cross-shaped form without a galvanic contact. The measured -10 dB impedance bandwidth (S11) is approximately from 0.7 up to 7.30 GHz, corresponding to a 166% relative frequency bandwidth. The isolation (S21) between the antenna ports is better than 30 dB, and the measured maximum gain is 3.8-11.2 dB at the aforementioned frequency bandwidth. Orthogonal polarizations have the same maximum gain within the 0.7-3.6 GHz band, and a slight variation up from 3.6 GHz. The cross-polarization discrimination (XPD) is better than 19 dB across the measured 0.7-6.0 GHz frequency bandwidth, and better than 25 dB up to 4.5 GHz. The measured results are compared with the numerical ones in terms of S-parameters, maximum gain, and XPD.",
"title": ""
},
{
"docid": "2ca5118d8f4402ed1a2d1c26fbcf9f53",
"text": "Weakly supervised data is an important machine learning data to help improve learning performance. However, recent results indicate that machine learning techniques with the usage of weakly supervised data may sometimes cause performance degradation. Safely leveraging weakly supervised data is important, whereas there is only very limited effort, especially on a general formulation to help provide insight to guide safe weakly supervised learning. In this paper we present a scheme that builds the final prediction results by integrating several weakly supervised learners. Our resultant formulation brings two advantages. i) For the commonly used convex loss functions in both regression and classification tasks, safeness guarantees exist under a mild condition; ii) Prior knowledge related to the weights of base learners can be embedded in a flexible manner. Moreover, the formulation can be addressed globally by simple convex quadratic or linear program efficiently. Experiments on multiple weakly supervised learning tasks such as label noise learning, domain adaptation and semi-supervised learning validate the effectiveness.",
"title": ""
},
{
"docid": "56ed889e2e7c359393f847f8f45e9bf1",
"text": "In culture analytics, it is important to ask fundamental questions that address salient characteristics of collective human behavior. This paper explores how analyzing cooking recipes in aggregate and at scale identifies these characteristics in the cooking culture, and answer fundamental questions like 'what makes a chocolate chip cookie a chocolate chip cookie?'. Aspiring cooks, professional chefs and cooking hobbyists share their recipes online resulting in thousands of different procedural instructions towards a shared goal. However, existing approaches focus merely on analysis at the ingredient level, for example, extracting ingredient information from individual recipes. We introduce RecipeScape, a prototype interface which supports visually querying, browsing and comparing cooking recipes at scale. We also present the underlying computational pipeline of RecipeScape that scrapes recipes online, extracts their ingredient and instruction information, constructs a graphical representation, and computes similarity between pairs of recipes.",
"title": ""
},
{
"docid": "7f4b27422520ad678dd2f5f658ffebc3",
"text": "We present a generic framework to make wrapper induction algorithms tolerant to noise in the training data. This enables us to learn wrappers in a completely unsupervised manner from automatically and cheaply obtained noisy training data, e.g., using dictionaries and regular expressions. By removing the site-level supervision that wrapper-based techniques require, we are able to perform information extraction at web-scale, with accuracy unattained with existing unsupervised extraction techniques. Our system is used in production at Yahoo! and powers live applications.",
"title": ""
},
{
"docid": "4afa269cb8ff0fb4b90f3fe5ddcd0675",
"text": "Sleep specialists often conduct manual sleep stage scoring by visually inspecting the patient’s neurophysiological signals collected at sleep labs. This is, generally, a very difficult, tedious and time-consuming task. The limitations of manual sleep stage scoring have escalated the demand for developing Automatic Sleep Stage Classification (ASSC) systems. Sleep stage classification refers to identifying the various stages of sleep and is a critical step in an effort to assist physicians in the diagnosis and treatment of related sleep disorders. The aim of this paper is to survey the progress and challenges in various existing Electroencephalogram (EEG) signal-based methods used for sleep stage identification at each phase; including pre-processing, feature extraction and classification; in an attempt to find the research gaps and possibly introduce a reasonable solution. Many of the prior and current related studies use multiple EEG channels, and are based on 30 s or 20 s epoch lengths which affect the feasibility and speed of ASSC for real-time applications. Thus, in this paper, we also present a novel and efficient technique that can be implemented in an embedded hardware device to identify sleep stages using new statistical features applied to 10 s epochs of single-channel EEG signals. In this study, the PhysioNet Sleep European Data Format (EDF) Database was used. The proposed methodology achieves an average classification sensitivity, specificity and accuracy of 89.06%, 98.61% and 93.13%, respectively, when the decision tree classifier is applied. Finally, our new method is compared with those in recently published studies, which reiterates the high classification accuracy performance.",
"title": ""
},
{
"docid": "36d79b2b2640d1b2ac7f8ef057abc75c",
"text": "Published scientific articles are linked together into a graph, the citation graph, through their citations. This paper explores the notion of similarity based on connectivity alone, and proposes several algorithms to quantify it. Our metrics take advantage of the local neighborhoods of the nodes in the citation graph. Two variants of link-based similarity estimation between two nodes are described, one based on the separate local neighborhoods of the nodes, and another based on the joint local neighborhood expanded from both nodes at the same time. The algorithms are implemented and evaluated on a subgraph of the citation graph of computer science in a retrieval context. The results are compared with text-based similarity, and demonstrate the complementarity of link-based and text-based retrieval.",
"title": ""
},
{
"docid": "5f01cb5c34ac9182f6485f70d19101db",
"text": "Gastroeophageal reflux is a condition in which the acidified liquid content of the stomach backs up into the esophagus. The antiacid magaldrate and prokinetic domperidone are two drugs clinically used for the treatment of gastroesophageal reflux symptoms. However, the evidence of a superior effectiveness of this combination in comparison with individual drugs is lacking. A double-blind, randomized and comparative clinical trial study was designed to characterize the efficacy and safety of a fixed dose combination of magaldrate (800 mg)/domperidone (10 mg) against domperidone alone (10 mg), in patients with gastroesophageal reflux symptoms. One hundred patients with gastroesophageal reflux diagnosed by Carlsson scale were randomized to receive a chewable tablet of a fixed dose of magaldrate/domperidone combination or domperidone alone four times each day during a month. Magaldrate/domperidone combination showed a superior efficacy to decrease global esophageal (pyrosis, regurgitation, dysphagia, hiccup, gastroparesis, sialorrhea, globus pharyngeus and nausea) and extraesophageal (chronic cough, hoarseness, asthmatiform syndrome, laryngitis, pharyngitis, halitosis and chest pain) reflux symptoms than domperidone alone. In addition, magaldrate/domperidone combination improved in a statistically manner the quality of life of patients with gastroesophageal reflux respect to monotherapy, and more patients perceived the combination as a better treatment. Both treatments were well tolerated. Data suggest that oral magaldrate/domperidone mixture could be a better option in the treatment of gastroesophageal reflux symptoms than only domperidone.",
"title": ""
},
{
"docid": "dd82e1c54a2b73e98788eb7400600be3",
"text": "Supernovae Type-Ia (SNeIa) play a significant role in exploring the history of the expansion of the Universe, since they are the best-known standard candles with which we can accurately measure the distance to the objects. Finding large samples of SNeIa and investigating their detailed characteristics has become an important issue in cosmology and astronomy. Existing methods relied on a photometric approach that first measures the luminance of supernova candidates precisely and then fits the results to a parametric function of temporal changes in luminance. However, it inevitably requires a lot of observations and complex luminance measurements. In this work, we present a novel method for detecting SNeIa simply from single-shot observation images without any complex measurements, by effectively integrating the state-of-the-art computer vision methodology into the standard photometric approach. Experimental results show the effectiveness of the proposed method and reveal classification performance comparable to existing photometric methods with many observations.",
"title": ""
},
{
"docid": "fe116849575dd91759a6c1ef7ed239f3",
"text": "We have recently seen many successful applications of recurrent neural networks (RNNs) on electronic medical records (EMRs), which contain histories of patients' diagnoses, medications, and other various events, in order to predict the current and future states of patients. Despite the strong performance of RNNs, it is often challenging for users to understand why the model makes a particular prediction. Such black-box nature of RNNs can impede its wide adoption in clinical practice. Furthermore, we have no established methods to interactively leverage users' domain expertise and prior knowledge as inputs for steering the model. Therefore, our design study aims to provide a visual analytics solution to increase interpretability and interactivity of RNNs via a joint effort of medical experts, artificial intelligence scientists, and visual analytics researchers. Following the iterative design process between the experts, we design, implement, and evaluate a visual analytics tool called RetainVis, which couples a newly improved, interpretable, and interactive RNN-based model called RetainEX and visualizations for users' exploration of EMR data in the context of prediction tasks. Our study shows the effective use of RetainVis for gaining insights into how individual medical codes contribute to making risk predictions, using EMRs of patients with heart failure and cataract symptoms. Our study also demonstrates how we made substantial changes to the state-of-the-art RNN model called RETAIN in order to make use of temporal information and increase interactivity. This study will provide a useful guideline for researchers that aim to design an interpretable and interactive visual analytics tool for RNNs.",
"title": ""
},
{
"docid": "2effb3276d577d961f6c6ad18a1e7b3e",
"text": "This paper extends the recovery of structure and motion to im age sequences with several independently moving objects. The mot ion, structure, and camera calibration are all a-priori unknown. The fundamental constraint that we introduce is that multiple motions must share the same camer parameters. Existing work on independent motions has not employed this constr ai t, and therefore has not gained over independent static-scene reconstructi ons. We show how this constraint leads to several new results in st ructure and motion recovery, where Euclidean reconstruction becomes pos ible in the multibody case, when it was underconstrained for a static scene. We sho w how to combine motions of high-relief, low-relief and planar objects. Add itionally we show that structure and motion can be recovered from just 4 points in th e uncalibrated, fixed camera, case. Experiments on real and synthetic imagery demonstrate the v alidity of the theory and the improvement in accuracy obtained using multibody an alysis.",
"title": ""
},
{
"docid": "eb9f859b8a8fe6ae9b98638610564a94",
"text": "In this paper, we quantify the effectiveness of third-party tracker blockers on a large scale. First, we analyze the architecture of various state-of-the-art blocking solutions and discuss the advantages and disadvantages of each method. Second, we perform a two-part measurement study on the effectiveness of popular tracker-blocking tools. Our analysis quantifies the protection offered against trackers present on more than 100,000 popular websites and 10,000 popular Android applications. We provide novel insights into the ongoing arms race between trackers and developers of blocking tools as well as which tools achieve the best results under what circumstances. Among others, we discover that rule-based browser extensions outperform learning-based ones, trackers with smaller footprints are more successful at avoiding being blocked, and CDNs pose a major threat towards the future of tracker-blocking tools. Overall, the contributions of this paper advance the field of web privacy by providing not only the largest study to date on the effectiveness of tracker-blocking tools, but also by highlighting the most pressing challenges and privacy issues of third-party tracking.",
"title": ""
}
] |
scidocsrr
|
057d36e9f3be2ee9fabf4775afcf0a16
|
The effect of author set size and data size in authorship attribution
|
[
{
"docid": "6d227bbf8df90274f44a26d9c269c663",
"text": "Text categorization is a fundamental task in document processing, allowing the automated handling of enormous streams of documents in electronic form. One difficulty in handling some classes of documents is the presence of different kinds of textual errors, such as spelling and grammatical errors in email, and character recognition errors in documents that come through OCR. Text categorization must work reliably on all input, and thus must tolerate some level of these kinds of problems. We describe here an N-gram-based approach to text categorization that is tolerant of textual errors. The system is small, fast and robust. This system worked very well for language classification, achieving in one test a 99.8% correct classification rate on Usenet newsgroup articles written in different languages. The system also worked reasonably well for classifying articles from a number of different computer-oriented newsgroups according to subject, achieving as high as an 80% correct classification rate. There are also several obvious directions for improving the system’s classification performance in those cases where it did not do as well. The system is based on calculating and comparing profiles of N-gram frequencies. First, we use the system to compute profiles on training set data that represent the various categories, e.g., language samples or newsgroup content samples. Then the system computes a profile for a particular document that is to be classified. Finally, the system computes a distance measure between the document’s profile and each of the category profiles. The system selects the category whose profile has the smallest distance to the document’s profile. The profiles involved are quite small, typically 10K bytes for a category training set, and less than 4K bytes for an individual document. Using N-gram frequency profiles provides a simple and reliable way to categorize documents in a wide range of classification tasks.",
"title": ""
},
{
"docid": "a60d79008bfb7cccee262667b481d897",
"text": "It is well known that utterances convey a great deal of information about the speaker in addition to their semantic content. One such type of information consists of cues to the speaker’s personality traits, the most fundamental dimension of variation between humans. Recent work explores the automatic detection of other types of pragmatic variation in text and conversation, such as emotion, deception, speaker charisma, dominance, point of view, subjectivity, opinion and sentiment. Personality affects these other aspects of linguistic production, and thus personality recognition may be useful for these tasks, in addition to many other potential applications. However, to date, there is little work on the automatic recognition of personality traits. This article reports experimental results for recognition of all Big Five personality traits, in both conversation and text, utilising both self and observer ratings of personality. While other work reports classification results, we experiment with classification, regression and ranking models. For each model, we analyse the effect of different feature sets on accuracy. Results show that for some traits, any type of statistical model performs significantly better than the baseline, but ranking models perform best overall. We also present an experiment suggesting that ranking models are more accurate than multi-class classifiers for modelling personality. In addition, recognition models trained on observed personality perform better than models trained using selfreports, and the optimal feature set depends on the personality trait. A qualitative analysis of the learned models confirms previous findings linking language and personality, while revealing many new linguistic markers.",
"title": ""
},
{
"docid": "7f652be9bde8f47d166e7bbeeb3a535b",
"text": "One of the problems often associated with online anonymity is that it hinders social accountability, as substantiated by the high levels of cybercrime. Although identity cues are scarce in cyberspace, individuals often leave behind textual identity traces. In this study we proposed the use of stylometric analysis techniques to help identify individuals based on writing style. We incorporated a rich set of stylistic features, including lexical, syntactic, structural, content-specific, and idiosyncratic attributes. We also developed the Writeprints technique for identification and similarity detection of anonymous identities. Writeprints is a Karhunen-Loeve transforms-based technique that uses a sliding window and pattern disruption algorithm with individual author-level feature sets. The Writeprints technique and extended feature set were evaluated on a testbed encompassing four online datasets spanning different domains: email, instant messaging, feedback comments, and program code. Writeprints outperformed benchmark techniques, including SVM, Ensemble SVM, PCA, and standard Karhunen-Loeve transforms, on the identification and similarity detection tasks with accuracy as high as 94% when differentiating between 100 authors. The extended feature set also significantly outperformed a baseline set of features commonly used in previous research. Furthermore, individual-author-level feature sets generally outperformed use of a single group of attributes.",
"title": ""
}
] |
[
{
"docid": "e6922a113d619784bd902c06863b5eeb",
"text": "Brake Analysis and NVH (Noise, Vibration and Harshness) Optimization have become critically important areas of application in the Automotive Industry. Brake Noise and Vibration costs approximately $1Billion/year in warranty work in Detroit alone. NVH optimization is now increasingly being used to predict the vehicle tactile and acoustic responses in relation to the established targets for design considerations. Structural optimization coupled with frequency response analysis is instrumental in driving the design process so that the design targets are met in a timely fashion. Usual design targets include minimization of vehicle weight, the adjustment of fundamental eigenmodes and the minimization of acoustic pressure or vibration at selected vehicle locations. Both, Brake Analysis and NVH Optimization are computationally expensive analyses involving eigenvalue calculations. From a computational sense and the viewpoint of MSC.Nastran, brake analysis exercises the CEAD (Complex Eigenvalue Analysis Dmap) module, while NVH optimization invokes the DSADJ (Design Sensitivity using ADJoint method DMAP) module. In this paper, two automotive applications are presented to demonstrate the performance improvements of the CEAD and DSADJ modules on NEC vector-parallel supercomputers. Dramatic improvements in the DSADJ module resulting in approx. 8-9 fold performance improvement as compared to MSC.Nastran V70 were observed for NVH optimization. Also, brake simulations and experiences at General Motors will be presented. This analysis method has been successfully applied to 4 different programs at GM and the simulation results were consistent with laboratory experiments on test vehicles.",
"title": ""
},
{
"docid": "5efebde0526dbb7015ecef066b76d1a9",
"text": "Recent advances in mixed-reality technologies have renewed interest in alternative modes of communication for human-robot interaction. However, most of the work in this direction has been confined to tasks such as teleoperation, simulation or explication of individual actions of a robot. In this paper, we will discuss how the capability to project intentions affect the task planning capabilities of a robot. Specifically, we will start with a discussion on how projection actions can be used to reveal information regarding the future intentions of the robot at the time of task execution. We will then pose a new planning paradigm - projection-aware planning - whereby a robot can trade off its plan cost with its ability to reveal its intentions using its projection actions. We will demonstrate each of these scenarios with the help of a joint human-robot activity using the HoloLens.",
"title": ""
},
{
"docid": "2615f2f66adeaf1718d7afa5be3b32b1",
"text": "In this paper, an advanced design of an Autonomous Underwater Vehicle (AUV) is presented. The design is driven only by four water pumps. The different power combinations of the four motors provides the force and moment for propulsion and maneuvering. No control surfaces are needed in this design, which make the manufacturing cost of such a vehicle minimal and more reliable. Based on the propulsion method of the vehicle, a nonlinear AUV dynamic model is studied. This nonlinear model is linearized at the operation point. A control strategy of the AUV is proposed including attitude control and auto-pilot design. Simulation results for the attitude control loop are presented to validate this approach.",
"title": ""
},
{
"docid": "1749bfd76f18ced4a987c09013108cbf",
"text": "The mm-Wave bands defined as the new radio in the fifth generation (5G) mobile networks would decrease the dimension of the antenna into the scale of package level. In this study, a patch antenna array with stacked patches was designed for a wider operation frequency band than a typical patch. By considering a better electrical performance of the antenna in package (AiP), an unbalanced substrate of 4-layer metal stack-up within the processing capacity is proposed in this paper. The proposed unbalanced substrate structure is more elegant than the conventional substrate structure because of fewer substrate layers. The electrical and dimensional data are collected and analyzed. The designed patch antenna in this paper shows good correlations between simulations and measurements. The measured results show that the 1×4 patch array achieves a bandwidth of about 15.4 % with -10 dB return loss and gain of 10.8 dBi.",
"title": ""
},
{
"docid": "7867544be1b36ffab85b02c63cb03922",
"text": "In this paper a general theory of multistage decimators and interpolators for sampling rate reduction and sampling rate increase is presented. A set of curves and the necessary relations for optimally designing multistage decimators is also given. It is shown that the processes of decimation and interpolation are duals and therefore the same set of design curves applies to both problems. Further, it is shown that highly efficient implementations of narrow-band finite impulse response (FIR) fiiters can be obtained by cascading the processes of decimation and interpolation. Examples show that the efficiencies obtained are comparable to those of recursive elliptic filter designs.",
"title": ""
},
{
"docid": "ae3e9bf485d4945af625fca31eaedb76",
"text": "This document describes concisely the ubiquitous class of exponential family distributions met in statistics. The first part recalls definitions and summarizes main properties and duality with Bregman divergences (all proofs are skipped). The second part lists decompositions and related formula of common exponential family distributions. We recall the Fisher-Rao-Riemannian geometries and the dual affine connection information geometries of statistical manifolds. It is intended to maintain and update this document and catalog by adding new distribution items. See the jMEF library, a Java package for processing mixture of exponential families. Available for download at http://www.lix.polytechnique.fr/~nielsen/MEF/ École Polytechnique (France) and Sony Computer Science Laboratories Inc. (Japan). École Polytechnique (France).",
"title": ""
},
{
"docid": "6f31b0ba60dccb6f1c4ac3e4161f8a44",
"text": "In this work, we propose an alternative solution for parallel wave generation by WaveNet. In contrast to parallel WaveNet (Oord et al., 2018), we distill a Gaussian inverse autoregressive flow from the autoregressive WaveNet by minimizing a novel regularized KL divergence between their highly-peaked output distributions. Our method computes the KL divergence in closed-form, which simplifies the training algorithm and provides very efficient distillation. In addition, we propose the first text-to-wave neural architecture for speech synthesis, which is fully convolutional and enables fast end-to-end training from scratch. It significantly outperforms the previous pipeline that connects a text-to-spectrogram model to a separately trained WaveNet (Ping et al., 2018). We also successfully distill a parallel waveform synthesizer conditioned on the hidden representation in this end-to-end model. 2",
"title": ""
},
{
"docid": "afdc57b5d573e2c99c73deeef3c2fd5f",
"text": "The purpose of this article is to consider oral reading fluency as an indicator of overall reading competence. We begin by examining theoretical arguments for supposing that oral reading fluency may reflect overall reading competence. We then summarize several studies substantiating this phenomenon. Next, we provide an historical analysis of the extent to which oral reading fluency has been incorporated into measurement approaches during the past century. We conclude with recommendations about the assessment of oral reading fluency for research and practice.",
"title": ""
},
{
"docid": "553eb49b292b5edb4b53953701410a7d",
"text": "We review the most important mathematical models and algorithms developed for the exact solution of the one-dimensional bin packing and cutting stock problems, and experimentally evaluate, on state-of-the art computers, the performance of the main available software tools.",
"title": ""
},
{
"docid": "c938996e79711cae64bdcc23d7e3944b",
"text": "Decreased antimicrobial efficiency has become a global public health issue. The paucity of new antibacterial drugs is evident, and the arsenal against infectious diseases needs to be improved urgently. The selection of plants as a source of prototype compounds is appropriate, since plant species naturally produce a wide range of secondary metabolites that act as a chemical line of defense against microorganisms in the environment. Although traditional approaches to combat microbial infections remain effective, targeting microbial virulence rather than survival seems to be an exciting strategy, since the modulation of virulence factors might lead to a milder evolutionary pressure for the development of resistance. Additionally, anti-infective chemotherapies may be successfully achieved by combining antivirulence and conventional antimicrobials, extending the lifespan of these drugs. This review presents an updated discussion of natural compounds isolated from plants with chemically characterized structures and activity against the major bacterial virulence factors: quorum sensing, bacterial biofilms, bacterial motility, bacterial toxins, bacterial pigments, bacterial enzymes, and bacterial surfactants. Moreover, a critical analysis of the most promising virulence factors is presented, highlighting their potential as targets to attenuate bacterial virulence. The ongoing progress in the field of antivirulence therapy may therefore help to translate this promising concept into real intervention strategies in clinical areas.",
"title": ""
},
{
"docid": "a109c609d5cb72fb0e968cfc0f240f9a",
"text": "Indoor positioning systems (IPSes) can enable many location-based services in large indoor environments where GPS is not available or reliable. Mobile crowdsourcing is widely advocated as an effective way to construct IPS maps. This paper presents the first systematic study of security issues in crowd-sourced WiFi-based IPSes to promote security considerations in designing and deploying crowdsourced IPSes. We identify three attacks on crowdsourced WiFi-based IPSes and propose the corresponding countermeasures. The efficacy of the attacks and also our countermeasures are experimentally validated on a prototype system. The attacks and countermeasures can be easily extended to other crowdsourced IPSes.",
"title": ""
},
{
"docid": "eebca83626e8568e8b92019541466873",
"text": "There is a need for new spectrum access protocols that are opportunistic, flexible and efficient, yet fair. Game theory provides a framework for analyzing spectrum access, a problem that involves complex distributed decisions by independent spectrum users. We develop a cooperative game theory model to analyze a scenario where nodes in a multi-hop wireless network need to agree on a fair allocation of spectrum. We show that in high interference environments, the utility space of the game is non-convex, which may make some optimal allocations unachievable with pure strategies. However, we show that as the number of channels available increases, the utility space becomes close to convex and thus optimal allocations become achievable with pure strategies. We propose the use of the Nash Bargaining Solution and show that it achieves a good compromise between fairness and efficiency, using a small number of channels. Finally, we propose a distributed algorithm for spectrum sharing and show that it achieves allocations reasonably close to the Nash Bargaining Solution.",
"title": ""
},
{
"docid": "5725dfa487c061612cb782139c9fd2de",
"text": "A microgrid is defined as a local electric power distribution system with diverse distributed generation (DG), energy storage systems, and loads, which can operate as a part of the distribution system or when needed can operate in an islanded mode. Energy storage systems play a key role in improving security, stability, and power quality of the microgrid. During grid-connected mode, these storage units are charged from various DG sources as well as the main grid. During islanded mode, DG sources along with the storage units need to supply the load. Power electronic interfaces between the microgrid buses and the storage units should be able to detect the mode of operation, allow seamless transition between the modes, and allow power flow in both directions, while maintaining stability and power quality. An overview of bidirectional converter topologies relevant to microgrid energy storage application and their control strategies will be presented in this paper.",
"title": ""
},
{
"docid": "d72652b6ad54422e6864baccc88786a8",
"text": "Neisseria meningitidis is a major global pathogen that continues to cause endemic and epidemic human disease. Initial exposure typically occurs within the nasopharynx, where the bacteria can invade the mucosal epithelium, cause fulminant sepsis, and disseminate to the central nervous system, causing bacterial meningitis. Recently, Chamot-Rooke and colleagues1 described a unique virulence property of N. meningitidis in which the bacterial surface pili, after contact with host cells, undergo a modification that facilitates both systemic invasion and the spread of colonization to close contacts. Person-to-person spread of N. meningitidis can result in community epidemics of bacterial meningitis, with major consequences for public health. In resource-poor nations, cyclical outbreaks continue to result in high mortality and long-term disability, particularly in sub-Saharan Africa, where access to early diagnosis, antibiotic therapy, and vaccination is limited.2,3 An exclusively human pathogen, N. meningitidis uses several virulence factors to cause disease. Highly charged and hydrophilic capsular polysaccharides protect N. meningitidis from phagocytosis and complement-mediated bactericidal activity of the innate immune system. A family of proteins (called opacity proteins) on the bacterial outer membrane facilitate interactions with both epithelial and endothelial cells. These proteins are phase-variable — that is, the genome of the bacterium encodes related opacity proteins that are variably expressed, depending on environment, allowing the bacterium to adjust to rapidly changing environmental conditions. Lipooligosaccharide, analogous to the lipopolysaccharide of enteric gram-negative bacteria, contains a lipid A moiety with endotoxin activity that promotes the systemic sepsis encountered clinically. However, initial attachment to host cells is primarily mediated by filamentous organelles referred to as type IV pili, which are common to many bacterial pathogens and unique in their ability to undergo both antigenic and phase variation. Within hours of attachment to the host endothelial cell, N. meningitidis induces the formation of protrusions in the plasma membrane of host cells that aggregate the bacteria into microcolonies and facilitate pili-mediated contacts between bacteria and between bacteria and host cells. After attachment and aggregation, N. meningitidis detaches from the aggregates to systemically invade the host, by means of a transcellular pathway that crosses the respiratory epithelium,4 or becomes aerosolized and spreads the colonization of new hosts (Fig. 1). Chamot-Rooke et al. dissected the molecular mechanism underlying this critical step of systemic invasion and person-to-person spread and reported that pathogenesis depends on a unique post-translational modification of the type IV pili. Using whole-protein mass spectroscopy, electron microscopy, and molecular modeling, they showed that the major component of N. meningitidis type IV pili (called PilE or pilin) undergoes an unusual post-translational modification by phosphoglycerol. Expression of pilin phosphotransferase, the enzyme that transfers phosphoglycerol onto pilin, is increased within 4 hours of meningococcus contact with host cells and modifies the serine residue at amino acid position 93 of pilin, altering the charge of the pilin structure and thereby destabilizing the pili bundles, reducing bacterial aggregation, and promoting detachment from the cell surface. Strains of N. 
meningitidis in which phosphoglycerol modification of pilin occurred had a greatly enhanced ability to cross epithelial monolayers, a finding that supports the view that this virulence property, which causes deaggregation, promotes both transmission to new hosts and systemic invasion. Although this new molecular understanding of N. meningitidis virulence in humans is provoc-",
"title": ""
},
{
"docid": "4e5c9901da9ee977d995dd4fd6b9b6bd",
"text": "kmlonolgpbqJrtsHu qNvwlyxzl{vw|~}ololyp | xolyxoqNv
J lgxgOnyc}g pAqNvwl lgrc p|HqJbxz|r rc|pb4|HYl xzHnzl}o}gpb |p'w|rmlypnoHpb0rpb }zqJOn pyxg |HqJOp c}&olypb%nov4|rrclgpbYlo%ys{|Xq|~qlo noxX}ozz|~}lz rlo|xgp4pb0|~} |3 loqNvwH J xzOpb0| p|HqJbxz|rr|pbw|~lmxzHnolo}o}gpb;}gsH}oqly ¡cqOv rpb }zqJOnm¢~p TrloHYly¤£;r¥qOv4XHv&noxX}ozz|~}lz |YxzH|Ynvwl}]vw|~l zlolyp¦}4nonolo}o}gbrp2 |p4s o lyxzlypbq |xzlo|~}^]p|~q§bxz|r4r|pbw|~lmxzHnolo}o}gpbHu ̈cq©c} Joqhlyp qNvwl]no|~}yl^qNvw|~qaqNvwl}llqOv4~} no|o4qJbxzl qNvwl&rtpbbc}oq§Nn pgHxg |HqJOp#qNvwlys%|xol Xlgrrpb«pxzlonoqJrts¦p r|xJYl2w|X¬g4l&q|Xgrclo}2J }oqh|HqJc}o qJOn};®v }&no|p |~¢l¦cq3 ̄=nybr°q]qh%|p|rsH±ylu bpXlgx}zqh|p|p%]xzl qNvwl«|XgrcqJsLJ&qOv4lo}l |Yxo|Xnov4lo}q HYlyr pYlyxgrtspw0rtpw~bc}oqJOn;zlvw|Nxg 2gp¦qNv c} 4|o4lyxou 3l rr Yl}ngxgNzl;| }g rlxgbrlzo|H}lo |oYxzH|Ynv q |Xq|~qlo rlo|xgp4pb0 rpbbc}oq§On^¢p TrcloHYlgT®v } |oYxzH|Ynv vw|~} ololgp}ovw ¡p2xL| ́p4bolyxLJ&q|~}o¢} qhno|4qJ xol¦pgxg|~q§Np p |nyrlo|xolgx2|p# xzl«xzlonq |~}ov Op cqNvwXq]|%noxo c}l p«wlyxxg |pnoly3μLl¶xzl}lgpwq¶|«Ylq|rlo«no|H}l }oqJ4s%J qOvbc} rclz|xgp4pw0lqNvwHL|YrtOlo qh4|xoq]J;}J4lolznv2qh|HHpb",
"title": ""
},
{
"docid": "0b8e7bd2935aaf7cec7f5b028e52f8cf",
"text": "We show that gradient descent converges to a local minimizer, almost surely with random initialization. This is proved by applying the Stable Manifold Theorem from dynamical systems theory.",
"title": ""
},
{
"docid": "62e867a7f60df8e0937a01cfc2d39738",
"text": "Schema mapping is used to transform data to a desired schema from data sources with different schemas. Manually writing complete schema mapping specifications requires a deep understanding of the source and target schemas, which can be burdensome for the user. Programming By Example (PBE) schema mapping methods allow the user to describe the schema mapping using data records. However, real data records are still harder to specify compared to other useful insights about the desired schema mapping the user might have. In this project, we develop a new schema mapping technique, Beaver, that enables an interaction model that gives the user more flexibility in describing the desired schema mapping. The end user is not limited to providing exact and complete target schema data examples but may also provide incomplete or ambiguous examples. Moreover, the user can provide other types of descriptions, like data type or value range, about the target schema. We design an explore-and-verify search-based algorithm to efficiently discover all satisfying schema mapping specifications. We implemented a prototype of our schema mapping technique and experimentally evaluated the efficiency of the system in handling traditional PBE schema mapping test cases, as well as our newly-proposed declarative schema mapping test cases. The experiment results show that the declarative queries, which we believe are easier for non-expert user to input, often cost around zero to five seconds more than the traditional PBE queries. This suggests we retain a system efficiency comparable to traditional PBE schema mapping systems.",
"title": ""
},
{
"docid": "b00ec93bf47aab14aa8ced69612fc39a",
"text": "In today’s increasingly rich material life, people are shifting their focus from the physical world to the spiritual world. In order to identify and care for people’s emotions, human-machine interaction systems have been created. The currently available human-machine interaction systems often support the interaction between human and robot under the line-of-sight (LOS) propagation environment, while most communications in terms of human-to-human and human-to-machine are non-LOS (NLOS). In order to break the limitation of the traditional human–machine interaction system, we propose the emotion communication system based on NLOS mode. Specifically, we first define the emotion as a kind of multimedia which is similar to voice and video. The information of emotion can not only be recognized, but can also be transmitted over a long distance. Then, considering the real-time requirement of the communications between the involved parties, we propose an emotion communication protocol, which provides a reliable support for the realization of emotion communications. We design a pillow robot speech emotion communication system, where the pillow robot acts as a medium for user emotion mapping. Finally, we analyze the real-time performance of the whole communication process in the scene of a long distance communication between a mother-child users’ pair, to evaluate the feasibility and effectiveness of emotion communications.",
"title": ""
},
{
"docid": "6b497321713a9725fef39b1f0e54acfa",
"text": "In today's time when data is generating by everyone at every moment, and the word is moving so fast with exponential growth of new technologies and innovations in all science and engineering domains, the age of big data is coming, and the potential of learning from this huge amount of data and from different sources is undoubtedly significant to uncover underlying structure and facilitate the development of more intelligent solution. Intelligence is around us, and the concept of big data and learning from it has existed since the emergence of the human being. In this article we focus on data from; sensors, images, and text, and we incorporate the principles of human intelligence; brain - body - environment, as a source of inspiration that allows us to put a new concept based on big data - machine learning--domain and pave the way for intelligent platform.",
"title": ""
},
{
"docid": "3cb255c8799252093d04d9fa24c52296",
"text": "Modern computing tasks such as real-time analytics require refresh of query results under high update rates. Incremental View Maintenance (IVM) approaches this problem by materializing results in order to avoid recomputation. IVM naturally induces a trade-off between the space needed to maintain the materialized results and the time used to process updates. In this paper, we show that the full materialization of results is a barrier for more general optimization strategies. In particular, we present a new approach for evaluating queries under updates. Instead of the materialization of results, we require a data structure that allows: (1) linear time maintenance under updates, (2) constant-delay enumeration of the output, (3) constant-time lookups in the output, while (4) using only linear space in the size of the database. We call such a structure a Dynamic Constant-delay Linear Representation (DCLR) for the query. We show that DYN, a dynamic version of the Yannakakis algorithm, yields DCLRs for the class of free-connex acyclic CQs. We show that this is optimal in the sense that no DCLR can exist for CQs that are not free-connex acyclic. Moreover, we identify a sub-class of queries for which DYN features constant-time update per tuple and show that this class is maximal. Finally, using the TPC-H and TPC-DS benchmarks, we experimentally compare DYN and a higher-order IVM (HIVM) engine. Our approach is not only more efficient in terms of memory consumption (as expected), but is also consistently faster in processing updates.",
"title": ""
}
] |
scidocsrr
|
41f1836cec72af4b0a39705851739d94
|
Representing and Querying Correlated Tuples in Probabilistic Databases
|
[
{
"docid": "172835b4eaaf987e93d352177fd583b1",
"text": "A new method is proposed for exploiting causal independencies in exact Bayesian network inference. A Bayesian network can be viewed as representing a factorization of a joint probability into the multiplication of a set of conditional probabilities. We present a notion of causal independence that enables one to further factorize the conditional probabilities into a combination of even smaller factors and consequently obtain a finer-grain factorization of the joint probability. The new formulation of causal independence lets us specify the conditional probability of a variable given its parents in terms of an associative and commutative operator, such as “or”, “sum” or “max”, on the contribution of each parent. We start with a simple algorithm VE for Bayesian network inference that, given evidence and a query variable, uses the factorization to find the posterior distribution of the query. We show how this algorithm can be extended to exploit causal independence. Empirical studies, based on the CPCS networks for medical diagnosis, show that this method is more efficient than previous methods and allows for inference in larger networks than previous algorithms.",
"title": ""
}
] |
[
{
"docid": "d552b6beeea587bc014a4c31cabee121",
"text": "Recent successes of neural networks in solving combinatorial problems and games like Go, Poker and others inspire further attempts to use deep learning approaches in discrete domains. In the field of automated planning, the most popular approach is informed forward search driven by a heuristic function which estimates the quality of encountered states. Designing a powerful and easily-computable heuristics however is still a challenging problem on many domains. In this paper, we use machine learning to construct such heuristic automatically. We train a neural network to predict a minimal number of moves required to solve a given instance of Rubik’s cube. We then use the trained network as a heuristic distance estimator with a standard forward-search algorithm and compare the results with other heuristics. Our experiments show that the learning approach is competitive with state-of-the-art and might be the best choice in some use-case scenarios.",
"title": ""
},
{
"docid": "555b07171f5305f7ae968d9a76d74ec3",
"text": "The production of lithium-ion (Li-ion) batteries has been continually increasing since their first introduction into the market in 1991 because of their excellent performance, which is related to their high specific energy, energy density, specific power, efficiency, and long life. Li-ion batteries were first used for consumer electronics products such as mobile phones, camcorders, and laptop computers, followed by automotive applications that emerged during the last decade and are still expanding, and finally industrial applications including energy storage. There are four promising cell chemistries considered for energy storage applications: 1) LiMn2O4/graphite cell chemistry uses low-cost materials that are naturally abundant; 2) LiNi1-X-Y2CoXAlYO2/graphite cell chemistry has high specific energy and long life; 3) LiFePO4/graphite (or carbon) cell chemistry has good safety characteristics; and 4) Li4Ti5O12 is used as the negative electrode material in Li-ion batteries with long life and good safety features. However, each of the cell chemistries has some disadvantages, and the development of these technologies is still in progress. Therefore, it is too early to predict which cell chemistry will be the main candidate for energy storage applications, and we have to remain vigilant with respect to trends in technological progress and also consider changes in economic and social conditions before this can be determined.",
"title": ""
},
{
"docid": "8e2407e6fc3e3b3e5f0aeb64eb842712",
"text": "Visual programming in 3D sounds much more appealing than programming in 2D, but what are its benefits? Here, University of Colorado Boulder educators discuss the differences between 2D and 3D regarding three concepts connecting computer graphics to computer science education: ownership, spatial thinking, and syntonicity.",
"title": ""
},
{
"docid": "85c942d932bce6e22a4e1e5b14cd678c",
"text": "The translation of legal texts cannot be done without regarding legal-cultural concepts and differences between legal systems. The level of equivalence of the terms depends on the extent of relatedness of the legal systems and not on that of the languages involved. This article aims at analyzing the aspects of translation equivalence (TE) in legal translation. First, it provides a theoretical framework focusing on legal translation from the existing perspectives. Then, different types of equivalence, especially functional along with its subcategories, namely, near-equivalence, partial equivalence and nonequivalence based on Šarčević (2000) categories are elucidated, and finally some desiderata for legal translators will be suggested.",
"title": ""
},
{
"docid": "24e380a79c5520a4f656ff2177d43dd7",
"text": "a r t i c l e i n f o Social media have increasingly become popular platforms for information dissemination. Recently, companies have attempted to take advantage of social advertising to deliver their advertisements to appropriate customers. The success of message propagation in social media depends greatly on the content relevance and the closeness of social relationships. In this paper, considering the factors of user preference, network influence , and propagation capability, we propose a diffusion mechanism to deliver advertising information over microblogging media. Our experimental results show that the proposed model could provide advertisers with suitable targets for diffusing advertisements continuously and thus efficiently enhance advertising effectiveness. In recent years, social media, such as Facebook, Twitter and Plurk, have flourished and raised much attention. Social media provide users with an excellent platform to share and receive information and give marketers a great opportunity to diffuse information through numerous populations. An overwhelming majority of mar-keters are using social media to market their businesses, and a significant 81% of these marketers indicate that their efforts in social media have generated effective exposure for their businesses [59]. With effective vehicles for understanding customer behavior and new hybrid elements of the promotion mix, social media allow enterprises to make timely contact with the end-consumer at relatively low cost and higher levels of efficiency [52]. Since the World Wide Web (Web) is now the primary message delivering medium between advertisers and consumers, it is a critical issue to find the best way to utilize on-line media for advertising purposes [18,29]. The effectiveness of advertisement distribution highly relies on well understanding the preference information of the targeted users. However, some implicit personal information of users, particularly the new users, may not be always obtainable to the marketers [23]. As users know more about their friends than marketers, the relations between the users become a natural medium and filter for message diffusion. Moreover, most people are willing to share their information with friends and are likely to be affected by the opinions of their friends [35,45]. Social advertising is a kind of recommendation system, of sharing information between friends. It takes advantage of the relation of users to conduct an advertising campaign. In 2010, eMarketer reported that 90% of consumers rely on recommendations from people they trust. In the same time, IDG Amplify indicated that the efficiency of social advertising is greater than the traditional …",
"title": ""
},
{
"docid": "0ac70da81cdb4d1eae3d2b475b857219",
"text": "Patients with coronoid process hyperplasia of the mandibular area are rare. The treatment of this disease is to increase the patient's mouth opening by surgery. There are various, but controversial, methods to treat it. We present a modified (gap) coronoidotomy procedure in detail and compare it with other conventional methods to treat coronoid process hyperplasia.",
"title": ""
},
{
"docid": "aa52a5764fc0b95e11d3088f7cdc7448",
"text": "Generative Adversarial Networks (GANs) have received wide attention in the machine learning field for their potential to learn high-dimensional, complex real data distribution. Specifically, they do not rely on any assumptions about the distribution and can generate real-like samples from latent space in a simple manner. This powerful property allows GANs to be applied to various applications such as image synthesis, image attribute editing, image translation, domain adaptation, and other academic fields. In this article, we discuss the details of GANs for those readers who are familiar with, but do not comprehend GANs deeply or who wish to view GANs from various perspectives. In addition, we explain how GANs operates and the fundamental meaning of various objective functions that have been suggested recently. We then focus on how the GAN can be combined with an autoencoder framework. Finally, we enumerate the GAN variants that are applied to various tasks and other fields for those who are interested in exploiting GANs for their research.",
"title": ""
},
{
"docid": "cfc27935a5d53d5c2c92847f4e200a9b",
"text": "Li Gao, Jia Wu, Hong Yang, Zhi Qiao, Chuan Zhou, Yue Hu Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China Quantum Computation & Intelligent Systems Centre, University of Technology Sydney, Australia MathWorks, Beijing, China Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China {gaoli, huyue}@iie.ac.cn, zhiqiao.ict@gmail.com, hong.yang@mathworks.cn, jia.wu@uts.edu.au",
"title": ""
},
{
"docid": "1fcdfd02a6ecb12dec5799d6580c67d4",
"text": "One of the major problems in developing countries is maintenance of roads. Well maintained roads contribute a major portion to the country's economy. Identification of pavement distress such as potholes and humps not only helps drivers to avoid accidents or vehicle damages, but also helps authorities to maintain roads. This paper discusses previous pothole detection methods that have been developed and proposes a cost-effective solution to identify the potholes and humps on roads and provide timely alerts to drivers to avoid accidents or vehicle damages. Ultrasonic sensors are used to identify the potholes and humps and also to measure their depth and height, respectively. The proposed system captures the geographical location coordinates of the potholes and humps using a global positioning system receiver. The sensed-data includes pothole depth, height of hump, and geographic location, which is stored in the database (cloud). This serves as a valuable source of information to the government authorities and vehicle drivers. An android application is used to alert drivers so that precautionary measures can be taken to evade accidents. Alerts are given in the form of a flash messages with an audio beep.",
"title": ""
},
{
"docid": "951c29150649a6ea8342b722bf39855c",
"text": "A method is proposed to enhance vascular structures within the framework of scale space theory. We combine a smooth vessel filter which is based on a geometrical analysis of the Hessian's eigensystem, with a non-linear anisotropic diffusion scheme. The amount and orientation of diffusion depend on the local vessel likeliness. Vessel enhancing diffusion (VED) is applied to patient and phantom data and compared to linear, regularized Perona-Malik, edge and coherence enhancing diffusion. The method performs better than most of the existing techniques in visualizing vessels with varying radii and in enhancing vessel appearance. A diameter study on phantom data shows that VED least affects the accuracy of diameter measurements. It is shown that using VED as a preprocessing step improves level set based segmentation of the cerebral vasculature, in particular segmentation of the smaller vessels of the vasculature.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "c8869d9a481a3d7100397788d4ced1fb",
"text": "E-commerce (electronic commerce or EC) is the buying and selling of goods and services, or the transmitting of funds or data online. E-commerce platforms come in many kinds, with global players such as Amazon, Airbnb, Alibaba, eBay, JD.com and platforms targeting specific markets such as Bol.com and Booking.com. Information retrieval has a natural role to play in e-commerce, especially in connecting people to goods and services. Information discovery in e-commerce concerns different types of search (exploratory search vs. lookup tasks), recommender systems, and natural language processing in e-commerce portals. Recently, the explosive popularity of e-commerce sites has made research on information discovery in e-commerce more important and more popular. There is increased attention for e-commerce information discovery methods in the community as witnessed by an increase in publications and dedicated workshops in this space. Methods for information discovery in e-commerce largely focus on improving the performance of e-commerce search and recommender systems, on enriching and using knowledge graphs to support e-commerce, and on developing innovative question-answering and bot-based solutions that help to connect people to goods and services. Below we describe why we believe that the time is right for an introductory tutorial on information discovery in e-commerce, the objectives of the proposed tutorial, its relevance, as well as more practical details, such as the format, schedule and support materials.",
"title": ""
},
{
"docid": "a70d064af5e8c5842b8ca04abc3fb2d6",
"text": "In the current scenario of cloud computing, heterogeneous resources are located in various geographical locations requiring security-aware resource management to handle security threats. However, existing techniques are unable to protect systems from security attacks. To provide a secure cloud service, a security-based resource management technique is required that manages cloud resources automatically and delivers secure cloud services. In this paper, we propose a self-protection approach in cloud resource management called SECURE, which offers self-protection against security attacks and ensures continued availability of services to authorized users. The performance of SECURE has been evaluated using SNORT. The experimental results demonstrate that SECURE performs effectively in terms of both the intrusion detection rate and false positive rate. Further, the impact of security on quality of service (QoS) has been analyzed.",
"title": ""
},
{
"docid": "17b1c31d88d56a09879c6b7ff2b10365",
"text": "In major league baseball, a hitter could have a long and productive career by maintaining a .300 average, that is, by getting a base hit 30% of the time. A great deal of money could be earned and fame accrued. Yet the other 70% of the time, this player would have failed. The vast majority of attempts to hit the ball would result in ‘‘making an out’’ and thus pose a potential threat to the player’s sense of personal worth and social regard. Like major league baseball players, people in contemporary society face innumerable failures and self‐threats. These include substandard performance on the job or in class, frustrated goals or aspirations, information challenging the validity of long‐held beliefs, illness, the defeat of one’s political party in an election or of one’s favorite sports team in a playoV, scientific evidence suggesting that one is engaging in risky health behavior, negative feedback at work or in school, rejection in a romantic relationship, real and perceived social slights, interpersonal and intergroup conflict, the misbehavior of one’s child, the loss of a loved one, and so on. In the course of a given day, the potential number of events that could threaten people’s ‘‘moral and adaptive adequacy’’—their sense of themselves as good, virtuous, successful, and able to control important life outcomes (Steele, 1988)—seems limitless and likely to exceed the small number of events that aYrm it. A major undertaking for most people is to sustain self‐integrity when faced with the inevitable setbacks and disappointments of daily life—the 70% of the time",
"title": ""
},
{
"docid": "330de15c472bd403f2572f3bdcce2d52",
"text": "Programmers repeatedly reuse code snippets. Retyping boilerplate code, and rediscovering how to correctly sequence API calls, programmers waste time. In this paper, we develop techniques that automatically synthesize code snippets upon a programmer’s request. Our approach is based on discovering snippets located in repositories; we mine repositories offline and suggest discovered snippets to programmers. Upon request, our synthesis procedure uses programmer’s current code to find the best fitting snippets, which are then presented to the programmer. The programmer can then either learn the proper API usage or integrate the synthesized snippets directly into her code. We call this approach interactive code snippet synthesis through repository mining. We show that this approach reduces the time spent developing code for 32% in our experiments.",
"title": ""
},
{
"docid": "433af389c7d6b874de387c23ba0e2f35",
"text": "Using cross-sectional time-series data for U.S. counties from 1977 to 1992, we find that allowing citizens to carry concealed weapons deters violent crimes and it appears to produce no increase in accidental deaths. If those states which did not have right-to-carry concealed gun provisions had adopted them in 1992, approximately 1,570 murders; 4,177 rapes; and over 60,000 aggravate assaults would have been avoided yearly. On the other hand, consistent with the notion of criminals responding to incentives, we find criminals substituting into property crimes involving stealth and where the probabilities of contact between the criminal and the victim are minimal. The largest population counties where the deterrence effect on violent crimes is greatest are where the substitution effect into property crimes is highest. Concealed handguns also have their greatest deterrent effect in the highest crime counties. Higher arrest and conviction rates consistently and dramatically reduce the crime rate. Consistent with other recent work (Lott, 1992b), the results imply that increasing the arrest rate, independent of the probability of eventual conviction, imposes a significant penalty on criminals. The estimated annual gain from allowing concealed handguns is at least $6.214 billion.",
"title": ""
},
{
"docid": "7704eb15f3c576e2575e18613ce312df",
"text": "Objects for detection usually have distinct characteristics in different sub-regions and different aspect ratios. However, in prevalent two-stage object detection methods, Region-of-Interest (RoI) features are extracted by RoI pooling with little emphasis on these translation-variant feature components. We present feature selective networks to reform the feature representations of RoIs by exploiting their disparities among sub-regions and aspect ratios. Our network produces the sub-region attention bank and aspect ratio attention bank for the whole image. The RoI-based sub-region attention map and aspect ratio attention map are selectively pooled from the banks, and then used to refine the original RoI features for RoI classification. Equipped with a lightweight detection subnetwork, our network gets a consistent boost in detection performance based on general ConvNet backbones (ResNet-101, GoogLeNet and VGG-16). Without bells and whistles, our detectors equipped with ResNet-101 achieve more than 3% mAP improvement compared to counterparts on PASCAL VOC 2007, PASCAL VOC 2012 and MS COCO datasets.",
"title": ""
},
{
"docid": "1bc1965682f757dcfa86936911855add",
"text": "Software-Defined Networking (SDN) introduces a new communication network management paradigm and has gained much attention recently. In SDN, a network controller overlooks and manages the entire network by configuring routing mechanisms for underlying switches. The switches report their status to the controller periodically, such as port statistics and flow statistics, according to their communication protocol. However, switches may contain vulnerabilities that can be exploited by attackers. A compromised switch may not only lose its normal functionality, but it may also maliciously paralyze the network by creating network congestions or packet loss. Therefore, it is important for the system to be able to detect and isolate malicious switches. In this work, we investigate a methodology for an SDN controller to detect compromised switches through real-time analysis of the periodically collected reports. Two types of malicious behavior of compromised switches are investigated: packet dropping and packet swapping. We proposed two anomaly detection algorithms to detect packet droppers and packet swappers. Our simulation results show that our proposed methods can effectively detect packet droppers and swappers. To the best of our knowledge, our work is the first to address malicious switches detection using statistics reports in SDN.",
"title": ""
},
{
"docid": "28d6661ca55f033480cb24b09083146d",
"text": "This paper describes the implementation of our three systems at SemEval-2007, for task 2 (word sense discrimination), task 5 (Chinese word sense disambiguation), and the first subtask in task 17 (English word sense disambiguation). For task 2, we applied a cluster validation method to estimate the number of senses of a target word in untagged data, and then grouped the instances of this target word into the estimated number of clusters. For both task 5 and task 17, We used the label propagation algorithm as the classifier for sense disambiguation. Our system at task 2 achieved 63.9% F-score under unsupervised evaluation, and 71.9% supervised recall with supervised evaluation. For task 5, our system obtained 71.2% micro-average precision and 74.7% macro-average precision. For the lexical sample subtask for task 17, our system achieved 86.4% coarsegrained precision and recall.",
"title": ""
},
{
"docid": "f7d64d093df1aa158636482af2dd7bff",
"text": "Vision-based Human activity recognition is becoming a trendy area of research due to its wide application such as security and surveillance, human–computer interactions, patients monitoring system, and robotics. In the past two decades, there are several publically available human action, and activity datasets are reported based on modalities, view, actors, actions, and applications. The objective of this survey paper is to outline the different types of video datasets and highlights their merits and demerits under practical considerations. Based on the available information inside the dataset we can categorise these datasets into RGB (Red, Green, and Blue) and RGB-D(depth). The most prominent challenges involved in these datasets are occlusions, illumination variation, view variation, annotation, and fusion of modalities. The key specification of these datasets is discussed such as resolutions, frame rate, actions/actors, background, and application domain. We have also presented the state-of-the-art algorithms in a tabular form that give the best performance on such datasets. In comparison with earlier surveys, our works give a better presentation of datasets on the well-organised comparison, challenges, and latest evaluation technique on existing datasets.",
"title": ""
}
] |
scidocsrr
|
9e163ed398cca51860be804c31b114e6
|
Modeling and simulation of cloud computing: A review
|
[
{
"docid": "e91b301d060ade5e21c287403e85bd19",
"text": "Computing today is shifting from hosting services in servers owned by individual organizations to data centres providing resources to a number of organizations on a shared infrastructure. Managing such a data centre presents a unique set of goals and challenges. Through the use of virtualization, multiple users can run isolated virtual machines (VMs) on a single physical host, allowing for a higher server utilization. By consolidating VMs onto fewer physical hosts, infrastructure costs can be reduced in terms of the number of servers required, power consumption, and maintenance. To meet constantly changing workload levels, running VMs may need to be migrated (moved) to another physical host. Algorithms to perform dynamic VM reallocation, as well as dynamic resource provisioning on a single host, are open research problems. Experimenting with such algorithms on the data centre scale is impractical. Thus, there is a need for simulation tools to allow rapid development and evaluation of data centre management techniques. We present DCSim, an extensible simulation framework for simulating a data centre hosting an Infrastructure as a Service cloud. We evaluate the scalability of DCSim, and demonstrate its usefulness in evaluating VM management techniques.",
"title": ""
}
] |
[
{
"docid": "6c10d03fa49109182c95c36debaf06cc",
"text": "Visual versus near infrared (VIS-NIR) face image matching uses an NIR face image as the probe and conventional VIS face images as enrollment. It takes advantage of the NIR face technology in tackling illumination changes and low-light condition and can cater for more applications where the enrollment is done using VIS face images such as ID card photos. Existing VIS-NIR techniques assume that during classifier learning, the VIS images of each target people have their NIR counterparts. However, since corresponding VIS-NIR image pairs of the same people are not always available, which is often the case, so those methods cannot be applied. To address this problem, we propose a transductive method named transductive heterogeneous face matching (THFM) to adapt the VIS-NIR matching learned from training with available image pairs to all people in the target set. In addition, we propose a simple feature representation for effective VIS-NIR matching, which can be computed in three steps, namely Log-DoG filtering, local encoding, and uniform feature normalization, to reduce heterogeneities between VIS and NIR images. The transduction approach can reduce the domain difference due to heterogeneous data and learn the discriminative model for target people simultaneously. To the best of our knowledge, it is the first attempt to formulate the VIS-NIR matching using transduction to address the generalization problem for matching. Experimental results validate the effectiveness of our proposed method on the heterogeneous face biometric databases.",
"title": ""
},
{
"docid": "4e006cd320506a5ef244eedd3f761756",
"text": "Document classification is a growing interest in the research of text mining. Correctly identifying the documents into particular category is still presenting challenge because of large and vast amount of features in the dataset. In regards to the existing classifying approaches, Naïve Bayes is potentially good at serving as a document classification model due to its simplicity. The aim of this paper is to highlight the performance of employing Naïve Bayes in document classification. Results show that Naïve Bayes is the best classifiers against several common classifiers (such as decision tree, neural network, and support vector machines) in term of accuracy and computational efficiency.",
"title": ""
},
{
"docid": "3946f4fabec4295e2be13b60b0ce8625",
"text": "The present study was designed and simulated for an all optical half-adder, based on 2D photonic crystals. The proposed structure in this work contains a hexagonal lattice. The main advantages of the proposed designation can be highlighted as its small sizing as well as simplicity. Furthermore, the other improvement of this half-adder can be regarded as providing proper distinct space in output between “0” and “1” as logical states. This improvement reduces the error in the identification of logical states (i.e., 0 and 1) at output. Because of the high photonic band gap for transverse electric (TE) polarization, the TE mode calculations are done to analyze the defected lines of light. The logical values of “0” and “1” were defined according to the amount of electrical field.",
"title": ""
},
{
"docid": "056f9496de2911ac3d41f7e03a2e6f76",
"text": "This paper presents a survey on the role of negationin sentiment analysis. Negation is a very common linguistic construction that affects polarity and, therefore, needs to be taken into consideration in sentiment analysis. We will present various computational approaches modeling negation in sentiment analysis. We will, in particular, focus on aspects, such as level of representation used for sentiment analysis, negation word detection and scope of negation. We will also discuss limits and challenges of negation modeling on that task.",
"title": ""
},
{
"docid": "4d6e7af21e8feabf70bd7cab556d4aac",
"text": "The use of diagnostic imaging has increased dramatically in recent years. A substantial number are chest xrays used to diagnose a plethora of conditions. These diagnoses are still primarily done by radiologists manually poring over each scan, with no automated triaging or assistance. We aim to use deep learning to predict thorax disease categories using chest x-rays and their metadata with greater than first-pass specialist accuracy. Our problem can be cast as a multiclass image classification problem with 15 different labels. The paper provides a proof of concept of an automated chest x-ray diagnosis system by utilizing the NIH dataset. Deep learning is used to improve the multiclass classification accuracy of thorax disease classification, measured against a baseline of softmax regression.",
"title": ""
},
{
"docid": "beba751220fc4f8df7be8d8e546150d0",
"text": "Theoretical analysis and implementation of autonomous staircase detection and stair climbing algorithms on a novel rescue mobile robot are presented in this paper. The main goals are to find the staircase during navigation and to implement a fast, safe and smooth autonomous stair climbing algorithm. Silver is used here as the experimental platform. This tracked mobile robot is a tele-operative rescue mobile robot with great capabilities in climbing obstacles in destructed areas. Its performance has been demonstrated in rescue robot league of international RoboCup competitions. A fuzzy controller is applied to direct the robot during stair climbing. Controller inputs are generated by processing the range data from two LASER range finders which scan the environment one horizontally and the other vertically. The experimental results of stair detection algorithm and stair climbing controller are demonstrated at the end.",
"title": ""
},
{
"docid": "15a76f43782ef752e4b8e61e38726d69",
"text": "This paper considers invariant texture analysis. Texture analysis approaches whose performances are not a,ected by translation, rotation, a.ne, and perspective transform are addressed. Existing invariant texture analysis algorithms are carefully studied and classi0ed into three categories: statistical methods, model based methods, and structural methods. The importance of invariant texture analysis is presented 0rst. Each approach is reviewed according to its classi0cation, and its merits and drawbacks are outlined. The focus of possible future work is also suggested. ? 2001 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ae9d14cfbc20eff358ff71322f4cc8eb",
"text": "One of the key challenges of video game design is teaching new players how to play. Although game developers frequently use tutorials to teach game mechanics, little is known about how tutorials affect game learnability and player engagement. Seeking to estimate this value, we implemented eight tutorial designs in three video games of varying complexity and evaluated their effects on player engagement and retention. The results of our multivariate study of over 45,000 players show that the usefulness of tutorials depends greatly on game complexity. Although tutorials increased play time by as much as 29% in the most complex game, they did not significantly improve player engagement in the two simpler games. Our results suggest that investment in tutorials may not be justified for games with mechanics that can be discovered through experimentation.",
"title": ""
},
{
"docid": "64e8c1bbf1153a93e446bc6bf11e295c",
"text": "Online political discussion amongst citizens has often been labelled uncivil. Indeed, as online discussion allows participants to remain relatively anonymous, and by extension, unaccountable for their behaviour, citizens often engage in angry, hostile, and derogatory discussion, taking the opportunity to attack the beliefs and values of others without fear of retribution. Some commentators believe that this type of incivility, however, could soon be a thing of the past as citizens increasingly turn to online social network sites such as Facebook.com to discuss politics. Facebook requires users, when registering for an account, to do so using their real name, and encourages them to attach a photograph and other personal details to their profile. As a result, users are both identified with and accountable for the comments they make, presumably making them less likely to engage in uncivil discussion. This paper aims to test this assumption by analysing the occurrence of incivility in reader comments left in response to political news articles by the Washington Post. Specifically, it will quantitatively content analyse the comments, comparing the occurrence of incivility evident in comments left on the Washington Post website with comments left on the Washington Post’s Facebook page. Analysis suggests that, in line with the hypothesis, these online platforms differ significantly when it comes to incivility. Paper to be presented at the Elections, Public Opinion, and Parties (EPOP) Conference, September 13-15, 2013, University of Lancaster, UK. Acknowledgement: This work was supported by the Economic and Social Research Council [grant number ES/I902767/1].",
"title": ""
},
{
"docid": "c5ed17e96ef80b03e5cf5e2848d9d20a",
"text": "xvii List of Publications xix",
"title": ""
},
{
"docid": "7ded4b632681fe82f3f739542b512524",
"text": "Within the field of numerical multilinear algebra, block tensors are increasingly important. Accordingly, it is appropriate to develop an infrastructure that supports reasoning about block tensor computation. In this paper we establish concise notation that is suitable for the analysis and development of block tensor algorithms, prove several useful block tensor identities, and make precise the notion of a block tensor unfolding.",
"title": ""
},
{
"docid": "97cf73f010854fd029458beddefd439e",
"text": "In recent years, there have been some interesting studies on predictive modeling in data streams. However, most such studies assume relatively balanced and stable data streams but cannot handle well rather skewed (e.g., few positives but lots of negatives) and stochastic distributions, which are typical in many data stream applications. In this paper, we propose a new approach to mine data streams by estimating reliable posterior probabilities using an ensemble of models to match the distribution over under-samples of negatives and repeated samples of positives. We formally show some interesting and important properties of the proposed framework, e.g., reliability of estimated probabilities on skewed positive class, accuracy of estimated probabilities, efficiency and scalability. Experiments are performed on several synthetic as well as real-world datasets with skewed distributions, and they demonstrate that our framework has substantial advantages over existing approaches in estimation reliability and predication accuracy.",
"title": ""
},
{
"docid": "bf687d16bd11b4bae52c3ba96016ae93",
"text": "Neural attention has become central to many state-of-the-art models in natural language processing and related domains. Attention networks are an easy-to-train and effective method for softly simulating alignment; however, the approach does not marginalize over latent alignments in a probabilistic sense. This property makes it difficult to compare attention to other alignment approaches, to compose it with probabilistic models, and to perform posterior inference conditioned on observed data. A related latent approach, hard attention, fixes these issues, but is generally harder to train and less accurate. This work considers variational attention networks, alternatives to soft and hard attention for learning latent variable alignment models, with tighter approximation bounds based on amortized variational inference. We further propose methods for reducing the variance of gradients to make these approaches computationally feasible. Experiments show that for machine translation and visual question answering, inefficient exact latent variable models outperform standard neural attention, but these gains go away when using hard attention based training. On the other hand, variational attention retains most of the performance gain but with training speed comparable to neural attention.",
"title": ""
},
{
"docid": "539c3b253a18f32064935217f6b0ea67",
"text": "Salient object detection is not a pure low-level, bottom-up process. Higher-level knowledge is important even for task-independent image saliency. We propose a unified model to incorporate traditional low-level features with higher-level guidance to detect salient objects. In our model, an image is represented as a low-rank matrix plus sparse noises in a certain feature space, where the non-salient regions (or background) can be explained by the low-rank matrix, and the salient regions are indicated by the sparse noises. To ensure the validity of this model, a linear transform for the feature space is introduced and needs to be learned. Given an image, its low-level saliency is then extracted by identifying those sparse noises when recovering the low-rank matrix. Furthermore, higher-level knowledge is fused to compose a prior map, and is treated as a prior term in the objective function to improve the performance. Extensive experiments show that our model can comfortably achieves comparable performance to the existing methods even without the help from high-level knowledge. The integration of top-down priors further improves the performance and achieves the state-of-the-art. Moreover, the proposed model can be considered as a prototype framework not only for general salient object detection, but also for potential task-dependent saliency applications.",
"title": ""
},
{
"docid": "9e5d38fa22500ff30888a3d71d938676",
"text": "While there are many Web services which help users nd things to buy, we know of none which actually try to automate the process of buying and selling. Kasbah is a virtual marketplace on the Web where users create autonomous agents to buy and sell goods on their behalf. Users specify parameters to guide and constrain an agent's overall behavior. A simple prototype has been built to test the viability of this concept.",
"title": ""
},
{
"docid": "5bc22b48b82b749f81c8ac95ababba83",
"text": "Matrix factorization techniques have been frequently applied in many fields. Among them, nonnegative matrix factorization (NMF) has received considerable attention for it aims to find a parts-based, linear representations of nonnegative data. Recently, many researchers propose various manifold learning algorithms to enhance learning performance by considering the local manifold smoothness assumption. However, NMF does not consider the geometrical structure of data and the local manifold smoothness does not directly ensure the representations of the data point with different labels being dissimilar. In order to find a better representation of data, we propose a novel matrix decomposition method, called nonnegative matrix factorization with Regularizations (RNMF), which incorporates three appropriate regularizations: nonnegative matrix factorization, the local manifold smoothness and a rank constraint. The representations of data learned by RNMF tend to be discriminative and sparse. By learning a Mahalanobis distance space based on labeled data, RNMF can also be extended to a semi-supervised algorithm (semi-RNMF) which has an amazing improvement on clustering performance. Our empirical study shows encouraging results of the proposed algorithm in comparison to the state-of-the-art algorithms on real-world problems.",
"title": ""
},
{
"docid": "b99944ad31c5ad81d0e235c200a332b4",
"text": "This paper introduces speech-based visual question answering (VQA), the task of generating an answer given an image and a spoken question. Two methods are studied: an end-to-end, deep neural network that directly uses audio waveforms as input versus a pipelined approach that performs ASR (Automatic Speech Recognition) on the question, followed by text-based visual question answering. Furthermore, we investigate the robustness of both methods by injecting various levels of noise into the spoken question and find both methods to be tolerate noise at similar levels.",
"title": ""
},
{
"docid": "0ae5df7af64f0069d691922d391f3c60",
"text": "With the realization that more research is needed to explore external factors (e.g., pedagogy, parental involvement in the context of K-12 learning) and internal factors (e.g., prior knowledge, motivation) underlying student-centered mobile learning, the present study conceptually and empirically explores how the theories and methodologies of self-regulated learning (SRL) can help us analyze and understand the processes of mobile learning. The empirical data collected from two elementary science classes in Singapore indicates that the analytical SRL model of mobile learning proposed in this study can illuminate the relationships between three aspects of mobile learning: students’ self-reports of psychological processes, patterns of online learning behavior in the mobile learning environment (MLE), and learning achievement. Statistical analyses produce three main findings. First, student motivation in this case can account for whether and to what degree the students can actively engage in mobile learning activities metacognitively, motivationally, and behaviorally. Second, the effect of students’ self-reported motivation on their learning achievement is mediated by their behavioral engagement in a pre-designed activity in the MLE. Third, students’ perception of parental autonomy support is not only associated with their motivation in school learning, but also associated with their actual behaviors in self-regulating their learning. ! 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a505cc6496d2ccd64e16a2b0ad074a45",
"text": "We tackle the problem of reducing the false positive rate of face detectors by applying a classifier after the detection step. We first define and study this post classification problem. To this end, we first consider the multiple-stage cascade structure which is the most common face detection architecture. Here, each cascade stage aims to solve a binary classification problem, denoted the Face/non-Face (FnF) problem. In this context, the post classification problem can be considered as the most challenging FnF problem, or the Hard FnF (HFnF) problem. To study the HFnF problem, we propose HFnF datasets derived from the recent face detection datasets. A baseline method utilizing the GIST features and Support Vector Machine (SVM) classifier is also proposed. In our evaluation, we found that it is possible to further improve the face detection performance by addressing the HFnF problem.",
"title": ""
},
{
"docid": "43aae415dc32b28f49c941ad58616769",
"text": "The telemedicine intervention in chronic disease management promises to involve patients in their own care, provides continuous monitoring by their healthcare providers, identifies early symptoms, and responds promptly to exacerbations in their illnesses. This review set out to establish the evidence from the available literature on the impact of telemedicine for the management of three chronic diseases: congestive heart failure, stroke, and chronic obstructive pulmonary disease. By design, the review focuses on a limited set of representative chronic diseases because of their current and increasing importance relative to their prevalence, associated morbidity, mortality, and cost. Furthermore, these three diseases are amenable to timely interventions and secondary prevention through telemonitoring. The preponderance of evidence from studies using rigorous research methods points to beneficial results from telemonitoring in its various manifestations, albeit with a few exceptions. Generally, the benefits include reductions in use of service: hospital admissions/re-admissions, length of hospital stay, and emergency department visits typically declined. It is important that there often were reductions in mortality. Few studies reported neutral or mixed findings.",
"title": ""
}
] |
scidocsrr
|
332686c5e63dc617e5c643c7fa518a31
|
Automatic Transcription of Flamenco Singing From Polyphonic Music Recordings
|
[
{
"docid": "fd108d142963d10968904708555efc9d",
"text": "The Gaussian filter has been used extensively in image processing and computer vision for many years. In this survey paper, we discuss the various features of this operator that make it the filter of choice in the area of edge detection. Despite these desirable features of the Gaussian filter, edge detection algorithms which use it suffer from many problems. We will review several linear and nonlinear Gaussian-based edge detection methods.",
"title": ""
},
{
"docid": "6101fe189ad6ad7de6723784eec68b42",
"text": "We present a novel system for the automatic extraction of the main melody from polyphonic music recordings. Our approach is based on the creation and characterization of pitch contours, time continuous sequences of pitch candidates grouped using auditory streaming cues. We define a set of contour characteristics and show that by studying their distributions we can devise rules to distinguish between melodic and non-melodic contours. This leads to the development of new voicing detection, octave error minimization and melody selection techniques. A comparative evaluation of the proposed approach shows that it outperforms current state-of-the-art melody extraction systems in terms of overall accuracy. Further evaluation of the algorithm is provided in the form of a qualitative error analysis and the study of the effect of key parameters and algorithmic components on system performance. Finally, we conduct a glass ceiling analysis to study the current limitations of the method, and possible directions for future work are proposed.",
"title": ""
}
] |
[
{
"docid": "9ed27131bd14a4121a08ea8d0f39edf1",
"text": "The current research explored the potential value of adding a supplementary measure of metamemory to the Information subtest of the Wechsler Adult Intelligence Scale - Third Edition (WAIS-III in Study 1) or Fourth Edition (WAIS-IV in Study 2) in order to assess its relationship to other neuropsychological measures and to brain injury. After completing the Information subtest, neuropsychological examinees were asked to make retrospective confidence judgements (RCJ) by rating their answer certainty in the original order of item administration. In Study 1 (N = 52) and study 2 (N = 30), correct answers were rated with significantly more certainty than wrong answers (termed a \"confidence gap\"), and in both studies, higher confidence for wrong answers was significantly correlated with poorer performance on the Wisconsin Card Sorting Test (for categories completed r = -.58 in Study 1, and r = -.47 in Study 2; for perseverative errors r = .44 in Study 1, and r = .45 in Study 2). In both studies, a number of examinees with positive CT findings had a very small or reversed confidence gap. These findings suggest that semantic metamemory is sensitive to executive functioning and brain injury and should be assessed in the neuropsychological examination.",
"title": ""
},
{
"docid": "f27b71ef83348db73b47c3d333fdbb78",
"text": "OBJECTIVE: To generate normative data on the Symbol Digit Modalities Test (SDMT) across 11 countries in Latin America, with country-specific adjustments for gender, age, and education, where appropriate. METHOD: The sample consisted of 3,977 healthy adults who were recruited from Argentina, Bolivia, Chile, Cuba, El Salvador, Guatemala, Honduras, Mexico, Paraguay, Peru, and, Puerto Rico. Each subject was administered the SDMT as part of a larger neuropsychological battery. A standardized five-step statistical procedure was used to generate the norms. RESULTS: The final multiple linear regression models explained 29–56% of the variance in SDMT scores. Although there were gender differences on the SDMT in Mexico, Honduras, Paraguay, and Guatemala, none of the four countries had an effect size greater than 0.3. As a result, gender-adjusted norms were not generated. CONCLUSIONS: This is the first normative multicenter study conducted in Latin America to create norms for the SDMT; this study will have an impact on the future practice of neuropsychology throughout the global region.",
"title": ""
},
{
"docid": "110742230132649f178d2fa99c8ffade",
"text": "Recent approaches based on artificial neural networks (ANNs) have shown promising results for named-entity recognition (NER). In order to achieve high performances, ANNs need to be trained on a large labeled dataset. However, labels might be difficult to obtain for the dataset on which the user wants to perform NER: label scarcity is particularly pronounced for patient note de-identification, which is an instance of NER. In this work, we analyze to what extent transfer learning may address this issue. In particular, we demonstrate that transferring an ANN model trained on a large labeled dataset to another dataset with a limited number of labels improves upon the state-of-the-art results on two different datasets for patient note de-identification.",
"title": ""
},
{
"docid": "eeaa7d079ef7239a9971aff9e86400fb",
"text": "We study the problem of scalable monitoring of operational 3G wireless networks. Threshold-based performance monitoring in large 3G networks is very challenging for two main factors: large network scale and dynamics in both time and spatial domains. A fine-grained threshold setting (e.g., perlocation hourly) incurs prohibitively high management complexity, while a single static threshold fails to capture the network dynamics, thus resulting in unacceptably poor alarm quality (up to 70% false/miss alarm rates). In this paper, we propose a scalable monitoring solution, called threshold-compression that can characterize the location- and time-specific threshold trend of each individual network element (NE) with minimal threshold setting. The main insight is to identify groups of NEs with similar threshold behaviors across location and time dimensions, forming spatial-temporal clusters to reduce the number of thresholds while maintaining acceptable alarm accuracy in a large-scale 3G network. Our evaluations based on the operational experience on a commercial 3G network have demonstrated the effectiveness of the proposed solution. We are able to reduce the threshold setting up to 90% with less than 10% false/miss alarms.",
"title": ""
},
{
"docid": "59693182ac2803d821c508e92383d499",
"text": "We introduce the notion of image-driven simplification, a framework that uses images to decide which portions of a model to simplify. This is a departure from approaches that make polygonal simplification decisions based on geometry. As with many methods, we use the edge collapse operator to make incremental changes to a model. Unique to our approach, however, is the use at comparisons between images of the original model against those of a simplified model to determine the cost of an ease collapse. We use common graphics rendering hardware to accelerate the creation of the required images. As expected, this method produces models that are close to the original model according to image differences. Perhaps more surprising, however, is that the method yields models that have high geometric fidelity as well. Our approach also solves the quandary of how to weight the geometric distance versus appearance properties such as normals, color, and texture. All of these trade-offs are balanced by the image metric. Benefits of this approach include high fidelity silhouettes, extreme simplification of hidden portions of a model, attention to shading interpolation effects, and simplification that is sensitive to the content of a texture. In order to better preserve the appearance of textured models, we introduce a novel technique for assigning texture coordinates to the new vertices of the mesh. This method is based on a geometric heuristic that can be integrated with any edge collapse algorithm to produce high quality textured surfaces.",
"title": ""
},
{
"docid": "774bdacd260740d5345a08f21e0fd8f0",
"text": "This paper presents a new way of categorizing behavior change in a framework called the Behavior Grid. This preliminary work shows 35 types of behavior along two categorical dimensions. To demonstrate the analytical potential for the Behavior Grid, this paper maps behavior goals from Facebook onto the framework, revealing potential patterns of intent. To show the potential for designers of persuasive technology, this paper uses the Behavior Grid to show what types of behavior change might most easily be achieved through mobile technology. The Behavior Grid needs further development, but this early version can still be useful for designers and researchers in thinking more clearly about behavior change and persuasive technology.",
"title": ""
},
{
"docid": "e99343a0ab1eb9007df4610ae35dec97",
"text": "Who did what to whom is a major focus in natural language understanding, which is right the aim of semantic role labeling (SRL). Although SRL is naturally essential to text comprehension tasks, it is surprisingly ignored in previous work. This paper thus makes the first attempt to let SRL enhance text comprehension and inference through specifying verbal arguments and their corresponding semantic roles. In terms of deep learning models, our embeddings are enhanced by semantic role labels for more fine-grained semantics. We show that the salient labels can be conveniently added to existing models and significantly improve deep learning models in challenging text comprehension tasks. Extensive experiments on benchmark machine reading comprehension and inference datasets verify that the proposed semantic learning helps our system reach new state-of-the-art.",
"title": ""
},
{
"docid": "94a646f32c4cd392f748887d1163bf51",
"text": "Article history: Received 10 February 2009 Received in revised form 7 March 2010 Accepted 10 March 2010 Available online 20 March 2010",
"title": ""
},
{
"docid": "0df56ee771c5ddaafd01f63a151b11fe",
"text": "Genes play a central role in all biological processes. DNA microarray technology has made it possible to study the expression behavior of thousands of genes in one go. Often, gene expression data is used to generate features for supervised and unsupervised learning tasks. At the same time, advances in the field of deep learning have made available a plethora of architectures. In this paper, we use deep architectures pre-trained in an unsupervised manner using denoising autoencoders as a preprocessing step for a popular unsupervised learning task. Denoising autoencoders (DA) can be used to learn a compact representation of input, and have been used to generate features for further supervised learning tasks. We propose that our deep architectures can be treated as empirical versions of Deep Belief Networks (DBNs). We use our deep architectures to regenerate gene expression time series data for two different data sets. We test our hypothesis on two popular datasets for the unsupervised learning task of clustering and find promising improvements in performance.",
"title": ""
},
{
"docid": "a9c88cbaaf6846447c2d69d86387fe32",
"text": "This paper presents four studies designed to assess different types of gratifications that can be associated with the experience of emotions in movie and TV audiences. Exploratory and confirmatory factor analyses of a pool of statements derived from qualitative interviews revealed three factors that reflect rewarding feelings: 1) fun, 2) thrill, and 3) empathic sadness, and four factors that reflect the role of emotional media experiences within the broader context of individuals' social and cognitive needs: 4) contemplative emotional experiences, 5) emotional engagement with characters, 6) social sharing of emotions, and 7) vicarious release of emotions. Validation analyses showed that the scales developed to assess these factors are predicted by the experience of emotions and meta-emotions and served in turn to predict different aspects of positive content evaluation. Results are discussed with regard to theoretical issues including entertainment audiences' voluntary exposure to unpleasant feelings, and the role of entertainment in psychosocial need satisfaction and eudaimonic well-being.",
"title": ""
},
{
"docid": "e6457f5257e95d727e06e212bef2f488",
"text": "The emerging ability to comply with caregivers' dictates and to monitor one's own behavior accordingly signifies a major growth of early childhood. However, scant attention has been paid to the developmental course of self-initiated regulation of behavior. This article summarizes the literature devoted to early forms of control and highlights the different philosophical orientations in the literature. Then, focusing on the period from early infancy to the beginning of the preschool years, the author proposes an ontogenetic perspective tracing the kinds of modulation or control the child is capable of along the way. The developmental sequence of monitoring behaviors that is proposed calls attention to contributions made by the growth of cognitive skills. The role of mediators (e.g., caregivers) is also discussed.",
"title": ""
},
{
"docid": "bbc7e7f30eb6e0e77e01ec186853a0b8",
"text": "Injuries to the tarsometatarsal joint and of the Lisfranc ligament present a challenge. They are difficult to diagnose and outcomes worsen as diagnosis is delayed. As a result, radiologists and clinicians must have a clear understanding of the relevant nomenclature, anatomy, injury mechanisms, and imaging findings.",
"title": ""
},
{
"docid": "0c417cce8944b4d924451aa88fe2b7a3",
"text": "Estimation of social influence in networks can be substantially biased in observational studies due to homophily and network correlation in exposure to exogenous events. Randomized experiments, in which the researcher intervenes in the social system and uses randomization to determine how to do so, provide a methodology for credibly estimating of causal effects of social behaviors. In addition to addressing questions central to the social sciences, these estimates can form the basis for effective marketing and public policy. In this review, we discuss the design space of experiments to measure social influence through combinations of interventions and randomizations. We define an experiment as combination of (1) a target population of individuals connected by an observed interaction network, (2) a set of treatments whereby the researcher will intervene in the social system, (3) a randomization strategy which maps individuals or edges to treatments, and (4) a measurement of an outcome of interest after treatment has been assigned. We review experiments that demonstrate potential experimental designs and we evaluate their advantages and tradeoffs for answering different types of causal questions about social influence. We show how randomization also provides a basis for statistical inference when analyzing these experiments.",
"title": ""
},
{
"docid": "14dbf1851016161633e847e55e93cad3",
"text": "Direct drive permanent magnet generators(PMGs) are increasingly capturing the global wind market in large onshore and offshore applications. The aim of this paper is to provide a quick overview of permanent magnet generator design and related control issues for large wind turbines. Generator systems commonly used in wind turbines, the permanent magnet generator types, and control methods are reviewed in the paper. The current commercial PMG wind turbine on market is surveyed. The design of a 5 MW axial flux permanent magnet (AFPM) generator for large wind turbines is discussed and presented in detail.",
"title": ""
},
{
"docid": "1997a007b2eb9a314c4e9320d22293b4",
"text": "Face detection constitutes a key visual information analysis task in Machine Learning. The rise of Big Data has resulted in the accumulation of a massive volume of visual data which requires proper and fast analysis. Deep Learning methods are powerful approaches towards this task as training with large amounts of data exhibiting high variability has been shown to significantly enhance their effectiveness, but often requires expensive computations and leads to models of high complexity. When the objective is to analyze visual content in massive datasets, the complexity of the model becomes crucial to the success of the model. In this paper, a lightweight deep Convolutional Neural Network (CNN) is introduced for the purpose of face detection, designed with a view to minimize training and testing time, and outperforms previously published deep convolutional networks in this task, in terms of both effectiveness and efficiency. To train this lightweight deep network without compromising its efficiency, a new training method of progressive positive and hard negative sample mining is introduced and shown to drastically improve training speed and accuracy. Additionally, a separate deep network was trained to detect individual facial features and a model that combines the outputs of the two networks was created and evaluated. Both methods are capable of detecting faces under severe occlusion and unconstrained pose variation and meet the difficulties of large scale real-world, real-time face detection, and are suitable for deployment even in mobile environments such as Unmanned Aerial Vehicles (UAVs).",
"title": ""
},
{
"docid": "7acb840958dd6f0d9146fcb1527dd87e",
"text": "Videos represent the primary source of information for surveillance applications and are available in large amounts but in most cases contain little or no annotation for supervised learning. This article reviews the state-of-the-art deep learning based methods for video anomaly detection and categorizes them based on the type of model and criteria of detection. We also perform simple studies to understand the different approaches and provide the criteria of evaluation for spatio-temporal anomaly detection.",
"title": ""
},
{
"docid": "2a8d7998ec186e0144c0dcf762afbacc",
"text": "Within the software industry software piracy is a great concern. In this article we address this issue through a prevention technique called software watermarking. Depending on how a software watermark is applied it can be used to discourage piracy; as proof of authorship or purchase; or to track the source of the illegal redistribution. In particular we analyze an algorithm originally proposed by Geneviève Arboit in A Method for Watermarking Java Programs via Opaque Predicates. This watermarking technique embeds the watermark by adding opaque predicates to the application. We have found that the Arboit technique does withstand some forms of attack and has a respectable data-rate. However, it is susceptible to a variety of distortive attacks. One unanswered question in the area of software watermarking is whether dynamic algorithms are inherently more resilient to attacks than static algorithms. We have implemented and empirically evaluated both static and dynamic versions within the SANDMARK framework.",
"title": ""
},
{
"docid": "7a720c34f461728bab4905716f925ace",
"text": "We introduce the concept of Graspable User Interfaces that allow direct control of electronic or virtual objects through physical handles for control. These physical artifacts, which we call \"bricks,\" are essentially new input devices that can be tightly coupled or \"attached\" to virtual objects for manipulation or for expressing action (e.g., to set parameters or for initiating processes). Our bricks operate on top of a large horizontal display surface known as the \"ActiveDesk.\" We present four stages in the development of Graspable UIs: (1) a series of exploratory studies on hand gestures and grasping; (2) interaction simulations using mock-ups and rapid prototyping tools; (3) a working prototype and sample application called GraspDraw; and (4) the initial integrating of the Graspable UI concepts into a commercial application. Finally, we conclude by presenting a design space for Bricks which lay the foundation for further exploring and developing Graspable User Interfaces.",
"title": ""
},
{
"docid": "33465b87cdc917904d16eb9d6cb8fece",
"text": "An audio fingerprint is a compact content-based signature that summarizes an audio recording. Audio Fingerprinting technologies have attracted attention since they allow the identification of audio independently of its format and without the need of meta-data or watermark embedding. Other uses of fingerprinting include: integrity verification, watermark support and content-based audio retrieval. The different approaches to fingerprinting have been described with different rationales and terminology: Pattern matching, Multimedia (Music) Information Retrieval or Cryptography (Robust Hashing). In this paper, we review different techniques describing its functional blocks as parts of a common, unified framework.",
"title": ""
},
{
"docid": "ffadf882ac55d9cb06b77b3ce9a6ad8c",
"text": "Three experimental techniques based on automatic swept-frequency network and impedance analysers were used to measure the dielectric properties of tissue in the frequency range 10 Hz to 20 GHz. The technique used in conjunction with the impedance analyser is described. Results are given for a number of human and animal tissues, at body temperature, across the frequency range, demonstrating that good agreement was achieved between measurements using the three pieces of equipment. Moreover, the measured values fall well within the body of corresponding literature data.",
"title": ""
}
] |
scidocsrr
|
07cb13827af34afb3c6455699037dcf2
|
Lipschitz Properties for Deep Convolutional Networks
|
[
{
"docid": "afee419227629f8044b5eb0addd65ce3",
"text": "Both Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) have shown improvements over Deep Neural Networks (DNNs) across a wide variety of speech recognition tasks. CNNs, LSTMs and DNNs are complementary in their modeling capabilities, as CNNs are good at reducing frequency variations, LSTMs are good at temporal modeling, and DNNs are appropriate for mapping features to a more separable space. In this paper, we take advantage of the complementarity of CNNs, LSTMs and DNNs by combining them into one unified architecture. We explore the proposed architecture, which we call CLDNN, on a variety of large vocabulary tasks, varying from 200 to 2,000 hours. We find that the CLDNN provides a 4-6% relative improvement in WER over an LSTM, the strongest of the three individual models.",
"title": ""
},
{
"docid": "8fd893ef59f788742de78d8a279496ca",
"text": "A wavelet scattering network computes a translation invariant image representation, which is stable to deformations and preserves high frequency information for classification. It cascades wavelet transform convolutions with non-linear modulus and averaging operators. The first network layer outputs SIFT-type descriptors whereas the next layers provide complementary invariant information which improves classification. The mathematical analysis of wavelet scattering networks explain important properties of deep convolution networks for classification. A scattering representation of stationary processes incorporates higher order moments and can thus discriminate textures having same Fourier power spectrum. State of the art classification results are obtained for handwritten digits and texture discrimination, with a Gaussian kernel SVM and a generative PCA classifier.",
"title": ""
},
{
"docid": "b1b5b1dd170bff4bd33d11d6b3959d11",
"text": "Neural Networks are formally hard to train. How can we circumvent hardness results? • Over specified networks: While over specification seems to speedup training , formally hardness results are valid in the improper model. • Changing the activation function: While changing the activation function from sigmoid to ReLu has lead to faster convergence of SGD methods, formally these networks are still hard.",
"title": ""
}
] |
[
{
"docid": "d395193924613f6818511650d24cf9ae",
"text": "Assortment planning of substitutable products is a major operational issue that arises in many industries, such as retailing, airlines and consumer electronics. We consider a single-period joint assortment and inventory planning problem under dynamic substitution with stochastic demands, and provide complexity and algorithmic results as well as insightful structural characterizations of near-optimal solutions for important variants of the problem. First, we show that the assortment planning problem is NP-hard even for a very simple consumer choice model, where each customer is willing to buy only two products. In fact, we show that the problem is hard to approximate within a factor better than 1− 1/e. Secondly, we show that for several interesting and practical choice models, one can devise a polynomial-time approximation scheme (PTAS), i.e., the problem can be solved efficiently to within any level of accuracy. To the best of our knowledge, this is the first efficient algorithm with provably near-optimal performance guarantees for assortment planning problems under dynamic substitution. Quite surprisingly, the algorithm we propose stocks only a constant number of different product types; this constant depends only on the desired accuracy level. This provides an important managerial insight that assortments with a relatively small number of product types can obtain almost all of the potential revenue. Furthermore, we show that our algorithm can be easily adapted for more general choice models, and present numerical experiments to show that it performs significantly better than other known approaches.",
"title": ""
},
{
"docid": "ccac2236f232222a832bfd1a63927cac",
"text": "Visualization of textual data may reveal interesting properties regarding the information conveyed in a group of documents. In this paper, we study whether the structure revealed by a visualization method can be used as inputs for improved classifiers. In particular, we study whether the locations of news items on a concept map could be used as inputs for improving the prediction of stock price movements from the news. We propose a method based on information visualization and text classification for achieving this. We apply the proposed approach to the prediction of the stock price movements of companies within the oil and natural gas sector. In a case study, we show that our proposed approach performs better than a naive approach and a bag-of-words approach",
"title": ""
},
{
"docid": "9cf5fc6b50010d1489f12d161f302428",
"text": "With the advent of large code repositories and sophisticated search capabilities, code search is increasingly becoming a key software development activity. In this work we shed some light into how developers search for code through a case study performed at Google, using a combination of survey and log-analysis methodologies. Our study provides insights into what developers are doing and trying to learn when per- forming a search, search scope, query properties, and what a search session under different contexts usually entails. Our results indicate that programmers search for code very frequently, conducting an average of five search sessions with 12 total queries each workday. The search queries are often targeted at a particular code location and programmers are typically looking for code with which they are somewhat familiar. Further, programmers are generally seeking answers to questions about how to use an API, what code does, why something is failing, or where code is located.",
"title": ""
},
{
"docid": "9ce1d3d0c4a366d96beecb36b9f87071",
"text": "One of the key challenges for current face recognition techniques is how to handle pose variations between the probe and gallery face images. In this paper, we present a method for reconstructing the virtual frontal view from a given nonfrontal face image using Markov random fields (MRFs) and an efficient variant of the belief propagation algorithm. In the proposed approach, the input face image is divided into a grid of overlapping patches, and a globally optimal set of local warps is estimated to synthesize the patches at the frontal view. A set of possible warps for each patch is obtained by aligning it with images from a training database of frontal faces. The alignments are performed efficiently in the Fourier domain using an extension of the Lucas-Kanade algorithm that can handle illumination variations. The problem of finding the optimal warps is then formulated as a discrete labeling problem using an MRF. The reconstructed frontal face image can then be used with any face recognition technique. The two main advantages of our method are that it does not require manually selected facial landmarks or head pose estimation. In order to improve the performance of our pose normalization method in face recognition, we also present an algorithm for classifying whether a given face image is at a frontal or nonfrontal pose. Experimental results on different datasets are presented to demonstrate the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "443fb61dbb3cc11060104ed6ed0c645c",
"text": "An interactive framework for soft segmentation and matting of natural images and videos is presented in this paper. The proposed technique is based on the optimal, linear time, computation of weighted geodesic distances to user-provided scribbles, from which the whole data is automatically segmented. The weights are based on spatial and/or temporal gradients, considering the statistics of the pixels scribbled by the user, without explicit optical flow or any advanced and often computationally expensive feature detectors. These could be naturally added to the proposed framework as well if desired, in the form of weights in the geodesic distances. An automatic localized refinement step follows this fast segmentation in order to further improve the results and accurately compute the corresponding matte function. Additional constraints into the distance definition permit to efficiently handle occlusions such as people or objects crossing each other in a video sequence. The presentation of the framework is complemented with numerous and diverse examples, including extraction of moving foreground from dynamic background in video, natural and 3D medical images, and comparisons with the recent literature.",
"title": ""
},
{
"docid": "102ad264e4a9a4a43a943f0895b61e96",
"text": "Power quality disturbance (PQD) monitoring has become an important issue due to the growing number of disturbing loads connected to the power line and to the susceptibility of certain loads to their presence. In any real power system, there are multiple sources of several disturbances which can have different magnitudes and appear at different times. In order to avoid equipment damage and estimate the damage severity, they have to be detected, classified, and quantified. In this work, a smart sensor for detection, classification, and quantification of PQD is proposed. First, the Hilbert transform (HT) is used as detection technique; then, the classification of the envelope of a PQD obtained through HT is carried out by a feed forward neural network (FFNN). Finally, the root mean square voltage (Vrms), peak voltage (Vpeak), crest factor (CF), and total harmonic distortion (THD) indices calculated through HT and Parseval's theorem as well as an instantaneous exponential time constant quantify the PQD according to the disturbance presented. The aforementioned methodology is processed online using digital hardware signal processing based on field programmable gate array (FPGA). Besides, the proposed smart sensor performance is validated and tested through synthetic signals and under real operating conditions, respectively.",
"title": ""
},
{
"docid": "63f20dd528d54066ed0f189e4c435fe7",
"text": "In many specific laboratories the students use only a PLC simulator software, because the hardware equipment is expensive. This paper presents a solution that allows students to study both the hardware and software parts, in the laboratory works. The hardware part of solution consists in an old plotter, an adapter board, a PLC and a HMI. The software part of this solution is represented by the projects of the students, in which they developed applications for programming the PLC and the HMI. This equipment can be made very easy and can be used in university labs by students, so that they design and test their applications, from low to high complexity [1], [2].",
"title": ""
},
{
"docid": "8830eb9ac71b112c6061e64446e396ab",
"text": "BACKGROUND\nLabia minora reduction (labioplasty, labiaplasty) is the most common female genital aesthetic procedure. The majority of labia reductions are performed by trimming the labial edges. Many of these women present with (1) asymmetry; (2) scalloping of the labial edges with wide, occasionally painful scars; and (3) abrupt termination and distortion of the clitoral hood at its normal junctions with the clitoral frenula and the upper labium. Reconstruction can usually be performed with wedge excisions, labial YV advancement, and touch-up trimming. Reconstruction of a labial amputation, however, required the development of a new clitoral hood flap.\n\n\nMETHODS\nTwenty-four clitoral hood flaps were performed on 17 patients from June of 2006 through May of 2010. An island clitoral hood flap randomly based on the dartos fascia of the lower clitoral hood and medial labium majus is transposed to the ipsilateral labial defect to reconstruct a labium. Of the 10 patients with unilateral flaps, nine of the patients had previous bilateral labial reductions. Reconstruction of the opposite side in these nine women was performed using one or a combination of the following: wedge excisions, YV advancement flaps, or controlled touch-up trimming.\n\n\nRESULTS\nAll 24 flaps survived, with four minor complications. Five patients underwent revision of a total of seven flaps, but only two were for complications. As experience increased, revisions for aesthetic improvement became less common.\n\n\nCONCLUSION\nReconstruction of labia minora defects secondary to trimming labia reductions is very successful using a combination of clitoral hood flaps, wedge excisions, and YV advancements.",
"title": ""
},
{
"docid": "c9171bf5a2638b35ff7dc9c8e6104d30",
"text": "Dimensionality reduction is an important aspect in the pattern classification literature, and linear discriminant analysis (LDA) is one of the most widely studied dimensionality reduction technique. The application of variants of LDA technique for solving small sample size (SSS) problem can be found in many research areas e.g. face recognition, bioinformatics, text recognition, etc. The improvement of the performance of variants of LDA technique has great potential in various fields of research. In this paper, we present an overview of these methods. We covered the type, characteristics and taxonomy of these methods which can overcome SSS problem. We have also highlighted some important datasets and software/ packages.",
"title": ""
},
{
"docid": "a3ebadf449537b5df8de3c5ab96c74cb",
"text": "Do conglomerate firms have the ability to allocate resources efficiently across business segments? We address this question by comparing the performance of firms that follow passive benchmark strategies in their capital allocation process to those that actively deviate from those benchmarks. Using three measures of capital allocation style to capture various aspects of activeness, we show that active firms have a lower average industry-adjusted profitability than passive firms. This result is robust to controlling for potential endogeneity using matching analysis and regression analysis with firm fixed effects. Moreover, active firms obtain lower valuation and lower excess stock returns in subsequent periods. Our findings suggest that, on average, conglomerate firms that actively allocate resources across their business segments do not do so efficiently and that the stock market does not fully incorporate information revealed in the internal capital allocation process. Guedj and Huang are from the McCombs School of Business, University of Texas at Austin. Guedj: guedj@mail.utexas.edu and (512) 471-5781. Huang: jennifer.huang@mccombs.utexas.edu and (512) 232-9375. Sulaeman is from the Cox School of Business, Southern Methodist University, sulaeman@smu.edu and (214) 768-8284. The authors thank Alexander Butler, Amar Gande, Mark Leary, Darius Miller, Maureen O’Hara, Owen Lamont, Gordon Phillips, Mike Roberts, Oleg Rytchkov, Gideon Saar, Zacharias Sautner, Clemens Sialm, Rex Thompson, Sheridan Titman, Yuhai Xuan, participants at the Financial Research Association meeting and seminars at Cornell University, Southern Methodist University, the University of Texas at Austin, and the University of Texas at Dallas for their helpful comments.",
"title": ""
},
{
"docid": "c574570eb7366fcf0c15fb0fa833365c",
"text": "Many time-critical applications require predictable performance and tasks in these applications have deadlines to be met. In this paper, we propose an efficient algorithm for nonpreemptive scheduling of dynamically arriving real-time tasks (aperiodic tasks) in multiprocessor systems. A real-time task is characterized by its deadline, resource requirements, and worst case computation time on p processors, where p is the degree of parallelization of the task. We use this parallelism in tasks to meet their deadlines and, thus, obtain better schedulability compared to nonparallelizable task scheduling algorithms. To study the effectiveness of the proposed scheduling algorithm, we have conducted extensive simulation studies and compared its performance with the myopic [8] scheduling algorithm. The simulation studies show that the schedulability of the proposed algorithm is always higher than that of the myopic algorithm for a wide variety of task parameters. Index Terms —Multiprocessor, real-time systems, dynamic scheduling, parallelizable tasks, resource constraints. —————————— ✦ ——————————",
"title": ""
},
{
"docid": "0a3bb33d5cff66346a967092202737ab",
"text": "An Li-ion battery charger based on a charge-control buck regulator operating at 2.2 MHz is implemented in 180 nm CMOS technology. The novelty of the proposed charge-control converter consists of regulating the average output current by only sensing a portion of the inductor current and using an adaptive reference voltage. By adopting this approach, the charger average output current is set to a constant value of 900 mA regardless of the battery voltage variation. In constant-voltage (CV) mode, a feedback loop is established in addition to the preexisting current control loop, preserving the smoothness of the output voltage at the transition from constant-current (CC) to CV mode. A small-signal model has been developed to analyze the system stability and subharmonic oscillations at low current levels. Transistor-level simulations of the proposed switching charger are presented. The output voltage ranges from 2.1 to 4.2 V, and the power efficiency at 900 mA has been measured to be 86% for an input voltage of 10 V. The accuracy of the output current using the proposed sensing technique is 9.4% at 10 V.",
"title": ""
},
{
"docid": "1bfea99831fbcddf1d953a31d154af9a",
"text": "The tremendous growth in the use of Social Media has led to radical paradigm shifts in the ways we communicate, collaborate, consume, and create information. Our focus in this special issue is on the reciprocal interplay of Social Media and Collective Intelligence. We therefore discuss constituting attributes of Social Media and Collective Intelligence, and we structure the rapidly growing body of literature including adjacent research streams such as social network analysis, Web Science, and computational social science. We conclude by making propositions for future research where in particular the disciplines of artificial intelligence, computer science, and information systems can substantially contribute to the interdisciplinary academic discourse.",
"title": ""
},
{
"docid": "3d9fe9c30d09a9e66f7339b0ad24edb7",
"text": "Due to progress in wired and wireless home networking, sensor networks, networked appliances, mechanical and control engineering, and computers, we can build smart homes, and many smart home projects are currently proceeding throughout the world. However, we have to be careful not to repeat the same mistake that was made with home automation technologies that were booming in the 1970s. That is, [total?] automation should not be a goal of smart home technologies. I believe the following points are important in construction of smart homes from users¿ viewpoints: development of interface technologies between humans and systems for detection of human intensions, feelings, and situations; improvement of system knowledge; and extension of human activity support outside homes to the scopes of communities, towns, and cities.",
"title": ""
},
{
"docid": "65250c2c208e410ae5c01110d77f64c9",
"text": "The Mad package described here facilitates the evaluation of first derivatives of multidimensional functions that are defined by computer codes written in MATLAB. The underlying algorithm is the well-known forward mode of automatic differentiation implemented via operator overloading on variables of the class fmad. The main distinguishing feature of this MATLAB implementation is the separation of the linear combination of derivative vectors into a separate derivative vector class derivvec. This allows for the straightforward performance optimization of the overall package. Additionally, by internally using a matrix (two-dimensional) representation of arbitrary dimension directional derivatives, we may utilize MATLAB's sparse matrix class to propagate sparse directional derivatives for MATLAB code which uses arbitrary dimension arrays. On several examples, the package is shown to be more efficient than Verma's ADMAT package [Verma 1998a].",
"title": ""
},
{
"docid": "2cd3833634cf2dae58ccb268ba85e955",
"text": "We explore the hypothesis that many intuitive physical inferences are based on a mental physics engine that is analogous in many ways to the machine physics engines used in building interactive video games. We describe the key features of game physics engines and their parallels in human mental representation, focusing especially on the intuitive physics of young infants where the hypothesis helps to unify many classic and otherwise puzzling phenomena, and may provide the basis for a computational account of how the physical knowledge of infants develops. This hypothesis also explains several 'physics illusions', and helps to inform the development of artificial intelligence (AI) systems with more human-like common sense.",
"title": ""
},
{
"docid": "9ba6656cb67dcb72d4ebadcaf9450f40",
"text": "OBJECTIVE\nThe Japan Ankylosing Spondylitis Society conducted a nationwide questionnaire survey of spondyloarthropathies (SpA) in 1990 and 1997, (1) to estimate the prevalence and incidence, and (2) to validate the criteria of Amor and the European Spondylarthropathy Study Group (ESSG) in Japan.\n\n\nMETHODS\nJapan was divided into 9 districts, to each of which a survey supervisor was assigned. According to unified criteria, each supervisor selected all the clinics and hospitals with potential for SpA patients in the district. The study population consisted of all patients with SpA seen at these institutes during a 5 year period (1985-89) for the 1st survey and a 7 year period (1990-96) for the 2nd survey.\n\n\nRESULTS\nThe 1st survey recruited 426 and the 2nd survey 638 cases, 74 of which were registered in both studies. The total number of patients with SpA identified 1985-96 was 990 (760 men, 227 women). They consisted of patients with ankylosing spondylitis (68.3%), psoriatic arthritis (12.7%), reactive arthritis (4.0%), undifferentiated SpA (5.4%), inflammatory bowel disease (2.2%), pustulosis palmaris et plantaris (4.7%), and others (polyenthesitis, etc.) (0.8%). The maximum onset number per year was 49. With the assumption that at least one-tenth of the Japanese population with SpA was recruited, incidence and prevalence were estimated not to exceed 0.48/100,000 and 9.5/100,000 person-years, respectively. The sensitivity was 84.0% for Amor criteria and 84.6 for ESSG criteria.\n\n\nCONCLUSION\nThe incidence and prevalence of SpA in Japanese were estimated to be less than 1/10 and 1/200, respectively, of those among Caucasians. The adaptability of the Amor and ESSG criteria was validated for the Japanese population.",
"title": ""
},
{
"docid": "48f2e91304f7e4dbec5e5cc1f509d38e",
"text": "This paper presents on-going research to define the basic models and architecture patterns for federated access control in heterogeneous (multi-provider) multi-cloud and inter-cloud environment. The proposed research contributes to the further definition of Intercloud Federation Framework (ICFF) which is a part of the general Intercloud Architecture Framework (ICAF) proposed by authors in earlier works. ICFF attempts to address the interoperability and integration issues in provisioning on-demand multi-provider multi-domain heterogeneous cloud infrastructure services. The paper describes the major inter-cloud federation scenarios that in general involve two types of federations: customer-side federation that includes federation between cloud based services and customer campus or enterprise infrastructure, and provider-side federation that is created by a group of cloud providers to outsource or broker their resources when provisioning services to customers. The proposed federated access control model uses Federated Identity Management (FIDM) model that can be also supported by the trusted third party entities such as Cloud Service Broker (CSB) and/or trust broker to establish dynamic trust relations between entities without previously existing trust. The research analyses different federated identity management scenarios, defines the basic architecture patterns and the main components of the distributed federated multi-domain Authentication and Authorisation infrastructure.",
"title": ""
},
{
"docid": "1a7e2ca13d00b6476820ad82c2a68780",
"text": "To understand the dynamics of mental health, it is essential to develop measures for the frequency and the patterning of mental processes in every-day-life situations. The Experience-Sampling Method (ESM) is an attempt to provide a valid instrument to describe variations in self-reports of mental processes. It can be used to obtain empirical data on the following types of variables: a) frequency and patterning of daily activity, social interaction, and changes in location; b) frequency, intensity, and patterning of psychological states, i.e., emotional, cognitive, and conative dimensions of experience; c) frequency and patterning of thoughts, including quality and intensity of thought disturbance. The article reviews practical and methodological issues of the ESM and presents evidence for its short- and long-term reliability when used as an instrument for assessing the variables outlined above. It also presents evidence for validity by showing correlation between ESM measures on the one hand and physiological measures, one-time psychological tests, and behavioral indices on the other. A number of studies with normal and clinical populations that have used the ESM are reviewed to demonstrate the range of issues to which the technique can be usefully applied.",
"title": ""
},
{
"docid": "bc7f80192416aa7787657aed1bda3997",
"text": "In this paper we propose a deep learning technique to improve the performance of semantic segmentation tasks. Previously proposed algorithms generally suffer from the over-dependence on a single modality as well as a lack of training data. We made three contributions to improve the performance. Firstly, we adopt two models which are complementary in our framework to enrich field-of-views and features to make segmentation more reliable. Secondly, we repurpose the datasets form other tasks to the segmentation task by training the two models in our framework on different datasets. This brings the benefits of data augmentation while saving the cost of image annotation. Thirdly, the number of parameters in our framework is minimized to reduce the complexity of the framework and to avoid over- fitting. Experimental results show that our framework significantly outperforms the current state-of-the-art methods with a smaller number of parameters and better generalization ability.",
"title": ""
}
] |
scidocsrr
|
f6b105ed06c14154ade4c3638c7da6fe
|
EEG-Based Attention Tracking During Distracted Driving
|
[
{
"docid": "4ec947c0420e47decd6de65330baf820",
"text": "Detailed exploration on Brain Computer Interface (BCI) and its recent trends has been done in this paper. Work is being done to identify objects, images, videos and their color compositions. Efforts are on the way in understanding speech, words, emotions, feelings and moods. When humans watch the surrounding environment, visual data is processed by the brain, and it is possible to reconstruct the same on the screen with some appreciable accuracy by analyzing the physiological data. This data is acquired by using one of the non-invasive techniques like electroencephalography (EEG) in BCI. The acquired signal is to be translated to produce the image on to the screen. This paper also lays suitable directions for future work. KeywordsBCI; EEG; brain image reconstruction.",
"title": ""
}
] |
[
{
"docid": "16bd1ca1e6320e0875dede14e7a2cc7d",
"text": "Software process is viewed as an important factor to deliver high quality products. Although there have been several Software Process Models proposed, the software processes are still short of formal descriptions. This paper presents an ontology-based approach to express software processes at the conceptual level. An OWL-based ontology for software processes, called SPO (Software Process Ontology), is designed, and it is extended to generate ontologies for specific process models, such as CMMI and ISO/IEC 15504. A prototype of a web-based process assessment tool based on SPO is developed to illustrate the advantages of this approach. Finally, some further research in this direction is outlined.",
"title": ""
},
{
"docid": "9bdd5424d73375a44c3461ffe456a844",
"text": "A new suspended plate antenna is presented for the enhancement of impedance bandwidth. The probe-fed plate antenna is suspended above a ground plane and its center portion is concaved to form a \"V\" shape. The experiment and simulation show that without increase in size the proposed antenna is capable of providing an impedance bandwidth of up to 60% for |S/sub 11/|<-10 dB with an acceptable gain of 8 dBi.",
"title": ""
},
{
"docid": "024e95f41a48e8409bd029c14e6acb3a",
"text": "This communication investigates the application of metamaterial absorber (MA) to waveguide slot antenna to reduce its radar cross section (RCS). A novel ultra-thin MA is presented, and its absorbing characteristics and mechanism are analyzed. The PEC ground plane of waveguide slot antenna is covered by this MA. As compared with the slot antenna with a PEC ground plane, the simulation and experiment results demonstrate that the monostatic and bistatic RCS of waveguide slot antenna are reduced significantly, and the performance of antenna is preserved simultaneously.",
"title": ""
},
{
"docid": "259b80df0ad4def6db381067c8f97121",
"text": "Concept sketches are popularly used by designers to convey pose and function of products. Understanding such sketches, however, requires special skills to form a mental 3D representation of the product geometry by linking parts across the different sketches and imagining the intermediate object configurations. Hence, the sketches can remain inaccessible to many, especially non-designers. We present a system to facilitate easy interpretation and exploration of concept sketches. Starting from crudely specified incomplete geometry, often inconsistent across the different views, we propose a globally-coupled analysis to extract part correspondence and inter-part junction information that best explain the different sketch views. The user can then interactively explore the abstracted object to gain better understanding of the product functions. Our key technical contribution is performing shape analysis without access to any coherent 3D geometric model by reasoning in the space of inter-part relations. We evaluate our system on various concept sketches obtained from popular product design books and websites.",
"title": ""
},
{
"docid": "8e1947a9e890ef110c75a52d706eec2a",
"text": "Despite the rapid increase in online shopping, the literature is silent in terms of the interrelationship between perceived risk factors, the marketing impacts, and their influence on product and web-vendor consumer trust. This research focuses on holidaymakers’ perspectives using Internet bookings for their holidays. The findings reveal the associations between Internet perceived risks and the relatively equal influence of product and e-channel risks in consumers’ trust, and that online purchasing intentions are equally influenced by product and e-channel consumer trust. They also illustrate the relationship between marketing strategies and perceived risks, and provide managerial suggestions for further e-purchasing tourism improvement.",
"title": ""
},
{
"docid": "8534ec92800e1166fb28e6598b517dde",
"text": "In the Vehicle Routing Problem (VRP), the aim is to design a set of m minimum cost vehicle routes through n customer locations, so that each route starts and ends at a common location and some side constraints are satisfied. Common application arise in newspaper and food delivery, and in milk collection. This paper summarizes the main known results for the classical VRP in which only vehicle capacity constraints are present. The paper is structured around three main headings: exact algorithms, classical heuristics, and metaheuristics.",
"title": ""
},
{
"docid": "c47fde74be75b5e909d7657bb64bf23d",
"text": "As the primary stakeholder for the Enterprise Architecture, the Chief Information Officer (CIO) is responsible for the evolution of the enterprise IT system. An important part of the CIO role is therefore to make decisions about strategic and complex IT matters. This paper presents a cost effective and scenariobased approach for providing the CIO with an accurate basis for decision making. Scenarios are analyzed and compared against each other by using a number of problem-specific easily measured system properties identified in literature. In order to test the usefulness of the approach, a case study has been carried out. A CIO needed guidance on how to assign functionality and data within four overlapping systems. The results are quantifiable and can be presented graphically, thus providing a cost-efficient and easily understood basis for decision making. The study shows that the scenario-based approach can make complex Enterprise Architecture decisions understandable for CIOs and other business-orientated stakeholders",
"title": ""
},
{
"docid": "77371cfa61dbb3053f3106f5433d23a7",
"text": "We present a new noniterative approach to synthetic aperture radar (SAR) autofocus, termed the multichannel autofocus (MCA) algorithm. The key in the approach is to exploit the multichannel redundancy of the defocusing operation to create a linear subspace, where the unknown perfectly focused image resides, expressed in terms of a known basis formed from the given defocused image. A unique solution for the perfectly focused image is then directly determined through a linear algebraic formulation by invoking an additional image support condition. The MCA approach is found to be computationally efficient and robust and does not require prior assumptions about the SAR scene used in existing methods. In addition, the vector-space formulation of MCA allows sharpness metric optimization to be easily incorporated within the restoration framework as a regularization term. We present experimental results characterizing the performance of MCA in comparison with conventional autofocus methods and discuss the practical implementation of the technique.",
"title": ""
},
{
"docid": "8d64cfe5d2f09d5ffae6ad5452d02636",
"text": "PURPOSE\nThis study was designed to examine the relationship between active transportation (defined as the percentage of trips taken by walking, bicycling, and public transit) and obesity rates (BMI > or = 30 kg . m-2) in different countries.\n\n\nMETHODS\nNational surveys of travel behavior and health indicators in Europe, North America, and Australia were used in this study; the surveys were conducted in 1994 to 2006. In some cases raw data were obtained from national or federal agencies and then analyzed, and in other cases summary data were obtained from published reports.\n\n\nRESULTS\nCountries with the highest levels of active transportation generally had the lowest obesity rates. Europeans walked more than United States residents (382 versus 140 km per person per year) and bicycled more (188 versus 40 km per person per year) in 2000.\n\n\nDISCUSSION\nWalking and bicycling are far more common in European countries than in the United States, Australia, and Canada. Active transportation is inversely related to obesity in these countries. Although the results do not prove causality, they suggest that active transportation could be one of the factors that explain international differences in obesity rates.",
"title": ""
},
{
"docid": "b19f473f77b20dcb566fded46100a71b",
"text": "Large amount of information are available online on web.The discussion forum, review sites, blogs are some of the opinion rich resources where review or posted articles is their sentiment, or overall opinion towards the subject matter. The opinions obtained from those can be classified in to positive or negative which can be used by customer to make product choice and by businessmen for finding customer satisfaction .This paper studies online movie reviews using sentiment analysis approaches. In this study, sentiment classification techniques were applied to movie reviews. Specifically, we compared two supervised machine learning approaches SVM, Navie Bayes for Sentiment Classification of Reviews. Results states that Naïve Bayes approach outperformed the svm. If the training dataset had a large number of reviews, Naive bayes approach reached high accuracies as compare to other.",
"title": ""
},
{
"docid": "813f499c7140f882b077be51e99a8ef6",
"text": "This article discusses the challenges, benefits and approaches associated with realizing largescale antenna arrays at mmWave frequency bands for future 5G cellular devices. Key design considerations are investigated to deduce a novel and practical phased array antenna solution operating at 28 GHz with near spherical coverage. The approach is further evolved into a first-of- a-kind cellular phone prototype equipped with mmWave 5G antenna arrays consisting of a total of 32 low-profile antenna elements. Indoor measurements are carried out using the presented prototype to characterize the proposed mmWave antenna system using 16-QAM modulated signals with 27.925 GHz carrier frequency. The biological implications due to the absorbed electromagnetic waves when using mmWave cellular devices are studied and compared in detail with those of 3/4G cellular devices.",
"title": ""
},
{
"docid": "caf866341ad9f74b1ac1dc8572f6e95c",
"text": "One important but often overlooked aspect of human contexts of ubiquitous computing environment is human’s emotional status. And, there are no realistic and robust humancentric contents services so far, because there are few considers about combining context awareness computing with wearable computing for improving suitability of contents to each user’s needs. In this paper, we discuss combining context awareness computing with wearable computing to develop more effective personalized services. And we propose new algorithms to develop efficiently personalized emotion based content service system.",
"title": ""
},
{
"docid": "3514b5c6897c164be822f099e56705f3",
"text": "High-rise tasks such as cleaning, painting, inspection, and maintenance on walls of large buildings or other structures require robots with climbing and manipulating skills. Motivated by these potential applications and inspired by the climbing motion of inchworms, we have developed a biped wall-climbing robot-W-Climbot. Built with a modular approach, the robot consists of five joint modules connected in series and two suction modules mounted at the two ends. With this configuration and biped climbing mode, W-Climbot not only has superior mobility on smooth walls, but also has the function of attaching to and manipulating objects equivalent to a “mobile manipulator.” In this paper, we address several fundamental issues with this novel wall-climbing robot, including system development, analysis of suction force, basic climbing gaits, overcoming obstacles, and transiting among walls. A series of comprehensive and challenging experiments with the robot climbing on walls and performing a manipulation task have been conducted to demonstrate its superior climbing ability and manipulation function. The analytical and experimental results have shown that W-Climbot represents a significant advancement in the development of wall-climbing robots.",
"title": ""
},
{
"docid": "c99dda10fa7c35c56dbb2ee24db2a315",
"text": "Traditional approaches to the task of ACE event extraction primarily rely on elaborately designed features and complicated natural language processing (NLP) tools. These traditional approaches lack generalization, take a large amount of human effort and are prone to error propagation and data sparsity problems. This paper proposes a novel event-extraction method, which aims to automatically extract lexical-level and sentence-level features without using complicated NLP tools. We introduce a word-representation model to capture meaningful semantic regularities for words and adopt a framework based on a convolutional neural network (CNN) to capture sentence-level clues. However, CNN can only capture the most important information in a sentence and may miss valuable facts when considering multiple-event sentences. We propose a dynamic multi-pooling convolutional neural network (DMCNN), which uses a dynamic multi-pooling layer according to event triggers and arguments, to reserve more crucial information. The experimental results show that our approach significantly outperforms other state-of-the-art methods.",
"title": ""
},
{
"docid": "45d6563b2b4c64bb11ad65c3cff0d843",
"text": "The performance of single cue object tracking algorithms may degrade due to complex nature of visual world and environment challenges. In recent past, multicue object tracking methods using single or multiple sensors such as vision, thermal, infrared, laser, radar, audio, and RFID are explored to a great extent. It was acknowledged that combining multiple orthogonal cues enhance tracking performance over single cue methods. The aim of this paper is to categorize multicue tracking methods into single-modal and multi-modal and to list out new trends in this field via investigation of representative work. The categorized works are also tabulated in order to give detailed overview of latest advancement. The person tracking datasets are analyzed and their statistical parameters are tabulated. The tracking performance measures are also categorized depending upon availability of ground truth data. Our review gauges the gap between reported work and future demands for object tracking.",
"title": ""
},
{
"docid": "0122f015e3c054840782d09ede609390",
"text": "Decision rules are one of the most expressive languages for machine learning. In this paper we present Adaptive Model Rules (AMRules), the first streaming rule learning algorithm for regression problems. In AMRules the antecedent of a rule is a conjunction of conditions on the attribute values, and the consequent is a linear combination of attribute values. Each rule uses a PageHinkley test to detect changes in the process generating data and react to changes by pruning the rule set. In the experimental section we report the results of AMRules on benchmark regression problems, and compare the performance of our system with other streaming regression algorithms.",
"title": ""
},
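The drift detection mentioned in the AMRules entry rests on the Page-Hinkley test. The sketch below shows only that test in isolation, assuming it is fed a stream of per-rule errors; the delta and lam thresholds are illustrative values rather than those used in the paper, and the rule-induction machinery is omitted.

```python
# Page-Hinkley change detection over a stream of errors (illustrative thresholds).
class PageHinkley:
    def __init__(self, delta=0.005, lam=50.0):
        self.delta, self.lam = delta, lam
        self.n, self.mean = 0, 0.0
        self.cum, self.cum_min = 0.0, 0.0

    def update(self, x):
        """Feed one observed error; return True when a change is signalled."""
        self.n += 1
        self.mean += (x - self.mean) / self.n      # running mean of the stream
        self.cum += x - self.mean - self.delta     # cumulative deviation above the mean
        self.cum_min = min(self.cum_min, self.cum)
        return self.cum - self.cum_min > self.lam

detector = PageHinkley()
stream = [0.1] * 200 + [2.0] * 200                 # abrupt jump in the error level
for t, err in enumerate(stream):
    if detector.update(err):
        print("change detected at step", t)        # a rule learner would prune here
        break
```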
{
"docid": "eeb1c6e76e3957e5444dcc3865595642",
"text": "The advances of Radio-Frequency Identification (RFID) technology have significantly enhanced the capability of capturing data from pervasive space. It becomes a great challenge in the information era to effectively understand human behavior, mobility and activity through the perceived RFID data. Focusing on RFID data management, this article provides an overview of current challenges, emerging opportunities and recent progresses in RFID. In particular, this article has described and analyzed the research work on three aspects: algorithm, protocol and performance evaluation. We investigate the research progress in RFID with anti-collision algorithms, authentication and privacy protection protocols, localization and activity sensing, as well as performance tuning in realistic settings. We emphasize the basic principles of RFID data management to understand the state-of-the-art and to address directions of future research in RFID.",
"title": ""
},
{
"docid": "e3823047ccc723783cf05f24ca60d449",
"text": "Social science studies have acknowledged that the social influence of individuals is not identical. Social networks structure and shared text can reveal immense information about users, their interests, and topic-based influence. Although some studies have considered measuring user influence, less has been on measuring and estimating topic-based user influence. In this paper, we propose an approach that incorporates network structure, user-generated content for topic-based influence measurement, and user’s interactions in the network. We perform experimental analysis on Twitter data and show that our proposed approach can effectively measure topic-based user influence.",
"title": ""
},
{
"docid": "6c3d34e1a7ab24493a79e938fb67ebec",
"text": "The need to enhance the sustainability of intensive agricultural systems is widely recognized One promising approach is to encourage beneficial services provided by soil microorganisms to decrease the inputs of fertilizers and pesticides. However, limited success of this approach in field applications raises questions as to how this might be best accomplished. We highlight connections between root exudates and the rhizosphere microbiome, and discuss the possibility of using plant exudation characteristics to selectively enhance beneficial microbial activities and microbiome characteristics. Gaps in our understanding and areas of research that are vital to our ability to more fully exploit the soil microbiome for agroecosystem productivity and sustainability are also discussed. This article outlines strategies for more effectively exploiting beneficial microbial services on agricultural systems, and cals attention to topics that require additional research.",
"title": ""
},
{
"docid": "47a0704b6a762ca8fc2561961924da71",
"text": "Mobile apps are becoming complex software systems that must be developed quickly and evolve continuously to fit new user requirements and execution contexts. However, addressing these constraints may result in poor design choices, known as antipatterns, which may incidentally degrade software quality and performance. Thus, the automatic detection of antipatterns is an important activity that eases both maintenance and evolution tasks. Moreover, it guides developers to refactor their applications and thus, to improve their quality. While antipatterns are well-known in object-oriented applications, their study in mobile applications is still in their infancy. In this paper, we propose a tooled approach, called Paprika, to analyze Android applications and to detect object-oriented and Androidspecific antipatterns from binaries of mobile apps. We validate the effectiveness of our approach on a set of popular mobile apps downloaded from the Google Play Store.",
"title": ""
}
] |
scidocsrr
|
0e00383e9e9c94f96a7df024dd09e5c1
|
Blepharophimosis, ptosis, epicanthus inversus syndrome with translocation and deletion at chromosome 3q23 in a black African female.
|
[
{
"docid": "3a29bbe76a53c8284123019eba7e0342",
"text": "Although von Ammon' first used the term blepharphimosis in 1841, it was Vignes2 in 1889 who first associated blepharophimosis with ptosis and epicanthus inversus. In 1921, Dimitry3 reported a family in which there were 21 affected subjects in five generations. He described them as having ptosis alone and did not specify any other features, although photographs in the report show that they probably had the full syndrome. Dimitry's pedigree was updated by Owens et a/ in 1960. The syndrome appeared in both sexes and was transmitted as a Mendelian dominant. In 1935, Usher5 reviewed the reported cases. By then, 26 pedigrees had been published with a total of 175 affected persons with transmission mainly through affected males. There was no consanguinity in any pedigree. In three pedigrees, parents who obviously carried the gene were unaffected. Well over 150 families have now been reported and there is no doubt about the autosomal dominant pattern of inheritance. However, like Usher,5 several authors have noted that transmission is mainly through affected males and less commonly through affected females.4 6 Reports by Moraine et al7 and Townes and Muechler8 have described families where all affected females were either infertile with primary or secondary amenorrhoea or had menstrual irregularity. Zlotogora et a/9 described one family and analysed 38 families reported previously. They proposed the existence of two types: type I, the more common type, in which the syndrome is transmitted by males only and affected females are infertile, and type II, which is transmitted by both affected females and males. There is male to male transmission in both types and both are inherited as an autosomal dominant trait. They found complete penetrance in type I and slightly reduced penetrance in type II.",
"title": ""
}
] |
[
{
"docid": "c8634e3256cfafeec5232a37f141edf0",
"text": "This paper proposes a novel memory-based online video representation that is efficient, accurate and predictive. This is in contrast to prior works that often rely on computationally heavy 3D convolutions, ignore actual motion when aligning features over time, or operate in an off-line mode to utilize future frames. In particular, our memory (i) holds the feature representation, (ii) is spatially warped over time to compensate for observer and scene motions, (iii) can carry long-term information, and (iv) enables predicting feature representations in future frames. By exploring a variant that operates at multiple temporal scales, we efficiently learn across even longer time horizons. We apply our online framework to object detection in videos, obtaining a large 2.3 times speed-up and losing only 0.9% mAP on ImageNet-VID dataset, compared to prior works that even use future frames. Finally, we demonstrate the predictive property of our representation in two novel detection setups, where features are propagated over time to (i) significantly enhance a real-time detector by more than 10% mAP in a multi-threaded online setup and to (ii) anticipate objects in future frames.",
"title": ""
},
{
"docid": "4aa4f059e626239bb54c2e9d2a3c3005",
"text": "INTRODUCTION\nSequential stages in the development of the hand, wrist, and cervical vertebrae commonly are used to assess maturation and predict the timing of the adolescent growth spurt. This approach is predicated on the idea that forecasts based on skeletal age must, of necessity, be superior to those based on chronologic age. This study was undertaken to test this reasonable, albeit largely unproved, assumption in a large, longitudinal sample.\n\n\nMETHODS\nSerial records of 100 children (50 girls, 50 boys) were chosen from the files of the Bolton-Brush Growth Study Center in Cleveland, Ohio. The 100 series were 6 to 11 years in length, a span that was designed to encompass the onset and the peak of the adolescent facial growth spurt in each subject. Five linear cephalometric measurements (S-Na, Na-Me, PNS-A, S-Go, Go-Pog) were summed to characterize general facial size; a sixth (Co-Gn) was used to assess mandibular length. In all, 864 cephalograms were traced and analyzed. For most years, chronologic age, height, and hand-wrist films were available, thereby permitting various alternative methods of maturational assessment and prediction to be tested. The hand-wrist and the cervical vertebrae films for each time point were staged. Yearly increments of growth for stature, face, and mandible were calculated and plotted against chronologic age. For each subject, the actual age at onset and peak for stature and facial and mandibular size served as the gold standards against which key ages inferred from other methods could be compared.\n\n\nRESULTS\nOn average, the onset of the pubertal growth spurts in height, facial size, and mandibular length occurred in girls at 9.3, 9.8, and 9.5 years, respectively. The difference in timing between height and facial size growth spurts was statistically significant. In boys, the onset for height, facial size, and mandibular length occurred more or less simultaneously at 11.9, 12.0, and 11.9 years, respectively. In girls, the peak of the growth spurt in height, facial size, and mandibular length occurred at 10.9, 11.5, and 11.5 years. Height peaked significantly earlier than both facial size and mandibular length. In boys, the peak in height occurred slightly (but statistically significantly) earlier than did the peaks in the face and mandible: 14.0, 14.4, and 14.3 years. Based on rankings, the hand-wrist stages provided the best indication (lowest root mean squared error) that maturation had advanced to the peak velocity stage. Chronologic age, however, was nearly as good, whereas the vertebral stages were consistently the worst. Errors from the use of statural onset to predict the peak of the pubertal growth spurt in height, facial size, and mandibular length were uniformly lower than for predictions based on the cervical vertebrae. Chronologic age, especially in boys, was a close second.\n\n\nCONCLUSIONS\nThe common assumption that onset and peak occur at ages 12 and 14 years in boys and 10 and 12 years in girls seems correct for boys, but it is 6 months to 1 year late for girls. As an index of maturation, hand-wrist skeletal ages appear to offer the best indication that peak growth velocity has been reached. Of the methods tested here for the prediction of the timing of peak velocity, statural onset had the lowest errors. Although mean chronologic ages were nearly as good, stature can be measured repeatedly and thus might lead to improved prediction of the timing of the adolescent growth spurt.",
"title": ""
},
{
"docid": "cc6111093376f0bae267fe686ecd22cd",
"text": "This paper overviews the diverse information technologies that are used to provide athletes with relevant feedback. Examples taken from various sports are used to illustrate selected applications of technology-based feedback. Several feedback systems are discussed, including vision, audition and proprioception. Each technology described here is based on the assumption that feedback would eventually enhance skill acquisition and sport performance and, as such, its usefulness to athletes and coaches in training is critically evaluated.",
"title": ""
},
{
"docid": "9b44952749ebfdb356ab98843299e788",
"text": "The null space of the within-class scatter matrix is found to express most discriminative information for the small sample size problem (SSSP). The null space-based LDA takes full advantage of the null space while the other methods remove the null space. It proves to be optimal in performance. From the theoretical analysis, we present the NLDA algorithm and the most suitable situation for NLDA. Our method is simpler than all other null space approaches, it saves the computational cost and maintains the performance simultaneously. Furthermore, kernel technique is incorporated into discriminant analysis in the null space. Firstly, all samples are mapped to the kernel space through a better kernel function, called Cosine kernel, which is proposed to increase the discriminating capability of the original polynomial kernel function. Secondly, a truncated NLDA is employed. The novel approach only requires one eigenvalue analysis and is also applicable to the large sample size problem. Experiments are carried out on different face data sets to demonstrate the effectiveness of the proposed methods.",
"title": ""
},
{
"docid": "8326f993dbb83e631d2e6892e03520e7",
"text": "Within NASA, there is an increasing awareness that software is of growing importance to the success of missions. Much data has been collected, and many theories have been advanced on how to reduce or eliminate errors in code. However, learning requires experience. This article documents a new NASA initiative to build a centralized repository of software defect data; in particular, it documents one specific case study on software metrics. Software metrics are used as a basis for prediction of errors in code modules, but there are many different metrics available. McCabe is one of the more popular tools used to produce metrics, but, as will be shown in this paper, other metrics can be more significant.",
"title": ""
},
{
"docid": "55b4e5cfd3d162065d15f8f814c20e1e",
"text": "BACKGROUND\nResearchers have demonstrated moderate evidence for the use of exercise in the treatment of subacromial impingement syndrome (SAIS). Recent evidence also supports eccentric exercise for patients with lower extremity and wrist tendinopathies. However, only a few investigators have examined the effects of eccentric exercise on patients with rotator cuff tendinopathy.\n\n\nPURPOSE\nTo compare the effectiveness of an eccentric progressive resistance exercise (PRE) intervention to a concentric PRE intervention in adults with SAIS.\n\n\nSTUDY DESIGN\nRandomized Clinical Trial.\n\n\nMETHODS\nThirty-four participants with SAIS were randomized into concentric (n = 16, mean age: 48.6 ± 14.6 years) and eccentric (n = 18, mean age: 50.1 ± 16.9 years) exercise groups. Supervised rotator cuff and scapular PRE's were performed twice a week for eight weeks. A daily home program of shoulder stretching and active range of motion (AROM) exercises was performed by both groups. The outcome measures of the Disabilities of the Arm, Shoulder, and Hand (DASH) score, pain-free arm scapular plane elevation AROM, pain-free shoulder abduction and external rotation (ER) strength were assessed at baseline, week five, and week eight of the study.\n\n\nRESULTS\nFour separate 2x3 ANOVAs with repeated measures showed no significant difference in any outcome measure between the two groups over time. However, all participants made significant improvements in all outcome measures from baseline to week five (p < 0.0125). Significant improvements also were found from week five to week eight (p < 0.0125) for all outcome measures except scapular plane elevation AROM.\n\n\nCONCLUSION\nBoth eccentric and concentric PRE programs resulted in improved function, AROM, and strength in patients with SAIS. However, no difference was found between the two exercise modes, suggesting that therapists may use exercises that utilize either exercise mode in their treatment of SAIS.\n\n\nLEVEL OF EVIDENCE\nTherapy, level 1b.",
"title": ""
},
{
"docid": "d62bded822aff38333a212ed1853b53c",
"text": "The design of an activity recognition and monitoring system based on the eWatch, multi-sensor platform worn on different body positions, is presented in this paper. The system identifies the user's activity in realtime using multiple sensors and records the classification results during a day. We compare multiple time domain feature sets and sampling rates, and analyze the tradeoff between recognition accuracy and computational complexity. The classification accuracy on different body positions used for wearing electronic devices was evaluated",
"title": ""
},
{
"docid": "8c24f4e178ebe403da3f90f05b97ac17",
"text": "The success of the Human Genome Project and the powerful tools of molecular biology have ushered in a new era of medicine and nutrition. The pharmaceutical industry expects to leverage data from the Human Genome Project to develop new drugs based on the genetic constitution of the patient; likewise, the food industry has an opportunity to position food and nutritional bioactives to promote health and prevent disease based on the genetic constitution of the consumer. This new era of molecular nutrition--that is, nutrient-gene interaction--can unfold in dichotomous directions. One could focus on the effects of nutrients or food bioactives on the regulation of gene expression (ie, nutrigenomics) or on the impact of variations in gene structure on one's response to nutrients or food bioactives (ie, nutrigenetics). The challenge of the public health nutritionist will be to balance the needs of the community with those of the individual. In this regard, the excitement and promise of molecular nutrition should be tempered by the need to validate the scientific data emerging from the disciplines of nutrigenomics and nutrigenetics and the need to educate practitioners and communicate the value to consumers-and to do it all within a socially responsible bioethical framework.",
"title": ""
},
{
"docid": "f1a36f7fd6b3cf42415c483f6ade768e",
"text": "The current paradigm of genomic studies of complex diseases is association and correlation analysis. Despite significant progress in dissecting the genetic architecture of complex diseases by genome-wide association studies (GWAS), the identified genetic variants by GWAS can only explain a small proportion of the heritability of complex diseases. A large fraction of genetic variants is still hidden. Association analysis has limited power to unravel mechanisms of complex diseases. It is time to shift the paradigm of genomic analysis from association analysis to causal inference. Causal inference is an essential component for the discovery of mechanism of diseases. This paper will review the major platforms of the genomic analysis in the past and discuss the perspectives of causal inference as a general framework of genomic analysis. In genomic data analysis, we usually consider four types of associations: association of discrete variables (DNA variation) with continuous variables (phenotypes and gene expressions), association of continuous variables (expressions, methylations, and imaging signals) with continuous variables (gene expressions, imaging signals, phenotypes, and physiological traits), association of discrete variables (DNA variation) with binary trait (disease status) and association of continuous variables (gene expressions, methylations, phenotypes, and imaging signals) with binary trait (disease status). In this paper, we will review algorithmic information theory as a general framework for causal discovery and the recent development of statistical methods for causal inference on discrete data, and discuss the possibility of extending the association analysis of discrete variable with disease to the causal analysis for discrete variable and disease.",
"title": ""
},
{
"docid": "09b77e632fb0e5dfd7702905e51fc706",
"text": "Most natural videos contain numerous events. For example, in a video of a “man playing a piano”, the video might also contain “another man dancing” or “a crowd clapping”. We introduce the task of dense-captioning events, which involves both detecting and describing events in a video. We propose a new model that is able to identify all events in a single pass of the video while simultaneously describing the detected events with natural language. Our model introduces a variant of an existing proposal module that is designed to capture both short as well as long events that span minutes. To capture the dependencies between the events in a video, our model introduces a new captioning module that uses contextual information from past and future events to jointly describe all events. We also introduce ActivityNet Captions, a large-scale benchmark for dense-captioning events. ActivityNet Captions contains 20k videos amounting to 849 video hours with 100k total descriptions, each with its unique start and end time. Finally, we report performances of our model for dense-captioning events, video retrieval and localization.",
"title": ""
},
{
"docid": "011f6529db0dc1dfed11033ed3786759",
"text": "Most modern face super-resolution methods resort to convolutional neural networks (CNN) to infer highresolution (HR) face images. When dealing with very low resolution (LR) images, the performance of these CNN based methods greatly degrades. Meanwhile, these methods tend to produce over-smoothed outputs and miss some textural details. To address these challenges, this paper presents a wavelet-based CNN approach that can ultra-resolve a very low resolution face image of 16 × 16 or smaller pixelsize to its larger version of multiple scaling factors (2×, 4×, 8× and even 16×) in a unified framework. Different from conventional CNN methods directly inferring HR images, our approach firstly learns to predict the LR’s corresponding series of HR’s wavelet coefficients before reconstructing HR images from them. To capture both global topology information and local texture details of human faces, we present a flexible and extensible convolutional neural network with three types of loss: wavelet prediction loss, texture loss and full-image loss. Extensive experiments demonstrate that the proposed approach achieves more appealing results both quantitatively and qualitatively than state-ofthe- art super-resolution methods.",
"title": ""
},
{
"docid": "35b668eeecb71fc1931e139a90f2fd1f",
"text": "In this article we present novel learning methods for estimating the quality of results returned by a search engine in response to a query. Estimation is based on the agreement between the top results of the full query and the top results of its sub-queries. We demonstrate the usefulness of quality estimation for several applications, among them improvement of retrieval, detecting queries for which no relevant content exists in the document collection, and distributed information retrieval. Experiments on TREC data demonstrate the robustness and the effectiveness of our learning algorithms.",
"title": ""
},
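The agreement signal described in the entry above can be sketched as an overlap computation between ranked result lists. The document ids, the cutoff k, and the plain averaging below are illustrative choices, not the estimator actually learned in the paper.

```python
# Overlap between the top-k results of the full query and of its sub-queries.
def overlap_at_k(full_results, sub_results_list, k=5):
    top_full = set(full_results[:k])
    agreements = [len(top_full & set(sub[:k])) / k for sub in sub_results_list]
    return sum(agreements) / len(agreements)

full = ["d1", "d2", "d3", "d4", "d5"]                             # made-up document ids
subs = [["d1", "d2", "d9", "d4", "d8"], ["d3", "d7", "d2", "d6", "d1"]]
print(overlap_at_k(full, subs))                                    # 0.6 -> moderate agreement
```

A low score suggests the sub-queries retrieve very different documents than the full query, which is the kind of evidence a learned quality estimator can build on.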
{
"docid": "47dc7c546c4f0eb2beb1b251ef9e4a81",
"text": "In this paper we describe AMT, a tool for monitoring temporal properties of continuous signals. We first introduce S TL /PSL, a specification formalism based on the industrial standard language P SL and the real-time temporal logic MITL , extended with constructs that allow describing behaviors of real-valued variables. The tool automatically builds property observers from an STL /PSL specification and checks, in an offlineor incrementalfashion, whether simulation traces satisfy the property. The AMT tool is validated through a Fla sh memory case-study.",
"title": ""
},
{
"docid": "e4e58d00ffdfcc881c0ea934ca6152f2",
"text": "Translating linear temporal logic formulas to automata has proven to be an effective approach for implementing linear-time model-checking, and for obtaining many extensions and improvements to this verification method. On the other hand, for branching temporal logic, automata-theoretic techniques have long been thought to introduce an exponential penalty, making them essentially useless for model-checking. Recently, Bernholtz and Grumberg [1993] have shown that this exponential penalty can be avoided, though they did not match the linear complexity of non-automata-theoretic algorithms. In this paper, we show that alternating tree automata are the key to a comprehensive automata-theoretic framework for branching temporal logics. Not only can they be used to obtain optimal decision procedures, as was shown by Muller et al., but, as we show here, they also make it possible to derive optimal model-checking algorithms. Moreover, the simple combinatorial structure that emerges from the automata-theoretic approach opens up new possibilities for the implementation of branching-time model checking and has enabled us to derive improved space complexity bounds for this long-standing problem.",
"title": ""
},
{
"docid": "e7bbef4600048504c8019ff7fdb4758c",
"text": "Convenient assays for superoxide dismutase have necessarily been of the indirect type. It was observed that among the different methods used for the assay of superoxide dismutase in rat liver homogenate, namely the xanthine-xanthine oxidase ferricytochromec, xanthine-xanthine oxidase nitroblue tetrazolium, and pyrogallol autoxidation methods, a modified pyrogallol autoxidation method appeared to be simple, rapid and reproducible. The xanthine-xanthine oxidase ferricytochromec method was applicable only to dialysed crude tissue homogenates. The xanthine-xanthine oxidase nitroblue tetrazolium method, either with sodium carbonate solution, pH 10.2, or potassium phosphate buffer, pH 7·8, was not applicable to rat liver homogenate even after extensive dialysis. Using the modified pyrogallol autoxidation method, data have been obtained for superoxide dismutase activity in different tissues of rat. The effect of age, including neonatal and postnatal development on the activity, as well as activity in normal and cancerous human tissues were also studied. The pyrogallol method has also been used for the assay of iron-containing superoxide dismutase inEscherichia coli and for the identification of superoxide dismutase on polyacrylamide gels after electrophoresis.",
"title": ""
},
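The arithmetic behind a pyrogallol-autoxidation assay can be sketched as below. The absorbance rates are made-up example values, and the convention that one unit of enzyme corresponds to 50% inhibition of autoxidation is a commonly used definition assumed here, not a figure taken from the abstract.

```python
# Illustrative inhibition arithmetic for a pyrogallol-autoxidation SOD assay.
def percent_inhibition(rate_control, rate_sample):
    """Rates are autoxidation rates (e.g. absorbance change per minute)."""
    return 100.0 * (rate_control - rate_sample) / rate_control

def sod_units(rate_control, rate_sample):
    # assumed convention: one unit of SOD gives 50% inhibition
    return percent_inhibition(rate_control, rate_sample) / 50.0

print(percent_inhibition(0.020, 0.012))  # 40.0 (% inhibition)
print(sod_units(0.020, 0.012))           # 0.8 units in the assayed volume
```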
{
"docid": "f44718a0831c9eaa5c73256c6ce31231",
"text": "Plasma concentrations of adiponectin, a novel adipose-specific protein with putative antiatherogenic and antiinflammatory effects, were found to be decreased in Japanese individuals with obesity, type 2 diabetes, and cardiovascular disease, conditions commonly associated with insulin resistance and hyperinsulinemia. To further characterize the relationship between adiponectinemia and adiposity, insulin sensitivity, insulinemia, and glucose tolerance, we measured plasma adiponectin concentrations, body composition (dual-energy x-ray absorptiometry), insulin sensitivity (M, hyperinsulinemic clamp), and glucose tolerance (75-g oral glucose tolerance test) in 23 Caucasians and 121 Pima Indians, a population with a high propensity for obesity and type 2 diabetes. Plasma adiponectin concentration was negatively correlated with percent body fat (r = -0.43), waist-to-thigh ratio (r = -0.46), fasting plasma insulin concentration (r = -0.63), and 2-h glucose concentration (r = -0.38), and positively correlated with M (r = 0.59) (all P < 0.001); all relations were evident in both ethnic groups. In a multivariate analysis, fasting plasma insulin concentration, M, and waist-to-thigh ratio, but not percent body fat or 2-h glucose concentration, were significant independent determinates of adiponectinemia, explaining 47% of the variance (r(2) = 0.47). Differences in adiponectinemia between Pima Indians and Caucasians (7.2 +/- 2.6 vs. 10.2 +/- 4.3 microg/ml, P < 0.0001) and between Pima Indians with normal, impaired, and diabetic glucose tolerance (7.5 +/- 2.7, 6.1 +/- 2.0, 5.5 +/- 1.6 microg/ml, P < 0.0001) remained significant after adjustment for adiposity, but not after additional adjustment for M or fasting insulin concentration. These results confirm that obesity and type 2 diabetes are associated with low plasma adiponectin concentrations in different ethnic groups and indicate that the degree of hypoadiponectinemia is more closely related to the degree of insulin resistance and hyperinsulinemia than to the degree of adiposity and glucose intolerance.",
"title": ""
},
{
"docid": "ae961e9267b1571ec606347f56b0d4ca",
"text": "A benchmark turbulent Backward Facing Step (BFS) airflow was studied in detail through a program of tightly coupled experimental and CFD analysis. The theoretical and experimental approaches were developed simultaneously in a “building block” approach and the results used to verify each “block”. Information from both CFD and experiment was used to develop confidence in the accuracy of each technique and to increase our understanding of the BFS flow.",
"title": ""
},
{
"docid": "dcb79661bc3c89541555be00c7d3d33a",
"text": "With the advent of different kinds of wireless networks and smart phones, Cellular network users are provided with various data connectivity options by Network Service Providers (ISPs) abiding to Service Level Agreement, i.e. regarding to Quality of Service (QoS) of network deployed. Network Performance Metrics (NPMs) are needed to measure the network performance and guarantee the QoS Parameters like Availability, delivery, latency, bandwidth, etc. Two way active measurement protocol (TWAMP) is widely prevalent active measurement approach to measure two-way metrics of networks. In this work, software tool is developed, that enables network user to assess the network performance. There is dearth of tools, which can measure the network performance of wireless networks like Wi-Fi, 3G, etc., Therefore proprietary TWAMP implementation for IPv6 wireless networks on Android platform and indigenous driver development to obtain send/receive timestamps of packets, is proposed, to obtain metrics namely Round-trip delay, Two-way packet Loss, Jitter, Packet Reordering, Packet Duplication and Loss-patterns etc. Analysis of aforementioned metrics indicate QoS of the wireless network under concern and give hints to applications of varying QoS profiles like VOIP, video streaming, etc. to be run at that instant of time or not.",
"title": ""
},
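The two-way metrics listed above come from the timestamps a TWAMP-style exchange records: sender transmit (T1), reflector receive (T2), reflector transmit (T3), and sender receive (T4). The sketch below shows only the basic round-trip-delay and jitter arithmetic with invented timestamps; it is not the proposed Android tool.

```python
# Round-trip delay from TWAMP-style timestamps, excluding reflector processing time.
def round_trip_delay(t1, t2, t3, t4):
    return (t4 - t1) - (t3 - t2)

# (T1, T2, T3, T4) tuples in seconds, made up for illustration.
samples = [(0.000, 0.021, 0.022, 0.045), (1.000, 1.030, 1.031, 1.063)]
rtts = [round_trip_delay(*s) for s in samples]
jitter = max(rtts) - min(rtts)            # simple view of RTT variation
print(rtts, jitter)                        # roughly [0.044, 0.062] and 0.018
```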
{
"docid": "42f7b11d84110d124a23cdd34545bb93",
"text": "Joint extraction of entities and relations is an important task in information extraction. To tackle this problem, we firstly propose a novel tagging scheme that can convert the joint extraction task to a tagging problem. Then, based on our tagging scheme, we study different end-toend models to extract entities and their relations directly, without identifying entities and relations separately. We conduct experiments on a public dataset produced by distant supervision method and the experimental results show that the tagging based methods are better than most of the existing pipelined and joint learning methods. What’s more, the end-to-end model proposed in this paper, achieves the best results on the public dataset.",
"title": ""
},
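The reduction of triple extraction to sequence tagging can be illustrated with a toy tagger. The tag format below (position-relation-role, e.g. B-Founder-1) follows the spirit of such schemes but is a simplified stand-in rather than the paper's exact tag set, and the sentence and triple are invented.

```python
# Toy conversion of an (entity1, relation, entity2) triple into per-token tags.
def tag_sentence(tokens, triple):
    e1, rel, e2 = triple
    tags = ["O"] * len(tokens)
    for entity, role in [(e1, "1"), (e2, "2")]:
        words = entity.split()
        for i in range(len(tokens) - len(words) + 1):
            if tokens[i:i + len(words)] == words:          # first occurrence only
                tags[i] = f"B-{rel}-{role}"
                for j in range(1, len(words)):
                    tags[i + j] = f"I-{rel}-{role}"
                break
    return tags

tokens = "Steve Jobs founded Apple in California".split()
print(tag_sentence(tokens, ("Steve Jobs", "Founder", "Apple")))
# ['B-Founder-1', 'I-Founder-1', 'O', 'B-Founder-2', 'O', 'O']
```

A sequence model trained on such tags can then emit entities and relations jointly, which is the point of the scheme.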
{
"docid": "065c12155991b38d36ec1e71cff60ce4",
"text": "The purpose of this chapter is to introduce, analyze, and compare the models of wheeled mobile robots (WMR) and to present several realizations and commonly encountered designs. The mobility of WMR is discussed on the basis of the kinematic constraints resulting from the pure rolling conditions at the contact points between the wheels and the ground. According to this discussion it is shown that, whatever the number and the types of the wheels, all WMR belong to only five generic classes. Different types of models are derived and compared: the posture model versus the configuration model, the kinematic model versus the dynamic model. The structural properties of these models are discussed and compared. These models as well as their properties constitute the background necessary for model-based control design. Practical robot structures are classified according to the number of wheels, and features are introduced focusing on commonly adopted designs. Omnimobile robots and articulated robots realizations are described in more detail.",
"title": ""
}
] |
scidocsrr
|
cc752f5cf5a0037c84c5ca860dcb60cc
|
A Scalable Distributed Graph Partitioner
|
[
{
"docid": "537793712e4e62d66e35b3c9114706f2",
"text": "Database indices provide a non-discriminative navigational infrastructure to localize tuples of interest. Their maintenance cost is taken during database updates. In this work we study the complementary approach, addressing index maintenance as part of query processing using continuous physical reorganization, i.e., cracking the database into manageable pieces. Each query is interpreted not only as a request for a particular result set, but also as an advice to crack the physical database store into smaller pieces. Each piece is described by a query, all of which are assembled in a cracker index to speedup future search. The cracker index replaces the non-discriminative indices (e.g., B-trees and hash tables) with a discriminative index. Only database portions of past interest are easily localized. The remainder is unexplored territory and remains non-indexed until a query becomes interested. The cracker index is fully self-organized and adapts to changing query workloads. With cracking, the way data is physically stored self-organizes according to query workload. Even with a huge data set, only tuples of interest are touched, leading to significant gains in query performance. In case the focus shifts to a different part of the data, the cracker index will automatically adjust to that. We report on our design and implementation of cracking in the context of a full fledged relational system. It led to a limited enhancement to its relational algebra kernel, such that cracking could be piggy-backed without incurring too much processing overhead. Furthermore, we illustrate the ripple effect of dynamic reorganization on the query plans derived by the SQL optimizer. The experiences and results obtained are indicative of a significant reduction in system complexity with clear performance benefits. ∗Stratos Idreos is the contact author (Stratos.Idreos@cwi.nl) and a Ph.D student at CWI",
"title": ""
}
] |
[
{
"docid": "089c1dda565d88bd739dfb10f88c034f",
"text": "This paper presents the design of CMOS op-amps using indirect feedback compensation technique. The indirect feedback compensation results in much faster and low power op-amps, significant reduction in the layout size and better power supply noise rejection",
"title": ""
},
{
"docid": "f052fae696370910cc59f48552ddd889",
"text": "Decisions involve many intangibles that need to be traded off. To do that, they have to be measured along side tangibles whose measurements must also be evaluated as to, how well, they serve the objectives of the decision maker. The Analytic Hierarchy Process (AHP) is a theory of measurement through pairwise comparisons and relies on the judgements of experts to derive priority scales. It is these scales that measure intangibles in relative terms. The comparisons are made using a scale of absolute judgements that represents, how much more, one element dominates another with respect to a given attribute. The judgements may be inconsistent, and how to measure inconsistency and improve the judgements, when possible to obtain better consistency is a concern of the AHP. The derived priority scales are synthesised by multiplying them by the priority of their parent nodes and adding for all such nodes. An illustration is included.",
"title": ""
},
{
"docid": "d816e98205e191d4dfb3119a384c846c",
"text": "BACKGROUND\nIn order to keep up the optimal iron status in chronic hemodialysis patients, it is important to know how much iron is lost due to hemodialysis. Residual blood associated with the hemodialysis procedure together with blood sampling inevitably causes the loss of iron in chronic hemodialysis patients. Recent advances in hemodialysis techniques might have reduced this complication. In this cross-sectional study, we directly measured total iron loss by hemodialysis.\n\n\nMETHODS\nTwo hundred thirty-nine patients who received chronic hemodialysis at Otowa Memorial Hospital were enrolled; 65.7% of patients were men, and mean age was 67 ± 6.4 years (mean ± SD) and 43.2% were diabetic. Residual blood in blood tubing set and dialyzer after rinse back with saline was collected and homogenized. The iron content including free, protein-bound and heme iron was measured using an atomic absorption spectrometry.\n\n\nRESULTS\nThe mean iron content in residual blood was 1,247.3 ± 796.2 µg (mean ± SD) and the median was 1,002 µg (95% CI 377.6-3,461.6 µg), indicating 160.8 mg (95% CI 58.9-540.0 mg) iron loss annually when hemodialysis was performed 156 times a year. Fifty milliliter whole blood for monthly blood test and another 2 ml of whole blood lost by paracentesis at every dialysis session contains 228.6 and 118.9 mg iron at 11 g/dl hemoglobin, respectively. Therefore, an annual total iron loss due to hemodialysis comes to 508.3 mg (95% CI 406.4-887.5 mg).\n\n\nCONCLUSIONS\nFive hundred milligram of annual iron supplementation might be sufficient to maintain iron status in hemodialysis patients, which is less than the dose recommended as 1,000-2,000 mg a year. Further study will be required to verify this iron supplementation dosage with recent hemodialysis procedure.",
"title": ""
},
{
"docid": "ba3bdb8bc6831fd3df737a24b7656b12",
"text": "I ntegrated circuit processing technology offers increasing integration density, which fuels microprocessor performance growth. Within 10 years it will be possible to integrate a billion transistors on a reasonably sized silicon chip. At this integration level, it is necessary to find parallelism to effectively utilize the transistors. Currently, processor designs dynamically extract parallelism with these transistors by executing many instructions within a single, sequential program in parallel. To find independent instructions within a sequential sequence of instructions, or thread of control, today's processors increasingly make use of sophisticated architectural features. Examples are out-of-order instruction execution and speculative execution of instructions after branches predicted with dynamic hardware branch prediction techniques. Future performance improvements will require processors to be enlarged to execute more instructions per clock cycle. 1 However, reliance on a single thread of control limits the parallelism available for many applications, and the cost of extracting parallelism from a single thread is becoming prohibitive. This cost manifests itself in numerous ways, including increased die area and longer design and verification times. In general, we see diminishing returns when trying to extract parallelism from a single thread. To continue this trend will trade only incremental performance increases for large increases in overall complexity. Although this parallelization might be achieved dynamically in hardware, we advocate using a software approach instead, allowing the hardware to be simple and fast. Emerging parallel compilation technologies , 2 an increase in the use of inherently parallel applications (such as multimedia), and more widespread use of multitasking operating systems should make this feasible. Researchers have proposed two alternative microar-chitectures that exploit multiple threads of control: simultaneous multithreading (SMT) 3 and chip multi-processors (CMP). 4 SMT processors augment wide (issuing many instructions at once) superscalar processors with hardware that allows the processor to execute instructions from multiple threads of control concurrently when possible, dynamically selecting and executing instructions from many active threads simultaneously. This promotes much higher utilization of the processor's execution resources and provides latency tolerance in case a thread stalls due to cache misses or data dependencies. When multiple threads are not available, however, the SMT simply looks like a conventional wide-issue superscalar. CMPs use relatively simple single-thread processor cores to exploit only moderate amounts of parallelism within any one thread, while executing multiple threads in parallel across multiple processor cores. If an application cannot be effectively decomposed into threads, CMPs will be underutilized. From a …",
"title": ""
},
{
"docid": "700c5ed8bac3ee26051991639d2b7fe9",
"text": "A digital computer is generally believed to be an efficient universal computing device; that is, it is believed able to simulate any physical computing device with an increase in computation time by at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. This paper considers factoring integers and finding discrete logarithms, two problems which are generally thought to be hard on a classical computer and which have been used as the basis of several proposed cryptosystems. Efficient randomized algorithms are given for these two problems on a hypothetical quantum computer. These algorithms take a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored.",
"title": ""
},
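The factoring algorithm referred to above relies on a number-theoretic reduction: factoring N comes down to finding the multiplicative order of a random a modulo N. The sketch below shows that classical reduction with a brute-force (exponential-time) order finder standing in for the quantum subroutine, and it assumes N is an odd composite that is not a prime power.

```python
# Classical skeleton of order-finding-based factoring (the order finder is the
# only step the quantum algorithm accelerates).
from math import gcd
from random import randrange

def find_order(a, n):
    """Smallest r > 0 with a**r == 1 (mod n); assumes gcd(a, n) == 1."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor(n):
    while True:
        a = randrange(2, n)
        d = gcd(a, n)
        if d > 1:
            return d                                    # lucky: a shares a factor with n
        r = find_order(a, n)
        if r % 2 == 0 and pow(a, r // 2, n) != n - 1:   # usable order found
            return gcd(pow(a, r // 2, n) - 1, n)

print(factor(15))   # prints 3 or 5
```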
{
"docid": "05e6bc54f6175e1f9bb296500bc3d9e7",
"text": "This article describes XRel, a novel approach for storage and retrieval of XML documents using relational databases. In this approach, an XML document is decomposed into nodes on the basis of its tree structure and stored in relational tables according to the node type, with path information from the root to each node. XRel enables us to store XML documents using a fixed relational schema without any information about DTDs and also to utilize indices such as the B+-tree and the R-tree supported by database management systems. Thus, XRel does not need any extension of relational databases for storing XML documents. For processing XML queries, we present an algorithm for translating a core subset of XPath expressions into SQL queries. Finally, we demonstrate the effectiveness of this approach through several experiments using actual XML documents.",
"title": ""
},
{
"docid": "209ff14abd0b16496af29c143b0fa274",
"text": "Automated text categorization is an important technique for many web applications, such as document indexing, document filtering, and cataloging web resources. Many different approaches have been proposed for the automated text categorization problem. Among them, centroid-based approaches have the advantages of short training time and testing time due to its computational efficiency. As a result, centroid-based classifiers have been widely used in many web applications. However, the accuracy of centroid-based classifiers is inferior to SVM, mainly because centroids found during construction are far from perfect locations.\n We design a fast Class-Feature-Centroid (CFC) classifier for multi-class, single-label text categorization. In CFC, a centroid is built from two important class distributions: inter-class term index and inner-class term index. CFC proposes a novel combination of these indices and employs a denormalized cosine measure to calculate the similarity score between a text vector and a centroid. Experiments on the Reuters-21578 corpus and 20-newsgroup email collection show that CFC consistently outperforms the state-of-the-art SVM classifiers on both micro-F1 and macro-F1 scores. Particularly, CFC is more effective and robust than SVM when data is sparse.",
"title": ""
},
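A generic centroid-based text classifier conveys the structure such methods share: build one centroid per class from training documents, then assign a new document to the most similar centroid. The sketch deliberately omits CFC's inter-class/inner-class term weighting and its denormalized cosine score, and the documents and labels are invented.

```python
# Generic centroid classifier sketch (not the CFC weighting scheme itself).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["stocks fell sharply", "the match ended in a draw",
        "markets rallied today", "the team won the cup"]
labels = ["finance", "sport", "finance", "sport"]

vec = TfidfVectorizer()
X = vec.fit_transform(docs).toarray()
centroids = {c: X[[i for i, l in enumerate(labels) if l == c]].mean(axis=0)
             for c in set(labels)}

def classify(text):
    v = vec.transform([text]).toarray()[0]
    def cosine(c):
        return np.dot(v, centroids[c]) / (np.linalg.norm(v) * np.linalg.norm(centroids[c]) + 1e-12)
    return max(centroids, key=cosine)

print(classify("the markets fell"))   # expected: finance
```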
{
"docid": "79337af1a501064e34ef69d1d2956013",
"text": "Monero is a privacy-centric cryptocurrency that allows users to obscure their transaction graph by including chaff coins, called “mixins,” along with the actual coins they spend. In this report, we empirically evaluate two weaknesses in Monero’s mixin sampling strategy. First, about 62% of transaction inputs with one or more mixins are vulnerable to “chain-reaction” analysis — that is, the real input can be deduced by elimination, e.g. because the mixins they include are spent by 0-mixin transactions. Second, Monero mixins are sampled in such a way that the mixins can be easily distinguished from the real coins by their age distribution; in short, the real input is usually the “newest” input. We estimate that this heuristic can be used to guess the real input with 80% accuracy over all transactions with 1 or more mixins. Our analysis uses only public blockchain data, in contrast to earlier attacks requiring active participation in the network [10, 7]. While the first weakness primarily affects Monero transactions made by older software versions (i.e., prior to RingCT), the second weakness is applicable to the newest versions as well. We propose and evaluate a countermeasure derived from blockchain data that can improve the privacy of future transactions. Working paper disclaimer: This is a draft of work-inprogress. It has not yet been peer-reviewed, and contains preliminary results that may be subject to further revision. ∗University of Illinois at Urbana-Champaign †Initiative for Cryptocurrencies and Contracts, initc3.org ‡Andrew Miller is a consultant to the Zerocoin Electric Coin Company and a board member of the Zcash Foundation. §Princeton University",
"title": ""
},
{
"docid": "16e2ba731973bfdad051b775078e08be",
"text": "I examine the phenomenon of implicit learning, the process by which knowledge about the ralegoverned complexities of the stimulus environment is acquired independently of conscious attempts to do so. Our research with the two, seemingly disparate experimental paradigms of synthetic grammar learning and probability learning is reviewed and integrated with other approaches to the general problem of unconscious cognition. The conclusions reached are as follows: (a) Implicit learning produces a tacit knowledge base that is abstract and representative of the structure of the environment; (b) such knowledge is optimally acquired independently of conscious efforts to learn; and (c) it can be used implicitly to solve problems and make accurate decisions about novel stimulus circumstances. Various epistemological issues and related prob1 lems such as intuition, neuroclinical disorders of learning and memory, and the relationship of evolutionary processes to cognitive science are also discussed.",
"title": ""
},
{
"docid": "be017adea5e5c5f183fd35ac2ff6b614",
"text": "In nationally representative yearly surveys of United States 8th, 10th, and 12th graders 1991-2016 (N = 1.1 million), psychological well-being (measured by self-esteem, life satisfaction, and happiness) suddenly decreased after 2012. Adolescents who spent more time on electronic communication and screens (e.g., social media, the Internet, texting, gaming) and less time on nonscreen activities (e.g., in-person social interaction, sports/exercise, homework, attending religious services) had lower psychological well-being. Adolescents spending a small amount of time on electronic communication were the happiest. Psychological well-being was lower in years when adolescents spent more time on screens and higher in years when they spent more time on nonscreen activities, with changes in activities generally preceding declines in well-being. Cyclical economic indicators such as unemployment were not significantly correlated with well-being, suggesting that the Great Recession was not the cause of the decrease in psychological well-being, which may instead be at least partially due to the rapid adoption of smartphones and the subsequent shift in adolescents' time use. (PsycINFO Database Record",
"title": ""
},
{
"docid": "8840e9e1e304a07724dd6e6779cfc9c4",
"text": "Clustering has become an increasingly important task in modern application domains such as marketing and purchasing assistance, multimedia, molecular biology as well as many others. In most of these areas, the data are originally collected at different sites. In order to extract information from these data, they are merged at a central site and then clustered. In this paper, we propose a different approach. We cluster the data locally and extract suitable representatives from these clusters. These representatives are sent to a global server site where we restore the complete clustering based on the local representatives. This approach is very efficient, because the local clustering can be carried out quickly and independently from each other. Furthermore, we have low transmission cost, as the number of transmitted representatives is much smaller than the cardinality of the complete data set. Based on this small number of representatives, the global clustering can be done very efficiently. For both the local and the global clustering, we use a density based clustering algorithm. The combination of both the local and the global clustering forms our new DBDC (Density Based Distributed Clustering) algorithm. Furthermore, we discuss the complex problem of finding a suitable quality measure for evaluating distributed clusterings. We introduce two quality criteria which are compared to each other and which allow us to evaluate the quality of our DBDC algorithm. In our experimental evaluation, we will show that we do not have to sacrifice clustering quality in order to gain an efficiency advantage when using our distributed clustering approach.",
"title": ""
},
{
"docid": "60ebdcd2d3e47ce8a054f2073672f43e",
"text": "Deep reinforcement learning algorithms that estimate state and state-action value functions have been shown to be effective in a variety of challenging domains, including learning control strategies from raw image pixels. However, algorithms that estimate state and state-action value functions typically assume a fully observed state and must compensate for partial observations by using finite length observation histories or recurrent networks. In this work, we propose a new deep reinforcement learning algorithm based on counterfactual regret minimization that iteratively updates an approximation to an advantage-like function and is robust to partially observed state. We demonstrate that this new algorithm can substantially outperform strong baseline methods on several partially observed reinforcement learning tasks: learning first-person 3D navigation in Doom and Minecraft, and acting in the presence of partially observed objects in Doom and Pong.",
"title": ""
},
{
"docid": "b913385dfedc1c6557c11a9e5db1ce51",
"text": "The design of a wireless communication system is dependent upon the propagation environment in which the system is to be used. Factors such as the time delay spread and the path loss of a radio channel affect the performance and reliability of a wireless system. These factors can be accurately measured through RF propagation measurements in the environments in which artemerging wireless technology is to be deployed~",
"title": ""
},
{
"docid": "0e37a1a251c97fd88aa2ab3ee9ed422b",
"text": "k-means algorithm and its variations are known to be fast clustering algorithms. However, they are sensitive to the choice of starting points and inefficient for solving clustering problems in large data sets. Recently, a new version of the k-means algorithm, the global k-means algorithm has been developed. It is an incremental algorithm that dynamically adds one cluster center at a time and uses each data point as a candidate for the k-th cluster center. Results of numerical experiments show that the global k-means algorithm considerably outperforms the k-means algorithms. In this paper, a new version of the global k-means algorithm is proposed. A starting point for the k-th cluster center in this algorithm is computed by minimizing an auxiliary cluster function. Results of numerical experiments on 14 data sets demonstrate the superiority of the new algorithm, however, it requires more computational time than the global k-means algorithm.",
"title": ""
},
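The incremental construction described in the global k-means entry can be sketched as follows: grow the set of centers one cluster at a time, trying each data point as the candidate position for the new center and keeping the best resulting solution. The auxiliary-function shortcut that the paper proposes for choosing the candidate is not reproduced here, and the data are random points for illustration.

```python
# Incremental (global-k-means-style) center construction with exhaustive candidate search.
import numpy as np
from sklearn.cluster import KMeans

def incremental_kmeans(X, k_max):
    centers = X.mean(axis=0, keepdims=True)              # optimal 1-center solution
    for k in range(2, k_max + 1):
        best = None
        for x in X:                                       # every point is a candidate
            init = np.vstack([centers, x])
            km = KMeans(n_clusters=k, init=init, n_init=1).fit(X)
            if best is None or km.inertia_ < best.inertia_:
                best = km
        centers = best.cluster_centers_
    return centers

X = np.random.RandomState(0).rand(60, 2)
print(incremental_kmeans(X, 3))                           # three 2-D centers
```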
{
"docid": "d4303828b62c4a03ca69a071d909b0a8",
"text": "Despite the increased salience of metaphor in organization theory, current perspectives are flawed and misguided in assuming that metaphor can be explained with the so-called comparison model. I therefore outline an alternative model of metaphor understanding—the domains-interaction model—which suggests that metaphor involves the conjunction of whole semantic domains in which a correspondence between terms or concepts is constructed rather than deciphered and where the resulting image and meaning is creative. I also discuss implications of this model for organizational theorizing and research.",
"title": ""
},
{
"docid": "66b088871549d5ec924dbe500522d6f8",
"text": "Being able to effectively measure similarity between patents in a complex patent citation network is a crucial task in understanding patent relatedness. In the past, techniques such as text mining and keyword analysis have been applied for patent similarity calculation. The drawback of these approaches is that they depend on word choice and writing style of authors. Most existing graph-based approaches use common neighbor-based measures, which only consider direct adjacency. In this work we propose new similarity measures for patents in a patent citation network using only the patent citation network structure. The proposed similarity measures leverage direct and indirect co-citation links between patents. A challenge is when some patents receive a large number of citations, thus are considered more similar to many other patents in the patent citation network. To overcome this challenge, we propose a normalization technique to account for the case where some pairs are ranked very similar to each other because they both are cited by many other patents. We validate our proposed similarity measures using US class codes for US patents and the well-known Jaccard similarity index. Experiments show that the proposed methods perform well when compared to the Jaccard similarity index.",
"title": ""
},
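A minimal version of the co-citation idea from the entry above compares the sets of patents that cite each of two patents. The Jaccard form and the example ids are illustrative; the indirect links and the normalization for heavily cited patents described in the abstract are left out.

```python
# Co-citation similarity of two patents from the sets of patents citing them.
def cocitation_jaccard(citers_a, citers_b):
    a, b = set(citers_a), set(citers_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

print(cocitation_jaccard({"P10", "P11", "P12"}, {"P11", "P12", "P13"}))  # 0.5
```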
{
"docid": "78f272578191996200259e10d209fe19",
"text": "The information in government web sites, which are widely adopted in many countries, must be accessible for all people, easy to use, accurate and secure. The main objective of this study is to investigate the usability, accessibility and security aspects of e-government web sites in Kyrgyz Republic. The analysis of web government pages covered 55 sites listed in the State Information Resources of the Kyrgyz Republic and five government web sites which were not included in the list. Analysis was conducted using several automatic evaluation tools. Results suggested that government web sites in Kyrgyz Republic have a usability error rate of 46.3 % and accessibility error rate of 69.38 %. The study also revealed security vulnerabilities in these web sites. Although the “Concept of Creation and Development of Information Network of the Kyrgyz Republic” was launched at September 23, 1994, government web sites in the Kyrgyz Republic have not been reviewed and still need great efforts to improve accessibility, usability and security.",
"title": ""
},
{
"docid": "181eafc11f3af016ca0926672bdb5a9d",
"text": "The conventional wisdom is that backprop nets with excess hi dden units generalize poorly. We show that nets with excess capacity ge neralize well when trained with backprop and early stopping. Experim nts suggest two reasons for this: 1) Overfitting can vary significant ly i different regions of the model. Excess capacity allows better fit to reg ions of high non-linearity, and backprop often avoids overfitting the re gions of low non-linearity. 2) Regardless of size, nets learn task subco mponents in similar sequence. Big nets pass through stages similar to th ose learned by smaller nets. Early stopping can stop training the large n et when it generalizes comparably to a smaller net. We also show that co njugate gradient can yield worse generalization because it overfits regions of low non-linearity when learning to fit regions of high non-linea rity.",
"title": ""
},
{
"docid": "63c6fbbb7d3df72e55affbea8576d8b4",
"text": "Supervised (pre-)training currently yields state-of-the-art performance for representation learning for visual recognition, yet it comes at the cost of (1) intensive manual annotations and (2) an inherent restriction in the scope of data relevant for learning. In this work, we explore unsupervised feature learning from unlabeled video. We introduce a novel object-centric approach to temporal coherence that encourages similar representations to be learned for object-like regions segmented from nearby frames. Our framework relies on a Siamese-triplet network to train a deep convolutional neural network (CNN) representation. Compared to existing temporal coherence methods, our idea has the advantage of lightweight preprocessing of the unlabeled video (no tracking required) while still being able to extract object-level regions from which to learn invariances. Furthermore, as we show in results on several standard datasets, our method typically achieves substantial accuracy gains over competing unsupervised methods for image classification and retrieval tasks.",
"title": ""
},
{
"docid": "eb3da7095e9b5837db7ae6aa9f30aefa",
"text": "Phenolics are broadly distributed in the plant kingdom and are the most abundant secondary metabolites of plants. Plant polyphenols have drawn increasing attention due to their potent antioxidant properties and their marked effects in the prevention of various oxidative stress associated diseases such as cancer. In the last few years, the identification and development of phenolic compounds or extracts from different plants has become a major area of health- and medical-related research. This review provides an updated and comprehensive overview on phenolic extraction, purification, analysis and quantification as well as their antioxidant properties. Furthermore, the anticancer effects of phenolics in-vitro and in-vivo animal models are viewed, including recent human intervention studies. Finally, possible mechanisms of action involving antioxidant and pro-oxidant activity as well as interference with cellular functions are discussed.",
"title": ""
}
] |
scidocsrr
|
87d435409e5dd54ef5ae6c22fc661ca3
|
High-performance secure multi-party computation for data mining applications
|
[
{
"docid": "cd36a4e57a446e25ae612cdc31f6293e",
"text": "Privacy and security concerns can prevent sharing of data, derailing data mining projects. Distributed knowledge discovery, if done correctly, can alleviate this problem. The key is to obtain valid results, while providing guarantees on the (non)disclosure of data. We present a method for k-means clustering when different sites contain different attributes for a common set of entities. Each site learns the cluster of each entity, but learns nothing about the attributes at other sites.",
"title": ""
}
] |
[
{
"docid": "eb150ae59ceffae1894c8985931ddfc9",
"text": "This paper presents the design and implementation of Constant-Fraction-Discriminators (CFD) suitable for multi-channel mixed-mode ICs. Issues related to area occupation, power consumption and timing accuracy are discussed in detail. The circuits have been designed targeting a 0.13µm CMOS process.",
"title": ""
},
{
"docid": "7bb9f8794f8df481967f6f01b9e9d924",
"text": "It is widely realized that the integration of database and information retrieval techniques will provide users with a wide range of high quality services. In this paper, we study processing an l-keyword query, p1, p1, ..., pl, against a relational database which can be modeled as a weighted graph, G(V, E). Here V is a set of nodes (tuples) and E is a set of edges representing foreign key references between tuples. Let Vi ⊆ V be a set of nodes that contain the keyword pi. We study finding top-k minimum cost connected trees that contain at least one node in every subset Vi, and denote our problem as GST-k When k = 1, it is known as a minimum cost group Steiner tree problem which is NP-complete. We observe that the number of keywords, l, is small, and propose a novel parameterized solution, with l as a parameter, to find the optimal GST-1, in time complexity O(3ln + 2l ((l + logn)n + m)), where n and m are the numbers of nodes and edges in graph G. Our solution can handle graphs with a large number of nodes. Our GST-1 solution can be easily extended to support GST-k, which outperforms the existing GST-k solutions over both weighted undirected/directed graphs. We conducted extensive experimental studies, and report our finding.",
"title": ""
},
{
"docid": "6b2118549a18be9af844f6bbf11fc0ee",
"text": "Feature selection is an important technique for data mining. Despite its importance, most studies of feature selection are restricted to batch learning. Unlike traditional batch learning methods, online learning represents a promising family of efficient and scalable machine learning algorithms for large-scale applications. Most existing studies of online learning require accessing all the attributes/features of training instances. Such a classical setting is not always appropriate for real-world applications when data instances are of high dimensionality or it is expensive to acquire the full set of attributes/features. To address this limitation, we investigate the problem of online feature selection (OFS) in which an online learner is only allowed to maintain a classifier involved only a small and fixed number of features. The key challenge of online feature selection is how to make accurate prediction for an instance using a small number of active features. This is in contrast to the classical setup of online learning where all the features can be used for prediction. We attempt to tackle this challenge by studying sparsity regularization and truncation techniques. Specifically, this article addresses two different tasks of online feature selection: 1) learning with full input, where an learner is allowed to access all the features to decide the subset of active features, and 2) learning with partial input, where only a limited number of features is allowed to be accessed for each instance by the learner. We present novel algorithms to solve each of the two problems and give their performance analysis. We evaluate the performance of the proposed algorithms for online feature selection on several public data sets, and demonstrate their applications to real-world problems including image classification in computer vision and microarray gene expression analysis in bioinformatics. The encouraging results of our experiments validate the efficacy and efficiency of the proposed techniques.",
"title": ""
},
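The truncation idea in the online-feature-selection passage above can be sketched as a single online round that updates a linear model and then keeps only the B largest-magnitude weights; the perceptron-style update and the step size are illustrative assumptions, not the article's exact algorithm.

```python
import numpy as np

def ofs_round(w, x, y, eta=0.1, B=10):
    """One online round: update on a mistake, then truncate to B active features."""
    w = w.copy()
    if y * np.dot(w, x) <= 0:              # misclassified (labels y in {-1, +1})
        w += eta * y * x
    if np.count_nonzero(w) > B:            # keep only the B largest-magnitude weights
        drop = np.argsort(np.abs(w))[:-B]
        w[drop] = 0.0
    return w
```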
{
"docid": "44b44e400b44f3f83b698f9492e5c8b7",
"text": "Word vector representation techniques, built on word-word co-occurrence statistics, often provide representations that decode the differences in meaning between various words. This significant fact is a powerful tool that can be exploited to a great deal of natural language processing tasks. In this work, we propose a simple and efficient unsupervised approach for keyphrase extraction, called Reference Vector Algorithm (RVA) which utilizes a local word vector representation by applying the GloVe method in the context of one scientific publication at a time. Then, the mean word vector (reference vector) of the article’s abstract guides the candidate keywords’ selection process, using the cosine similarity. The experimental results that emerged through a thorough evaluation process show that our method outperforms the state-of-the-art methods by providing high quality keyphrases in most cases, proposing in this way an additional mode for the exploitation of GloVe word vectors.",
"title": ""
},
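A minimal sketch of the reference-vector ranking step described in the keyphrase-extraction passage above, assuming pre-computed word vectors in a dict (the local GloVe training and candidate generation are omitted); all names here are illustrative.

```python
import numpy as np

def rank_candidates(abstract_tokens, candidates, vec):
    """Rank candidate phrases by cosine similarity to the abstract's mean word vector."""
    ref = np.mean([vec[t] for t in abstract_tokens if t in vec], axis=0)

    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    scored = []
    for phrase in candidates:
        vs = [vec[w] for w in phrase.split() if w in vec]
        if vs:                               # skip phrases with no known words
            scored.append((phrase, cos(np.mean(vs, axis=0), ref)))
    return sorted(scored, key=lambda p: p[1], reverse=True)
```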
{
"docid": "52c74771c7d9d31ca4c78cf1da7d9c01",
"text": "This paper describes the Tezpur University dataset of online handwritten Assamese characters. The online data acquisition process involves the capturing of data as the text is written on a digitizer with an electronic pen. A sensor picks up the pen-tip movements, as well as pen-up/pen-down switching. The dataset contains 8,235 isolated online handwritten Assamese characters. Preliminary results on the classification of online handwritten Assamese characters using the above dataset are presented in this paper. The use of the support vector machine classifier and the classification accuracy for three different feature vectors are explored in our research.",
"title": ""
},
{
"docid": "2a1bee8632e983ca683cd5a9abc63343",
"text": "Phrase browsing techniques use phrases extracted automatically from a large information collection as a basis for browsing and accessing it. This paper describes a case study that uses an automatically constructed phrase hierarchy to facilitate browsing of an ordinary large Web site. Phrases are extracted from the full text using a novel combination of rudimentary syntactic processing and sequential grammar induction techniques. The interface is simple, robust and easy to use.\nTo convey a feeling for the quality of the phrases that are generated automatically, a thesaurus used by the organization responsible for the Web site is studied and its degree of overlap with the phrases in the hierarchy is analyzed. Our ultimate goal is to amalgamate hierarchical phrase browsing and hierarchical thesaurus browsing: the latter provides an authoritative domain vocabulary and the former augments coverage in areas the thesaurus does not reach.",
"title": ""
},
{
"docid": "2fe0639b8a1fc6c64bb8e177576ec06e",
"text": "A new approach for ranking fuzzy numbers based on a distance measure is introduced. A new class of distance measures for interval numbers that takes into account all the points in both intervals is developed -rst, and then it is used to formulate the distance measure for fuzzy numbers. The approach is illustrated by numerical examples, showing that it overcomes several shortcomings such as the indiscriminative and counterintuitive behavior of several existing fuzzy ranking approaches. c © 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "8fb99cd1e2db6b1e4f3f0c2fa1b125bc",
"text": "Temptation pervades modern social life, including the temptation to engage in infidelity. The present investigation examines one factor that may put individuals at a greater risk of being unfaithful to their partner: dispositional avoidant attachment style. The authors hypothesize that avoidantly attached people may be less resistant to temptations for infidelity due to lower levels of commitment in romantic relationships. This hypothesis was confirmed in 8 studies. People with high, vs. low, levels of dispositional avoidant attachment had more permissive attitudes toward infidelity (Study 1), showed attentional bias toward attractive alternative partners (Study 2), expressed greater daily interest in meeting alternatives to their current relationship partner (Study 5), perceived alternatives to their current relationship partner more positively (Study 6), and engaged in more infidelity over time (Studies 3, 4, 7, and 8). This effect was mediated by lower levels of commitment (Studies 5-8). Thus, avoidant attachment predicted a broad spectrum of responses indicative of interest in alternatives and propensity to engage in infidelity, which were mediated by low levels of commitment.",
"title": ""
},
{
"docid": "2717779fa409f10f3a509e398dc24233",
"text": "Hallyu refers to the phenomenon of Korean popular culture which came into vogue in Southeast Asia and mainland China in late 1990s. Especially, hallyu is very popular among young people enchanted with Korean music (K-pop), dramas (K-drama), movies, fashion, food, and beauty in China, Taiwan, Hong Kong, and Vietnam, etc. This cultural phenomenon has been closely connected with multi-layered transnational movements of people, information and capital flows in East Asia. Since the 15 century, East and West have been the two subjects of cultural phenomena. Such East–West dichotomy was articulated by Westerners in the scholarly tradition known as “Orientalism.”During the Age of Exploration (1400–1600), West didn’t only take control of East by military force, but also created a new concept of East/Orient, as Edward Said analyzed it expertly in his masterpiece Orientalism in 1978. Throughout the history of imperialism for nearly 4-5 centuries, west was a cognitive subject, but East was an object being recognized by the former. Accordingly, “civilization and modernization” became the exclusive properties of which West had copyright (?!), whereas East was a “sub-subject” to borrow or even plagiarize from Western standards. In this sense, (making) modern history in East Asia was a compulsive imitation of Western civilization or a catch-up with the West in other wards. Thus, it is interesting to note that East Asian people, after gaining economic power through “compressed modernization,” are eager to be main agents of their cultural activities in and through the enjoyment of East Asian popular culture in a postmodern era. In this transition from Westerncentered into East Asian-based popular culture, they are no longer sub-subjects of modernity.",
"title": ""
},
{
"docid": "8553a5d062f48f47de899cc5d23e2059",
"text": "A systems approach to studying biology uses a variety of mathematical, computational, and engineering tools to holistically understand and model properties of cells, tissues, and organisms. Building from early biochemical, genetic, and physiological studies, systems biology became established through the development of genome-wide methods, high-throughput procedures, modern computational processing power, and bioinformatics. Here, we highlight a variety of systems approaches to the study of biological rhythms that occur with a 24-h period-circadian rhythms. We review how systems methods have helped to elucidate complex behaviors of the circadian clock including temperature compensation, rhythmicity, and robustness. Finally, we explain the contribution of systems biology to the transcription-translation feedback loop and posttranslational oscillator models of circadian rhythms and describe new technologies and \"-omics\" approaches to understand circadian timekeeping and neurophysiology.",
"title": ""
},
{
"docid": "1349cdd5f181c2d6b958280a728d43b6",
"text": "Colormaps are a vital method for users to gain insights into data in a visualization. With a good choice of colormaps, users are able to acquire information in the data more effectively and efficiently. In this survey, we attempt to provide readers with a comprehensive review of colormap generation techniques and provide readers a taxonomy which is helpful for finding appropriate techniques to use for their data and applications. Specifically, we first briefly introduce the basics of color spaces including color appearance models. In the core of our paper, we survey colormap generation techniques, including the latest advances in the field by grouping these techniques into four classes: procedural methods, user-study based methods, rule-based methods, and data-driven methods; we also include a section on methods that are beyond pure data comprehension purposes. We then classify colormapping techniques into a taxonomy for readers to quickly identify the appropriate techniques they might use. Furthermore, a representative set of visualization techniques that explicitly discuss the use of colormaps is reviewed and classified based on the nature of the data in these applications. Our paper is also intended to be a reference of colormap choices for readers when they are faced with similar data and/or tasks.",
"title": ""
},
{
"docid": "a22f0e1bda2c3cfcf8e9f7cf3feabf6a",
"text": "Object detection in aerial images is an active yet challenging task in computer vision because of the birdview perspective, the highly complex backgrounds, and the variant appearances of objects. Especially when detecting densely packed objects in aerial images, methods relying on horizontal proposals for common object detection often introduce mismatches between the Region of Interests (RoIs) and objects. This leads to the common misalignment between the final object classification confidence and localization accuracy. Although rotated anchors have been used to tackle this problem, the design of them always multiplies the number of anchors and dramatically increases the computational complexity. In this paper, we propose a RoI Transformer to address these problems. More precisely, to improve the quality of region proposals, we first designed a Rotated RoI (RRoI) learner to transform a Horizontal Region of Interest (HRoI) into a Rotated Region of Interest (RRoI). Based on the RRoIs, we then proposed a Rotated Position Sensitive RoI Align (RPS-RoI-Align) module to extract rotation-invariant features from them for boosting subsequent classification and regression. Our RoI Transformer is with light weight and can be easily embedded into detectors for oriented object detection. A simple implementation of the RoI Transformer has achieved state-of-the-art performances on two common and challenging aerial datasets, i.e., DOTA and HRSC2016, with a neglectable reduction to detection speed. Our RoI Transformer exceeds the deformable Position Sensitive RoI pooling when oriented bounding-box annotations are available. Extensive experiments have also validated the flexibility and effectiveness of our RoI Transformer. The results demonstrate that it can be easily integrated with other detector architectures and significantly improve the performances.",
"title": ""
},
{
"docid": "0c62440845e4543ee16150e0c7222f49",
"text": "Background\nTo ensure high quality patient care an effective interprofessional collaboration between healthcare professionals is required. Interprofessional education (IPE) has a positive impact on team work in daily health care practice. Nevertheless, there are various challenges for sustainable implementation of IPE. To identify enablers and barriers of IPE for medical and nursing students as well as to specify impacts of IPE for both professions, the 'Cooperative academical regional evidence-based Nursing Study in Mecklenburg-Western Pomerania' (Care-N Study M-V) was conducted. The aim is to explore, how IPE has to be designed and implemented in medical and nursing training programs to optimize students' impact for IPC.\n\n\nMethods\nA qualitative study was conducted using the Delphi method and included 25 experts. Experts were selected by following inclusion criteria: (a) ability to answer every research question, one question particularly competent, (b) interdisciplinarity, (c) sustainability and (d) status. They were purposely sampled. Recruitment was based on existing collaborations and a web based search.\n\n\nResults\nThe experts find more enablers than barriers for IPE between medical and nursing students. Four primary arguments for IPE were mentioned: (1) development and promotion of interprofessional thinking and acting, (2) acquirement of shared knowledge, (3) promotion of beneficial information and knowledge exchange, and (4) promotion of mutual understanding. Major barriers of IPE are the coordination and harmonization of the curricula of the two professions. With respect to the effects of IPE for IPC, experts mentioned possible improvements on (a) patient level and (b) professional level. Experts expect an improved patient-centered care based on better mutual understanding and coordinated cooperation in interprofessional health care teams. To sustainably implement IPE for medical and nursing students, IPE needs endorsement by both, medical and nursing faculties.\n\n\nConclusion\nIn conclusion, IPE promotes interprofessional cooperation between the medical and the nursing profession. Skills in interprofessional communication and roles understanding will be primary preconditions to improve collaborative patient-centered care. The impact of IPE for patients and caregivers as well as for both professions now needs to be more specifically analysed in prospective intervention studies.",
"title": ""
},
{
"docid": "910b955d0d290e90fe207418b5601019",
"text": "We propose a branch flow model for the analysis and optimization of mesh as well as radial networks. The model leads to a new approach to solving optimal power flow (OPF) that consists of two relaxation steps. The first step eliminates the voltage and current angles and the second step approximates the resulting problem by a conic program that can be solved efficiently. For radial networks, we prove that both relaxation steps are always exact, provided there are no upper bounds on loads. For mesh networks, the conic relaxation is always exact but the angle relaxation may not be exact, and we provide a simple way to determine if a relaxed solution is globally optimal. We propose convexification of mesh networks using phase shifters so that OPF for the convexified network can always be solved efficiently for an optimal solution. We prove that convexification requires phase shifters only outside a spanning tree of the network and their placement depends only on network topology, not on power flows, generation, loads, or operating constraints. Part I introduces our branch flow model, explains the two relaxation steps, and proves the conditions for exact relaxation. Part II describes convexification of mesh networks, and presents simulation results.",
"title": ""
},
{
"docid": "0f659ff5414e75aefe23bb85127d93dd",
"text": "Important information is captured in medical documents. To make use of this information and intepret the semantics, technologies are required for extracting, analysing and interpreting it. As a result, rich semantics including relations among events, subjectivity or polarity of events, become available. The First Workshop on Extraction and Processing of Rich Semantics from Medical Texts, is devoted to the technologies for dealing with clinical documents for medical information gathering and application in knowledge based systems. New approaches for identifying and analysing rich semantics are presented. In this paper, we introduce the topic and summarize the workshop contributions.",
"title": ""
},
{
"docid": "2578607ec2e7ae0d2e34936ec352ff6e",
"text": "AI Innovation in Industry is a new department for IEEE Intelligent Systems, and this paper examines some of the basic concerns and uses of AI for big data (AI has been used in several different ways to facilitate capturing and structuring big data, and it has been used to analyze big data for key insights).",
"title": ""
},
{
"docid": "9402365e2fdbdbdea13c18da5e4a05de",
"text": "Battery models capture the characteristics of real-life batteries, and can be used to predict their behavior under various operating conditions. In this paper, a dynamic model of lithium-ion battery has been developed with MATLAB/Simulink® in order to investigate the output characteristics of lithium-ion batteries. Dynamic simulations are carried out, including the observation of the changes in battery terminal output voltage under different charging/discharging, temperature and cycling conditions, and the simulation results are compared with the results obtained from several recent studies. The simulation studies are presented for manifesting that the model is effective and operational.",
"title": ""
},
{
"docid": "fea1bc4b60abe7435c4953f2eb4b5dae",
"text": "Facing a large number of personal photos and limited resource of mobile devices, cloud plays an important role in photo storing, sharing and searching. Meanwhile, some recent reputation damage and stalk events caused by photo leakage increase people's concern about photo privacy. Though most would agree that photo search function and privacy are both valuable, few cloud system supports both of them simultaneously. The center of such an ideal system is privacy-preserving outsourced image similarity measurement, which is extremely challenging when the cloud is untrusted and a high extra overhead is disliked. In this work, we introduce a framework POP, which enables privacy-seeking mobile device users to outsource burdensome photo sharing and searching safely to untrusted servers. Unauthorized parties, including the server, learn nothing about photos or search queries. This is achieved by our carefully designed architecture and novel non-interactive privacy-preserving protocols for image similarity computation. Our framework is compatible with the state-of-the-art image search techniques, and it requires few changes to existing cloud systems. For efficiency and good user experience, our framework allows users to define personalized private content by a simple check-box configuration and then enjoy the sharing and searching services as usual. All privacy protection modules are transparent to users. The evaluation of our prototype implementation with 31,772 real-life images shows little extra communication and computation overhead caused by our system.",
"title": ""
},
{
"docid": "ca0d3a031ee0b29c8135613787ee19c4",
"text": "As children and youth with diabetes grow up, they become increasingly responsible for controlling and monitoring their condition. We conducted a scoping review to explore the research literature on self-management interventions for children and youth with diabetes. Eleven studies met the inclusion criteria. Some of the studies reviewed combined the participant population so that children with Type 1 as well as children with Type 2 diabetes were included. The majority of the studies focused on children age 14 yr or older and provided self-management education, self-management support, or both. Parent involvement was a key component of the majority of the interventions, and the use of technology was evident in 3 studies. The findings highlight factors that occupational therapy practitioners should consider when working with pediatric diabetes teams to select self-management interventions.",
"title": ""
},
{
"docid": "990ee920895672c2b8b05bc6cf4fad3f",
"text": "The world market of e-scooter is expected to experiment an increase of 15% in Western Europe between 2015 and 2025. In order to push this growth it is needed to develop new low-cost more efficient and reliable drives with high torque to weight ratio. In this paper a new axial-flux switched reluctance motor is proposed in order to accomplish this goal. The motor is constituted by a stator sandwiched by two rotors in which the ferromagnetic parts are made of soft magnetic composites. It has a new disposition of the stator and the rotor poles and shorter flux paths Simulations have demonstrated that the proposed axial-flux switched reluctance motor drive is able to meet the requirements of an e-scooter.",
"title": ""
}
] |
scidocsrr
|
ecc14d8c9b80a0d95c1fed7affcc6a1d
|
How online social interactions influence customer information contribution behavior in online social shopping communities: A social learning theory perspective
|
[
{
"docid": "c57cbe432fdab3f415d2c923bea905ff",
"text": "Through Web-based consumer opinion platforms (e.g., epinions.com), the Internet enables customers to share their opinions on, and experiences with, goods and services with a multitude of other consumers; that is, to engage in electronic wordof-mouth (eWOM) communication. Drawing on findings from research on virtual communities and traditional word-of-mouth literature, a typology for motives of consumer online articulation is © 2004 Wiley Periodicals, Inc. and Direct Marketing Educational Foundation, Inc.",
"title": ""
}
] |
[
{
"docid": "b92252ac701b564f17aa36d411f65ecf",
"text": "Abstract Image segmentation is a primary step in image analysis used to separate the input image into meaningful regions. MRI is an advanced medical imaging technique widely used in detecting brain tumors. Segmentation of Brain MR image is a complex task. Among the many approaches developed for the segmentation of MR images, a popular method is fuzzy C-mean (FCM). In the proposed method, Artificial Bee Colony (ABC) algorithm is used to improve the efficiency of FCM on abnormal brain images.",
"title": ""
},
{
"docid": "97decda9a345d39e814e19818eebe8b8",
"text": "In this review article, we present some challenges and opportunities in Ambient Assisted Living (AAL) for disabled and elderly people addressing various state of the art and recent approaches particularly in artificial intelligence, biomedical engineering, and body sensor networking.",
"title": ""
},
{
"docid": "f0a77d7d6fbae0b701be5e8a869552b1",
"text": "The ‘RNA world’ hypothesis describes an early stage of life on Earth, prior to the evolution of coded protein synthesis, in which RNA served as both information carrier and functional catalyst1,2. Not only is there a significant body of evidence to support this hypothesis3, but the ‘ribo-organisms’ from this RNA world are likely to have achieved a significant degree of metabolic sophistication4. From the perspective of the origins of life, the path from pre-life chemistry to the RNA world probably included cycles of template-directed RNA replication, with the RNA templates assembled from prebiotically generated ribonucleotides (Fig. 1)5. RNA seems well suited for the task of replication because its components pair in a complementary fashion. One strand of nucleic acid could thereby serve as a template to direct the polymerization of its complementary strand. Nevertheless, even given abundant ribonucleotides and prebiotically generated RNA templates, significant problems are believed to stand in the way of an experimental demonstration of multiple cycles of RNA replication. For example, non-enzymatic RNA template-copying reactions generate complementary strands that contain 2ʹ,5ʹ-phosphodiester linkages randomly distributed amongst the 3ʹ,5ʹ-linkages (Fig. 2a)6 rather than the solely 3ʹ,5ʹ-linkages found in contemporary biology. This heterogeneity has been generally presumed to preclude the evolution of heritable RNA with functional properties. A second problem with the RNA replication cycle concerns the high ‘melting temperatures’ required to separate the strands of the RNA duplexes formed by template copying7. For example, 14mer RNA duplexes with a high proportion of guanine–cytosine pairs can have melting temperatures well above 90 °C. Such stability would prohibit strand separation and thereby halt progression to the next generation of replication. Yet another difficulty results from the hydrolysis and cyclization reactions of chemically activated mononucleotide and oligonucleotide substrates7, which would deactivate them for template copying. Together, these and other issues have precluded the demonstration of chemically driven RNA replication in the laboratory. Now, two new studies reported in Nature Chemistry — one from the Szostak laboratory and the other from the Sutherland group — offer potential solutions to these problems8,9. Szostak and co-workers have challenged the assumption that backbone homogeneity was a requirement for the primordial RNA replication process, and considered whether RNAs that contain significant levels of 2ʹ,5ʹ-linkages can be tolerated within known ribozymes and aptamers8. They synthesized two well-known functional RNAs — the flavin mononucleotide (FMN) aptamer and hammerhead ribozyme — containing varying amounts (10–50%) of randomly distributed 2ʹ,5ʹ-linkages. Overall, FMN aptamers and hammerhead ribozymes possessing high levels of CHEMICAL ORIGINS OF LIFE",
"title": ""
},
{
"docid": "18cf88b01ff2b20d17590d7b703a41cb",
"text": "Human age provides key demographic information. It is also considered as an important soft biometric trait for human identification or search. Compared to other pattern recognition problems (e.g., object classification, scene categorization), age estimation is much more challenging since the difference between facial images with age variations can be more subtle and the process of aging varies greatly among different individuals. In this work, we investigate deep learning techniques for age estimation based on the convolutional neural network (CNN). A new framework for age feature extraction based on the deep learning model is built. Compared to previous models based on CNN, we use feature maps obtained in different layers for our estimation work instead of using the feature obtained at the top layer. Additionally, a manifold learning algorithm is incorporated in the proposed scheme and this improves the performance significantly. Furthermore, we also evaluate different classification and regression schemes in estimating age using the deep learned aging pattern (DLA). To the best of our knowledge, this is the first time that deep learning technique is introduced and applied to solve the age estimation problem. Experimental results on two datasets show that the proposed approach is significantly better than the state-of-the-art.",
"title": ""
},
{
"docid": "614a258877ad160c977a698cdfeac67d",
"text": "Research in natural language processing has increasingly focused on normalizing Twitter messages. Currently, while different well-defined approaches have been proposed for the English language, the problem remains far from being solved for other languages, such as Malay. Thus, in this paper, we propose an approach to normalize the Malay Twitter messages based on corpus-driven analysis. An architecture for Malay Tweet normalization is presented, which comprises seven main modules: (1) enhanced tokenization, (2) In-Vocabulary (IV) detection, (3) specialized dictionary query, (4) repeated letter elimination, (5) abbreviation adjusting, (6) English word translation, and (7) de-tokenization. A parallel Tweet dataset, consisting of 9000 Malay Tweets, is used in the development and testing stages. To measure the performance of the system, an evaluation is carried out. The result is promising whereby we score 0.83 in BLEU against the baseline BLEU, which scores 0.46. To compare the accuracy of the architecture with other statistical approaches, an SMT-like normalization system is implemented, trained, and evaluated with an identical parallel dataset. The experimental results demonstrate that we achieve higher accuracy by the normalization system, which is designed based on the features of Malay Tweets, compared to the SMT-like system. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "47d278d37dfd3ab6c0b64dd94eb2de6c",
"text": "We present a novel approach for multi-object tracking which considers object detection and spacetime trajectory estimation as a coupled optimization problem. It is formulated in a hypothesis selection framework and builds upon a state-of-the-art pedestrian detector. At each time instant, it searches for the globally optimal set of spacetime trajectories which provides the best explanation for the current image and for all evidence collected so far, while satisfying the constraints that no two objects may occupy the same physical space, nor explain the same image pixels at any point in time. Successful trajectory hypotheses are fed back to guide object detection in future frames. The optimization procedure is kept efficient through incremental computation and conservative hypothesis pruning. The resulting approach can initialize automatically and track a large and varying number of persons over long periods and through complex scenes with clutter, occlusions, and large-scale background changes. Also, the global optimization framework allows our system to recover from mismatches and temporarily lost tracks. We demonstrate the feasibility of the proposed approach on several challenging video sequences.",
"title": ""
},
{
"docid": "5eab47907e673449ad73ec6cef30bc07",
"text": "Three-dimensional circuits built upon multiple layers of polyimide are required for constructing Si/SiGe monolithic microwave/mm-wave integrated circuits on low resistivity Si wafers. However, the closely spaced transmission lines are susceptible to high levels of cross-coupling, which degrades the overall circuit performance. In this paper, theoretical and experimental results on coupling of Finite Ground Coplanar (FGC) waveguides embedded in polyimide layers are presented for the first time. These results show that FGC lines have approximately 8 dB lower coupling than coupled Coplanar Waveguides. Furthermore, it is shown that the forward and backward coupling characteristics for FGC lines do not resemble the coupling characteristics of other transmission lines such as microstrip.",
"title": ""
},
{
"docid": "53e8333b3e4e9874449492852d948ea2",
"text": "In recent deep online and near-online multi-object tracking approaches, a difficulty has been to incorporate long-term appearance models to efficiently score object tracks under severe occlusion and multiple missing detections. In this paper, we propose a novel recurrent network model, the Bilinear LSTM, in order to improve the learning of long-term appearance models via a recurrent network. Based on intuitions drawn from recursive least squares, Bilinear LSTM stores building blocks of a linear predictor in its memory, which is then coupled with the input in a multiplicative manner, instead of the additive coupling in conventional LSTM approaches. Such coupling resembles an online learned classifier/regressor at each time step, which we have found to improve performances in using LSTM for appearance modeling. We also propose novel data augmentation approaches to efficiently train recurrent models that score object tracks on both appearance and motion. We train an LSTM that can score object tracks based on both appearance and motion and utilize it in a multiple hypothesis tracking framework. In experiments, we show that with our novel LSTM model, we achieved state-of-the-art performance on near-online multiple object tracking on the MOT 2016 and MOT 2017 benchmarks.",
"title": ""
},
{
"docid": "81f5c17e5b0b52bb55a27733a198be51",
"text": "This paper uses the 'lens' of integrated and sustainable waste management (ISWM) to analyse the new data set compiled on 20 cities in six continents for the UN-Habitat flagship publication Solid Waste Management in the World's Cities. The comparative analysis looks first at waste generation rates and waste composition data. A process flow diagram is prepared for each city, as a powerful tool for representing the solid waste system as a whole in a comprehensive but concise way. Benchmark indicators are presented and compared for the three key physical components/drivers: public health and collection; environment and disposal; and resource recovery--and for three governance strategies required to deliver a well-functioning ISWM system: inclusivity; financial sustainability; and sound institutions and pro-active policies. Key insights include the variety and diversity of successful models - there is no 'one size fits all'; the necessity of good, reliable data; the importance of focusing on governance as well as technology; and the need to build on the existing strengths of the city. An example of the latter is the critical role of the informal sector in the cities in many developing countries: it not only delivers recycling rates that are comparable with modern Western systems, but also saves the city authorities millions of dollars in avoided waste collection and disposal costs. This provides the opportunity for win-win solutions, so long as the related wider challenges can be addressed.",
"title": ""
},
{
"docid": "28b1374bd39b17eb8773d986c532f699",
"text": "Recently, indoor positioning systems (IPSs) have been designed to provide location information of persons and devices. The position information enables location-based protocols for user applications. Personal networks (PNs) are designed to meet the users' needs and interconnect users' devices equipped with different communications technologies in various places to form one network. Location-aware services need to be developed in PNs to offer flexible and adaptive personal services and improve the quality of lives. This paper gives a comprehensive survey of numerous IPSs, which include both commercial products and research-oriented solutions. Evaluation criteria are proposed for assessing these systems, namely security and privacy, cost, performance, robustness, complexity, user preferences, commercial availability, and limitations.We compare the existing IPSs and outline the trade-offs among these systems from the viewpoint of a user in a PN.",
"title": ""
},
{
"docid": "42ecca95c15cd1f92d6e5795f99b414a",
"text": "Personalized tag recommendation systems recommend a list of tags to a user when he is about to annotate an item. It exploits the individual preference and the characteristic of the items. Tensor factorization techniques have been applied to many applications, such as tag recommendation. Models based on Tucker Decomposition can achieve good performance but require a lot of computation power. On the other hand, models based on Canonical Decomposition can run in linear time and are more feasible for online recommendation. In this paper, we propose a novel method for personalized tag recommendation, which can be considered as a nonlinear extension of Canonical Decomposition. Different from linear tensor factorization, we exploit Gaussian radial basis function to increase the model’s capacity. The experimental results show that our proposed method outperforms the state-of-the-art methods for tag recommendation on real datasets and perform well even with a small number of features, which verifies that our models can make better use of features.",
"title": ""
},
{
"docid": "3a6c58a05427392750d15307fda4faec",
"text": "In this paper, we present the design of a low voltage bandgap reference (LVBGR) circuit for supply voltage of 1.2V which can generate an output reference voltage of 0.363V. Traditional BJT based bandgap reference circuits give very precise output reference but power and area consumed by these BJT devices is larger so for low supply bandgap reference we chose MOSFETs operating in subthreshold region based reference circuits. LVBGR circuits with less sensitivity to supply voltage and temperature is used in both analog and digital circuits like high precise comparators used in data converter, phase-locked loop, ring oscillator, memory systems, implantable biomedical product etc. In the proposed circuit subthreshold MOSFETs temperature characteristics are used to achieve temperature compensation of output voltage reference and it can work under very low supply voltage. A PMOS structure 2stage opamp which will be operating in subthreshold region is designed for the proposed LVBGR circuit whose gain is 89.6dB and phase margin is 74 °. Finally a LVBGR circuit is designed which generates output voltage reference of 0.364V given with supply voltage of 1.2 V with 10 % variation and temperature coefficient of 240ppm/ °C is obtained for output reference voltage variation with respect to temperature over a range of 0 to 100°C. The output reference voltage exhibits a variation of 230μV with a supply range of 1.08V to 1.32V at typical process corner. The proposed LVBGR circuit for 1.2V supply is designed with the Mentor Graphics Pyxis tool using 130nm technology with EldoSpice simulator. Overall current consumed by the circuit is 900nA and also the power consumed by the entire LVBGR circuit is 0.9μW and the PSRR of the LVBGR circuit is -70dB.",
"title": ""
},
{
"docid": "07525c300e39dc3de4fda88ce86159c9",
"text": "The recording of seizures is of primary interest in the evaluation of epileptic patients. Seizure is the phenomenon of rhythmicity discharge from either a local area or the whole brain and the individual behavior usually lasts from seconds to minutes. Since seizures, in general, occur infrequently and unpredictably, automatic detection of seizures during long-term electroencephalograph (EEG) recordings is highly recommended. As EEG signals are nonstationary, the conventional methods of frequency analysis are not successful for diagnostic purposes. This paper presents a method of analysis of EEG signals, which is based on time-frequency analysis. Initially, selected segments of the EEG signals are analyzed using time-frequency methods and several features are extracted for each segment, representing the energy distribution in the time-frequency plane. Then, those features are used as an input in an artificial neural network (ANN), which provides the final classification of the EEG segments concerning the existence of seizures or not. We used a publicly available dataset in order to evaluate our method and the evaluation results are very promising indicating overall accuracy from 97.72% to 100%.",
"title": ""
},
{
"docid": "dd5883895261ad581858381bec1b92eb",
"text": "PURPOSE\nTo establish the validity and reliability of a new vertical jump force test (VJFT) for the assessment of bilateral strength asymmetry in a total of 451 athletes.\n\n\nMETHODS\nThe VJFT consists of countermovement jumps with both legs simultaneously: one on a single force platform, the other on a leveled wooden platform. Jumps with the right or the left leg on the force platform were alternated. Bilateral strength asymmetry was calculated as [(stronger leg - weaker leg)/stronger leg] x 100. A positive sign indicates a stronger right leg; a negative sign indicates a stronger left leg. Studies 1 (N = 59) and 2 (N = 41) examined the correlation between the VJFT and other tests of lower-limb bilateral strength asymmetry in male athletes. In study 3, VJFT reliability was assessed in 60 male athletes. In study 4, the effect of rehabilitation on bilateral strength asymmetry was examined in seven male and female athletes 8-12 wk after unilateral knee surgery. In study 5, normative data were determined in 313 male soccer players.\n\n\nRESULTS\nSignificant correlations were found between VJFT and both the isokinetic leg extension test (r = 0.48; 95% confidence interval, 0.26-0.66) and the isometric leg press test (r = 0.83; 0.70-0.91). VJFT test-retest intraclass correlation coefficient was 0.91 (0.85-0.94), and typical error was 2.4%. The change in mean [-0.40% (-1.25 to 0.46%)] was not substantial. Rehabilitation decreased bilateral strength asymmetry (mean +/- SD) of the athletes recovering from unilateral knee surgery from 23 +/- 3 to 10 +/- 4% (P < 0.01). The range of normal bilateral strength asymmetry (2.5th to 97.5th percentiles) was -15 to 15%.\n\n\nCONCLUSIONS\nThe assessment of bilateral strength asymmetry with the VJFT is valid and reliable, and it may be useful in sports medicine.",
"title": ""
},
{
"docid": "ec18c088e0068c58410bf427528aa8e4",
"text": "Abnormal accounting accruals are unusually high around stock offers, especially high for firms whose offers subsequently attract lawsuits. Accruals tend to reverse after stock offers and are negatively related to post-offer stock returns. Reversals are more pronounced and stock returns are lower for sued firms than for those that are not sued. The incidence of lawsuits involving stock offers and settlement amounts are significantly positively related to abnormal accruals around the offer and significantly negatively related to post-offer stock returns. Our results support the view that some firms opportunistically manipulate earnings upward before stock issues rendering themselves vulnerable to litigation. r 2003 Elsevier B.V. All rights reserved. JEL classification: G14; G24; G32; K22; M41",
"title": ""
},
{
"docid": "3508e1a4a4c04127792268509c1f572d",
"text": "In this paper predictions of the Normalized Difference Vegetation Index (NDVI) data recorded by satellites over Ventspils Municipality in Courland, Latvia are discussed. NDVI is an important variable for vegetation forecasting and management of various problems, such as climate change monitoring, energy usage monitoring, managing the consumption of natural resources, agricultural productivity monitoring, drought monitoring and forest fire detection. Artificial Neural Networks (ANN) are computational models and universal approximators, which are widely used for nonlinear, non-stationary and dynamical process modeling and forecasting. In this paper Elman Recurrent Neural Networks (ERNN) are used to make one-step-ahead prediction of univariate NDVI time series.",
"title": ""
},
{
"docid": "3a011bdec6531de3f0f9718f35591e52",
"text": "Since Markowitz (1952) formulated the portfolio selection problem, many researchers have developed models aggregating simultaneously several conflicting attributes such as: the return on investment, risk and liquidity. The portfolio manager generally seeks the best combination of stocks/assets that meets his/ her investment objectives. The Goal Programming (GP) model is widely applied to finance and portfolio management. The aim of this paper is to present the different variants of the GP model that have been applied to the financial portfolio selection problem from the 1970s to nowadays. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "33e1dad6c4f163c0d69bd3f58ecf9058",
"text": "Resistive random access memory (RRAM) has gained significant attentions because of its excellent characteristics which are suitable for next-generation non-volatile memory applications. It is also very attractive to build neuromorphic computing chip based on RRAM cells due to non-volatile and analog properties. Neuromorphic computing hardware technologies using analog weight storage allow the scaling-up of the system size to complete cognitive tasks such as face classification much faster while consuming much lower energy. In this paper, RRAM technology development from material selection to device structure, from small array to full chip will be discussed in detail. Neuromorphic computing using RRAM devices is demonstrated, and speed & energy consumption are compared with Xeon Phi processor.",
"title": ""
},
{
"docid": "acc960b2fd1066efce4655da837213f4",
"text": "0957-4174/$ see front matter 2013 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2012.12.082 ⇑ Corresponding author. Tel.: +562 978 4834. E-mail addresses: goberreu@ing.uchile.cl (G. Ober (J.D. Velásquez). URL: http://wi.dii.uchile.cl/ (J.D. Velásquez). Plagiarism detection is of special interest to educational institutions, and with the proliferation of digital documents on the Web the use of computational systems for such a task has become important. While traditional methods for automatic detection of plagiarism compute the similarity measures on a document-to-document basis, this is not always possible since the potential source documents are not always available. We do text mining, exploring the use of words as a linguistic feature for analyzing a document by modeling the writing style present in it. The main goal is to discover deviations in the style, looking for segments of the document that could have been written by another person. This can be considered as a classification problem using self-based information where paragraphs with significant deviations in style are treated as outliers. This so-called intrinsic plagiarism detection approach does not need comparison against possible sources at all, and our model relies only on the use of words, so it is not language specific. We demonstrate that this feature shows promise in this area, achieving reasonable results compared to benchmark models. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
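The self-based outlier idea in the intrinsic plagiarism detection passage above can be roughly illustrated as follows: build a word-usage profile per paragraph, measure each paragraph's deviation from the whole document, and flag the strongest deviations. The L1-style distance and the z-score threshold below are assumptions for illustration only, not the paper's model.

```python
from collections import Counter

def style_outliers(paragraphs, z_thresh=2.0):
    """Flag paragraphs whose word-usage profile deviates strongly from the document's."""
    doc = Counter(w for p in paragraphs for w in p.lower().split())
    total = sum(doc.values())

    def dist(p):
        c = Counter(p.lower().split())
        n = sum(c.values()) or 1
        words = set(c) | set(doc)
        # total variation style distance between paragraph and document word frequencies
        return sum(abs(c[w] / n - doc[w] / total) for w in words)

    d = [dist(p) for p in paragraphs]
    mu = sum(d) / len(d)
    sd = (sum((x - mu) ** 2 for x in d) / len(d)) ** 0.5 or 1.0
    return [i for i, x in enumerate(d) if (x - mu) / sd > z_thresh]
```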
{
"docid": "1796e75d847bc06995e5e0861cd9ba9f",
"text": "This paper presents a two-stage method to detect license plates in real world images. To do license plate detection (LPD), an initial set of possible license plate character regions are first obtained by the first stage classifier and then passed to the second stage classifier to reject non-character regions. 36 Adaboost classifiers (each trained with one alpha-numerical character, i.e. A..Z, 0..9) serve as the first stage classifier. In the second stage, a support vector machine (SVM) trained on scale-invariant feature transform (SIFT) descriptors obtained from training sub-windows were employed. A recall rate of 0.920792 and precision rate of 0.90185 was obtained.",
"title": ""
}
] |
scidocsrr
|
67b61b956d1026a11352ab5be658f884
|
Sequence Data Mining Techniques and Applications
|
[
{
"docid": "841f2ab48d111a6b70b2a3171c155f44",
"text": "In this paper we present SPADE, a new algorithm for fast discovery of Sequential Patterns. The existing solutions to this problem make repeated database scans, and use complex hash structures which have poor locality. SPADE utilizes combinatorial properties to decompose the original problem into smaller sub-problems, that can be independently solved in main-memory using efficient lattice search techniques, and using simple join operations. All sequences are discovered in only three database scans. Experiments show that SPADE outperforms the best previous algorithm by a factor of two, and by an order of magnitude with some pre-processed data. It also has linear scalability with respect to the number of input-sequences, and a number of other database parameters. Finally, we discuss how the results of sequence mining can be applied in a real application domain.",
"title": ""
}
] |
[
{
"docid": "707828ef765512b0b5ebef27ca133504",
"text": "In the mammalian myocardium, potassium (K(+)) channels control resting potentials, action potential waveforms, automaticity, and refractory periods and, in most cardiac cells, multiple types of K(+) channels that subserve these functions are expressed. Molecular cloning has revealed the presence of a large number of K(+) channel pore forming (alpha) and accessory (beta) subunits in the heart, and considerable progress has been made recently in defining the relationships between expressed K(+) channel subunits and functional cardiac K(+) channels. To date, more than 20 mouse models with altered K(+) channel expression/functioning have been generated using dominant-negative transgenic and targeted gene deletion approaches. In several instances, the genetic manipulation of K(+) channel subunit expression has revealed the role of specific K(+) channel subunit subfamilies or individual K(+) channel subunit genes in the generation of myocardial K(+) channels. In other cases, however, the phenotypic consequences have been unexpected. This review summarizes what has been learned from the in situ genetic manipulation of cardiac K(+) channel functioning in the mouse, discusses the limitations of the models developed to date, and explores the likely directions of future research.",
"title": ""
},
{
"docid": "780cd36f1936c8b5af1ad29882094cf5",
"text": "We propose a unified coding framework for distributed computing with straggling servers, by introducing a tradeoff between \"latency of computation\" and \"load of communication\" for some linear computation tasks. We show that the coded scheme of [1]-[3] that repeats the intermediate computations to create coded multicasting opportunities to reduce communication load, and the coded scheme of [4] that generates redundant intermediate computations to combat against straggling servers can be viewed as special instances of the proposed framework, by considering two extremes of this tradeoff: minimizing either the load of communication or the latency of computation individually. Furthermore, the latency-load tradeoff achieved by the proposed coded framework allows to systematically operate at any point on that tradeoff to perform distributed computing tasks. We also prove an information-theoretic lower bound on the latency- load tradeoff, which is shown to be within a constant multiplicative gap from the achieved tradeoff at the two end points.",
"title": ""
},
{
"docid": "e146526fbd2561d1dac33ab82470efae",
"text": "Using daily returns of the S&P500 stocks from 2001 to 2011, we perform a backtesting study of the portfolio optimization strategy based on the extreme risk index (ERI). This method uses multivariate extreme value theory to minimize the probability of large portfolio losses. With more than 400 stocks to choose from, our study applies extreme value techniques in portfolio management on a large scale. We compare the performance of this strategy with the Markowitz approach and investigate how the ERI method can be applied most effectively. Our results show that the annualized return of the ERI strategy is particularly high for assets with heavy tails. The comparison also includes maximal drawdown, transaction costs, portfolio concentration, and asset diversity in the portfolio. In addition to that we study the impact of an alternative tail index estimator.",
"title": ""
},
{
"docid": "16e1197633329b615bd4a07b6c9c5e27",
"text": "This paper presents an analog front-end (AFE) IC for mutual capacitance touch sensing with 224 sensor channels in 0.18 μm CMOS with 3.3 V drive voltage. A 32-in touch sensing system and a 70-in one having 37 dB SNR for 1 mm diameter stylus at 240 Hz reporting rate are realized with the AFE. The AFE adopts a parallel drive method to achieve the large format and the high SNR simultaneously. With the parallel drive method, the measured SNRs of the AFE stay almost constant at a higher level regardless of the number of sensor channels, which was impossible by conventional sequential drive methods. A novel differential sensing scheme which enhances the immunity against the noise from a display device is also realized in the AFE. While the coupled LCD is on and off, the differences between the measured SNRs are less than 2 dB.",
"title": ""
},
{
"docid": "76d4ed8e7692ca88c6b5a70c9954c0bd",
"text": "Custom-tailored products are meant by the products having various sizes and shapes to meet the customer’s different tastes or needs. Thus fabrication of custom-tailored products inherently involves inefficiency. Custom-tailoring shoes are not an exception because corresponding shoe-lasts must be custom-ordered. It would be nice if many template shoe-lasts had been cast in advance, the most similar template was identified automatically from the custom-ordered shoe-last, and only the different portions in the template shoe-last could be machined. To enable this idea, the first step is to derive the geometric models of template shoe-lasts to be cast. Template shoe-lasts can be derived by grouping all the various existing shoe-lasts into manageable number of groups and by uniting all the shoe-lasts in each group such that each template shoe-last for each group barely encloses all the shoe-lasts in the group. For grouping similar shoe-lasts into respective groups, similarity between shoe-lasts should be quantized. Similarity comparison starts with the determination of the closest pose between two shapes in consideration. The closest pose is derived by comparing the ray distances while one shape is virtually rotated with respect to the other. Shape similarity value and overall similarity value calculated from ray distances are also used for grouping. A prototype system based on the proposed methodology has been implemented and applied to grouping of the shoe-lasts of various shapes and sizes and deriving template shoe-lasts. q 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0a81730588c23c4ed153dab18791bdc2",
"text": "Deep neural networks (DNNs) have shown an inherent vulnerability to adversarial examples which are maliciously crafted on real examples by attackers, aiming at making target DNNs misbehave. The threats of adversarial examples are widely existed in image, voice, speech, and text recognition and classification. Inspired by the previous work, researches on adversarial attacks and defenses in text domain develop rapidly. In order to make people have a general understanding about the field, this article presents a comprehensive review on adversarial examples in text, including attack and defense approaches. We analyze the advantages and shortcomings of recent adversarial examples generation methods and elaborate the efficiency and limitations on the countermeasures. Finally, we discuss the challenges in adversarial texts and provide a research direction of this aspect.",
"title": ""
},
{
"docid": "b752e7513d4acbd0a0cd8991022f093e",
"text": "One common strategy for dealing with large, complex models is to partition them into pieces that are easier to handle. While decomposition into convex components results in pieces that are easy to process, such decompositions can be costly to construct and often result in representations with an unmanageable number of components. In this paper, we propose an alternative partitioning strategy that decomposes a given polyhedron into “approximately convex” pieces. For many applications, the approximately convex components of this decomposition provide similar benefits as convex components, while the resulting decomposition is both significantly smaller and can be computed more efficiently. Indeed, for many models, an approximate convex decomposition can more accurately represent the important structural features of the model by providing a mechanism for ignoring insignificant features, such as wrinkles and other surface texture. We propose a simple algorithm to compute approximate convex decompositions of polyhedra of arbitrary genus to within a user specified tolerance. This algorithm measures the significance of the model’s features and resolves them in order of priority. As a by product, it also produces an elegant hierarchical representation of the model. We illustrate its utility in constructing an approximate skeleton of the model that results in significant performance gains over skeletons based on an exact convex decomposition. This research supported in part by NSF CAREER Award CCR-9624315, NSF Grants IIS-9619850, ACI-9872126, EIA-9975018, EIA-0103742, EIA-9805823, ACI-0113971, CCR-0113974, EIA-9810937, EIA-0079874, and by the Texas Higher Education Coordinating Board grant ARP-036327-017. Figure 1: Each component is approximately convex (concavity less than 10 by our measure). There are a total of 17 components.",
"title": ""
},
{
"docid": "2c3c227a8fd9f2a96e61549b962d3741",
"text": "Developmental dyslexia is an unexplained inability to acquire accurate or fluent reading that affects approximately 5-17% of children. Dyslexia is associated with structural and functional alterations in various brain regions that support reading. Neuroimaging studies in infants and pre-reading children suggest that these alterations predate reading instruction and reading failure, supporting the hypothesis that variant function in dyslexia susceptibility genes lead to atypical neural migration and/or axonal growth during early, most likely in utero, brain development. Yet, dyslexia is typically not diagnosed until a child has failed to learn to read as expected (usually in second grade or later). There is emerging evidence that neuroimaging measures, when combined with key behavioral measures, can enhance the accuracy of identification of dyslexia risk in pre-reading children but its sensitivity, specificity, and cost-efficiency is still unclear. Early identification of dyslexia risk carries important implications for dyslexia remediation and the amelioration of the psychosocial consequences commonly associated with reading failure.",
"title": ""
},
{
"docid": "843aa1e751391fb740571c08de46d2ca",
"text": "Antineutrophil cytoplasm antibody (ANCA)-associated vasculitides are small-vessel vasculitides that include granulomatosis with polyangiitis (formerly Wegener's granulomatosis), microscopic polyangiitis, and eosinophilic granulomatosis with polyangiitis (Churg-Strauss syndrome). Renal-limited ANCA-associated vasculitides can be considered the fourth entity. Despite their rarity and still unknown cause(s), research pertaining to ANCA-associated vasculitides has been very active over the past decades. The pathogenic role of antimyeloperoxidase ANCA (MPO-ANCA) has been supported using several animal models, but that of antiproteinase 3 ANCA (PR3-ANCA) has not been as strongly demonstrated. Moreover, some MPO-ANCA subsets, which are directed against a few specific MPO epitopes, have recently been found to be better associated with disease activity, but a different method than the one presently used in routine detection is required to detect them. B cells possibly play a major role in the pathogenesis because they produce ANCAs, as well as neutrophil abnormalities and imbalances in different T-cell subtypes [T helper (Th)1, Th2, Th17, regulatory cluster of differentiation (CD)4+ CD25+ forkhead box P3 (FoxP3)+ T cells] and/or cytokine-chemokine networks. The alternative complement pathway is also involved, and its blockade has been shown to prevent renal disease in an MPO-ANCA murine model. Other recent studies suggested strongest genetic associations by ANCA type rather than by clinical diagnosis. The induction treatment for severe granulomatosis with polyangiitis and microscopic polyangiitis is relatively well codified but does not (yet) really differ by precise diagnosis or ANCA type. It comprises glucocorticoids combined with another immunosuppressant, cyclophosphamide or rituximab. The choice between the two immunosuppressants must consider the comorbidities, past exposure to cyclophosphamide for relapsers, plans for pregnancy, and also the cost of rituximab. Once remission is achieved, maintenance strategy following cyclophosphamide-based induction relies on less toxic agents such as azathioprine or methotrexate. The optimal maintenance strategy following rituximab-based induction therapy remains to be determined. Preliminary results on rituximab for maintenance therapy appear promising. Efforts are still under way to determine the optimal duration of maintenance therapy, ideally tailored according to the characteristics of each patient and the previous treatment received.",
"title": ""
},
{
"docid": "09cc8bd6fec4123a174f78586ef587df",
"text": "Cloud computing technology is garnering success and wisdom-like stories of savings, ease of use, and increased flexibility in controlling how resources are used at any given time to deliver computing capability. This paper develops a preliminary decision framework to assist managers who are determining which cloud solution matches their specific requirements and evaluating the numerous commercial claims (in many cases unsubstantiated) of a cloud's value. This decision framework and research helps managers allocate investments and assess cloud alternatives that now compete with in-house data centers that previously stored, accessed, and processed data or with another company's (outsourced) data center resources. The hypothetically newly captured corporate value (from cloud) is that resources are no longer idle most of the time, and are now much more fully utilized (with lower unit costs). This reduces high ownership and support costs, improves capital leverage, and delivers increased flexibility in the use of resources.",
"title": ""
},
{
"docid": "4aee0c91e48b9a34be4591d36103c622",
"text": "We construct a polyhedron that is topologically convex (i.e., has the graph of a convex polyhedron) yet has no vertex unfolding: no matter how we cut along the edges and keep faces attached at vertices to form a connected (hinged) surface, the surface necessarily unfolds with overlap.",
"title": ""
},
{
"docid": "be09a9be6ef80f694c34546767300b41",
"text": "Nipple-sparing mastectomy (NSM) is increasingly popular as a procedure for the treatment of breast cancer and as a prophylactic procedure for those at high risk of developing the disease. However, it remains a controversial option due to questions regarding its oncological safety and concerns regarding locoregional recurrence. This systematic review with a pooled analysis examines the current literature regarding NSM, including locoregional recurrence and complication rates. Systematic electronic searches were conducted using the PubMed database and the Ovid database for studies reporting the indications for NSM and the subsequent outcomes. Studies between January 1970 and January 2015 (inclusive) were analysed if they met the inclusion criteria. Pooled descriptive statistics were performed. Seventy-three studies that met the inclusion criteria were included in the analysis, yielding 12,358 procedures. After a mean follow up of 38 months (range, 7.4-156 months), the overall pooled locoregional recurrence rate was 2.38%, the overall complication rate was 22.3%, and the overall incidence of nipple necrosis, either partial or total, was 5.9%. Significant heterogeneity was found among the published studies and patient selection was affected by tumour characteristics. We concluded that NSM appears to be an oncologically safe option for appropriately selected patients, with low rates of locoregional recurrence. For NSM to be performed, tumours should be peripherally located, smaller than 5 cm in diameter, located more than 2 cm away from the nipple margin, and human epidermal growth factor 2-negative. A separate histopathological examination of the subareolar tissue and exclusion of malignancy at this site is essential for safe oncological practice. Long-term follow-up studies and prospective cohort studies are required in order to determine the best reconstructive methods.",
"title": ""
},
{
"docid": "2d5a8949119d7881a97693867a009917",
"text": "Labeling a histopathology image as having cancerous regions or not is a critical task in cancer diagnosis; it is also clinically important to segment the cancer tissues and cluster them into various classes. Existing supervised approaches for image classification and segmentation require detailed manual annotations for the cancer pixels, which are time-consuming to obtain. In this paper, we propose a new learning method, multiple clustered instance learning (MCIL) (along the line of weakly supervised learning) for histopathology image segmentation. The proposed MCIL method simultaneously performs image-level classification (cancer vs. non-cancer image), medical image segmentation (cancer vs. non-cancer tissue), and patch-level clustering (different classes). We embed the clustering concept into the multiple instance learning (MIL) setting and derive a principled solution to performing the above three tasks in an integrated framework. In addition, we introduce contextual constraints as a prior for MCIL, which further reduces the ambiguity in MIL. Experimental results on histopathology colon cancer images and cytology images demonstrate the great advantage of MCIL over the competing methods.",
"title": ""
},
{
"docid": "8ad1213f0b85f57741dc80e57d83a24d",
"text": "Recently, many neural network models have been applied to Chinese word segmentation. However, such models focus more on collecting local information while long distance dependencies are not well learned. To integrate local features with long distance dependencies, we propose a dependency-based gated recursive neural network. Local features are first collected by bi-directional long short term memory network, then combined and refined to long distance dependencies via gated recursive neural network. Experimental results show that our model is a competitive model for Chinese word segmentation.",
"title": ""
},
{
"docid": "67e930a26ec2dc18735f688c1643a69f",
"text": "I design a small object detection network, which is simplified from YOLO(You Only Look Once[15]) network. YOLO is a fast and elegant network that can extract meta features, predict bounding boxes and assign scores to bounding boxes. Compared with RCNN, it doesn’t have complex pipline, which is easier for me to implement. Start from a ImageNet pretrained model, I train my YOLO on PASCAL VOC2007 training dataset. And validate my YOLO on PASCAL VOC2007 validation dataset. Finally, I evaluate my YOLO on an artwork dataset(Picasso dataset). With the best parameters, I got 40% precision and 35% recall.",
"title": ""
},
{
"docid": "099947c3bf9595d98f01398727fa413e",
"text": "The RoboCup Middle Size League competition is a standard real-world test bed for autonomous multi-robot control, robot vision and other relative research subjects. In the past decade, omnidirectional vision system has become one of the most important sensors for the RoboCup soccer robots, for it can provide a 360° view of the robot's surrounding environment in a single image. The robot can use it for tracking and self-localization which very important for robot's control, strategy, and coordination. This paper will discuss the vision system to detect ball, goals, and calculate the angle and real distance from those objects. Based on the research that has been done, the system can detect the ball and the goal, and calculate the angle and the actual distance with a maximum error distance is 5%.",
"title": ""
},
{
"docid": "7b27d8b8f05833888b9edacf9ace0a18",
"text": "This paper reports results from a study on the adoption of an information visualization system by administrative data analysts. Despite the fact that the system was neither fully integrated with their current software tools nor with their existing data analysis practices, analysts identified a number of key benefits that visualization systems provide to their work. These benefits for the most part occurred when analysts went beyond their habitual and well-mastered data analysis routines and engaged in creative discovery processes. We analyze the conditions under which these benefits arose, to inform the design of visualization systems that can better assist the work of administrative data analysts.",
"title": ""
},
{
"docid": "bfa6e76830bc1dfcbec473f912797e0e",
"text": "We present OpenFace, our new open-source face recognition system that approaches state-of-the-art accuracy. Integrating OpenFace with inter-frame tracking, we build RTFace, a mechanism for denaturing video streams that selectively blurs faces according to specified policies at full frame rates. This enables privacy management for live video analytics while providing a secure approach for handling retrospective policy exceptions. Finally, we present a scalable, privacy-aware architecture for large camera networks using RTFace.",
"title": ""
},
{
"docid": "70cad4982e42d44eec890faf6ddc5c75",
"text": "Both translation arrest and proteasome stress associated with accumulation of ubiquitin-conjugated protein aggregates were considered as a cause of delayed neuronal death after transient global brain ischemia; however, exact mechanisms as well as possible relationships are not fully understood. The aim of this study was to compare the effect of chemical ischemia and proteasome stress on cellular stress responses and viability of neuroblastoma SH-SY5Y and glioblastoma T98G cells. Chemical ischemia was induced by transient treatment of the cells with sodium azide in combination with 2-deoxyglucose. Proteasome stress was induced by treatment of the cells with bortezomib. Treatment of SH-SY5Y cells with sodium azide/2-deoxyglucose for 15 min was associated with cell death observed 24 h after treatment, while glioblastoma T98G cells were resistant to the same treatment. Treatment of both SH-SY5Y and T98G cells with bortezomib was associated with cell death, accumulation of ubiquitin-conjugated proteins, and increased expression of Hsp70. These typical cellular responses to proteasome stress, observed also after transient global brain ischemia, were not observed after chemical ischemia. Finally, chemical ischemia, but not proteasome stress, was in SH-SY5Y cells associated with increased phosphorylation of eIF2α, another typical cellular response triggered after transient global brain ischemia. Our results showed that short chemical ischemia of SH-SY5Y cells is not sufficient to induce both proteasome stress associated with accumulation of ubiquitin-conjugated proteins and stress response at the level of heat shock proteins despite induction of cell death and eIF2α phosphorylation.",
"title": ""
},
{
"docid": "43aae415dc32b28f49c941ad58616769",
"text": "The telemedicine intervention in chronic disease management promises to involve patients in their own care, provides continuous monitoring by their healthcare providers, identifies early symptoms, and responds promptly to exacerbations in their illnesses. This review set out to establish the evidence from the available literature on the impact of telemedicine for the management of three chronic diseases: congestive heart failure, stroke, and chronic obstructive pulmonary disease. By design, the review focuses on a limited set of representative chronic diseases because of their current and increasing importance relative to their prevalence, associated morbidity, mortality, and cost. Furthermore, these three diseases are amenable to timely interventions and secondary prevention through telemonitoring. The preponderance of evidence from studies using rigorous research methods points to beneficial results from telemonitoring in its various manifestations, albeit with a few exceptions. Generally, the benefits include reductions in use of service: hospital admissions/re-admissions, length of hospital stay, and emergency department visits typically declined. It is important that there often were reductions in mortality. Few studies reported neutral or mixed findings.",
"title": ""
}
] |
scidocsrr
|
81f68ffb6fe836778fdf8c09540067e8
|
Personality Measurement and Faking: An Integrative Framework Aslı Göncü Çankaya Üniversitesi
|
[
{
"docid": "ada320bb2747d539ff6322bbd46bd9f0",
"text": "Real job applicants completed a 5-factor model personality measure as part of the job application process. They were rejected; 6 months later they (n = 5,266) reapplied for the same job and completed the same personality measure. Results indicated that 5.2% or fewer improved their scores on any scale on the 2nd occasion; moreover, scale scores were as likely to change in the negative direction as the positive. Only 3 applicants changed scores on all 5 scales beyond a 95% confidence threshold. Construct validity of the personality scales remained intact across the 2 administrations, and the same structural model provided an acceptable fit to the scale score matrix on both occasions. For the small number of applicants whose scores changed beyond the standard error of measurement, the authors found the changes were systematic and predictable using measures of social skill, social desirability, and integrity. Results suggest that faking on personality measures is not a significant problem in real-world selection settings.",
"title": ""
}
] |
[
{
"docid": "2b4b639973f54bdd7b987d5bc9bb3978",
"text": "Computational stereo is one of the classical problems in computer vision. Numerous algorithms and solutions have been reported in recent years focusing on developing methods for computing similarity, aggregating it to obtain spatial support and finally optimizing an energy function to find the final disparity. In this paper, we focus on the feature extraction component of stereo matching architecture and we show standard CNNs operation can be used to improve the quality of the features used to find point correspondences. Furthermore, we propose a simple space aggregation that hugely simplifies the correlation learning problem. Our results on benchmark data are compelling and show promising potential even without refining the solution.",
"title": ""
},
{
"docid": "76d4ed8e7692ca88c6b5a70c9954c0bd",
"text": "Custom-tailored products are meant by the products having various sizes and shapes to meet the customer’s different tastes or needs. Thus fabrication of custom-tailored products inherently involves inefficiency. Custom-tailoring shoes are not an exception because corresponding shoe-lasts must be custom-ordered. It would be nice if many template shoe-lasts had been cast in advance, the most similar template was identified automatically from the custom-ordered shoe-last, and only the different portions in the template shoe-last could be machined. To enable this idea, the first step is to derive the geometric models of template shoe-lasts to be cast. Template shoe-lasts can be derived by grouping all the various existing shoe-lasts into manageable number of groups and by uniting all the shoe-lasts in each group such that each template shoe-last for each group barely encloses all the shoe-lasts in the group. For grouping similar shoe-lasts into respective groups, similarity between shoe-lasts should be quantized. Similarity comparison starts with the determination of the closest pose between two shapes in consideration. The closest pose is derived by comparing the ray distances while one shape is virtually rotated with respect to the other. Shape similarity value and overall similarity value calculated from ray distances are also used for grouping. A prototype system based on the proposed methodology has been implemented and applied to grouping of the shoe-lasts of various shapes and sizes and deriving template shoe-lasts. q 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6afcc3c2e0c67823348cf89a0dfec9db",
"text": "BACKGROUND\nThe consumption of dietary protein is important for resistance-trained individuals. It has been posited that intakes of 1.4 to 2.0 g/kg/day are needed for physically active individuals. Thus, the purpose of this investigation was to determine the effects of a very high protein diet (4.4 g/kg/d) on body composition in resistance-trained men and women.\n\n\nMETHODS\nThirty healthy resistance-trained individuals participated in this study (mean ± SD; age: 24.1 ± 5.6 yr; height: 171.4 ± 8.8 cm; weight: 73.3 ± 11.5 kg). Subjects were randomly assigned to one of the following groups: Control (CON) or high protein (HP). The CON group was instructed to maintain the same training and dietary habits over the course of the 8 week study. The HP group was instructed to consume 4.4 grams of protein per kg body weight daily. They were also instructed to maintain the same training and dietary habits (e.g. maintain the same fat and carbohydrate intake). Body composition (Bod Pod®), training volume (i.e. volume load), and food intake were determined at baseline and over the 8 week treatment period.\n\n\nRESULTS\nThe HP group consumed significantly more protein and calories pre vs post (p < 0.05). Furthermore, the HP group consumed significantly more protein and calories than the CON (p < 0.05). The HP group consumed on average 307 ± 69 grams of protein compared to 138 ± 42 in the CON. When expressed per unit body weight, the HP group consumed 4.4 ± 0.8 g/kg/d of protein versus 1.8 ± 0.4 g/kg/d in the CON. There were no changes in training volume for either group. Moreover, there were no significant changes over time or between groups for body weight, fat mass, fat free mass, or percent body fat.\n\n\nCONCLUSIONS\nConsuming 5.5 times the recommended daily allowance of protein has no effect on body composition in resistance-trained individuals who otherwise maintain the same training regimen. This is the first interventional study to demonstrate that consuming a hypercaloric high protein diet does not result in an increase in body fat.",
"title": ""
},
{
"docid": "bcae6eb2ad3a379f889ec9fea12d203b",
"text": "Within the last few decades inkjet printing has grown into a mature noncontact patterning method, since it can produce large-area patterns with high resolution at relatively high speeds while using only small amounts of functional materials. The main fields of interest where inkjet printing can be applied include the manufacturing of radiofrequency identification (RFID) tags, organic thin-film transistors (OTFTs), and electrochromic devices (ECDs), and are focused on the future of plastic electronics. In view of these applications on polymer foils, micrometersized conductive features on flexible substrates are essential. To fabricate conductive features onto polymer substrates, solutionprocessable materials are often used. The most frequently used are dispersions of silver nanoparticles in an organic solvent. Inks of silver nanoparticle dispersions are relatively easy to prepare and, moreover, silver has the lowest resistivity of all metals (1.59mV cm). After printing and evaporation of the solvent, the particles require a thermal-processing step to render the features conductive by removing the organic binder that is present around the nanoparticles. In nonpolar solvents, long alkyl chains with a polar head, like thiols or carboxylic acids, are usually used to stabilize the nanoparticles. Steric stabilization of these particles in nonpolar solvents substantially screens van der Waals attractions and introduces steep steric repulsion between the particles at contact, which avoids agglomeration. In addition, organic binders are often added to the ink to assure not only mechanical integrity and adhesion to the substrate, but also to promote the printability of the ink. Nanoparticles with a diameter below 50 nmhave a significantly reduced sintering temperature, typically between 160 and 300 8C, which is well below the melting temperature of the bulk material (Tm1⁄4 963 8C). Despite these low sintering temperatures conventional heating methods are still not compatible with common polymer foils, such as polycarbonate (PC) and polyethylene terephthalate (PET), due to their low glass-transition temperatures (Tg). In fact, only the expensive high-performance polymers, like polytetrafluoroethylene (PTFE), poly(ether ether ketone) (PEEK), and polyimide (PI) can be used at these temperatures. This represents, however, a significant drawback for the implementation in a large-area production of plastic electronics, being unfavorable in terms of costs. Furthermore, the long sintering time of 60min or more that is generally required to create conductive features also obstructs industrial implementation. Therefore, other techniques have to be used in order to facilitate fast and selective heating of materials. One selective technique for nanoparticle sintering that has been described in literature is based on an argon-ion laser beam that follows the as-printed feature and selectively sinters the central region. Features with a line width smaller than 10mm have been created with this technique. However, the large overall thermal energy impact together with the low writing speed of 0.2mm s 1 of the translational stage are limiting factors. A faster alternative to selectively heat silver nanoparticles is to use microwave radiation. Ceramics and other dielectric materials can be heated by microwaves due to dielectric losses that are caused by dipole polarization. 
Under ambient conditions, however, metals behave as reflectors for microwave radiation, because of their small skin depth, which is defined as the distance at which the incident power is reduced to half of its initial value. The small skin depth results from the high conductance s and the high dielectric loss factor e00 together with a small capacitance. When instead of bulk material, the metal consists of particles and/or is heated to at least 400 8C, the materials absorbs microwave radiation to a greater extent. It is believed that the conductive particle interaction with microwave radiation, i.e., inductive coupling, is mainly based on Maxwell–Wagner polarization, which results from the accumulation of charge at the materials interfaces, electric conduction, and eddy currents. However, the main reasons for successful heating of metallic particles through microwave radiation are not yet fully understood. In contrast to the relatively strongmicrowave absorption by the conductive particles, the polarization of dipoles in thermoplastic polymers below the Tg is limited, which makes the polymer foil’s skin depth almost infinite, hence transparent, to microwave radiation. Therefore, only the conductive particles absorb the microwaves and can be sintered selectively. Recently, it has been shown that it is possible to create conductive printed features with microwave radiation within 3–4min. The resulting conductivity, however, is only approximately 5% of the bulk silver value. In this contribution, we present a study on antenna-supported microwave sintering of conducted features on polymer foils. We",
"title": ""
},
{
"docid": "e66fb8ed9e26b058a419d34d9c015a4c",
"text": "Children and adolescents now communicate online to form and/or maintain relationships with friends, family, and strangers. Relationships in \"real life\" are important for children's and adolescents' psychosocial development; however, they can be difficult for those who experience feelings of loneliness and/or social anxiety. The aim of this study was to investigate differences in usage of online communication patterns between children and adolescents with and without self-reported loneliness and social anxiety. Six hundred twenty-six students ages 10 to 16 years completed a survey on the amount of time they spent communicating online, the topics they discussed, the partners they engaged with, and their purposes for communicating over the Internet. Participants were administered a shortened version of the UCLA Loneliness Scale and an abbreviated subscale of the Social Anxiety Scale for Adolescents (SAS-A). Additionally, age and gender differences in usage of the online communication patterns were examined across the entire sample. Findings revealed that children and adolescents who self-reported being lonely communicated online significantly more frequently about personal and intimate topics than did those who did not self-report being lonely. The former were motivated to use online communication significantly more frequently to compensate for their weaker social skills to meet new people. Results suggest that Internet usage allows them to fulfill critical needs of social interactions, self-disclosure, and identity exploration. Future research, however, should explore whether or not the benefits derived from online communication may also facilitate lonely children's and adolescents' offline social relationships.",
"title": ""
},
{
"docid": "fe48a551dfbe397b7bcf52e534dfcf00",
"text": "This meta-analysis of 12 dependent variables from 9 quantitative studies comparing music to no-music conditions during treatment of children and adolescents with autism resulted in an overall effect size of d =.77 and a mean weighted correlation of r =.36 (p =.00). Since the confidence interval did not include 0, results were considered to be significant. All effects were in a positive direction, indicating benefits of the use of music in intervention. The homogeneity Q value was not significant (p =.83); therefore, results of included studies are considered to be homogeneous and explained by the overall effect size. The significant effect size, combined with the homogeneity of the studies, leads to the conclusion that all music intervention, regardless of purpose or implementation, has been effective for children and adolescents with autism. Included studies are described in terms of type of dependent variables measured; theoretical approach; number of subjects in treatment sessions; participation in and use, selection, and presentation of music; researcher discipline; published or unpublished source; and subject age. Clinical implications as well as recommendations for future research are discussed.",
"title": ""
},
{
"docid": "9798859ddb2d29fa461dab938c5183bb",
"text": "The emergence of the extended manufacturing enterprise, a globally dispersed collection of strategically aligned organizations, has brought new attention to how organizations coordinate the flow of information and materials across their w supply chains. This paper explores and develops the concept of enterprise logistics Greis, N.P., Kasarda, J.D., 1997. Ž . x Enterprise logistics in the information age. California Management Review 39 3 , 55–78 as a tool for integrating the logistics activities both within and between the strategically aligned organizations of the extended enterprise. Specifically, this paper examines the fit between an organization’s enterprise logistics integration capabilities and its supply chain structure. Using a configurations approach, we test whether globally dispersed network organizations that adopt enterprise logistics practices are able to achieve higher levels of organizational performance. Results indicate that enterprise logistics is a necessary tool for the coordination of supply chain operations that are geographically dispersed around the world. However, for a pure network structure, a high level of enterprise logistics integration alone does not guarantee improved organizational performance. The paper ends with a discussion of managerial implications and directions for future research. q 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "111b5bfb34a76b0ea78a0fd58311d31f",
"text": "Wireless micro sensor networks have been identified as one of the most important technologies for the 21st century. This paper traces the history of research in sensor networks over the past three decades, including two important programs of the Defense Advanced Research Projects Agency (DARPA) spanning this period: the Distributed Sensor Networks (DSN) and the Sensor Information Technology (SensIT) programs. Technology trends that impact the development of sensor networks are reviewed and new applications such as infrastructure security, habitat monitoring, and traffic control are presented. Technical challenges in sensor network development include network discovery, control and routing, collaborative signal and information processing, tasking and querying, and security. The paper concludes by presenting some recent research results in sensor network algorithms, including localized algorithms and directed diffusion, distributed tracking in wireless ad hoc networks, and distributed classification using local agents. Keywords— Collaborative signal processing, micro sensors, net-work routing and control, querying and tasking, sensor networks, tracking and classification, wireless networks.",
"title": ""
},
{
"docid": "565f815ef0c1dd5107f053ad39dade20",
"text": "Intensity inhomogeneity often occurs in real-world images, which presents a considerable challenge in image segmentation. The most widely used image segmentation algorithms are region-based and typically rely on the homogeneity of the image intensities in the regions of interest, which often fail to provide accurate segmentation results due to the intensity inhomogeneity. This paper proposes a novel region-based method for image segmentation, which is able to deal with intensity inhomogeneities in the segmentation. First, based on the model of images with intensity inhomogeneities, we derive a local intensity clustering property of the image intensities, and define a local clustering criterion function for the image intensities in a neighborhood of each point. This local clustering criterion function is then integrated with respect to the neighborhood center to give a global criterion of image segmentation. In a level set formulation, this criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, by minimizing this energy, our method is able to simultaneously segment the image and estimate the bias field, and the estimated bias field can be used for intensity inhomogeneity correction (or bias correction). Our method has been validated on synthetic images and real images of various modalities, with desirable performance in the presence of intensity inhomogeneities. Experiments show that our method is more robust to initialization, faster and more accurate than the well-known piecewise smooth model. As an application, our method has been used for segmentation and bias correction of magnetic resonance (MR) images with promising results.",
"title": ""
},
{
"docid": "1b777ff8e7c30c23e7cc827ec3aee0bc",
"text": "The task of 2-D articulated human pose estimation in natural images is extremely challenging due to the high level of variation in human appearance. These variations arise from different clothing, anatomy, imaging conditions and the large number of poses it is possible for a human body to take. Recent work has shown state-of-the-art results by partitioning the pose space and using strong nonlinear classifiers such that the pose dependence and multi-modal nature of body part appearance can be captured. We propose to extend these methods to handle much larger quantities of training data, an order of magnitude larger than current datasets, and show how to utilize Amazon Mechanical Turk and a latent annotation update scheme to achieve high quality annotations at low cost. We demonstrate a significant increase in pose estimation accuracy, while simultaneously reducing computational expense by a factor of 10, and contribute a dataset of 10,000 highly articulated poses.",
"title": ""
},
{
"docid": "8f0276f7a902fa02b6236dfc76b882d2",
"text": "Support Vector Machines (SVMs) have successfully shown efficiencies in many areas such as text categorization. Although recommendation systems share many similarities with text categorization, the performance of SVMs in recommendation systems is not acceptable due to the sparsity of the user-item matrix. In this paper, we propose a heuristic method to improve the predictive accuracy of SVMs by repeatedly correcting the missing values in the user-item matrix. The performance comparison to other algorithms has been conducted. The experimental studies show that the accurate rates of our heuristic method are the highest.",
"title": ""
},
{
"docid": "c71a8c9163d6bf294a5224db1ff5c6f5",
"text": "BACKGROUND\nOsteosarcoma is the second most common primary tumor of the skeletal system and the most common primary bone tumor. Usually occurring at the metaphysis of long bones, osteosarcomas are highly aggressive lesions that comprise osteoid-producing spindle cells. Craniofacial osteosarcomas comprise <8% and are believed to be less aggressive and lower grade. Primary osteosarcomas of the skull and skull base comprise <2% of all skull tumors. Osteosarcomas originating from the clivus are rare. We present a case of a primar, high-grade clival osteosarcoma.\n\n\nCASE DESCRIPTION\nA 29-year-old man presented to our institution with a progressively worsening right frontal headache for 3 weeks. There were no sensory or cranial nerve deficits. Computed tomography revealed a destructive mass involving the clivus with extension into the left sphenoid sinus. Magnetic resonance imaging revealed a homogenously enhancing lesion measuring 2.7 × 2.5 × 3.2 cm. The patient underwent endonasal transphenoidal surgery for gross total resection. The histopathologic analysis revealed proliferation of malignant-appearing spindled and epithelioid cells with associated osteoclast-like giant cells and a small area of osteoid production. The analysis was consistent with high-grade osteosarcoma. The patient did well and was discharged on postoperative day 2. He was referred for adjuvant radiation therapy and chemotherapy. Two-year follow-up showed postoperative changes and clival expansion caused by packing material.\n\n\nCONCLUSIONS\nOsteosarcoma is a highly malignant neoplasm. These lesions are usually found in the extremities; however, they may rarely present in the craniofacial region. Clival osteosarcomas are relatively infrequent. We present a case of a primary clival osteosarcoma with high-grade pathology.",
"title": ""
},
{
"docid": "9c534d53d6c52a1559a401b8d2fc9bac",
"text": "The inconsistency between textual features and visual contents can cause poor image search results. To solve this problem, click features, which are more reliable than textual information in justifying the relevance between a query and clicked images, are adopted in image ranking model. However, the existing ranking model cannot integrate visual features, which are efficient in refining the click-based search results. In this paper, we propose a novel ranking model based on the learning to rank framework. Visual features and click features are simultaneously utilized to obtain the ranking model. Specifically, the proposed approach is based on large margin structured output learning and the visual consistency is integrated with the click features through a hypergraph regularizer term. In accordance with the fast alternating linearization method, we design a novel algorithm to optimize the objective function. This algorithm alternately minimizes two different approximations of the original objective function by keeping one function unchanged and linearizing the other. We conduct experiments on a large-scale dataset collected from the Microsoft Bing image search engine, and the results demonstrate that the proposed learning to rank models based on visual features and user clicks outperforms state-of-the-art algorithms.",
"title": ""
},
{
"docid": "25eea5205d1f8beaa8c4a857da5714bc",
"text": "To backpropagate the gradients through discrete stochastic layers, we encode the true gradients into a multiplication between random noises and the difference of the same function of two different sets of discrete latent variables, which are correlated with these random noises. The expectations of that multiplication over iterations are zeros combined with spikes from time to time. To modulate the frequencies, amplitudes, and signs of the spikes to capture the temporal evolution of the true gradients, we propose the augment-REINFORCE-merge (ARM) estimator that combines data augmentation, the score-function estimator, permutation of the indices of latent variables, and variance reduction for Monte Carlo integration using common random numbers. The ARM estimator provides low-variance and unbiased gradient estimates for the parameters of discrete distributions, leading to state-of-the-art performance in both auto-encoding variational Bayes and maximum likelihood inference, for discrete latent variable models with one or multiple discrete stochastic layers.",
"title": ""
},
{
"docid": "0a17722ba7fbeda51784cdd699f54b3f",
"text": "One of the greatest challenges food research is facing in this century lies in maintaining sustainable food production and at the same time delivering high quality food products with an added functionality to prevent life-style related diseases such as, cancer, obesity, diabetes, heart disease, stroke. Functional foods that contain bioactive components may provide desirable health benefits beyond basic nutrition and play important roles in the prevention of life-style related diseases. Polyphenols and carotenoids are plant secondary metabolites which are well recognized as natural antioxidants linked to the reduction of the development and progression of life-style related diseases. This chapter focuses on healthpromoting food ingredients (polyphenols and carotenoids), food structure and functionality, and bioavailability of these bioactive ingredients, with examples on their commercial applications, namely on functional foods. Thereafter, in order to support successful development of health-promoting food ingredients, this chapter contributes to an understanding of the relationship between food structures, ingredient functionality, in relation to the breakdown of food structures in the gastrointestinal tract and its impact on the bioavailability of bioactive ingredients. The overview on food processing techniques and the processing of functional foods given here will elaborate novel delivery systems for functional food ingredients and their applications in food. Finally, this chapter concludes with microencapsulation techniques and examples of encapsulation of polyphenols and carotenoids; the physical structure of microencapsulated food ingredients and their impacts on food sensorial properties; yielding an outline on the controlled release of encapsulated bioactive compounds in food products.",
"title": ""
},
{
"docid": "8d20b2a4d205684f6353fe710f989fde",
"text": "Financial institutions manage numerous portfolios whose risk must be managed continuously, and the large amounts of data that has to be processed renders this a considerable effort. As such, a system that autonomously detects anomalies in the risk measures of financial portfolios, would be of great value. To this end, the two econometric models ARMA-GARCH and EWMA, and the two machine learning based algorithms LSTM and HTM, were evaluated for the task of performing unsupervised anomaly detection on the streaming time series of portfolio risk measures. Three datasets of returns and Value-at-Risk series were synthesized and one dataset of real-world Value-at-Risk series had labels handcrafted for the experiments in this thesis. The results revealed that the LSTM has great potential in this domain, due to an ability to adapt to different types of time series and for being effective at finding a wide range of anomalies. However, the EWMA had the benefit of being faster and more interpretable, but lacked the ability to capture anomalous trends. The ARMA-GARCH was found to have difficulties in finding a good fit to the time series of risk measures, resulting in poor performance, and the HTM was outperformed by the other algorithms in every regard, due to an inability to learn the autoregressive behaviour of the time series.",
"title": ""
},
{
"docid": "ed08e93061f2d248f6b70fde6e17b431",
"text": "With the rapid growth of e-commerce, the B2C of e-commerce has been a significant issue. The purpose of this study aims to predict consumers’ purchase intentions by integrating trust and perceived risk into the model to empirically examine the impact of key variables. 705 samples were obtained from online users purchasing from e-vendor of Yahoo! Kimo. This study applied the Structural Equation Model to examine consumers’ online shopping based on the Technology Acceptance Model (TAM). The results indicate that perceived ease of use (PEOU), perceived usefulness (PU), trust, and perceived risk significantly impact purchase intentions both directly and indirectly. Moreover, trust significantly reduced online consumer perceived risk during online shopping. This study provides evidence of the relationship between consumers’ purchase intention, perceived trust and perceived risk to websites of specific e-vendors. Such knowledge may help to inform promotion, designing, and advertising website strategies employed by practitioners.",
"title": ""
},
{
"docid": "7f420ef711e271be98c5acd427a2be57",
"text": "The Purchasing Power Parity Debate* Originally propounded by the 16th-century scholars of the University of Salamanca, the concept of purchasing power parity (PPP) was revived in the interwar period in the context of the debate concerning the appropriate level at which to re-establish international exchange rate parities. Broadly accepted as a long-run equilibrium condition in the post-war period, it first was advocated as a short-run equilibrium by many international economists in the first few years following the breakdown of the Bretton Woods system in the early 1970s and then increasingly came under attack on both theoretical and empirical grounds from the late 1970s to the mid 1990s. Accordingly, over the last three decades, a large literature has built up that examines how much the data deviated from theory, and the fruits of this research have provided a deeper understanding of how well PPP applies in both the short run and the long run. Since the mid 1990s, larger datasets and nonlinear econometric methods, in particular, have improved estimation. As deviations narrowed between real exchange rates and PPP, so did the gap narrow between theory and data, and some degree of confidence in long-run PPP began to emerge again. In this respect, the idea of long-run PPP now enjoys perhaps its strongest support in more than 30 years, a distinct reversion in economic thought. JEL Classification: F31 and F41",
"title": ""
},
{
"docid": "92716e900851c637fb60da359caf09a0",
"text": "Litz wire uses complex twisting to balance currents between strands. Most models assume that the twisting works perfectly to accomplish this balancing, and thus are not helpful in choosing the details of the twisting configuration. A complete model that explicitly models the effect of twisting on loss is introduced. Skin effect and proximity effect are each modeled at the level of the individual strands and at each level of the twisting construction. Comparisons with numerical simulations are used to verify the model. The results are useful for making design choices for the twisting configuration and the pitches of each twisting step. Windings with small numbers of turns are the most likely to have significant bundle-level effects that are not captured by conventional models, and are the most important to model and optimize with this approach.",
"title": ""
},
{
"docid": "e055fe2b1f2be90f58828da4cff78c78",
"text": "Probabilistic topic models, which aim to discover latent topics in text corpora define each document as a multinomial distributions over topics and each topic as a multinomial distributions over words. Although, humans can infer a proper label for each topic by looking at top representative words of the topic but, it is not applicable for machines. Automatic Topic Labeling techniques try to address the problem. The ultimate goal of topic labeling techniques are to assign interpretable labels for the learned topics. In this paper, we are taking concepts of ontology into consideration instead of words alone to improve the quality of generated labels for each topic. Our work is different in comparison with the previous efforts in this area, where topics are usually represented with a batch of selected words from topics. We have highlighted some aspects of our approach including: 1) we have incorporated ontology concepts with statistical topic modeling in a unified framework, where each topic is a multinomial probability distribution over the concepts and each concept is represented as a distribution over words; and 2) a topic labeling model according to the meaning of the concepts of the ontology included in the learned topics. The best topic labels are selected with respect to the semantic similarity of the concepts and their ontological categorizations. We demonstrate the effectiveness of considering ontological concepts as richer aspects between topics and words by comprehensive experiments on two different data sets. In another word, representing topics via ontological concepts shows an effective way for generating descriptive and representative labels for the discovered topics. Keywords—Topic modeling; topic labeling; statistical learning; ontologies; linked open data",
"title": ""
}
] |
scidocsrr
|
249dd9f3c0e1da8c0ec47f30ed236aca
|
Private Equality Test Using Ring-LWE Somewhat Homomorphic Encryption
|
[
{
"docid": "afd3bdd971c272583c1a24b3e1a331b6",
"text": "Machine learning classification is used for numerous tasks nowadays, such as medical or genomics predictions, spam detection, face recognition, and financial predictions. Due to privacy concerns, in some of these applications, it is important that the data and the classifier remain confidential. In this work, we construct three major classification protocols that satisfy this privacy constraint: hyperplane decision, Naïve Bayes, and decision trees. We also enable these protocols to be combined with AdaBoost. At the basis of these constructions is a new library of building blocks, which enables constructing a wide range of privacy-preserving classifiers; we demonstrate how this library can be used to construct other classifiers than the three mentioned above, such as a multiplexer and a face detection classifier. We implemented and evaluated our library and our classifiers. Our protocols are efficient, taking milliseconds to a few seconds to perform a classification when running on real medical datasets.",
"title": ""
}
] |
[
{
"docid": "37e65ab2fc4d0a9ed5b8802f41a1a2a2",
"text": "This paper is based on a panel discussion held at the Artificial Intelligence in Medicine Europe (AIME) conference in Amsterdam, The Netherlands, in July 2007. It had been more than 15 years since Edward Shortliffe gave a talk at AIME in which he characterized artificial intelligence (AI) in medicine as being in its \"adolescence\" (Shortliffe EH. The adolescence of AI in medicine: will the field come of age in the '90s? Artificial Intelligence in Medicine 1993;5:93-106). In this article, the discussants reflect on medical AI research during the subsequent years and characterize the maturity and influence that has been achieved to date. Participants focus on their personal areas of expertise, ranging from clinical decision-making, reasoning under uncertainty, and knowledge representation to systems integration, translational bioinformatics, and cognitive issues in both the modeling of expertise and the creation of acceptable systems.",
"title": ""
},
{
"docid": "35060ab7be361f6158bccb4b2ffe0b6b",
"text": "In recent years, the potential of stem cell research for tissue engineering-based therapies and regenerative medicine clinical applications has become well established. In 2006, Chung pioneered the first entire organ transplant using adult stem cells and a scaffold for clinical evaluation. With this a new milestone was achieved, with seven patients with myelomeningocele receiving stem cell-derived bladder transplants resulting in substantial improvements in their quality of life. While a bladder is a relatively simple organ, the breakthrough highlights the incredible benefits that can be gained from the cross-disciplinary nature of tissue engineering and regenerative medicine (TERM) that encompasses stem cell research and stem cell bioprocessing. Unquestionably, the development of bioprocess technologies for the transfer of the current laboratory-based practice of stem cell tissue culture to the clinic as therapeutics necessitates the application of engineering principles and practices to achieve control, reproducibility, automation, validation and safety of the process and the product. The successful translation will require contributions from fundamental research (from developmental biology to the 'omics' technologies and advances in immunology) and from existing industrial practice (biologics), especially on automation, quality assurance and regulation. The timely development, integration and execution of various components will be critical-failures of the past (such as in the commercialization of skin equivalents) on marketing, pricing, production and advertising should not be repeated. This review aims to address the principles required for successful stem cell bioprocessing so that they can be applied deftly to clinical applications.",
"title": ""
},
{
"docid": "11c245ca7bc133155ff761374dfdea6e",
"text": "Received Nov 12, 2017 Revised Jan 20, 2018 Accepted Feb 11, 2018 In this paper, a modification of PVD (Pixel Value Differencing) algorithm is used for Image Steganography in spatial domain. It is normalizing secret data value by encoding method to make the new pixel edge difference less among three neighbors (horizontal, vertical and diagonal) and embedding data only to less intensity pixel difference areas or regions. The proposed algorithm shows a good improvement for both color and gray-scale images compared to other algorithms. Color images performances are better than gray images. However, in this work the focus is mainly on gray images. The strenght of this scheme is that any random hidden/secret data do not make any shuttle differences to Steg-image compared to original image. The bit plane slicing is used to analyze the maximum payload that has been embeded into the cover image securely. The simulation results show that the proposed algorithm is performing better and showing great consistent results for PSNR, MSE values of any images, also against Steganalysis attack.",
"title": ""
},
{
"docid": "4d449388969075c56b921f9183fbc7b5",
"text": "Tasks such as question answering and semantic search are dependent on the ability of querying & reasoning over large-scale commonsense knowledge bases (KBs). However, dealing with commonsense data demands coping with problems such as the increase in schema complexity, semantic inconsistency, incompleteness and scalability. This paper proposes a selective graph navigation mechanism based on a distributional relational semantic model which can be applied to querying & reasoning over heterogeneous knowledge bases (KBs). The approach can be used for approximative reasoning, querying and associational knowledge discovery. In this paper we focus on commonsense reasoning as the main motivational scenario for the approach. The approach focuses on addressing the following problems: (i) providing a semantic selection mechanism for facts which are relevant and meaningful in a specific reasoning & querying context and (ii) allowing coping with information incompleteness in large KBs. The approach is evaluated using ConceptNet as a commonsense KB, and achieved high selectivity, high scalability and high accuracy in the selection of meaningful navigational paths. Distributional semantics is also used as a principled mechanism to cope with information incompleteness.",
"title": ""
},
{
"docid": "d5a7b2c027679d016c7c1ed128e48fd8",
"text": "Figure 3: Example of phase correlation between two microphones. The peak of this function indicates the inter-channel delay. index associated with peak value of f(t). This delay estimator is computationally convenient and more robust to noise and reverberation than other approaches based on cross-correlation or adaptive ltering. In ideal conditions, the output of Equation (5) is a delta function centered on the correct delay. In real applications with a wide band signal, e.g., a speech signal, the outcome is not a perfect delta function. Rather it resembles a correlation function of a random process. The time index associated with the maximum value of the output of Equation (5) provides an estimation of the delay. The system can produce wrong answers when two or more peaks of similar amplitude are present, i.e., in highly reverber-ant conditions. The resolution in delay estimation is limited in discrete systems by the sampling frequency. In order to increase the accuracy, oversampling can be applied in the neighborhood of the peak, to achieve sub-sample precision. Fig. 3 demonstrates an example of the result of a cross-power spectrum time delay estimator. Once the relative delays associated with all considered microphone pairs are known, the source position (x s ; y s) is estimated as the point that would produce the most similar delay values to the observed ones. This optimization is performed by a downhill sim-plex algorithm 6] applied to minimize the Euclidean distance between M observed delays ^ i and the corresponding M theoretical delays i : An analysis of the impulse responses associated with all the microphones, given an acoustic source emitting at a speciic position, has shown that constructive interference phenomena occur in the presence of signiicant reverberation. In some cases, the direct wavefront happens to be weaker than a coincidence of reeections, inducing a wrong estimation of the arrival direction and leading to an incorrect result. Selecting only microphone pairs that show the highest peaks of phase correlation generally alleviates this problem. Location results obtained with this strategy show comparable performance (mean posi-Reverb. Time Average Error 10 mic pairs 4 mic pairs 0.1sec 38.4 cm 29.8 cm 0.6sec 51.3 cm 32.1 cm 1.7sec 105.0 cm 46.4 cm Table 1: Average location error using either all 10 pairs or 4 pairs of microphones. Three reverberation time conditions are considered. tion error of about 0.3 m) at reverberation times of 0.1 s and 0.6 s. …",
"title": ""
},
{
"docid": "a28567e108f00e3b251882404f2574b2",
"text": "Sirs: A 46-year-old woman was referred to our hospital because of suspected cerebral ischemia. Two days earlier the patient had recognized a left-sided weakness and clumsiness. On neurological examination we found a mild left-sided hemiparesis and hemiataxia. There was a generalized shrinking violaceous netlike pattering of the skin especially on both legs and arms but also on the trunk and buttocks (Fig. 1). The patient reported the skin changing to be more prominent on cold exposure. The patient’s family remembered this skin finding to be evident since the age of five years. A diagnosis of livedo racemosa had been made 5 years ago. The neuropsychological assessment of this highly educated civil servant revealed a slight cognitive decline. MRI showed a right-sided cerebral ischemia in the middle cerebral artery (MCA) territory. Her medical history was significant for migraine-like headache for many years, a miscarriage 18 years before and a deep vein thrombosis of the left leg six years ago. She had no history of smoking or other cerebrovascular risk factors including no estrogen-containing oral contraceptives. The patient underwent intensive examinations including duplex sonography of extraand intracranial arteries, transesophageal echocardiography, 24-h ECG, 24-h blood pressure monitoring, multimodal evoked potentials, electroencephalography, lumbar puncture and sonography of abdomen. All these tests were negative. Extensive laboratory examinations revealed a heterozygote prothrombin 20210 mutation, which is associated with a slightly increased risk for thrombosis. Antiphospholipid antibodies (aplAB) and other laboratory examinations to exclude vasculitis, toxic metabolic disturbances and other causes for livedo racemosa were negative. Skin biopsy showed vasculopathy with intimal proliferation and an occluding thrombus. The patient was diagnosed as having antiphospholipid-antibodynegative Sneddon’s syndrome (SS) based on cerebral ischemia combined with wide-spread livedo racemosa associated with a history of miscarriage, deep vein thrombosis, migraine like headaches and mild cognitive decline. We started long-term prophylactic pharmacological therapy with captopril as a myocyte proliferation agent and with aspirin as an antiplatelet therapy. Furthermore we recommended thrombosis prophylaxis in case of immobilization. One month later the patient experienced vein thrombosis of her right forearm and suffered from dyspnea. Antiphospholipid antibody testing again was negative. EBT and CT of thorax showed an aneurysmatic dilatation of aorta ascendens up to 4.5 cm. After careful consideration of the possible disadvantages we nevertheless decided to start long-term anticoagulation instead of antiplatelet therapy because of the second thrombotic event. The elucidating and interesting issue of this case is the association of miscarriage and two vein thromboses in aplAB-negative SS. Little is known about this phenomenon and there are only a few reports about these symptoms in aplABLETTER TO THE EDITORS",
"title": ""
},
{
"docid": "27b42d8eaf6eea29589ee2960532f996",
"text": "This paper describes a system for semi-automatic tr nscription of prosody based on a stylization of the fundamenta l frequency data (contour) for vocalic (or syllabic) nuclei. Th e stylization is a simulation of tonal perception of human listen ers. The system requires a time-aligned phonetic annotation. The transcription has been applied to several speech co rpora.",
"title": ""
},
{
"docid": "e00c05ab9796c6c217e00695adcb07ac",
"text": "Web 2.0 technologies opened up new perspectives in learning and teaching activities. Collaboration, communication and sharing between learners contribute to the self-regulated learning, a bottom-up approach. The market for smartphones and tablets are growing rapidly. They are being used more often in everyday life. This allows us to support self-regulated learning in a way that learning resources and applications are accessible any time and at any place. This publication focuses on the Personal Learning Environment (PLE) that was launched at Graz University of Technology in 2010. After a first prototype a complete redesign was carried out to fulfill a change towards learner-centered framework. Statistical data show a high increase of attractiveness of the whole system in general. As the next step a mobile version is integrated. A converter for browser-based learning apps within PLE to native smartphone apps leads to the Ubiquitous PLE, which is discussed in this paper in detail.",
"title": ""
},
{
"docid": "0e64386d566fafabb793fe33a0ac1280",
"text": "Autonomous mobile robot navigation is a very relevant problem in robotics research. This paper proposes a vision-based autonomous navigation system using artificial neural networks (ANN) and finite state machines (FSM). In the first step, ANNs are used to process the image frames taken from the robot´s camera, classifying the space, resulting in navigable or non-navigable areas (image road segmentation). Then, the ANN output is processed and used by a FSM, which identifies the robot´s current state, and define which action the robot should take according to the processed image frame. Different experiments were performed in order to validate and evaluate this approach, using a small mobile robot with integrated camera, in a structured indoor environment. The integration of ANN vision-based algorithms and robot´s action control based on a FSM, as proposed in this paper, demonstrated to be a promising approach to autonomous mobile robot navigation.",
"title": ""
},
{
"docid": "4deec89ffa0b860db8a8ceca03e945bd",
"text": "This paper aims at reviewing literature on nurses’ knowledge of delirium, dementia and depression (3Ds) which are rapidly increasing worldwide as the population ages, and to identify interventions that have shown effectiveness in improving nurses’ knowledge level of these diseases. Nurses’ knowledge of delirium, dementia and depression is essential to providing quality patient care. To access the literature, online databases including Medline (OVID), CINAHL (EBSCO), Nursing and Allied Health Source (ProQuest), and Health and Medicine (ProQuest), in addition to Google scholar search engine, were searched using key words “delirium”, “dementia”, “depression”, “nurse*”, “knowledge” and their alternative words. Overall, 20 articles were found to meet the criteria for inclusion in the review. The study found that nurses’ knowledge of the 3Ds was generally low, and they were not particularly able to differentiate between the three diseases. It is important that health care systems are adequately resourced to meet this growing challenge. Nurses should receive appropriate training about the 3Ds, and their knowledge be reinforced through continuing education.",
"title": ""
},
{
"docid": "d879e53880baeb2da303179195731b03",
"text": "Semantic search has been one of the motivations of the semantic Web since it was envisioned. We propose a model for the exploitation of ontology-based knowledge bases to improve search over large document repositories. In our view of information retrieval on the semantic Web, a search engine returns documents rather than, or in addition to, exact values in response to user queries. For this purpose, our approach includes an ontology-based scheme for the semiautomatic annotation of documents and a retrieval system. The retrieval model is based on an adaptation of the classic vector-space model, including an annotation weighting algorithm, and a ranking algorithm. Semantic search is combined with conventional keyword-based retrieval to achieve tolerance to knowledge base incompleteness. Experiments are shown where our approach is tested on corpora of significant scale, showing clear improvements with respect to keyword-based search",
"title": ""
},
{
"docid": "46fa91ce587d094441466a7cbe5c5f07",
"text": "Automatic facial expression analysis is an interesting and challenging problem which impacts important applications in many areas such as human-computer interaction and data-driven animation. Deriving effective facial representative features from face images is a vital step towards successful expression recognition. In this paper, we evaluate facial representation based on statistical local features called Local Binary Patterns (LBP) for facial expression recognition. Simulation results illustrate that LBP features are effective and efficient for facial expression recognition. A real-time implementation of the proposed approach is also demonstrated which can recognize expressions accurately at the rate of 4.8 frames per second.",
"title": ""
},
{
"docid": "de1db4e54fb686f2b597936aa551cd14",
"text": "Trustworthy software requires strong privacy and security guarantees from a secure trust base in hardware. While chipmakers provide hardware support for basic security and privacy primitives such as enclaves and memory encryption. these primitives do not address hiding of the memory access pattern, information about which may enable attacks on the system or reveal characteristics of sensitive user data. State-of-the-art approaches to protecting the access pattern are largely based on Oblivious RAM (ORAM). Unfortunately, current ORAM implementations suffer from very significant practicality and overhead concerns, including roughly an order of magnitude slowdown, more than 100% memory capacity overheads, and the potential for system deadlock.\n Memory technology trends are moving towards 3D and 2.5D integration, enabling significant logic capabilities and sophisticated memory interfaces. Leveraging the trends, we propose a new approach to access pattern obfuscation, called ObfusMem. ObfusMem adds the memory to the trusted computing base and incorporates cryptographic engines within the memory. ObfusMem encrypts commands and addresses on the memory bus, hence the access pattern is cryptographically obfuscated from external observers. Our evaluation shows that ObfusMem incurs an overhead of 10.9% on average, which is about an order of magnitude faster than ORAM implementations. Furthermore, ObfusMem does not incur capacity overheads and does not amplify writes. We analyze and compare the security protections provided by ObfusMem and ORAM, and highlight their differences.",
"title": ""
},
{
"docid": "4c3a7002536a825b73607c45a6b36cb4",
"text": "In this article we take an empirical cross-country perspective to investigate the robustness and causality of the link between income inequality and crime rates. First, we study the correlation between the Gini index and, respectively, homicide and robbery rates along different dimensions of the data (within and between countries). Second, we examine the inequality-crime link when other potential crime determinants are controlled for. Third, we control for the likely joint endogeneity of income inequality in order to isolate its exogenous impact on homicide and robbery rates. Fourth, we control for the measurement error in crime rates by modelling it as both unobserved country-specific effects and random noise. Lastly, we examine the robustness of the inequality-crime link to alternative measures of inequality. The sample for estimation consists of panels of non-overlapping 5-year averages for 39 countries over 1965-95 in the case of homicides, and 37 countries over 1970-1994 in the case of robberies. We use a variety of statistical techniques, from simple correlations to regression analysis and from static OLS to dynamic GMM estimation. We find that crime rates and inequality are positively correlated (within each country and, particularly, between countries), and it appears that this correlation reflects causation from inequality to crime rates, even controlling for other crime determinants. * We are grateful for comments and suggestions from Francois Bourguignon, Dante Contreras, Francisco Ferreira, Edward Glaeser, Sam Peltzman, Debraj Ray, Luis Servén, and an anonymous referee. N. Loayza worked at the research group of the Central Bank of Chile during the preparation of the paper. This study was sponsored by the Latin American Regional Studies Program, The World Bank. The opinions and conclusions expressed here are those of the authors and do not necessarily represent the views of the institutions to which they are affiliated.",
"title": ""
},
{
"docid": "e70086a4ba81b7457031e850450601cd",
"text": "Some of the features in Discipulus that contribute to its extraordinary performance [3, 4, 5, 6, 9] are: • Discipulus implements a Genetic Programming algorithm. This algorithm determines the appropriate functional form and optimizes the parameters of the function. It is an ideal algorithm for complex, noisy, poorly understood domains. • Discipulus performs Genetic Programming thru direct manipulation of binary machine code. This makes Discipulus about sixty to two-hundred times faster than comparable automated learning approaches [10]. • Discipulus performs multi-run Genetic Programming, intelligently adapting its own parameters to the problem at hand. Each of these capabilities of Discipulus are discussed below.",
"title": ""
},
{
"docid": "6ddad64507fa5ebf3b2930c261584967",
"text": "In this article we propose a methodology to determine snow cover by means of Landsat-7 ETM+ and Landsat-5 TM images, as well as an improvement in daily Snow Cover TERRA- MODIS product (MOD10A1), between 2002 and 2005. Both methodologies are based on a NDSI threshold > 0.4. In the Landsat case, and although this threshold also selects water bodies, we have obtained optimal results using a mask of water bodies and generating a pre-boundary snow mask around the snow cover. Moreover, an important improvement in snow cover mapping in shadow cast areas by means of a hybrid classification has been obtained. Using these results as ground truth we have verified MODIS Snow Cover product using coincident dates. In the MODIS product, we have noted important commission errors in water bodies, forest covers and orographic shades because of the NDVI-NDSI filter applied to this product. In order to improve MODIS snow cover determination using MODIS images, we propose a hybrid methodology based on experience with Landsat images, which provide greater spatial resolution.",
"title": ""
},
{
"docid": "2f7b1f2422526d99e75dce7d38665774",
"text": "Conventional Open Information Extraction (Open IE) systems are usually built on hand-crafted patterns from other NLP tools such as syntactic parsing, yet they face problems of error propagation. In this paper, we propose a neural Open IE approach with an encoder-decoder framework. Distinct from existing methods, the neural Open IE approach learns highly confident arguments and relation tuples bootstrapped from a state-of-the-art Open IE system. An empirical study on a large benchmark dataset shows that the neural Open IE system significantly outperforms several baselines, while maintaining comparable computational efficiency.",
"title": ""
},
{
"docid": "b1535b6f1c5f1054e2d61c4920d860ba",
"text": "This research examines a collaborative solution to a common problem, that of providing help to distributed users. The Answer Garden 2 system provides a secondgeneration architecture for organizational and community memory applications. After describing the need for Answer Garden 2’s functionality, we describe the architecture of the system and two underlying systems, the Cafe ConstructionKit and Collaborative Refinery. We also present detailed descriptions of the collaborative help and collaborative refining facilities in the Answer Garden 2 system.",
"title": ""
},
{
"docid": "d91eb9b24c557f8a973962e34941d413",
"text": "In this paper suggestion FPC for antenna design not only easy to control the antenna pattern on desired direction and radiation magnitude but also effective and quickly solved the complex tuning parameters of antenna application. Flexible printer circuit (FPC) board serves planar wireless antenna design. Effectively to supersede the antenna based on the traditional fixed printer circuit board (PCB) implement, and then provide free dimension of antenna and wireless system integration to designer. This paper proposes a new antenna technique for FPC printed monopole antenna as a radiator. This branch lines FPC monopole antenna based on asymmetric meander lines with flexible structure exhibits good performances. Branch lines monopole radiator has the merits to improve antenna bandwidth with fine frequency tuning. The PDA phone for CDMA2000 cellular, PCS, BT/WiFi, and GPS applications is well demonstrated with the integration and measurement of the co-designed.",
"title": ""
},
{
"docid": "a9e856d2c3bb69df289abf26ce6f178c",
"text": "A novel hybrid method coupling genetic programming and orthogonal least squares, called GP/OLS, was employed to derive new ground-motion prediction equations (GMPEs). The principal ground-motion parameters formulated were peak ground acceleration (PGA), peak ground velocity (PGV) and peak ground displacement (PGD). The proposed GMPEs relate PGA, PGV and PGD to different seismic parameters including earthquake magnitude, earthquake source to site distance, average shear-wave velocity, and faulting mechanisms. The equations were established based on an extensive database of strong ground-motion recordings released by Pacific Earthquake Engineering Research Center (PEER). For more validity verification, the developed equations were employed to predict the ground-motion parameters of the Iranian plateau earthquakes. A sensitivity analysis was carried out to determine the contributions of the parameters affecting PGA, PGV and PGD. The sensitivity of the models to the variations of the influencing parameters was further evaluated through a parametric analysis. The obtained GMPEs are effectively capable of estimating the site ground-motion parameters. The equations provide a prediction performance better than or comparable with the attenuation relationships found in the literature. The derived GMPEs are remarkably simple and straightforward and can reliably be used for the pre-design purposes. & 2011 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
66d01d6fdbcf073124f6d389b7cd724e
|
Quick Quiz: A Gamified Approach for Enhancing Learning
|
[
{
"docid": "9b13beaf2e5aecc256117fdd8ccf8368",
"text": "This paper examines the literature on computer games and serious games in regard to the potential positive impacts of gaming on users aged 14 years or above, especially with respect to learning, skill enhancement and engagement. Search terms identified 129 papers reporting empirical evidence about the impacts and outcomes of computer games and serious games with respect to learning and engagement and a multidimensional approach to categorizing games was developed. The findings revealed that playing computer games is linked to a range of perceptual, cognitive, behavioural, affective and motivational impacts and outcomes. The most frequently occurring outcomes and impacts were knowledge acquisition/content understanding and affective and motivational outcomes. The range of indicators and measures used in the included papers are discussed, together with methodological limitations and recommendations for further work in this area. 2012 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "712d292b38a262a8c37679c9549a631d",
"text": "Addresses for correspondence: Dr Sara de Freitas, London Knowledge Lab, Birkbeck College, University of London, 23–29 Emerald Street, London WC1N 3QS. UK. Tel: +44(0)20 7763 2117; fax: +44(0)20 7242 2754; email: sara@lkl.ac.uk. Steve Jarvis, Vega Group PLC, 2 Falcon Way, Shire Park, Welwyn Garden City, Herts AL7 1TW, UK. Tel: +44 (0)1707 362602; Fax: +44 (0)1707 393909; email: steve.jarvis@vega.co.uk",
"title": ""
}
] |
[
{
"docid": "d87edfb603b5d69bcd0e0dc972d26991",
"text": "The adult nervous system is not static, but instead can change, can be reshaped by experience. Such plasticity has been demonstrated from the most reductive to the most integrated levels, and understanding the bases of this plasticity is a major challenge. It is apparent that stress can alter plasticity in the nervous system, particularly in the limbic system. This paper reviews that subject, concentrating on: a) the ability of severe and/or prolonged stress to impair hippocampal-dependent explicit learning and the plasticity that underlies it; b) the ability of mild and transient stress to facilitate such plasticity; c) the ability of a range of stressors to enhance implicit fear conditioning, and to enhance the amygdaloid plasticity that underlies it.",
"title": ""
},
{
"docid": "79c2623b0e1b51a216fffbc6bbecd9ec",
"text": "Visual notations form an integral part of the language of software engineering (SE). Yet historically, SE researchers and notation designers have ignored or undervalued issues of visual representation. In evaluating and comparing notations, details of visual syntax are rarely discussed. In designing notations, the majority of effort is spent on semantics, with graphical conventions largely an afterthought. Typically, no design rationale, scientific or otherwise, is provided for visual representation choices. While SE has developed mature methods for evaluating and designing semantics, it lacks equivalent methods for visual syntax. This paper defines a set of principles for designing cognitively effective visual notations: ones that are optimized for human communication and problem solving. Together these form a design theory, called the Physics of Notations as it focuses on the physical (perceptual) properties of notations rather than their logical (semantic) properties. The principles were synthesized from theory and empirical evidence from a wide range of fields and rest on an explicit theory of how visual notations communicate. They can be used to evaluate, compare, and improve existing visual notations as well as to construct new ones. The paper identifies serious design flaws in some of the leading SE notations, together with practical suggestions for improving them. It also showcases some examples of visual notation design excellence from SE and other fields.",
"title": ""
},
{
"docid": "4d34ba30b0ab330fcf6251490928120c",
"text": "BACKGROUND\nDespite extensive data about physician burnout, to our knowledge, no national study has evaluated rates of burnout among US physicians, explored differences by specialty, or compared physicians with US workers in other fields.\n\n\nMETHODS\nWe conducted a national study of burnout in a large sample of US physicians from all specialty disciplines using the American Medical Association Physician Masterfile and surveyed a probability-based sample of the general US population for comparison. Burnout was measured using validated instruments. Satisfaction with work-life balance was explored.\n\n\nRESULTS\nOf 27 276 physicians who received an invitation to participate, 7288 (26.7%) completed surveys. When assessed using the Maslach Burnout Inventory, 45.8% of physicians reported at least 1 symptom of burnout. Substantial differences in burnout were observed by specialty, with the highest rates among physicians at the front line of care access (family medicine, general internal medicine, and emergency medicine). Compared with a probability-based sample of 3442 working US adults, physicians were more likely to have symptoms of burnout (37.9% vs 27.8%) and to be dissatisfied with work-life balance (40.2% vs 23.2%) (P < .001 for both). Highest level of education completed also related to burnout in a pooled multivariate analysis adjusted for age, sex, relationship status, and hours worked per week. Compared with high school graduates, individuals with an MD or DO degree were at increased risk for burnout (odds ratio [OR], 1.36; P < .001), whereas individuals with a bachelor's degree (OR, 0.80; P = .048), master's degree (OR, 0.71; P = .01), or professional or doctoral degree other than an MD or DO degree (OR, 0.64; P = .04) were at lower risk for burnout.\n\n\nCONCLUSIONS\nBurnout is more common among physicians than among other US workers. Physicians in specialties at the front line of care access seem to be at greatest risk.",
"title": ""
},
{
"docid": "29bc53c2e50de52e073b7d0e304d0f5f",
"text": "UNLABELLED\nA theory is presented that attempts to answer two questions. What visual contents can an observer consciously access at one moment?\n\n\nANSWER\nonly one feature value (e.g., green) per dimension, but those feature values can be associated (as a group) with multiple spatially precise locations (comprising a single labeled Boolean map). How can an observer voluntarily select what to access?\n\n\nANSWER\nin one of two ways: (a) by selecting one feature value in one dimension (e.g., selecting the color red) or (b) by iteratively combining the output of (a) with a preexisting Boolean map via the Boolean operations of intersection and union. Boolean map theory offers a unified interpretation of a wide variety of visual attention phenomena usually treated in separate literatures. In so doing, it also illuminates the neglected phenomena of attention to structure.",
"title": ""
},
{
"docid": "55d10e35e0a54859b20e5c8e9c9d8ef4",
"text": "Course allocation is one of the most complex issues facing any university, due to the sensitive nature of deciding which subset of students should be granted seats in highly-popular (market-scarce) courses. In recent years, researchers have proposed numerous solutions, using techniques in integer programming, combinatorial auction design, and matching theory. In this paper, we present a four-part AI-based course allocation algorithm that was conceived by an undergraduate student, and recently implemented at a small Canadian liberal arts university. This new allocation process, which builds upon the Harvard Business School Draft, has received overwhelming support from students and faculty for its transparency, impartiality, and effectiveness.",
"title": ""
},
{
"docid": "43c4dd05f438adf91a62f42f1f7d5abc",
"text": "We introduce a technique for augmenting neural text-to-speech (TTS) with lowdimensional trainable speaker embeddings to generate different voices from a single model. As a starting point, we show improvements over the two state-ofthe-art approaches for single-speaker neural TTS: Deep Voice 1 and Tacotron. We introduce Deep Voice 2, which is based on a similar pipeline with Deep Voice 1, but constructed with higher performance building blocks and demonstrates a significant audio quality improvement over Deep Voice 1. We improve Tacotron by introducing a post-processing neural vocoder, and demonstrate a significant audio quality improvement. We then demonstrate our technique for multi-speaker speech synthesis for both Deep Voice 2 and Tacotron on two multi-speaker TTS datasets. We show that a single neural TTS system can learn hundreds of unique voices from less than half an hour of data per speaker, while achieving high audio quality synthesis and preserving the speaker identities almost perfectly.",
"title": ""
},
{
"docid": "1cc962ab0d15a47725858ed5ff5872f6",
"text": "Although spontaneous remyelination does occur in multiple sclerosis lesions, its extent within the global population with this disease is presently unknown. We have systematically analysed the incidence and distribution of completely remyelinated lesions (so-called shadow plaques) or partially remyelinated lesions (shadow plaque areas) in 51 autopsies of patients with different clinical courses and disease durations. The extent of remyelination was variable between cases. In 20% of the patients, the extent of remyelination was extensive with 60-96% of the global lesion area remyelinated. Extensive remyelination was found not only in patients with relapsing multiple sclerosis, but also in a subset of patients with progressive disease. Older age at death and longer disease duration were associated with significantly more remyelinated lesions or lesion areas. No correlation was found between the extent of remyelination and either gender or age at disease onset. These results suggest that the variable and patient-dependent extent of remyelination must be considered in the design of future clinical trials aimed at promoting CNS repair.",
"title": ""
},
{
"docid": "c76fc0f9ce4422bee1d2cf3964f1024c",
"text": "The subjective nature of gender inequality motivates the analysis and comparison of data from real and fictional human interaction. We present a computational extension of the Bechdel test: A popular tool to assess if a movie contains a male gender bias, by looking for two female characters who discuss about something besides a man. We provide the tools to quantify Bechdel scores for both genders, and we measure them in movie scripts and large datasets of dialogues between users of MySpace and Twitter. Comparing movies and users of social media, we find that movies and Twitter conversations have a consistent male bias, which does not appear when analyzing MySpace. Furthermore, the narrative of Twitter is closer to the movies that do not pass the Bechdel test than to",
"title": ""
},
{
"docid": "fe1d0321b1182c9ecb92ccd95c83cd25",
"text": "Cybercriminals have leveraged the popularity of a large user base available on Online Social Networks (OSNs) to spread spam campaigns by propagating phishing URLs, attaching malicious contents, etc. However, another kind of spam attacks using phone numbers has recently become prevalent on OSNs, where spammers advertise phone numbers to attract users’ attention and convince them to make a call to these phone numbers. The dynamics of phone number based spam is different from URL-based spam due to an inherent trust associated with a phone number. While previous work has proposed strategies to mitigate URL-based spam attacks, phone number based spam attacks have received less attention. In this paper, we aim to detect spammers that use phone numbers to promote campaigns on Twitter. To this end, we collected information (tweets, user meta-data, etc.) about 3, 370 campaigns spread by 670, 251 users. We model the Twitter dataset as a heterogeneous network by leveraging various interconnections between different types of nodes present in the dataset. In particular, we make the following contributions – (i) We propose a simple yet effective metric, called Hierarchical Meta-Path Score (HMPS) to measure the proximity of an unknown user to the other known pool of spammers. (ii) We design a feedback-based active learning strategy and show that it significantly outperforms three state-of-the-art baselines for the task of spam detection. Our method achieves 6.9% and 67.3% higher F1-score and AUC, respectively compared to the best baseline method. (iii) To overcome the problem of less training instances for supervised learning, we show that our proposed feedback strategy achieves 25.6% and 46% higher F1-score and AUC respectively than other oversampling strategies. Finally, we perform a case study to show how our method is capable of detecting those users as spammers who have not been suspended by Twitter (and other baselines) yet.",
"title": ""
},
{
"docid": "f2205324dbf3a828e695854402ebbafe",
"text": "Current research in law and neuroscience is promising to answer these questions with a \"yes.\" Some legal scholars working in this area claim that we are close to realizing the \"early criminologists' dream of identifying the biological roots of criminality.\" These hopes for a neuroscientific transformation of the criminal law, although based in the newest research, are part of a very old story. Criminal law and neuroscience have been engaged in an ill-fated and sometimes tragic affair for over two hundred years. Three issues have recurred that track those that bedeviled earlier efforts to ground criminal law in brain sciences. First is the claim that the brain is often the most relevant or fundamental level at which to understand criminal conduct. Second is that the various phenomena we call \"criminal violence\" arise causally from dysfunction within specific locations in the brain (\"localization\"). Third is the related claim that, because much violent criminality arises from brain dysfunction, people who commit such acts are biologically different from typical people (\"alterity\" or \"otherizing\").",
"title": ""
},
{
"docid": "e640c691a45a5435dcdb7601fb581280",
"text": "We study the problem of response selection for multi-turn conversation in retrieval-based chatbots. The task involves matching a response candidate with a conversation context, the challenges for which include how to recognize important parts of the context, and how to model the relationships among utterances in the context. Existing matching methods may lose important information in contexts as we can interpret them with a unified framework in which contexts are transformed to fixed-length vectors without any interaction with responses before matching. This motivates us to propose a new matching framework that can sufficiently carry important information in contexts to matching and model relationships among utterances at the same time. The new framework, which we call a sequential matching framework (SMF), lets each utterance in a context interact with a response candidate at the first step and transforms the pair to a matching vector. The matching vectors are then accumulated following the order of the utterances in the context with a recurrent neural network (RNN) that models relationships among utterances. Context-response matching is then calculated with the hidden states of the RNN. Under SMF, we propose a sequential convolutional network and sequential attention network and conduct experiments on two public data sets to test their performance. Experiment results show that both models can significantly outperform state-of-the-art matching methods. We also show that the models are interpretable with visualizations that provide us insights on how they capture and leverage important information in contexts for matching.",
"title": ""
},
{
"docid": "a205d93fb0ce6dfc24a4367dd3461055",
"text": "Smart devices are gaining popularity in our homes with the promise to make our lives easier and more comfortable. However, the increased deployment of such smart devices brings an increase in potential security risks. In this work, we propose an intrusion detection and mitigation framework, called IoT-IDM, to provide a network-level protection for smart devices deployed in home environments. IoT-IDM monitors the network activities of intended smart devices within the home and investigates whether there is any suspicious or malicious activity. Once an intrusion is detected, it is also capable of blocking the intruder in accessing the victim device on the fly. The modular design of IoT-IDM gives its users the flexibility to employ customized machine learning techniques for detection based on learned signature patterns of known attacks. Software-defined networking technology and its enabling communication protocol, OpenFlow, are used to realise this framework. Finally, a prototype of IoT-IDM is developed and the applicability and efficiency of proposed framework demonstrated through a real IoT device: a smart light bulb.",
"title": ""
},
{
"docid": "d972e23eb49c15488d2159a9137efb07",
"text": "One of the main challenges of the solid-state transformer (SST) lies in the implementation of the dc–dc stage. In this paper, a quadruple-active-bridge (QAB) dc–dc converter is investigated to be used as a basic module of a modular three-stage SST. Besides the feature of high power density and soft-switching operation (also found in others converters), the QAB converter provides a solution with reduced number of high-frequency transformers, since more bridges are connected to the same multiwinding transformer. To ensure soft switching for the entire operation range of the QAB converter, the triangular current-mode modulation strategy, previously adopted for the dual-active-bridge converter, is extended to the QAB converter. The theoretical analysis is developed considering balanced (equal power processed by the medium-voltage (MV) cells) and unbalanced (unequal power processed by the MV cells) conditions. In order to validate the theoretical analysis developed in the paper, a 2-kW prototype is built and experimented.",
"title": ""
},
{
"docid": "d7ea5e0bdf811f427b7c283d4aae7371",
"text": "This work investigates the development of students’ computational thinking (CT) skills in the context of educational robotics (ER) learning activity. The study employs an appropriate CT model for operationalising and exploring students’ CT skills development in two different age groups (15 and 18 years old) and across gender. 164 students of different education levels (Junior high: 89; High vocational: 75) engaged in ER learning activities (2 hours per week, 11 weeks totally) and their CT skills were evaluated at different phases during the activity, using different modality (written and oral) assessment tools. The results suggest that: (a) students reach eventually the same level of CT skills development independent of their age and gender, (b) CT skills inmost cases need time to fully develop (students’ scores improve significantly towards the end of the activity), (c) age and gender relevant differences appear when analysing students’ score in the various specific dimensions of the CT skills model, (d) the modality of the skill assessment instrumentmay have an impact on students’ performance, (e) girls appear inmany situations to need more training time to reach the same skill level compared to boys. © 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a961b8851761575ae9b54684c58aa30d",
"text": "We propose an optical wireless indoor localization using light emitting diodes (LEDs) and demonstrate it via simulation. Unique frequency addresses are assigned to each LED lamp, and transmitted through the light radiated by the LED. Using the phase difference, time difference of arrival (TDOA) localization algorithm is employed. Because the proposed localization method used pre-installed LED ceiling lamps, no additional infrastructure for localization is required to install and therefore, inexpensive system can be realized. The performance of the proposed localization method is evaluated by computer simulation, and the indoor location accuracy is less than 1 cm in the space of 5m x 5 m x 3 m.",
"title": ""
},
{
"docid": "dc9d26442f454685d8bb92deb17b4a23",
"text": "Computer vision is the science and technology of making machines that see. It is concerned with the theory, design and implementation of algorithms that can automatically process visual data to recognize objects, track and recover their shape and spatial layout. The International Computer Vision Summer School ICVSS was established in 2007 to provide both an objective and clear overview and an in-depth analysis of the state-of-the-art research in Computer Vision. The courses are delivered by world renowned experts in the field, from both academia and industry, and cover both theoretical and practical aspects of real Computer Vision problems. The school is organized every year by University of Cambridge (Computer Vision and Robotics Group) and University of Catania (Image Processing Lab). Different topics are covered each year. A summary of the past Computer Vision Summer Schools can be found at: http:// www.dmi.unict.it/icvss This edited volume contains a selection of articles covering some of the talks and tutorials held during the last editions of the school. The chapters provide an in-depth overview of challenging areas with key references to the existing literature.",
"title": ""
},
{
"docid": "fe1882df52ed6555a087f7683efe80d1",
"text": "Enforcing security on various implementations of OAuth in Android apps should consider a wide range of issues comprehensively. OAuth implementations in Android apps differ from the recommended specification due to the provider and platform factors, and the varied implementations often become vulnerable. Current vulnerability assessments on these OAuth implementations are ad hoc and lack a systematic manner. As a result, insecure OAuth implementations are still widely used and the situation is far from optimistic in many mobile app ecosystems.\n To address this problem, we propose a systematic vulnerability assessment framework for OAuth implementations on Android platform. Different from traditional OAuth security analyses that are experiential with a restrictive three-party model, our proposed framework utilizes an systematic security assessing methodology that adopts a five-party, three-stage model to detect typical vulnerabilities of popular OAuth implementations in Android apps. Based on this framework, a comprehensive investigation on vulnerable OAuth implementations is conducted at the level of an entire mobile app ecosystem. The investigation studies the Chinese mainland mobile app markets (e.g., Baidu App Store, Tencent, Anzhi) that covers 15 mainstream OAuth service providers. Top 100 relevant relying party apps (RP apps) are thoroughly assessed to detect vulnerable OAuth implementations, and we further perform an empirical study of over 4,000 apps to validate how frequently developers misuse the OAuth protocol. The results demonstrate that 86.2% of the apps incorporating OAuth services are vulnerable, and this ratio of Chinese mainland Android app market is much higher than that (58.7%) of Google Play.",
"title": ""
},
{
"docid": "f7a8116cefaaf6ab82118885efac4c44",
"text": "Entrepreneurs have created a number of new Internet-based platforms that enable owners to rent out their durable goods when not using them for personal consumption. We develop a model of these kinds of markets in order to analyze the determinants of ownership, rental rates, quantities, and the surplus generated in these markets. Our analysis considers both a short run, before consumers can revise their ownership decisions and a long run, in which they can. This allows us to explore how patterns of ownership and consumption might change as a result of these new markets. We also examine the impact of bringing-to-market costs, such as depreciation, labor costs and transaction costs and consider the platform’s pricing problem. An online survey of consumers broadly supports the modeling assumptions employed. For example, ownership is determined by individuals’ forward-looking assessments of planned usage. Factors enabling sharing markets to flourish are explored. JEL L1, D23, D47",
"title": ""
},
{
"docid": "e769b09a593b68e7d47102046efc6d8d",
"text": "BACKGROUND\nExisting research indicates sleep problems to be prevalent in youth with internalizing disorders. However, childhood sleep problems are common in the general population and few data are available examining unique relationships between sleep, specific types of anxiety and depressive symptoms among non-clinical samples of children and adolescents.\n\n\nMETHODS\nThe presence of sleep problems was examined among a community sample of children and adolescents (N=175) in association with anxiety and depressive symptoms, age, and gender. Based on emerging findings from the adult literature we also examined associations between cognitive biases and sleep problems.\n\n\nRESULTS\nOverall findings revealed significant associations between sleep problems and both anxiety and depressive symptoms, though results varied by age. Depressive symptoms showed a greater association with sleep problems among adolescents, while anxiety symptoms were generally associated with sleep problems in all youth. Cognitive factors (cognitive errors and control beliefs) linked with anxiety and depression also were associated with sleep problems among adolescents, though these correlations were no longer significant after controlling for internalizing symptoms.\n\n\nCONCLUSIONS\nResults are discussed in terms of their implications for research and treatment of sleep and internalizing disorders in youth.",
"title": ""
},
{
"docid": "de5fd8ae40a2d078101d5bb1859f689b",
"text": "The number and variety of mobile multicast applications are growing at an unprecedented and unanticipated pace. Mobile network providers are in front of a dramatic increase in multicast traffic load, and this growth is forecasted to continue in fifth-generation (5G) networks. The major challenges come from the fact that multicast traffic not only targets groups of end-user devices; it also involves machine-type communications (MTC) for the Internet of Things (IoT). The increase in the MTC load, predicted for 5G, calls into question the effectiveness of the current multimedia broadcast multicast service (MBMS). The aim of this paper is to provide a survey of 5G challenges in the view of effective management of multicast applications, and to identify how to enhance the mobile network architecture to enable multicast applications in future 5G scenarios. By accounting for the presence of both human and machine-related traffic, strengths and weaknesses of the state-of-the-art achievements in multicasting are critically analyzed to provide guidelines for future research on 5G networks and more conscious design choices.",
"title": ""
}
] |
scidocsrr
|
56dbbdee15552081c9d558e100166c06
|
Conceptual analysis of fire fighting robots' control systems
|
[
{
"docid": "08bd4d2c48ebde047a8b36ce72fe61b6",
"text": "S imultaneous localization and mapping (SLAM) is the process by which a mobile robot can build a map of the environment and, at the same time, use this map to compute its location. The past decade has seen rapid and exciting progress in solving the SLAM problem together with many compelling implementations of SLAM methods. The great majority of work has focused on improving computational efficiency while ensuring consistent and accurate estimates for the map and vehicle pose. However, there has also been much research on issues such as nonlinearity, data association , and landmark characterization, all of which are vital in achieving a practical and robust SLAM implementation. This tutorial focuses on the recursive Bayesian formulation of the SLAM problem in which probability distributions or estimates of absolute or relative locations of landmarks and vehicle pose are obtained. Part I of this tutorial (IEEE Robotics & Auomation Magazine, vol. 13, no. 2) surveyed the development of the essential SLAM algorithm in state-space and particle filter form, described a number of key implementations, and cited locations of source code and real-world data for evaluation of SLAM algorithms. Part II of this tutorial (this article), surveys the current state of the art in SLAM research with a focus on three key areas: computational complexity, data association, and environment representation. Much of the mathematical notation and essential concepts used in this article are defined in Part I of this tutorial and, therefore, are not repeated here. SLAM, in its naive form, scales quadratically with the number of landmarks in a map. For real-time implementation, this scaling is potentially a substantial limitation in the use of SLAM methods. The complexity section surveys the many approaches that have been developed to reduce this complexity. These include linear-time state augmentation, sparsifica-tion in information form, partitioned updates, and submapping methods. A second major hurdle to overcome in the implementation of SLAM methods is to correctly associate observations of landmarks with landmarks held in the map. Incorrect association can lead to catastrophic failure of the SLAM algorithm. Data association is particularly important when a vehicle returns to a previously mapped region after a long excursion, the so-called loop-closure problem. The data association section surveys current data association methods used in SLAM. These include batch-validation methods that exploit constraints inherent in the SLAM formulation, appearance based methods, and multihypothesis techniques. The third development discussed in this tutorial is …",
"title": ""
}
] |
[
{
"docid": "641c611e970ce1af055608c7870eedb4",
"text": "We propose two large universe Attribute-Based Encryption constructions. In a large universe ABE construction any string can be used as an attribute and attributes need not be enumerated at system setup. Our first construction establishes a novel large universe Ciphertext-Policy ABE scheme on prime order bilinear groups, while the second achieves a significant efficiency improvement over the large universe Key-Policy ABE systems of Lewko-Waters and Lewko. Both schemes are selectively secure in the standard model under two “q-type” assumptions similar to ones used in prior works. Our work brings back “program and cancel” techniques to this problem. We provide implementations and benchmarks of our constructions in Charm; a programming environment for rapid prototyping of cryptographic primitives.",
"title": ""
},
{
"docid": "7bb04f2163e253068ac665f12a5dd35c",
"text": "Automatic segmentation of the liver and hepatic lesions is an important step towards deriving quantitative biomarkers for accurate clinical diagnosis and computer-aided decision support systems. This paper presents a method to automatically segment liver and lesions in CT and MRI abdomen images using cascaded fully convolutional neural networks (CFCNs) enabling the segmentation of large-scale medical trials and quantitative image analyses. We train and cascade two FCNs for the combined segmentation of the liver and its lesions. As a first step, we train an FCN to segment the liver as ROI input for a second FCN. The second FCN solely segments lesions within the predicted liver ROIs of step 1. CFCN models were trained on an abdominal CT dataset comprising 100 hepatic tumor volumes. Validation results on further datasets show that CFCN-based semantic liver and lesion segmentation achieves Dice scores over 94% for the liver with computation times below 100s per volume. We further experimentally demonstrate the robustness of the proposed method on 38 MRI liver tumor volumes and the public 3DIRCAD dataset.",
"title": ""
},
{
"docid": "40c16b5db17fa31a1bdae7e66a297ea7",
"text": "Code smells, i.e., symptoms of poor design and implementation choices applied by programmers during the development of a software project [2], represent an important factor contributing to technical debt [3]. The research community spent a lot of effort studying the extent to which code smells tend to remain in a software project for long periods of time [9], as well as their negative impact on non-functional properties of source code [4, 7]. As a consequence, several tools and techniques have been proposed to help developers in detecting code smells and to suggest refactoring opportunities (e.g., [5, 6, 8]).\n So far, almost all detectors identify code smells using structural properties of source code. However, recent studies have indicated that code smells detected by existing tools are generally ignored (and thus not refactored) by the developers [1]. A possible reason is that developers do not perceive the code smells identified by the tool as actual design problems or, if they do, they are not able to practically work on such code smells. In other words, there is misalignment between what is considered smelly by the tool and what is actually refactorable by developers.\n In a previous paper [6], we introduced a tool named TACO that uses textual analysis to detect code smells. The results indicated that textual and structural techniques are complementary: while some code smell instances in a software system can be correctly identified by both TACO and the alternative structural approaches, other instances can be only detected by one of the two [6].\n In this paper, we investigate whether code smells detected using textual information are as difficult to identify and refactor as structural smells or if they follow a different pattern during software evolution. We firstly performed a repository mining study considering 301 releases and 183,514 commits from 20 open source projects (i) to verify whether textually and structurally detected code smells are treated differently, and (ii) to analyze their likelihood of being resolved with regards to different types of code changes, e.g., refactoring operations. Since our quantitative study cannot explain relation and causation between code smell types and maintenance activities, we perform a qualitative study with 19 industrial developers and 5 software quality experts in order to understand (i) how code smells identified using different sources of information are perceived, and (ii) whether textually or structurally detected code smells are easier to refactor. In both studies, we focused on five code smell types, i.e., Blob, Feature Envy, Long Method, Misplaced Class, and Promiscuous Package.\n The results of our studies indicate that textually detected code smells are perceived as harmful as the structural ones, even though they do not exceed any typical software metrics' value (e.g., lines of code in a method). Moreover, design problems in source code affected by textual-based code smells are easier to identify and refactor. As a consequence, developers' activities tend to decrease the intensity of textual code smells, positively impacting their likelihood of being resolved. Vice versa, structural code smells typically increase in intensity over time, indicating that maintenance operations are not aimed at removing or limiting them. 
Indeed, while developers perceive source code affected by structural-based code smells as harmful, they face more problems in correctly identifying the actual design problems affecting these code components and/or the right refactoring operation to apply to remove them.",
"title": ""
},
{
"docid": "bfb189f8052f41fe1491d8d71f9586f1",
"text": "In this paper, we introduce a novel reconfigurable architecture, named 3D field-programmable gate array (3D nFPGA), which utilizes 3D integration techniques and new nanoscale materials synergistically. The proposed architecture is based on CMOS nanohybrid techniques that incorporate nanomaterials such as carbon nanotube bundles and nanowire crossbars into CMOS fabrication process. This architecture also has built-in features for fault tolerance and heat alleviation. Using unique features of FPGAs and a novel 3D stacking method enabled by the application of nanomaterials, 3D nFPGA obtains a 4x footprint reduction comparing to the traditional CMOS-based 2D FPGAs. With a customized design automation flow, we evaluate the performance and power of 3D nFPGA driven by the 20 largest MCNC benchmarks. Results demonstrate that 3D nFPGA is able to provide a performance gain of 2.6 x with a small power overhead comparing to the traditional 2D FPGA architecture.",
"title": ""
},
{
"docid": "bdc1d214884770b979161ba709454486",
"text": "The traditional two-stage stochastic programming approach is to minimize the total expected cost with the assumption that the distribution of the random parameters is known. However, in most practices, the actual distribution of the random parameters is not known, and instead, only a series of historical data are available. Thus, the solution obtained from the traditional twostage stochastic program can be biased and suboptimal for the true problem, if the estimated distribution of the random parameters is not accurate, which is usually true when only a limited amount of historical data are available. In this paper, we propose a data-driven risk-averse stochastic optimization approach. Based on the observed historical data, we construct the confidence set of the ambiguous distribution of the random parameters, and develop a riskaverse stochastic optimization framework to minimize the total expected cost under the worstcase distribution within the constructed confidence set. We introduce the Wasserstein metric to construct the confidence set and by using this metric, we can successfully reformulate the risk-averse two-stage stochastic program to its tractable counterpart. In addition, we derive the worst-case distribution and develop efficient algorithms to solve the reformulated problem. Moreover, we perform convergence analysis to show that the risk averseness of the proposed formulation vanishes as the amount of historical data grows to infinity, and accordingly, the corresponding optimal objective value converges to that of the traditional risk-neutral twostage stochastic program. We further precisely derive the convergence rate, which indicates the value of data. Finally, the numerical experiments on risk-averse stochastic facility location and stochastic unit commitment problems verify the effectiveness of our proposed framework.",
"title": ""
},
{
"docid": "c36cf972ad94cd4a583d7257cb0bdbb1",
"text": "Practitioners and researchers alike increasingly use social media messages as an additional source of information to analyse stock price movements. In this regard, previous preliminary findings demonstrate the incremental value of considering the multi-dimensional structure of human emotions in sentiment analysis instead of the predominant assessment of the binary positive-negative valence of emotions. Therefore, based on emotion theory and an established sentiment lexicon, we develop and apply an open source dictionary for the analysis of seven different emotions (affection, happiness, satisfaction, fear, anger, depression, and contempt).To investigate the connection between the differential emotions and stock movements we analyse approximately 5.5 million Twitter messages on 33 S&P 100 companies and their respective NYSE stock prices from Yahoo!Finance over a period of three months. Subsequently, we conduct a lagged fixed-effects panel regression on the daily closing value differences. The results generally support the assumption of the necessity of considering a more differentiated sentiment. Moreover, comparing positive and negative valence, we find that only the average negative emotionality strength has a significant connection with company-specific stock price movements. The emotion specific analysis reveals that an increase in depression and happiness strength is associated with a significant decrease in company-specific stock prices.",
"title": ""
},
{
"docid": "03e267aeeef5c59aab348775d264afce",
"text": "Visual relations, such as person ride bike and bike next to car, offer a comprehensive scene understanding of an image, and have already shown their great utility in connecting computer vision and natural language. However, due to the challenging combinatorial complexity of modeling subject-predicate-object relation triplets, very little work has been done to localize and predict visual relations. Inspired by the recent advances in relational representation learning of knowledge bases and convolutional object detection networks, we propose a Visual Translation Embedding network (VTransE) for visual relation detection. VTransE places objects in a low-dimensional relation space where a relation can be modeled as a simple vector translation, i.e., subject + predicate ≈ object. We propose a novel feature extraction layer that enables object-relation knowledge transfer in a fully-convolutional fashion that supports training and inference in a single forward/backward pass. To the best of our knowledge, VTransE is the first end-toend relation detection network. We demonstrate the effectiveness of VTransE over other state-of-the-art methods on two large-scale datasets: Visual Relationship and Visual Genome. Note that even though VTransE is a purely visual model, it is still competitive to the Lu’s multi-modal model with language priors [27].",
"title": ""
},
{
"docid": "8ec9a57e096e05ad57e3421b67dc1b27",
"text": "I review the literature on equity market momentum, a seminal and intriguing finding in finance. This phenomenon is the ability of returns over the past one to four quarters to predict future returns over the same period in the cross-section of equities. I am able to document about ten different theories for momentum, and a large volume of empirical work on the topic. I find, however, that after a quarter century following the discovery of momentum by Jegadeesh and Titman (1993), we are still no closer to finding a discernible cause for this phenomenon, in spite of the extensive work on the topic. More needs to be done to develop tests that are focused not so much on testing one specific theory, but on ruling out alternative",
"title": ""
},
{
"docid": "dd48abf39ab52758719d5be06dc8e733",
"text": "A new algorithm for Boolean operations on general planar polygons is presented. It is available for general planar polygons (manifold or non-manifold, with or without holes). Edges of the two general polygons are subdivided at the intersection points and touching points. Thus, the boundary of the Boolean operation resultant polygon is made of some whole edges of the polygons after the subdivision process. We use the simplex theory to build the basic mathematical model of the new algorithm. The subordination problem between an edge and a polygon is reduced to a problem of determining whether a point is on some edges of some simplices or inside the simplices, and the associated simplicial chain of the resultant polygon is just an assembly of some simplices and their coefficients of the two polygons after the subdivision process. Examples show that the running time required by the new algorithm is less than one-third of that by the Rivero and Feito algorithm. r 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e9e7a68578f23b85bee9ebfe1b923f87",
"text": "Low-density lipoprotein (LDL) is the most abundant and the most atherogenic class of cholesterol-carrying lipoproteins in human plasma. The level of plasma LDL is regulated by the LDL receptor, a cell surface glycoprotein that removes LDL from plasma by receptor-mediated endocytosis. Defects in the gene encoding the LDL receptor, which occur in patients with familial hypercholesterolemia, elevate the plasma LDL level and produce premature coronary atherosclerosis. The physiologically important LDL receptors are located primarily in the liver, where their number is regulated by the cholesterol content of the hepatocyte. When the cholesterol content of hepatocytes is raised by ingestion of diets high in saturated fat and cholesterol, LDL receptors fall and plasma LDL levels rise. Conversely, maneuvers that lower the cholesterol content of hepatocytes, such as ingestion of drugs that inhibit cholesterol synthesis (mevinolin or compactin) or prevent the reutilization of bile acids (cholestyramine or colestipol), stimulate LDL receptor production and lower plasma LDL levels. The normal process of receptor regulation can therefore be exploited in powerful and novel ways so as to reverse hypercholesterolemia and prevent atherosclerosis.",
"title": ""
},
{
"docid": "a937f479b462758a089ed23cfa5a0099",
"text": "The paper outlines the development of a large vocabulary continuous speech recognition (LVCSR) system for the Indonesian language within the Asian speech translation (A-STAR) project. An overview of the A-STAR project and Indonesian language characteristics will be briefly described. We then focus on a discussion of the development of Indonesian LVCSR, including data resources issues, acoustic modeling, language modeling, the lexicon, and accuracy of recognition. There are three types of Indonesian data resources: daily news, telephone application, and BTEC tasks, which are used in this project. They are available in both text and speech forms. The Indonesian speech recognition engine was trained using the clean speech of both daily news and telephone application tasks. The optimum performance achieved on the BTEC task was 92.47% word accuracy. 1 A-STAR Project Overview The A-STAR project is an Asian consortium that is expected to advance the state-of-the-art in multilingual man-machine interfaces in the Asian region. This basic infrastructure will accelerate the development of large-scale spoken language corpora in Asia and also facilitate the development of related fundamental information communication technologies (ICT), such as multi-lingual speech translation, Figure 1: Outline of future speech-technology services connecting each area in the Asian region through network. multi-lingual speech transcription, and multi-lingual information retrieval. These fundamental technologies can be applied to the human-machine interfaces of various telecommunication devices and services connecting Asian countries through the network using standardized communication protocols as outlined in Fig. 1. They are expected to create digital opportunities, improve our digital capabilities, and eliminate the digital divide resulting from the differences in ICT levels in each area. The improvements to borderless communication in the Asian region are expected to result in many benefits in everyday life including tourism, business, education, and social security. The project was coordinated together by the Advanced Telecommunication Research (ATR) and the National Institute of Information and Communications Technology (NICT) Japan in cooperation with several research institutes in Asia, such as the National Laboratory of Pattern Recognition (NLPR) in China, the Electronics and Telecommunication Research Institute (ETRI) in Korea, the Agency for the Assessment and Application Technology (BPPT) in Indonesia, the National Electronics and Computer Technology Center (NECTEC) in Thailand, the Center for Development of Advanced Computing (CDAC) in India, the National Taiwan University (NTU) in Taiwan. Partners are still being sought for other languages in Asia. More details about the A-STAR project can be found in (Nakamura et al., 2007). 2 Indonesian Language Characteristic The Indonesian language, or so-called Bahasa Indonesia, is a unified language formed from hundreds of languages spoken throughout the Indonesian archipelago. Compared to other languages, which have a high density of native speakers, Indonesian is spoken as a mother tongue by only 7% of the population, and more than 195 million people speak it as a second language with varying degrees of proficiency. There are approximately 300 ethnic groups living throughout 17,508 islands, speaking 365 native languages or no less than 669 dialects (Tan, 2004). 
At home, people speak their own language, such as Javanese, Sundanese or Balinese, even though almost everybody has a good understanding of Indonesian as they learn it in school. Although the Indonesian language is infused with highly distinctive accents from different ethnic languages, there are many similarities in patterns across the archipelago. Modern Indonesian is derived from the literary of the Malay dialect. Thus, it is closely related to the Malay spoken in Malaysia, Singapore, Brunei, and some other areas. Unlike the Chinese language, it is not a tonal language. Compared with European languages, Indonesian has a strikingly small use of gendered words. Plurals are often expressed by means of word repetition. It is also a member of the agglutinative language family, meaning that it has a complex range of prefixes and suffixes, which are attached to base words. Consequently, a word can become very long. More details on Indonesian characteristics can be found in (Sakti et al., 2004). 3 Indonesian Phoneme Set The Indonesian phoneme set is defined based on Indonesian grammar described in (Alwi et al., 2003). A full phoneme set contains 33 phoneme symbols in total, which consists of 10 vowels (including diphthongs), 22 consonants, and one silent symbol. The vowel articulation pattern of the Indonesian language, which indicates the first two resonances of the vocal tract, F1 (height) and F2 (backness), is shown in Fig. 2.",
"title": ""
},
{
"docid": "8c7af6b1aa36c5369c7e023dd84dabfd",
"text": "This paper compares various methodologies for the design of Sobel Edge Detection Algorithm on Field Programmable Gate Arrays (FPGAs). We show some characteristics to design a computer vision algorithm to suitable hardware platforms. We evaluate hardware resources and power consumption of Sobel Edge Detection on two studies: Xilinx system generator (XSG) and Vivado_HLS tools which both are very useful tools for developing computer vision algorithms. The comparison the hardware resources and power consumption among FPGA platforms (Zynq-7000 AP SoC, Spartan 3A DSP) are analyzed. The hardware resources by using Vivado_HLS on both platforms are used less 9 times with BRAM_18K, 7 times with DSP48E, 2 times with FFs, and approximately with LUTs comparing with XSG. In addition, the power consumption on Zynq-7000 AP SoC spends more 30% by using Vivado_HLS than by using XSG tool and for Spartan 3A DSP consumes a half of power comparing with by using XSG tool. In the study by using Vivado_HLS shows that power consumption depends on frequency.",
"title": ""
},
{
"docid": "ce282fba1feb109e03bdb230448a4f8a",
"text": "The goal of two-sample tests is to assess whether two samples, SP ∼ P and SQ ∼ Q, are drawn from the same distribution. Perhaps intriguingly, one relatively unexplored method to build two-sample tests is the use of binary classifiers. In particular, construct a dataset by pairing the n examples in SP with a positive label, and by pairing the m examples in SQ with a negative label. If the null hypothesis “P = Q” is true, then the classification accuracy of a binary classifier on a held-out subset of this dataset should remain near chance-level. As we will show, such Classifier Two-Sample Tests (C2ST) learn a suitable representation of the data on the fly, return test statistics in interpretable units, have a simple null distribution, and their predictive uncertainty allow to interpret where P and Q differ. The goal of this paper is to establish the properties, performance, and uses of C2ST. First, we analyze their main theoretical properties. Second, we compare their performance against a variety of state-of-the-art alternatives. Third, we propose their use to evaluate the sample quality of generative models with intractable likelihoods, such as Generative Adversarial Networks (GANs). Fourth, we showcase the novel application of GANs together with C2ST for causal discovery.",
"title": ""
},
{
"docid": "e13874aa8c3fe19bb2a176fd3a039887",
"text": "As a typical deep learning model, Convolutional Neural Network (CNN) has shown excellent ability in solving complex classification problems. To apply CNN models in mobile ends and wearable devices, a fully pipelined hardware architecture adopting a Row Processing Tree (RPT) structure with small memory resource consumption between convolutional layers is proposed. A modified Row Stationary (RS) dataflow is implemented to evaluate the RPT architecture. Under the the same work frequency requirement for these two architectures, the experimental results show that the RPT architecture reduces 91% on-chip memory and 75% DRAM bandwidth compared with the modified RS dataflow, but the throughput of the modified RS dataflow is 3 times higher than the our proposed RPT architecture. The RPT architecture can achieve 121fps at 100MHZ while processing a CNN including 4 convolutional layers.",
"title": ""
},
{
"docid": "a76071628d25db972127702b974d4849",
"text": "Surveying 3D scenes is a common task in robotics. Systems can do so autonomously by iteratively obtaining measurements. This process of planning observations to improve the model of a scene is called Next Best View (NBV) planning. NBV planning approaches often use either volumetric (e.g., voxel grids) or surface (e.g., triangulated meshes) representations. Volumetric approaches generalise well between scenes as they do not depend on surface geometry but do not scale to high-resolution models of large scenes. Surface representations can obtain high-resolution models at any scale but often require tuning of unintuitive parameters or multiple survey stages. This paper presents a scene-model-free NBV planning approach with a density representation. The Surface Edge Explorer (SEE) uses the density of current measurements to detect and explore observed surface boundaries. This approach is shown experimentally to provide better surface coverage in lower computation time than the evaluated state-of-the-art volumetric approaches while moving equivalent distances.",
"title": ""
},
{
"docid": "8bd9e3fe5d2b6fe8d58a86baf3de3522",
"text": "Hand pose estimation from single depth images is an essential topic in computer vision and human computer interaction. Despite recent advancements in this area promoted by convolutional neural networks, accurate hand pose estimation is still a challenging problem. In this paper we propose a novel approach named as Pose guided structured Region Ensemble Network (Pose-REN) to boost the performance of hand pose estimation. Under the guidance of an initially estimated pose, the proposed method extracts regions from the feature maps of convolutional neural network and generates more optimal and representative features for hand pose estimation. The extracted feature regions are then integrated hierarchically according to the topology of hand joints by tree-structured fully connections to regress the refined hand pose. The final hand pose is obtained by an iterative cascaded method. Comprehensive experiments on public hand pose datasets demonstrate that our proposed method outperforms state-of-the-art algorithms.",
"title": ""
},
{
"docid": "36bdc3b5f9ce2fbbff0dd815bf3eee67",
"text": "A patient with upper limb dimelia including a double scapula, humerus, radius, and ulna, 11 metacarpals and digits (5 on the superior side, 6 on the inferior side) was treated with a simple amputation of the inferior limb resulting in cosmetic improvement and maintenance of range of motion in the preserved limb. During the amputation, the 2 limbs were found to be anatomically separate except for the ulnar nerve, which, in the superior limb, bifurcated into the sensory branch of radial nerve in the inferior limb, and the brachial artery, which bifurcated into the radial artery. Each case of this rare anomaly requires its own individually carefully planned surgical procedure.",
"title": ""
},
{
"docid": "a99785b0563ca5922da304f69aa370c0",
"text": "Marcel Fritz, Christian Schlereth, Stefan Figge Empirical Evaluation of Fair Use Flat Rate Strategies for Mobile Internet The fair use flat rate is a promising tariff concept for the mobile telecommunication industry. Similar to classical flat rates it allows unlimited usage at a fixed monthly fee. Contrary to classical flat rates it limits the access speed once a certain usage threshold is exceeded. Due to the current global roll-out of the LTE (Long Term Evolution) technology and the related economic changes for telecommunication providers, the application of fair use flat rates needs a reassessment. We therefore propose a simulation model to evaluate different pricing strategies and their contribution margin impact. The key input element of the model is provided by socalled discrete choice experiments that allow the estimation of customer preferences. Based on this customer information and the simulation results, the article provides the following recommendations. Classical flat rates do not allow profitable provisioning of mobile Internet access. Instead, operators should apply fair use flat rates with a lower usage threshold of 1 or 3 GB which leads to an improved contribution margin. Bandwidth and speed are secondary and do merely impact customer preferences. The main motivation for new mobile technologies such as LTE should therefore be to improve the cost structure of an operator rather than using it to skim an assumed higher willingness to pay of mobile subscribers.",
"title": ""
},
{
"docid": "0320ebc09663ecd6bf5c39db472fcbde",
"text": "The human visual system is capable of learning an unbounded number of facts from images including not only objects but also their attributes, actions and interactions. Such uniform understanding of visual facts has not received enough attention. Existing visual recognition systems are typically modeled differently for each fact type such as objects, actions, and interactions. We propose a setting where all these facts can be modeled simultaneously with a capacity to understand an unbounded number of facts in a structured way. The training data comes as structured facts in images, including (1) objects (e.g., <boy>), (2) attributes (e.g., <boy, tall>), (3) actions (e.g., <boy, playing>), and (4) interactions (e.g., <boy, riding, a horse >). Each fact has a language view (e.g., < boy, playing>) and a visual view (an image). We show that learning visual facts in a structured way enables not only a uniform but also generalizable visual understanding. We propose and investigate recent and strong approaches from the multiview learning literature and also introduce a structured embedding model. We applied the investigated methods on several datasets that we augmented with structured facts and a large scale dataset of > 202,000 facts and 814,000 images. Our results show the advantage of relating facts by the structure by the proposed model compared to the baselines.",
"title": ""
},
{
"docid": "795e9da03d2b2d6e66cf887977fb24e9",
"text": "Researchers working on the planning, scheduling, and execution of scientific workflows need access to a wide variety of scientific workflows to evaluate the performance of their implementations. This paper provides a characterization of workflows from six diverse scientific applications, including astronomy, bioinformatics, earthquake science, and gravitational-wave physics. The characterization is based on novel workflow profiling tools that provide detailed information about the various computational tasks that are present in the workflow. This information includes I/O, memory and computational characteristics. Although the workflows are diverse, there is evidence that each workflow has a job type that consumes the most amount of runtime. The study also uncovered inefficiency in a workflow component implementation, where the component was re-reading the same data multiple times. © 2012 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
c0114e72609cd1e0f502e9bcc33c614e
|
Survey on Classification Algorithms for Data Mining: (Comparison and Evaluation)
|
[
{
"docid": "27fd27cf86b68822b3cfb73cff2e2cb6",
"text": "Patients with Liver disease have been continuously increasing because of excessive consumption of alcohol, inhale of harmful gases, intake of contaminated food, pickles and drugs. Automatic classification tools may reduce burden on doctors. This paper evaluates the selected classification algorithms for the classification of some liver patient datasets. The classification algorithms considered here are Naïve Bayes classifier, C4.5, Back propagation Neural Network algorithm, and Support Vector Machines. These algorithms are evaluated based on four criteria: Accuracy, Precision, Sensitivity and Specificity.",
"title": ""
},
{
"docid": "8d9a02974ad85aa508dc0f7a85a669f1",
"text": "The successful application of data mining in highly visible fields like e-business, marketing and retail has led to its application in other industries and sectors. Among these sectors just discovering is healthcare. The healthcare environment is still „information rich‟ but „knowledge poor‟. There is a wealth of data available within the healthcare systems. However, there is a lack of effective analysis tools to discover hidden relationships and trends in data. This research paper intends to provide a survey of current techniques of knowledge discovery in databases using data mining techniques that are in use in today‟s medical research particularly in Heart Disease Prediction. Number of experiment has been conducted to compare the performance of predictive data mining technique on the same dataset and the outcome reveals that Decision Tree outperforms and some time Bayesian classification is having similar accuracy as of decision tree but other predictive methods like KNN, Neural Networks, Classification based on clustering are not performing well. The second conclusion is that the accuracy of the Decision Tree and Bayesian Classification further improves after applying genetic algorithm to reduce the actual data size to get the optimal subset of attribute sufficient for heart disease prediction.",
"title": ""
}
] |
[
{
"docid": "c1a4da111d6e3496845b4726dfabcb5b",
"text": "A growing number of information technology systems and services are being developed to change users’ attitudes or behavior or both. Despite the fact that attitudinal theories from social psychology have been quite extensively applied to the study of user intentions and behavior, these theories have been developed for predicting user acceptance of the information technology rather than for providing systematic analysis and design methods for developing persuasive software solutions. This article is conceptual and theory-creating by its nature, suggesting a framework for Persuasive Systems Design (PSD). It discusses the process of designing and evaluating persuasive systems and describes what kind of content and software functionality may be found in the final product. It also highlights seven underlying postulates behind persuasive systems and ways to analyze the persuasion context (the intent, the event, and the strategy). The article further lists 28 design principles for persuasive system content and functionality, describing example software requirements and implementations. Some of the design principles are novel. Moreover, a new categorization of these principles is proposed, consisting of the primary task, dialogue, system credibility, and social support categories.",
"title": ""
},
{
"docid": "4d389e4f6e33d9f5498e3071bf116a49",
"text": "This paper reviews the origins and definitions of social capital in the writings of Bourdieu, Loury, and Coleman, among other authors. It distinguishes four sources of social capital and examines their dynamics. Applications of the concept in the sociological literature emphasize its role in social control, in family support, and in benefits mediated by extrafamilial networks. I provide examples of each of these positive functions. Negative consequences of the same processes also deserve attention for a balanced picture of the forces at play. I review four such consequences and illustrate them with relevant examples. Recent writings on social capital have extended the concept from an individual asset to a feature of communities and even nations. The final sections describe this conceptual stretch and examine its limitations. I argue that, as shorthand for the positive consequences of sociability, social capital has a definite place in sociological theory. However, excessive extensions of the concept may jeopardize its heuristic value. Alejandro Portes: Biographical Sketch Alejandro Portes is professor of sociology at Princeton University and faculty associate of the Woodrow Wilson School of Public Affairs. He formerly taught at Johns Hopkins where he held the John Dewey Chair in Arts and Sciences, Duke University, and the University of Texas-Austin. In 1997 he held the Emilio Bacardi distinguished professorship at the University of Miami. In the same year he was elected president of the American Sociological Association. Born in Havana, Cuba, he came to the United States in 1960. He was educated at the University of Havana, Catholic University of Argentina, and Creighton University. He received his MA and PhD from the University of Wisconsin-Madison. 0360-0572/98/0815-0001$08.00 1 A nn u. R ev . S oc io l. 19 98 .2 4: 124 . D ow nl oa de d fr om w w w .a nn ua lr ev ie w s. or g A cc es s pr ov id ed b y St an fo rd U ni ve rs ity M ai n C am pu s R ob er t C ro w n L aw L ib ra ry o n 03 /1 0/ 17 . F or p er so na l u se o nl y. Portes is the author of some 200 articles and chapters on national development, international migration, Latin American and Caribbean urbanization, and economic sociology. His most recent books include City on the Edge, the Transformation of Miami (winner of the Robert Park award for best book in urban sociology and of the Anthony Leeds award for best book in urban anthropology in 1995); The New Second Generation (Russell Sage Foundation 1996); Caribbean Cities (Johns Hopkins University Press); and Immigrant America, a Portrait. The latter book was designated as a centennial publication by the University of California Press. It was originally published in 1990; the second edition, updated and containing new chapters on American immigration policy and the new second generation, was published in 1996.",
"title": ""
},
{
"docid": "18faba65741b6871517c8050aa6f3a45",
"text": "Individuals differ in the manner they approach decision making, namely their decision-making styles. While some people typically make all decisions fast and without hesitation, others invest more effort into deciding even about small things and evaluate their decisions with much more scrutiny. The goal of the present study was to explore the relationship between decision-making styles, perfectionism and emotional processing in more detail. Specifically, 300 college students majoring in social studies and humanities completed instruments designed for assessing maximizing, decision commitment, perfectionism, as well as emotional regulation and control. The obtained results indicate that maximizing is primarily related to one dimension of perfectionism, namely the concern over mistakes and doubts, as well as emotional regulation and control. Furthermore, together with the concern over mistakes and doubts, maximizing was revealed as a significant predictor of individuals' decision commitment. The obtained findings extend previous reports regarding the association between maximizing and perfectionism and provide relevant insights into their relationship with emotional regulation and control. They also suggest a need to further explore these constructs that are, despite their complex interdependence, typically investigated in separate contexts and domains.",
"title": ""
},
{
"docid": "fb6068d738c7865d07999052750ff6a8",
"text": "Malware detection and prevention methods are increasingly becoming necessary for computer systems connected to the Internet. The traditional signature based detection of malware fails for metamorphic malware which changes its code structurally while maintaining functionality at time of propagation. This category of malware is called metamorphic malware. In this paper we dynamically analyze the executables produced from various metamorphic generators through an emulator by tracing API calls. A signature is generated for an entire malware class (each class representing a family of viruses generated from one metamorphic generator) instead of for individual malware sample. We show that most of the metamorphic viruses of same family are detected by the same base signature. Once a base signature for a particular metamorphic generator is generated, all the metamorphic viruses created from that tool are easily detected by the proposed method. A Proximity Index between the various Metamorphic generators has been proposed to determine how similar two or more generators are.",
"title": ""
},
{
"docid": "66e7979aff5860f713dffd10e98eed3d",
"text": "The encoder-decoder dialog model is one of the most prominent methods used to build dialog systems in complex domains. Yet it is limited because it cannot output interpretable actions as in traditional systems, which hinders humans from understanding its generation process. We present an unsupervised discrete sentence representation learning method that can integrate with any existing encoderdecoder dialog models for interpretable response generation. Building upon variational autoencoders (VAEs), we present two novel models, DI-VAE and DI-VST that improve VAEs and can discover interpretable semantics via either auto encoding or context predicting. Our methods have been validated on real-world dialog datasets to discover semantic representations and enhance encoder-decoder models with interpretable generation.1",
"title": ""
},
{
"docid": "215aa5e4d0837fe56179de182b1613e0",
"text": "Today Security of data is of foremost importance in today's world. Security has become one of the most important factor in communication and information technology. For this purpose steganography is used. Steganography is the art of hiding secret or sensitive information into digital media like images so as to have secure communication. In this paper we present and discuss LSB (Least Significant Bit) based image steganography and AES",
"title": ""
},
{
"docid": "3af344724ba7a3966968d035727ad705",
"text": "We prove a simple relationship between extended binomial coefficients — natural extensions of the well-known binomial coefficients — and weighted restricted integer compositions. Moreover, we give a very useful interpretation of extended binomial coefficients as representing distributions of sums of independent discrete random variables. We apply our results, e.g., to determine the distribution of the sum of k logarithmically distributed random variables, and to determining the distribution, specifying all moments, of the random variable whose values are part-products of random restricted integer compositions. Based on our findings and using the central limit theorem, we also give generalized Stirling formulae for central extended binomial coefficients. We enlarge the list of known properties of extended binomial coefficients.",
"title": ""
},
{
"docid": "d7c7eaae670910f78038e439b1553032",
"text": "Wireless powered communication networks (WPCNs), where multiple energy-limited devices first harvest energy in the downlink and then transmit information in the uplink, have been envisioned as a promising solution for the future Internet-of-Things (IoT). Meanwhile, nonorthogonal multiple access (NOMA) has been proposed to improve the system spectral efficiency (SE) of the fifth-generation (5G) networks by allowing concurrent transmissions of multiple users in the same spectrum. As such, NOMA has been recently considered for the uplink of WPCNs based IoT networks with a massive number of devices. However, simultaneous transmissions in NOMA may also incur more transmit energy consumption as well as circuit energy consumption in practice which is critical for energy constrained IoT devices. As a result, compared to orthogonal multiple access schemes such as time-division multiple access (TDMA), whether the SE can be improved and/or the total energy consumption can be reduced with NOMA in such a scenario still remains unknown. To answer this question, we first derive the optimal time allocations for maximizing the SE of a TDMA-based WPCN (T-WPCN) and a NOMA-based WPCN (N-WPCN), respectively. Subsequently, we analyze the total energy consumption as well as the maximum SE achieved by these two networks. Surprisingly, it is found that N-WPCN not only consumes more energy, but also is less spectral efficient than T-WPCN. Simulation results verify our theoretical findings and unveil the fundamental performance bottleneck, i.e., “worst user bottleneck problem”, in multiuser NOMA systems.",
"title": ""
},
{
"docid": "2dc2b9d60244e819a85b33581800ae56",
"text": "In this study, a simple and effective silver ink formulation was developed to generate silver tracks with high electrical conductivity on flexible substrates at low sintering temperatures. Diethanolamine (DEA), a self-oxidizing compound at moderate temperatures, was mixed with a silver ammonia solution to form a clear and stable solution. After inkjet-printed or pen-written on plastic sheets, DEA in the silver ink decomposes at temperatures higher than 50 °C and generates formaldehyde, which reacts spontaneously with silver ammonia ions to form silver thin films. The electrical conductivity of the inkjet-printed silver films can be 26% of the bulk silver after heating at 75 °C for 20 min and show great adhesion on plastic sheets.",
"title": ""
},
{
"docid": "b4a5ebf335cc97db3790c9e2208e319d",
"text": "We examine whether conservative white males are more likely than are other adults in the U.S. general public to endorse climate change denial. We draw theoretical and analytical guidance from the identityprotective cognition thesis explaining the white male effect and from recent political psychology scholarship documenting the heightened system-justification tendencies of political conservatives. We utilize public opinion data from ten Gallup surveys from 2001 to 2010, focusing specifically on five indicators of climate change denial. We find that conservative white males are significantly more likely than are other Americans to endorse denialist views on all five items, and that these differences are even greater for those conservative white males who self-report understanding global warming very well. Furthermore, the results of our multivariate logistic regression models reveal that the conservative white male effect remains significant when controlling for the direct effects of political ideology, race, and gender as well as the effects of nine control variables. We thus conclude that the unique views of conservative white males contribute significantly to the high level of climate change denial in the United States. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ae5a1d9874b9fd1358d7768936c85491",
"text": "Photoplethysmography (PPG) is a technique that uses light to noninvasively obtain a volumetric measurement of an organ with each cardiac cycle. A PPG-based system emits monochromatic light through the skin and measures the fraction of the light power which is transmitted through a vascular tissue and detected by a photodetector. Part of thereby transmitted light power is modulated by the vascular tissue volume changes due to the blood circulation induced by the heart beating. This modulated light power plotted against time is called the PPG signal. Pulse Oximetry is an empirical technique which allows the arterial blood oxygen saturation (SpO2 – molar fraction) evaluation from the PPG signals. There have been many reports in the literature suggesting that other arterial blood chemical components molar fractions and concentrations can be evaluated from the PPG signals. Most attempts to perform such evaluation on empirical bases have failed, especially for components concentrations. This paper introduces a non-empirical physical model which can be used to analytically investigate the phenomena of PPG signal. Such investigation would result in simplified engineering models, which can be used to design validating experiments and new types of spectroscopic devices with the potential to assess venous and arterial blood chemical composition in both molar fractions and concentrations non-invasively.",
"title": ""
},
{
"docid": "648a1ff0ad5b2742ff54460555287c84",
"text": "In the European academic and institutional debate, interoperability is predominantly seen as a means to enable public administrations to collaborate within Members State and across borders. The article presents a conceptual framework for ICT-enabled governance and analyses the role of interoperability in this regard. The article makes a specific reference to the exploratory research project carried out by the Information Society Unit of the Institute for Prospective Technological Studies (IPTS) of the European Commission’s Joint Research Centre on emerging ICT-enabled governance models in EU cities (EXPGOV). The aim of this project is to study the interplay between ICTs and governance processes at city level and formulate an interdisciplinary framework to assess the various dynamics emerging from the application of ICT-enabled service innovations in European cities. In this regard, the conceptual framework proposed in this article results from an action research perspective and investigation of e-governance experiences carried out in Europe. It aims to elicit the main value drivers that should orient how interoperable systems are implemented, considering the reciprocal influences that occur between these systems and different governance models in their specific context.",
"title": ""
},
{
"docid": "7716fcbb39961666483835e4db1da5b4",
"text": "Software development is a knowledge intensive and collaborative activity. The success of the project totally depends on knowledge and experience of the developers. Increasing knowledge creation and sharing among software engineers are uphill tasks in software development environments. The field of knowledge management has emerged into this field to improve the productivity of the software by effective and efficient knowledge creation, sharing and transferring. In other words, knowledge management for software engineering aims at facilitating knowledge flow and utilization across every phases of a software engineering process. Therefore, adaptation of various knowledge management practices by software engineering organizations is essential. This survey identified the knowledge management involvement in software engineering in different perspectives in the recent literature and guide future research in this area.",
"title": ""
},
{
"docid": "543b79408c3b66476efc66f3a29d1fb0",
"text": "Because of polysemy, distant labeling for information extraction leads to noisy training data. We describe a procedure for reducing this noise by using label propagation on a graph in which the nodes are entity mentions, and mentions are coupled when they occur in coordinate list structures. We show that this labeling approach leads to good performance even when off-the-shelf classifiers are used on the distantly-labeled data.",
"title": ""
},
{
"docid": "b78f935622b143bbbcaff580ba42e35d",
"text": "A churn is defined as the loss of a user in an online social network (OSN). Detecting and analyzing user churn at an early stage helps to provide timely delivery of retention solutions (e.g., interventions, customized services, and better user interfaces) that are useful for preventing users from churning. In this paper we develop a prediction model based on a clustering scheme to analyze the potential churn of users. In the experiment, we test our approach on a real-name OSN which contains data from 77,448 users. A set of 24 attributes is extracted from the data. A decision tree classifier is used to predict churn and non-churn users of the future month. In addition, k-means algorithm is employed to cluster the actual churn users into different groups with different online social networking behaviors. Results show that the churn and nonchurn prediction accuracies of ∼65% and ∼77% are achieved respectively. Furthermore, the actual churn users are grouped into five clusters with distinguished OSN activities and some suggestions of retaining these users are provided.",
"title": ""
},
{
"docid": "b6c81766443ec1518b7d4d044a86e23d",
"text": "Infusion is part of treatment to get drugs or vitamins into body. This is efficient way to accelerate treatment because it faster while absorbed in body and can avoid impact on digestion. If the dosage given does not match or fluids into the body getting too much, it causing disruption to the patient's health. The main objective of this paper is to provide information on the speed and volume of the infusion that being used by each patient using a photodiode sensor and node.js server can distinguish each incoming data by utilizing topic features on MQTT. Topic feature used to exchange data using ESP8266 identity and the data being sent is the volume and velocity of the infusion. Topics, one of features on MQTT, can be used to manage the data from multiple infusion into the server. Additionally, the system provides warning information of the residual volume and velocity limit when the infusion rate exceeds the normal limit that has been specified by the user.",
"title": ""
},
{
"docid": "80ce0f83ea565a1fb2b80156a3515288",
"text": "Given an image of a street scene in a city, this paper develops a new method that can quickly and precisely pinpoint at which location (as well as viewing direction) the image was taken, against a pre-stored large-scale 3D point-cloud map of the city. We adopt the recently developed 2D-3D direct feature matching framework for this task [23,31,32,42–44]. This is a challenging task especially for large-scale problems. As the map size grows bigger, many 3D points in the wider geographical area can be visually very similar–or even identical–causing severe ambiguities in 2D-3D feature matching. The key is to quickly and unambiguously find the correct matches between a query image and the large 3D map. Existing methods solve this problem mainly via comparing individual features’ visual similarities in a local and per feature manner, thus only local solutions can be found, inadequate for large-scale applications. In this paper, we introduce a global method which harnesses global contextual information exhibited both within the query image and among all the 3D points in the map. This is achieved by a novel global ranking algorithm, applied to a Markov network built upon the 3D map, which takes account of not only visual similarities between individual 2D-3D matches, but also their global compatibilities (as measured by co-visibility) among all matching pairs found in the scene. Tests on standard benchmark datasets show that our method achieved both higher precision and comparable recall, compared with the state-of-the-art.",
"title": ""
},
{
"docid": "30719d273f3966d80335db625792c3b7",
"text": "Recent years have produced great advances in training large, deep neural networks (DNNs), including notable successes in training convolutional neural networks (convnets) to recognize natural images. However, our understanding of how these models work, especially what computations they perform at intermediate layers, has lagged behind. Progress in the field will be further accelerated by the development of better tools for visualizing and interpreting neural nets. We introduce two such tools here. The first is a tool that visualizes the activations produced on each layer of a trained convnet as it processes an image or video (e.g. a live webcam stream). We have found that looking at live activations that change in response to user input helps build valuable intuitions about how convnets work. The second tool enables visualizing features at each layer of a DNN via regularized optimization in image space. Because previous versions of this idea produced less recognizable images, here we introduce several new regularization methods that combine to produce qualitatively clearer, more interpretable visualizations. Both tools are open source and work on a pretrained convnet with minimal setup. Published in the Deep Learning Workshop, 31 st International Conference on Machine Learning, Lille, France, 2015. Copyright 2015 by the author(s).",
"title": ""
},
{
"docid": "cf32d3c7f0562b3bfa2c549ba914f468",
"text": "A novel inverter-output filter, which cannot only filter the differential-mode voltage dv/dt but also suppress the common-mode voltage dv/dt and their rms values, is proposed in this paper. The filter is in combination with a conventional RLC filter and a common-mode transformer. The main advantage is that the functions of filtering a differential-mode voltage and suppressing a common-mode voltage can be integrated into a single system. Furthermore, the structure and design of the proposed filter are rather simple because only passive components are used. Simulations and experiments are conducted to validate the performance of the proposed filter. Both of their results indicate that about 80% of the rms value of the common-mode voltage are suppressed, while the demand of differential-mode voltage filtering is still met",
"title": ""
},
{
"docid": "018b25742275dd628c58208e5bd5a532",
"text": "Multivariate time series (MTS) datasets broadly exist in numerous fields, including health care, multimedia, finance, and biometrics. How to classify MTS accurately has become a hot research topic since it is an important element in many computer vision and pattern recognition applications. In this paper, we propose a Mahalanobis distance-based dynamic time warping (DTW) measure for MTS classification. The Mahalanobis distance builds an accurate relationship between each variable and its corresponding category. It is utilized to calculate the local distance between vectors in MTS. Then we use DTW to align those MTS which are out of synchronization or with different lengths. After that, how to learn an accurate Mahalanobis distance function becomes another key problem. This paper establishes a LogDet divergence-based metric learning with triplet constraint model which can learn Mahalanobis matrix with high precision and robustness. Furthermore, the proposed method is applied on nine MTS datasets selected from the University of California, Irvine machine learning repository and Robert T. Olszewski's homepage, and the results demonstrate the improved performance of the proposed approach.",
"title": ""
}
] |
scidocsrr
|
a6ed59d650797425fe60115eb9f32f5e
|
I Know the Feeling: Learning to Converse with Empathy
|
[
{
"docid": "6f5b3f2d2ebb46a993124242af8a50b8",
"text": "We present the SemEval-2018 Task 1: Affect in Tweets, which includes an array of subtasks on inferring the affectual state of a person from their tweet. For each task, we created labeled data from English, Arabic, and Spanish tweets. The individual tasks are: 1. emotion intensity regression, 2. emotion intensity ordinal classification, 3. valence (sentiment) regression, 4. valence ordinal classification, and 5. emotion classification. Seventy-five teams (about 200 team members) participated in the shared task. We summarize the methods, resources, and tools used by the participating teams, with a focus on the techniques and resources that are particularly useful. We also analyze systems for consistent bias towards a particular race or gender. The data is made freely available to further improve our understanding of how people convey emotions through language.",
"title": ""
},
{
"docid": "117de8844d5a6c506d69de65ae6b62ae",
"text": "Computer-based conversational agents are becoming ubiquitous. However, for these systems to be engaging and valuable to the user, they must be able to express emotion, in addition to providing informative responses. Humans rely on much more than language during conversations; visual information is key to providing context. We present the first example of an image-grounded conversational agent using visual sentiment, facial expression and scene features. We show that key qualities of the generated dialogue can be manipulated by the features used for training the agent. We evaluate our model on a large and very challenging real-world dataset of conversations from social media (Twitter). The image-grounding leads to significantly more informative, emotional and specific responses, and the exact qualities can be tuned depending on the image features used. Furthermore, our model improves the objective quality of dialogue responses when evaluated on standard natural language metrics.",
"title": ""
},
{
"docid": "49942573c60fa910369b81c44447a9b1",
"text": "Generic generation and manipulation of text is challenging and has limited success compared to recent deep generative modeling in visual domain. This paper aims at generating plausible text sentences, whose attributes are controlled by learning disentangled latent representations with designated semantics. We propose a new neural generative model which combines variational auto-encoders (VAEs) and holistic attribute discriminators for effective imposition of semantic structures. The model can alternatively be seen as enhancing VAEs with the wake-sleep algorithm for leveraging fake samples as extra training data. With differentiable approximation to discrete text samples, explicit constraints on independent attribute controls, and efficient collaborative learning of generator and discriminators, our model learns interpretable representations from even only word annotations, and produces short sentences with desired attributes of sentiment and tenses. Quantitative experiments using trained classifiers as evaluators validate the accuracy of sentence and attribute generation.",
"title": ""
}
] |
[
{
"docid": "97de6efcdba528f801cbfa087498ab3f",
"text": "Abstract: Educational Data Mining refers to techniques, tools, and research designed for automatically extracting meaning from large repositories of data generated by or related to people' learning activities in educational settings.[1] It is an emerging discipline, concerned with developing methods for exploring the unique types of data that come from educational settings, and using those methods to better understand students, and the settings which they learn in.[2]",
"title": ""
},
{
"docid": "7b98d56c2ebe5dcfb0c4b8a95ca1fba1",
"text": "Over the past three or four years there has been some controversy regarding the applicability of intrusion detection systems (IDS) to the forensic evidence collection process. Two points of view, essentially, have emerged. One perspective views forensic evidence collection and preservation in the case of a computer or network security incident to be inappropriate for an intrusion detection system. Another perspective submits that the IDS is the most likely candidate for collecting forensically pristine evidentiary data in real or near real time. This extended abstract describes, briefly, the framework for a research project intended to explore the applicability of intrusion detection systems to the evidence collection and management process. The project will review the performance and forensic acceptability of several types of intrusion detection systems in a laboratory environment. 1.0 Background and Problem Statement Intrusion detection, as a discipline, is fairly immature. Most of the serious work in intrusion detection is being carried on in the academic, commercial and government research communities. Commercially available examples of successful intrusion detection systems are limited, although the state of the art is progressing rapidly. However, as new approaches to intrusion detection are introduced, there is one question that seems to emerge continuously: should we be using intrusion detection systems to gather forensic evidence in the case of a detected penetration or abuse attempt. The whole concept of mixing investigation with detection of intrusion or abuse attempts begs a number of questions. First, can an IDS perform adequately if it also has to manage evidentiary data appropriately to meet legal standards? Second, what is required to automate the management of data from an evidentiary perspective? Third, what measures need to be added to an IDS to ensure that it not only can perform as an IDS (including performance requirements for the type of system in which it is implemented), but that it can manage evidence appropriately? It is not appropriate to ask any system to do double duty, performing additional tasks which may or may not be related to its primary function, at the expense of the results of its primary mission. This idea – that of combining evidence gathering with system protection – has generated considerable discussion over recent years. There is reasonable conjecture as to whether the presence of an IDS during an attack provides an appropriate evidence gathering mechanism. There appears to be general agreement, informed or otherwise, in the courts that such is the case. Today, in the absence of an alternative, the IDS probably is the best source of information about an attack. Whether that information is forensically pristine or not is an entirely different question. Sommer [SO98], however, reports that the NSTAC Network Group Intrusion Detection Subgroup found in December 1997 that: • “Current intrusion detection systems are not designed to collect and protect the integrity of the type of information required to conduct law enforcement investigations.” • “There is a lack of guidance to employees as to how to respond to intrusions and capture the information required to conduct a law enforcement investigation. 
The subgroup discussed the need to develop guidelines and training materials for end users that will make them aware of what information law enforcement requires and what procedures they use to collect evidence on an intrusion.” This finding implies strongly that there is a disconnect between the use of intrusion detection systems and the collection of forensically appropriate evidence during an intrusion attempt. On the other hand, Yuill et al [YU99] propose that an intrusion detection system can collect enough information during an on-going attack to profile, if not identify, the attacker. The ability of an IDS to gather significant information about an attack in progress without materially affecting the primary mission of the intrusion detection system suggests that an IDS could be deployed that would provide both detection/response and forensically pristine evidence in the case of a security incident. 1.1 Problem Statement Fundamentally, this project seeks to answer the question: “Is it practical and appropriate to combine intrusion detection and response with forensic management of collected data within a single IDS in today’s networks?”. The issue we will address in this research is three-fold. First, can an IDS gather useful forensic evidence during an attack without impacting its primary mission of detect and respond? Second, what is required to provide an acceptable case file of forensic information? And, finally, in a practical implementation, can an IDS be implemented that will accomplish both its primary mission and, at the same time, collect and manage forensically pure evidence that can be used in a legal setting? There are several difficulties in addressing these issues. First, the theoretical requirements of an IDS in terms of performing its primary mission may be at odds with the requirements of collecting and preserving forensic evidence. The primary mission of an IDS is to detect and respond to security incidents. The definition of a security incident should be, at least in part, determined by the organization’s security policy. Therefore, the detailed definition of the IDS’ primary mission is partially determined by the security policy, not by some overarching standard or generic procedure. The result is that there can be a wide disparity among requirements for an IDS from organization to organization. That contrasts significantly with the relatively static set of requirements for developing and managing evidence for use in a legal proceeding. A second difficulty is that the IDS, by design, does not manage its information in the sense that a forensics system does. There is a requirement within a forensic system (automated or not) for, among other things, the maintenance of a chain of custody whereby all evidence can be accounted for and its integrity attested to from the time of its collection to the time of its use in a legal proceeding. The third difficulty deals with the architecture of the IDS. The ability of a program to perform widely disparate tasks (in this case detection and response as well as forensic management of data) implies an architecture that may or may not be present currently in an IDS. Thus, there develops the need for a standard architecture for intrusion detection systems that also are capable of forensic data management.",
"title": ""
},
{
"docid": "64b0db1e23b225fab910bef5de9fd921",
"text": "Question answering (QA) has become a popular way for humans to access billion-scale knowledge bases. Unlike web search, QA over a knowledge base gives out accurate and concise results, provided that natural language questions can be understood and mapped precisely to structured queries over the knowledge base. The challenge, however, is that a human can ask one question in many different ways. Previous approaches have natural limits due to their representations: rule based approaches only understand a small set of “canned” questions, while keyword based or synonym based approaches cannot fully understand the questions. In this paper, we design a new kind of question representation: templates, over a billion scale knowledge base and a million scale QA corpora. For example, for questions about a city’s population, we learn templates such as What’s the population of $city?, How many people are there in $city?. We learned 27 million templates for 2782 intents. Based on these templates, our QA system KBQA effectively supports binary factoid questions, as well as complex questions which are composed of a series of binary factoid questions. Furthermore, we expand predicates in RDF knowledge base, which boosts the coverage of knowledge base by 57 times. Our QA system beats all other state-of-art works on both effectiveness and efficiency over QALD benchmarks.",
"title": ""
},
{
"docid": "bc85e28da375e2a38e06f0332a18aef0",
"text": "Background: Statistical reviews of the theories of reasoned action (TRA) and planned behavior (TPB) applied to exercise are limited by methodological issues including insufficient sample size and data to examine some moderator associations. Methods: We conducted a meta-analytic review of 111 TRA/TPB and exercise studies and examined the influences of five moderator variables. Results: We found that: a) exercise was most strongly associated with intention and perceived behavioral control; b) intention was most strongly associated with attitude; and c) intention predicted exercise behavior, and attitude and perceived behavioral control predicted intention. Also, the time interval between intention to behavior; scale correspondence; subject age; operationalization of subjective norm, intention, and perceived behavioral control; and publication status moderated the size of the effect. Conclusions: The TRA/TPB effectively explained exercise intention and behavior and moderators of this relationship. Researchers and practitioners are more equipped to design effective interventions by understanding the TRA/TPB constructs.",
"title": ""
},
{
"docid": "6adbe9f2de5a070cf9c1b7f708f4a452",
"text": "Prior research has provided valuable insights into how and why employees make a decision about the adoption and use of information technologies (ITs) in the workplace. From an organizational point of view, however, the more important issue is how managers make informed decisions about interventions that can lead to greater acceptance and effective utilization of IT. There is limited research in the IT implementation literature that deals with the role of interventions to aid such managerial decision making. Particularly, there is a need to understand how various interventions can influence the known determinants of IT adoption and use. To address this gap in the literature, we draw from the vast body of research on the technology acceptance model (TAM), particularly the work on the determinants of perceived usefulness and perceived ease of use, and: (i) develop a comprehensive nomological network (integrated model) of the determinants of individual level (IT) adoption and use; (ii) empirically test the proposed integrated model; and (iii) present a research agenda focused on potential preand postimplementation interventions that can enhance employees’ adoption and use of IT. Our findings and research agenda have important implications for managerial decision making on IT implementation in organizations. Subject Areas: Design Characteristics, Interventions, Management Support, Organizational Support, Peer Support, Technology Acceptance Model (TAM), Technology Adoption, Training, User Acceptance, User Involvement, and User Participation.",
"title": ""
},
{
"docid": "80105a011097a3bd37bf58d030131e13",
"text": "Deep CNNs have achieved great success in text detection. Most of existing methods attempt to improve accuracy with sophisticated network design, while paying less attention on speed. In this paper, we propose a general framework for text detection called Guided CNN to achieve the two goals simultaneously. The proposed model consists of one guidance subnetwork, where a guidance mask is learned from the input image itself, and one primary text detector, where every convolution and non-linear operation are conducted only in the guidance mask. The guidance subnetwork filters out non-text regions coarsely, greatly reducing the computation complexity. At the same time, the primary text detector focuses on distinguishing between text and hard non-text regions and regressing text bounding boxes, achieving a better detection accuracy. A novel training strategy, called background-aware block-wise random synthesis, is proposed to further boost up the performance. We demonstrate that the proposed Guided CNN is not only effective but also efficient with two state-of-the-art methods, CTPN [52] and EAST [64], as backbones. On the challenging benchmark ICDAR 2013, it speeds up CTPN by 2.9 times on average, while improving the F-measure by 1.5%. On ICDAR 2015, it speeds up EAST by 2.0 times while improving the F-measure by 1.0%. c © 2018. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms. * Zhanghui Kuang is the corresponding author 2 YUE ET AL: BOOSTING UP SCENE TEXT DETECTORS WITH GUIDED CNN Figure 1: Illustration of guiding the primary text detector. Convolutions and non-linear operations are conducted only in the guidance mask indicated by the red and blue rectangles. The guidance mask (the blue) is expanded by backgroundaware block-wise random synthesis (the red) during training. When testing, the guidance mask is not expanded. Figure 2: Text appears very sparsely in scene images. The left shows one example image. The right shows the text area ratio composition of ICDAR 2013 test set. Images with (0%,10%], (10%,20%], (20%,30%], and (30%,40%] text region account for 57%, 21%, 11%, and 6% respectively. Only 5 % images have more than 40% text region. 57% 21% 11% 6% 5% (0.0,0.1] (0.1,0.2] (0.2,0.3] (0.3,0.4] (0.4,1.0]",
"title": ""
},
{
"docid": "c949e051cbfd9cff13d939a7b594e6e6",
"text": "Propagation measurements at 28 GHz were conducted in outdoor urban environments in New York City using four different transmitter locations and 83 receiver locations with distances of up to 500 m. A 400 mega- chip per second channel sounder with steerable 24.5 dBi horn antennas at the transmitter and receiver was used to measure the angular distributions of received multipath power over a wide range of propagation distances and urban settings. Measurements were also made to study the small-scale fading of closely-spaced power delay profiles recorded at half-wavelength (5.35 mm) increments along a small-scale linear track (10 wavelengths, or 107 mm) at two different receiver locations. Our measurements indicate that power levels for small- scale fading do not significantly fluctuate from the mean power level at a fixed angle of arrival. We propose here a new lobe modeling technique that can be used to create a statistical channel model for lobe path loss and shadow fading, and we provide many model statistics as a function of transmitter- receiver separation distance. Our work shows that New York City is a multipath-rich environment when using highly directional steerable horn antennas, and that an average of 2.5 signal lobes exists at any receiver location, where each lobe has an average total angle spread of 40.3° and an RMS angle spread of 7.8°. This work aims to create a 28 GHz statistical spatial channel model for future 5G cellular networks.",
"title": ""
},
{
"docid": "d7624f0fe57b0022a81587b0f2edf755",
"text": "In a recent press release Joseph A. Califano, Jr., Chairman and President of the National Center on Addiction and Substance Abuse at Columbia University called for a major shift in American attitudes about substance abuse and addiction and a top to bottom overhaul in the nation's healthcare, criminal justice, social service, and eduction systems to curtail the rise in illegal drug use and other substance abuse. Califano, in 2005, also noted that while America has been congratulating itself on curbing increases in alcohol and illicit drug use and in the decline in teen smoking, abuse and addition of controlled prescription drugs-opioids, central nervous system depressants and stimulants-have been stealthily, but sharply rising. All the statistics continue to show that prescription drug abuse is escalating with increasing emergency department visits and unintentional deaths due to prescription controlled substances. While the problem of drug prescriptions for controlled substances continues to soar, so are the arguments of undertreatment of pain. The present state of affairs show that there were 6.4 million or 2.6% Americans using prescription-type psychotherapeutic drugs nonmedically in the past month. Of these, 4.7 million used pain relievers. Current nonmedical use of prescription-type drugs among young adults aged 18-25 increased from 5.4% in 2002 to 6.3% in 2005. The past year, nonmedical use of psychotherapeutic drugs has increased to 6.2% in the population of 12 years or older with 15.172 million persons, second only to marijuana use and three times the use of cocaine. Parallel to opioid supply and nonmedical prescription drug use, the epidemic of medical drug use is also escalating with Americans using 80% of world's supply of all opioids and 99% of hydrocodone. Opioids are used extensively despite a lack of evidence of their effectiveness in improving pain or functional status with potential side effects of hyperalgesia, negative hormonal and immune effects, addiction and abuse. The multiple reasons for continued escalation of prescription drug abuse and overuse are lack of education among all segments including physicians, pharmacists, and the public; ineffective and incoherent prescription monitoring programs with lack of funding for a national prescription monitoring program NASPER; and a reactive approach on behalf of numerous agencies. This review focuses on the problem of prescription drug abuse with a discussion of facts and fallacies, along with proposed solutions.",
"title": ""
},
{
"docid": "fc4ea7391c1500851ec0d37beed4cd90",
"text": "As a crucial operation, routing plays an important role in various communication networks. In the context of data and sensor networks, routing strategies such as shortest-path, multi-path and potential-based (“all-path”) routing have been developed. Existing results in the literature show that the shortest path and all-path routing can be obtained from L1 and L2 flow optimization, respectively. Based on this connection between routing and flow optimization in a network, in this paper we develop a unifying theoretical framework by considering flow optimization with mixed (weighted) L1/L2-norms. We obtain a surprising result: as we vary the trade-off parameter θ, the routing graphs induced by the optimal flow solutions span from shortest-path to multi-path to all-path routing-this entire sequence of routing graphs is referred to as the routing continuum. We also develop an efficient iterative algorithm for computing the entire routing continuum. Several generalizations are also considered, with applications to traffic engineering, wireless sensor networks, and network robustness analysis.",
"title": ""
},
{
"docid": "bade68b8f95fc0ae5a377a52c8b04b5c",
"text": "The majority of deterministic mathematical programming problems have a compact formulation in terms of algebraic equations. Therefore they can easily take advantage of the facilities offered by algebraic modeling languages. These tools allow expressing models by using convenient mathematical notation (algebraic equations) and translate the models into a form understandable by the solvers for mathematical programs. Algebraic modeling languages provide facility for the management of a mathematical model and its data, and access different general-purpose solvers. The use of algebraic modeling languages (AMLs) simplifies the process of building the prototype model and in some cases makes it possible to create and maintain even the production version of the model. As presented in other chapters of this book, stochastic programming (SP) is needed when exogenous parameters of the mathematical programming problem are random. Dealing with stochasticities in planning is not an easy task. In a standard scenario-by-scenario analysis, the system is optimized for each scenario separately. Varying the scenario hypotheses we can observe the different optimal responses of the system and delineate the “strong trends” of the future. Indeed, this scenarioby-scenario approach implicitly assumes perfect foresight. The method provides a first-stage decision, which is valid only for the scenario under consideration. Having as many decisions as there are scenarios leaves the decision-maker without a clear recommendation. In stochastic programming the whole set of scenarios is combined into an event tree, which describes the unfolding of uncertainties over the period of planning. The model takes into account the uncertainties characterizing the scenarios through stochastic programming techniques. This adaptive plan is much closer, in spirit, to the way that decision-makers have to deal with uncertain future",
"title": ""
},
{
"docid": "7752661edead3eb69375c9a17be2c52d",
"text": "This article explores the rich heritage of the boundary element method (BEM) by examining its mathematical foundation from the potential theory, boundary value problems, Green’s functions, Green’s identities, to Fredholm integral equations. The 18th to 20th century mathematicians, whose contributions were key to the theoretical development, are honored with short biographies. The origin of the numerical implementation of boundary integral equations can be traced to the 1960s, when the electronic computers had become available. The full emergence of the numerical technique known as the boundary element method occurred in the late 1970s. This article reviews the early history of the boundary element method up to the late 1970s. q 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "692f80bda530610858312da98bc49815",
"text": "Loss of heterozygosity (LOH) at locus 10q23.3 and mutation of the PTEN tumor suppressor gene occur frequently in both endometrial carcinoma and ovarian endometrioid carcinoma. To investigate the potential role of the PTEN gene in the carcinogenesis of ovarian endometrioid carcinoma and its related subtype, clear cell carcinoma, we examined 20 ovarian endometrioid carcinomas, 24 clear cell carcinomas, and 34 solitary endometrial cysts of the ovary for LOH at 10q23.3 and point mutations within the entire coding region of the PTEN gene. LOH was found in 8 of 19 ovarian endometrioid carcinomas (42.1%), 6 of 22 clear cell carcinomas (27.3%), and 13 of 23 solitary endometrial cysts (56.5%). In 5 endometrioid carcinomas synchronous with endometriosis, 3 cases displayed LOH events common to both the carcinoma and the endometriosis, 1 displayed an LOH event in only the carcinoma, and 1 displayed no LOH events in either lesion. In 7 clear cell carcinomas synchronous with endometriosis, 3 displayed LOH events common to both the carcinoma and the endometriosis, 1 displayed an LOH event in only the carcinoma, and 3 displayed no LOH events in either lesion. In no cases were there LOH events in the endometriosis only. Somatic mutations in the PTEN gene were identified in 4 of 20 ovarian endometrioid carcinomas (20.0%), 2 of 24 clear cell carcinomas (8.3%), and 7 of 34 solitary endometrial cysts (20.6%). These results indicate that inactivation of the PTEN tumor suppressor gene is an early event in the development of ovarian endometrioid carcinoma and clear cell carcinoma of the ovary.",
"title": ""
},
{
"docid": "5f419f75e2f6399e6a1a456f78d0e48e",
"text": "We present an attention-based bidirectional LSTM approach to improve the target-dependent sentiment classification. Our method learns the alignment between the target entities and the most distinguishing features. We conduct extensive experiments on a real-life dataset. The experimental results show that our model achieves state-of-the-art results.",
"title": ""
},
{
"docid": "a5be27d89874b1dfcad85206ad7403ba",
"text": "The upcoming Fifth Generation (5G) networks can provide ultra-reliable ultra-low latency vehicle-to-everything for vehicular ad hoc networks (VANET) to promote road safety, traffic management, information dissemination, and automatic driving for drivers and passengers. However, 5G-VANET also attracts tremendous security and privacy concerns. Although several pseudonymous authentication schemes have been proposed for VANET, the expensive cost for their initial authentication may cause serious denial of service (DoS) attacks, which furthermore enables to do great harm to real space via VANET. Motivated by this, a puzzle-based co-authentication (PCA) scheme is proposed here. In the PCA scheme, the Hash puzzle is carefully designed to mitigate DoS attacks against the pseudonymous authentication process, which is facilitated through collaborative verification. The effectiveness and efficiency of the proposed scheme is approved by performance analysis based on theory and experimental results.",
"title": ""
},
{
"docid": "9292d1a97913257cfd1e72645969a988",
"text": "A digital PLL employing an adaptive tracking technique and a novel frequency acquisition scheme achieves a wide tracking range and fast frequency acquisition. The test chip fabricated in a 0.13 mum CMOS process operates from 0.6 GHz to 2 GHz and achieves better than plusmn3200 ppm frequency tracking range when the reference clock is modulated with a 1 MHz sine wave.",
"title": ""
},
{
"docid": "6abe1b7806f6452bbcc087b458a7ef96",
"text": "We demonstrate distributed, online, and real-time cooperative localization and mapping between multiple robots operating throughout an unknown environment using indirect measurements. We present a novel Expectation Maximization (EM) based approach to efficiently identify inlier multi-robot loop closures by incorporating robot pose uncertainty, which significantly improves the trajectory accuracy over long-term navigation. An EM and hypothesis based method is used to determine a common reference frame. We detail a 2D laser scan correspondence method to form robust correspondences between laser scans shared amongst robots. The implementation is experimentally validated using teams of aerial vehicles, and analyzed to determine its accuracy, computational efficiency, scalability to many robots, and robustness to varying environments. We demonstrate through multiple experiments that our method can efficiently build maps of large indoor and outdoor environments in a distributed, online, and real-time setting.",
"title": ""
},
{
"docid": "6b0bb5e87efacf0008918380f98cd5ae",
"text": "This paper discusses Low Power Wide Area Network technologies. The purpose of this work is a presentation of these technologies in a mutual context in order to analyse their coexistence. In this work there are described Low Power Wide Area Network terms and their representatives LoRa, Sigfox and IQRF, of which characteristics, topology and some significant technics are inspected. The technologies are also compared together in a frequency spectrum in order to detect risk bands causing collisions. A potential increased risk of collisions is found around 868.2 MHz. The main contribution of this paper is a summary of characteristics, which have an influence on the resulting coexistence.",
"title": ""
},
{
"docid": "d93dbf04604d9e60a554f39b0f7e3122",
"text": "BACKGROUND\nThe World Health Organization (WHO) estimates that 1.9 million deaths worldwide are attributable to physical inactivity and at least 2.6 million deaths are a result of being overweight or obese. In addition, WHO estimates that physical inactivity causes 10% to 16% of cases each of breast cancer, colon, and rectal cancers as well as type 2 diabetes, and 22% of coronary heart disease and the burden of these and other chronic diseases has rapidly increased in recent decades.\n\n\nOBJECTIVES\nThe purpose of this systematic review was to summarize the evidence of the effectiveness of school-based interventions in promoting physical activity and fitness in children and adolescents.\n\n\nSEARCH METHODS\nThe search strategy included searching several databases to October 2011. In addition, reference lists of included articles and background papers were reviewed for potentially relevant studies, as well as references from relevant Cochrane reviews. Primary authors of included studies were contacted as needed for additional information.\n\n\nSELECTION CRITERIA\nTo be included, the intervention had to be relevant to public health practice (focused on health promotion activities), not conducted by physicians, implemented, facilitated, or promoted by staff in local public health units, implemented in a school setting and aimed at increasing physical activity, included all school-attending children, and be implemented for a minimum of 12 weeks. In addition, the review was limited to randomized controlled trials and those that reported on outcomes for children and adolescents (aged 6 to 18 years). Primary outcomes included: rates of moderate to vigorous physical activity during the school day, time engaged in moderate to vigorous physical activity during the school day, and time spent watching television. Secondary outcomes related to physical health status measures including: systolic and diastolic blood pressure, blood cholesterol, body mass index (BMI), maximal oxygen uptake (VO2max), and pulse rate.\n\n\nDATA COLLECTION AND ANALYSIS\nStandardized tools were used by two independent reviewers to assess each study for relevance and for data extraction. In addition, each study was assessed for risk of bias as specified in the Cochrane Handbook for Systematic Reviews of Interventions. Where discrepancies existed, discussion occurred until consensus was reached. The results were summarized narratively due to wide variations in the populations, interventions evaluated, and outcomes measured.\n\n\nMAIN RESULTS\nIn the original review, 13,841 records were identified and screened, 302 studies were assessed for eligibility, and 26 studies were included in the review. There was some evidence that school-based physical activity interventions had a positive impact on four of the nine outcome measures. Specifically positive effects were observed for duration of physical activity, television viewing, VO2 max, and blood cholesterol. Generally, school-based interventions had little effect on physical activity rates, systolic and diastolic blood pressure, BMI, and pulse rate. At a minimum, a combination of printed educational materials and changes to the school curriculum that promote physical activity resulted in positive effects.In this update, given the addition of three new inclusion criteria (randomized design, all school-attending children invited to participate, minimum 12-week intervention) 12 of the original 26 studies were excluded. 
In addition, studies published between July 2007 and October 2011 evaluating the effectiveness of school-based physical interventions were identified and if relevant included. In total an additional 2378 titles were screened of which 285 unique studies were deemed potentially relevant. Of those 30 met all relevance criteria and have been included in this update. This update includes 44 studies and represents complete data for 36,593 study participants. Duration of interventions ranged from 12 weeks to six years.Generally, the majority of studies included in this update, despite being randomized controlled trials, are, at a minimum, at moderate risk of bias. The results therefore must be interpreted with caution. Few changes in outcomes were observed in this update with the exception of blood cholesterol and physical activity rates. For example blood cholesterol was no longer positively impacted upon by school-based physical activity interventions. However, there was some evidence to suggest that school-based physical activity interventions led to an improvement in the proportion of children who engaged in moderate to vigorous physical activity during school hours (odds ratio (OR) 2.74, 95% confidence interval (CI), 2.01 to 3.75). Improvements in physical activity rates were not observed in the original review. Children and adolescents exposed to the intervention also spent more time engaged in moderate to vigorous physical activity (with results across studies ranging from five to 45 min more), spent less time watching television (results range from five to 60 min less per day), and had improved VO2max (results across studies ranged from 1.6 to 3.7 mL/kg per min). However, the overall conclusions of this update do not differ significantly from those reported in the original review.\n\n\nAUTHORS' CONCLUSIONS\nThe evidence suggests the ongoing implementation of school-based physical activity interventions at this time, given the positive effects on behavior and one physical health status measure. However, given these studies are at a minimum of moderate risk of bias, and the magnitude of effect is generally small, these results should be interpreted cautiously. Additional research on the long-term impact of these interventions is needed.",
"title": ""
},
{
"docid": "42250041b2f5f24bbd843d06c2627cc6",
"text": "Robots that interact with humans must learn to not only adapt to different human partners but also to new interactions. Such a form of learning can be achieved by demonstrations and imitation. A recently introduced method to learn interactions from demonstrations is the framework of Interaction Primitives. While this framework is limited to represent and generalize a single interaction pattern, in practice, interactions between a human and a robot can consist of many different patterns. To overcome this limitation this paper proposes a Mixture of Interaction Primitives to learn multiple interaction patterns from unlabeled demonstrations. Specifically the proposed method uses Gaussian Mixture Models of Interaction Primitives to model nonlinear correlations between the movements of the different agents. We validate our algorithm with two experiments involving interactive tasks between a human and a lightweight robotic arm. In the first, we compare our proposed method with conventional Interaction Primitives in a toy problem scenario where the robot and the human are not linearly correlated. In the second, we present a proof-of-concept experiment where the robot assists a human in assembling a box.",
"title": ""
},
{
"docid": "46fdba2028abec621e8b9fbd0919e043",
"text": "The HF band, located in between 3-30 MHz, can offer single hop communication channels over a very long distances - even up to around the world. Traditionally, the HF is seen primarily as a solution for long communication ranges although it may also be a perfect choice for much shorter communication ranges when high data rates are not a primary target. It is well known that the HF channel is a demanding environment to operate since it changes rapidly, i.e., channel is available at a moment but the next moment it is not. Therefore, a big problem in HF communications is channel access or channel selection. By choosing the used HF channels wisely, i.e., cognitively, the channel behavior and system reliability considerably improves. This paper discusses about a change of paradigm in HF communication that will take place after applying cognitive principles on the HF system.",
"title": ""
}
] |
scidocsrr
|
bae6e13abeb2d80da62733c0fc0b6ef0
|
Retargeting Technical Documentation to Augmented Reality
|
[
{
"docid": "259b80df0ad4def6db381067c8f97121",
"text": "Concept sketches are popularly used by designers to convey pose and function of products. Understanding such sketches, however, requires special skills to form a mental 3D representation of the product geometry by linking parts across the different sketches and imagining the intermediate object configurations. Hence, the sketches can remain inaccessible to many, especially non-designers. We present a system to facilitate easy interpretation and exploration of concept sketches. Starting from crudely specified incomplete geometry, often inconsistent across the different views, we propose a globally-coupled analysis to extract part correspondence and inter-part junction information that best explain the different sketch views. The user can then interactively explore the abstracted object to gain better understanding of the product functions. Our key technical contribution is performing shape analysis without access to any coherent 3D geometric model by reasoning in the space of inter-part relations. We evaluate our system on various concept sketches obtained from popular product design books and websites.",
"title": ""
},
{
"docid": "1497c5ce53dec0c2d02981d01a419f4b",
"text": "While image registration has been studied in different areas of computer vision, aligning images depicting different scenes remains a challenging problem, closer to recognition than to image matching. Analogous to optical flow, where an image is aligned to its temporally adjacent frame, we propose SIFT flow, a method to align an image to its neighbors in a large image collection consisting of a variety of scenes. For a query image, histogram intersection on a bag-of-visual-words representation is used to find the set of nearest neighbors in the database. The SIFT flow algorithm then consists of matching densely sampled SIFT features between the two images, while preserving spatial discontinuities. The use of SIFT features allows robust matching across different scene/object appearances and the discontinuity-preserving spatial model allows matching of objects located at different parts of the scene. Experiments show that the proposed approach is able to robustly align complicated scenes with large spatial distortions. We collect a large database of videos and apply the SIFT flow algorithm to two applications: (i) motion field prediction from a single static image and (ii) motion synthesis via transfer of moving objects.",
"title": ""
},
{
"docid": "a3e24b6438257176aabb4726c4eb6260",
"text": "We present a system for creating and viewing interactive exploded views of complex 3D models. In our approach, a 3D input model is organized into an explosion graph that encodes how parts explode with respect to each other. We present an automatic method for computing explosion graphs that takes into account part hierarchies in the input models and handles common classes of interlocking parts. Our system also includes an interface that allows users to interactively explore our exploded views using both direct controls and higher-level interaction modes.",
"title": ""
}
] |
[
{
"docid": "2c91e6ca6cf72279ad084c4a51b27b1c",
"text": "Knowing where the host lane lies is paramount to the effectiveness of many advanced driver assistance systems (ADAS), such as lane keep assist (LKA) and adaptive cruise control (ACC). This paper presents an approach for improving lane detection based on the past trajectories of vehicles. Instead of expensive high-precision map, we use the vehicle trajectory information to provide additional lane-level spatial support of the traffic scene, and combine it with the visual evidence to improve each step of the lane detection procedure, thereby overcoming typical challenges of normal urban streets. Such an approach could serve as an Add-On to enhance the performance of existing lane detection systems in terms of both accuracy and robustness. Experimental results in various typical but challenging scenarios show the effectiveness of the proposed system.",
"title": ""
},
{
"docid": "e0cc48dc60f6c79befb8584cee95e9ea",
"text": "Neural Network approaches to time series prediction are briefly discussed, and the need to specify an appropriately sized input window identified. Relevant theoretical results from dynamic systems theory are introduced, and the number of false neighbours heuristic is described, as a means of finding the correct embedding dimension, and thence window size. The method is applied to three time series and the resulting generalisation performance of the trained feed-forward neural network predictors is analysed. It is shown that the heuristics can provide useful information in defining the appropriate network architecture.",
"title": ""
},
{
"docid": "8decac4ff789460595664a38e7527ed6",
"text": "Unit selection synthesis has shown itself to be capable of producing high quality natural sounding synthetic speech when constructed from large databases of well-recorded, well-labeled speech. However, the cost in time and expertise of building such voices is still too expensive and specialized to be able to build individual voices for everyone. The quality in unit selection synthesis is directly related to the quality and size of the database used. As we require our speech synthesizers to have more variation, style and emotion, for unit selection synthesis, much larger databases will be required. As an alternative, more recently we have started looking for parametric models for speech synthesis, that are still trained from databases of natural speech but are more robust to errors and allow for better modeling of variation. This paper presents the CLUSTERGEN synthesizer which is implemented within the Festival/FestVox voice building environment. As well as the basic technique, three methods of modeling dynamics in the signal are presented and compared: a simple point model, a basic trajectory model and a trajectory model with overlap and add.",
"title": ""
},
{
"docid": "7fc3dfcc8fa43c36938f41877a65bed7",
"text": "We propose a real-time RGB-based pipeline for object detection and 6D pose estimation. Our novel 3D orientation estimation is based on a variant of the Denoising Autoencoder that is trained on simulated views of a 3D model using Domain Randomization. This so-called Augmented Autoencoder has several advantages over existing methods: It does not require real, pose-annotated training data, generalizes to various test sensors and inherently handles object and view symmetries. Instead of learning an explicit mapping from input images to object poses, it provides an implicit representation of object orientations defined by samples in a latent space. Experiments on the T-LESS and LineMOD datasets show that our method outperforms similar modelbased approaches and competes with state-of-the art approaches that require real pose-annotated images. 1",
"title": ""
},
{
"docid": "9dac90ed6c1a89fc1f12d7ba581d4889",
"text": "BACKGROUND\nAccurate measurement of core temperature is a standard component of perioperative and intensive care patient management. However, core temperature measurements are difficult to obtain in awake patients. A new non-invasive thermometer has been developed, combining two sensors separated by a known thermal resistance ('double-sensor' thermometer). We thus evaluated the accuracy of the double-sensor thermometer compared with a distal oesophageal thermometer to determine if the double-sensor thermometer is a suitable substitute.\n\n\nMETHODS\nIn perioperative and intensive care patient populations (n=68 total), double-sensor measurements were compared with measurements from a distal oesophageal thermometer using Bland-Altman analysis and Lin's concordance correlation coefficient (CCC).\n\n\nRESULTS\nOverall, 1287 measurement pairs were obtained at 5 min intervals. Ninety-eight per cent of all double-sensor values were within +/-0.5 degrees C of oesophageal temperature. The mean bias between the methods was -0.08 degrees C; the limits of agreement were -0.66 degrees C to 0.50 degrees C. Sensitivity and specificity for detection of fever were 0.86 and 0.97, respectively. Sensitivity and specificity for detection of hypothermia were 0.77 and 0.93, respectively. Lin's CCC was 0.93.\n\n\nCONCLUSIONS\nThe new double-sensor thermometer is sufficiently accurate to be considered an alternative to distal oesophageal core temperature measurement, and may be particularly useful in patients undergoing regional anaesthesia.",
"title": ""
},
{
"docid": "fbac56ecc5d477586707c9bfc1bf8196",
"text": "This paper presents implementation of a highly dynamic running gait with a hierarchical controller on the",
"title": ""
},
{
"docid": "114affaf4e25819aafa1c11da26b931f",
"text": "We propose a coherent mathematical model for human fingerprint images. Fingerprint structure is represented simply as a hologram - namely a phase modulated fringe pattern. The holographic form unifies analysis, classification, matching, compression, and synthesis of fingerprints in a self-consistent formalism. Hologram phase is at the heart of the method; a phase that uniquely decomposes into two parts via the Helmholtz decomposition theorem. Phase also circumvents the infinite frequency singularities that always occur at minutiae. Reliable analysis is possible using a recently discovered two-dimensional demodulator. The parsimony of this model is demonstrated by the reconstruction of a fingerprint image with an extreme compression factor of 239.",
"title": ""
},
{
"docid": "1a78e17056cca09250c7cc5f81fb271b",
"text": "This paper presents a lightweight stereo vision-based driving lane detection and classification system to achieve the ego-car’s lateral positioning and forward collision warning to aid advanced driver assistance systems (ADAS). For lane detection, we design a self-adaptive traffic lanes model in Hough Space with a maximum likelihood angle and dynamic pole detection region of interests (ROIs), which is robust to road bumpiness, lane structure changing while the ego-car’s driving and interferential markings on the ground. What’s more, this model can be improved with geographic information system or electronic map to achieve more accurate results. Besides, the 3-D information acquired by stereo matching is used to generate an obstacle mask to reduce irrelevant objects’ interfere and detect forward collision distance. For lane classification, a convolutional neural network is trained by using manually labeled ROI from KITTI data set to classify the left/right-side line of host lane so that we can provide significant information for lane changing strategy making in ADAS. Quantitative experimental evaluation shows good true positive rate on lane detection and classification with a real-time (15Hz) working speed. Experimental results also demonstrate a certain level of system robustness on variation of the environment.",
"title": ""
},
{
"docid": "a0bb908ff9c7cf14c34acfcdc47e4c1f",
"text": "DCF77 is a longwave radio transmitter located in Germany. Atomic clocks generate a 77.5-kHz carrier which is amplitudeand phase-modulated to broadcast the official time. The signal is used by industrial and consumer radio-controlled clocks. DCF77 faces competition from the Global Positioning System (GPS) which provides higher accuracy time. Still, DCF77 and other longwave time services worldwide remain popular because they allow indoor reception at lower cost, lower power, and sufficient accuracy. Indoor longwave reception is challenged by signal attenuation and electromagnetic interference from an increasing number of devices, particularly switched-mode power supplies. This paper introduces new receiver architectures and compares them with existing detectors and time decoders. Simulations and analytical calculations characterize the performance in terms of bit error rate and decoding probability, depending on input noise and narrowband interference. The most promising detector with maximum-likelihood time decoder displays the time in less than 60 s after powerup and at a noise level of Eb/N0 = 2.7 dB, an improvement of 20 dB over previous receivers. A field-programmable gate array-based demonstration receiver built for the purposes of this paper confirms the capabilities of these new algorithms. The findings of this paper enable future high-performance DCF77 receivers and further study of indoor longwave reception.",
"title": ""
},
{
"docid": "3c54b07b159fabe4c3ca1813abfdae6f",
"text": "We study the structure of the social graph of active Facebook users, the largest social network ever analyzed. We compute numerous features of the graph including the number of users and friendships, the degree distribution, path lengths, clustering, and mixing patterns. Our results center around three main observations. First, we characterize the global structure of the graph, determining that the social network is nearly fully connected, with 99.91% of individuals belonging to a single large connected component, and we confirm the ‘six degrees of separation’ phenomenon on a global scale. Second, by studying the average local clustering coefficient and degeneracy of graph neighborhoods, we show that while the Facebook graph as a whole is clearly sparse, the graph neighborhoods of users contain surprisingly dense structure. Third, we characterize the assortativity patterns present in the graph by studying the basic demographic and network properties of users. We observe clear degree assortativity and characterize the extent to which ‘your friends have more friends than you’. Furthermore, we observe a strong effect of age on friendship preferences as well as a globally modular community structure driven by nationality, but we do not find any strong gender homophily. We compare our results with those from smaller social networks and find mostly, but not entirely, agreement on common structural network characteristics.",
"title": ""
},
{
"docid": "221c59b8ea0460dac3128e81eebd6aca",
"text": "STUDY DESIGN\nA prospective self-assessment analysis and evaluation of nutritional and radiographic parameters in a consecutive series of healthy adult volunteers older than 60 years.\n\n\nOBJECTIVES\nTo ascertain the prevalence of adult scoliosis, assess radiographic parameters, and determine if there is a correlation with functional self-assessment in an aged volunteer population.\n\n\nSUMMARY OF BACKGROUND DATA\nThere exists little data studying the prevalence of scoliosis in a volunteer aged population, and correlation between deformity and self-assessment parameters.\n\n\nMETHODS\nThere were 75 subjects in the study. Inclusion criteria were: age > or =60 years, no known history of scoliosis, and no prior spine surgery. Each subject answered a RAND 36-Item Health Survey questionnaire, a full-length anteroposterior standing radiographic assessment of the spine was obtained, and nutritional parameters were analyzed from blood samples. For each subject, radiographic, laboratory, and clinical data were evaluated. The study population was divided into 3 groups based on frontal plane Cobb angulation of the spine. Comparison of the RAND 36-Item Health Surveys data among groups of the volunteer population and with United States population benchmark data (age 65-74 years) was undertaken using an unpaired t test. Any correlation between radiographic, laboratory, and self-assessment data were also investigated.\n\n\nRESULTS\nThe mean age of the patients in this study was 70.5 years (range 60-90). Mean Cobb angle was 17 degrees in the frontal plane. In the study group, 68% of subjects met the definition of scoliosis (Cobb angle >10 degrees). No significant correlation was noted among radiographic parameters and visual analog scale scores, albumin, lymphocytes, or transferrin levels in the study group as a whole. Prevalence of scoliosis was not significantly different between males and females (P > 0.03). The scoliosis prevalence rate of 68% found in this study reveals a rate significantly higher than reported in other studies. These findings most likely reflect the targeted selection of an elderly group. Although many patients with adult scoliosis have pain and dysfunction, there appears to be a large group (such as the volunteers in this study) that has no marked physical or social impairment.\n\n\nCONCLUSIONS\nPrevious reports note a prevalence of adult scoliosis up to 32%. In this study, results indicate a scoliosis rate of 68% in a healthy adult population, with an average age of 70.5 years. This study found no significant correlations between adult scoliosis and visual analog scale scores or nutritional status in healthy, elderly volunteers.",
"title": ""
},
{
"docid": "da1f5a7c5c39f50c70948eeba5cd9716",
"text": "Mushrooms have long been used not only as food but also for the treatment of various ailments. Although at its infancy, accumulated evidence suggested that culinary-medicinal mushrooms may play an important role in the prevention of many age-associated neurological dysfunctions, including Alzheimer's and Parkinson's diseases. Therefore, efforts have been devoted to a search for more mushroom species that may improve memory and cognition functions. Such mushrooms include Hericium erinaceus, Ganoderma lucidum, Sarcodon spp., Antrodia camphorata, Pleurotus giganteus, Lignosus rhinocerotis, Grifola frondosa, and many more. Here, we review over 20 different brain-improving culinary-medicinal mushrooms and at least 80 different bioactive secondary metabolites isolated from them. The mushrooms (either extracts from basidiocarps/mycelia or isolated compounds) reduced beta amyloid-induced neurotoxicity and had anti-acetylcholinesterase, neurite outgrowth stimulation, nerve growth factor (NGF) synthesis, neuroprotective, antioxidant, and anti-(neuro)inflammatory effects. The in vitro and in vivo studies on the molecular mechanisms responsible for the bioactive effects of mushrooms are also discussed. Mushrooms can be considered as useful therapeutic agents in the management and/or treatment of neurodegeneration diseases. However, this review focuses on in vitro evidence and clinical trials with humans are needed.",
"title": ""
},
{
"docid": "bde70da078bba2a63899cc7eb2a9aaf9",
"text": "In the past few years, cloud computing develops very quickly. A large amount of data are uploaded and stored in remote public cloud servers which cannot fully be trusted by users. Especially, more and more enterprises would like to manage their data by the aid of the cloud servers. However, when the data outsourced in the cloud are sensitive, the challenges of security and privacy becomes urgent for wide deployment of the cloud systems. This paper proposes a secure data sharing scheme to ensure the privacy of data owner and the security of the outsourced cloud data. The proposed scheme provides flexible utility of data while solving the privacy and security challenges for data sharing. The security and efficiency analysis demonstrate that the designed scheme is feasible and efficient. At last, we discuss its application in electronic health record.",
"title": ""
},
{
"docid": "499fe7f6bf5c7d8fcfe690e7390a5d36",
"text": "Compressional or traumatic asphyxia is a well recognized entity to most forensic pathologists. The vast majority of reported cases have been accidental. The case reported here describes the apparent inflicted compressional asphyxia of a small child. A review of mechanisms and related controversy regarding proposed mechanisms is discussed.",
"title": ""
},
{
"docid": "7c75c3f2cdfe00a26d6a0e9ac922e543",
"text": "The budgeted information gathering problem — where a robot with a fixed fuel budget is required to maximize the amount of information gathered from the world — appears in practice across a wide range of applications in autonomous exploration and inspection with mobile robots. Although there is an extensive amount of prior work investigating effective approximations of the problem, these methods do not address the fact that their performance is heavily dependent on distribution of objects in the world. In this paper, we attempt to address this issue by proposing a novel data-driven imitation learning framework. We present an efficient algorithm, EXPLORE, that trains a policy on the target distribution to imitate a clairvoyant oracle — an oracle that has full information about the world and computes non-myopic solutions to maximize information gathered. We validate the approach on a spectrum of results on a number of 2D and 3D exploration problems that demonstrates the ability of EXPLORE to adapt to different object distributions. Additionally, our analysis provides theoretical insight into the behavior of EXPLORE. Our approach paves the way forward for efficiently applying data-driven methods to the domain of information gathering.",
"title": ""
},
{
"docid": "9a397ca2a072d9b1f861f8a6770aa792",
"text": "Computational photography systems are becoming increasingly diverse, while computational resources---for example on mobile platforms---are rapidly increasing. As diverse as these camera systems may be, slightly different variants of the underlying image processing tasks, such as demosaicking, deconvolution, denoising, inpainting, image fusion, and alignment, are shared between all of these systems. Formal optimization methods have recently been demonstrated to achieve state-of-the-art quality for many of these applications. Unfortunately, different combinations of natural image priors and optimization algorithms may be optimal for different problems, and implementing and testing each combination is currently a time-consuming and error-prone process. ProxImaL is a domain-specific language and compiler for image optimization problems that makes it easy to experiment with different problem formulations and algorithm choices. The language uses proximal operators as the fundamental building blocks of a variety of linear and nonlinear image formation models and cost functions, advanced image priors, and noise models. The compiler intelligently chooses the best way to translate a problem formulation and choice of optimization algorithm into an efficient solver implementation. In applications to the image processing pipeline, deconvolution in the presence of Poisson-distributed shot noise, and burst denoising, we show that a few lines of ProxImaL code can generate highly efficient solvers that achieve state-of-the-art results. We also show applications to the nonlinear and nonconvex problem of phase retrieval.",
"title": ""
},
{
"docid": "22eefe8e8a46f1323fdfdcc5e0e4cac5",
"text": " Covers the main data mining techniques through carefully selected case studies Describes code and approaches that can be easily reproduced or adapted to your own problems Requires no prior experience with R Includes introductions to R and MySQL basics Provides a fundamental understanding of the merits, drawbacks, and analysis objectives of the data mining techniques Offers data and R code on www.liaad.up.pt/~ltorgo/DataMiningWithR/",
"title": ""
},
{
"docid": "eae5713c086986c4ef346d85ce06bf3d",
"text": "We describe a study designed to assess properties of a P300 brain-computer interface (BCI). The BCI presents the user with a matrix containing letters and numbers. The user attends to a character to be communicated and the rows and columns of the matrix briefly intensify. Each time the attended character is intensified it serves as a rare event in an oddball sequence and it elicits a P300 response. The BCI works by detecting which character elicited a P300 response. We manipulated the size of the character matrix (either 3 x 3 or 6 x 6) and the duration of the inter stimulus interval (ISI) between intensifications (either 175 or 350 ms). Online accuracy was highest for the 3 x 3 matrix 175-ms ISI condition, while bit rate was highest for the 6 x 6 matrix 175-ms ISI condition. Average accuracy in the best condition for each subject was 88%. P300 amplitude was significantly greater for the attended stimulus and for the 6 x 6 matrix. This work demonstrates that matrix size and ISI are important variables to consider when optimizing a BCI system for individual users and that a P300-BCI can be used for effective communication.",
"title": ""
},
{
"docid": "09dd98eb68bdf804d7953dc210a634f0",
"text": "The increasing popularity of social media is shortening the distance between people. Social activities, e.g., tagging in Flickr, book marking in Delicious, twittering in Twitter, etc. are reshaping people’s social life and redefining their social roles. People with shared interests tend to form their groups in social media, and users within the same community likely exhibit similar social behavior (e.g., going for the same movies, having similar political viewpoints), which in turn reinforces the community structure. The multiple interactions in social activities entail that the community structures are often overlapping, i.e., one person is involved in several communities. We propose a novel co-clustering framework, which takes advantage of networking information between users and tags in social media, to discover these overlapping communities. In our method, users are connected via tags and tags are connected to users. This explicit representation of users and tags is useful for understanding group evolution by looking at who is interested in what. The efficacy of our method is supported by empirical evaluation in both synthetic and online social networking data.",
"title": ""
}
] |
scidocsrr
|
4ee9d1a9c56a0999eeed6d80e98b127a
|
Drag, Drop, and Clone: An Interactive Interface for Surface Composition
|
[
{
"docid": "1353157ed70460e7ddf2202a3f1125f9",
"text": "Following the increasing demand to make the creation and manipulation of 3D geometry simpler and more accessible, we introduce a modeling approach that allows even novice users to create sophisticated models in minutes. Our approach is based on the observation that in many modeling settings users create models which belong to a small set of model classes, such as humans or quadrupeds. The models within each class typically share a common component structure. Following this observation, we introduce a modeling system which utilizes this common component structure allowing users to create new models by shuffling interchangeable components between existing models. To enable shuffling, we develop a method for computing a compatible segmentation of input models into meaningful, interchangeable components. Using this segmentation our system lets users create new models with a few mouse clicks, in a fraction of the time required by previous composition techniques. We demonstrate that the shuffling paradigm allows for easy and fast creation of a rich geometric content.",
"title": ""
}
] |
[
{
"docid": "586230bd896e1b289d71af6bf1dd1b7e",
"text": "This thesis presents the design of Pequod, a distributed, application-levelWeb cache.Web developers store data in application-level caches to avoid expensive operations on persistent storage.While useful for reducing the latency of data access, an application-level cache adds complexity to the application. The developer is responsible for keeping the cached data consistent with persistent storage. This consistency task can be difficult and costly, especially when the cached data represent the derived output of a computation. Pequod improves on the state-of-the-art by introducing an abstraction, the cache join, that caches derived datawithout requiring extensive consistency-related applicationmaintenance. Cache joins provide a mechanism for filtering, joining, and aggregating cached data. Pequod assumes the responsibility for maintaining cache freshness by automatically applying updates to derived data as inputs change over time. This thesis describes how cache joins are defined using a declarative syntax to overlay a relational data model on a key-value store, how cache data are generated on demand and kept fresh with a combination of eager and lazy incremental updates, howPequod uses the memory and computational resources of multiple machines to grow the cache, and how the correctness of derived data is maintained in the face of eviction. We show through experimentation that cache joins can be used to improve the performance ofWeb applications that cache derived data.We find that moving computation and maintenance tasks into the cache, where they can often be performed more efficiently, accounts for the majority of the improvement.",
"title": ""
},
{
"docid": "c758c3edb2e800d6e9dc7e61580d7efe",
"text": "The aim of this paper is to take a look at discourse structure from the standpoint of pronominal anaphora processing and socalled ‘accessibility domains’. The core hypothesis of the paper is that attention-based anaphora interpretation models like Focus Theory or Centering Theory can be utilized in a more satisfying way if discourse is considered as a bundle of concurrent, interacting processes. Elaborating on this hypothesis, in the paper a central role is played by various notions borrowed from non-linear phonological frameworks.",
"title": ""
},
{
"docid": "50a70ea76a6a713696fc4373f2f27b8a",
"text": "From the Department of General Internal Medicine, Clinical Immunology and Infectious Diseases, Medical University of Innsbruck, Innsbruck, Austria (G.W.); and the Departments of Pathology and Medicine, Stanford University, Stanford, Calif. (L.T.G.). Address reprint requests to Dr. Weiss at the Department of General Internal Medicine, Clinical Immunology and Infectious Diseases, Medical University of Innsbruck, Anichstr. 35, A-6020 Innsbruck, Austria, or at guenter.weiss@uibk.ac.at.",
"title": ""
},
{
"docid": "05532f05f969c6db5744e5dd22a6fbe4",
"text": "Lamellipodia, filopodia and membrane ruffles are essential for cell motility, the organization of membrane domains, phagocytosis and the development of substrate adhesions. Their formation relies on the regulated recruitment of molecular scaffolds to their tips (to harness and localize actin polymerization), coupled to the coordinated organization of actin filaments into lamella networks and bundled arrays. Their turnover requires further molecular complexes for the disassembly and recycling of lamellipodium components. Here, we give a spatial inventory of the many molecular players in this dynamic domain of the actin cytoskeleton in order to highlight the open questions and the challenges ahead.",
"title": ""
},
{
"docid": "73b62ff6e2a9599d465f25e554ad0fb7",
"text": "Rapid advancements in technology coupled with drastic reduction in cost of storage have resulted in tremendous increase in the volumes of stored data. As a consequence, analysts find it hard to cope with the rates of data arrival and the volume of data, despite the availability of many automated tools. In a digital investigation context where it is necessary to obtain information that led to a security breach and corroborate them is the contemporary challenge. Traditional techniques that rely on keyword based search fall short of interpreting data relationships and causality that is inherent to the artifacts, present across one or more sources of information. The problem of handling very large volumes of data, and discovering the associations among the data, emerges as an important contemporary challenge. The work reported in this paper is based on the use of metadata associations and eliciting the inherent relationships. We study the metadata associations methodology and introduce the algorithms to group artifacts. We establish that grouping artifacts based on metadata can provide a volume reduction of at least $$ {\\raise0.7ex\\hbox{$1$} \\!\\mathord{\\left/ {\\vphantom {1 {2M}}}\\right.\\kern-0pt} \\!\\lower0.7ex\\hbox{${2M}$}} $$ 1 2 M , even on a single source, where M is the largest number of metadata associated with an artifact in that source. The value of M is independent of inherently available metadata on any given source. As one understands the underlying data better, one can further refine the value of M iteratively thereby enhancing the volume reduction capabilities. We also establish that such reduction in volume is independent of the distribution of metadata associations across artifacts in any given source. We systematically develop the algorithms necessary to group artifacts on an arbitrary collection of sources and study the complexity.",
"title": ""
},
{
"docid": "a966c2222e88813574319fd0695c16f4",
"text": "Most streaming decision models evolve continuously over time, run in resource-aware environments, and detect and react to changes in the environment generating data. One important issue, not yet convincingly addressed, is the design of experimental work to evaluate and compare decision models that evolve over time. This paper proposes a general framework for assessing predictive stream learning algorithms. We defend the use of prequential error with forgetting mechanisms to provide reliable error estimators. We prove that, in stationary data and for consistent learning algorithms, the holdout estimator, the prequential error and the prequential error estimated over a sliding window or using fading factors, all converge to the Bayes error. The use of prequential error with forgetting mechanisms reveals to be advantageous in assessing performance and in comparing stream learning algorithms. It is also worthwhile to use the proposed methods for hypothesis testing and for change detection. In a set of experiments in drift scenarios, we evaluate the ability of a standard change detection algorithm to detect change using three prequential error estimators. These experiments point out that the use of forgetting mechanisms (sliding windows or fading factors) are required for fast and efficient change detection. In comparison to sliding windows, fading factors are faster and memoryless, both important requirements for streaming applications. Overall, this paper is a contribution to a discussion on best practice for performance assessment when learning is a continuous process, and the decision models are dynamic and evolve over time.",
"title": ""
},
{
"docid": "c9e5a1b9c18718cc20344837e10b08f7",
"text": "Reconnaissance is the initial and essential phase of a successful advanced persistent threat (APT). In many cases, attackers collect information from social media, such as professional social networks. This information is used to select members that can be exploited to penetrate the organization. Detecting such reconnaissance activity is extremely hard because it is performed outside the organization premises. In this paper, we propose a framework for management of social network honeypots to aid in detection of APTs at the reconnaissance phase. We discuss the challenges that such a framework faces, describe its main components, and present a case study based on the results of a field trial conducted with the cooperation of a large European organization. In the case study, we analyze the deployment process of the social network honeypots and their maintenance in real social networks. The honeypot profiles were successfully assimilated into the organizational social network and received suspicious friend requests and mail messages that revealed basic indications of a potential forthcoming attack. In addition, we explore the behavior of employees in professional social networks, and their resilience and vulnerability toward social network infiltration.",
"title": ""
},
{
"docid": "6c4433b640cf1d7557b2e74cbd2eee85",
"text": "A compact Ka-band broadband waveguide-based travelingwave spatial power combiner is presented. The low loss micro-strip probes are symmetrically inserted into both broadwalls of waveguide, quadrupling the coupling ways but the insertion loss increases little. The measured 16 dB return-loss bandwidth of the eight-way back-toback structure is from 30 GHz to 39.4 GHz (more than 25%) and the insertion loss is less than 1 dB, which predicts the power-combining efficiency is higher than 90%.",
"title": ""
},
{
"docid": "0c2a2cb741d1d22c5ef3eabd0b525d8d",
"text": "Part-of-speech (POS) tagging is a process of assigning the words in a text corresponding to a particular part of speech. A fundamental version of POS tagging is the identification of words as nouns, verbs, adjectives etc. For processing natural languages, Part of Speech tagging is a prominent tool. It is one of the simplest as well as most constant and statistical model for many NLP applications. POS Tagging is an initial stage of linguistics, text analysis like information retrieval, machine translator, text to speech synthesis, information extraction etc. In POS Tagging we assign a Part of Speech tag to each word in a sentence and literature. Various approaches have been proposed to implement POS taggers. In this paper we present a Marathi part of speech tagger. It is morphologically rich language. Marathi is spoken by the native people of Maharashtra. The general approach used for development of tagger is statistical using Unigram, Bigram, Trigram and HMM Methods. It presents a clear idea about all the algorithms with suitable examples. It also introduces a tag set for Marathi which can be used for tagging Marathi text. In this paper we have shown the development of the tagger as well as compared to check the accuracy of taggers output. The three Marathi POS taggers viz. Unigram, Bigram, Trigram and HMM gives the accuracy of 77.38%, 90.30%, 91.46% and 93.82% respectively.",
"title": ""
},
{
"docid": "ba6a8f6ba04434ab7fccf0abfc7c784c",
"text": "In this paper I discuss the curious lock of contact between developmental psychologists studying the principles of early learning and those concentrating on loter learning in children, where predispositions to learn certain types of concepts are less reodlly discussed. Instead, there is tacit agreement thot learning and tronsfer mechanisms ore content-independent and age-dependent. I argue here that one cannot study leornlng and transfer In a vacuum ond that children's ablllty to learn is lntimotely dependent on what they ore required to learn and the context in which they must learn it. Specifically, I orgue that children learn and transfer readily, even in traditlonol laboratory settings, if they are requlred ta extend their knowledge about causal mechanisms that they already understond. This point Is illustrated In o series of studies with children from 1 to 3 years of age leorning about simple mechanisms of physical causality (pushing-pulling, wetting, cutting, etc.). In addition, I document children's difficulty learning about causally lmpassi-ble events, such OS pulling with strings thot da not appear to make contact with the object they are pulling. Even young children transfer an the bosis of deep structural principles rather than perceptual features when they have access to the requisite domain-specific knowledge. I argue that a search far causal ex-plonatlons is the basis of broad understanding, of wide patterns of generalization , and of flexible transfer ond creative Inferential projections-in sum, the essential elements of meanlngful learning. In this paper I will consider the effects of principles that guide early learning, such as those described by Gelman (this issue), on later learning in children. This is not an easy task, as psychologists who have studied constraints, This paper is based on a talk given in the symposium, Structural Constraints on Cognitive Development, Psychonomics, 1986. Preparation of the manuscript was supported by NICHD Grant HD 06864. I wish to thank Anne Slattery for her patience and sensitivity with the toddlers in the string and tool studies. I thank Rita Gaskill for her word processing skills and patient work on the many versions of this manuscript, Usha Goswami and Mary Jo Kane for collaborating on studies, and Stephanie Lyons-Olsen and Alison McClain for helping collect data. I would also like to thank Rachel Gelman for her helpful comments, and Jim Greeno, Annette Karmiloff-Smith, and Doug Medin for their thoughtful reviews of this manuscript. Portions of the discussion are adapted from Brown (1989).",
"title": ""
},
{
"docid": "26d1014c6412d4fe62453e73cb2f3d92",
"text": "The use of anaesthetics becomes essential in the transportation medium for mitigating physiological stress and reducing metabolic rates. Clove oil is now emerging as safe, eco-friendly, effective, and economic fish anaesthetic. We tested the efficacy of clove oil as an anaesthetic for the handling and transportation of rohu, Labeo rohita (Hamilton-Buchnan) fingerlings. The lowest effective dose of clove oil that produced induction (≤3 min) and recovery (≤5 min) found was 50μl L-1.The induction times decreased and recovery times increased with increased in the concentrations of clove oil. The effective sedative dose at 5μl L-1 of clove oil was found suitable for transportation in plastic bags with pure oxygen up to 12h. The mortality rate (%) of fingerlings was significantly higher (14.4±1.14%) in the control (without sedative) than sedative doses of clove oil (P<0.05). The dose at 5μl L-1 of clove oil was found to mitigate stress responses with lower glucose level and reduce the deterioration of water quality in comparison to control. The present findings revealed that clove oil is promising to be used as anaesthetic and sedative for handling and transportation of rohu fingerlings. Clove oil is feasible to use in the commercial fish seed transportation due its merits like cheap and safe anaesthetic.",
"title": ""
},
{
"docid": "023fa0ac94b2ea1740f1bbeb8de64734",
"text": "The establishment of an endosymbiotic relationship typically seems to be driven through complementation of the host's limited metabolic capabilities by the biochemical versatility of the endosymbiont. The most significant examples of endosymbiosis are represented by the endosymbiotic acquisition of plastids and mitochondria, introducing photosynthesis and respiration to eukaryotes. However, there are numerous other endosymbioses that evolved more recently and repeatedly across the tree of life. Recent advances in genome sequencing technology have led to a better understanding of the physiological basis of many endosymbiotic associations. This review focuses on endosymbionts in protists (unicellular eukaryotes). Selected examples illustrate the incorporation of various new biochemical functions, such as photosynthesis, nitrogen fixation and recycling, and methanogenesis, into protist hosts by prokaryotic endosymbionts. Furthermore, photosynthetic eukaryotic endosymbionts display a great diversity of modes of integration into different protist hosts. In conclusion, endosymbiosis seems to represent a general evolutionary strategy of protists to acquire novel biochemical functions and is thus an important source of genetic innovation.",
"title": ""
},
{
"docid": "f6de868d9d3938feb7c33f082dddcdc0",
"text": "The proliferation of wearable devices, e.g., smartwatches and activity trackers, with embedded sensors has already shown its great potential on monitoring and inferring human daily activities. This paper reveals a serious security breach of wearable devices in the context of divulging secret information (i.e., key entries) while people accessing key-based security systems. Existing methods of obtaining such secret information relies on installations of dedicated hardware (e.g., video camera or fake keypad), or training with labeled data from body sensors, which restrict use cases in practical adversary scenarios. In this work, we show that a wearable device can be exploited to discriminate mm-level distances and directions of the user's fine-grained hand movements, which enable attackers to reproduce the trajectories of the user's hand and further to recover the secret key entries. In particular, our system confirms the possibility of using embedded sensors in wearable devices, i.e., accelerometers, gyroscopes, and magnetometers, to derive the moving distance of the user's hand between consecutive key entries regardless of the pose of the hand. Our Backward PIN-Sequence Inference algorithm exploits the inherent physical constraints between key entries to infer the complete user key entry sequence. Extensive experiments are conducted with over 5000 key entry traces collected from 20 adults for key-based security systems (i.e. ATM keypads and regular keyboards) through testing on different kinds of wearables. Results demonstrate that such a technique can achieve 80% accuracy with only one try and more than 90% accuracy with three tries, which to our knowledge, is the first technique that reveals personal PINs leveraging wearable devices without the need for labeled training data and contextual information.",
"title": ""
},
{
"docid": "6ecf5cb70cca991fbefafb739a0a44c9",
"text": "Reasoning about objects, relations, and physics is central to human intelligence, and 1 a key goal of artificial intelligence. Here we introduce the interaction network, a 2 model which can reason about how objects in complex systems interact, supporting 3 dynamical predictions, as well as inferences about the abstract properties of the 4 system. Our model takes graphs as input, performs objectand relation-centric 5 reasoning in a way that is analogous to a simulation, and is implemented using 6 deep neural networks. We evaluate its ability to reason about several challenging 7 physical domains: n-body problems, rigid-body collision, and non-rigid dynamics. 8 Our results show it can be trained to accurately simulate the physical trajectories of 9 dozens of objects over thousands of time steps, estimate abstract quantities such 10 as energy, and generalize automatically to systems with different numbers and 11 configurations of objects and relations. Our interaction network implementation 12 is the first general-purpose, learnable physics engine, and a powerful general 13 framework for reasoning about object and relations in a wide variety of complex 14 real-world domains. 15",
"title": ""
},
{
"docid": "a5cd94446abfc46c6d5c4e4e376f1e0a",
"text": "Commitment problem in credit market and its eãects on economic growth are discussed. Completions of investment projects increase capital stock of the economy. These projects require credits which are ånanced by ånacial intermediaries. A simpliåed credit model of Dewatripont and Maskin is used to describe the ånancing process, in which the commitment problem or the \\soft budget constraint\" problem arises. However, in dynamic general equilibrium setup with endougenous determination of value and cost of projects, there arise multiple equilibria in the project ånancing model, namely reånancing equilirium and no-reånancing equilibrium. The former leads the economy to the stationary state with smaller capital stock level than the latter. Both the elimination of reånancing equilibrium and the possibility of \\Animal Spirits Cycles\" equilibrium are also discussed.",
"title": ""
},
{
"docid": "8aacb85c5f551ad264bacb6b98db2baf",
"text": "Three experiments investigated the relationship between the presumption of harm in harmfree violations of creatural norms (taboos) and the moral emotions of anger and disgust. In Experiment 1, participants made a presumption of harm to others from taboo violations, even in conditions described as harmless and not involving other people; this presumption was predicted by anger and not disgust. Experiment 2 manipulated taboo violation and included a cognitive load task to clarify the post hoc nature of presumption of harm. Experiment 3 was similar but more accurately measured presumed harm. In Experiments 2 and 3, only without load was symbolic harm presumed, indicating its post hoc function to justify moral anger, which was not affected by load. In general, manipulations of harmfulness to others predicted moral anger better than moral disgust, whereas manipulations of taboo predicted disgust better. The presumption of harm was found on measures of symbolic rather than actual harm when a choice existed. These studies clarify understanding of the relationship between emotions and their justification when people consider victimless, offensive acts.",
"title": ""
},
{
"docid": "6e9edeffb12cf8e50223a933885bcb7c",
"text": "Reversible data hiding in encrypted images (RDHEI) is an effective technique to embed data in the encrypted domain. An original image is encrypted with a secret key and during or after its transmission, it is possible to embed additional information in the encrypted image, without knowing the encryption key or the original content of the image. During the decoding process, the secret message can be extracted and the original image can be reconstructed. In the last few years, RDHEI has started to draw research interest. Indeed, with the development of cloud computing, data privacy has become a real issue. However, none of the existing methods allow us to hide a large amount of information in a reversible manner. In this paper, we propose a new reversible method based on MSB (most significant bit) prediction with a very high capacity. We present two approaches, these are: high capacity reversible data hiding approach with correction of prediction errors and high capacity reversible data hiding approach with embedded prediction errors. With this method, regardless of the approach used, our results are better than those obtained with current state of the art methods, both in terms of reconstructed image quality and embedding capacity.",
"title": ""
},
{
"docid": "84114e63ea1a4f133a0987004f1193b9",
"text": "Networks protection against different types of attacks is one of most important posed issue into the network and information security application domains. This problem on Wireless Sensor Networks (WSNs), in attention to their special properties, has more importance. Now, there are some of proposed architectures and guide lines to protect Wireless Sensor Networks (WSNs) against different types of intrusions; but any one of them do not has a comprehensive view to this problem and they are usually designed and implemented in single-purpose; but, the proposed design in this paper tries to has been a comprehensive view to this issue by presenting a complete and comprehensive Intrusion Detection Architecture (IDA). The main contribution of this architecture is its hierarchical structure; i.e., it is designed and applicable, in one or two levels, consistent to the application domain and its required security level. Focus of this paper is on the clustering WSNs, designing and deploying Cluster-based Intrusion Detection System (CIDS) on cluster-heads and Wireless Sensor Network wide level Intrusion Detection System (WSNIDS) on the central server. Suppositions of the WSN and Intrusion Detection Architecture (IDA) are: static and heterogeneous network, hierarchical and clustering structure, clusters' overlapping and using hierarchical routing protocol such as LEACH, but along with minor changes. Finally, the proposed idea has been verified by designing a questionnaire, representing it to some (about 50 people) experts and then, analyzing and evaluating its acquired results.",
"title": ""
},
{
"docid": "0d3403ce2d1613c1ea6b938b3ba9c5e6",
"text": "Extracting a set of generalizable rules that govern the dynamics of complex, high-level interactions between humans based only on observations is a high-level cognitive ability. Mastery of this skill marks a significant milestone in the human developmental process. A key challenge in designing such an ability in autonomous robots is discovering the relationships among discriminatory features. Identifying features in natural scenes that are representative of a particular event or interaction (i.e. »discriminatory features») and then discovering the relationships (e.g., temporal/spatial/spatio-temporal/causal) among those features in the form of generalized rules are non-trivial problems. They often appear as a »chicken-and-egg» dilemma. This paper proposes an end-to-end learning framework to tackle these two problems in the context of learning generalized, high-level rules of human interactions from structured demonstrations. We employed our proposed deep reinforcement learning framework to learn a set of rules that govern a behavioral intervention session between two agents based on observations of several instances of the session. We also tested the accuracy of our framework with human subjects in diverse situations.",
"title": ""
}
] |
scidocsrr
|
f8b51177deaab99558e813f799e70558
|
Establishing Fraud Detection Patterns Based on Signatures
|
[
{
"docid": "30f2ccf69951bf068fd8c913ad72a35e",
"text": "This paper discusses the status of research on detection of fraud undertaken as part of the European Commission-funded ACTS ASPeCT (Advanced Security for Personal Communications Technologies) project. A first task has been the identification of possible fraud scenarios and of typical fraud indicators which can be mapped to data in Toll Tickets. Currently, the project is exploring the detection of fraudulent behaviour based on a combination of absolute and differential usage. Three approaches are being investigated: a rule-based approach and two approaches based on neural networks, where both supervised and unsupervised learning are considered. Special attention is being paid to the feasibility of the implementations.",
"title": ""
},
{
"docid": "74bb1f11761857bf876c9869ed47baeb",
"text": "This paper describes the automatic design of methods for detecting fraudulent behavior. Much of the de&,, ic nrrnm,-,li~h~rl ,,&,a n .am.L~ nf mn.-h;na lm..~:~~ e-. .. ..--..*.*yYYA’“.. UY.“b Y UISLUY “I III-Yllr IxuIY11~ methods. In particular, we combine data mining and constructive induction with more standard machine learning techniques to design methods for detecting fraudulent usage of cellular telephones based on profiling customer behavior. Specifically, we use a rulelearning program to uncover indicators of fraudulent behavior from a large database of cellular calls. These indicators are used to create profilers, which then serve as features to a system that combines evidence from multiple profilers to generate high-confidence alarms. Experiments indicate that this automatic approach performs nearly as well as the best hand-tuned methods for detecting fraud.",
"title": ""
}
] |
[
{
"docid": "47a87a903c4a8ef650fdbf670fca8568",
"text": "Social networks are a popular movement on the web. On the Semantic Web, it is simple to make trust annotations to social relationships. In this paper, we present a two level approach to integrating trust, provenance, and annotations in Semantic Web systems. We describe an algorithm for inferring trust relationships using provenance information and trust annotations in Semantic Web-based social networks. Then, we present an application, FilmTrust, that combines the computed trust values with the provenance of other annotations to personalize the website. The FilmTrust system uses trust to compute personalized recommended movie ratings and to order reviews. We believe that the results obtained with FilmTrust illustrate the success that can be achieved using this method of combining trust and provenance on the Semantic Web.",
"title": ""
},
{
"docid": "dc7f68a286fcf0ebc36bc02b80b5b6bd",
"text": "Many studies of digital communication, in particular of Twitter, use natural language processing (NLP) to find topics, assess sentiment, and describe user behaviour. In finding topics often the relationships between users who participate in the topic are neglected. We propose a novel method of describing and classifying online conversations using only the structure of the underlying temporal network and not the content of individual messages. This method utilises all available information in the temporal network (no aggregation), combining both topological and temporal structure using temporal motifs and inter-event times. This allows us create an embedding of the temporal network in order to describe the behaviour of individuals and collectives over time and examine the structure of conversation over multiple timescales.",
"title": ""
},
{
"docid": "057b397d3b72a30352697ce0940e490a",
"text": "Recent events of multiple earthquakes in Nepal, Italy and New Zealand resulting loss of life and resources bring our attention to the ever growing significance of disaster management, especially in the context of large scale nature disasters such as earthquake and Tsunami. In this paper, we focus on how disaster communication system can benefit from recent advances in wireless communication technologies especially mobile technologies and devices. The paper provides an overview of how the new generation of telecommunications and technologies such as 4G/LTE, Device to Device (D2D) and 5G can improve the potential of disaster networks. D2D is a promising technology for 5G networks, providing high data rates, increased spectral and energy efficiencies, reduced end-to-end delay and transmission power. We examine a scenario of multi-hop D2D communications where one UE may help other UEs to exchange information, by utilizing cellular network technique. Results show the average energy-efficiency spectral- efficiency of these transmission types are enhanced when the number of hops used in multi-hop links increases. The effect of resource group allocation is also pointed out for efficient design of system.",
"title": ""
},
{
"docid": "850832511dd2f6f809c3f0a2db77576e",
"text": "We present a novel probabilistic model for distributions over sets of structures— for example, sets of sequences, trees, or graphs. The critical characteristic of our model is a preference for diversity: sets containing dissimilar structures are more likely. Our model is a marriage of structured probabilistic models, like Markov random fields and context free grammars, with determinantal point processes, which arise in quantum physics as models of particles with repulsive interactions. We extend the determinantal point process model to handle an exponentially-sized set of particles (structures) via a natural factorization of the model into parts. We show how this factorization leads to tractable algorithms for exact inference, including computing marginals, computing conditional probabilities, and sampling. Our algorithms exploit a novel polynomially-sized dual representation of determinantal point processes, and use message passing over a special semiring to compute relevant quantities. We illustrate the advantages of the model on tracking and articulated pose estimation problems.",
"title": ""
},
{
"docid": "7dd86bc341e2637505387a96c16ea9c8",
"text": "This paper focuses on the relationship between fine art movements in the 20th C and the pioneers of digital art from 1956 to 1986. The research is part of a project called Digital Art Museum, which is an electronic archive devoted to the history and practice of computer art, and is also active in curating exhibitions of the work. While computer art genres never became mainstream art movements, there are clear areas of common interest, even when these are separated by some decades.",
"title": ""
},
{
"docid": "2efd26fc1e584aa5f70bdf9d24e5c2cd",
"text": "Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast and questions notions generally held to be “laws of nature” by practitioners of numerical computing: 1. High-level dynamic programs have to be slow. 2. One must prototype in one language and then rewrite in another language for speed or deployment. 3. There are parts of a system appropriate for the programmer, and other parts that are best left untouched as they have been built by the experts. We introduce the Julia programming language and its design—a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, which is what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can achieve machine performance without sacrificing human convenience.",
"title": ""
},
{
"docid": "ff774eb7c90d4efadb190155ff606013",
"text": "Communities are vehicles for efficiently disseminating news, rumors, and opinions in human social networks. Modeling information diffusion through a network can enable us to reach a superior functional understanding of the effect of network structures such as communities on information propagation. The intrinsic assumption is that form follows function---rational actors exercise social choice mechanisms to join communities that best serve their information needs. Particle Swarm Optimization (PSO) was originally designed to simulate aggregate social behavior; our proposed diffusion model, PSODM (Particle Swarm Optimization Diffusion Model) models information flow in a network by creating particle swarms for local network neighborhoods that optimize a continuous version of Holland's hyperplane-defined objective functions. In this paper, we show how our approach differs from prior modeling work in the area and demonstrate that it outperforms existing model-based community detection methods on several social network datasets.",
"title": ""
},
{
"docid": "f6cb3ee09942c03bd0f89520a76cac39",
"text": "This paper proposes a high-performance transformerless single-stage high step-up ac-dc matrix converter based on Cockcroft-Walton (CW) voltage multiplier. Deploying a four-bidirectional-switch matrix converter between the ac source and CW circuit, the proposed converter provides high quality of line conditions, adjustable output voltage, and low output ripple. The matrix converter is operated with two independent frequencies. One of which is associated with power factor correction (PFC) control, and the other is used to set the output frequency of the matrix converter. Moreover, the relationship among the latter frequency, line frequency, and output ripple will be discussed. This paper adopts one-cycle control method to achieve PFC, and a commercial control IC associating with a preprogrammed complex programmable logic device is built as the system controller. The operation principle, control strategy, and design considerations of the proposed converter are all detailed in this paper. A 1.2-kV/500-W laboratory prototype of the proposed converter is built for test, measurement, and evaluation. At full-load condition, the measured power factor, the system efficiency, and the output ripple factor are 99.9%, 90.3%, and 0.3%, respectively. The experimental results demonstrate the high performance of the proposed converter and the validity for high step-up ac-dc applications.",
"title": ""
},
{
"docid": "ef81266ae8c2023ea35dca8384db3803",
"text": "Linked Open Data has been recognized as a useful source of background knowledge for building content-based recommender systems. Vast amount of RDF data, covering multiple domains, has been published in freely accessible datasets. In this paper, we present an approach that uses language modeling approaches for unsupervised feature extraction from sequences of words, and adapts them to RDF graphs used for building content-based recommender system. We generate sequences by leveraging local information from graph sub-structures and learn latent numerical representations of entities in RDF graphs. Our evaluation on two datasets in the domain of movies and books shows that feature vector representations of general knowledge graphs such as DBpedia and Wikidata can be effectively used in content-based recommender systems.",
"title": ""
},
{
"docid": "cc7aa8b5b581c3e1996189411ca09235",
"text": "Owing to a number of reasons, the deployment of encryption solutions are beginning to be ubiquitous at both organizational and individual levels. The most emphasized reason is the necessity to ensure confidentiality of privileged information. Unfortunately, it is also popular as cyber-criminals' escape route from the grasp of digital forensic investigations. The direct encryption of data or indirect encryption of storage devices, more often than not, prevents access to such information contained therein. This consequently leaves the forensics investigation team, and subsequently the prosecution, little or no evidence to work with, in sixty percent of such cases. However, it is unthinkable to jeopardize the successes brought by encryption technology to information security, in favour of digital forensics technology. This paper examines what data encryption contributes to information security, and then highlights its contributions to digital forensics of disk drives. The paper also discusses the available ways and tools, in digital forensics, to get around the problems constituted by encryption. A particular attention is paid to the Truecrypt encryption solution to illustrate ideas being discussed. It then compares encryption's contributions in both realms, to justify the need for introduction of new technologies to forensically defeat data encryption as the only solution, whilst maintaining the privacy goal of users. Keywords—Encryption; Information Security; Digital Forensics; Anti-Forensics; Cryptography; TrueCrypt",
"title": ""
},
{
"docid": "a94f066ec5db089da7fd19ac30fe6ee3",
"text": "Information Centric Networking (ICN) is a new networking paradigm in which the ne twork provides users with content instead of communicatio n channels between hosts. Software Defined Networking (SDN) is an approach that promises to enable the co ntinuous evolution of networking architectures. In this paper we propose and discuss solutions to support ICN by using SDN concepts. We focus on an ICN framework called CONET, which groun ds its roots in the CCN/NDN architecture and can interwork with its implementation (CCNx). Altho ugh some details of our solution have been specifically designed for the CONET architecture, i ts general ideas and concepts are applicable to a c lass of recent ICN proposals, which follow the basic mod e of operation of CCN/NDN. We approach the problem in two complementary ways. First we discuss a general and long term solution based on SDN concepts without taking into account specific limit ations of SDN standards and equipment. Then we focus on an experiment to support ICN functionality over a large scale SDN testbed based on OpenFlow, developed in the context of the OFELIA Eu ropean research project. The current OFELIA testbed is based on OpenFlow 1.0 equipment from a v ariety of vendors, therefore we had to design the experiment taking into account the features that ar e currently available on off-the-shelf OpenFlow equipment.",
"title": ""
},
{
"docid": "ca715288ff8af17697e65d8b3c9f01bf",
"text": "In the last five years, biologically inspired features (BIF) always held the state-of-the-art results for human age estimation from face images. Recently, researchers mainly put their focuses on the regression step after feature extraction, such as support vector regression (SVR), partial least squares (PLS), canonical correlation analysis (CCA) and so on. In this paper, we apply convolutional neural network (CNN) to the age estimation problem, which leads to a fully learned end-toend system can estimate age from image pixels directly. Compared with BIF, the proposed method has deeper structure and the parameters are learned instead of hand-crafted. The multi-scale analysis strategy is also introduced from traditional methods to the CNN, which improves the performance significantly. Furthermore, we train an efficient network in a multi-task way which can do age estimation, gender classification and ethnicity classification well simultaneously. The experiments on MORPH Album 2 illustrate the superiorities of the proposed multi-scale CNN over other state-of-the-art methods.",
"title": ""
},
{
"docid": "1089b4e9a25ff2d6fc619523d1222fda",
"text": "Health promotion often comprises a tension between 'bottom-up' and 'top-down' programming. The former, more associated with concepts of community empowerment, begins on issues of concern to particular groups or individuals, and regards some improvement in their overall power or capacity as the important health outcome. The latter, more associated with disease prevention efforts, begins by seeking to involve particular groups or individuals in issues and activities largely defined by health agencies, and regards improvement in particular behaviours as the important health outcome. Community empowerment is viewed more instrumentally as a means to the end of health behaviour change. The tension between these two approaches is not unresolvable, but this requires a different orientation on the part of those responsible for planning more conventional, top-down programmes. This article presents a framework intended to assist planners, implementers and evaluators to systematically consider community empowerment goals within top-down health promotion programming. The framework 'unpacks' the tensions in health promotion at each stage of the more conventional, top-down programme cycle, by presenting a parallel 'empowerment' track. The framework also presents a new technology for the assessment and strategic planning of nine identified 'domains' that represent the organizational influences on the process of community empowerment. Future papers analyze the design of this assessment and planning methodology, and discuss the findings of its field-testing in rural communities in Fiji.",
"title": ""
},
{
"docid": "43d3ccb68457b320b25b8fea92ee2461",
"text": "Most existing e-commerce recommender systems aim to recommend the right products to a consumer, assuming the properties of each product are fixed. However, some properties, including price discount, can be personalized to respond to each consumer's preference. This paper studies how to automatically set the price discount when recommending a product, in light of the fact that the price will often alter a consumer's purchase decision. The key to optimizing the discount is to predict consumer's willingness-to-pay (WTP), namely, the highest price a consumer is willing to pay for a product. Purchase data used by traditional e-commerce recommender systems provide points below or above the decision boundary. In this paper we collected training data to better predict the decision boundary. We implement a new e-commerce mechanism adapted from laboratory lottery and auction experiments that elicit a rational customer's exact WTP for a small subset of products, and use a machine learning algorithm to predict the customer's WTP for other products. The mechanism is implemented on our own e-commerce website that leverages Amazon's data and subjects recruited via Mechanical Turk. The experimental results suggest that this approach can help predict WTP, and boost consumer satisfaction as well as seller profit.",
"title": ""
},
{
"docid": "7d3449a6ea821d214f7d961d4c85c6a4",
"text": "Collisions between automated moving equipment and human workers in job sites are one of the main sources of fatalities and accidents during the execution of construction projects. In this paper, we present a methodology to identify and assess project plans in terms of hazards before their execution. Our methodology has the following steps: 1) several potential plans are extracted from an initial activity graph; 2) plans are translated from a high-level activity graph to a discrete-event simulation model; 3) trajectories and safety policies are generated that avoid static and moving obstacles using existing motion planning algorithms; 4) safety scores and risk-based heatmaps are calculated based on the trajectories of moving equipment; and 5) managerial implications are provided to select an acceptable plan with the aid of a sensitivity analysis of different factors (cost, resources, and deadlines) that affect the safety of a plan. Finally, we present illustrative case study examples to demonstrate the usefulness of our model.Note to Practitioners—Currently, construction project planning does not explicitly consider safety due to a lack of automated tools that can identify a plan’s safety level before its execution. This paper proposes an automated construction safety assessment tool which is able to evaluate the alternate construction plans and help to choose considering safety, cost, and deadlines. Our methodology uses discrete-event modeling along with motion planning to simulate the motions of workers and equipment, which account for most of the hazards in construction sites. Our method is capable of generating safe motion trajectories and coordination policies for both humans and machines to minimize the number of collisions. We also provide safety heatmaps as a spatiotemporal visual display of construction site to identify risky zones inside the environment throughout the entire timeline of the project. Additionally, a detailed sensitivity analysis helps to choose among plans in terms of safety, cost, and deadlines.",
"title": ""
},
{
"docid": "93dba45f5309d77b63c8957609f146b7",
"text": "Research papers available on the World Wide Web (WWW or Web) areoften poorly organized, often exist in forms opaque to searchengines (e.g. Postscript), and increase in quantity daily.Significant amounts of time and effort are typically needed inorder to find interesting and relevant publications on the Web. Wehave developed a Web based information agent that assists the userin the process of performing a scientific literature search. Givena set of keywords, the agent uses Web search engines and heuristicsto locate and download papers. The papers are parsed in order toextract information features such as the abstract and individuallyidentified citations. The agents Web interface can be used to findrelevant papers in the database using keyword searches, or bynavigating the links between papers formed by the citations. Linksto both citing and cited publications can be followed. In additionto simple browsing and keyword searches, the agent can find paperswhich are similar to a given paper using word information and byanalyzing common citations made by the papers.",
"title": ""
},
{
"docid": "f2193eafc9992da971766bdf3e2f9094",
"text": "Network embedding is to learn low-dimensional vector representations for nodes of a given social network, facilitating many tasks in social network analysis such as link prediction. The vast majority of existing embedding algorithms are designed for unsigned social networks or social networks with only positive links. However, networks in social media could have both positive and negative links, and little work exists for signed social networks. From recent findings of signed network analysis, it is evident that negative links have distinct properties and added value besides positive links, which brings about both challenges and opportunities for signed network embedding. In this paper, we propose a deep learning framework SiNE for signed network embedding. The framework optimizes an objective function guided by social theories that provide a fundamental understanding of signed social networks. Experimental results on two realworld datasets of social media demonstrate the effectiveness of the proposed framework SiNE.",
"title": ""
},
{
"docid": "4af3a073fd8a5687acc5be824ef46107",
"text": "In the field of endodontics revolution transpired over the years. The modern endodontic specialty practice has little resemblance to the traditional means. There is a lot of transformation, in the materials used and the type of instrumentation. Initially, stainless steel was the material of choice and manual instrumentation was the only means of cleaning the root canals. However, there is a tremendous change in the modern endodontics, due to introduction of NiTi (Nickel Titanium) files and rotary instrumentation. NiTi was developed 40 years ago in the Naval Ordinance Laboratory (NOL) in Silver Springs, Maryland. Therefore, the acronym Nitinol is used worldwide for this unusual type of alloy with better flexibility and fracture resistance. Rotary systems that evolved over years in to different generations differ in the series of instruments included. Hence, it is essential for the beginners to understand the differences among all the available systems and their usage which are highlighted in this review article.",
"title": ""
},
{
"docid": "b866fc215dbae6538e998b249563e78d",
"text": "The term `heavy metal' is, in this context, imprecise. It should probably be reserved for those elements with an atomic mass of 200 or greater [e.g., mercury (200), thallium (204), lead (207), bismuth (209) and the thorium series]. In practice, the term has come to embrace any metal, exposure to which is clinically undesirable and which constitutes a potential hazard. Our intention in this review is to provide an overview of some general concepts of metal toxicology and to discuss in detail metals of particular importance, namely, cadmium, lead, mercury, thallium, bismuth, arsenic, antimony and tin. Poisoning from individual metals is rare in the UK, even when there is a known risk of exposure. Table 1 shows that during 1991±92 only 1 ́1% of male lead workers in the UK and 5 ́5% of female workers exceeded the legal limits for blood lead concentration. Collectively, however, poisoning with metals forms an important aspect of toxicology because of their widespread use and availability. Furthermore, hitherto unrecognized hazards and accidents continue to be described. The investigation of metal poisoning forms a distinct specialist area, since most metals are usually measured using atomic absorption techniques. Analyses require considerable expertise and meticulous attention to detail to ensure valid results. Different analytical performance standards may be required of assays used for environmental and occupational monitoring, or for solely toxicological purposes. Because of the high capital cost of good quality instruments, the relatively small numbers of tests required and the variety of metals, it is more cost-effective if such testing is carried out in regional, national or other centres having the necessary experience. Nevertheless, patients are frequently cared for locally, and clinical biochemists play a crucial role in maintaining a high index of suspicion and liaising with clinical colleagues to ensure the provision of correct samples for analysis and timely advice.",
"title": ""
}
] |
scidocsrr
|
19d34e8554ae75eb09a9a576fa85fbcb
|
Real-time Dense Disparity Estimation based on Multi-Path Viterbi for Intelligent Vehicle Applications
|
[
{
"docid": "776e04fa00628e249900b02f1edf9432",
"text": "We propose an algorithm for minimizing the total variation of an image, and provide a proof of convergence. We show applications to image denoising, zooming, and the computation of the mean curvature motion of interfaces.",
"title": ""
},
{
"docid": "cc4c58f1bd6e5eb49044353b2ecfb317",
"text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.",
"title": ""
}
] |
[
{
"docid": "f6d8b2641fbc36143741cd809c70e057",
"text": "Websites that encourage consumers to research, rate, and review products online have become an increasingly important factor in purchase decisions. This increased importance has been accompanied by a growth in deceptive opinion spam fraudulent reviews written with the intent to sound authentic and mislead consumers. In this study, we pool deceptive reviews solicited through crowdsourcing with actual reviews obtained from product review websites. We then explore several humanand machine-based assessment methods to spot deceptive opinion spam in our pooled review set. We find that the combination of humanbased assessment methods with easily-obtained statistical information generated from the review text outperforms detection methods using human assessors alone.",
"title": ""
},
{
"docid": "81cd34302bf028a444019e228a5148d7",
"text": "Since the release of the large discourse-level annotation of the Penn Discourse Treebank (PDTB), research work has been carried out on certain subtasks of this annotation, such as disambiguating discourse connectives and classifying Explicit or Implicit relations. We see a need to construct a full parser on top of these subtasks and propose a way to evaluate the parser. In this work, we have designed and developed an end-to-end discourse parser-to-parse free texts in the PDTB style in a fully data-driven approach. The parser consists of multiple components joined in a sequential pipeline architecture, which includes a connective classifier, argument labeler, explicit classifier, non-explicit classifier, and attribution span labeler. Our trained parser first identifies all discourse and non-discourse relations, locates and labels their arguments, and then classifies the sense of the relation between each pair of arguments. For the identified relations, the parser also determines the attribution spans, if any, associated with them. We introduce novel approaches to locate and label arguments, and to identify attribution spans. We also significantly improve on the current state-of-the-art connective classifier. We propose and present a comprehensive evaluation from both component-wise and error-cascading perspectives, in which we illustrate how each component performs in isolation, as well as how the pipeline performs with errors propagated forward. The parser gives an overall system F1 score of 46.80 percent for partial matching utilizing gold standard parses, and 38.18 percent with full automation.",
"title": ""
},
{
"docid": "7a945183a38a751052f5bfc80d3d3ff6",
"text": "It is time to reconsider unifying logic and memory. Since most of the transistors on this merged chip will be devoted to memory, it is called 'intelligent RAM'. IRAM is attractive because the gigabit DRAM chip has enough transistors for both a powerful processor and a memory big enough to contain whole programs and data sets. It contains 1024 memory blocks each 1kb wide. It needs more metal layers to accelerate the long lines of 600mm/sup 2/ chips. It may require faster transistors for the high-speed interface of synchronous DRAM. Potential advantages of IRAM include lower memory latency, higher memory bandwidth, lower system power, adjustable memory width and size, and less board space. Challenges for IRAM include high chip yield given processors have not been repairable via redundancy, high memory retention rates given processors usually need higher power than DRAMs, and a fast processor given logic is slower in a DRAM process.",
"title": ""
},
{
"docid": "23a77ef19b59649b50f168b1cb6cb1c5",
"text": "A novel interleaved high step-up converter with voltage multiplier cell is proposed in this paper to avoid the extremely narrow turn-off period and to reduce the current ripple, which flows through the power devices compared with the conventional interleaved boost converter in high step-up applications. Interleaved structure is employed in the input side to distribute the input current, and the voltage multiplier cell is adopted in the output side to achieve a high step-up gain. The voltage multiplier cell is composed of the secondary windings of the coupled inductors, a series capacitor, and two diodes. Furthermore, the switch voltage stress is reduced due to the transformer function of the coupled inductors, which makes low-voltage-rated MOSFETs available to reduce the conduction losses. Moreover, zero-current-switching turn- on soft-switching performance is realized to reduce the switching losses. In addition, the output diode turn-off current falling rate is controlled by the leakage inductance of the coupled inductors, which alleviates the diode reverse recovery problem. Additional active device is not required in the proposed converter, which makes the presented circuit easy to design and control. Finally, a 1-kW 40-V-input 380-V-output prototype operating at 100 kHz switching frequency is built and tested to verify the effectiveness of the presented converter.",
"title": ""
},
{
"docid": "74e8303ef8eeb51ee1c1a4197db64d76",
"text": "Four major theoretical perspectives on emotion III psychology are described. Examples of the ways in which research on emotion and speech utilize aspects of the various perspectives are presented and a plea is made for students of emotion and speech to consider more self-consciously the place of their research within each of the perspectives. 1. ARE THEORIES OF EMOTION",
"title": ""
},
{
"docid": "752c61771593e4395856f56690a6f61b",
"text": "We conducted a longitudinal study with 32 nonmusician children over 9 months to determine 1) whether functional differences between musician and nonmusician children reflect specific predispositions for music or result from musical training and 2) whether musical training improves nonmusical brain functions such as reading and linguistic pitch processing. Event-related brain potentials were recorded while 8-year-old children performed tasks designed to test the hypothesis that musical training improves pitch processing not only in music but also in speech. Following the first testing sessions nonmusician children were pseudorandomly assigned to music or to painting training for 6 months and were tested again after training using the same tests. After musical (but not painting) training, children showed enhanced reading and pitch discrimination abilities in speech. Remarkably, 6 months of musical training thus suffices to significantly improve behavior and to influence the development of neural processes as reflected in specific pattern of brain waves. These results reveal positive transfer from music to speech and highlight the influence of musical training. Finally, they demonstrate brain plasticity in showing that relatively short periods of training have strong consequences on the functional organization of the children's brain.",
"title": ""
},
{
"docid": "e27575b8d7a7455f1a8f941adb306a04",
"text": "Seung-Joon Yi GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: yiseung@seas.upenn.edu Stephen G. McGill GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: smcgill3@seas.upenn.edu Larry Vadakedathu GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: vlarry@seas.upenn.edu Qin He GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: heqin@seas.upenn.edu Inyong Ha Robotis, Seoul, Korea e-mail: dudung@robotis.com Jeakweon Han Robotis, Seoul, Korea e-mail: jkhan@robotis.com Hyunjong Song Robotis, Seoul, Korea e-mail: hjsong@robotis.com Michael Rouleau RoMeLa, Virginia Tech, Blacksburg, Virginia 24061 e-mail: mrouleau@vt.edu Byoung-Tak Zhang BI Lab, Seoul National University, Seoul, Korea e-mail: btzhang@bi.snu.ac.kr Dennis Hong RoMeLa, University of California, Los Angeles, Los Angeles, California 90095 e-mail: dennishong@ucla.edu Mark Yim GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: yim@seas.upenn.edu Daniel D. Lee GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: ddlee@seas.upenn.edu",
"title": ""
},
{
"docid": "0ed736b21954253557fb732d67e43eb1",
"text": "Recurrent neural network language models have enjoyed great success in speech recognition, partially due to their ability to model longer-distance context than word n-gram models. In recurrent neural networks (RNNs), contextual information from past inputs is modeled with the help of recurrent connections at the hidden layer, while Long Short-Term Memory (LSTM) neural networks are RNNs that contain units that can store values for arbitrary amounts of time. While conventional unidirectional networks predict outputs from only past inputs, one can build bidirectional networks that also condition on future inputs. In this paper, we propose applying bidirectional RNNs and LSTM neural networks to language modeling for speech recognition. We discuss issues that arise when utilizing bidirectional models for speech, and compare unidirectional and bidirectional models on an English Broadcast News transcription task. We find that bidirectional RNNs significantly outperform unidirectional RNNs, but bidirectional LSTMs do not provide any further gain over their unidirectional counterparts.",
"title": ""
},
{
"docid": "899349ba5a7adb31f5c7d24db6850a82",
"text": "Sampling is a core process for a variety of graphics applications. Among existing sampling methods, blue noise sampling remains popular thanks to its spatial uniformity and absence of aliasing artifacts. However, research so far has been mainly focused on blue noise sampling with a single class of samples. This could be insufficient for common natural as well as man-made phenomena requiring multiple classes of samples, such as object placement, imaging sensors, and stippling patterns.\n We extend blue noise sampling to multiple classes where each individual class as well as their unions exhibit blue noise characteristics. We propose two flavors of algorithms to generate such multi-class blue noise samples, one extended from traditional Poisson hard disk sampling for explicit control of sample spacing, and another based on our soft disk sampling for explicit control of sample count. Our algorithms support uniform and adaptive sampling, and are applicable to both discrete and continuous sample space in arbitrary dimensions. We study characteristics of samples generated by our methods, and demonstrate applications in object placement, sensor layout, and color stippling.",
"title": ""
},
{
"docid": "1d2352ebbcbb4d83ef2f5aaaf0e06dc9",
"text": "We introduce a new neurofeedback approach, which allows users to manipulate expressive parameters in music performances using their emotional state, and we present the results of a pilot clinical experiment applying the approach to alleviate depression in elderly people. Ten adults (9 female and 1 male, mean = 84, SD = 5.8) with normal hearing participated in the neurofeedback study consisting of 10 sessions (2 sessions per week) of 15 min each. EEG data was acquired using the Emotiv EPOC EEG device. In all sessions, subjects were asked to sit in a comfortable chair facing two loudspeakers, to close their eyes, and to avoid moving during the experiment. Participants listened to music pieces preselected according to their music preferences, and were encouraged to increase the loudness and tempo of the pieces, based on their arousal and valence levels. The neurofeedback system was tuned so that increased arousal, computed as beta to alpha activity ratio in the frontal cortex corresponded to increased loudness, and increased valence, computed as relative frontal alpha activity in the right lobe compared to the left lobe, corresponded to increased tempo. Pre and post evaluation of six participants was performed using the BDI depression test, showing an average improvement of 17.2% (1.3) in their BDI scores at the end of the study. In addition, an analysis of the collected EEG data of the participants showed a significant decrease of relative alpha activity in their left frontal lobe (p = 0.00008), which may be interpreted as an improvement of their depression condition.",
"title": ""
},
{
"docid": "303509037a36933e6c999067e7b34bc6",
"text": "Corporates and organizations across the globe are spending huge sums on information security as they are reporting an increase in security related incidents. The proliferation of cloud, social network and multiple mobile device usage is on one side represent an opportunity and benefits to the organisation and on other side have posed new challenges for those policing cybercrimes. Cybercriminals have devised more sophisticated and targeted methods/techniques to trap victim and breach security setups. The emergence of highly technical nature of digital crimes has created a new branch of science known as digital forensics. Digital Forensics is the field of forensics science that deals with digital crimes and crimes involving computers. This paper focuses on briefing of digital forensics, various phases of digital forensics, digital forensics tools and its comparisons, and emerging trends and issues in this fascinated area. Keywords— Digital forensics, Digital evidence, Digital forensics tools, Network intrusion, Information security,",
"title": ""
},
{
"docid": "c49d4b1f2ac185bcb070cb105798417a",
"text": "The performance of face detection has been largely improved with the development of convolutional neural network. However, the occlusion issue due to mask and sunglasses, is still a challenging problem. The improvement on the recall of these occluded cases usually brings the risk of high false positives. In this paper, we present a novel face detector called Face Attention Network (FAN), which can significantly improve the recall of the face detection problem in the occluded case without compromising the speed. More specifically, we propose a new anchor-level attention, which will highlight the features from the face region. Integrated with our anchor assign strategy and data augmentation techniques, we obtain state-of-art results on public face ∗Equal contribution. †Work was done during an internship at Megvii Research. detection benchmarks like WiderFace and MAFA. The code will be released for reproduction.",
"title": ""
},
{
"docid": "b8fdea273f4b22f564e2d961154d4d8d",
"text": "While the study of the physiochemical composition and structure of the interstitium on a molecular level is a large and important field in itself, the present review centered mainly on the functional consequences for the control of extracellular fluid volume. As pointed out in section I, a biological monitoring system for the total extracellular volume seems very unlikely because a major part of that volume is made up of multiple, separate, and functionally heterogeneous interstitial compartments. Even less likely is a selective volume control of each of these compartments by the nervous system. Instead, as shown by many studies cited in this review, a local autoregulation of interstitial volume is provided by automatic adjustment of the transcapillary Starling forces and lymph flow. Local vascular control of capillary pressure and surface area, of special importance in orthostasis, has been discussed in several recent reviews and was mentioned only briefly in this article. The gel-like consistency of the interstitium is attributed to glycosaminoglycans, in soft connective tissues mainly hyaluronan. However, the concept of a gel phase and a free fluid phase now seems to be replaced by the quantitatively more well-defined distribution spaces for glycosaminoglycans and plasma protein, apparently in osmotic equilibrium with each other. The protein-excluded space, determined mainly by the content of glycosaminoglycans and collagen, has been measured in vivo in many tissues, and the effect of exclusion on the oncotic buffering has been clarified. The effect of protein charge on its excluded volume and on interstitial hydraulic conductivity has been studied only in lungs and is only partly clarified. Of unknown functional importance is also the recent finding of a free interstitial hyaluronan pool with relatively rapid removal by lymph. The postulated preferential channels from capillaries to lymphatics have received little direct support. Thus the variation of plasma-to-lymph passage times for proteins may probably be ascribed to heterogeneity with respect to path length, linear velocity, and distribution volumes. Techniques for measuring interstitial fluid pressure have been refined and reevaluated, approaching some concensus on slightly negative control pressures in soft connective tissues (0 to -4 mmHg), zero, or slightly positive pressure in other tissues. Interstitial pressure-volume curves have been recorded in several tissues, and progress has been made in clarifying the dependency of interstitial compliance on glycosaminoglycan-osmotic pressure, collagen, and microfibrils.(ABSTRACT TRUNCATED AT 400 WORDS)",
"title": ""
},
{
"docid": "8e4bd52e3b10ea019241679541c25c9d",
"text": "Accurate project effort prediction is an important goal for the software engineering community. To date most work has focused upon building algorithmic models of effort, for example COCOMO. These can be calibrated to local environments. We describe an alternative approach to estimation based upon the use of analogies. The underlying principle is to characterize projects in terms of features (for example, the number of interfaces, the development method or the size of the functional requirements document). Completed projects are stored and then the problem becomes one of finding the most similar projects to the one for which a prediction is required. Similarity is defined as Euclidean distance in n-dimensional space where n is the number of project features. Each dimension is standardized so all dimensions have equal weight. The known effort values of the nearest neighbors to the new project are then used as the basis for the prediction. The process is automated using a PC-based tool known as ANGEL. The method is validated on nine different industrial datasets (a total of 275 projects) and in all cases analogy outperforms algorithmic models based upon stepwise regression. From this work we argue that estimation by analogy is a viable technique that, at the very least, can be used by project managers to complement current estimation techniques.",
"title": ""
},
{
"docid": "f535dead9fb8e4a02a591689c31b2666",
"text": "B competitive advantage is increasingly found in knowing how to do things, rather than in having special access to resources and markets, knowledge and intellectual capital have become both the primary bases of core competencies and the key to superior performance. This article explores how companies can best grow their knowledge resources to create not simply competitive advantage, but sustainable competitive advantage. In recent years, the development of “hypercompetition” and shortened product lifecycles have reduced the degree to which much special knowledge can provide companies with sustained competitive advantage. Shrinking logistical and communication costs, along with new organizational designs, have enabled multinational corporations (MNCs) to function as truly global companies, rather than as conglomerations of national ones. Global companies introduce their newest products worldwide and effectively share knowledge across country units. Fueling hypercompetition, many industries now have several MNCs competing against each other on a worldwide basis, rather than a few local companies and only one MNC competing in each market. New product innovation has become the key to competing successfully. MNCs use their deep pockets to fund research and development (R&D), and they reverse engineer each other’s products and turn to consultants to learn about best practices in their industry. Companies develop or acquire new knowledge so rapidly that having special knowledge is no longer a basis for sustainable competitive advantage. To provide sustained competitive advantage, one needs knowledge that is difficult for outsiders to copy as well as the ability to rapidly develop new knowledge. In a graphic demonstration of the transient value of much knowledge, Michael Tushman began his courses at Columbia Business School by asking what the following list of products had in common: watches, cars, cameras, color TVs, hand tools, radial tires, industrial robots, machine tools, electric motors, financial services, food processors, microwave ovens, stereo equipment, athletic equipment, computer chips, optical equipment, medical equipment, and consulting services. The answer: each of these industries was initially dominated by a company with specialized knowledge, but that company rapidly lost its lead as other companies acquired the knowledge required to compete. Also illustrating the situation, fewer than 40% of the Fortune 500 companies of 1970 still existed in their original form by 1991. In today’s environment, much of the Organizational Dynamics, Vol. 29, No. 4, pp. 164–178, 2001 ISSN 0090-2616/01/$–see frontmatter © 2001 Elsevier Science, Inc. PII S0090-2616(01)00026-2",
"title": ""
},
{
"docid": "4abd7884b97c1af7c24a81da7a6c0c3d",
"text": "AIM\nThe interaction between running, stretching and practice jumps during warm-up for jumping tests has not been investigated. The purpose of the present study was to compare the effects of running, static stretching of the leg extensors and practice jumps on explosive force production and jumping performance.\n\n\nMETHODS\nSixteen volunteers (13 male and 3 female) participated in five different warm-ups in a randomised order prior to the performance of two jumping tests. The warm-ups were control, 4 min run, static stretch, run + stretch, and run + stretch + practice jumps. After a 2 min rest, a concentric jump and a drop jump were performed, which yielded 6 variables expressing fast force production and jumping performance of the leg extensor muscles (concentric jump height, peak force, rate of force developed, drop jump height, contact time and height/time).\n\n\nRESULTS\nGenerally the stretching warm-up produced the lowest values and the run or run + stretch + jumps warm-ups produced the highest values of explosive force production. There were no significant differences (p<0.05) between the control and run + stretch warm-ups, whereas the run yielded significantly better scores than the run + stretch warm-up for drop jump height (3.2%), concentric jump height (3.4%) and peak concentric force (2.7%) and rate of force developed (15.4%).\n\n\nCONCLUSION\nThe results indicated that submaximum running and practice jumps had a positive effect whereas static stretching had a negative influence on explosive force and jumping performance. It was suggested that an alternative for static stretching should be considered in warm-ups prior to power activities.",
"title": ""
},
{
"docid": "f4503626420d2f17e0716312a7c325ad",
"text": "Segmentation of left ventricular (LV) endocardium from 3D echocardiography is important for clinical diagnosis because it not only can provide some clinical indices (e.g. ventricular volume and ejection fraction) but also can be used for the analysis of anatomic structure of ventricle. In this work, we proposed a new full-automatic method, combining the deep learning and deformable model, for the segmentation of LV endocardium. We trained convolutional neural networks to generate a binary cuboid to locate the region of interest (ROI). And then, using ROI as the input, we trained stacked autoencoder to infer the LV initial shape. At last, we adopted snake model initiated by inferred shape to segment the LV endocardium. In the experiments, we used 3DE data, from CETUS challenge 2014 for training and testing by segmentation accuracy and clinical indices. The results demonstrated the proposed method is accuracy and efficiency respect to expert's measurements.",
"title": ""
},
{
"docid": "33f0a2bbda3f701dab66a8ffb67d5252",
"text": "Microglia, the resident macrophages of the CNS, are exquisitely sensitive to brain injury and disease, altering their morphology and phenotype to adopt a so-called activated state in response to pathophysiological brain insults. Morphologically activated microglia, like other tissue macrophages, exist as many different phenotypes, depending on the nature of the tissue injury. Microglial responsiveness to injury suggests that these cells have the potential to act as diagnostic markers of disease onset or progression, and could contribute to the outcome of neurodegenerative diseases. The persistence of activated microglia long after acute injury and in chronic disease suggests that these cells have an innate immune memory of tissue injury and degeneration. Microglial phenotype is also modified by systemic infection or inflammation. Evidence from some preclinical models shows that systemic manipulations can ameliorate disease progression, although data from other models indicates that systemic inflammation exacerbates disease progression. Systemic inflammation is associated with a decline in function in patients with chronic neurodegenerative disease, both acutely and in the long term. The fact that diseases with a chronic systemic inflammatory component are risk factors for Alzheimer disease implies that crosstalk occurs between systemic inflammation and microglia in the CNS.",
"title": ""
},
{
"docid": "fa1d37a9ea833982ef0a09bba52b2c74",
"text": "Data Center Networks represent the convergence of computing and networking, of data and storage networks, and of packet transport mechanisms in Layers 2 and 3. Congestion control algorithms are a key component of data transport in this type of network. Recently, a Layer 2 congestion management algorithm, called QCN (Quantized Congestion Notification), has been adopted for the IEEE 802.1 Data Center Bridging standard: IEEE 802.1Qau. The QCN algorithm has been designed to be stable, responsive, and simple to implement. However, it does not provide weighted fairness, where the weights can be set by the operator on a per-flow or per-class basis. Such a feature can be very useful in multi-tenanted Cloud Computing and Data Center environments. This paper addresses this issue. Specifically, we develop an algorithm, called AF-QCN (for Approximately Fair QCN), which ensures a faster convergence to fairness than QCN, maintains this fairness at fine-grained time scales, and provides programmable weighted fair bandwidth shares to flows/flow-classes. It combines the QCN algorithm developed by some of the authors of this paper, and the AFD algorithm previously developed by Pan et. al. AF-QCN requires no modifications to a QCN source (Reaction Point) and introduces a very light-weight addition to a QCNcapable switch (Congestion Point). The results obtained through simulations and an FPGA implementation on a 1Gbps platform show that AF-QCN retains the good congestion management performance of QCN while achieving rapid and programmable (approximate) weighted fairness.",
"title": ""
}
] |
scidocsrr
|
a2fa08fac825e5f1f2e3e4966f4a504a
|
A randomized, wait-list controlled clinical trial: the effect of a mindfulness meditation-based stress reduction program on mood and symptoms of stress in cancer outpatients.
|
[
{
"docid": "b5360df245a0056de81c89945f581f14",
"text": "The inability to cope successfully with the enormous stress of medical education may lead to a cascade of consequences at both a personal and professional level. The present study examined the short-term effects of an 8-week meditation-based stress reduction intervention on premedical and medical students using a well-controlled statistical design. Findings indicate that participation in the intervention can effectively (1) reduce self-reported state and trait anxiety, (2) reduce reports of overall psychological distress including depression, (3) increase scores on overall empathy levels, and (4) increase scores on a measure of spiritual experiences assessed at termination of intervention. These results (5) replicated in the wait-list control group, (6) held across different experiments, and (7) were observed during the exam period. Future research should address potential long-term effects of mindfulness training for medical and premedical students.",
"title": ""
}
] |
[
{
"docid": "7317713e6725f6541e4197cb02525cd4",
"text": "This survey describes the current state-of-the-art in the development of automated visual surveillance systems so as to provide researchers in the field with a summary of progress achieved to date and to identify areas where further research is needed. The ability to recognise objects and humans, to describe their actions and interactions from information acquired by sensors is essential for automated visual surveillance. The increasing need for intelligent visual surveillance in commercial, law enforcement and military applications makes automated visual surveillance systems one of the main current application domains in computer vision. The emphasis of this review is on discussion of the creation of intelligent distributed automated surveillance systems. The survey concludes with a discussion of possible future directions.",
"title": ""
},
{
"docid": "1e4292950f907d26b27fa79e1e8fa41f",
"text": "All over the world every business and profit earning firm want to make their consumer loyal. There are many factors responsible for this customer loyalty but two of them are prominent. This research study is focused on that how customer satisfaction and customer retention contribute towards customer loyalty. For analysis part of this study, Universities students of Peshawar Region were targeted. A sample of 120 were selected from three universities of Peshawar. These universities were Preston University, Sarhad University and City University of Science and Information technology. Analysis was conducted with the help of SPSS 19. Results of the study shows that customer loyalty is more dependent upon Customer satisfaction in comparison of customer retention. Customer perceived value and customer perceived quality are the major factors which contribute for the customer loyalty of Universities students for mobile handsets.",
"title": ""
},
{
"docid": "5a97d79641f7006d7b5d0decd3a7ad3e",
"text": "We present a cognitive model of inducing verb selectional preferences from individual verb usages. The selectional preferences for each verb argument are represented as a probability distribution over the set of semantic properties that the argument can possess—asemantic profile . The semantic profiles yield verb-specific conceptualizations of the arguments associated with a syntactic position. The proposed model can learn appropriate verb profiles from a small set of noisy training data, and can use them in simulating human plausibility judgments and analyzing implicit object alternation.",
"title": ""
},
{
"docid": "9b010450862f5b3b73273028242db8ad",
"text": "A number of mechanisms ensure that the intestine is protected from pathogens and also against our own intestinal microbiota. The outermost of these is the secreted mucus, which entraps bacteria and prevents their translocation into the tissue. Mucus contains many immunomodulatory molecules and is largely produced by the goblet cells. These cells are highly responsive to the signals they receive from the immune system and are also able to deliver antigens from the lumen to dendritic cells in the lamina propria. In this Review, we will give a basic overview of mucus, mucins and goblet cells, and explain how each of these contributes to immune regulation in the intestine.",
"title": ""
},
{
"docid": "87a319361ad48711eff002942735258f",
"text": "This paper describes an innovative principle for climbing obstacles with a two-axle and four-wheel robot with articulated frame. It is based on axle reconfiguration while ensuring permanent static stability. A simple example is demonstrated based on the OpenWHEEL platform with a serial mechanism connecting front and rear axles of the robot. A generic tridimensional multibody simulation is provided with Adams software. It permits to validate the concept and to get an approach of control laws for every type of inter-axle mechanism. This climbing principle permits to climb obstacles as high as the wheel while keeping energetic efficiency of wheel propulsion and using only one supplemental actuator. Applications to electric wheelchairs, quads and all terrain vehicles (ATV) are envisioned",
"title": ""
},
{
"docid": "f9ffe3af3a2f604efb6bde83f519f55c",
"text": "BIA is easy, non-invasive, relatively inexpensive and can be performed in almost any subject because it is portable. Part II of these ESPEN guidelines reports results for fat-free mass (FFM), body fat (BF), body cell mass (BCM), total body water (TBW), extracellular water (ECW) and intracellular water (ICW) from various studies in healthy and ill subjects. The data suggests that BIA works well in healthy subjects and in patients with stable water and electrolytes balance with a validated BIA equation that is appropriate with regard to age, sex and race. Clinical use of BIA in subjects at extremes of BMI ranges or with abnormal hydration cannot be recommended for routine assessment of patients until further validation has proven for BIA algorithm to be accurate in such conditions. Multi-frequency- and segmental-BIA may have advantages over single-frequency BIA in these conditions, but further validation is necessary. Longitudinal follow-up of body composition by BIA is possible in subjects with BMI 16-34 kg/m(2) without abnormal hydration, but must be interpreted with caution. Further validation of BIA is necessary to understand the mechanisms for the changes observed in acute illness, altered fat/lean mass ratios, extreme heights and body shape abnormalities.",
"title": ""
},
{
"docid": "10dc52289ed1ea2f9ae6a6afd7299492",
"text": "This work proposes a potentiostat circuit for multiple implantable sensor applications. Implantable sensors play a vital role in continuous in situ monitoring of biological phenomena in a real-time health care monitoring system. In the proposed work a three-electrode based electrochemical sensing system has been employed. In this system a fixed potential difference between the working and the reference electrodes is maintained using a potentiostat to generate a current signal in the counter electrode which is proportional to the concentration of the analyte. This potential difference between the working and the reference electrodes can be changed to detect different analytes. The designed low power potentiostat consumes only 66 µW with 2.5 volt power supply which is highly suitable for low-power implantable sensor applications. All the circuits are designed and fabricated in a 0.35-micron standard CMOS process.",
"title": ""
},
{
"docid": "a667360d5214a47efee3326536a95527",
"text": "In this paper we propose a method for automatic color extraction and indexing to support color queries of image and video databases. This approach identifies the regions within images that contain colors from predetermined color sets. By searching over a large number of color sets, a color index for the database is created in a fashion similar to that for file inversion. This allows very fast indexing of the image collection by color contents of the images. Furthermore, information about the identified regions, such as the color set, size, and location, enables a rich variety of queries that specify both color content and spatial relationships of regions. We present the single color extraction and indexing method and contrast it to other color approaches. We examine single and multiple color extraction and image query on a database of 3000 color images.",
"title": ""
},
{
"docid": "d5284538412222101f084fee2dc1acc4",
"text": "The hand is an integral component of the human body, with an incredible spectrum of functionality. In addition to possessing gross and fine motor capabilities essential for physical survival, the hand is fundamental to social conventions, enabling greeting, grooming, artistic expression and syntactical communication. The loss of one or both hands is, thus, a devastating experience, requiring significant psychological support and physical rehabilitation. The majority of hand amputations occur in working-age males, most commonly as a result of work-related trauma or as casualties sustained during combat. For millennia, humans have used state-of-the-art technology to design clever devices to facilitate the reintegration of hand amputees into society. The present article provides a historical overview of the progress in replacing a missing hand, from early iron hands intended primarily for use in battle, to today's standard body-powered and myoelectric prostheses, to revolutionary advancements in the restoration of sensorimotor control with targeted reinnervation and hand transplantation.",
"title": ""
},
{
"docid": "86f82b7fc89fa5132f9784296a322e8c",
"text": "The Developmental Eye Movement Test (DEM) is a standardized test for evaluating saccadic eye movements in children. An adult version, the Adult Developmental Eye Movement Test (A-DEM), was recently developed for Spanish-speaking adults ages 14 to 68. No version yet exists for adults over the age of 68 and normative studies for English-speaking adults are absent. However, it is not clear if the single-digit format of the DEM or the double-digit A-DEM format should be used for further test develop-",
"title": ""
},
{
"docid": "c4f6edd01cee1e44a00eca11a086a284",
"text": "In this paper we investigate the effectiveness of Recurrent Neural Networks (RNNs) in a top-N content-based recommendation scenario. Specifically, we propose a deep architecture which adopts Long Short Term Memory (LSTM) networks to jointly learn two embeddings representing the items to be recommended as well as the preferences of the user. Next, given such a representation, a logistic regression layer calculates the relevance score of each item for a specific user and we returns the top-N items as recommendations.\n In the experimental session we evaluated the effectiveness of our approach against several baselines: first, we compared it to other shallow models based on neural networks (as Word2Vec and Doc2Vec), next we evaluated it against state-of-the-art algorithms for collaborative filtering. In both cases, our methodology obtains a significant improvement over all the baselines, thus giving evidence of the effectiveness of deep learning techniques in content-based recommendation scenarios and paving the way for several future research directions.",
"title": ""
},
{
"docid": "30e89edb65cbf54b27115c037ee9c322",
"text": "AbstructIGBT’s are available with short-circuit withstand times approaching those of bipolar transistors. These IGBT’s can therefore be protected by the same relatively slow-acting circuitry. The more efficient IGBT’s, however, have lower shortcircuit withstand times. While protection of these types of IGBT’s is not difficult, it does require a reassessment of the traditional protection methods used for the bipolar transistors. An in-depth discussion on the behavior of IGBT’s under different short-circuit conditions is carried out and the effects of various parameters on permissible short-circuit time are analyzed. The paper also rethinks the problem of providing short-circuit protection in relation to the special characteristics of the most efficient IGBT’s. The pros and cons of some of the existing protection circuits are discussed and, based on the recommendations, a protection scheme is implemented to demonstrate that reliable short-circuit protection of these types of IGBT’s can be achieved without difficulty in a PWM motor-drive application. volts",
"title": ""
},
{
"docid": "229cdcef4b7a28b73d4bde192ad0cb53",
"text": "The problem of anomaly detection is a critical topic across application domains and is the subject of extensive research. Applications include finding frauds and intrusions, warning on robot safety, and many others. Standard approaches in this field exploit simple or complex system models, created by experts using detailed domain knowledge. In this paper, we put forth a statistics-based anomaly detector motivated by the fact that anomalies are sparse by their very nature. Powerful sparsity directed algorithms—namely Robust Principal Component Analysis and the Group Fused LASSO—form the basis of the methodology. Our novel unsupervised single-step solution imposes a convex optimisation task on the vector time series data of the monitored system by employing group-structured, switching and robust regularisation techniques. We evaluated our method on data generated by using a Baxter robot arm that was disturbed randomly by a human operator. Our procedure was able to outperform two baseline schemes in terms of F1 score. Generalisations to more complex dynamical scenarios are desired.",
"title": ""
},
{
"docid": "8925f16c563e3f7ab666efe58076ee59",
"text": "An incomplete method for solving the propositional satisfiability problem (or a general constraint satisfaction problem) is one that does not provide the guarantee that it will eventually either report a satisfying assignment or declare that the given formula is unsatisfiable. In practice, most such methods are biased towards the satisfiable side: they are typically run with a pre-set resource limit, after which they either produce a valid solution or report failure; they never declare the formula to be unsatisfiable. These are the kind of algorithms we will discuss in this chapter. In complexity theory terms, such algorithms are referred to as having one-sided error. In principle, an incomplete algorithm could instead be biased towards the unsatisfiable side, always providing proofs of unsatisfiability but failing to find solutions to some satisfiable instances, or be incomplete with respect to both satisfiable and unsatisfiable instances (and thus have two-sided error). Unlike systematic solvers often based on an exhaustive branching and backtracking search, incomplete methods are generally based on stochastic local search, sometimes referred to as SLS. On problems from a variety of domains, such incomplete methods for SAT can significantly outperform DPLL-based methods. Since the early 1990’s, there has been a tremendous amount of research on designing, understanding, and improving local search methods for SAT. There have also been attempts at hybrid approaches that explore combining ideas from DPLL methods and local search techniques [e.g. 39, 68, 84, 88]. We cannot do justice to all recent research in local search solvers for SAT, and will instead try to provide a brief overview and touch upon some interesting details. The interested reader is encouraged to further explore the area through some of the nearly a hundred publications we cite along the way. We begin the chapter by discussing two methods that played a key role in the success of local search for satisfiability, namely GSAT [98] and Walksat [95]. We will then discuss some extensions of these ideas, in particular clause weighting",
"title": ""
},
{
"docid": "f6ea3edc8116110d7591562f3c1d97ca",
"text": "Feature selection is an important task for data analysis and information retrieval processing, pattern classification systems, and data mining applications. It reduces the number of features by removing noisy, irrelevant and redundant data. In this paper, a novel feature selection algorithm based on Ant Colony Optimization (ACO), called Advanced Binary ACO (ABACO), is presented. Features are treated as graph nodes to construct a graph model and are fully connected to each other. In this graph, each node has two sub-nodes, one for selecting and the other for deselecting the feature. Ant colony algorithm is used to select nodes while ants should visit all features. The use of several statistical measures is examined as the heuristic function for visibility of the edges in the graph. At the end of a tour, each ant has a binary vector with the same length as the number of features, where 1 implies selecting and 0 implies deselecting the corresponding feature. The performance of proposed algorithm is compared to the performance of Binary Genetic Algorithm (BGA), Binary Particle Swarm Optimization (BPSO), CatfishBPSO, Improved Binary Gravitational Search Algorithm (IBGSA), and some prominent ACO-based algorithms on the task of feature selection on 12 well-known UCI datasets. Simulation results verify that the algorithm provides a suitable feature subset with good classification accuracy using a smaller feature set than competing feature selection methods. KeywordsFeature selection; Wrraper; Ant colony optimization (ACO); Binary ACO; Classification.",
"title": ""
},
{
"docid": "08d1a9f3edc449ff08b45caaaf56f6ad",
"text": "Despite the theoretical and demonstrated empirical significance of parental coping strategies for the wellbeing of families of children with disabilities, relatively little research has focused explicitly on coping in mothers and fathers of children with autism. In the present study, 89 parents of preschool children and 46 parents of school-age children completed a measure of the strategies they used to cope with the stresses of raising their child with autism. Factor analysis revealed four reliable coping dimensions: active avoidance coping, problem-focused coping, positive coping, and religious/denial coping. Further data analysis suggested gender differences on the first two of these dimensions but no reliable evidence that parental coping varied with the age of the child with autism. Associations were also found between coping strategies and parental stress and mental health. Practical implications are considered including reducing reliance on avoidance coping and increasing the use of positive coping strategies.",
"title": ""
},
{
"docid": "34641057a037740ec28581a798c96f05",
"text": "Vehicles are becoming complex software systems with many components and services that need to be coordinated. Service oriented architectures can be used in this domain to support intra-vehicle, inter-vehicles, and vehicle-environment services. Such architectures can be deployed on different platforms, using different communication and coordination paradigms. We argue that practical solutions should be hybrid: they should integrate and support interoperability of different paradigms. We demonstrate the concept by integrating Jini, the service-oriented technology we used within the vehicle, and JXTA, the peer to peer infrastructure we used to support interaction with the environment through a gateway service, called J2J. Initial experience with J2J is illustrated.",
"title": ""
},
{
"docid": "3a011bdec6531de3f0f9718f35591e52",
"text": "Since Markowitz (1952) formulated the portfolio selection problem, many researchers have developed models aggregating simultaneously several conflicting attributes such as: the return on investment, risk and liquidity. The portfolio manager generally seeks the best combination of stocks/assets that meets his/ her investment objectives. The Goal Programming (GP) model is widely applied to finance and portfolio management. The aim of this paper is to present the different variants of the GP model that have been applied to the financial portfolio selection problem from the 1970s to nowadays. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ee63ca73151e24ee6f0543b0914a3bb6",
"text": "The aim of this study was to investigate whether different aspects of morality predict traditional bullying and cyberbullying behaviour in a similar way. Students between 12 and 19 years participated in an online study. They reported on the frequency of different traditional and cyberbullying behaviours and completed self-report measures on moral emotions and moral values. A scenario approach with open questions was used to assess morally disengaged justifications. Tobit regressions indicated that a lack of moral values and a lack of remorse predicted both traditional and cyberbullying behaviour. Traditional bullying was strongly predictive for cyberbullying. A lack of moral emotions and moral values predicted cyberbullying behaviour even when controlling for traditional bUllying. Morally disengaged justifications were only predictive for traditional, but not for cyberbullying behaviour. The findings show that moral standards and moral affect are important to understand individual differences in engagement in both traditional and cyberforms of bUllying.",
"title": ""
},
{
"docid": "215b02216c68ba6eb2d040e8e01c1ac1",
"text": "Numerous companies are expecting their knowledge management (KM) to be performed effectively in order to leverage and transform the knowledge into competitive advantages. However, here raises a critical issue of how companies can better evaluate and select a favorable KM strategy prior to a successful KM implementation. The KM strategy selection is a kind of multiple criteria decision-making (MCDM) problem, which requires considering a large number of complex factors as multiple evaluation criteria. A robust MCDM method should consider the interactions among criteria. The analytic network process (ANP) is a relatively new MCDM method which can deal with all kinds of interactions systematically. Moreover, the Decision Making Trial and Evaluation Laboratory (DEMATEL) not only can convert the relations between cause and effect of criteria into a visual structural model, but also can be used as a way to handle the inner dependences within a set of criteria. Hence, this paper proposes an effective solution based on a combined ANP and DEMATEL approach to help companies that need to evaluate and select KM strategies. Additionally, an empirical study is presented to illustrate the application of the proposed method. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
18c7b07db34916e8406fb2c5886bcad0
|
Collective Nominal Semantic Role Labeling for Tweets
|
[
{
"docid": "f042dd6b78c65541e657c48452a1e0e4",
"text": "We present a general framework for semantic role labeling. The framework combines a machine-learning technique with an integer linear programming-based inference procedure, which incorporates linguistic and structural constraints into a global decision process. Within this framework, we study the role of syntactic parsing information in semantic role labeling. We show that full syntactic parsing information is, by far, most relevant in identifying the argument, especially, in the very first stagethe pruning stage. Surprisingly, the quality of the pruning stage cannot be solely determined based on its recall and precision. Instead, it depends on the characteristics of the output candidates that determine the difficulty of the downstream problems. Motivated by this observation, we propose an effective and simple approach of combining different semantic role labeling systems through joint inference, which significantly improves its performance. Our system has been evaluated in the CoNLL-2005 shared task on semantic role labeling, and achieves the highest F1 score among 19 participants.",
"title": ""
}
] |
[
{
"docid": "c6baff0d600c76fac0be9a71b4238990",
"text": "Nature has provided rich models for computational problem solving, including optimizations based on the swarm intelligence exhibited by fireflies, bats, and ants. These models can stimulate computer scientists to think nontraditionally in creating tools to address application design challenges.",
"title": ""
},
{
"docid": "51de174e74a94edd85bf1a88595c9f4e",
"text": "We present a complete processing chain for computing 2D occupancy grids from image sequences. A multi layer grid is introduced which serves several purposes. First the 3D points reconstructed from the images are distributed onto the underlying grid. Thereafter a virtual measurement is computed for each cell thus reducing computational complexity and rejecting potential outliers. Subsequently a height profile is updated from which the current measurement is partitioned into ground and obstacle pixels. Different height profile update strategies are tested and compared yielding a stable height profile estimation. Lastly the occupancy layer of the grid is updated. To asses the algorithm we evaluate it quantitatively by comparing the output of it to ground truth data illustrating its accuracy. We show the applicability of the algorithm by using both, dense stereo reconstructed and sparse structure and motion points. The algorithm was implemented and run online on one of our test vehicles in real time.",
"title": ""
},
{
"docid": "a7fc0958b0830e0a34a281ce0a293e6a",
"text": "Abstract Laboratory diagnostics (i.e., the total testing process) develops conventionally through a virtual loop, originally referred to as \"the brain to brain cycle\" by George Lundberg. Throughout this complex cycle, there is an inherent possibility that a mistake might occur. According to reliable data, preanalytical errors still account for nearly 60%-70% of all problems occurring in laboratory diagnostics, most of them attributable to mishandling procedures during collection, handling, preparing or storing the specimens. Although most of these would be \"intercepted\" before inappropriate reactions are taken, in nearly one fifth of the cases they can produce inappropriate investigations and unjustifiable increase in costs, while generating inappropriate clinical decisions and causing some unfortunate circumstances. Several steps have already been undertaken to increase awareness and establish a governance of this frequently overlooked aspect of the total testing process. Standardization and monitoring preanalytical variables is of foremost importance and is associated with the most efficient and well-organized laboratories, resulting in reduced operational costs and increased revenues. As such, this article is aimed at providing readers with significant updates on the total quality management of the preanalytical phase to endeavour further improvement for patient safety throughout this phase of the total testing process.",
"title": ""
},
{
"docid": "2f08b35bb6f4f9d44d1225e2d26b5395",
"text": "An efficient disparity estimation and occlusion detection algorithm for multiocular systems is presented. A dynamic programming algorithm, using a multiview matching cost as well as pure geometrical constraints, is used to estimate disparity and to identify the occluded areas in the extreme left and right views. A significant advantage of the proposed approach is that the exact number of views in which each point appears (is not occluded) can be determined. The disparity and occlusion information obtained may then be used to create virtual images from intermediate viewpoints. Furthermore, techniques are developed for the coding of occlusion and disparity information, which is needed at the receiver for the reproduction of a multiview sequence using the two encoded extreme views. Experimental results illustrate the performance of the proposed techniques.",
"title": ""
},
{
"docid": "5407b8e976d7e6e1d7aa1e00c278a400",
"text": "In his paper a 7T SRAM cell operating well in low voltages is presented. Suitable read operation structure is provided by controlling the drain induced barrier lowering (DIBL) effect and body-source voltage in the hold `1' state. The read-operation structure of the proposed cell utilizes the single transistor which leads to a larger write margin. The simulation results at 90nm TSMC CMOS demonstrate the outperforms of the proposed SRAM cell in terms of power dissipation, write margin, sensitivity to process variations as compared with the other most efficient low-voltage SRAM cells.",
"title": ""
},
{
"docid": "6064bdefac3e861bcd46fa303b0756be",
"text": "Some models of textual corpora employ text generation methods involving n-gram statistics, while others use latent topic variables inferred using the \"bag-of-words\" assumption, in which word order is ignored. Previously, these methods have not been combined. In this work, I explore a hierarchical generative probabilistic model that incorporates both n-gram statistics and latent topic variables by extending a unigram topic model to include properties of a hierarchical Dirichlet bigram language model. The model hyperparameters are inferred using a Gibbs EM algorithm. On two data sets, each of 150 documents, the new model exhibits better predictive accuracy than either a hierarchical Dirichlet bigram language model or a unigram topic model. Additionally, the inferred topics are less dominated by function words than are topics discovered using unigram statistics, potentially making them more meaningful.",
"title": ""
},
{
"docid": "b6ada5769b8ffd6eae296b60ad41c774",
"text": "Research shows that various social media platforms on Internet such as Twitter, Tumblr (micro-blogging websites), Facebook (a popular social networking website), YouTube (largest video sharing and hosting website), Blogs and discussion forums are being misused by extremist groups for spreading their beliefs and ideologies, promoting radicalization, recruiting members and creating online virtual communities sharing a common agenda. Popular microblogging websites such as Twitter are being used as a real-time platform for information sharing and communication during planning and mobilization if civil unrest related events. Applying social media intelligence for predicting and identifying online radicalization and civil unrest oriented threats is an area that has attracted several researchers’ attention over past 10 years. There are several algorithms, techniques and tools that have been proposed in existing literature to counter and combat cyber-extremism and predicting protest related events in much advance. In this paper, we conduct a literature review of all these existing techniques and do a comprehensive analysis to understand state-of-the-art, trends and research gaps. We present a one class classification approach to collect scholarly articles targeting the topics and subtopics of our research scope. We perform characterization, classification and an in-depth meta analysis meta-anlaysis of about 100 conference and journal papers to gain a better understanding of existing literature.",
"title": ""
},
{
"docid": "239644f4ecd82758ca31810337a10fda",
"text": "This paper discusses a design of stable filters withH∞ disturbance attenuation of Takagi–Sugeno fuzzy systemswith immeasurable premise variables. When we consider the filter design of Takagi–Sugeno fuzzy systems, the selection of premise variables plays an important role. If the premise variable is the state of the system, then a fuzzy system describes a wide class of nonlinear systems. In this case, however, a filter design of fuzzy systems based on parallel distributed compensator idea is infeasible. To avoid such a difficulty, we consider the premise variables uncertainties. Then we consider a robust H∞ filtering problem for such an uncertain system. A solution of the problem is given in terms of linear matrix inequalities (LMIs). Some numerical examples are given to illustrate our theory. © 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "73e6f03d67508bd2f04b955fc750c18d",
"text": "Interleaving is a key component of many digital communication systems involving error correction schemes. It provides a form of time diversity to guard against bursts of errors. Recently, interleavers have become an even more integral part of the code design itself, if we consider for example turbo and turbo-like codes. In a non-cooperative context, such as passive listening, it is a challenging problem to estimate the interleaver parameters. In this paper we propose an algorithm that allows us to estimate the parameters of the interleaver at the output of a binary symmetric channel and to locate the codewords in the interleaved block. This gives us some clues about the interleaving function used.",
"title": ""
},
{
"docid": "0fa845d3999c8198b5fb6e0b89b2726d",
"text": "Clopidogrel (SR25990C, PLAVIX) is a potent antiplatelet drug, which has been recently launched and is indicated for the prevention of vascular thrombotic events in patients at risk. Clopidogrel is inactive in vitro, and a hepatic biotransformation is necessary to express the full antiaggregating activity of the drug. Moreover, 2-oxo-clopidogrel has been previously suggested to be the essential key intermediate metabolite from which the active metabolite is formed. In the present paper, we give the evidence of the occurrence of an in vitro active metabolite after incubation of 2-oxo-clopidogrel with human liver microsomes. This metabolite was purified by liquid chromatography, and its structure was studied by a combination of mass spectometry (MS) and NMR experiments. MS results suggested that the active metabolite belongs to a family of eight stereoisomers with the following primary chemical structure: 2-[1-[1-(2-chlorophenyl)-2-methoxy-2-oxoethyl]-4-sulfanyl-3-piperidinylidene]acetic acid. Chiral supercritical fluid chromatography resolved these isomers. However, only one of the eight metabolites retained the biological activity, thus underlining the critical importance of associated absolute configuration. Because of its highly labile character, probably due to a very reactive thiol function, structural elucidation of the active metabolite was performed on the stabilized acrylonitrile derivative. Conjunction of all our results suggested that the active metabolite is of S configuration at C 7 and Z configuration at C 3-C 16 double bound.",
"title": ""
},
{
"docid": "d114f37ccb079106a728ad8fe1461919",
"text": "This paper describes a stochastic hill climbing algorithm named SHCLVND to optimize arbitrary vectorial < n ! < functions. It needs less parameters. It uses normal (Gaussian) distributions to represent probabilities which are used for generating more and more better argument vectors. The-parameters of the normal distributions are changed by a kind of Hebbian learning. Kvasnicka et al. KPP95] used algorithm Stochastic Hill Climbing with Learning (HCwL) to optimize a highly multimodal vectorial function on real numbers. We have tested proposed algorithm by optimizations of the same and a similar function and show the results in comparison to HCwL. In opposite to it algorithm SHCLVND desribed here works directly on vectors of numbers instead their bit-vector representations and uses normal distributions instead of numbers to represent probabilities. 1 Overview In Section 2 we give an introduction with the way to the algorithm. Then we describe it exactly in Section 3. There is also given a compact notation in pseudo PASCAL-code, see Section 3.4. After that we give an example: we optimize highly multimodal functions with the proposed algorithm and give some visualisations of the progress in Section 4. In Section 5 there are a short summary and some ideas for future works. At last in Section 6 we give some hints for practical use of the algorithm. 2 Introduction This paper describes a hill climbing algorithm to optimize vectorial functions on real numbers. 2.1 Motivation Flexible algorithms for optimizing any vectorial function are interesting if there is no or only a very diicult mathematical solution known, e.g. parameter adjustments to optimize with respect to some relevant property the recalling behavior of a (trained) neuronal net HKP91, Roj93], or the resulting image of some image-processing lter.",
"title": ""
},
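As a hedged sketch of the kind of search described above (per-coordinate normal distributions whose means are nudged toward good samples while their spread shrinks), the following minimal Python routine optimizes a multimodal test function. The population size, learning rate, sigma-decay schedule, and the Rastrigin benchmark are illustrative assumptions, not the paper's exact algorithm or settings.

```python
import numpy as np

def shclvnd_like(f, dim, iters=5000, pop=20, sigma0=1.0, lr=0.1, decay=0.999, seed=0):
    """Hill climbing with learning by a vector of normal distributions (sketch).

    Each coordinate keeps its own Normal(mu_i, sigma); candidates are sampled,
    mu is pulled toward the best candidate (a Hebbian-style update), and sigma
    slowly shrinks to focus the search.
    """
    rng = np.random.default_rng(seed)
    mu = rng.uniform(-5.0, 5.0, size=dim)   # start away from the origin
    sigma = sigma0                          # shared standard deviation
    best_x, best_f = mu.copy(), f(mu)
    for _ in range(iters):
        cand = rng.normal(mu, sigma, size=(pop, dim))   # sample candidate vectors
        vals = np.apply_along_axis(f, 1, cand)
        i = np.argmin(vals)                             # minimisation
        mu += lr * (cand[i] - mu)                       # move means toward the winner
        sigma *= decay                                  # gradually sharpen the distribution
        if vals[i] < best_f:
            best_x, best_f = cand[i].copy(), vals[i]
    return best_x, best_f

# Example: a highly multimodal test function (Rastrigin), used only for illustration.
rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
x, fx = shclvnd_like(rastrigin, dim=5)
print(x, fx)   # best point found; not guaranteed to be the global optimum
```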
{
"docid": "54b60f333ba4e58f2f1f2614e54b50a8",
"text": "Personalize PageRank (PPR) is an effective relevance (proximity) measure in graph mining. The goal of this paper is to efficiently compute single node relevance and top-k/highly relevant nodes without iteratively computing the relevances of all nodes. Based on a \"random surfer model\", PPR iteratively computes the relevances of all nodes in a graph until convergence for a given user preference distribution. The problem with this iterative approach is that it cannot compute the relevance of just one or a few nodes. The heart of our solution is to compute single node relevance accurately in non-iterative manner based on sparse matrix representation, and to compute top-k/highly relevant nodes exactly by pruning unnecessary relevance computations based on upper/lower relevance estimations. Our experiments show that our approach is up to seven orders of magnitude faster than the existing alternatives.",
"title": ""
},
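For context, the conventional iterative computation that the abstract above contrasts itself with (power iteration of a random surfer with restarts over a sparse matrix) can be sketched as follows; the damping factor, tolerance, and toy graph are illustrative, and this is not the paper's non-iterative single-node method.

```python
import numpy as np
import scipy.sparse as sp

def personalized_pagerank(A, preference, alpha=0.85, tol=1e-10, max_iter=200):
    """Iterative Personalized PageRank on a sparse adjacency matrix A (rows = source nodes)."""
    out_deg = np.asarray(A.sum(axis=1)).ravel()
    out_deg[out_deg == 0] = 1.0                      # avoid division by zero for dangling nodes
    P = sp.diags(1.0 / out_deg) @ A                  # row-stochastic transition matrix
    p = np.asarray(preference, dtype=float)
    p = p / p.sum()                                  # user preference (restart) distribution
    r = p.copy()
    for _ in range(max_iter):
        r_new = alpha * (P.T @ r) + (1 - alpha) * p  # random surfer with restarts
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
    return r

# Tiny example graph: 0 -> 1 -> 2 -> 0, with restarts concentrated on node 0.
A = sp.csr_matrix(np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float))
print(personalized_pagerank(A, preference=[1, 0, 0]))
```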
{
"docid": "da17b52ac5ebe572c22537c766bfd607",
"text": "A newborn's skin may exhibit a variety of changes during the first weeks of life, and rashes are extremely common in the neonatal period, representing a significant source of parental concern. In particular, a variety of skin eruptions can present as pustules. Most of them are innocuous and self-limiting, while others can be the manifestation of an infectious disease or even indicative of serious underlying disorders. Transient neonatal pustular melanosis is an uncommon vesiculopustular rash characterized by small pustules on a non-erythematous base, noted at birth or during the first day of life, without systemic symptoms. The lesions rupture spontaneously, leaving hyperpigmented macules that usually fade within few weeks. Clinical recognition of this disease can help physicians avoid unnecessary diagnostic testing and treatment for infectious etiologies because no specific therapy is recommended. The clinical aspect and time of onset are generally sufficient to make the correct diagnosis. Nevertheless, peculiar clinical presentations may require additional work-up to rule out life-threatening conditions, and dermatological consultation and histological examination are required for the final diagnosis. Conclusion: We report an exceedingly unusual presentation of transient neonatal pustular melanosis, suggesting the importance of a systematic diagnostic approach to allow a confident recognition of this benign condition.",
"title": ""
},
{
"docid": "cc5a5d74d7e6694e9a7af755528e04f8",
"text": "While deep neural networks (DNN) have become an effective computational tool, the prediction results are often criticized by the lack of interpretability, which is essential in many real-world applications such as health informatics. Existing attempts based on local interpretations aim to identify relevant features contributing the most to the prediction of DNN by monitoring the neighborhood of a given input. They usually simply ignore the intermediate layers of the DNN that might contain rich information for interpretation. To bridge the gap, in this paper, we propose to investigate a guided feature inversion framework for taking advantage of the deep architectures towards effective interpretation. The proposed framework not only determines the contribution of each feature in the input but also provides insights into the decision-making process of DNN models. By further interacting with the neuron of the target category at the output layer of the DNN, we enforce the interpretation result to be class-discriminative. We apply the proposed interpretation model to different CNN architectures to provide explanations for image data and conduct extensive experiments on ImageNet and PASCAL VOC07 datasets. The interpretation results demonstrate the effectiveness of our proposed framework in providing class-discriminative interpretation for DNN-based prediction.",
"title": ""
},
{
"docid": "611d3a8aa03bc74a526d6d8aa2daa5e0",
"text": "Solar energy utilization is one of the most promising solutions for the energy crises. Among all the possible means to make use of solar energy, solar water splitting is remarkable since it can accomplish the conversion of solar energy into chemical energy. The produced hydrogen is clean and sustainable which could be used in various areas. For the past decades, numerous efforts have been put into this research area with many important achievements. Improving the overall efficiency and stability of semiconductor photocatalysts are the research focuses for the solar water splitting. Tantalum-based semiconductors, including tantalum oxide, tantalate and tantalum (oxy)nitride, are among the most important photocatalysts. Tantalum oxide has the band gap energy that is suitable for the overall solar water splitting. The more negative conduction band minimum of tantalum oxide provides photogenerated electrons with higher potential for the hydrogen generation reaction. Tantalates, with tunable compositions, show high activities owning to their layered perovskite structure. (Oxy)nitrides, especially TaON and Ta3N5, have small band gaps to respond to visible-light, whereas they can still realize overall solar water splitting with the proper positions of conduction band minimum and valence band maximum. This review describes recent progress regarding the improvement of photocatalytic activities of tantalum-based semiconductors. Basic concepts and principles of solar water splitting will be discussed in the introduction section, followed by the three main categories regarding to the different types of tantalum-based semiconductors. In each category, synthetic methodologies, influencing factors on the photocatalytic activities, strategies to enhance the efficiencies of photocatalysts and morphology control of tantalum-based materials will be discussed in detail. Future directions to further explore the research area of tantalum-based semiconductors for solar water splitting are also discussed.",
"title": ""
},
{
"docid": "2739acca1a61ca8b2738b1312ab857ab",
"text": "The Telecare Medical Information System (TMIS) provides a set of different medical services to the patient and medical practitioner. The patients and medical practitioners can easily connect to the services remotely from their own premises. There are several studies carried out to enhance and authenticate smartcard-based remote user authentication protocols for TMIS system. In this article, we propose a set of enhanced and authentic Three Factor (3FA) remote user authentication protocols utilizing a smartphone capability over a dynamic Cloud Computing (CC) environment. A user can access the TMIS services presented in the form of CC services using his smart device e.g. smartphone. Our framework transforms a smartphone to act as a unique and only identity required to access the TMIS system remotely. Methods, Protocols and Authentication techniques are proposed followed by security analysis and a performance analysis with the two recent authentication protocols proposed for the healthcare TMIS system.",
"title": ""
},
{
"docid": "4ef3b9e8f0b44300db71421f71a755f9",
"text": "Digital images are often corrupted by impulsive noise during data acquisition, transmission, and processing. This paper presents a turbulent particle swarm optimization (PSO) (TPSO)-based fuzzy filtering (or TPFF for short) approach to remove impulse noise from highly corrupted images. The proposed fuzzy filter contains a parallel fuzzy inference mechanism, a fuzzy mean process, and a fuzzy composition process. To a certain extent, the TPFF is an improved and online version of those genetic-based algorithms which had attracted a number of works during the past years. As the PSO is renowned for its ability of achieving success rate and solution quality, the superiority of the TPFF is almost for sure. In particular, by using a no-reference Q metric, the TPSO learning is sufficient to optimize the parameters necessitated by the TPFF. Therefore, the proposed fuzzy filter can cope with practical situations where the assumption of the existence of the “ground-truth” reference does not hold. The experimental results confirm that the TPFF attains an excellent quality of restored images in terms of peak signal-to-noise ratio, mean square error, and mean absolute error even when the noise rate is above 0.5 and without the aid of noise-free images.",
"title": ""
},
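As a hedged illustration of the optimization engine mentioned above, the sketch below is plain particle swarm optimization minimizing a generic parameter-quality score; it is not the "turbulent" PSO variant or the fuzzy filter itself, and the inertia/acceleration constants and toy objective are assumptions.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
                 bounds=(-5.0, 5.0), seed=0):
    """Plain particle swarm optimisation (sketch): each particle tracks its personal
    best, and the swarm tracks a global best that attracts all velocities."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))        # positions
    v = np.zeros_like(x)                                     # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()                   # global best position
    g_val = pbest_val.min()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        if vals.min() < g_val:
            g, g_val = x[np.argmin(vals)].copy(), vals.min()
    return g, g_val

# Example: tune two hypothetical filter parameters against a quality score to be minimised.
score = lambda p: (p[0] - 1.2) ** 2 + (p[1] + 0.4) ** 2
print(pso_minimize(score, dim=2))
```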
{
"docid": "06efa7ddc20dc499c9db3127217883ce",
"text": "The development of Space Shuttle software posed unique requirements above and beyond raw size (30 times larger than Saturn V software), complexity, and criticality.",
"title": ""
},
{
"docid": "6efea39e2c72dcff5a204321bd7fbcd6",
"text": "Global demand for macroalgal and microalgal foods is growing, and algae are increasingly being consumed for functional benefits beyond the traditional considerations of nutrition and health. There is substantial evidence for the health benefits of algal-derived food products, but there remain considerable challenges in quantifying these benefits, as well as possible adverse effects. First, there is a limited understanding of nutritional composition across algal species, geographical regions, and seasons, all of which can substantially affect their dietary value. The second issue is quantifying which fractions of algal foods are bioavailable to humans, and which factors influence how food constituents are released, ranging from food preparation through genetic differentiation in the gut microbiome. Third is understanding how algal nutritional and functional constituents interact in human metabolism. Superimposed considerations are the effects of harvesting, storage, and food processing techniques that can dramatically influence the potential nutritive value of algal-derived foods. We highlight this rapidly advancing area of algal science with a particular focus on the key research required to assess better the health benefits of an alga or algal product. There are rich opportunities for phycologists in this emerging field, requiring exciting new experimental and collaborative approaches.",
"title": ""
},
{
"docid": "fde1cfa5507b801c1080036df7a50320",
"text": "AHRS is an important system that provides the 3-dimensional orientation (roll, pitch and yaw or heading) of an aircraft from which the flying performance can be evaluated. The avionics of aircraft are very sophisticated and majority of them do not allow any external interface for logging. As such, a standalone external AHRS is needed which should be comprised of low cost devices so that it becomes affordable for other aircrafts. This paper proposes a low cost Micro-Electrical-Mechanical system (MEMS) based AHRS for low speed aircraft to evaluate the flying performance. MEMS based 3-axis accelerometer, gyroscope and magnetometer have been used to build the AHRS. Low cost MEMS based sensors suffer from bias, drift and sudden spike. Low pass filter has been utilized to reduce the spikes. Complementary filter has been used to pin down the drift of the gyroscope from accelerometer reading. The system has been implemented in Arduino environment. Simulation and real time results have shown correct functioning of the system. Besides using as an AHRS in low speed aircraft, the system can be used for control and stabilization of other real time system.",
"title": ""
}
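A minimal sketch of the accelerometer/gyroscope fusion step mentioned in the abstract above (a complementary filter for roll and pitch; yaw from the magnetometer is not shown). The blending coefficient, sample period, and test values are illustrative assumptions, not the paper's implementation.

```python
import math

def complementary_update(roll, pitch, gyro, accel, dt, alpha=0.98):
    """One complementary-filter step for roll/pitch (radians).

    gyro  = (gx, gy, gz) angular rates in rad/s
    accel = (ax, ay, az) specific force in any consistent unit
    The gyro integral gives a smooth short-term estimate that drifts; the
    accelerometer gives a noisy but drift-free gravity reference; alpha blends them.
    """
    gx, gy, _ = gyro
    ax, ay, az = accel
    # Tilt angles implied by the gravity vector measured by the accelerometer.
    roll_acc = math.atan2(ay, az)
    pitch_acc = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    # High-pass the gyro integration, low-pass the accelerometer estimate.
    roll = alpha * (roll + gx * dt) + (1 - alpha) * roll_acc
    pitch = alpha * (pitch + gy * dt) + (1 - alpha) * pitch_acc
    return roll, pitch

# Example: level and static sensor, so the estimate decays toward zero tilt.
r = p = 0.1
for _ in range(100):
    r, p = complementary_update(r, p, gyro=(0.0, 0.0, 0.0), accel=(0.0, 0.0, 9.81), dt=0.01)
print(r, p)
```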
] |
scidocsrr
|
1893c534ca6b7cf730681b0f51eba4e4
|
The enactive mind, or from actions to cognition: lessons from autism.
|
[
{
"docid": "fe62e3a9acfe5009966434aa1f39099d",
"text": "Previous studies have found a subgroup of people with autism or Asperger Syndrome who pass second-order tests of theory of mind. However, such tests have a ceiling in developmental terms corresponding to a mental age of about 6 years. It is therefore impossible to say if such individuals are intact or impaired in their theory of mind skills. We report the performance of very high functioning adults with autism or Asperger Syndrome on an adult test of theory of mind ability. The task involved inferring the mental state of a person just from the information in photographs of a person's eyes. Relative to age-matched normal controls and a clinical control group (adults with Tourette Syndrome), the group with autism and Asperger Syndrome were significantly impaired on this task. The autism and Asperger Syndrome sample was also impaired on Happé's strange stories tasks. In contrast, they were unimpaired on two control tasks: recognising gender from the eye region of the face, and recognising basic emotions from the whole face. This provides evidence for subtle mindreading deficits in very high functioning individuals on the autistic continuum.",
"title": ""
},
{
"docid": "8390cef591c0401e22c5999633c71b02",
"text": "Recent neuroimaging studies in adults indicate that visual areas selective for recognition of faces can be recruited through expertise for nonface objects. This reflects a new emphasis on experience in theories of visual specialization. In addition, novel work infers differences between categories of nonface objects, allowing a re-interpretation of differences seen between recognition of faces and objects. Whether there are experience-independent precursors of face expertise remains unclear; indeed, parallels between literature for infants and adults suggest that methodological issues need to be addressed before strong conclusions can be drawn regarding the origins of face recognition.",
"title": ""
}
] |
[
{
"docid": "d4488867e774e28abc2b960a9434d052",
"text": "Understanding how images of objects and scenes behave in response to specific egomotions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose a new “embodied” visual learning paradigm, exploiting proprioceptive motor signals to train visual representations from egocentric video with no manual supervision. Specifically, we enforce that our learned features exhibit equivariance i.e., they respond predictably to transformations associated with distinct egomotions. With three datasets, we show that our unsupervised feature learning approach significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in static images from a disjoint domain.",
"title": ""
},
{
"docid": "9a1f69647c56d377f4592247d7e1688d",
"text": "We propose a novel solution for computing the relative pose between two generalized cameras that includes reconciling the internal scale of the generalized cameras. This approach can be used to compute a similarity transformation between two coordinate systems, making it useful for loop closure in visual odometry and registering multiple structure from motion reconstructions together. In contrast to alternative similarity transformation methods, our approach uses 2D-2D image correspondences thus is not subject to the depth uncertainty that often arises with 3D points. We utilize a known vertical direction (which may be easily obtained from IMU data or vertical vanishing point detection) of the generalized cameras to solve the generalized relative pose and scale problem as an efficient Quadratic Eigenvalue Problem. To our knowledge, this is the first method for computing similarity transformations that does not require any 3D information. Our experiments on synthetic and real data demonstrate that this leads to improved performance compared to methods that use 3D-3D or 2D-3D correspondences, especially as the depth of the scene increases.",
"title": ""
},
{
"docid": "53b2e1524dad8dbb9bbfb967d5ce2736",
"text": "Cardiac output (CO) monitoring is essential for indicating the perfusion status of the human cardiovascular system under different physiological conditions. However, it is currently limited to hospital use due to the need for either skilled operators or big, expensive measurement devices. Therefore, in this paper we devise a new CO indicator which can easily be incorporated into existing wearable devices. To this end, we propose an index, the inflection and harmonic area ratio (IHAR), from standard photoplethysmographic (PPG) signals, which can be used to continuously monitor CO. We evaluate the success of our index by testing on sixteen normotensive subjects before and after bicycle exercise. The results showed a strong intra-subject correlation between IHAR and COimp measured by the bio-impedance method in fifteen subjects (mean r = 3D 0.82, p<0.01). After least squares linear regression, the precision between COimp and CO estimated from IHAR (COIHAR) was 1.40 L/min. The total percentage error of the results was 16.2%, which was well below the clinical acceptance limit of 30%. The results suggest that IHAR is a promising indicator for wearable and noninvasive CO monitoring.",
"title": ""
},
{
"docid": "ebb4bf38c87364cdad5764d3d5f5713e",
"text": "IMPORTANCE\nAlthough several longitudinal studies have demonstrated an effect of violent video game play on later aggressive behavior, little is known about the psychological mediators and moderators of the effect.\n\n\nOBJECTIVE\nTo determine whether cognitive and/or emotional variables mediate the effect of violent video game play on aggression and whether the effect is moderated by age, sex, prior aggressiveness, or parental monitoring.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nThree-year longitudinal panel study. A total of 3034 children and adolescents from 6 primary and 6 secondary schools in Singapore (73% male) were surveyed annually. Children were eligible for inclusion if they attended one of the 12 selected schools, 3 of which were boys' schools. At the beginning of the study, participants were in third, fourth, seventh, and eighth grades, with a mean (SD) age of 11.2 (2.1) years (range, 8-17 years). Study participation was 99% in year 1.\n\n\nMAIN OUTCOMES AND MEASURES\nThe final outcome measure was aggressive behavior, with aggressive cognitions (normative beliefs about aggression, hostile attribution bias, aggressive fantasizing) and empathy as potential mediators.\n\n\nRESULTS\nLongitudinal latent growth curve modeling demonstrated that the effects of violent video game play are mediated primarily by aggressive cognitions. This effect is not moderated by sex, prior aggressiveness, or parental monitoring and is only slightly moderated by age, as younger children had a larger increase in initial aggressive cognition related to initial violent game play at the beginning of the study than older children. Model fit was excellent for all models.\n\n\nCONCLUSIONS AND RELEVANCE\nGiven that more than 90% of youths play video games, understanding the psychological mechanisms by which they can influence behaviors is important for parents and pediatricians and for designing interventions to enhance or mitigate the effects.",
"title": ""
},
{
"docid": "b2612334017b1b342f025dce23fda554",
"text": "In the development of a syllable-centric automatic speech recognition (ASR) system, segmentation of the acoustic signal into syllabic units is an important stage. Although the short-term energy (STE) function contains useful information about syllable segment boundaries, it has to be processed before segment boundaries can be extracted. This paper presents a subband-based group delay approach to segment spontaneous speech into syllable-like units. This technique exploits the additive property of the Fourier transform phase and the deconvolution property of the cepstrum to smooth the STE function of the speech signal and make it suitable for syllable boundary detection. By treating the STE function as a magnitude spectrum of an arbitrary signal, a minimum-phase group delay function is derived. This group delay function is found to be a better representative of the STE function for syllable boundary detection. Although the group delay function derived from the STE function of the speech signal contains segment boundaries, the boundaries are difficult to determine in the context of long silences, semivowels, and fricatives. In this paper, these issues are specifically addressed and algorithms are developed to improve the segmentation performance. The speech signal is first passed through a bank of three filters, corresponding to three different spectral bands. The STE functions of these signals are computed. Using these three STE functions, three minimum-phase group delay functions are derived. By combining the evidence derived from these group delay functions, the syllable boundaries are detected. Further, a multiresolutionbased technique is presented to overcome the problem of shift in segment boundaries during smoothing. Experiments carried out on the Switchboard and OGI-MLTS corpora show that the error in segmentation is at most 25milliseconds for 67% and 76.6% of the syllable segments, respectively.",
"title": ""
},
{
"docid": "912c4601f8c6e31107b21233ee871a6b",
"text": "The physiological mechanisms that control energy balance are reciprocally linked to those that control reproduction, and together, these mechanisms optimize reproductive success under fluctuating metabolic conditions. Thus, it is difficult to understand the physiology of energy balance without understanding its link to reproductive success. The metabolic sensory stimuli, hormonal mediators and modulators, and central neuropeptides that control reproduction also influence energy balance. In general, those that increase ingestive behavior inhibit reproductive processes, with a few exceptions. Reproductive processes, including the hypothalamic-pituitary-gonadal (HPG) system and the mechanisms that control sex behavior are most proximally sensitive to the availability of oxidizable metabolic fuels. The role of hormones, such as insulin and leptin, are not understood, but there are two possible ways they might control food intake and reproduction. They either mediate the effects of energy metabolism on reproduction or they modulate the availability of metabolic fuels in the brain or periphery. This review examines the neural pathways from fuel detectors to the central effector system emphasizing the following points: first, metabolic stimuli can directly influence the effector systems independently from the hormones that bind to these central effector systems. For example, in some cases, excess energy storage in adipose tissue causes deficits in the pool of oxidizable fuels available for the reproductive system. Thus, in such cases, reproduction is inhibited despite a high body fat content and high plasma concentrations of hormones that are thought to stimulate reproductive processes. The deficit in fuels creates a primary sensory stimulus that is inhibitory to the reproductive system, despite high concentrations of hormones, such as insulin and leptin. Second, hormones might influence the central effector systems [including gonadotropin-releasing hormone (GnRH) secretion and sex behavior] indirectly by modulating the metabolic stimulus. Third, the critical neural circuitry involves extrahypothalamic sites, such as the caudal brain stem, and projections from the brain stem to the forebrain. Catecholamines, neuropeptide Y (NPY) and corticotropin-releasing hormone (CRH) are probably involved. Fourth, the metabolic stimuli and chemical messengers affect the motivation to engage in ingestive and sex behaviors instead of, or in addition to, affecting the ability to perform these behaviors. Finally, it is important to study these metabolic events and chemical messengers in a wider variety of species under natural or seminatural circumstances.",
"title": ""
},
{
"docid": "6cd4ed54497e30aba681b1e2bc79d29c",
"text": "Industrial systems consider only partially security, mostly relying on the basis of “isolated” networks, and controlled access environments. Monitoring and control systems such as SCADA/DCS are responsible for managing critical infrastructures operate in these environments, where a false sense of security assumptions is usually made. The Stuxnet worm attack demonstrated widely in mid 2010 that many of the security assumptions made about the operating environment, technological capabilities and potential threat risk analysis are far away from the reality and challenges modern industrial systems face. We investigate in this work the highly sophisticated aspects of Stuxnet, the impact that it may have on existing security considerations and pose some thoughts on the next generation SCADA/DCS systems from a security perspective.",
"title": ""
},
{
"docid": "1ec80e919d847675ce36ca16b7da0c67",
"text": "After more than 12 years of development, the ninth edition of the Present State Examination (PSE-9) was published, together with associated instruments and computer algorithm, in 1974. The system has now been expanded, in the framework of the World Health Organization/Alcohol, Drug Abuse, and Mental Health Administration Joint Project on Standardization of Diagnosis and Classification, and is being tested with the aim of developing a comprehensive procedure for clinical examination that is also capable of generating many of the categories of the International Classification of Diseases, 10th edition, and the Diagnostic and Statistical Manual of Mental Disorders, revised third edition. The new system is known as SCAN (Schedules for Clinical Assessment in Neuropsychiatry). It includes the 10th edition of the PSE as one of its core schedules, preliminary tests of which have suggested that reliability is similar to that of PSE-9. SCAN is being field tested in 20 centers in 11 countries. A final version is expected to be available in January 1990.",
"title": ""
},
{
"docid": "866f1b980b286f6ed3ace9caf0dc415a",
"text": "In this letter, we propose a road structure refined convolutional neural network (RSRCNN) approach for road extraction in aerial images. In order to obtain structured output of road extraction, both deconvolutional and fusion layers are designed in the architecture of RSRCNN. For training RSRCNN, a new loss function is proposed to incorporate the geometric information of road structure in cross-entropy loss, thus called road-structure-based loss function. Experimental results demonstrate that the trained RSRCNN model is able to advance the state-of-the-art road extraction for aerial images, in terms of precision, recall, F-score, and accuracy.",
"title": ""
},
{
"docid": "21abc097d58698c5eae1cddab9bf884e",
"text": "Advances in deep reinforcement learning have allowed autonomous agents to perform well on Atari games, often outperforming humans, using only raw pixels to make their decisions. However, most of these games take place in 2D environments that are fully observable to the agent. In this paper, we present the first architecture to tackle 3D environments in first-person shooter games, that involve partially observable states. Typically, deep reinforcement learning methods only utilize visual input for training. We present a method to augment these models to exploit game feature information such as the presence of enemies or items, during the training phase. Our model is trained to simultaneously learn these features along with minimizing a Q-learning objective, which is shown to dramatically improve the training speed and performance of our agent. Our architecture is also modularized to allow different models to be independently trained for different phases of the game. We show that the proposed architecture substantially outperforms built-in AI agents of the game as well as average humans in deathmatch scenarios.",
"title": ""
},
{
"docid": "a470aa1ba955cdb395b122daf2a17b6a",
"text": "Many real-world sequential decision making problems are partially observable by nature, and the environment model is typically unknown. Consequently, there is great need for reinforcement learning methods that can tackle such problems given only a stream of rewards and incomplete and noisy observations. In this paper, we propose deep variational reinforcement learning (DVRL), which introduces an inductive bias that allows an agent to learn a generative model of the environment and perform inference in that model to effectively aggregate the available information. We develop an n-step approximation to the evidence lower bound (ELBO), allowing the model to be trained jointly with the policy. This ensures that the latent state representation is suitable for the control task. In experiments on Mountain Hike and flickering Atari we show that our method outperforms previous approaches relying on recurrent neural networks to encode the past.",
"title": ""
},
{
"docid": "ea029be1081beef8f2faf7e61787ae57",
"text": "Discriminative learning machines often need a large set of labeled samples for training. Active learning (AL) settings assume that the learner has the freedom to ask an oracle to label its desired samples. Traditional AL algorithms heuristically choose query samples about which the current learner is uncertain. This strategy does not make good use of the structure of the dataset at hand and is prone to be misguided by outliers. To alleviate this problem, we propose to distill the structural information into a probabilistic generative model which acts as a teacher in our model. The active learner uses this information effectively at each cycle of active learning. The proposed method is generic and does not depend on the type of learner and teacher. We then suggest a query criterion for active learning that is aware of distribution of data and is more robust against outliers. Our method can be combined readily with several other query criteria for active learning. We provide the formulation and empirically show our idea via toy and real examples.",
"title": ""
},
{
"docid": "0570bf6abea7b8c4dcad1fb05b9672c6",
"text": "The purpose of this chapter is to describe some similarities, as well as differences, between theoretical proposals emanating from the tradition of phenomenology and the currently popular approach to language and cognition known as cognitive linguistics (hence CL). This is a rather demanding and potentially controversial topic. For one thing, neither CL nor phenomenology constitute monolithic theories, and are actually rife with internal controversies. This forces me to make certain “schematizations”, since it is impossible to deal with the complexity of these debates in the space here allotted.",
"title": ""
},
{
"docid": "d197eacce97d161e4292ba541f8bed57",
"text": "A Luenberger-based observer is proposed to the state estimation of a class of nonlinear systems subject to parameter uncertainty and bounded disturbance signals. A nonlinear observer gain is designed in order to minimize the effects of the uncertainty, error estimation and exogenous signals in an 7-L, sense by means of a set of state- and parameterdependent linear matrix inequalities that are solved using standard software packages. A numerical example illustrates the approach.",
"title": ""
},
{
"docid": "5d9d3e53d428e0613e7e415e864dea43",
"text": "Feature selection, as a data preprocessing strategy, has been proven to be effective and efficient in preparing data (especially high-dimensional data) for various data-mining and machine-learning problems. The objectives of feature selection include building simpler and more comprehensible models, improving data-mining performance, and preparing clean, understandable data. The recent proliferation of big data has presented some substantial challenges and opportunities to feature selection. In this survey, we provide a comprehensive and structured overview of recent advances in feature selection research. Motivated by current challenges and opportunities in the era of big data, we revisit feature selection research from a data perspective and review representative feature selection algorithms for conventional data, structured data, heterogeneous data and streaming data. Methodologically, to emphasize the differences and similarities of most existing feature selection algorithms for conventional data, we categorize them into four main groups: similarity-based, information-theoretical-based, sparse-learning-based, and statistical-based methods. To facilitate and promote the research in this community, we also present an open source feature selection repository that consists of most of the popular feature selection algorithms (http://featureselection.asu.edu/). Also, we use it as an example to show how to evaluate feature selection algorithms. At the end of the survey, we present a discussion about some open problems and challenges that require more attention in future research.",
"title": ""
},
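As a small, hedged example of one family surveyed above (information-theoretical feature selection), the snippet below scores features by mutual information with the label and keeps the top k using scikit-learn; the dataset and k are illustrative choices, not a recommendation from the survey.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Score each feature by its mutual information with the class label and keep the top k.
X, y = load_breast_cancer(return_X_y=True)
selector = SelectKBest(score_func=mutual_info_classif, k=10)
X_reduced = selector.fit_transform(X, y)

print(X.shape, "->", X_reduced.shape)                 # e.g. (569, 30) -> (569, 10)
print("kept feature indices:", selector.get_support(indices=True))
```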
{
"docid": "872d589cd879dee7d88185851b9546ab",
"text": "Considering few treatments are available to slow or stop neurodegenerative disorders, such as Alzheimer’s disease and related dementias (ADRD), modifying lifestyle factors to prevent disease onset are recommended. The Voice, Activity, and Location Monitoring system for Alzheimer’s disease (VALMA) is a novel ambulatory sensor system designed to capture natural behaviours across multiple domains to profile lifestyle risk factors related to ADRD. Objective measures of physical activity and sleep are provided by lower limb accelerometry. Audio and GPS location records provide verbal and mobility activity, respectively. Based on a familiar smartphone package, data collection with the system has proven to be feasible in community-dwelling older adults. Objective assessments of everyday activity will impact diagnosis of disease and design of exercise, sleep, and social interventions to prevent and/or slow disease progression.",
"title": ""
},
{
"docid": "2223bfd504f5552df290bdaec0553a36",
"text": "Department of Computer Information Systems, J. Mack Robinson College of Business, Georgia State University, Atlanta, USA; Department of Information Technology & Decision Sciences, Strome College of Business, Old Dominion University, Norfolk, USA; Management Science and Information Systems Department, College of Management, University of Massachusetts Boston, 100 Morrissey Blvd., Boston, MA 02125, USA; Department of Computer Information Systems, Zicklin School of Business, The City University of New York, New York, USA; Department of Business Information Technology, Pamplin College of Business, Virginia Polytechnic Institute and State University, Blacksburg, USA",
"title": ""
},
{
"docid": "42d02b9eb6a0328967a437a9463b42b5",
"text": "The WiSafeCar (Wireless Traffic Safety Network between Cars) project aims at increasing the performance and reliability of the wireless transport and to provide traffic safety improvements. Within the context of this project, we have designed a Dynamic Carpooling System that will optimize the transport utilization by the ride sharing among people who usually cover the same route. An initial prototype of the system has been developed by using NetLogo. The information obtained from this simulator will be used to study the functioning of the clearing services, the current business models and to propose new ones. The first results seem encouraging, and the users have many economical advantages thanks to the sharing of costs which allows the individuals to retrench expenses and to contribute to the use of green technologies.",
"title": ""
},
{
"docid": "5d9c9dd2a1d9e85ead934ec6cdcbb1cb",
"text": "Acute myeloid leukemia (AML) is the most common type of acute leukemia in adults. AML is a heterogeneous malignancy characterized by distinct genetic and epigenetic abnormalities. Recent genome-wide DNA methylation studies have highlighted an important role of dysregulated methylation signature in AML from biological and clinical standpoint. In this review, we will outline the recent advances in the methylome study of AML and overview the impacts of DNA methylation on AML diagnosis, treatment, and prognosis.",
"title": ""
},
{
"docid": "0c27f28fca4f5c5672e3bffc9f629170",
"text": "This paper presents a novel iris recognition system using 1D log polar Gabor wavelet and Euler numbers. 1D log polar Gabor wavelet is used to extract the textural features, and Euler numbers are used to extract topological features of the iris. The proposed decision strategy uses these features to authenticate an individual’s identity while maintaining a low false rejection rate. The algorithm was tested on CASIA iris image database and found to perform better than existing approaches with an overall accuracy of 99.93%. Keywords—Iris recognition, textural features, topological features.",
"title": ""
}
] |
scidocsrr
|
8c76ce484cc5893192ff4bb375ba662e
|
Analysis of Docker Security
|
[
{
"docid": "7f06370a81e7749970cd0359c5b5f993",
"text": "The use of virtualization technologies in high performance computing (HPC) environments has traditionally been avoided due to their inherent performance overhead. However, with the rise of container-based virtualization implementations, such as Linux VServer, OpenVZ and Linux Containers (LXC), it is possible to obtain a very low overhead leading to near-native performance. In this work, we conducted a number of experiments in order to perform an in-depth performance evaluation of container-based virtualization for HPC. We also evaluated the trade-off between performance and isolation in container-based virtualization systems and compared them with Xen, which is a representative of the traditional hypervisor-based virtualization systems used today.",
"title": ""
}
] |
[
{
"docid": "19c6f2b03624f41acc5fb060bff04c64",
"text": "Estimation of binocular disparity in vision systems is typically based on a matching pipeline and rectification. Estimation of disparity in the brain, in contrast, is widely assumed to be based on the comparison of local phase information from binocular receptive fields. The classic binocular energy model shows that this requires the presence of local quadrature pairs within the eye which show phaseor position-shifts across the eyes. While numerous theoretical accounts of stereopsis have been based on these observations, there has been little work on how energy models and depth inference may emerge through learning from the statistics of image pairs. Here, we describe a probabilistic, deep learning approach to modeling disparity and a methodology for generating binocular training data to estimate model parameters. We show that within-eye quadrature filters occur as a result of fitting the model to data, and we demonstrate how a three-layer network can learn to infer depth entirely from training data. We also show how training energy models can provide depth cues that are useful for recognition. We also show that pooling over more than two filters leads to richer dependencies between the learned filters.",
"title": ""
},
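A hedged numpy sketch of the classic phase-shift binocular energy model referenced above: quadrature Gabor pairs per eye, with the right eye's filters phase-shifted, and a complex-cell energy that peaks when the phase shift matches the stimulus disparity. The filter frequency, envelope width, and random-pattern stimulus are illustrative; this is not the learned model of the paper.

```python
import numpy as np

def gabor_pair(size=64, freq=0.1, sigma=8.0, phase=0.0):
    """Quadrature pair of 1-D Gabor filters (even/odd) sharing one Gaussian envelope."""
    x = np.arange(size) - size / 2
    env = np.exp(-x**2 / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * x + phase), env * np.sin(2 * np.pi * freq * x + phase)

def binocular_energy(left, right, dphase, freq=0.1):
    """Energy response of a unit whose right-eye filters are phase-shifted by dphase."""
    le, lo = gabor_pair(len(left), freq=freq, phase=0.0)
    re, ro = gabor_pair(len(right), freq=freq, phase=dphase)
    s_even = left @ le + right @ re        # binocular simple cells sum the two eyes
    s_odd = left @ lo + right @ ro
    return s_even**2 + s_odd**2            # complex cell: square and sum the quadrature pair

rng = np.random.default_rng(0)
freq = 0.1
phases = np.linspace(-np.pi, np.pi, 81)
resp = np.zeros_like(phases)
for _ in range(50):                        # average over random 1-D patterns
    pattern = rng.standard_normal(200)
    left, right = pattern[:64], pattern[3:67]   # the two eyes see the pattern offset by 3 samples
    resp += np.array([binocular_energy(left, right, dp, freq) for dp in phases])
best = phases[int(np.argmax(resp))]
# The peak should sit near 2*pi*freq*3 (about 1.88 rad), i.e. an implied offset of roughly 3 samples.
print("preferred phase shift:", best, "implied offset:", best / (2 * np.pi * freq))
```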
{
"docid": "866fd6d60fc835080dff69f6143348fd",
"text": "In this paper we consider the problem of classifying shapes within a given category (e.g., chairs) into finer-grained classes (e.g., chairs with arms, rocking chairs, swivel chairs). We introduce a multi-label (i.e., shapes can belong to multiple classes) semi-supervised approach that takes as input a large shape collection of a given category with associated sparse and noisy labels, and outputs cleaned and complete labels for each shape. The key idea of the proposed approach is to jointly learn a distance metric for each class which captures the underlying geometric similarity within that class, e.g., the distance metric for swivel chairs evaluates the global geometric resemblance of chair bases. We show how to achieve this objective by first geometrically aligning the input shapes, and then learning the class-specific distance metrics by exploiting the feature consistency provided by this alignment. The learning objectives consider both labeled data and the mutual relations between the distance metrics. Given the learned metrics, we apply a graph-based semi-supervised classification technique to generate the final classification results.\n In order to evaluate the performance of our approach, we have created a benchmark data set where each shape is provided with a set of ground truth labels generated by Amazon's Mechanical Turk users. The benchmark contains a rich variety of shapes in a number of categories. Experimental results show that despite this variety, given very sparse and noisy initial labels, the new method yields results that are superior to state-of-the-art semi-supervised learning techniques.",
"title": ""
},
{
"docid": "2a600bc7d6e35335e1514597aa4c7a79",
"text": "Since the 2000s, Business Process Management (BPM) has evolved into a comprehensively studied discipline that goes beyond the boundaries of particular business processes. By also affecting enterprise-wide capabilities (such as an organisational culture and structure that support a processoriented way of working), BPM can now correctly be called Business Process Orientation (BPO). Meanwhile, various maturity models have been developed to help organisations adopt a processoriented way of working based on step-by-step best practices. The present article reports on a case study in which the process portfolio of an organisation is assessed by different maturity models that each cover a different set of process-oriented capabilities. The purpose is to reflect on how business process maturity is currently measured, and to explore relevant considerations for practitioners, scholars and maturity model designers. Therefore, we investigate a possible difference in maturity scores that are obtained based on model-related characteristics (e.g. capabilities, scale and calculation technique) and respondent-related characteristics (e.g. organisational function). For instance, based on an experimental design, the original maturity scores are recalculated for different maturity scales and different calculation techniques. Follow-up research can broaden our experiment from multiple maturity models in a single case to multiple maturity models in multiple cases.",
"title": ""
},
{
"docid": "ce94ff17f677b6c2c6c81295fa53b8df",
"text": "The Information Artifact Ontology (IAO) was created to serve as a domain‐neutral resource for the representation of types of information content entities (ICEs) such as documents, data‐bases, and digital im‐ ages. We identify a series of problems with the current version of the IAO and suggest solutions designed to advance our understanding of the relations between ICEs and associated cognitive representations in the minds of human subjects. This requires embedding IAO in a larger framework of ontologies, including most importantly the Mental Func‐ tioning Ontology (MFO). It also requires a careful treatment of the aboutness relations between ICEs and associated cognitive representa‐ tions and their targets in reality.",
"title": ""
},
{
"docid": "66363a46aa21f982d5934ff7a88efa6f",
"text": "Ensuring that organizational IT is in alignment with and provides support for an organization’s business strategy is critical to business success. Despite this, business strategy and strategic alignment issues are all but ignored in the requirements engineering research literature. We present B-SCP, a requirements engineering framework for organizational IT that directly addresses an organization’s business strategy and the alignment of IT requirements with that strategy. B-SCP integrates the three themes of strategy, context, and process using a requirements engineering notation for each theme. We demonstrate a means of cross-referencing and integrating the notations with each other, enabling explicit traceability between business processes and business strategy. In addition, we show a means of defining requirements problem scope as a Jackson problem diagram by applying a business modeling framework. Our approach is illustrated via application to an exemplar. The case example demonstrates the feasibility of B-SCP, and we present a comparison with other approaches. q 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e85397e0dbb7862fd292da4d0c61c6de",
"text": "Summary\nCrocoBLAST is a tool for dramatically speeding up BLAST+ execution on any computer. Alignments that would take days or weeks with NCBI BLAST+ can be run overnight with CrocoBLAST. Additionally, CrocoBLAST provides features critical for NGS data analysis, including: results identical to those of BLAST+; compatibility with any BLAST+ version; real-time information regarding calculation progress and remaining run time; access to partial alignment results; queueing, pausing, and resuming BLAST+ calculations without information loss.\n\n\nAvailability and implementation\nCrocoBLAST is freely available online, with ample documentation (webchem.ncbr.muni.cz/Platform/App/CrocoBLAST). No installation or user registration is required. CrocoBLAST is implemented in C, while the graphical user interface is implemented in Java. CrocoBLAST is supported under Linux and Windows, and can be run under Mac OS X in a Linux virtual machine.\n\n\nContact\njkoca@ceitec.cz.\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online.",
"title": ""
},
{
"docid": "de46cdbdfbf866c56950f62ef4f489e0",
"text": "BACKGROUND\nComputational methods have been used to find duplicate biomedical publications in MEDLINE. Full text articles are becoming increasingly available, yet the similarities among them have not been systematically studied. Here, we quantitatively investigated the full text similarity of biomedical publications in PubMed Central.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\n72,011 full text articles from PubMed Central (PMC) were parsed to generate three different datasets: full texts, sections, and paragraphs. Text similarity comparisons were performed on these datasets using the text similarity algorithm eTBLAST. We measured the frequency of similar text pairs and compared it among different datasets. We found that high abstract similarity can be used to predict high full text similarity with a specificity of 20.1% (95% CI [17.3%, 23.1%]) and sensitivity of 99.999%. Abstract similarity and full text similarity have a moderate correlation (Pearson correlation coefficient: -0.423) when the similarity ratio is above 0.4. Among pairs of articles in PMC, method sections are found to be the most repetitive (frequency of similar pairs, methods: 0.029, introduction: 0.0076, results: 0.0043). In contrast, among a set of manually verified duplicate articles, results are the most repetitive sections (frequency of similar pairs, results: 0.94, methods: 0.89, introduction: 0.82). Repetition of introduction and methods sections is more likely to be committed by the same authors (odds of a highly similar pair having at least one shared author, introduction: 2.31, methods: 1.83, results: 1.03). There is also significantly more similarity in pairs of review articles than in pairs containing one review and one nonreview paper (frequency of similar pairs: 0.0167 and 0.0023, respectively).\n\n\nCONCLUSION/SIGNIFICANCE\nWhile quantifying abstract similarity is an effective approach for finding duplicate citations, a comprehensive full text analysis is necessary to uncover all potential duplicate citations in the scientific literature and is helpful when establishing ethical guidelines for scientific publications.",
"title": ""
},
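eTBLAST is its own similarity engine; purely as a hedged illustration of the general flavor of pairwise text-similarity screening, the snippet below compares documents with TF-IDF cosine similarity and flags pairs above a toy threshold (the threshold and example texts are assumptions, not the study's settings).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "We measured methylation in leukemia samples using bisulfite sequencing.",
    "Methylation in leukemia samples was measured with bisulfite sequencing.",
    "A complementary filter fuses gyroscope and accelerometer readings.",
]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
sim = cosine_similarity(tfidf)                 # pairwise similarity matrix

threshold = 0.4                                # illustrative cut-off, not eTBLAST's
for i in range(len(docs)):
    for j in range(i + 1, len(docs)):
        if sim[i, j] >= threshold:
            print(f"documents {i} and {j} look similar (cosine = {sim[i, j]:.2f})")
```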
{
"docid": "95633e39a6f1dee70317edfc56e248f4",
"text": "We construct a deep portfolio theory. By building on Markowitz’s classic risk-return trade-off, we develop a self-contained four-step routine of encode, calibrate, validate and verify to formulate an automated and general portfolio selection process. At the heart of our algorithm are deep hierarchical compositions of portfolios constructed in the encoding step. The calibration step then provides multivariate payouts in the form of deep hierarchical portfolios that are designed to target a variety of objective functions. The validate step trades-off the amount of regularization used in the encode and calibrate steps. The verification step uses a cross validation approach to trace out an ex post deep portfolio efficient frontier. We demonstrate all four steps of our portfolio theory numerically.",
"title": ""
},
{
"docid": "f194075ba0a5cf69d9bba9e127ed29bb",
"text": "Let's start from scratch in thinking about what memory is for, and consequently, how it works. Suppose that memory and conceptualization work in the service of perception and action. In this case, conceptualization is the encoding of patterns of possible physical interaction with a three-dimensional world. These patterns are constrained by the structure of the environment, the structure of our bodies, and memory. Thus, how we perceive and conceive of the environment is determined by the types of bodies we have. Such a memory would not have associations. Instead, how concepts become related (and what it means to be related) is determined by how separate patterns of actions can be combined given the constraints of our bodies. I call this combination \"mesh.\" To avoid hallucination, conceptualization would normally be driven by the environment, and patterns of action from memory would play a supporting, but automatic, role. A significant human skill is learning to suppress the overriding contribution of the environment to conceptualization, thereby allowing memory to guide conceptualization. The effort used in suppressing input from the environment pays off by allowing prediction, recollective memory, and language comprehension. I review theoretical work in cognitive science and empirical work in memory and language comprehension that suggest that it may be possible to investigate connections between topics as disparate as infantile amnesia and mental-model theory.",
"title": ""
},
{
"docid": "42e53bc5c8fe1a2305b37687ea5c07c8",
"text": "The critical commentary by Reimers et al. [1] regarding the Penrose–Hameroff theory of ‘orchestrated objective reduction’ (‘Orch OR’) is largely uninformed and basically incorrect, as they solely criticize non-existent features of Orch OR, and ignore (1) actual Orch OR features, (2) supportive evidence, and (3) previous answers to their objections (Section 5.6 in our review [2]). Here we respond point-by-point to the issues they raise.",
"title": ""
},
{
"docid": "a1f007cf016e177de7b123c624391277",
"text": "Dental disease is among the most common causes for chinchillas and degus to present to veterinarians. Most animals with dental disease present with weight loss, reduced food intake/anorexia, and drooling. Degus commonly present with dyspnea. Dental disease has been primarily referred to as elongation and malocclusion of the cheek teeth. Periodontal disease, caries, and tooth resorption are common diseases in chinchillas, but are missed frequently during routine intraoral examination, even performed under general anesthesia. A diagnostic evaluation, including endoscopy-guided intraoral examination and diagnostic imaging of the skull, is necessary to detect oral disorders and to perform the appropriate therapy.",
"title": ""
},
{
"docid": "3b04e1e9550e5d6e9418ff955152d167",
"text": "This short report describes an automated BWAPI-based script developed for live streams of a StarCraft Brood War bot tournament, SSCAIT. The script controls the in-game camera in order to follow the relevant events and improve the viewer experience. We enumerate its novel features and provide a few implementation notes.",
"title": ""
},
{
"docid": "a85e4925e82baf96f507494c91126361",
"text": "Contractile myocytes provide a test of the hypothesis that cells sense their mechanical as well as molecular microenvironment, altering expression, organization, and/or morphology accordingly. Here, myoblasts were cultured on collagen strips attached to glass or polymer gels of varied elasticity. Subsequent fusion into myotubes occurs independent of substrate flexibility. However, myosin/actin striations emerge later only on gels with stiffness typical of normal muscle (passive Young's modulus, E approximately 12 kPa). On glass and much softer or stiffer gels, including gels emulating stiff dystrophic muscle, cells do not striate. In addition, myotubes grown on top of a compliant bottom layer of glass-attached myotubes (but not softer fibroblasts) will striate, whereas the bottom cells will only assemble stress fibers and vinculin-rich adhesions. Unlike sarcomere formation, adhesion strength increases monotonically versus substrate stiffness with strongest adhesion on glass. These findings have major implications for in vivo introduction of stem cells into diseased or damaged striated muscle of altered mechanical composition.",
"title": ""
},
{
"docid": "2ac1d3ce029f547213c122c0e84650b2",
"text": "Notes: (1) These questions require thought, but do not require long answers. Please be as concise as possible. (2) If you have a question about this homework, we encourage you to post your question on our Piazza forum, at https://piazza.com/class#fall2012/cs229. (3) If you missed the first lecture or are unfamiliar with the collaboration or honor code policy, please read the policy on Handout #1 (available from the course website) before starting work. (4) For problems that require programming, please include in your submission a printout of your code (with comments) and any figures that you are asked to plot. (5) Please indicate the submission time and number of late dates clearly in your submission. SCPD students: Please email your solutions to cs229-qa@cs.stanford.edu with the subject line \" Problem Set 2 Submission \". The first page of your submission should be the homework routing form, which can be found on the SCPD website. Your submission (including the routing form) must be a single pdf file, or we may not be able to grade it. If you are writing your solutions out by hand, please write clearly and in a reasonably large font using a dark pen to improve legibility. 1. [15 points] Constructing kernels In class, we saw that by choosing a kernel K(x, z) = φ(x) T φ(z), we can implicitly map data to a high dimensional space, and have the SVM algorithm work in that space. One way to generate kernels is to explicitly define the mapping φ to a higher dimensional space, and then work out the corresponding K. However in this question we are interested in direct construction of kernels. I.e., suppose we have a function K(x, z) that we think gives an appropriate similarity measure for our learning problem, and we are considering plugging K into the SVM as the kernel function. However for K(x, z) to be a valid kernel, it must correspond to an inner product in some higher dimensional space resulting from some feature mapping φ. Mercer's theorem tells us that K(x, z) is a (Mercer) kernel if and only if for any finite set {x (1) ,. .. , x (m) }, the matrix K is symmetric and positive semidefinite, where the square matrix K ∈ R m×m is given by K ij = K(x (i) , x (j)). Now here comes the question: Let K 1 , K 2 be kernels …",
"title": ""
},
{
"docid": "22c85072db1f5b5a51b69fcabf01eb5e",
"text": "Websites’ and mobile apps’ privacy policies, written in natural language, tend to be long and difficult to understand. Information privacy revolves around the fundamental principle of notice and choice, namely the idea that users should be able to make informed decisions about what information about them can be collected and how it can be used. Internet users want control over their privacy, but their choices are often hidden in long and convoluted privacy policy documents. Moreover, little (if any) prior work has been done to detect the provision of choices in text. We address this challenge of enabling user choice by automatically identifying and extracting pertinent choice language in privacy policies. In particular, we present a two-stage architecture of classification models to identify opt-out choices in privacy policy text, labelling common varieties of choices with a mean F1 score of 0.735. Our techniques enable the creation of systems to help Internet users to learn about their choices, thereby effectuating notice and choice and improving Internet privacy.",
"title": ""
},
{
"docid": "b66e878b1d907c684637bf308ee9fd3f",
"text": "The search for free parking places is a promising application for vehicular ad hoc networks (VANETs). In order to guide drivers to a free parking place at their destination, it is necessary to estimate the occupancy state of the parking lots within the destination area at time of arrival. In this paper, we present a model to predict parking lot occupancy based on information exchanged among vehicles. In particular, our model takes the age of received parking lot information and the time needed to arrive at a certain parking lot into account and estimates the future parking situation at time of arrival. It is based on queueing theory and uses a continuous-time homogeneous Markov model. We have evaluated the model in a simulation study based on a detailed model of the city of Brunswick, Germany.",
"title": ""
},
{
"docid": "429abd1e12826273b7f4c1561f438911",
"text": "Recently, spin-transfer torque magnetic random access memory (STT-MRAM) has been considered as a promising universal memory candidate for future memory and computing systems, thanks to its nonvolatility, high speed, low power, good endurance, and scalability. However, as technology scales down, STT-MRAM suffers from serious process variations and thermal fluctuations, which greatly degrade the performance and stability of STT-MRAM. In general, the optimization and robustness of STT-MRAM under process variations often require a hybrid design flow and multilevel codesign strategies. In this paper, we quantitatively analyze the impacts of process variations and thermal fluctuations on the STT-MRAM performances from physics, technology, and circuit design point of views. Based on the analyses, we found that readability is becoming the newest challenge for deeply scaled STT-MRAM due to the conflict between sensing margin and read disturbance. To deal with this problem, a novel reconfigurable design strategy from device, circuit, and architecture codesign perspective is then presented. Finally, a conceptual hybrid magnetic/CMOS design flow is also proposed for STT-MRAM in deeply scaled technology nodes.",
"title": ""
},
{
"docid": "014306c73db11e9d9b9077868c94ed9f",
"text": "Flying Ad hoc Network (FANET) is a new resource-constrained breed and instantiation of Mobile Ad hoc Network (MANET) employing Unmanned Aerial Vehicles (UAVs) as communicating nodes. These latter follow a predefined path called 'mission' to provide a wide range of applications/services. Without loss of generality, the services and applications offered by the FANET are based on data/content delivery in various forms such as, but not limited to, pictures, video, status, warnings, and so on. Therefore, a content-centric communication mechanism such as Information Centric Networking (ICN) is essential for FANET. ICN addresses the problems of classical TCP/IP-based Internet. To this end, Content-centric networking (CCN), and Named Data Networking (NDN) are two of the most famous and widely-adapted implementations of ICN due to their intrinsic security mechanism and Interest/Data-based communication. To ensure data security, a signature on the contents is appended to each response/data packet in transit. However, trusted communication is of paramount importance and currently lacks in NDN-driven communication. To fill the gaps, in this paper, we propose a novel trust-aware Monitor-based communication architecture for Flying Named Data Networking (FNDN). We first select the monitors based on their trust and stability, which then become responsible for the interest packets dissemination to avoid broadcast storm problem. Once the interest reaches data producer, the data comes back to the requester through the shortest and most trusted path (which is also the same path through which the interest packet arrived at the producer). Simultaneously, the intermediate UAVs choose whether to check the data authenticity or not, following their subjective belief on its producer's behavior and thus-forth reducing the computation complexity and delay. Simulation results show that our proposal can sustain the vanilla NDN security levels exceeding the 80% dishonesty detection ratio while reducing the generated end-to-end delay to less than 1 s in the worst case and reducing the average consumed energy by more than two times.",
"title": ""
},
{
"docid": "c5ca7be10aec26359f27350494821cd7",
"text": "When moving through a tracked immersive virtual environment, it is sometimes useful to deviate from the normal one-to-one mapping of real to virtual motion. One option is the application of rotation gain, where the virtual rotation of a user around the vertical axis is amplified or reduced by a factor. Previous research in head-mounted display environments has shown that rotation gain can go unnoticed to a certain extent, which is exploited in redirected walking techniques. Furthermore, it can be used to increase the effective field of regard in projection systems. However, rotation gain has never been studied in CAVE systems, yet. In this work, we present an experiment with 87 participants examining the effects of rotation gain in a CAVE-like virtual environment. The results show no significant effects of rotation gain on simulator sickness, presence, or user performance in a cognitive task, but indicate that there is a negative influence on spatial knowledge especially for inexperienced users. In secondary results, we could confirm results of previous work and demonstrate that they also hold for CAVE environments, showing a negative correlation between simulator sickness and presence, cognitive performance and spatial knowledge, a positive correlation between presence and spatial knowledge, a mitigating influence of experience with 3D applications and previous CAVE exposure on simulator sickness, and a higher incidence of simulator sickness in women.",
"title": ""
}
] |
scidocsrr
|
8d39b5357c9fd2e378baa28a8fb6f7da
|
Generalization Error in Deep Learning
|
[
{
"docid": "3bb905351ce1ea2150f37059ed256a90",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
},
{
"docid": "5365f6f5174c3d211ea562c8a7fa0aab",
"text": "Generative Adversarial Networks (GANs) have become one of the dominant methods for fitting generative models to complicated real-life data, and even found unusual uses such as designing good cryptographic primitives. In this talk, we will first introduce the ba- sics of GANs and then discuss the fundamental statistical question about GANs — assuming the training can succeed with polynomial samples, can we have any statistical guarantees for the estimated distributions? In the work with Arora, Ge, Liang, and Zhang, we suggested a dilemma: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse. Such a conundrum may be solved or alleviated by designing discrimina- tor class with strong distinguishing power against the particular generator class (instead of against all possible generators.)",
"title": ""
}
] |
[
{
"docid": "f34e256296571f9ec1ae25671a7974f0",
"text": "In this paper, we propose a balanced multi-label propagation algorithm (BMLPA) for overlapping community detection in social networks. As well as its fast speed, another important advantage of our method is good stability, which other multi-label propagation algorithms, such as COPRA, lack. In BMLPA, we propose a new update strategy, which requires that community identifiers of one vertex should have balanced belonging coefficients. The advantage of this strategy is that it allows vertices to belong to any number of communities without a global limit on the largest number of community memberships, which is needed for COPRA. Also, we propose a fast method to generate “rough cores”, which can be used to initialize labels for multi-label propagation algorithms, and are able to improve the quality and stability of results. Experimental results on synthetic and real social networks show that BMLPA is very efficient and effective for uncovering overlapping communities.",
"title": ""
},
{
"docid": "ba304fa15be98eb5312fbf6daef45cd4",
"text": "For a long time the cortical systems for language and actions were believed to be independent modules. However, as these systems are reciprocally connected with each other, information about language and actions might interact in distributed neuronal assemblies. A critical case is that of action words that are semantically related to different parts of the body (for example, 'lick', 'pick' and 'kick'): does the comprehension of these words specifically, rapidly and automatically activate the motor system in a somatotopic manner, and does their comprehension rely on activity in the action system?",
"title": ""
},
{
"docid": "9445631e0850d2126750ffa50ae007ee",
"text": "Modern Visual Question Answering (VQA) models have been shown to rely heavily on superficial correlations between question and answer words learned during training – e.g. overwhelmingly reporting the type of room as kitchen or the sport being played as tennis, irrespective of the image. Most alarmingly, this shortcoming is often not well reflected during evaluation because the same strong priors exist in test distributions; however, a VQA system that fails to ground questions in image content would likely perform poorly in real-world settings. In this work, we present a novel regularization scheme for VQA that reduces this effect. We introduce a question-only model that takes as input the question encoding from the VQA model and must leverage language biases in order to succeed. We then pose training as an adversarial game between the VQA model and this question-only adversary – discouraging the VQA model from capturing language biases in its question encoding. Further, we leverage this question-only model to estimate the increase in model confidence after considering the image, which we maximize explicitly to encourage visual grounding. Our approach is a model agnostic training procedure and simple to implement. We show empirically that it can improve performance significantly on a bias-sensitive split of the VQA dataset for multiple base models – achieving state-of-the-art on this task. Further, on standard VQA tasks, our approach shows significantly less drop in accuracy compared to existing bias-reducing VQA models.",
"title": ""
},
{
"docid": "fbfb6b7cb2dc3e774197c470c55a928b",
"text": "The integrated modular avionics (IMA) architectures have ushered in a new wave of thought regarding avionics integration. IMA architectures utilize shared, configurable computing, communication, and I/O resources. These architectures allow avionics system integrators to benefit from increased system scalability, as well as from a form of platform management that reduces the workload for aircraft-level avionics integration activities. In order to realize these architectural benefits, the avionics suppliers must engage in new philosophies for sharing a set of system-level resources that are managed a level higher than each individual avionics system. The mechanisms for configuring and managing these shared intersystem resources are integral to managing the increased level of avionics integration that is inherent to the IMA architectures. This paper provides guidance for developing the methodology and tools to efficiently manage the set of shared intersystem resources. This guidance is based upon the author's experience in developing the Genesis IMA architecture at Smiths Aerospace. The Genesis IMA architecture was implemented on the Boeing 787 Dreamliner as the common core system (CCS)",
"title": ""
},
{
"docid": "8360cf2cda48bc34911f2f5c225b66bf",
"text": "We study the cold-start link prediction problem where edges between vertices is unavailable by learning vertex-based similarity metrics. Existing metric learning methods for link prediction fail to consider communities which can be observed in many real-world social networks. Because di↵erent communities usually exhibit di↵erent intra-community homogeneities, learning a global similarity metric is not appropriate. In this paper, we thus propose to learn communityspecific similarity metrics via joint community detection. Experiments on three real-world networks show that the intra-community homogeneities can be well preserved, and the mixed community-specific metrics perform better than a global similarity metric in terms of prediction accuracy.",
"title": ""
},
{
"docid": "9963e1f7126812d9111a4cb6a8eb8dc6",
"text": "The renewed interest in grapheme to phoneme conversion (G2P), due to the need of developing multilingual speech synthesizers and recognizers, suggests new approaches more efficient than the traditional rule&exception ones. A number of studies have been performed to investigate the possible use of machine learning techniques to extract phonetic knowledge in a automatic way starting from a lexicon. In this paper, we present the results of our experiments in this research field. Starting from the state of art, our contribution is in the development of a language-independent learning scheme for G2P based on Classification and Regression Trees (CART). To validate our approach, we realized G2P converters for the following languages: British English, American English, French and Brazilian Portuguese.",
"title": ""
},
{
"docid": "3936d7cf086384ac24afec31f49235bc",
"text": "Purpose: To compare the Percentage of Consonants Correct (PCC) index of children with and without hearing loss, and to verify whether the time using hearing aids, the time in therapy, and the time spent until hearing loss was diagnosed influence the performance of deaf children. Methods: Participants were 30 children, 15 with hearing impairment and 15 with normal hearing, paired by gender and age. The PCC index was calculated in three different tasks: picture naming, imitation and spontaneous speech. The phonology tasks of the ABFW – Teste de Linguagem Infantil were used in the evaluation. Results: Differences were found between groups in all tasks, and normally hearing children had better results. PCC indexes presented by children with hearing loss characterized a moderately severe phonological disorder. Children enrolled in therapy for a longer period had better PCC indexes, and the longer they had been using hearing aids, the better their performances on the imitation task. Conclusion: Children with hearing loss have lower PCC indexes when compared to normally hearing children. The average performance and imitation are influenced by time in therapy and time using hearing aids.",
"title": ""
},
{
"docid": "93e33f175a989962467a6c553affa4c8",
"text": "Holoprosencephaly is a congenital abnormality of the prosencephalon associated with median facial defects. Its frequency is 1 in 250 pregnancies and 1 in 16,000 live births. The degree of facial deformity usually correlates with the severity of brain malformation. Early mortality is prevalent in severe forms. This report presents a child with lobar holoprosencephaly accompanied by median cleft lip and palate. The treatment and 9 months' follow-up are presented. This unique case shows that holoprosencephaly may present different manifestations of craniofacial malformations, which are not always parallel to the severity of brain abnormalities. Patients with mild to moderate brain abnormalities may survive into childhood and beyond.",
"title": ""
},
{
"docid": "eba5ef77b594703c96c0e2911fcce7b0",
"text": "Deep Neural Network Hidden Markov Models, or DNN-HMMs, are recently very promising acoustic models achieving good speech recognition results over Gaussian mixture model based HMMs (GMM-HMMs). In this paper, for emotion recognition from speech, we investigate DNN-HMMs with restricted Boltzmann Machine (RBM) based unsupervised pre-training, and DNN-HMMs with discriminative pre-training. Emotion recognition experiments are carried out on these two models on the eNTERFACE'05 database and Berlin database, respectively, and results are compared with those from the GMM-HMMs, the shallow-NN-HMMs with two layers, as well as the Multi-layer Perceptrons HMMs (MLP-HMMs). Experimental results show that when the numbers of the hidden layers as well hidden units are properly set, the DNN could extend the labeling ability of GMM-HMM. Among all the models, the DNN-HMMs with discriminative pre-training obtain the best results. For example, for the eNTERFACE'05 database, the recognition accuracy improves 12.22% from the DNN-HMMs with unsupervised pre-training, 11.67% from the GMM-HMMs, 10.56% from the MLP-HMMs, and even 17.22% from the shallow-NN-HMMs, respectively.",
"title": ""
},
{
"docid": "d0c85b824d7d3491f019f47951d1badd",
"text": "A nine-year-old female Rottweiler with a history of repeated gastrointestinal ulcerations and three previous surgical interventions related to gastrointestinal ulceration presented with symptoms of anorexia and intermittent vomiting. Benign gastric outflow obstruction was diagnosed in the proximal duodenal area. The initial surgical plan was to perform a pylorectomy with gastroduodenostomy (Billroth I procedure), but owing to substantial scar tissue and adhesions in the area a palliative gastrojejunostomy was performed. This procedure provided a bypass for the gastric contents into the proximal jejunum via the new stoma, yet still allowed bile and pancreatic secretions to flow normally via the patent duodenum. The gastrojejunostomy technique was successful in the surgical management of this case, which involved proximal duodenal stricture in the absence of neoplasia. Regular telephonic followup over the next 12 months confirmed that the patient was doing well.",
"title": ""
},
{
"docid": "dac2c77424c11a5d94a13cdb5e2e796d",
"text": "Agriculture is the mother of all cultures. It played a vital role in the development of human civilization. But plant leaf diseases can damage the crops there may be economic losses in crops. Without knowing about the diseases affected in the plant, the farmers are using excessive pesticides for the plant disease treatment. To overcome this, the detected spot diseases in leaves are classified based on the diseased leaf types using various neural network algorithms. By this approach one can detect the diseased leaf variety and thus can take necessary steps in time to minimize the loss of production. The proposed methodology uses to classify the diseased plant leaves using Feed Forward Neural Network (FFNN), Learning Vector Quantization (LVQ) and Radial Basis Function Networks (RBF) by processing the set of shape and texture features from the affected leaf image. The simulation results show the effectiveness of the proposed scheme. With the help of this work, a machine learning based system can be formed for the improvement of the crop quality in the Indian Economy.",
"title": ""
},
{
"docid": "c2c4c5895eb2285bc3b927486463a576",
"text": "This study aimed to investigate the effect of piribedil, a drug used for the treatment of Parkinson’s disease and which has direct dopaminergic stimulating action, on the acute hepatic injury in mice. Hepatotoxicity was induced by CCl4 orally (0.28 ml/kg). Piribedil at three dose levels (4.5, 9, or 18 mg/kg) or silymarin (25 mg/kg) was given orally daily for 7 days, starting at time of administration of CCl4. Liver damage was assessed by determining liver serum enzyme activities and by hepatic histopathology. Piribedil administration lessened the increases in serum alanine aminotransferase (ALT), aspartate aminotransferase (AST), and alkaline phosphatase (ALP) and also prevented the development of hepatic necrosis caused by CCl4. The effect of piribedil was dose-dependent one. Piribedil administered at the above doses caused significant reduction in the elevated plasma ALT by −36.3%, −42.8%, and −52.4% and ALP by −25%, −36.9%, and −57.1%, respectively. AST decreased by −36.4% and −46.2% by piribedil at 9 or 18 mg/kg, respectively. In comparison, the elevated serum ALT, AST, and ALP levels decreased to −69.6%, −64.2%, and −68.5% of control values, respectively, by silymarin. Histopathologic examination of the livers of CCl4-treated mice administered piribedil at 9 mg/kg showed noticeable amelioration of the liver tissue damage, while piribedil at 18 mg/kg resulted in restoration of the normal architecture of the liver tissue as well as noticeable increase in the protein content of hepatocytes. It is concluded that administration of the dopaminergic agonist piribedil in a model of liver injury induced by CCl4 results in amelioration of liver damage.",
"title": ""
},
{
"docid": "724388aac829af9671a90793b1b31197",
"text": "We present a statistical phrase-based translation model that useshierarchical phrases — phrases that contain subphrases. The model is formally a synchronous context-free grammar but is learned from a bitext without any syntactic information. Thus it can be seen as a shift to the formal machinery of syntaxbased translation systems without any linguistic commitment. In our experiments using BLEU as a metric, the hierarchical phrasebased model achieves a relative improvement of 7.5% over Pharaoh, a state-of-the-art phrase-based system.",
"title": ""
},
{
"docid": "3bda0519ec7f61a4778cddfaa0c9b12d",
"text": "Recommender systems are assisting users in the process of identifying items that fulfill their wishes and needs. These systems are successfully applied in different e-commerce settings, for example, to the recommendation of news, movies, music, books, and digital cameras. The major goal of this book chapter is to discuss new and upcoming applications of recommendation technologies and to provide an outlook on major characteristics of future technological developments. Based on a literature analysis, we discuss new and upcoming applications in domains such as software engineering, data & knowledge engineering, configurable items, and persuasive technologies. Thereafter we sketch major properties of the next generation of recommendation technologies.",
"title": ""
},
{
"docid": "48f356151587d85dd82834b5f5f490d9",
"text": "We present new applications for cryptographic secret handshakes between mobile devices on top of Bluetooth Low-Energy (LE). Secret handshakes enable mutual authentication, with the property that the parties learn nothing about each other unless they have been both issued credentials by a group administrator. This property provides strong privacy guarantees that enable interesting applications. One of them is proximity-based discovery for private communities. We introduce MASHaBLE, a mobile application that enables participants to discover and interact with nearby users if and only if they belong to the same secret community. We use direct peer-to-peer communication over Bluetooth LE, rather than relying on a central server. We discuss the specifics of implementing secret handshakes over Bluetooth LE and present our prototype implementation.",
"title": ""
},
{
"docid": "11d130f2b757bab08c4d41169c29b3d5",
"text": "We present an approach to training a joint syntactic and semantic parser that combines syntactic training information from CCGbank with semantic training information from a knowledge base via distant supervision. The trained parser produces a full syntactic parse of any sentence, while simultaneously producing logical forms for portions of the sentence that have a semantic representation within the parser’s predicate vocabulary. We demonstrate our approach by training a parser whose semantic representation contains 130 predicates from the NELL ontology. A semantic evaluation demonstrates that this parser produces logical forms better than both comparable prior work and a pipelined syntax-then-semantics approach. A syntactic evaluation on CCGbank demonstrates that the parser’s dependency Fscore is within 2.5% of state-of-the-art.",
"title": ""
},
{
"docid": "4fe8d749fd978627edb58d76f0e8d090",
"text": "The more I study metrology, the more I get persuaded that the measuring activity is an implicit part of our lives, something we are not really aware of, though we do or rely on measurements several times a day. When we check time, put fuel in our cars, buy food, just to mention some everyday activity, either we measure something or we trust measurements done by somebody else. It is quite immediate to conclude that, nowadays, everything is measured and measurement results are the basis of many important decisions. Interestingly enough, measurement has always played an important role in mankind�s evolution and I fully agree with Bryan Kibble�s statement that the measuring stick came before the wheel, otherwise the wheel could not have been built [1]. The measuring stick is also one of the most ancient instruments, and we find it together with time measuring instruments and weighs in almost every civilization of the past, proving that measurement is one of the most important branches of science, and there is no civilization without measurement. It proves also the intimate connection existing between instrumentation and measurement, being the two sides of a single medal: the measurement science, or metrology.",
"title": ""
},
{
"docid": "e73149799b88f5162ab15620903ba24b",
"text": "The present eyetracking study examined the influenc e of emotions on learning with multimedia. Based on a 2x2 experimental design, par ticipants received experimentally induced emotions (positive vs. neutral) and then le arn d with a multimedia instructional material, which was varied in its design (with vs. without anthropomorphisms) to induce positive emotions and facilitate learning. Learners who were in a positive emotional state before learning had better learning outcomes in com prehension and transfer tests and showed longer fixation durations on the text information o f the learning environment. Although anthropomorphisms in the learning environment did n ot i duce positive emotions, the eyetracking data revealed that learners’ attention was captured by this design element. Hence, learners in a positive emotional state who learned with the learning environment that included anthropomorphisms showed the highest learning outco me and longest fixation on the relevant information of the multimedia instruction. Results indicate an attention arousing effect of expressive anthropomorphisms and the relevance of e m tional states before learning.",
"title": ""
},
{
"docid": "a7a51eb9cb434a581eac782da559094b",
"text": "An ever-increasing amount of information on the Web today is available only through search interfaces: the users have to type in a set of keywords in a search form in order to access the pages from certain Web sites. These pages are often referred to as the Hidden Web or the Deep Web. Since there are no static links to the Hidden Web pages, search engines cannot discover and index such pages and thus do not return them in the results. However, according to recent studies, the content provided by many Hidden Web sites is often of very high quality and can be extremely valuable to many users. In this paper, we study how we can build an effective Hidden Web crawler that can autonomously discover and download pages from the Hidden Web. Since the only “entry point” to a Hidden Web site is a query interface, the main challenge that a Hidden Web crawler has to face is how to automatically generate meaningful queries to issue to the site. Here, we provide a theoretical framework to investigate the query generation problem for the Hidden Web and we propose effective policies for generating queries automatically. Our policies proceed iteratively, issuing a different query in every iteration. We experimentally evaluate the effectiveness of these policies on 4 real Hidden Web sites and our results are very promising. For instance, in one experiment, one of our policies downloaded more than 90% of a Hidden Web site (that contains 14 million documents) after issuing fewer than 100 queries.",
"title": ""
},
{
"docid": "43c589663fdd486c4334a914c25b0a40",
"text": "Many studies showed the ability of movies and imagery techniques to elicit emotions. Nevertheless, it is less clear how to manipulate the content of interactive media to induce specific emotional responses. In particular, this is true for the emerging medium virtual reality (VR), whose main feature is the ability to induce a feeling of \"presence\" in the computer-generated world experienced by the user. The main goal of this study was to analyze the possible use of VR as an affective medium. Within this general goal, the study also analyzed the relationship between presence and emotions. The results confirmed the efficacy of VR as affective medium: the interaction with \"anxious\" and \"relaxing\" virtual environments produced anxiety and relaxation. The data also showed a circular interaction between presence and emotions: on one side, the feeling of presence was greater in the \"emotional\" environments; on the other side, the emotional state was influenced by the level of presence. The significance of these results for the assessment of affective interaction is discussed.",
"title": ""
}
] |
scidocsrr
|
d68ac464587726615fe65a84bc6fb3ed
|
CPDScorer: Modeling and Evaluating Developer Programming Ability across Software Communities
|
[
{
"docid": "5f3e2b0051a76352be0566e122157491",
"text": "Community Question Answering (CQA) websites, where people share expertise on open platforms, have become large repositories of valuable knowledge. To bring the best value out of these knowledge repositories, it is critically important for CQA services to know how to find the right experts, retrieve archived similar questions and recommend best answers to new questions. To tackle this cluster of closely related problems in a principled approach, we proposed Topic Expertise Model (TEM), a novel probabilistic generative model with GMM hybrid, to jointly model topics and expertise by integrating textual content model and link structure analysis. Based on TEM results, we proposed CQARank to measure user interests and expertise score under different topics. Leveraging the question answering history based on long-term community reviews and voting, our method could find experts with both similar topical preference and high topical expertise. Experiments carried out on Stack Overflow data, the largest CQA focused on computer programming, show that our method achieves significant improvement over existing methods on multiple metrics.",
"title": ""
},
{
"docid": "414da08f7cffd71f4cf373f13d89961b",
"text": "We study the problem of large-scale social identity linkage across different social media platforms, which is of critical importance to business intelligence by gaining from social data a deeper understanding and more accurate profiling of users. This paper proposes HYDRA, a solution framework which consists of three key steps: (I) modeling heterogeneous behavior by long-term behavior distribution analysis and multi-resolution temporal information matching; (II) constructing structural consistency graph to measure the high-order structure consistency on users' core social structures across different platforms; and (III) learning the mapping function by multi-objective optimization composed of both the supervised learning on pair-wise ID linkage information and the cross-platform structure consistency maximization. Extensive experiments on 10 million users across seven popular social network platforms demonstrate that HYDRA correctly identifies real user linkage across different platforms, and outperforms existing state-of-the-art algorithms by at least 20% under different settings, and 4 times better in most settings.",
"title": ""
}
] |
[
{
"docid": "aa246e38979c25d89ae6220ae7cd9552",
"text": "Fast detection of moving vehicles is crucial for safe autonomous urban driving. We present the vehicle detection algorithm developed for our entry in the Urban Grand Challenge, an autonomous driving race organized by the U.S. Government in 2007. The algorithm provides reliable detection of moving vehicles from a high-speed moving platform using laser range finders. We present the notion of motion evidence, which allows us to overcome the low signal-to-noise ratio that arises during rapid detection of moving vehicles in noisy urban environments. We also present and evaluate an array of optimization techniques that enable accurate detection in real time. Experimental results show empirical validation on data from the most challenging situations presented at the Urban Grand Challenge as well as other urban settings.",
"title": ""
},
{
"docid": "4507f495e401e9e67a0ff6396778ff06",
"text": "Deep generative adversarial networks (GANs) are the emerging technology in drug discovery and biomarker development. In our recent work, we demonstrated a proof-of-concept of implementing deep generative adversarial autoencoder (AAE) to identify new molecular fingerprints with predefined anticancer properties. Another popular generative model is the variational autoencoder (VAE), which is based on deep neural architectures. In this work, we developed an advanced AAE model for molecular feature extraction problems, and demonstrated its advantages compared to VAE in terms of (a) adjustability in generating molecular fingerprints; (b) capacity of processing very large molecular data sets; and (c) efficiency in unsupervised pretraining for regression model. Our results suggest that the proposed AAE model significantly enhances the capacity and efficiency of development of the new molecules with specific anticancer properties using the deep generative models.",
"title": ""
},
{
"docid": "450f13659ece54bee1b4fe61cc335eb2",
"text": "Though considerable effort has recently been devoted to hardware realization of one-dimensional chaotic systems, the influence of implementation inaccuracies is often underestimated and limited to non-idealities in the non-linear map. Here we investigate the consequences of sample-and-hold errors. Two degrees of freedom in the design space are considered: the choice of the map and the sample-and-hold architecture. Current-mode systems based on Bernoulli Shift, on Tent Map and on Tailed Tent Map are taken into account and coupled with an order-one model of sample-and-hold to ascertain error causes and suggest implementation improvements. key words: chaotic systems, analog circuits, sample-and-hold errors",
"title": ""
},
{
"docid": "318c950c72f889156acd2fd1cad53c61",
"text": "We present a system for automatic categorization of news items into a standard set of categories. The system has been built specifically for news stories written in Croatian language. It uses the standard set of news categories established by the International Press Telecommunications Council (IPTC). The algorithm used for categorization transforms each document into a vector of weights corresponding to an automatically chosen set of keywords. This process is performed on a large training set of news items, forming the multi-dimensional space populated by news items of known categories. An unknown news item is also transformed into a vector of keyword weights and then categorized using the k-NN method in this space. The system has been trained on the collection of approx. 2700 manually categorized news items provided by the Croatian News Agency and tested on a different set of approx. 500 randomly chosen news items from the same source. The automatic categorization gave a correct result for 85% of tested news items. Subjective evaluation by news professionals concluded that the system is useful enough to be used in news production process.",
"title": ""
},
{
"docid": "7b89f8a9b3bbe7762ef7898b2bb22bd2",
"text": "This topic covers various aspects of seismic design of reinforced concrete structures with an emphasis on Design for regions of high seismicity. Because the requirement for greater ductility in earthquake-resistant Buildings represents the principal departure from the conventional design for gravity and wind loading, the Major part of the discussion in this chapter will be devoted to considerations associated with providing Ductility in members and structures. The discussion in this chapter will be confined to monolithically cast Reinforced-concrete buildings. The concepts of seismic demand and capacity are introduced and elaborated On. Specific provisions for design of seismic resistant reinforced concrete members and systems are Presented in detail. Appropriate seismic detailing considerations are discussed. Finally, a numerical example is presented where these principles are applied.",
"title": ""
},
{
"docid": "eceba8b3bf8cd7e0a4afc6581a6827eb",
"text": "The epidermal growth factor receptor (EGFR) kinase inhibitors gefitinib and erlotinib are effective treatments for lung cancers with EGFR activating mutations, but these tumors invariably develop drug resistance. Here, we describe a gefitinib-sensitive lung cancer cell line that developed resistance to gefitinib as a result of focal amplification of the MET proto-oncogene. inhibition of MET signaling in these cells restored their sensitivity to gefitinib. MET amplification was detected in 4 of 18 (22%) lung cancer specimens that had developed resistance to gefitinib or erlotinib. We find that amplification of MET causes gefitinib resistance by driving ERBB3 (HER3)-dependent activation of PI3K, a pathway thought to be specific to EGFR/ERBB family receptors. Thus, we propose that MET amplification may promote drug resistance in other ERBB-driven cancers as well.",
"title": ""
},
{
"docid": "fdd14b086d77b95b7ca00ab744f39458",
"text": "1567-4223/$34.00 Crown Copyright 2008 Publishe doi:10.1016/j.elerap.2008.11.001 * Corresponding author. Tel.: +886 7 5254713; fax: E-mail address: tw_cchuang@hotmail.com (C.-C. H While eWOM advertising has recently emerged as an effective marketing strategy among marketing practitioners, comparatively few studies have been conducted to examine the eWOM from the perspective of pass-along emails. Based on social capital theory and social cognitive theory, this paper develops a model involving social enablers and personal cognition factors to explore the eWOM behavior and its efficacy. Data collected from 347 email users have lent credit to the model proposed. Tested by LISREL 8.70, the results indicate that the factors such as message involvement, social interaction tie, affection outcome expectations and message passing self-efficacy exert significant influences on pass-along email intentions (PAEIs). The study result may well be useful to marketing practitioners who are considering email marketing, especially to those who are in the process of selecting key email users and/or designing product advertisements to heighten the eWOM effect. Crown Copyright 2008 Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "90ecdad8743f134fb07489cee9ce15ef",
"text": "As one of the most successful fast food chain in the world, throughout the development of McDonald’s, we could easily identify many successful business strategy implementations. In this paper, I will discuss some critical business strategies, which linked to the company’s structure and external environment. This paper is organized as follows: In the first section, I will give brief introduction to the success of McDonald’s. In the second section, I will analyze some particular strategies used by McDonald’s and how these strategies are suitable to their business structure. I will then analyze why McDonald’s choose these strategies in response to the changing external environment. Finally, I will summarize the approaches used by McDonald’s to achieve their strategic goals.",
"title": ""
},
{
"docid": "a6a2c027b809a98430ad80b837fa8090",
"text": "This paper presents a 60-GHz CMOS direct-conversion Doppler radar RF sensor with a clutter canceller for single-antenna noncontact human vital-signs detection. A high isolation quasi-circulator (QC) is designed to reduce the transmitting (Tx) power leakage (to the receiver). The clutter canceller performs cancellation for the Tx leakage power (from the QC) and the stationary background reflection clutter to enhance the detection sensitivity of weak vital signals. The integration of the 60-GHz RF sensor consists of the voltage-controlled oscillator, divided-by-2 frequency divider, power amplifier, QC, clutter canceller (consisting of variable-gain amplifier and 360 ° phase shifter), low-noise amplifier, in-phase/quadrature-phase sub-harmonic mixer, and three couplers. In the human vital-signs detection experimental measurement, at a distance of 75 cm, the detected heartbeat (1-1.3 Hz) and respiratory (0.35-0.45 Hz) signals can be clearly observed with a 60-GHz 17-dBi patch-array antenna. The RF sensor is fabricated in 90-nm CMOS technology with a chip size of 2 mm×2 mm and a consuming power of 217 mW.",
"title": ""
},
{
"docid": "8f0ed599cec42faa0928a0931ee77b28",
"text": "This paper describes the Connector and Acceptor patterns. The intent of these patterns is to decouple the active and passive connection roles, respectively, from the tasks a communication service performs once connections are established. Common examples of communication services that utilize these patterns include WWW browsers, WWW servers, object request brokers, and “superservers” that provide services like remote login and file transfer to client applications. This paper illustrates how the Connector and Acceptor patterns can help decouple the connection-related processing from the service processing, thereby yielding more reusable, extensible, and efficient communication software. When used in conjunction with related patterns like the Reactor [1], Active Object [2], and Service Configurator [3], the Acceptor and Connector patterns enable the creation of highly extensible and efficient communication software frameworks [4] and applications [5]. This paper is organized as follows: Section 2 outlines background information on networking and communication protocols necessary to appreciate the patterns in this paper; Section 3 motivates the need for the Acceptor and Connector patterns and illustrates how they have been applied to a production application-level Gateway; Section 4 describes the Acceptor and Connector patterns in detail; and Section 5 presents concluding remarks.",
"title": ""
},
{
"docid": "aed7d0610563538784890f2dd72f81f7",
"text": "In 3 experiments, a total of 151 monolingual and bilingual 6-year-old children performed similarly on measures of language and cognitive ability; however, bilinguals solved the global-local and trail-making tasks more rapidly than monolinguals. This bilingual advantage was found not only for the traditionally demanding conditions (incongruent global-local trials and Trails B) but also for the conditions not usually considered to be cognitively demanding (congruent global-local trials and Trails A). All the children performed similarly when congruent trials were presented in a single block or when perceptually simple stimuli were used, ruling out speed differences between the groups. The results demonstrate a bilingual advantage in processing complex stimuli in tasks that require executive processing components for conflict resolution, including switching and updating, even when no inhibition appears to be involved. They also suggest that simple conditions of the trail-making and global-local tasks involve some level of effortful processing for young children. Finally, the bilingual advantage in the trail-making task suggests that the interpretation of standardized measures of executive control needs to be reconsidered for children with specific experiences, such as bilingualism.",
"title": ""
},
{
"docid": "5781bae1fdda2d2acc87102960dab3ed",
"text": "Several static analysis tools, such as Splint or FindBugs, have been proposed to the software development community to help detect security vulnerabilities or bad programming practices. However, the adoption of these tools is hindered by their high false positive rates. If the false positive rate is too high, developers may get acclimated to violation reports from these tools, causing concrete and severe bugs being overlooked. Fortunately, some violations are actually addressed and resolved by developers. We claim that those violations that are recurrently fixed are likely to be true positives, and an automated approach can learn to repair similar unseen violations. However, there is lack of a systematic way to investigate the distributions on existing violations and fixed ones in the wild, that can provide insights into prioritizing violations for developers, and an effective way to mine code and fix patterns which can help developers easily understand the reasons of leading violations and how to fix them. In this paper, we first collect and track a large number of fixed and unfixed violations across revisions of software. The empirical analyses reveal that there are discrepancies in the distributions of violations that are detected and those that are fixed, in terms of occurrences, spread and categories, which can provide insights into prioritizing violations. To automatically identify patterns in violations and their fixes, we propose an approach that utilizes convolutional neural networks to learn features and clustering to regroup similar instances. We then evaluate the usefulness of the identified fix patterns by applying them to unfixed violations. The results show that developers will accept and merge a majority (69/116) of fixes generated from the inferred fix patterns. It is also noteworthy that the yielded patterns are applicable to four real bugs in the Defects4J major benchmark for software testing and automated repair.",
"title": ""
},
{
"docid": "567f48fef5536e9f44a6c66deea5375b",
"text": "The principle of control signal amplification is found in all actuation systems, from engineered devices through to the operation of biological muscles. However, current engineering approaches require the use of hard and bulky external switches or valves, incompatible with both the properties of emerging soft artificial muscle technology and those of the bioinspired robotic systems they enable. To address this deficiency a biomimetic molecular-level approach is developed that employs light, with its excellent spatial and temporal control properties, to actuate soft, pH-responsive hydrogel artificial muscles. Although this actuation is triggered by light, it is largely powered by the resulting excitation and runaway chemical reaction of a light-sensitive acid autocatalytic solution in which the actuator is immersed. This process produces actuation strains of up to 45% and a three-fold chemical amplification of the controlling light-trigger, realising a new strategy for the creation of highly functional soft actuating systems.",
"title": ""
},
{
"docid": "bc69fe2a1791b8d7e0e262f8110df9d4",
"text": "A small-size coupled-fed loop antenna suitable to be printed on the system circuit board of the mobile phone for penta-band WWAN operation (824-960/1710-2170 MHz) is presented. The loop antenna requires only a small footprint of 15 x 25 mm2 on the circuit board, and it can also be in close proximity to the surrounding ground plane printed on the circuit board. That is, very small or no isolation distance is required between the antenna's radiating portion and the nearby ground plane. This can lead to compact integration of the internal on-board printed antenna on the circuit board of the mobile phone, especially the slim mobile phone. The loop antenna also shows a simple structure; it is formed by a loop strip of about 87 mm with its end terminal short-circuited to the ground plane and its front section capacitively coupled to a feeding strip which is also an efficient radiator to contribute a resonant mode for the antenna's upper band to cover the GSM1800/1900/UMTS bands (1710-2170 MHz). Through the coupling excitation, the antenna can also generate a 0.25-wavelength loop resonant mode to form the antenna's lower band to cover the GSM850/900 bands (824-960 MHz). Details of the proposed antenna are presented. The SAR results for the antenna with the presence of the head and hand phantoms are also studied.",
"title": ""
},
{
"docid": "ffc36fa0dcc81a7f5ba9751eee9094d7",
"text": "The independent component analysis (ICA) of a random vector consists of searching for a linear transformation that minimizes the statistical dependence between its components. In order to define suitable search criteria, the expansion of mutual information is utilized as a function of cumulants of increasing orders. An efficient algorithm is proposed, which allows the computation of the ICA of a data matrix within a polynomial time. The concept of lCA may actually be seen as an extension of the principal component analysis (PCA), which can only impose independence up to the second order and, consequently, defines directions that are orthogonal. Potential applications of ICA include data analysis and compression, Bayesian detection, localization of sources, and blind identification and deconvolution. Zusammenfassung Die Analyse unabhfingiger Komponenten (ICA) eines Vektors beruht auf der Suche nach einer linearen Transformation, die die statistische Abh~ingigkeit zwischen den Komponenten minimiert. Zur Definition geeigneter Such-Kriterien wird die Entwicklung gemeinsamer Information als Funktion von Kumulanten steigender Ordnung genutzt. Es wird ein effizienter Algorithmus vorgeschlagen, der die Berechnung der ICA ffir Datenmatrizen innerhalb einer polynomischen Zeit erlaubt. Das Konzept der ICA kann eigentlich als Erweiterung der 'Principal Component Analysis' (PCA) betrachtet werden, die nur die Unabh~ingigkeit bis zur zweiten Ordnung erzwingen kann und deshalb Richtungen definiert, die orthogonal sind. Potentielle Anwendungen der ICA beinhalten Daten-Analyse und Kompression, Bayes-Detektion, Quellenlokalisierung und blinde Identifikation und Entfaltung.",
"title": ""
},
{
"docid": "a7336b4e1ba0846f45f6757b121a7d33",
"text": "Recently, concerns have been raised that residues of glyphosate-based herbicides may interfere with the homeostasis of the intestinal bacterial community and thereby affect the health of humans or animals. The biochemical pathway for aromatic amino acid synthesis (Shikimate pathway), which is specifically inhibited by glyphosate, is shared by plants and numerous bacterial species. Several in vitro studies have shown that various groups of intestinal bacteria may be differently affected by glyphosate. Here, we present results from an animal exposure trial combining deep 16S rRNA gene sequencing of the bacterial community with liquid chromatography mass spectrometry (LC-MS) based metabolic profiling of aromatic amino acids and their downstream metabolites. We found that glyphosate as well as the commercial formulation Glyfonova®450 PLUS administered at up to fifty times the established European Acceptable Daily Intake (ADI = 0.5 mg/kg body weight) had very limited effects on bacterial community composition in Sprague Dawley rats during a two-week exposure trial. The effect of glyphosate on prototrophic bacterial growth was highly dependent on the availability of aromatic amino acids, suggesting that the observed limited effect on bacterial composition was due to the presence of sufficient amounts of aromatic amino acids in the intestinal environment. A strong correlation was observed between intestinal concentrations of glyphosate and intestinal pH, which may partly be explained by an observed reduction in acetic acid produced by the gut bacteria. We conclude that sufficient intestinal levels of aromatic amino acids provided by the diet alleviates the need for bacterial synthesis of aromatic amino acids and thus prevents an antimicrobial effect of glyphosate in vivo. It is however possible that the situation is different in cases of human malnutrition or in production animals.",
"title": ""
},
{
"docid": "9da3fc0b3f0c41ad46412caa325e950b",
"text": "Institutional theory has proven to be a central analytical perspective for investigating the role of larger social and historical structures of Information System (IS) adaptation. However, it does not explicitly account for how organizational actors make sense of and enact IS in their local context. We address this limitation by showing how sensemaking theory can be combined with institutional theory to understand IS adaptation in organizations. Based on a literature review, we present the main assumptions behind institutional and sensemaking theory when used as analytical lenses for investigating the phenomenon of IS adaptation. Furthermore, we explore a combination of the two theories with a case study in a health care setting where an Electronic Patient Record (EPR) system was introduced and used by a group of doctors. The empirical case provides evidence of how existing institutional structures influenced the doctors’ sensemaking of the EPR system. Additionally, it illustrates how the doctors made sense of the EPR system in practice. The paper outlines that: 1) institutional theory has its explanatory power at the organizational field and organizational/group level of analysis focusing on the role that larger institutional structures play in organizational actors’ sensemaking of IS adaptation, 2) sensemaking theory has its explanatory power at the organizational/group and individual/socio-cognitive level focusing on organizational actors’ cognition and situated actions of IS adaptation, and 3) a combined view of the two theories helps us oscillate between levels of analysis, which facilitates a much richer interpretation of IS adaptation.",
"title": ""
},
{
"docid": "a56d43bd191147170e1df87878ca1b11",
"text": "Although problem solving is regarded by most educators as among the most important learning outcomes, few instructional design prescriptions are available for designing problem-solving instruction and engaging learners. This paper distinguishes between well-structured problems and ill-structured problems. Well-structured problems are constrained problems with convergent solutions that engage the application of a limited number of rules and principles within welldefined parameters. Ill-structured problems possess multiple solutions, solution paths, fewer parameters which are less manipulable, and contain uncertainty about which concepts, rules, and principles are necessary for the solution or how they are organized and which solution is best. For both types of problems, this paper presents models for how learners solve them and models for designing instruction to support problem-solving skill development. The model for solving wellstructured problems is based on information processing theories of learning, while the model for solving ill-structured problems relies on an emerging theory of ill-structured problem solving and on constructivist and situated cognition approaches to learning. PROBLEM: INSTRUCTIONAL-DESIGN MODELS FOR PROBLEM SOLVING",
"title": ""
},
{
"docid": "66ae9ba9863f885844ce5074b556013c",
"text": "Autoregressive integrated moving average (ARIMA) is a popular linear models in time series forecasting during the past years. Recent research activities with artificial neural networks (ANNs) suggest that ANNs could be a good selection when the predictor and predictand were not the simple linear relationship. Due to the complex linear and non-linear patterns, there were no ideal methods only using linear or non-linear regression to forecast the particulate matter concentration. In view of the situation, a hybrid methodology that combines both ARIMA and ANN models was developed to improve the forecast accuracy in this paper. The impact of wind direction and the traffic vehicle to the particulate matter concentration was introduction to model by defining wind-weighted-traffic road length density. The paper used road length density, which was obtained using geographic information system (GIS), as a proxy due to the absence of the traffic vehicle date. To demonstrate the utility of the technique, daily average PM10 concentration monitored at a site in Changsha in 2008 was utilized. First we use ARIMA model to model the linear component and then a neural network model was developed to model the residuals from the ARIMA model using wind-weighted-traffic road information around the monitor station. The results indicated that hybrid model can be an effective way to improve the PM10 forecasting accuracy comparing with the single ARIMA model. The approach demonstrates the potential to be applied to other areas of the word.",
"title": ""
},
{
"docid": "df78e51c3ed3a6924bf92db6000062e1",
"text": "We study the problem of computing all Pareto-optimal journeys in a dynamic public transit network for two criteria: arrival time and number of transfers. Existing algorithms consider this as a graph problem, and solve it using variants of Dijkstra’s algorithm. Unfortunately, this leads to either high query times or suboptimal solutions. We take a different approach. We introduce RAPTOR, our novel round-based public transit router. Unlike previous algorithms, it is not Dijkstrabased, looks at each route (such as a bus line) in the network at most once per round, and can be made even faster with simple pruning rules and parallelization using multiple cores. Because it does not rely on preprocessing, RAPTOR works in fully dynamic scenarios. Moreover, it can be easily extended to handle flexible departure times or arbitrary additional criteria, such as fare zones. When run on London’s complex public transportation network, RAPTOR computes all Paretooptimal journeys between two random locations an order of magnitude faster than previous approaches, which easily enables interactive applications.",
"title": ""
}
] |
scidocsrr
|
19266307e86f4bb129cf5b2b65e59652
|
Optic-Flow Based Control of a 46 g Quadrotor
|
[
{
"docid": "dd37e97635b0ded2751d64cafcaa1aa4",
"text": "DEVICES, AND STRUCTURES By S.E. Lyshevshi, CRC Press, 2002. This book is the first of the CRC Press “Nanoand Microscience, Engineering, Technology, and Medicine Series,” of which the author of this book is also the editor. This book could be a textbook of a semester course on microelectro mechanical systems (MEMS) and nanoelectromechanical systems (NEMS). The objective is to cover the topic from basic theory to the design and development of structures of practical devices and systems. The idea of MEMS and NEMS is to utilize and further extend the technology of integrated circuits (VLSI) to nanometer structures of mechanical and biological devices for potential applications in molecular biology and medicine. MEMS and NEMS (nanotechnology) are hot topics in the future development of electronics. The interest is not limited to electrical engineers. In fact, many scientists and researchers are interested in developing MEMS and NEMS for biological and medical applications. Thus, this field has attracted researchers from many different fields. Many new books are coming out. This book seems to be the first one aimed to be a textbook for this field, but it is very hard to write a book for readers with such different backgrounds. The author of this book has emphasized computer modeling, mostly due to his research interest in this field. It would be good to provide coverage on biological and medical MEMS, for example, by reporting a few gen or DNA-related cases. Furthermore, the mathematical modeling in term of a large number of nonlinear coupled differential equations, as used in many places in the book, does not appear to have any practical value to the actual physical structures.",
"title": ""
},
{
"docid": "5cdcb7073bd0f8e1b0affe5ffb4adfc7",
"text": "This paper presents a nonlinear controller for hovering flight and touchdown control for a vertical take-off and landing (VTOL) unmanned aerial vehicle (UAV) using inertial optical flow. The VTOL vehicle is assumed to be a rigid body, equipped with a minimum sensor suite (camera and IMU), manoeuvring over a textured flat target plane. Two different tasks are considered in this paper: the first concerns the stability of hovering flight and the second one concerns regulation of automatic landing using the divergent optical flow as feedback information. Experimental results on a quad-rotor UAV demonstrate the performance of the proposed control strategy.",
"title": ""
}
] |
[
{
"docid": "b01028ef40b1fda74d0621c430ce9141",
"text": "ETRI Journal, Volume 29, Number 2, April 2007 A novel low-voltage CMOS current feedback operational amplifier (CFOA) is presented. This realization nearly allows rail-to-rail input/output operations. Also, it provides high driving current capabilities. The CFOA operates at supply voltages of ±0.75 V with a total standby current of 304 μA. The circuit exhibits a bandwidth better than 120 MHz and a current drive capability of ±1 mA. An application of the CFOA to realize a new all-pass filter is given. PSpice simulation results using 0.25 μm CMOS technology parameters for the proposed CFOA and its application are given.",
"title": ""
},
{
"docid": "a454b5a912c4b74a563f09249edecc34",
"text": "There is great interest in assessing student learning in unscripted, open-ended environments, but students' work can evolve in ways that are too subtle or too complex to be detected by the human eye. In this paper, I describe an automated technique to assess, analyze and visualize students learning computer programming. I logged hundreds of snapshots of students' code during a programming assignment, and I employ different quantitative techniques to extract students' behaviors and categorize them in terms of programming experience. First I review the literature on educational data mining, learning analytics, computer vision applied to assessment, and emotion detection, discuss the relevance of the work, and describe one case study with a group undergraduate engineering students",
"title": ""
},
{
"docid": "36d5ba974945cba3bf9120f3ab9aa7a0",
"text": "In this paper, we analyze the spectral efficiency of multicell massive multiple-input-multiple-output (MIMO) systems with downlink training and a new pilot contamination precoding (PCP) scheme. First, we analyze the spectral efficiency of the beamforming training (BT) scheme with maximum-ratio transmission (MRT) precoding. Then, we derive an approximate closed-form expression of the spectral efficiency to find the optimal lengths of uplink and downlink pilots. Simulation results show that the achieved spectral efficiency can be improved due to channel estimation at the user side, but in comparison with a single-cell scenario, the spectral efficiency per cell in multicell scenario degrades because of pilot contamination. We focus on the practical case where the number of base station (BS) antennas is large but still finite and propose the BT and PCP (BT-PCP) transmission scheme to mitigate the pilot contamination with limited cooperation between BSs. We confirm the effectiveness of the proposed BT-PCP scheme with simulation, and we show that the proposed BT-PCP scheme achieves higher spectral efficiency than the conventional PCP method and that the performance gap from the perfect channel state information (CSI) scenario without pilot contamination is small.",
"title": ""
},
{
"docid": "2107e4efdf7de92a850fc0142bf8c8c3",
"text": "Throughout the wide range of aerial robot related applications, selecting a particular airframe is often a trade-off. Fixed-wing small-scale unmanned aerial vehicles (UAVs) typically have difficulty surveying at low altitudes while quadrotor UAVs, having more maneuverability, suffer from limited flight time. Recent prior work [1] proposes a solar-powered small-scale aerial vehicle designed to transform between fixed-wing and quad-rotor configurations. Surplus energy collected and stored while in a fixed-wing configuration is utilized while in a quad-rotor configuration. This paper presents an improvement to the robot's design in [1] by pursuing a modular airframe, an optimization of the hybrid propulsion system, and solar power electronics. Two prototypes of the robot have been fabricated for independent testing of the airframe in fixed-wing and quad-rotor states. Validation of the solar power electronics and hybrid propulsion system designs were demonstrated through a combination of simulation and empirical data from prototype hardware.",
"title": ""
},
{
"docid": "4726381f2636acc8bebe881dc25316f8",
"text": "Optimized hardware for propagating and checking software-programmable metadata tags can achieve low runtime overhead. We generalize prior work on hardware tagging by considering a generic architecture that supports software-defined policies over metadata of arbitrary size and complexity; we introduce several novel microarchitectural optimizations that keep the overhead of this rich processing low. Our model thus achieves the efficiency of previous hardware-based approaches with the flexibility of the software-based ones. We demonstrate this by using it to enforce four diverse safety and security policies---spatial and temporal memory safety, taint tracking, control-flow integrity, and code and data separation---plus a composite policy that enforces all of them simultaneously. Experiments on SPEC CPU2006 benchmarks with a PUMP-enhanced RISC processor show modest impact on runtime (typically under 10%) and power ceiling (less than 10%), in return for some increase in energy usage (typically under 60%) and area for on-chip memory structures (110%).",
"title": ""
},
{
"docid": "a0a9fc47ba3694864e64e4f29c3c5735",
"text": "Severe cases of traumatic brain injury (TBI) require neurocritical care, the goal being to stabilize hemodynamics and systemic oxygenation to prevent secondary brain injury. It is reported that approximately 45 % of dysoxygenation episodes during critical care have both extracranial and intracranial causes, such as intracranial hypertension and brain edema. For this reason, neurocritical care is incomplete if it only focuses on prevention of increased intracranial pressure (ICP) or decreased cerebral perfusion pressure (CPP). Arterial hypotension is a major risk factor for secondary brain injury, but hypertension with a loss of autoregulation response or excess hyperventilation to reduce ICP can also result in a critical condition in the brain and is associated with a poor outcome after TBI. Moreover, brain injury itself stimulates systemic inflammation, leading to increased permeability of the blood-brain barrier, exacerbated by secondary brain injury and resulting in increased ICP. Indeed, systemic inflammatory response syndrome after TBI reflects the extent of tissue damage at onset and predicts further tissue disruption, producing a worsening clinical condition and ultimately a poor outcome. Elevation of blood catecholamine levels after severe brain damage has been reported to contribute to the regulation of the cytokine network, but this phenomenon is a systemic protective response against systemic insults. Catecholamines are directly involved in the regulation of cytokines, and elevated levels appear to influence the immune system during stress. Medical complications are the leading cause of late morbidity and mortality in many types of brain damage. Neurocritical care after severe TBI has therefore been refined to focus not only on secondary brain injury but also on systemic organ damage after excitation of sympathetic nerves following a stress reaction.",
"title": ""
},
{
"docid": "a28199159d7508a7ef57cd20adf084c2",
"text": "Brain-computer interfaces (BCIs) translate brain activity into signals controlling external devices. BCIs based on visual stimuli can maintain communication in severely paralyzed patients, but only if intact vision is available. Debilitating neurological disorders however, may lead to loss of intact vision. The current study explores the feasibility of an auditory BCI. Sixteen healthy volunteers participated in three training sessions consisting of 30 2-3 min runs in which they learned to increase or decrease the amplitude of sensorimotor rhythms (SMR) of the EEG. Half of the participants were presented with visual and half with auditory feedback. Mood and motivation were assessed prior to each session. Although BCI performance in the visual feedback group was superior to the auditory feedback group there was no difference in performance at the end of the third session. Participants in the auditory feedback group learned slower, but four out of eight reached an accuracy of over 70% correct in the last session comparable to the visual feedback group. Decreasing performance of some participants in the visual feedback group is related to mood and motivation. We conclude that with sufficient training time an auditory BCI may be as efficient as a visual BCI. Mood and motivation play a role in learning to use a BCI.",
"title": ""
},
{
"docid": "8d19d251e31dd3564f7bcab33cc3c9b7",
"text": "The visual appearance of a person is easily affected by many factors like pose variations, viewpoint changes and camera parameter differences. This makes person Re-Identification (ReID) among multiple cameras a very challenging task. This work is motivated to learn mid-level human attributes which are robust to such visual appearance variations. And we propose a semi-supervised attribute learning framework which progressively boosts the accuracy of attributes only using a limited number of labeled data. Specifically, this framework involves a three-stage training. A deep Convolutional Neural Network (dCNN) is first trained on an independent dataset labeled with attributes. Then it is fine-tuned on another dataset only labeled with person IDs using our defined triplet loss. Finally, the updated dCNN predicts attribute labels for the target dataset, which is combined with the independent dataset for the final round of fine-tuning. The predicted attributes, namely deep attributes exhibit superior generalization ability across different datasets. By directly using the deep attributes with simple Cosine distance, we have obtained surprisingly good accuracy on four person ReID datasets. Experiments also show that a simple metric learning modular further boosts our method, making it significantly outperform many recent works.",
"title": ""
},
{
"docid": "4cb66593d4f9ddb30cb7e470db22f0f7",
"text": "Image fusion is the process of combining two or more images for providing more information. Medical image fusion refers to the fusion of medical images obtained from different modalities. Medical Image Fusion helps in medical diagnosis by way of improving the quality of the images. In diagnosis, images obtained from a single modality like MRI, CT etc, may not be able to provide all the required information. It is needed to combine information obtained from other modalities also to improve the information acquired. For example combination of information from MRI and CT modalities gives more information than the individual modalities separately. The aim is to provide a method for fusing the images from the individual modalities in such a way that the fusion results in an image that gives more information without any loss of the input information and without any redundancy or artifacts. In the fusion of medical images obtained from different modalities they might be in different coordinate systems and they have to be aligned properly for efficient fusion. The aligning of the input images before proceeding with the fusion is called image registration. The intensity based registration and Mutual information based image registration procedures are carried out before decomposing the images. The two imaging modalities CT and MRI are considered for this study. The results on CT and MR images demonstrate the performance of the fusion algorithms in comparison with registration schemes.",
"title": ""
},
{
"docid": "ef36ed423a1834272684cf39d06453c1",
"text": "Abstract In general two basic methods are used for controlling the velocity of a hydraulic cylinder. First by an axial variable-displacement pump for controls flow to the cylinder. This configuration is commonly known as a hydrostatic transmission. Second by proportional valve powered by a constant-pressure source, such as a pressure compensated pump, drives the hydraulic cylinder. In this study, the electro-hydraulic servo system (EHSS) for velocity control of hydraulic cylinder is investigated experimentally and its analysis theoretically. Where the controlled hydraulic cylinder is altered by a swashplate axial piston pump or by proportional valve to achieve velocity control. The theoretical part includes the derivation of the mathematical model equations of combination system. Velocity control system for hydraulic cylinder using simple (PID) controller to get constant velocity range of hydraulic cylinder under applied external variable loads . An experimental set-up is constructed, which consists of the hydraulic test pump unit, the electro-hydraulic proportional valve unit, the hydraulic actuator unit , the external load control unit and interfacing electronic unit. The experimental results show that PID controller can be achieve good velocity control by variable displacement axial piston pump and also by proportional valve under external loads variations.",
"title": ""
},
{
"docid": "1f972cc136f47288888657e84464412e",
"text": "This paper evaluates the impact of machine translation on the software localization process and the daily work of professional translators when SMT is applied to low-resourced languages with rich morphology. Translation from English into six low-resourced languages (Czech, Estonian, Hungarian, Latvian, Lithuanian and Polish) from different language groups are examined. Quality, usability and applicability of SMT for professional translation were evaluated. The building of domain and project tailored SMT systems for localization purposes was evaluated in two setups. The results of the first evaluation were used to improve SMT systems and MT platform. The second evaluation analysed a more complex situation considering tag translation and its effects on the translator’s productivity.",
"title": ""
},
{
"docid": "b610e9bef08ef2c133a02e887b89b196",
"text": "We propose to use question answering (QA) data from Web forums to train chatbots from scratch, i.e., without dialog training data. First, we extract pairs of question and answer sentences from the typically much longer texts of questions and answers in a forum. We then use these shorter texts to train seq2seq models in a more efficient way. We further improve the parameter optimization using a new model selection strategy based on QA measures. Finally, we propose to use extrinsic evaluation with respect to a QA task as an automatic evaluation method for chatbots. The evaluation shows that the model achieves a MAP of 63.5% on the extrinsic task. Moreover, it can answer correctly 49.5% of the questions when they are similar to questions asked in the forum, and 47.3% of the questions when they are more conversational in style.",
"title": ""
},
{
"docid": "7bd7b0b85ae68f0ccd82d597667d8acb",
"text": "Trust evaluation plays an important role in securing wireless sensor networks (WSNs), which is one of the most popular network technologies for the Internet of Things (IoT). The efficiency of the trust evaluation process is largely governed by the trust derivation, as it dominates the overhead in the process, and performance of WSNs is particularly sensitive to overhead due to the limited bandwidth and power. This paper proposes an energy-aware trust derivation scheme using game theoretic approach, which manages overhead while maintaining adequate security of WSNs. A risk strategy model is first presented to stimulate WSN nodes' cooperation. Then, a game theoretic approach is applied to the trust derivation process to reduce the overhead of the process. We show with the help of simulations that our trust derivation scheme can achieve both intended security and high efficiency suitable for WSN-based IoT networks.",
"title": ""
},
{
"docid": "584d2858178e4e33855103a71d7fdce4",
"text": "This paper presents 5G mm-wave phased-array antenna for 3D-hybrid beamforming. This uses MFC to steer beam for the elevation, and uses butler matrix network for the azimuth. In case of butler matrix network, this, using 180° ring hybrid coupler switch network, is proposed to get additional beam pattern and improved SRR in comparison with conventional structure. Also, it can be selected 15 of the azimuth beam pattern. When using the chip of proposed structure, it is possible to get variable kind of beam-forming over 1000. In addition, it is suitable 5G system or a satellite communication system that requires a beamforming.",
"title": ""
},
{
"docid": "e3eae34f1ad48264f5b5913a65bf1247",
"text": "Double spending and blockchain forks are two main issues that the Bitcoin crypto-system is confronted with. The former refers to an adversary's ability to use the very same coin more than once while the latter reflects the occurrence of transient inconsistencies in the history of the blockchain distributed data structure. We present a new approach to tackle these issues: it consists in adding some local synchronization constraints on Bitcoin's validation operations, and in making these constraints independent from the native blockchain protocol. Synchronization constraints are handled by nodes which are randomly and dynamically chosen in the Bitcoin system. We show that with such an approach, content of the blockchain is consistent with all validated transactions and blocks which guarantees the absence of both double-spending attacks and blockchain forks.",
"title": ""
},
{
"docid": "119696bc950e1c36fa9d09ee8c1aa6fb",
"text": "A smart grid is an intelligent electricity grid that optimizes the generation, distribution and consumption of electricity through the introduction of Information and Communication Technologies on the electricity grid. In essence, smart grids bring profound changes in the information systems that drive them: new information flows coming from the electricity grid, new players such as decentralized producers of renewable energies, new uses such as electric vehicles and connected houses and new communicating equipments such as smart meters, sensors and remote control points. All this will cause a deluge of data that the energy companies will have to face. Big Data technologies offers suitable solutions for utilities, but the decision about which Big Data technology to use is critical. In this paper, we provide an overview of data management for smart grids, summarise the added value of Big Data technologies for this kind of data, and discuss the technical requirements, the tools and the main steps to implement Big Data solutions in the smart grid context.",
"title": ""
},
{
"docid": "cd7fa5de19b12bdded98f197c1d9cd22",
"text": "Many event monitoring systems rely on counting known keywords in streaming text data to detect sudden spikes in frequency. But the dynamic and conversational nature of Twitter makes it hard to select known keywords for monitoring. Here we consider a method of automatically finding noun phrases (NPs) as keywords for event monitoring in Twitter. Finding NPs has two aspects, identifying the boundaries for the subsequence of words which represent the NP, and classifying the NP to a specific broad category such as politics, sports, etc. To classify an NP, we define the feature vector for the NP using not just the words but also the author's behavior and social activities. Our results show that we can classify many NPs by using a sample of training data from a knowledge-base.",
"title": ""
},
{
"docid": "7a7fedfeaa85536028113c65d5650957",
"text": "In this work we propose a novel framework named Dual-Net aiming at learning more accurate representation for image recognition. Here two parallel neural networks are coordinated to learn complementary features and thus a wider network is constructed. Specifically, we logically divide an end-to-end deep convolutional neural network into two functional parts, i.e., feature extractor and image classifier. The extractors of two subnetworks are placed side by side, which exactly form the feature extractor of DualNet. Then the two-stream features are aggregated to the final classifier for overall classification, while two auxiliary classifiers are appended behind the feature extractor of each subnetwork to make the separately learned features discriminative alone. The complementary constraint is imposed by weighting the three classifiers, which is indeed the key of DualNet. The corresponding training strategy is also proposed, consisting of iterative training and joint finetuning, to make the two subnetworks cooperate well with each other. Finally, DualNet based on the well-known CaffeNet, VGGNet, NIN and ResNet are thoroughly investigated and experimentally evaluated on multiple datasets including CIFAR-100, Stanford Dogs and UEC FOOD-100. The results demonstrate that DualNet can really help learn more accurate image representation, and thus result in higher accuracy for recognition. In particular, the performance on CIFAR-100 is state-of-the-art compared to the recent works.",
"title": ""
},
{
"docid": "81a3def63addf898b91f4d7217f6298a",
"text": "Cloud computing is a new form of technology, which infrastructure, developing platform, software, and storage can be delivered as a service in a pay as you use cost model. However, for critical business application and more sensitive information, cloud providers must be selected based on high level of trustworthiness. In this paper, we present a trust model to evaluate cloud services in order to help cloud users select the most reliable resources. We integrate our previous work “conceptual SLA framework for cloud computing” with the proposed trust management model to present a new solution of defining the reliable criteria for the selection process of cloud providers",
"title": ""
},
{
"docid": "6eca7ba1607a1d7d6697af6127a92c4b",
"text": "Cluster analysis is one of attractive data mining technique that use in many fields. One popular class of data clustering algorithms is the center based clustering algorithm. K-means used as a popular clustering method due to its simplicity and high speed in clustering large datasets. However, K-means has two shortcomings: dependency on the initial state and convergence to local optima and global solutions of large problems cannot found with reasonable amount of computation effort. In order to overcome local optima problem lots of studies done in clustering. Over the last decade, modeling the behavior of social insects, such as ants and bees, for the purpose of search and problem solving has been the context of the emerging area of swarm intelligence. Honey-bees are among the most closely studied social insects. Honey-bee mating may also be considered as a typical swarm-based approach to optimization, in which the search algorithm is inspired by the process of marriage in real honey-bee. Honey-bee has been used to model agent-based systems. In this paper, we proposed application of honeybee mating optimization in clustering (HBMK-means). We compared HBMK-means with other heuristics algorithm in clustering, such as GA, SA, TS, and ACO, by implementing them on several well-known datasets. Our finding shows that the proposed algorithm works than the best one. 2007 Elsevier Inc. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
a87fcb52c49725f7290480cdc5632605
|
Automatic 3D liver location and segmentation via convolutional neural network and graph cut
|
[
{
"docid": "d03abae94005c27aa46c66e1cdc77b23",
"text": "The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6-8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making the tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they only used a single T1 or T2 images, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep models in which trainable filters and local neighborhood pooling operations are applied alternatingly on the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multi-modality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement.",
"title": ""
},
{
"docid": "fc8850669cc3f6f2dd1baaf2d2792506",
"text": "Liver segmentation is still a challenging task in medical image processing area due to the complexity of the liver's anatomy, low contrast with adjacent organs, and presence of pathologies. This investigation was used to develop and validate an automated method to segment livers in CT images. The proposed framework consists of three steps: 1) preprocessing; 2) initialization; and 3) segmentation. In the first step, a statistical shape model is constructed based on the principal component analysis and the input image is smoothed using curvature anisotropic diffusion filtering. In the second step, the mean shape model is moved using thresholding and Euclidean distance transformation to obtain a coarse position in a test image, and then the initial mesh is locally and iteratively deformed to the coarse boundary, which is constrained to stay close to a subspace of shapes describing the anatomical variability. Finally, in order to accurately detect the liver surface, deformable graph cut was proposed, which effectively integrates the properties and inter-relationship of the input images and initialized surface. The proposed method was evaluated on 50 CT scan images, which are publicly available in two databases Sliver07 and 3Dircadb. The experimental results showed that the proposed method was effective and accurate for detection of the liver surface.",
"title": ""
}
] |
[
{
"docid": "fe74692a16c5e50bc40f1d379457d643",
"text": "To carry out the motion control of CNC machine and robot, this paper introduces an approach to implement 4-axis motion controller based on field programmable gate array (FPGA). Starting with introduction to existing excellent 4-axis motion controller MCX314, the fundamental structure of controller is discussed. Since the straight-line motion is a fundamental motion of CNC machine and robot, this paper introduces a linear interpolation method to do approximate straight-line motion within any 3-axis space. As Interpolation calculation of hardware interpolation is implemented by hardware logic circuit such as ASIC or FPGA in the controller, therefore this method can avoid a large amount of complex mathematical calculation, which hints that this controller has high real-time performance. The simulation of straight-line motion within 3D space verifies the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "3851a77360fb2d6df454c1ee19c59037",
"text": "Plantar fasciitis affects nearly 1 million persons in the United States at any one time. Conservative therapies have been reported to successfully treat 90% of plantar fasciitis cases; however, for the remaining cases, only invasive therapeutic solutions remain. This investigation studied newly emerging technology, low-level laser therapy. From September 2011 to June 2013, 69 subjects were enrolled in a placebo-controlled, randomized, double-blind, multicenter study that evaluated the clinical utility of low-level laser therapy for the treatment of unilateral chronic fasciitis. The volunteer participants were treated twice a week for 3 weeks for a total of 6 treatments and were evaluated at 5 separate time points: before the procedure and at weeks 1, 2, 3, 6, and 8. The pain rating was recorded using a visual analog scale, with 0 representing \"no pain\" and 100 representing \"worst pain.\" Additionally, Doppler ultrasonography was performed on the plantar fascia to measure the fascial thickness before and after treatment. Study participants also completed the Foot Function Index. At the final follow-up visit, the group participants demonstrated a mean improvement in heel pain with a visual analog scale score of 29.6 ± 24.9 compared with the placebo subjects, who reported a mean improvement of 5.4 ± 16.0, a statistically significant difference (p < .001). Although additional studies are warranted, these data have demonstrated that low-level laser therapy is a promising treatment of plantar fasciitis.",
"title": ""
},
{
"docid": "834af0b828702aae0482a2e31e3f8a40",
"text": "We routinely hear vendors claim that their systems are “secure.” However, without knowing what assumptions are made by the vendor, it is hard to justify such a claim. Prior to claiming the security of a system, it is important to identify the threats to the system in question. Enumerating the threats to a system helps system architects develop realistic and meaningful security requirements. In this paper, we investigate how threat modeling can be used as foundations for the specification of security requirements. Although numerous works have been published on threat modeling, there is a lack of integrated, systematic approach toward threat modeling for complex systems. We examine the differences between modeling software products and complex systems, and outline our approach for identifying threats of networked systems. We also present three case studies of threat modeling: Software-Defined Radio, a network traffic monitoring tool (VisFlowConnect), and a cluster security monitoring tool (NVisionCC).",
"title": ""
},
{
"docid": "0a432546553ffbb06690495d5c858e19",
"text": "Since the first reported death in 1977, scores of seemingly healthy Hmong refugees have died mysteriously and without warning from what has come to be known as Sudden Unexpected Nocturnal Death Syndrome (SUNDS). To date medical research has provided no adequate explanation for these sudden deaths. This study is an investigation into the changing impact of traditional beliefs as they manifest during the stress of traumatic relocation. In Stockton, California, 118 Hmong men and women were interviewed regarding their awareness of and personal experience with a traditional nocturnal spirit encounter. An analysis of this data reveals that the supranormal attack acts as a trigger for Hmong SUNDS.",
"title": ""
},
{
"docid": "1051cb1eb8d9306e1419dbad0ad53ee9",
"text": "The goal of this paper is to design a statistical test for the camera model identification problem. The approach is based on the heteroscedastic noise model, which more accurately describes a natural raw image. This model is characterized by only two parameters, which are considered as unique fingerprint to identify camera models. The camera model identification problem is cast in the framework of hypothesis testing theory. In an ideal context where all model parameters are perfectly known, the likelihood ratio test (LRT) is presented and its performances are theoretically established. For a practical use, two generalized LRTs are designed to deal with unknown model parameters so that they can meet a prescribed false alarm probability while ensuring a high detection performance. Numerical results on simulated images and real natural raw images highlight the relevance of the proposed approach.",
"title": ""
},
{
"docid": "d7aec74465931a52e9cda65de38b1fb7",
"text": "As the use of mobile devices becomes increasingly ubiquitous, the need for systematically testing applications (apps) that run on these devices grows more and more. However, testing mobile apps is particularly expensive and tedious, often requiring substantial manual effort. While researchers have made much progress in automated testing of mobile apps during recent years, a key problem that remains largely untracked is the classic oracle problem, i.e., to determine the correctness of test executions. This paper presents a novel approach to automatically generate test cases, that include test oracles, for mobile apps. The foundation for our approach is a comprehensive study that we conducted of real defects in mobile apps. Our key insight, from this study, is that there is a class of features that we term user-interaction features, which is implicated in a significant fraction of bugs and for which oracles can be constructed - in an application agnostic manner -- based on our common understanding of how apps behave. We present an extensible framework that supports such domain specific, yet application agnostic, test oracles, and allows generation of test sequences that leverage these oracles. Our tool embodies our approach for generating test cases that include oracles. Experimental results using 6 Android apps show the effectiveness of our tool in finding potentially serious bugs, while generating compact test suites for user-interaction features.",
"title": ""
},
{
"docid": "3503074668bd55868f86a99a8a171073",
"text": "Deep Neural Networks (DNNs) provide state-of-the-art solutions in several difficult machine perceptual tasks. However, their performance relies on the availability of a large set of labeled training data, which limits the breadth of their applicability. Hence, there is a need for new semisupervised learning methods for DNNs that can leverage both (a small amount of) labeled and unlabeled training data. In this paper, we develop a general loss function enabling DNNs of any topology to be trained in a semi-supervised manner without extra hyper-parameters. As opposed to current semi-supervised techniques based on topology-specific or unstable approaches, ours is both robust and general. We demonstrate that our approach reaches state-of-the-art performance on the SVHN (9.82% test error, with 500 labels and wide Resnet) and CIFAR10 (16.38% test error, with 8000 labels and sigmoid convolutional neural network) data sets.",
"title": ""
},
{
"docid": "a0a9fc47ba3694864e64e4f29c3c5735",
"text": "Severe cases of traumatic brain injury (TBI) require neurocritical care, the goal being to stabilize hemodynamics and systemic oxygenation to prevent secondary brain injury. It is reported that approximately 45 % of dysoxygenation episodes during critical care have both extracranial and intracranial causes, such as intracranial hypertension and brain edema. For this reason, neurocritical care is incomplete if it only focuses on prevention of increased intracranial pressure (ICP) or decreased cerebral perfusion pressure (CPP). Arterial hypotension is a major risk factor for secondary brain injury, but hypertension with a loss of autoregulation response or excess hyperventilation to reduce ICP can also result in a critical condition in the brain and is associated with a poor outcome after TBI. Moreover, brain injury itself stimulates systemic inflammation, leading to increased permeability of the blood-brain barrier, exacerbated by secondary brain injury and resulting in increased ICP. Indeed, systemic inflammatory response syndrome after TBI reflects the extent of tissue damage at onset and predicts further tissue disruption, producing a worsening clinical condition and ultimately a poor outcome. Elevation of blood catecholamine levels after severe brain damage has been reported to contribute to the regulation of the cytokine network, but this phenomenon is a systemic protective response against systemic insults. Catecholamines are directly involved in the regulation of cytokines, and elevated levels appear to influence the immune system during stress. Medical complications are the leading cause of late morbidity and mortality in many types of brain damage. Neurocritical care after severe TBI has therefore been refined to focus not only on secondary brain injury but also on systemic organ damage after excitation of sympathetic nerves following a stress reaction.",
"title": ""
},
{
"docid": "abb54a0c155805e7be2602265f78ae79",
"text": "In this paper we sketch out a computational theory of spatial cognition motivated by navigational behaviours, ecological requirements, and neural mechanisms as identified in animals and man. Spatial cognition is considered in the context of a cognitive agent built around the action-perception cycle. Besides sensors and effectors, the agent comprises multiple memory structures including a working memory and a longterm memory stage. Spatial longterm memory is modeled along the graph approach, treating recognizable places or poses as nodes and navigational actions as links. Models of working memory and its interaction with reference memory are discussed. The model provides an overall framework of spatial cognition which can be adapted to model different levels of behavioural complexity as well as interactions between working and longterm memory. A number of design questions for building cognitive robots are derived from comparison with biological systems and discussed in the paper.",
"title": ""
},
{
"docid": "fb7d979aa267367c97ac4539954103a6",
"text": "a Department of Social and Organizational Psychology, VU University Amsterdam, Van der Boechorststraat 1, 1081 BT, Amsterdam, The Netherlands b Netherlands Institute for the Study of Crime and Law Enforcement, De Boelelaan 1077a, 1081 HV, Amsterdam, The Netherlands c Department of Criminal Law and Criminology, VU University Amsterdam, De Boelelaan 1077, 1081 HV, Amsterdam, The Netherlands d Phoolan Devi Institute, De Boelelaan 1105, 1081 HV, Amsterdam, The Netherlands",
"title": ""
},
{
"docid": "66876eb3710afda075b62b915a2e6032",
"text": "In this paper we analyze the CS Principles project, a proposed Advanced Placement course, by focusing on the second pilot that took place in 2011-2012. In a previous publication the first pilot of the course was explained, but not in a context related to relevant educational research and philosophy. In this paper we analyze the content and the pedagogical approaches used in the second pilot of the project. We include information about the third pilot being conducted in 2012-2013 and the portfolio exam that is part of that pilot. Both the second and third pilots provide evidence that the CS Principles course is succeeding in changing how computer science is taught and to whom it is taught.",
"title": ""
},
{
"docid": "acf4645478c28811d41755b0ed81fb39",
"text": "Make more knowledge even in less time every day. You may not always spend your time and money to go abroad and get the experience and knowledge by yourself. Reading is a good alternative to do in getting this desirable knowledge and experience. You may gain many things from experiencing directly, but of course it will spend much money. So here, by reading social network data analytics social network data analytics, you can take more advantages with limited budget.",
"title": ""
},
{
"docid": "cf42b86cf4e42d31e2726c4247edf17a",
"text": "Global Navigation Satellite System (GNSS) will in effect be fully deployed and operational in a few years, even with the delays in Galileo as a consequence of European Union's financial difficulties. The vastly broadened GNSS spectra, spread densely across 1146-1616 MHz, versus the narrow Global Positioning System (GPS) L1 and L2 bands, together with a constellation of over 100 Medium Earth Orbit (MEO) and Geostationary Earth Orbit (GEO) satellites versus GPS' 24 MEO satellites, are revolutionizing the design of GNSS receive antennas. For example, a higher elevation cutoff angle will be preferred. As a result, fundamental changes in antenna design, new features and applications, as well as cost structures are ongoing. Existing GNSS receive antenna technologies are reviewed and design challenges are discussed.",
"title": ""
},
{
"docid": "60d807b2bbd3106a0e359c66805b403a",
"text": "The existing word representation methods mostly limit their information source to word co-occurrence statistics. In this paper, we introduce ngrams into four representation methods: SGNS, GloVe, PPMI matrix, and its SVD factorization. Comprehensive experiments are conducted on word analogy and similarity tasks. The results show that improved word representations are learned from ngram cooccurrence statistics. We also demonstrate that the trained ngram representations are useful in many aspects such as finding antonyms and collocations. Besides, a novel approach of building co-occurrence matrix is proposed to alleviate the hardware burdens brought by ngrams.",
"title": ""
},
{
"docid": "7bd7b0b85ae68f0ccd82d597667d8acb",
"text": "Trust evaluation plays an important role in securing wireless sensor networks (WSNs), which is one of the most popular network technologies for the Internet of Things (IoT). The efficiency of the trust evaluation process is largely governed by the trust derivation, as it dominates the overhead in the process, and performance of WSNs is particularly sensitive to overhead due to the limited bandwidth and power. This paper proposes an energy-aware trust derivation scheme using game theoretic approach, which manages overhead while maintaining adequate security of WSNs. A risk strategy model is first presented to stimulate WSN nodes' cooperation. Then, a game theoretic approach is applied to the trust derivation process to reduce the overhead of the process. We show with the help of simulations that our trust derivation scheme can achieve both intended security and high efficiency suitable for WSN-based IoT networks.",
"title": ""
},
{
"docid": "68cf646ecd3aa857ec819485eab03d93",
"text": "Since their introduction as a means of front propagation and their first application to edge-based segmentation in the early 90’s, level set methods have become increasingly popular as a general framework for image segmentation. In this paper, we present a survey of a specific class of region-based level set segmentation methods and clarify how they can all be derived from a common statistical framework. Region-based segmentation schemes aim at partitioning the image domain by progressively fitting statistical models to the intensity, color, texture or motion in each of a set of regions. In contrast to edge-based schemes such as the classical Snakes, region-based methods tend to be less sensitive to noise. For typical images, the respective cost functionals tend to have less local minima which makes them particularly well-suited for local optimization methods such as the level set method. We detail a general statistical formulation for level set segmentation. Subsequently, we clarify how the integration of various low level criteria leads to a set of cost functionals. We point out relations between the different segmentation schemes. In experimental results, we demonstrate how the level set function is driven to partition the image plane into domains of coherent color, texture, dynamic texture or motion. Moreover, the Bayesian formulation allows to introduce prior shape knowledge into the level set method. We briefly review a number of advances in this domain.",
"title": ""
},
{
"docid": "4abd7884b97c1af7c24a81da7a6c0c3d",
"text": "AIM\nThe interaction between running, stretching and practice jumps during warm-up for jumping tests has not been investigated. The purpose of the present study was to compare the effects of running, static stretching of the leg extensors and practice jumps on explosive force production and jumping performance.\n\n\nMETHODS\nSixteen volunteers (13 male and 3 female) participated in five different warm-ups in a randomised order prior to the performance of two jumping tests. The warm-ups were control, 4 min run, static stretch, run + stretch, and run + stretch + practice jumps. After a 2 min rest, a concentric jump and a drop jump were performed, which yielded 6 variables expressing fast force production and jumping performance of the leg extensor muscles (concentric jump height, peak force, rate of force developed, drop jump height, contact time and height/time).\n\n\nRESULTS\nGenerally the stretching warm-up produced the lowest values and the run or run + stretch + jumps warm-ups produced the highest values of explosive force production. There were no significant differences (p<0.05) between the control and run + stretch warm-ups, whereas the run yielded significantly better scores than the run + stretch warm-up for drop jump height (3.2%), concentric jump height (3.4%) and peak concentric force (2.7%) and rate of force developed (15.4%).\n\n\nCONCLUSION\nThe results indicated that submaximum running and practice jumps had a positive effect whereas static stretching had a negative influence on explosive force and jumping performance. It was suggested that an alternative for static stretching should be considered in warm-ups prior to power activities.",
"title": ""
},
{
"docid": "4ce681973defd1564e2774a38598d983",
"text": "OBJECTIVE\nThe Montreal Cognitive Assessment (MoCA; Nasreddine et al., 2005) is a cognitive screening tool that aims to differentiate healthy cognitive aging from Mild Cognitive Impairment (MCI). Several validation studies have been conducted on the MoCA, in a variety of clinical populations. Some studies have indicated that the originally suggested cutoff score of 26/30 leads to an inflated rate of false positives, particularly for those of older age and/or lower education. We conducted a systematic review and meta-analysis of the literature to determine the diagnostic accuracy of the MoCA for differentiating healthy cognitive aging from possible MCI.\n\n\nMETHODS\nOf the 304 studies identified, nine met inclusion criteria for the meta-analysis. These studies were assessed across a range of cutoff scores to determine the respective sensitivities, specificities, positive and negative predictive accuracies, likelihood ratios for positive and negative results, classification accuracies, and Youden indices.\n\n\nRESULTS\nMeta-analysis revealed a cutoff score of 23/30 yielded the best diagnostic accuracy across a range of parameters.\n\n\nCONCLUSIONS\nA MoCA cutoff score of 23, rather than the initially recommended score of 26, lowers the false positive rate and shows overall better diagnostic accuracy. We recommend the use of this cutoff score going forward. Copyright © 2017 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "517916f4c62bc7b5766efa537359349d",
"text": "Document-level sentiment classification aims to predict user’s overall sentiment in a document about a product. However, most of existing methods only focus on local text information and ignore the global user preference and product characteristics. Even though some works take such information into account, they usually suffer from high model complexity and only consider wordlevel preference rather than semantic levels. To address this issue, we propose a hierarchical neural network to incorporate global user and product information into sentiment classification. Our model first builds a hierarchical LSTM model to generate sentence and document representations. Afterwards, user and product information is considered via attentions over different semantic levels due to its ability of capturing crucial semantic components. The experimental results show that our model achieves significant and consistent improvements compared to all state-of-theart methods. The source code of this paper can be obtained from https://github. com/thunlp/NSC.",
"title": ""
},
{
"docid": "60d8839833d10b905729e3d672cafdd6",
"text": "In order to account for the phenomenon of virtual pitch, various theories assume implicitly or explicitly that each spectral component introduces a series of subharmonics. The spectral-compression method for pitch determination can be viewed as a direct implementation of this principle. The widespread application of this principle in pitch determination is, however, impeded by numerical problems with respect to accuracy and computational efficiency. A modified algorithm is described that solves these problems. Its performance is tested for normal speech and \"telephone\" speech, i.e., speech high-pass filtered at 300 Hz. The algorithm out-performs the harmonic-sieve method for pitch determination, while its computational requirements are about the same. The algorithm is described in terms of nonlinear system theory, i.c., subharmonic summation. It is argued that the favorable performance of the subharmonic-summation algorithm stems from its corresponding more closely with current pitch-perception theories than does the harmonic sieve.",
"title": ""
}
] |
scidocsrr
|
768300114fdeb3b343c6bc4c1fa13c72
|
Memory Augmented Policy Optimization for Program Synthesis with Generalization
|
[
{
"docid": "771611dc99e22b054b936fce49aea7fc",
"text": "Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various highdimensional and/or continuous deep RL benchmarks. States are mapped to hash codes, which allows to count their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domaindependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration.",
"title": ""
},
{
"docid": "afa5296bca23dbcf138b7fc0ae0c9dd7",
"text": "Learning a natural language interface for database tables is a challenging task that involves deep language understanding and multi-step reasoning. The task is often approached by mapping natural language queries to logical forms or programs that provide the desired response when executed on the database. To our knowledge, this paper presents the first weakly supervised, end-to-end neural network model to induce such programs on a real-world dataset. We enhance the objective function of Neural Programmer, a neural network with built-in discrete operations, and apply it on WikiTableQuestions, a natural language question-answering dataset. The model is trained end-to-end with weak supervision of question-answer pairs, and does not require domain-specific grammars, rules, or annotations that are key elements in previous approaches to program induction. The main experimental result in this paper is that a single Neural Programmer model achieves 34.2% accuracy using only 10,000 examples with weak supervision. An ensemble of 15 models, with a trivial combination technique, achieves 37.7% accuracy, which is competitive to the current state-of-the-art accuracy of 37.1% obtained by a traditional natural language semantic parser. 1 BACKGROUND AND INTRODUCTION Databases are a pervasive way to store and access knowledge. However, it is not straightforward for users to interact with databases since it often requires programming skills and knowledge about database schemas. Overcoming this difficulty by allowing users to communicate with databases via natural language is an active research area. The common approach to this task is by semantic parsing, which is the process of mapping natural language to symbolic representations of meaning. In this context, semantic parsing yields logical forms or programs that provide the desired response when executed on the databases (Zelle & Mooney, 1996). Semantic parsing is a challenging problem that involves deep language understanding and reasoning with discrete operations such as counting and row selection (Liang, 2016). The first learning methods for semantic parsing require expensive annotation of question-program pairs (Zelle & Mooney, 1996; Zettlemoyer & Collins, 2005). This annotation process is no longer necessary in the current state-of-the-art semantic parsers that are trained using only question-answer pairs (Liang et al., 2011; Kwiatkowski et al., 2013; Krishnamurthy & Kollar, 2013; Pasupat & Liang, 2015). However, the performance of these methods still heavily depends on domain-specific grammar or pruning strategies to ease program search. For example, in a recent work on building semantic parsers for various domains, the authors hand-engineer a separate grammar for each domain (Wang et al., 2015). Recently, many neural network models have been developed for program induction (Andreas et al., 2016; Jia & Liang, 2016; Reed & Freitas, 2016; Zaremba et al., 2016; Yin et al., 2015), despite ∗Work done at Google Brain. 1 ar X iv :1 61 1. 08 94 5v 4 [ cs .C L ] 2 M ar 2 01 7 Published as a conference paper at ICLR 2017 Operations Count Select ArgMax ArgMin ... ... > < Print Neural Network What was the total number of goals scored in 2005 Row Selector Scalar Answer Lookup Answer timestep t",
"title": ""
}
] |
[
{
"docid": "3a9d639e87d6163c18dd52ef5225b1a6",
"text": "A variety of approaches have been recently proposed to automatically infer users’ personality from their user generated content in social media. Approaches differ in terms of the machine learning algorithms and the feature sets used, type of utilized footprint, and the social media environment used to collect the data. In this paper, we perform a comparative analysis of state-of-the-art computational personality recognition methods on a varied set of social media ground truth data from Facebook, Twitter and YouTube. We answer three questions: (1) Should personality prediction be treated as a multi-label prediction task (i.e., all personality traits of a given user are predicted at once), or should each trait be identified separately? (2) Which predictive features work well across different on-line environments? and (3) What is the decay in accuracy when porting models trained in one social media environment to another?",
"title": ""
},
{
"docid": "3e4a2d4564e9904b3d3b0457860da5cf",
"text": "Model-based, torque-level control can offer precision and speed advantages over velocity-level or position-level robot control. However, the dynamic parameters of the robot must be identified accurately. Several steps are involved in dynamic parameter identification, including modeling the system dynamics, joint position/torque data acquisition and filtering, experimental design, dynamic parameters estimation and validation. In this paper, we propose a novel, computationally efficient and intuitive optimality criterion to design the excitation trajectory for the robot to follow. Experiments are carried out for a 6 degree of freedom (DOF) Staubli TX-90 robot. We validate the dynamics parameters using torque prediction accuracy and compare to existing methods. The RMS errors of the prediction were small, and the computation time for the new, optimal objective function is an order of magnitude less than for existing approaches. & 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5a18a7f42ab40cd238c92e19d23e0550",
"text": "As memory scales down to smaller technology nodes, new failure mechanisms emerge that threaten its correct operation. If such failure mechanisms are not anticipated and corrected, they can not only degrade system reliability and availability but also, perhaps even more importantly, open up security vulnerabilities: a malicious attacker can exploit the exposed failure mechanism to take over the entire system. As such, new failure mechanisms in memory can become practical and significant threats to system security. In this work, we discuss the RowHammer problem in DRAM, which is a prime (and perhaps the first) example of how a circuit-level failure mechanism in DRAM can cause a practical and widespread system security vulnerability. RowHammer, as it is popularly referred to, is the phenomenon that repeatedly accessing a row in a modern DRAM chip causes bit flips in physically-adjacent rows at consistently predictable bit locations. It is caused by a hardware failure mechanism called DRAM disturbance errors, which is a manifestation of circuit-level cell-to-cell interference in a scaled memory technology. Researchers from Google Project Zero recently demonstrated that this hardware failure mechanism can be effectively exploited by user-level programs to gain kernel privileges on real systems. Several other recent works demonstrated other practical attacks exploiting RowHammer. These include remote takeover of a server vulnerable to RowHammer, takeover of a victim virtual machine by another virtual machine running on the same system, and takeover of a mobile device by a malicious user-level application that requires no permissions. We analyze the root causes of the RowHammer problem and examine various solutions. We also discuss what other vulnerabilities may be lurking in DRAM and other types of memories, e.g., NAND flash memory or Phase Change Memory, that can potentially threaten the foundations of secure systems, as the memory technologies scale to higher densities. We conclude by describing and advocating a principled approach to memory reliability and security research that can enable us to better anticipate and prevent such vulnerabilities.",
"title": ""
},
{
"docid": "c8be82cceec30a4aa72cc23b844546df",
"text": "SVM is extensively used in pattern recognition because of its capability to classify future unseen data and its’ good generalization performance. Several algorithms and models have been proposed for pattern recognition that uses SVM for classification. These models proved the efficiency of SVM in pattern recognition. Researchers have compared their results for SVM with other traditional empirical risk minimization techniques, such as Artificial Neural Network, Decision tree, etc. Comparison results show that SVM is superior to these techniques. Also, different variants of SVM are developed for enhancing the performance. In this paper, SVM is briefed and some of the pattern recognition applications of SVM are surveyed and briefly summarized. Keyword Hyperplane, Pattern Recognition, Quadratic Programming Problem, Support Vector Machines.",
"title": ""
},
{
"docid": "bb4541462806313d314d1de5882c6dde",
"text": "Over the past decade, the genus Aeromonas has undergone a number of significant changes of practical importance to clinical microbiologists and scientists alike. In parallel with the molecular revolution in microbiology, several new species have been identified on a phylogenetic basis, and the genome of the type species, A. hydrophila ATCC 7966, has been sequenced. In addition to established disease associations, Aeromonas has been shown to be a significant cause of infections associated with natural disasters (hurricanes, tsunamis, and earthquakes) and has been linked to emerging or new illnesses, including near-drowning events, prostatitis, and hemolytic-uremic syndrome. Despite these achievements, issues still remain regarding the role that Aeromonas plays in bacterial gastroenteritis, the extent to which species identification should be attempted in the clinical laboratory, and laboratory reporting of test results from contaminated body sites containing aeromonads. This article provides an extensive review of these topics, in addition to others, such as taxonomic issues, microbial pathogenicity, and antimicrobial resistance markers.",
"title": ""
},
{
"docid": "383b029f9c10186a163f48c01e1ef857",
"text": "Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses.",
"title": ""
},
{
"docid": "921dd57d86e56f286f203bb59df6cb23",
"text": "Prokaryotes acquire virus resistance by integrating short fragments of viral nucleic acid into clusters of regularly interspaced short palindromic repeats (CRISPRs). Here we show how virus-derived sequences contained in CRISPRs are used by CRISPR-associated (Cas) proteins from the host to mediate an antiviral response that counteracts infection. After transcription of the CRISPR, a complex of Cas proteins termed Cascade cleaves a CRISPR RNA precursor in each repeat and retains the cleavage products containing the virus-derived sequence. Assisted by the helicase Cas3, these mature CRISPR RNAs then serve as small guide RNAs that enable Cascade to interfere with virus proliferation. Our results demonstrate that the formation of mature guide RNAs by the CRISPR RNA endonuclease subunit of Cascade is a mechanistic requirement for antiviral defense.",
"title": ""
},
{
"docid": "d4f2cf2b793a83bfb488d58842db5ea5",
"text": "If your letter had praised everything of mine, I would not have been as pleased as I am by your attempt to disprove and reject certain points. I regard this as a mark of friendship and the other as one of adulation. But in return I ask you to listen with an open mind to my rebuttal. For what you say, if it were allowed to pass without any reply from me, would be too one-sided. It is always a source of satisfaction to come across an article where one is cited often, especially by two scholars who have contributed so much to advance the study of subjective well-being. Of course, my happiness would have been greater had the references been favorable, rather than unfavorable. (Hereafter I use happiness and satisfaction interchangeably.) I take it that the Hagerty-Veenhoven (hereafter H-V) article (2003) is a rebuttal of my 1995 paper (Easterlin 1995), because there is only one reference to time series results of studies by other scholars done in the almost 10-year period since publication of my article. Indeed, I believe I detect an echo of a similar critique by one of the authors of my 1974 article (cf. Easterlin 1974 and Veenhoven 1991; for comments on the latter, see Easterlin 2004 forthcoming). 3 Apparently the editor and referee(s) of this Journal also viewed the H-V paper as a comment on my 1995 article; otherwise it would be hard to explain the absence of the customary literature review and reconciliation of new and disparate results with those of prior work. It seems appropriate, therefore, to offer a few comments in response, especially since the conclusions of the H-V article will no doubt be cited often as substantially different from my own when, in fact, they are not. I will focus on the time series analysis in the section \" Descriptive Statistics of Happiness and Income \" (pp. 11-18) which I take to be the heart of their article. Until one is sure about the data, methodology, and results of the time series analysis, hypothesis testing is superfluous. 1 THE UNITED STATES I was quite surprised to find the one country whose data I thought I knew fairly well to be among the seven for whom a significant positive correlation is reported between happiness and income. I had found no significant relationship between happiness and time over a period in which GDP …",
"title": ""
},
{
"docid": "03e7d909183b66cc3b45eed6ac2de9dd",
"text": "A s the millennium draws to a close, it is apparent that one question towers above all others in the life sciences: How does the set of processes we call mind emerge from the activity of the organ we call brain? The question is hardly new. It has been formulated in one way or another for centuries. Once it became possible to pose the question and not be burned at the stake, it has been asked openly and insistently. Recently the question has preoccupied both the experts—neuroscientists, cognitive scientists and philosophers—and others who wonder about the origin of the mind, specifically the conscious mind. The question of consciousness now occupies center stage because biology in general and neuroscience in particular have been so remarkably successful at unraveling a great many of life’s secrets. More may have been learned about the brain and the mind in the 1990s—the so-called decade of the brain—than during the entire previous history of psychology and neuroscience. Elucidating the neurobiological basis of the conscious mind—a version of the classic mind-body problem—has become almost a residual challenge. Contemplation of the mind may induce timidity in the contemplator, especially when consciousness becomes the focus of the inquiry. Some thinkers, expert and amateur alike, believe the question may be unanswerable in principle. For others, the relentless and exponential increase in new knowledge may give rise to a vertiginous feeling that no problem can resist the assault of science if only the theory is right and the techniques are powerful enough. The debate is intriguing and even unexpected, as no comparable doubts have been raised over the likelihood of explaining how the brain is responsible for processes such as vision or memory, which are obvious components of the larger process of the conscious mind. The multimedia mind-show occurs constantly as the brain processes external and internal sensory events. As the brain answers the unasked question of who is experiencing the mindshow, the sense of self emerges. by Antonio R. Damasio",
"title": ""
},
{
"docid": "b4803364e973142a82e1b3e5bea21f24",
"text": "Word2Vec is a widely used algorithm for extracting low-dimensional vector representations of words. It generated considerable excitement in the machine learning and natural language processing (NLP) communities recently due to its exceptional performance in many NLP applications such as named entity recognition, sentiment analysis, machine translation and question answering. State-of-the-art algorithms including those by Mikolov et al. have been parallelized for multi-core CPU architectures but are based on vector-vector operations that are memory-bandwidth intensive and do not efficiently use computational resources. In this paper, we improve reuse of various data structures in the algorithm through the use of minibatching, hence allowing us to express the problem using matrix multiply operations. We also explore different techniques to distribute word2vec computation across nodes in a compute cluster, and demonstrate good strong scalability up to 32 nodes. In combination, these techniques allow us to scale up the computation near linearly across cores and nodes, and process hundreds of millions of words per second, which is the fastest word2vec implementation to the best of our knowledge.",
"title": ""
},
{
"docid": "1c60ddeb7e940992094cb8f3913e811a",
"text": "In this paper, we address the scene segmentation task by capturing rich contextual dependencies based on the selfattention mechanism. Unlike previous works that capture contexts by multi-scale features fusion, we propose a Dual Attention Networks (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of traditional dilated FCN, which model the semantic interdependencies in spatial and channel dimensions respectively. The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve feature representation which contributes to more precise segmentation results. We achieve new state-of-the-art segmentation performance on three challenging scene segmentation datasets, i.e., Cityscapes, PASCAL Context and COCO Stuff dataset. In particular, a Mean IoU score of 81.5% on Cityscapes test set is achieved without using coarse data. we make the code and trained models publicly available at https://github.com/junfu1115/DANet",
"title": ""
},
{
"docid": "004b9c1adb0e217c89b2266348d9bd88",
"text": "Branch-and-bound implicit enumeration algorithms for permutation problems (discrete optimization problems where the set of feasible solutions is the permutation group <italic>S<subscrpt>n</subscrpt></italic>) are characterized in terms of a sextuple (<italic>B<subscrpt>p</subscrpt> S,E,D,L,U</italic>), where (1) <italic>B<subscrpt>p</subscrpt></italic> is the branching rule for permutation problems, (2) <italic>S</italic> is the next node selection rule, (3) <italic>E</italic> is the set of node elimination rules, (4) <italic>D</italic> is the node dominance function, (5) <italic>L</italic> is the node lower-bound cost function, and (6) <italic>U</italic> is an upper-bound solution cost. A general algorithm based on this characterization is presented and the dependence of the computational requirements on the choice of algorithm parameters, <italic>S, E, D, L,</italic> and <italic>U</italic> is investigated theoretically. The results verify some intuitive notions but disprove others.",
"title": ""
},
{
"docid": "1cabb80b00c350367de61194f85fdb77",
"text": "Text summarization is the process of distilling the most important information from source/sources to produce an abridged version for a particular user/users and task/tasks. Automatically generated summaries can significantly reduce the information overload on intelligence analysts in their daily work. Moreover, automated text summarization can be utilized for automated classification and filtering of text documents, information search over the Internet, content recommendation systems, online social networks, etc. The increasing trend of cross-border globalization accompanied by the growing multi-linguality of the Internet requires text summarization techniques to work equally well on multiple languages. However, only some of the automated summarization methods proposed in the literature can be defined as “multi-lingual\" or “language-independent,\" as they are not based on any morphological analysis of the summarized text. In this chapter, we present a novel approach called MUSE (MUltilingual Sentence Extractor) to “language-independent\" extractive summarization, which represents the summary as a collection of the most informative fragments of the summarized document without any language-specific text analysis. We use a Genetic Algorithm to find the best linear combination of 31 sentence scoring metrics based on vector and graph representations of text documents. Our summarization methodology is evaluated on two monolingual corpora of English and Hebrew documents, and, in addition, on a bilingual collection of English and Hebrew documents. The results are compared to 15 statistical sentence scoring methods for extractive single-document summarization found in the literature and to several stateof-the-art summarization tools. These bilingual experiments show that the MUSE methodology significantly outperforms the existing approaches and tools in both languages.",
"title": ""
},
{
"docid": "5f5c78b74e1e576dd48690b903bf4de4",
"text": "Automatic facial expression recognition has been an active topic in computer science for over two decades, in particular facial action coding system action unit (AU) detection and classification of a number of discrete emotion states from facial expressive imagery. Standardization and comparability have received some attention; for instance, there exist a number of commonly used facial expression databases. However, lack of a commonly accepted evaluation protocol and, typically, lack of sufficient details needed to reproduce the reported individual results make it difficult to compare systems. This, in turn, hinders the progress of the field. A periodical challenge in facial expression recognition would allow such a comparison on a level playing field. It would provide an insight on how far the field has come and would allow researchers to identify new goals, challenges, and targets. This paper presents a meta-analysis of the first such challenge in automatic recognition of facial expressions, held during the IEEE conference on Face and Gesture Recognition 2011. It details the challenge data, evaluation protocol, and the results attained in two subchallenges: AU detection and classification of facial expression imagery in terms of a number of discrete emotion categories. We also summarize the lessons learned and reflect on the future of the field of facial expression recognition in general and on possible future challenges in particular.",
"title": ""
},
{
"docid": "ab0ab100e1de8ba417f029a76a4e1f2a",
"text": "In this paper, we describe a mapping language for converting data contained in arbitrary spreadsheets into the Web Ontology Language (OWL). The developed language overcomes shortcomings with existing spreadsheet mapping techniques, including their restriction to well-formed spreadsheets reminiscent of a single relational database table and verbose syntax for expressing mapping rules when transforming spreadsheet contents into OWL. We additionally present an implementation of the mapping approach, Mapping Master, which is available as a plug-in for the Protégé ontology editor.",
"title": ""
},
{
"docid": "88ab27740e5c957993fd70f0bf6ac841",
"text": "We examine the problem of discrete stock price prediction using a synthesis of linguistic, financial and statistical techniques to create the Arizona Financial Text System (AZFinText). The research within this paper seeks to contribute to the AZFinText system by comparing AZFinText’s predictions against existing quantitative funds and human stock pricing experts. We approach this line of research using textual representation and statistical machine learning methods on financial news articles partitioned by similar industry and sector groupings. Through our research, we discovered that stocks partitioned by Sectors were most predictable in measures of Closeness, Mean Squared Error (MSE) score of 0.1954, predicted Directional Accuracy of 71.18% and a Simulated Trading return of 8.50% (compared to 5.62% for the S&P 500 index). In direct comparisons to existing market experts and quantitative mutual funds, our system’s trading return of 8.50% outperformed well-known trading experts. Our system also performed well against the top 10 quantitative mutual funds of 2005, where our system would have placed fifth. When comparing AZFinText against only those quantitative funds that monitor the same securities, AZFinText had a 2% higher return than the best performing quant fund.",
"title": ""
},
{
"docid": "7b062e53ee71abeac2f70fe0c686ab26",
"text": "Much research has been done regarding how to visualize and interact with observations and attributes of high-dimensional data for exploratory data analysis. From the analyst's perceptual and cognitive perspective, current visualization approaches typically treat the observations of the high-dimensional dataset very differently from the attributes. Often, the attributes are treated as inputs (e.g., sliders), and observations as outputs (e.g., projection plots), thus emphasizing investigation of the observations. However, there are many cases in which analysts wish to investigate both the observations and the attributes of the dataset, suggesting a symmetry between how analysts think about attributes and observations. To address this, we define SIRIUS (Symmetric Interactive Representations In a Unified System), a symmetric, dual projection technique to support exploratory data analysis of high-dimensional data. We provide an example implementation of SIRIUS and demonstrate how this symmetry affords additional insights.",
"title": ""
},
{
"docid": "3cb25b6438593a36c6867a2edbbd6136",
"text": "One of the most significant challenges of human-robot interaction research is designing systems which foster an appropriate level of trust in their users: in order to use a robot effectively and safely, a user must place neither too little nor too much trust in the system. In order to better understand the factors which influence trust in a robot, we present a survey of prior work on trust in automated systems. We also discuss issues specific to robotics which pose challenges not addressed in the automation literature, particularly related to reliability, capability, and adjustable autonomy. We conclude with the results of a preliminary web-based questionnaire which illustrate some of the biases which autonomous robots may need to overcome in order to promote trust in users.",
"title": ""
},
{
"docid": "8a3e49797223800cb644fe2b819f9950",
"text": "In this paper, we present machine learning approaches for characterizing and forecasting the short-term demand for on-demand ride-hailing services. We propose the spatio-temporal estimation of the demand that is a function of variable effects related to traffic, pricing and weather conditions. With respect to the methodology, a single decision tree, bootstrap-aggregated (bagged) decision trees, random forest, boosted decision trees, and artificial neural network for regression have been adapted and systematically compared using various statistics, e.g. R-square, Root Mean Square Error (RMSE), and slope. To better assess the quality of the models, they have been tested on a real case study using the data of DiDi Chuxing, the main on-demand ride-hailing service provider in China. In the current study, 199,584 time-slots describing the spatio-temporal ride-hailing demand has been extracted with an aggregated-time interval of 10 mins. All the methods are trained and validated on the basis of two independent samples from this dataset. The results revealed that boosted decision trees provide the best prediction accuracy (RMSE=16.41), while avoiding the risk of over-fitting, followed by artificial neural network (20.09), random forest (23.50), bagged decision trees (24.29) and single decision tree (33.55). ∗Currently under review for publication †Local Environment Management & Analysis (LEMA), Department of Urban and Environmental Engineering (UEE), University of Liège, Allée de la Découverte 9, Quartier Polytech 1, Liège, Belgium, Email: ismail.saadi@ulg.ac.be ‡Laboratory of Innovations in Transportation (LITrans), Department of Civil, Geotechnical, and Mining Engineering, Polytechnique Montréal, Montréal, Canada, Email: melvin.wong@polymtl.ca §Laboratory of Innovations in Transportation (LITrans), Department of Civil, Geotechnical, and Mining Engineering, Polytechnique Montréal, Montréal, Canada, Email: bilal.farooq@polymtl.ca ¶Local Environment Management & Analysis (LEMA), Department of Urban and Environmental Engineering (UEE), University of Liège, Allée de la Découverte 9, Quartier Polytech 1, Liège, Belgium ‖Local Environment Management & Analysis (LEMA), Department of Urban and Environmental Engineering (UEE), University of Liège, Allée de la Découverte 9, Quartier Polytech 1, Liège, Belgium ar X iv :1 70 3. 02 43 3v 1 [ cs .L G ] 7 M ar 2 01 7",
"title": ""
},
{
"docid": "6fd89ac5ec4cfd0f6c28e01c8d94ff7a",
"text": "This paper describes the development of a student attendance system based on Radio Frequency Identification (RFID) technology. The existing conventional attendance system requires students to manually sign the attendance sheet every time they attend a class. As common as it seems, such system lacks of automation, where a number of problems may arise. This include the time unnecessarily consumed by the students to find and sign their name on the attendance sheet, some students may mistakenly or purposely signed another student's name and the attendance sheet may got lost. Having a system that can automatically capture student's attendance by flashing their student card at the RFID reader can really save all the mentioned troubles. This is the main motive of our system and in addition having an online system accessible anywhere and anytime can greatly help the lecturers to keep track of their students' attendance. Looking at a bigger picture, deploying the system throughout the academic faculty will benefit the academic management as students' attendance to classes is one of the key factor in improving the quality of teaching and monitoring their students' performance. Besides, this system provides valuable online facilities for easy record maintenance offered not only to lecturers but also to related academic management staffs especially for the purpose of students' progress monitoring.",
"title": ""
}
] |
scidocsrr
|
7ae82c8c9a24e86e496993d96498043d
|
A 80 nW, 32 kHz charge-pump based ultra low power oscillator with temperature compensation
|
[
{
"docid": "8b33ce7ccfdd87dc9f1da56157b7331f",
"text": "This work presents an ultra-low power oscillator designed for wake-up timers in compact wireless sensors. A constant charge subtraction scheme removes continuous comparator delay from the oscillation period, which is the source of temperature dependence in conventional RC relaxation oscillators. This relaxes comparator design constraints, enabling low power operation. In 0.18μm CMOS, the oscillator consumes 5.8nW at room temperature with temperature stability of 45ppm/°C (-10°C to 90°C) and 1%V line sensitivity.",
"title": ""
},
{
"docid": "7579ea317e216e80bcd08eabb4615711",
"text": "This paper presents an ultra low power clock source using a 1μW temperature compensated on-chip digitally controlled oscillator (Osc<sub>CMP</sub>) and a 100nW uncompensated oscillator (Osc<sub>UCMP</sub>) with respective temperature stabilities of 5ppm/°C and 1.67%/°C. A fast locking circuit re-locks Osc<sub>UCMP</sub> to Osc<sub>CMP</sub> often enough to achieve a high effective temperature stability. Measurements of a 130nm CMOS chip show that this combination gives a stability of 5ppm/°C from 20°C to 40°C (14ppm/°C from 20°C to 70°C) at 150nW if temperature changes by 1°C or less every second. This result is 7X lower power than typical XTALs and 6X more stable than prior on-chip solutions.",
"title": ""
}
] |
[
{
"docid": "14049dd7ee7a07107702c531fec4ff61",
"text": "Reducing errors and improving quality are an integral part of Pathology and Laboratory Medicine. The rate of errors is reviewed for the pre-analytical, analytical, and post-analytical phases for a specimen. The quality systems in place in pathology today are identified and compared with benchmarks for quality. The types and frequency of errors and quality systems are reviewed for surgical pathology, cytopathology, clinical chemistry, hematology, microbiology, molecular biology, and transfusion medicine. Seven recommendations are made to reduce errors in future for Pathology and Laboratory Medicine.",
"title": ""
},
{
"docid": "10fff590f9c8e99ebfd1b4b4e453241f",
"text": "Object-oriented programming has many advantages over conventional procedural programming languages for constructing highly flexible, adaptable, and extensible systems. Therefore a transformation of procedural programs to object-oriented architectures becomes an important process to enhance the reuse of procedural programs. Moreover, it would be useful to assist by automatic methods the software developers in transforming procedural code into an equivalent object-oriented one. In this paper we aim at introducing an agglomerative hierarchical clustering algorithm that can be used for assisting software developers in the process of transforming procedural code into an object-oriented architecture. We also provide a code example showing how our approach works, emphasizing, this way, the potential of our proposal.",
"title": ""
},
{
"docid": "1667c7e872bac649051bb45fc85e9921",
"text": "Mobile devices are becoming increasingly sophisticated and now incorporate many diverse and powerful sensors. The latest generation of smart phones is especially laden with sensors, including GPS sensors, vision sensors (cameras), audio sensors (microphones), light sensors, temperature sensors, direction sensors (compasses), and acceleration sensors. In this paper we describe and evaluate a system that uses phone-based acceleration sensors, called accelerometers, to identify and authenticate cell phone users. This form of behavioral biométrie identification is possible because a person's movements form a unique signature and this is reflected in the accelerometer data that they generate. To implement our system we collected accelerometer data from thirty-six users as they performed normal daily activities such as walking, jogging, and climbing stairs, aggregated this time series data into examples, and then applied standard classification algorithms to the resulting data to generate predictive models. These models either predict the identity of the individual from the set of thirty-six users, a task we call user identification, or predict whether (or not) the user is a specific user, a task we call user authentication. This work is notable because it enables identification and authentication to occur unobtrusively, without the users taking any extra actions-all they need to do is carry their cell phones. There are many uses for this work. For example, in environments where sharing may take place, our work can be used to automatically customize a mobile device to a user. It can also be used to provide device security by enabling usage for only specific users and can provide an extra level of identity verification.",
"title": ""
},
{
"docid": "344be59c5bb605dec77e4d7bd105d899",
"text": "Recently, style transfer has received a lot of attention. While much of this research has aimed at speeding up processing, the approaches are still lacking from a principled, art historical standpoint: a style is more than just a single image or an artist, but previous work is limited to only a single instance of a style or shows no benefit from more images. Moreover, previous work has relied on a direct comparison of art in the domain of RGB images or on CNNs pre-trained on ImageNet, which requires millions of labeled object bounding boxes and can introduce an extra bias, since it has been assembled without artistic consideration. To circumvent these issues, we propose a style-aware content loss, which is trained jointly with a deep encoder-decoder network for real-time, high-resolution stylization of images and videos. We propose a quantitative measure for evaluating the quality of a stylized image and also have art historians rank patches from our approach against those from previous work. These and our qualitative results ranging from small image patches to megapixel stylistic images and videos show that our approach better captures the subtle nature in which a style affects content.",
"title": ""
},
{
"docid": "8bb30efa3f14fa0860d1e5bc1265c988",
"text": "The introduction of microgrids in distribution networks based on power electronics facilitates the use of renewable energy resources, distributed generation (DG) and storage systems while improving the quality of electric power and reducing losses thus increasing the performance and reliability of the electrical system, opens new horizons for microgrid applications integrated into electrical power systems. The hierarchical control structure consists of primary, secondary, and tertiary levels for microgrids that mimic the behavior of the mains grid is reviewed. The main objective of this paper is to give a description of state of the art for the distributed power generation systems (DPGS) based on renewable energy and explores the power converter connected in parallel to the grid which are distinguished by their contribution to the formation of the grid voltage and frequency and are accordingly classified in three classes. This analysis is extended focusing mainly on the three classes of configurations grid-forming, grid-feeding, and gridsupporting. The paper ends up with an overview and a discussion of the control structures and strategies to control distribution power generation system (DPGS) units connected to the network. Keywords— Distributed power generation system (DPGS); hierarchical control; grid-forming; grid-feeding; grid-supporting. Nomenclature Symbols id − iq Vd − Vq P Q ω E f U",
"title": ""
},
{
"docid": "ba4faa0390c2c75aab79822a1e523e71",
"text": "The number of linked data sources and the size of the linked open data graph keep growing every day. As a consequence, semantic RDF services are more and more confronted to various “big data” problems. Query processing is one of them and needs to be efficiently addressed with executions over scalable, highly available and fault tolerant frameworks. Data management systems requiring these properties are rarely built from scratch but are rather designed on top of an existing cluster computing engine. In this work, we consider the processing of SPARQL queries with Apache Spark. We propose and compare five different query processing approaches based on different join execution models and Spark components. A detailed experimentation, on real-world and synthetic data sets, emphasizes that two approaches tailored for the RDF data model outperform the other ones on all major query shapes, i.e., star, snowflake, chain and hybrid.",
"title": ""
},
{
"docid": "5a248466c2e82b8453baa483a05bc25b",
"text": "Early severe stress and maltreatment produces a cascade of neurobiological events that have the potential to cause enduring changes in brain development. These changes occur on multiple levels, from neurohumoral (especially the hypothalamic-pituitary-adrenal [HPA] axis) to structural and functional. The major structural consequences of early stress include reduced size of the mid-portions of the corpus callosum and attenuated development of the left neocortex, hippocampus, and amygdala. Major functional consequences include increased electrical irritability in limbic structures and reduced functional activity of the cerebellar vermis. There are also gender differences in vulnerability and functional consequences. The neurobiological sequelae of early stress and maltreatment may play a significant role in the emergence of psychiatric disorders during development.",
"title": ""
},
{
"docid": "09380650b0af3851e19f18de4a2eacb2",
"text": "This paper presents a novel self-assembly modular robot (Sambot) that also shares characteristics with self-reconfigurable and self-assembly and swarm robots. Each Sambot can move autonomously and connect with the others. Multiple Sambot can be self-assembled to form a robotic structure, which can be reconfigured into different configurable robots and can locomote. A novel mechanical design is described to realize function of autonomous motion and docking. Introducing embedded mechatronics integrated technology, whole actuators, sensors, microprocessors, power and communication unit are embedded in the module. The Sambot is compact and flexble, the overall size is 80×80×102mm. The preliminary self-assembly and self-reconfiguration of Sambot is discussed, and several possible configurations consisting of multiple Sambot are designed in simulation environment. At last, the experiment of self-assembly and self-reconfiguration and locomotion of multiple Sambot has been implemented.",
"title": ""
},
{
"docid": "50df49f3c9de66798f89fdeab9d2ae85",
"text": "Predictive modeling is increasingly being employed to assist human decision-makers. One purported advantage of replacing or augmenting human judgment with computer models in high stakes settings– such as sentencing, hiring, policing, college admissions, and parole decisions– is the perceived “neutrality” of computers. It is argued that because computer models do not hold personal prejudice, the predictions they produce will be equally free from prejudice. There is growing recognition that employing algorithms does not remove the potential for bias, and can even amplify it if the training data were generated by a process that is itself biased. In this paper, we provide a probabilistic notion of algorithmic bias. We propose a method to eliminate bias from predictive models by removing all information regarding protected variables from the data to which the models will ultimately be trained. Unlike previous work in this area, our procedure accommodates data on any measurement scale. Motivated by models currently in use in the criminal justice system that inform decisions on pre-trial release and parole, we apply our proposed method to a dataset on the criminal histories of individuals at the time of sentencing to produce “race-neutral” predictions of re-arrest. In the process, we demonstrate that a common approach to creating “race-neutral” models– omitting race as a covariate– still results in racially disparate predictions. We then demonstrate that the application of our proposed method to these data removes racial disparities from predictions with minimal impact on predictive accuracy.",
"title": ""
},
{
"docid": "f119b00fd10eeb9e4bfa0441a5534933",
"text": "Network intrusion detection systems (NIDS) are essential security building-blocks for today's organizations to ensure safe and trusted communication of information. In this paper, we study the feasibility of off-line deep learning based NIDSes by constructing the detection engine with multiple advanced deep learning models and conducting a quantitative and comparative evaluation of those models. We first introduce the general deep learning methodology and its potential implication on the network intrusion detection problem. We then review multiple machine learning solutions to two network intrusion detection tasks (NSL-KDD and UNSW-NB15 datasets). We develop a TensorFlow-based deep learning library, called NetLearner, and implement a handful of cutting-edge deep learning models for NIDS. Finally, we conduct a quantitative and comparative performance evaluation of those models using NetLearner.",
"title": ""
},
{
"docid": "30e89edb65cbf54b27115c037ee9c322",
"text": "AbstructIGBT’s are available with short-circuit withstand times approaching those of bipolar transistors. These IGBT’s can therefore be protected by the same relatively slow-acting circuitry. The more efficient IGBT’s, however, have lower shortcircuit withstand times. While protection of these types of IGBT’s is not difficult, it does require a reassessment of the traditional protection methods used for the bipolar transistors. An in-depth discussion on the behavior of IGBT’s under different short-circuit conditions is carried out and the effects of various parameters on permissible short-circuit time are analyzed. The paper also rethinks the problem of providing short-circuit protection in relation to the special characteristics of the most efficient IGBT’s. The pros and cons of some of the existing protection circuits are discussed and, based on the recommendations, a protection scheme is implemented to demonstrate that reliable short-circuit protection of these types of IGBT’s can be achieved without difficulty in a PWM motor-drive application. volts",
"title": ""
},
{
"docid": "bf9910e87c2294e307f142e0be4ed4f6",
"text": "The rapidly developing cloud computing and virtualization techniques provide mobile devices with battery energy saving opportunities by allowing them to offload computation and execute applications remotely. A mobile device should judiciously decide whether to offload computation and which portion of application should be offloaded to the cloud. In this paper, we consider a mobile cloud computing (MCC) interaction system consisting of multiple mobile devices and the cloud computing facilities. We provide a nested two stage game formulation for the MCC interaction system. In the first stage, each mobile device determines the portion of its service requests for remote processing in the cloud. In the second stage, the cloud computing facilities allocate a portion of its total resources for service request processing depending on the request arrival rate from all the mobile devices. The objective of each mobile device is to minimize its power consumption as well as the service request response time. The objective of the cloud computing controller is to maximize its own profit. Based on the backward induction principle, we derive the optimal or near-optimal strategy for all the mobile devices as well as the cloud computing controller in the nested two stage game using convex optimization technique. Experimental results demonstrate the effectiveness of the proposed nested two stage game-based optimization framework on the MCC interaction system. The mobile devices can achieve simultaneous reduction in average power consumption and average service request response time, by 21.8% and 31.9%, respectively, compared with baseline methods.",
"title": ""
},
{
"docid": "914d17433df678e9ace1c9edd1c968d3",
"text": "We propose a Deep Learning approach to the visual question answering task, where machines answer to questions about real-world images. By combining latest advances in image representation and natural language processing, we propose Ask Your Neurons, a scalable, jointly trained, end-to-end formulation to this problem. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language inputs (image and question). We evaluate our approaches on the DAQUAR as well as the VQA dataset where we also report various baselines, including an analysis how much information is contained in the language part only. To study human consensus, we propose two novel metrics and collect additional answers which extend the original DAQUAR dataset to DAQUAR-Consensus. Finally, we evaluate a rich set of design choices how to encode, combine and decode information in our proposed Deep Learning formulation.",
"title": ""
},
{
"docid": "a1f05b8954434a782f9be3d9cd10bb8b",
"text": "Because of their avid use of new media and their increased spending power, children and teens have become primary targets of a new \"media and marketing ecosystem.\" The digital marketplace is undergoing rapid innovation as new technologies and software applications continue to reshape the media landscape and user behaviors. The advertising industry, in many instances led by food and beverage marketers, is purposefully exploiting the special relationship that youth have with new media, as online marketing campaigns create unprecedented intimacies between adolescents and the brands and products that now literally surround them.",
"title": ""
},
{
"docid": "df35b679204e0729266a1076685600a1",
"text": "A new innovations state space modeling framework, incorporating Box-Cox transformations, Fourier series with time varying coefficients and ARMA error correction, is introduced for forecasting complex seasonal time series that cannot be handled using existing forecasting models. Such complex time series include time series with multiple seasonal periods, high frequency seasonality, non-integer seasonality and dual-calendar effects. Our new modelling framework provides an alternative to existing exponential smoothing models, and is shown to have many advantages. The methods for initialization and estimation, including likelihood evaluation, are presented, and analytical expressions for point forecasts and interval predictions under the assumption of Gaussian errors are derived, leading to a simple, comprehensible approach to forecasting complex seasonal time series. Our trigonometric formulation is also presented as a means of decomposing complex seasonal time series, which cannot be decomposed using any of the existing decomposition methods. The approach is useful in a broad range of applications, and we illustrate its versatility in three empirical studies where it demonstrates excellent forecasting performance over a range of prediction horizons. In addition, we show that our trigonometric decomposition leads to the identification and extraction of seasonal components, which are otherwise not apparent in the time series plot itself.",
"title": ""
},
{
"docid": "e13d935c4950323a589dce7fd5bce067",
"text": "Worker reliability is a longstanding issue in crowdsourcing, and the automatic discovery of high quality workers is an important practical problem. Most previous work on this problem mainly focuses on estimating the quality of each individual worker jointly with the true answer of each task. However, in practice, for some tasks, worker quality could be associated with some explicit characteristics of the worker, such as education level, major and age. So the following question arises: how do we automatically discover related worker attributes for a given task, and further utilize the findings to improve data quality? In this paper, we propose a general crowd targeting framework that can automatically discover, for a given task, if any group of workers based on their attributes have higher quality on average; and target such groups, if they exist, for future work on the same task. Our crowd targeting framework is complementary to traditional worker quality estimation approaches. Furthermore, an advantage of our framework is that it is more budget efficient because we are able to target potentially good workers before they actually do the task. Experiments on real datasets show that the accuracy of final prediction can be improved significantly for the same budget (or even less budget in some cases). Our framework can be applied to many real word tasks and can be easily integrated in current crowdsourcing platforms.",
"title": ""
},
{
"docid": "ba8d73938ea51f1b41add8c572c1667b",
"text": "Traditionally, when storage systems employ erasure codes, they are designed to tolerate the failures of entire disks. However, the most common types of failures are latent sector failures, which only affect individual disk sectors, and block failures which arise through wear on SSD’s. This paper introduces SD codes, which are designed to tolerate combinations of disk and sector failures. As such, they consume far less storage resources than traditional erasure codes. We specify the codes with enough detail for the storage practitioner to employ them, discuss their practical properties, and detail an open-source implementation.",
"title": ""
},
{
"docid": "152182336e620ee94f24e3865b7b377f",
"text": "In Theory III we characterize with a mix of theory and experiments the generalization properties of Stochastic Gradient Descent in overparametrized deep convolutional networks. We show that Stochastic Gradient Descent (SGD) selects with high probability solutions that 1) have zero (or small) empirical error, 2) are degenerate as shown in Theory II and 3) have maximum generalization. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF 123 1216. H.M. is supported in part by ARO Grant W911NF-15-10385.",
"title": ""
},
{
"docid": "0c025ec05a1f98d71c9db5bfded0a607",
"text": "Many organizations, such as banks, airlines, telecommunications companies, and police departments, routinely use queueing models to help determine capacity levels needed to respond to experienced demands in a timely fashion. Though queueing analysis has been used in hospitals and other healthcare settings, its use in this sector is not widespread. Yet, given the pervasiveness of delays in healthcare and the fact that many healthcare facilities are trying to meet increasing demands with tightly constrained resources, queueing models can be very useful in developing more effective policies for bed allocation and staffing, and in identifying other opportunities for improving service. Queueing analysis is also a key tool in estimating capacity requirements for possible future scenarios, including demand surges due to new diseases or acts of terrorism. This chapter describes basic queueing models as well as some simple modifications and extensions that are particularly useful in the healthcare setting, and give examples of their use. The critical issue of data requirements is also be discussed as well as model choice, modelbuilding and the interpretation and use of results.",
"title": ""
},
{
"docid": "0d1da055e444a90ec298a2926de9fe7b",
"text": "Cryptocurrencies have experienced recent surges in interest and price. It has been discovered that there are time intervals where cryptocurrency prices and certain online and social media factors appear related. In addition it has been noted that cryptocurrencies are prone to experience intervals of bubble-like price growth. The hypothesis investigated here is that relationships between online factors and price are dependent on market regime. In this paper, wavelet coherence is used to study co-movement between a cryptocurrency price and its related factors, for a number of examples. This is used alongside a well-known test for financial asset bubbles to explore whether relationships change dependent on regime. The primary finding of this work is that medium-term positive correlations between online factors and price strengthen significantly during bubble-like regimes of the price series; this explains why these relationships have previously been seen to appear and disappear over time. A secondary finding is that short-term relationships between the chosen factors and price appear to be caused by particular market events (such as hacks / security breaches), and are not consistent from one time interval to another in the effect of the factor upon the price. In addition, for the first time, wavelet coherence is used to explore the relationships between different cryptocurrencies.",
"title": ""
}
] |
scidocsrr
|
97e1a8afed4a442f63b7d9f993806e68
|
Semi Supervised Logistic Regression
|
[
{
"docid": "2a1920f22f22dcf473612a6d35cf0132",
"text": "We address statistical classifier design given a mixed training set consisting of a small labelled feature set and a (generally larger) set of unlabelled features. This situation arises, e.g., for medical images, where although training features may be plentiful, expensive expertise is required to extract their class labels. We propose a classifier structure and learning algorithm that make effective use of unlabelled data to improve performance. The learning is based on maximization of the total data likelihood, i.e. over both the labelled and unlabelled data subsets. Two distinct EM learning algorithms are proposed, differing in the EM formalism applied for unlabelled data. The classifier, based on a joint probability model for features and labels, is a \"mixture of experts\" structure that is equivalent to the radial basis function (RBF) classifier, but unlike RBFs, is amenable to likelihood-based training. The scope of application for the new method is greatly extended by the observation that test data, or any new data to classify, is in fact additional, unlabelled data thus, a combined learning/classification operation much akin to what is done in image segmentation can be invoked whenever there is new data to classify. Experiments with data sets from the UC Irvine database demonstrate that the new learning algorithms and structure achieve substantial performance gains over alternative approaches.",
"title": ""
},
{
"docid": "3ac2f2916614a4e8f6afa1c31d9f704d",
"text": "This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However these assumptions are often violated in practice, and poor performance can result. We present two extensions to the algorithm that improve classification accuracy under these conditions: (1) a weighting factor to modulate the contribution of the unlabeled data, and (2) the use of multiple mixture components per class. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 30%.",
"title": ""
}
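The two abstracts above describe the same core loop: train on the labeled documents, probabilistically label the unlabeled pool, retrain on everything, and iterate. The sketch below is an illustration only, not the authors' code; it uses scikit-learn's MultinomialNB as a stand-in classifier and a hard-EM variant in which pseudo-labels are weighted by their posterior confidence (the paper keeps fully probabilistic labels), and all function and variable names are hypothetical.

```python
# Minimal EM-with-unlabeled-data sketch (after the Nigam et al. abstract above).
import numpy as np
from sklearn.naive_bayes import MultinomialNB

def em_naive_bayes(X_lab, y_lab, X_unlab, weight=0.3, iters=10):
    """X_* are nonnegative count/tf matrices (dense arrays here for simplicity)."""
    clf = MultinomialNB()
    clf.fit(X_lab, y_lab)                           # step 0: labeled documents only
    for _ in range(iters):
        proba = clf.predict_proba(X_unlab)          # E-step: probabilistic labels
        pseudo = clf.classes_[proba.argmax(axis=1)]
        conf = proba.max(axis=1)
        X_all = np.vstack([X_lab, X_unlab])
        y_all = np.concatenate([y_lab, pseudo])
        # down-weight the unlabeled part so it cannot swamp the labeled set
        w_all = np.concatenate([np.ones(len(y_lab)), weight * conf])
        clf = MultinomialNB()
        clf.fit(X_all, y_all, sample_weight=w_all)  # M-step: retrain on everything
    return clf
```

The `weight` parameter plays the role of the modulating factor for the unlabeled data mentioned in the abstract; its value here is purely illustrative.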
] |
[
{
"docid": "5ed24bc652901423b5f2922c41b2702b",
"text": "We put forward a new framework that makes it possible to re-write or compress the content of any number of blocks in decentralized services exploiting the blockchain technology. As we argue, there are several reasons to prefer an editable blockchain, spanning from the necessity to remove inappropriate content and the possibility to support applications requiring re-writable storage, to \"the right to be forgotten.\" Our approach generically leverages so-called chameleon hash functions (Krawczyk and Rabin, NDSS '00), which allow determining hash collisions efficiently, given a secret trapdoor information. We detail how to integrate a chameleon hash function in virtually any blockchain-based technology, for both cases where the power of redacting the blockchain content is in the hands of a single trusted entity and where such a capability is distributed among several distrustful parties (as is the case with Bitcoin). We also report on a proof-of-concept implementation of a redactable blockchain, building on top of Nakamoto's Bitcoin core. The prototype only requires minimal changes to the way current client software interprets the information stored in the blockchain and to the current blockchain, block, or transaction structures. Moreover, our experiments show that the overhead imposed by a redactable blockchain is small compared to the case of an immutable one.",
"title": ""
},
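As a rough illustration of the trapdoor-collision property this abstract relies on, here is a toy discrete-log chameleon hash in the spirit of Krawczyk and Rabin. The parameters are deliberately tiny and hopelessly insecure, and the code sketches only the algebra, not the paper's integration into Bitcoin Core.

```python
# Toy chameleon hash: H(m, r) = g^m * h^r mod p with h = g^x.  Whoever holds the
# trapdoor x can open the same hash to a different message, which is what lets a
# redactable blockchain rewrite a block without breaking the hash chain.
import hashlib, secrets

p, q, g = 1019, 509, 4            # safe prime p = 2q + 1; g = 2^2 generates the order-q subgroup

def digest(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1              # secret trapdoor
    return x, pow(g, x, p)                        # (trapdoor, public key h)

def chash(h: int, msg: bytes, r: int) -> int:
    return (pow(g, digest(msg), p) * pow(h, r, p)) % p

def collide(x: int, msg: bytes, r: int, new_msg: bytes) -> int:
    # H is g^(m + x*r) mod p, so solve m + x*r = m' + x*r' (mod q) for r'
    return (r + (digest(msg) - digest(new_msg)) * pow(x, -1, q)) % q

x, h = keygen()
r = secrets.randbelow(q)
original = b"block: pay Alice 5"
redacted = b"block: [content removed]"
r2 = collide(x, original, r, redacted)
assert chash(h, original, r) == chash(h, redacted, r2)
```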
{
"docid": "eff844ffdf2ef5408e23d98564d540f0",
"text": "The motions of wheeled mobile robots are largely governed by contact forces between the wheels and the terrain. Inasmuch as future wheel-terrain interactions are unpredictable and unobservable, high performance autonomous vehicles must ultimately learn the terrain by feel and extrapolate, just as humans do. We present an approach to the automatic calibration of dynamic models of arbitrary wheeled mobile robots on arbitrary terrain. Inputs beyond our control (disturbances) are assumed to be responsible for observed differences between what the vehicle was initially predicted to do and what it was subsequently observed to do. In departure from much previous work, and in order to directly support adaptive and predictive controllers, we concentrate on the problem of predicting candidate trajectories rather than measuring the current slip. The approach linearizes the nominal vehicle model and then calibrates the perturbative dynamics to explain the observed prediction residuals. Both systematic and stochastic disturbances are used, and we model these disturbances as functions over the terrain, the velocities, and the applied inertial and gravitational forces. In this way, we produce a model which can be used to predict behavior across all of state space for arbitrary terrain geometry. Results demonstrate that the approach converges quickly and produces marked improvements in the prediction of trajectories for multiple vehicle classes throughout the performance envelope of the platform, including during aggressive maneuvering.",
"title": ""
},
{
"docid": "d81282c41c609b980442f481d0a7fa3d",
"text": "Some of the recent applications in the field of the power supplies use multiphase converters to achieve fast dynamic response, smaller input/output filters, or better packaging. Typically, these converters have several paralleled power stages, with a current loop in each phase and a single voltage loop. The presence of the current loops avoids current imbalance among phases. The purpose of this paper is to demonstrate that, in CCM, with a proper design, there is an intrinsic mechanism of self-balance that reduces the current imbalance. Thus, in the buck converter, if natural zero-voltage switching (ZVS) is achieved in both transitions, the instantaneous inductor current compensates partially the different DC currents through the phases. The need for using n current loops will be finally determined by the application but not by the converter itself. Using the buck converter as a base, a multiphase converter has been developed. Several tests have been carried out in the laboratory and the results show clearly that, when the conditions are met, the phase currents are very well balanced even during transient conditions.",
"title": ""
},
{
"docid": "df92fe7057593a9312de91c06e1525ca",
"text": "The Formal Theory of Fun and Creativity (1990–2010) [Schmidhuber, J.: Formal theory of creativity, fun, and intrinsic motivation (1990–2010). IEEE Trans. Auton. Mental Dev. 2(3), 230–247 (2010b)] describes principles of a curious and creative agent that never stops generating nontrivial and novel and surprising tasks and data. Two modules are needed: a data encoder and a data creator. The former encodes the growing history of sensory data as the agent is interacting with its environment; the latter executes actions shaping the history. Both learn. The encoder continually tries to encode the created data more efficiently, by discovering new regularities in it. Its learning progress is the wow-effect or fun or intrinsic reward of the creator, which maximizes future expected reward, being motivated to invent skills leading to interesting data that the encoder does not yet know but can easily learn with little computational effort. I have argued that this simple formal principle explains science and art and music and humor. Note: This overview heavily draws on previous publications since 1990, especially Schmidhuber (2010b), parts of which are reprinted with friendly permission by IEEE.",
"title": ""
},
{
"docid": "e7155ddcd4b47466b97fd2967501ccd3",
"text": "We demonstrate a use of deep neural networks (DNN) for OSNR monitoring with minimum prior knowledge. By using 5-layers DNN trained with 400,000 samples, the DNN successfully estimates OSNR in a 16-GBd DP-QPSK system.",
"title": ""
},
{
"docid": "2af4d946d00b37ec0f6d37372c85044b",
"text": "Training of discrete latent variable models remains challenging because passing gradient information through discrete units is difficult. We propose a new class of smoothing transformations based on a mixture of two overlapping distributions, and show that the proposed transformation can be used for training binary latent models with either directed or undirected priors. We derive a new variational bound to efficiently train with Boltzmann machine priors. Using this bound, we develop DVAE++, a generative model with a global discrete prior and a hierarchy of convolutional continuous variables. Experiments on several benchmarks show that overlapping transformations outperform other recent continuous relaxations of discrete latent variables including Gumbel-Softmax (Maddison et al., 2016; Jang et al., 2016), and discrete variational autoencoders (Rolfe, 2016).",
"title": ""
},
{
"docid": "e7bb89000329245bccdecbc80549109c",
"text": "This paper presents a tutorial overview of the use of coupling between nonadjacent resonators to produce transmission zeros at real frequencies in microwave filters. Multipath coupling diagrams are constructed and the relative phase shifts of multiple paths are observed to produce the known responses of the cascaded triplet and quadruplet sections. The same technique is also used to explore less common nested cross-coupling structures and to predict their behavior. A discussion of the effects of nonzero electrical length coupling elements is presented. Finally, a brief categorization of the various synthesis and implementation techniques available for these types of filters is given.",
"title": ""
},
{
"docid": "246f56b1b5aa4f095c6dd281a670210f",
"text": "The Allen Brain Atlas (http://www.brain-map.org) provides a unique online public resource integrating extensive gene expression data, connectivity data and neuroanatomical information with powerful search and viewing tools for the adult and developing brain in mouse, human and non-human primate. Here, we review the resources available at the Allen Brain Atlas, describing each product and data type [such as in situ hybridization (ISH) and supporting histology, microarray, RNA sequencing, reference atlases, projection mapping and magnetic resonance imaging]. In addition, standardized and unique features in the web applications are described that enable users to search and mine the various data sets. Features include both simple and sophisticated methods for gene searches, colorimetric and fluorescent ISH image viewers, graphical displays of ISH, microarray and RNA sequencing data, Brain Explorer software for 3D navigation of anatomy and gene expression, and an interactive reference atlas viewer. In addition, cross data set searches enable users to query multiple Allen Brain Atlas data sets simultaneously. All of the Allen Brain Atlas resources can be accessed through the Allen Brain Atlas data portal.",
"title": ""
},
{
"docid": "2e07ca60f1b720c94eed8e9ca76afbdd",
"text": "This paper is concerned with the problem of how to better exploit 3D geometric information for dense semantic image labeling. Existing methods often treat the available 3D geometry information (e.g., 3D depth-map) simply as an additional image channel besides the R-G-B color channels, and apply the same technique for RGB image labeling. In this paper, we demonstrate that directly performing 3D convolution in the framework of a residual connected 3D voxel top-down modulation network can lead to superior results. Specifically, we propose a 3D semantic labeling method to label outdoor street scenes whenever a dense depth map is available. Experiments on the “Synthia” and “Cityscape” datasets show our method outperforms the state-of-the-art methods, suggesting such a simple 3D representation is effective in incorporating 3D geometric information.",
"title": ""
},
{
"docid": "42cfea27f8dcda6c58d2ae0e86f2fb1a",
"text": "Most of the lane marking detection algorithms reported in the literature are suitable for highway scenarios. This paper presents a novel clustered particle filter based approach to lane detection, which is suitable for urban streets in normal traffic conditions. Furthermore, a quality measure for the detection is calculated as a measure of reliability. The core of this approach is the usage of weak models, i.e. the avoidance of strong assumptions about the road geometry. Experiments were carried out in Sydney urban areas with a vehicle mounted laser range scanner and a ccd camera. Through experimentations, we have shown that a clustered particle filter can be used to efficiently extract lane markings.",
"title": ""
},
{
"docid": "52212ff3e1c85b5f5c3fcf0ec71f6f8b",
"text": "Embodied cognition theory proposes that individuals' abstract concepts can be associated with sensorimotor processes. The authors examined the effects of teaching participants novel embodied metaphors, not based in prior physical experience, and found evidence suggesting that they lead to embodied simulation, suggesting refinements to current models of embodied cognition. Creating novel embodiments of abstract concepts in the laboratory may be a useful method for examining mechanisms of embodied cognition.",
"title": ""
},
{
"docid": "447c36d34216b8cb890776248d9cc010",
"text": "Fuzzy cognitive maps (FCMs) are fuzzy-graph structures for representing causal reasoning. Their fuzziness allows hazy degrees of causality between hazy causal objects (concepts). Their graph structure allows systematic causal propagation, in particular forward and backward chaining, and it allows knowledge bases to be grown by connecting different FCMs. FCMs are especially applicable to soft knowledge domains and several example FCMs are given. Causality is represented as a fuzzy relation on causal concepts. A fuzzy causal algebra for governing causal propagation on FCMs is developed. FCM matrix representation and matrix operations are presented in the Appendix.",
"title": ""
},
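A minimal sketch of the causal propagation the abstract describes: concept activations form a state vector, signed causal weights form a matrix, and the state is squashed and propagated repeatedly until it settles. The three-concept map and the sigmoid squashing function below are illustrative choices, not taken from the paper.

```python
# Forward inference on a fuzzy cognitive map: iterate A <- f(A @ W).
import numpy as np

def fcm_infer(W, A0, steps=50, tol=1e-5, lam=1.0):
    f = lambda z: 1.0 / (1.0 + np.exp(-lam * z))   # sigmoid squashing function
    A = np.asarray(A0, dtype=float)
    for _ in range(steps):
        A_next = f(A @ W)                          # propagate causal influence one step
        if np.max(np.abs(A_next - A)) < tol:       # fixed point reached
            return A_next
        A = A_next
    return A

# Hypothetical 3-concept map: C0 promotes C1, C1 promotes C2, C2 inhibits C0.
W = np.array([[ 0.0, 0.7, 0.0],
              [ 0.0, 0.0, 0.8],
              [-0.6, 0.0, 0.0]])
print(fcm_infer(W, A0=[1.0, 0.0, 0.0]))
```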
{
"docid": "d1a4abaa57f978858edf0d7b7dc506ba",
"text": "Abstraction in imagery results from the strategic simplification and elimination of detail to clarify the visual structure of the depicted shape. It is a mainstay of artistic practice and an important ingredient of effective visual communication. We develop a computational method for the abstract depiction of 2D shapes. Our approach works by organizing the shape into parts using a new synthesis of holistic features of the part shape, local features of the shape boundary, and global aspects of shape organization. Our abstractions are new shapes with fewer and clearer parts.",
"title": ""
},
{
"docid": "b85c37caf53d0b70230bfca2cc4c0fa4",
"text": "Both authentication and deauthentication are instrumental for preventing unauthorized access to computers and other resources. While there are obvious motivating factors for using strong authentication mechanisms, convincing users to deauthenticate is not straight-forward, since deauthentication is not considered mandatory. A user who leaves a logged-in workstation unattended (especially for a short time) is typically not inconvenienced in any way; in fact, the other way around - no annoying reauthentication is needed upon return. However, an unattended workstation is trivially susceptible to the well-known \"lunchtime attack\" by any nearby adversary who simply takes over the departed user's log-in session. At the same time, since deauthentication does not intrinsically require user secrets, it can, in principle, be made unobtrusive. To this end, this paper designs the first automatic user deauthentication system - FADEWICH - that does not rely on biometric-or behavior-based techniques (e.g., keystroke dynamics) and does not require users to carry any devices. It uses physical properties of wireless signals and the effect of human bodies on their propagation. To assess FADEWICH's feasibility and performance, extensive experiments were conducted with its prototype. Results show that it suffices to have nine inexpensive wireless sensors deployed in a shared office setting to correctly deauthenticate all users within six seconds (90% within four seconds) after they leave their workstation's vicinity. We considered two realistic scenarios where the adversary attempts to subvert FADEWICH and showed that lunchtime attacks fail.",
"title": ""
},
{
"docid": "2bd3f3e72d99401cdf6f574982bc65ff",
"text": "In the future smart grid, both users and power companies can potentially benefit from the economical and environmental advantages of smart pricing methods to more effectively reflect the fluctuations of the wholesale price into the customer side. In addition, smart pricing can be used to seek social benefits and to implement social objectives. To achieve social objectives, the utility company may need to collect various information about users and their energy consumption behavior, which can be challenging. In this paper, we propose an efficient pricing method to tackle this problem. We assume that each user is equipped with an energy consumption controller (ECC) as part of its smart meter. All smart meters are connected to not only the power grid but also a communication infrastructure. This allows two-way communication among smart meters and the utility company. We analytically model each user's preferences and energy consumption patterns in form of a utility function. Based on this model, we propose a Vickrey-Clarke-Groves (VCG) mechanism which aims to maximize the social welfare, i.e., the aggregate utility functions of all users minus the total energy cost. Our design requires that each user provides some information about its energy demand. In return, the energy provider will determine each user's electricity bill payment. Finally, we verify some important properties of our proposed VCG mechanism for demand side management such as efficiency, user truthfulness, and nonnegative transfer. Simulation results confirm that the proposed pricing method can benefit both users and utility companies.",
"title": ""
},
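To make the mechanism concrete, the brute-force sketch below implements the VCG idea from the abstract on a toy instance: pick the consumption profile that maximizes aggregate utility minus energy cost, then charge each user the externality it imposes on the other users and the provider. The quadratic utility and cost functions and the discretized consumption levels are assumptions for illustration, not the paper's models.

```python
# Brute-force VCG for demand-side management on a tiny, discretized example.
from itertools import product

LEVELS = [0.0, 0.5, 1.0, 1.5, 2.0]                 # admissible consumption levels (kW)

def utility(pref, x):                              # concave, user-specific utility
    return pref * x - 0.5 * x * x

def cost(total):                                   # convex generation cost
    return 0.4 * total * total

def welfare(prefs, profile):
    return sum(utility(p, x) for p, x in zip(prefs, profile)) - cost(sum(profile))

def best_profile(prefs):
    return max(product(LEVELS, repeat=len(prefs)),
               key=lambda prof: welfare(prefs, prof))

def vcg(prefs):
    alloc = best_profile(prefs)                    # welfare-maximizing allocation
    payments = []
    for i in range(len(prefs)):
        others = prefs[:i] + prefs[i + 1:]
        alt = best_profile(others)                 # what the others would do without user i
        with_i = welfare(prefs, alloc) - utility(prefs[i], alloc[i])
        payments.append(welfare(others, alt) - with_i)   # externality imposed by user i
    return alloc, payments

prefs = [3.0, 2.0, 1.5]                            # reported preference parameters (illustrative)
print(vcg(prefs))
```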
{
"docid": "11f8f9bcee6375f499a5db0435e10f30",
"text": "In the field of reverse engineering one often faces the problem of repairing triangulations with holes, intersecting triangles, Möbius-band-like structures or other artifacts. In this paper we present a novel approach for generating manifold triangle meshes from such incomplete or imperfect triangulations. Even for heavily damaged triangulations, representing closed surfaces with arbitrary genus, our algorithm results in correct manifold triangle meshes. The algorithm is based on a randomized optimization technique from probability calculus called simulated annealing.",
"title": ""
},
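The optimization core named in the abstract is simulated annealing. The generic loop below shows the accept/reject rule and cooling schedule on a toy one-dimensional objective rather than the paper's mesh-repair energy; the objective, neighborhood move, and schedule constants are all illustrative.

```python
# Generic simulated annealing: accept worse moves with Boltzmann probability.
import math, random

def anneal(energy, x0, neighbor, T0=1.0, cooling=0.995, steps=5000):
    x, e = x0, energy(x0)
    best, best_e = x, e
    T = T0
    for _ in range(steps):
        cand = neighbor(x)
        cand_e = energy(cand)
        de = cand_e - e
        # always accept improvements; accept worse moves with probability exp(-de/T)
        if de < 0 or random.random() < math.exp(-de / T):
            x, e = cand, cand_e
            if e < best_e:
                best, best_e = x, e
        T *= cooling                               # geometric cooling schedule
    return best, best_e

# Toy multimodal 1-D objective standing in for the paper's mesh-repair energy.
f = lambda x: x * x + 3.0 * math.sin(5.0 * x)
print(anneal(f, x0=4.0, neighbor=lambda x: x + random.uniform(-0.5, 0.5)))
```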
{
"docid": "5a9563f3186414cace353bb261792118",
"text": "Solid waste management is one of major aspect which has to be considered in terms of making urban area environment healthier. The common dustbins placed by the municipal corporation are leading no. of health, environmental and social issues. Various causes are there like improper dustbin placement in city, improper system of collecting waste by City Corporation, and more specifically people are not aware enough to use dustbins in proper way. These various major causes are leading serious problems like, an unhygienic condition, air pollution, and unhealthy environment creating health disease. Up till now, research has been carried out by developing a Software Applications for indicating dustbin status, another by Shortest path method for garbage collecting vehicles by integrating RFID, GSM, GIS system; but no any active efforts has been taken paying attention towards managing such waste in atomized way. Considering all these major factors, a smart solid waste management system is designed that will check status and give alert of dustbin fullness and more significantly system has a feature to literate people to use dustbin properly and to automatically sense and clean garbage present outside the dustbin. Thus presented solution achieves smart solid waste management satisfying goal of making Indian cities clean, healthy and hygienic.",
"title": ""
},
{
"docid": "d3db526d2ee21c3f79ad67589055b7da",
"text": "The success of self-attention in NLP has led to recent applications in end-to-end encoder-decoder architectures for speech recognition. Separately, connectionist temporal classification (CTC) has matured as an alignment-free, non-autoregressive approach to sequence transduction, either by itself or in various multitask and decoding frameworks. We propose SAN-CTC, a deep, fully self-attentional network for CTC, and show it is tractable and competitive for end-toend speech recognition. SAN-CTC trains quickly and outperforms existing CTC models and most encoder-decoder models, with character error rates (CERs) of 4.7% in 1 day on WSJ eval92 and 2.8% in 1 week on LibriSpeech test-clean, with a fixed architecture and one GPU. Similar improvements hold for WERs after LM decoding. We motivate the architecture for speech, evaluate position and downsampling approaches, and explore how label alphabets (character, phoneme, subword) affect attention heads and performance.",
"title": ""
},
{
"docid": "f6c56abce40b67850b37f611e92c2340",
"text": "How do users generate an illusion of presence in a rich and consistent virtual environment from an impoverished, incomplete, and often inconsistent set of sensory cues? We conducted an experiment to explore how multimodal perceptual cues are integrated into a coherent experience of virtual objects and spaces. Specifically, we explored whether inter-modal integration contributes to generating the illusion of presence in virtual environments. To discover whether intermodal integration might play a role in presence, we looked for evidence of intermodal integration in the form of cross-modal interactionsperceptual illusions in which users use sensory cues in one modality to fill in the missing components of perceptual experience. One form of cross-modal interaction, a cross-modal transfer, is defined as a form of synesthesia, that is, a perceptual illusion in which stimulation to a sensory modality connected to the interface (such as the visual modality) is accompanied by perceived stimulation to an unconnected sensory modality that receives no apparent stimulation from the virtual environment (such as the haptic modality). Users of our experimental virtual environment who manipulated the visual analog of a physical force, a virtual spring, reported haptic sensations of physical resistance, even though the interface included no haptic displays. A path model of the data suggested that this cross-modal illusion was correlated with and dependent upon the sensation of spatial and sensory presence. We conclude that this is evidence that presence may derive from the process of multi-modal integration and, therefore, may be associated with other illusions, such as cross-modal transfers, that result from the process of creating a coherent mental model of the space. Finally, we suggest that this perceptual phenomenon might be used to improve user experiences with multimodal interfaces, specifically by supporting limited sensory displays (such as haptic displays) with appropriate synesthetic stimulation to other sensory modalities (such as visual and auditory analogs of haptic forces).",
"title": ""
},
{
"docid": "c227012b6edc39017353d8208fd53703",
"text": "In this article we discuss the implementation of the combined first and second order total variation inpainting that was introduced by Papafitsoros and Schönlieb. We describe the algorithm we use (split Bregman) in detail, and we give some examples that indicate the difference between pure first and pure second order total variation inpainting. Source Code We provide a source code for the algorithm written in C and an online demonstration, accessible on the article web page http://dx.doi.org/10.5201/ipol.2013.40.",
"title": ""
}
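As a much simpler illustration than the split Bregman scheme the article implements, the sketch below runs an explicit gradient flow of a smoothed first-order total-variation energy, updating only the pixels inside the inpainting domain. It omits the second-order term and the Bregman splitting entirely, and the step size and smoothing constants are ad hoc choices that must stay small for stability.

```python
# First-order TV inpainting by explicit gradient flow (illustrative only).
import numpy as np

def tv_inpaint(image, mask, iters=5000, dt=0.02, eps=0.1):
    """image: 2-D float array; mask: True where pixels are missing.
    dt must stay small relative to eps for the explicit scheme to remain stable."""
    u = image.astype(float).copy()
    for _ in range(iters):
        uy, ux = np.gradient(u)
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)    # smoothed |grad u|
        # curvature term div(grad u / |grad u|): the descent direction for TV
        curv = np.gradient(ux / mag, axis=1) + np.gradient(uy / mag, axis=0)
        u[mask] += dt * curv[mask]                     # evolve only the missing pixels
    return u
```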
] |
scidocsrr
|
5ffaa5aa093ac99fe745beba05fef4d3
|
HIGH-CONVERSION-RATIO BIDIRECTIONAL DC – DC CONVERTER WITH COUPLED INDUCTOR
|
[
{
"docid": "91ff59f45c49a6951b6ae0e801661d57",
"text": "This paper presents the analysis, design, and implementation of a parallel connected maximum power point tracking (MPPT) system for stand-alone photovoltaic power generation. The parallel connection of the MPPT system reduces the negative influence of power converter losses in the overall efficiency because only a part of the generated power is processed by the MPPT system. Furthermore, all control algorithms used in the classical series-connected MPPT can be applied to the parallel system. A simple bidirectional dc-dc power converter is proposed for the MPPT implementation and presents the functions of battery charger and step-up converter. The operation characteristics of the proposed circuit are analyzed with the implementation of a prototype in a practical application.",
"title": ""
}
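The abstract notes that the classical MPPT control algorithms carry over to the parallel configuration; one such classical algorithm is perturb and observe, sketched below. The `read_panel` and `set_duty` hooks are hypothetical placeholders for the converter's measurement and PWM interfaces, and the step sizes are illustrative.

```python
# Classical perturb-and-observe MPPT loop (illustrative, not the paper's code).
def perturb_and_observe(read_panel, set_duty, steps=10000, d0=0.5,
                        delta=0.01, d_min=0.1, d_max=0.9):
    """read_panel() -> (pv_voltage, pv_current); set_duty(d) drives the converter."""
    duty, direction = d0, +1
    v, i = read_panel()
    p_prev = v * i
    for _ in range(steps):
        duty = min(d_max, max(d_min, duty + direction * delta))
        set_duty(duty)
        v, i = read_panel()                 # sample the PV voltage and current
        p = v * i
        if p < p_prev:                      # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return duty
```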
] |
[
{
"docid": "a1a4ebdc979e4618527b6dcd1d9b69f1",
"text": "Hardware-based malware detectors (HMDs) are a key emerging technology to build trustworthy computing platforms, especially mobile platforms. Quantifying the efficacy of HMDs against malicious adversaries is thus an important problem. The challenge lies in that real-world malware typically adapts to defenses, evades being run in experimental settings, and hides behind benign applications. Thus, realizing the potential of HMDs as a line of defense – that has a small and battery-efficient code base – requires a rigorous foundation for evaluating HMDs. To this end, we introduce EMMA—a platform to evaluate the efficacy of HMDs for mobile platforms. EMMA deconstructs malware into atomic, orthogonal actions and introduces a systematic way of pitting different HMDs against a diverse subset of malware hidden inside benign applications. EMMA drives both malware and benign programs with real user-inputs to yield an HMD’s effective operating range— i.e., the malware actions a particular HMD is capable of detecting. We show that small atomic actions, such as stealing a Contact or SMS, have surprisingly large hardware footprints, and use this insight to design HMD algorithms that are less intrusive than prior work and yet perform 24.7% better. Finally, EMMA brings up a surprising new result— obfuscation techniques used by malware to evade static analyses makes them more detectable using HMDs.",
"title": ""
},
{
"docid": "135deb35cf3600cba8e791d604e26ffb",
"text": "Much of this book describes the algorithms behind search engines and information retrieval systems. By contrast, this chapter focuses on the human users of search systems, and the window through which search systems are seen: the search user interface. The role of the search user interface is to aid in the searcher's understanding and expression of their information needs, and to help users formulate their queries, select among available information sources, understand search results, and keep track of the progress of their search. In the first edition of this book, very little was known about what makes for an effective search interface. In the intervening years, much has become understood about which ideas work from a usability perspective, and which do not. This chapter briefly summarizes the state of the art of search interface design, both in terms of developments in academic research as well as in deployment in commercial systems. The sections that follow discuss how people search, search interfaces today, visualization in search interfaces, and the design and evaluation of search user interfaces. Search tasks range from the relatively simple (e.g., looking up disputed facts or finding weather information) to the rich and complex (e.g., job seeking and planning vacations). Search interfaces should support a range of tasks, while taking into account how people think about searching for information. This section summarizes theoretical models about and empirical observations of the process of online information seeking. Information Lookup versus Exploratory Search User interaction with search interfaces differs depending on the type of task, the amount of time and effort available to invest in the process, and the domain expertise of the information seeker. The simple interaction dialogue used in Web search engines is most appropriate for finding answers to questions or to finding Web sites or other resources that act as search starting points. But, as Marchionini [89] notes, the \" turn-taking \" interface of Web search engines is inherently limited and is many cases is being supplanted by speciality search engines – such as for travel and health information – that offer richer interaction models. Marchionini [89] makes a distinction between information lookup and exploratory search. Lookup tasks are akin to fact retrieval or question answering, and are satisfied by short, discrete pieces of information: numbers, dates, names, or names of files or Web sites. Standard Web search interactions (as well as standard database management system queries) can …",
"title": ""
},
{
"docid": "efdfac22c6cdf96d17e89b4452865eff",
"text": "In India, demand for various fruits and vegetables are increasing as population grows. Automation in agriculture plays a vital role in increasing the productivity and economical growth of the Country, therefore there is a need for automated system for accurate, fast and quality fruits determination. Researchers have developed numerous algorithms for quality grading and sorting of fruit. Color is most striking feature for identifying disease and maturity of the fruit. In this paper; efficient algorithms for color feature extraction are reviewed. Then after, various classification techniques are compared based on their merits and demerits. The objective of the paper is to provide introduction to machine learning and color based grading algorithms, its components and current work reported on an automatic fruit grading system.",
"title": ""
},
{
"docid": "6e63abd83cc2822f011c831234c6d2e7",
"text": "The rapid uptake of mobile devices and the rising popularity of mobile applications and services pose unprecedented demands on mobile and wireless networking infrastructure. Upcoming 5G systems are evolving to support exploding mobile traffic volumes, real-time extraction of fine-grained analytics, and agile management of network resources, so as to maximize user experience. Fulfilling these tasks is challenging, as mobile environments are increasingly complex, heterogeneous, and evolving. One potential solution is to resort to advanced machine learning techniques, in order to help manage the rise in data volumes and algorithm-driven applications. The recent success of deep learning underpins new and powerful tools that tackle problems in this space. In this paper we bridge the gap between deep learning and mobile and wireless networking research, by presenting a comprehensive survey of the crossovers between the two areas. We first briefly introduce essential background and state-of-theart in deep learning techniques with potential applications to networking. We then discuss several techniques and platforms that facilitate the efficient deployment of deep learning onto mobile systems. Subsequently, we provide an encyclopedic review of mobile and wireless networking research based on deep learning, which we categorize by different domains. Drawing from our experience, we discuss how to tailor deep learning to mobile environments. We complete this survey by pinpointing current challenges and open future directions for research.",
"title": ""
},
{
"docid": "5420818f35031e07207a9bc9168be3c2",
"text": "DFRWS is dedicated to the sharing of knowledge and ideas about digital forensics research. Ever since it organized the first open workshop devoted to digital forensics in 2001, DFRWS continues to bring academics and practitioners together in an informal environment. As a non-profit, volunteer organization, DFRWS sponsors technical working groups, annual conferences and challenges to help drive the direction of research and development.",
"title": ""
},
{
"docid": "ddc0b599dc2cb3672e9a2a1f5a9a9163",
"text": "Head and modifier detection is an important problem for applications that handle short texts such as search queries, ads keywords, titles, captions, etc. In many cases, short texts such as search queries do not follow grammar rules, and existing approaches for head and modifier detection are coarse-grained, domain specific, and/or require labeling of large amounts of training data. In this paper, we introduce a semantic approach for head and modifier detection. We first obtain a large number of instance level head-modifier pairs from search log. Then, we develop a conceptualization mechanism to generalize the instance level pairs to concept level. Finally, we derive weighted concept patterns that are concise, accurate, and have strong generalization power in head and modifier detection. Furthermore, we identify a subset of modifiers that we call constraints. Constraints are usually specific and not negligible as far as the intent of the short text is concerned, while non-constraint modifiers are more subjective. The mechanism we developed has been used in production for search relevance and ads matching. We use extensive experiment results to demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "7916a261319dad5f257a0b8e0fa97fec",
"text": "INTRODUCTION\nPreliminary research has indicated that recreational ketamine use may be associated with marked cognitive impairments and elevated psychopathological symptoms, although no study to date has determined how these are affected by differing frequencies of use or whether they are reversible on cessation of use. In this study we aimed to determine how variations in ketamine use and abstention from prior use affect neurocognitive function and psychological wellbeing.\n\n\nMETHOD\nWe assessed a total of 150 individuals: 30 frequent ketamine users, 30 infrequent ketamine users, 30 ex-ketamine users, 30 polydrug users and 30 controls who did not use illicit drugs. Cognitive tasks included spatial working memory, pattern recognition memory, the Stockings of Cambridge (a variant of the Tower of London task), simple vigilance and verbal and category fluency. Standardized questionnaires were used to assess psychological wellbeing. Hair analysis was used to verify group membership.\n\n\nRESULTS\nFrequent ketamine users were impaired on spatial working memory, pattern recognition memory, Stockings of Cambridge and category fluency but exhibited preserved verbal fluency and prose recall. There were no differences in the performance of the infrequent ketamine users or ex-users compared to the other groups. Frequent users showed increased delusional, dissociative and schizotypal symptoms which were also evident to a lesser extent in infrequent and ex-users. Delusional symptoms correlated positively with the amount of ketamine used currently by the frequent users.\n\n\nCONCLUSIONS\nFrequent ketamine use is associated with impairments in working memory, episodic memory and aspects of executive function as well as reduced psychological wellbeing. 'Recreational' ketamine use does not appear to be associated with distinct cognitive impairments although increased levels of delusional and dissociative symptoms were observed. As no performance decrements were observed in the ex-ketamine users, it is possible that the cognitive impairments observed in the frequent ketamine group are reversible upon cessation of ketamine use, although delusional symptoms persist.",
"title": ""
},
{
"docid": "405cd4bacbcfddc9b4254aee166ee394",
"text": "A fundamental problem for the visual perception of 3D shape is that patterns of optical stimulation are inherently ambiguous. Recent mathematical analyses have shown, however, that these ambiguities can be highly constrained, so that many aspects of 3D structure are uniquely specified even though others might be underdetermined. Empirical results with human observers reveal a similar pattern of performance. Judgments about 3D shape are often systematically distorted relative to the actual structure of an observed scene, but these distortions are typically constrained to a limited class of transformations. These findings suggest that the perceptual representation of 3D shape involves a relatively abstract data structure that is based primarily on qualitative properties that can be reliably determined from visual information.",
"title": ""
},
{
"docid": "747a068cb499411dd0c9fcb786cd3c8a",
"text": "Identifying public misinformation is a complicated and challenging task. Stance detection, i.e. determining the relative perspective a news source takes towards a specific claim, is an important part of evaluating the veracity of the assertion. Automating the process of stance detection would arguably benefit human fact checkers. In this paper, we present our stance detection model which claimed third place in the first stage of the Fake News Challenge. Despite our straightforward approach, our model performs at a competitive level with the complex ensembles of the top two winning teams. We therefore propose our model as the ‘simple but tough-to-beat baseline’ for the Fake News Challenge stance detection task.",
"title": ""
},
{
"docid": "56e1778df9d5b6fa36cbf4caae710e67",
"text": "The Levenberg-Marquardt method is a standard technique used to solve nonlinear least squares problems. Least squares problems arise when fitting a parameterized function to a set of measured data points by minimizing the sum of the squares of the errors between the data points and the function. Nonlinear least squares problems arise when the function is not linear in the parameters. Nonlinear least squares methods involve an iterative improvement to parameter values in order to reduce the sum of the squares of the errors between the function and the measured data points. The Levenberg-Marquardt curve-fitting method is actually a combination of two minimization methods: the gradient descent method and the Gauss-Newton method. In the gradient descent method, the sum of the squared errors is reduced by updating the parameters in the direction of the greatest reduction of the least squares objective. In the Gauss-Newton method, the sum of the squared errors is reduced by assuming the least squares function is locally quadratic, and finding the minimum of the quadratic. The Levenberg-Marquardt method acts more like a gradient-descent method when the parameters are far from their optimal value, and acts more like the Gauss-Newton method when the parameters are close to their optimal value. This document describes these methods and illustrates the use of software to solve nonlinear least squares curve-fitting problems.",
"title": ""
},
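A compact numerical version of the update the text describes, with the damping term that interpolates between gradient descent and Gauss-Newton. The exponential model, synthetic data, and step-control constants are illustrative; a production implementation would add better convergence tests and an analytic Jacobian where available.

```python
# Levenberg-Marquardt: solve (J^T J + lambda * diag(J^T J)) dp = J^T r each iteration.
import numpy as np

def levmar(model, p0, x, y, iters=100, lam=1e-2, tol=1e-12):
    """Fit y ~ model(x, p) by Levenberg-Marquardt with a numerical Jacobian."""
    p = np.asarray(p0, dtype=float)
    r = y - model(x, p)
    for _ in range(iters):
        # numerical Jacobian of the model with respect to the parameters
        J = np.empty((len(x), len(p)))
        for j in range(len(p)):
            dp = np.zeros_like(p)
            dp[j] = 1e-6 * max(1.0, abs(p[j]))
            J[:, j] = (model(x, p + dp) - model(x, p - dp)) / (2 * dp[j])
        JTJ, JTr = J.T @ J, J.T @ r
        step = np.linalg.solve(JTJ + lam * np.diag(np.diag(JTJ)), JTr)
        r_new = y - model(x, p + step)
        if r_new @ r_new < r @ r:       # better fit: accept, behave more like Gauss-Newton
            p, r, lam = p + step, r_new, lam / 3.0
            if step @ step < tol:
                break
        else:                           # worse fit: reject, behave more like gradient descent
            lam *= 3.0
    return p

# Illustrative fit of y = a * exp(b * x) to noisy synthetic data.
f = lambda x, p: p[0] * np.exp(p[1] * x)
x = np.linspace(0.0, 1.0, 50)
y = f(x, [2.0, -1.5]) + 0.01 * np.random.randn(50)
print(levmar(f, [1.0, 0.0], x, y))      # should land near [2.0, -1.5]
```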
{
"docid": "c780b818ed970d0cd8cc2884a265c852",
"text": "Subpixel rendering technologies take advantage of the subpixel structure of a display to increase the apparent resolution and to improve the display quality of text, graphics, or images. These techniques can potentially improve the apparent resolution because a single pixel on color liquid crystal display (LCD) or organic light-emitting diode (OLED) displays consists of several independently controllable colored subpixels. Applications of subpixel rendering are font rendering and image/video subsampling. By controlling individual subpixel values of neighboring pixels, it is possible to microshift the apparent position of a line to give greater details of text. Similarly, since the individual selectable components are increased threefold by controlling subpixels rather than pixels, subpixel-based subsampling can potentially improve the apparent resolution of a down-scaled image. However, the increased apparent luminance resolution often comes at the price of color fringing artifacts. A major challenge is to suppress chrominance distortion while maintaining apparent luminance sharpness. This column introduces subpixel arrangement in color displays, how subpixel rendering works, and several practical subpixel rendering applications in font rendering and image subsampling.",
"title": ""
},
{
"docid": "7749b46bc899b3d876d63d8f3d0981ea",
"text": "This paper details the control and guidance architecture for the T-wing tail-sitter unmanned air vehicle, (UAV). The T-wing is a vertical take off and landing (VTOL) UAV that is capable of both wing-born horizontal flight and propeller born vertical mode flight including hover and descent. During low-speed vertical flight the T-wing uses propeller wash over its aerodynamic surfaces to effect control. At the lowest level, the vehicle uses a mixture of classical and LQR controllers for angular rate and translational velocity control. These low-level controllers are directed by a series of proportional guidance controllers for the vertical, horizontal and transition flight modes that allow the vehicle to achieve autonomous waypoint navigation. The control design for the T-wing is complicated by the large differences in vehicle dynamics between vertical and horizontal flight; the difficulty of accurately predicting the low-speed vehicle aerodynamics; and the basic instability of the vertical flight mode. This paper considers the control design problem for the T-wing in light of these factors. In particular it focuses on the integration of all the different types and levels of controllers into a full flight-vehicle control system.",
"title": ""
},
{
"docid": "8ea44a793f57f036db0142cf51b12928",
"text": "This paper presents a comparative study of various classification methods in the application of automatic brain tumor segmentation. The data used in the study are 3D MRI volumes from MICCAI2016 brain tumor segmentation (BRATS) benchmark. 30 volumes are chosen randomly as a training set and 57 volumes are randomly chosen as a test set. The volumes are preprocessed and a feature vector is retrieved from each volume's four modalities (T1, T1 contrast-enhanced, T2 and Fluid-attenuated inversion recovery). The popular Dice score is used as an accuracy measure to record each classifier recognition results. All classifiers are implemented in the popular machine learning suit of algorithms, WEKA.",
"title": ""
},
{
"docid": "f8fc595f60fda530cc7796dbba83481c",
"text": "This paper proposes a pseudo random number generator using Elman neural network. The proposed neural network is a recurrent neural network able to generate pseudo-random numbers from the weight matrices obtained from the layer weights of the Elman network. The proposed method is not computationally demanding and is easy to implement for varying bit sequences. The random numbers generated using our method have been subjected to frequency test and ENT test program. The results show that recurrent neural networks can be used as a pseudo random number generator(prng).",
"title": ""
},
{
"docid": "3230fba68358a08ab9112887bdd73bb9",
"text": "The local field potential (LFP) reflects activity of many neurons in the vicinity of the recording electrode and is therefore useful for studying local network dynamics. Much of the nature of the LFP is, however, still unknown. There are, for instance, contradicting reports on the spatial extent of the region generating the LFP. Here, we use a detailed biophysical modeling approach to investigate the size of the contributing region by simulating the LFP from a large number of neurons around the electrode. We find that the size of the generating region depends on the neuron morphology, the synapse distribution, and the correlation in synaptic activity. For uncorrelated activity, the LFP represents cells in a small region (within a radius of a few hundred micrometers). If the LFP contributions from different cells are correlated, the size of the generating region is determined by the spatial extent of the correlated activity.",
"title": ""
},
{
"docid": "00ccf224c9188cf26f1da60ec9aa741b",
"text": "In recent years, distributed representations of inputs have led to performance gains in many applications by allowing statistical information to be shared across inputs. However, the predicted outputs (labels, and more generally structures) are still treated as discrete objects even though outputs are often not discrete units of meaning. In this paper, we present a new formulation for structured prediction where we represent individual labels in a structure as dense vectors and allow semantically similar labels to share parameters. We extend this representation to larger structures by defining compositionality using tensor products to give a natural generalization of standard structured prediction approaches. We define a learning objective for jointly learning the model parameters and the label vectors and propose an alternating minimization algorithm for learning. We show that our formulation outperforms structural SVM baselines in two tasks: multiclass document classification and part-of-speech tagging.",
"title": ""
},
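A stripped-down multiclass sketch of the core idea: scores are bilinear in the input features and learned dense label vectors, so similar labels can share parameters. The paper composes such vectors over larger structures with tensor products and uses an alternating solver, neither of which is shown here; the data, dimensions, and plain SGD loop are all illustrative.

```python
# score(x, y) = x^T W v_y with a learned embedding v_y per label, trained by softmax SGD.
import numpy as np

rng = np.random.default_rng(0)
n, d, k, emb = 500, 20, 6, 4                      # samples, features, labels, embedding dim
X = rng.normal(size=(n, d))
true_W = rng.normal(size=(d, k))
y = (X @ true_W).argmax(axis=1)                   # synthetic labels

W = 0.01 * rng.normal(size=(d, emb))              # feature-to-embedding map
V = 0.01 * rng.normal(size=(k, emb))              # one dense vector per label

lr = 0.1
for epoch in range(200):
    scores = X @ W @ V.T                          # (n, k) bilinear scores
    scores -= scores.max(axis=1, keepdims=True)
    P = np.exp(scores); P /= P.sum(axis=1, keepdims=True)
    G = P.copy(); G[np.arange(n), y] -= 1.0       # softmax gradient w.r.t. the scores
    W -= lr * (X.T @ (G @ V)) / n
    V -= lr * (G.T @ (X @ W)) / n

print("train accuracy:", ((X @ W @ V.T).argmax(axis=1) == y).mean())
```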
{
"docid": "77d4f92c994ab1c0703e6a9583d9aabd",
"text": "To approach the question of what life is, we first have to state that life exists exclusively as the “being-alive” of discrete spatio-temporal entities. The simplest “unit” that can legitimately be considered to be alive is an intact prokaryotic cell as a whole. In this review, I discuss critically various aspects of the nature and singularity of living beings from the biologist’s point of view. In spite of the enormous richness of forms and performances in the biotic realm, there is a considerable uniformity in the chemical “machinery of life,” which powers all organisms. Life represents a dynamic state; it is performance of a system of singular kind: “life-as-action” approach. All “life-as-things” hypotheses are wrong from the beginning. Life is conditioned by certain substances but not defined by them. Living systems are endowed with a power to maintain their inherent functional order (organization) permanently against disruptive influences. The term organization inherently involves the aspect of functionality, the teleonomic, purposeful cooperation of structural and functional elements. Structures in turn require information for their specification, and information presupposes a source. This source is constituted in living systems by the nucleic acids. Organisms are unique in having a capacity to use, maintain, and replicate internal information, which yields the basis for their specific organization in its perpetuation. The existence of a genome is a necessary condition for life and one of the absolute differences between living and non-living matter. Organization includes both what makes life possible and what is determined by it. It is not something “implanted” into the living beings but has its origin and capacity for maintenance within the system itself. It is the essence of life. The property of being alive we can consider as an emergent property of cells that corresponds to a certain level of self-maintained complex order or organization.",
"title": ""
},
{
"docid": "8b6e2ef05f59868363beaa9b810a8d36",
"text": "Causal inference from observational data is a subject of active research and development in statistics and computer science. Many statistical software packages have been developed for this purpose. However, these toolkits do not scale to large datasets. We propose and demonstrate ZaliQL: a SQL-based framework for drawing causal inference from observational data. ZaliQL supports the state-of-the-art methods for causal inference and runs at scale within PostgreSQL database system. In addition, we built a visual interface to wrap around ZaliQL. In our demonstration, we will use this GUI to show a live investigation of the causal effect of different weather conditions on flight delays.",
"title": ""
},
{
"docid": "84f7b499cd608de1ee7443fcd7194f19",
"text": "In this paper, we present a new computationally efficient numerical scheme for the minimizing flow approach for optimal mass transport (OMT) with applications to non-rigid 3D image registration. The approach utilizes all of the gray-scale data in both images, and the optimal mapping from image A to image B is the inverse of the optimal mapping from B to A. Further, no landmarks need to be specified, and the minimizer of the distance functional involved is unique. Our implementation also employs multigrid, and parallel methodologies on a consumer graphics processing unit (GPU) for fast computation. Although computing the optimal map has been shown to be computationally expensive in the past, we show that our approach is orders of magnitude faster then previous work and is capable of finding transport maps with optimality measures (mean curl) previously unattainable by other works (which directly influences the accuracy of registration). We give results where the algorithm was used to compute non-rigid registrations of 3D synthetic data as well as intra-patient pre-operative and post-operative 3D brain MRI datasets.",
"title": ""
},
{
"docid": "f840350d14a99f3da40729cfe6d56ef5",
"text": "This paper presents a sub-radix-2 redundant architecture to improve the performance of switched-capacitor successive-approximation-register (SAR) analog-to-digital converters (ADCs). The redundancy not only guarantees digitally correctable static nonlinearities of the converter, it also offers means to combat dynamic errors in the conversion process, and thus, accelerating the speed of the SAR architecture. A perturbation-based digital calibration technique is also described that closely couples with the architecture choice to accomplish simultaneous identification of multiple capacitor mismatch errors of the ADC, enabling the downsizing of all sampling capacitors to save power and silicon area. A 12-bit prototype measured a Nyquist 70.1-dB signal-to-noise-plus-distortion ratio (SNDR) and a Nyquist 90.3-dB spurious free dynamic range (SFDR) at 22.5 MS/s, while dissipating 3.0-mW power from a 1.2-V supply and occupying 0.06-mm2 silicon area in a 0.13-μm CMOS process. The figure of merit (FoM) of this ADC is 51.3 fJ/step measured at 22.5 MS/s and 36.7 fJ/step at 45 MS/s.",
"title": ""
}
] |
scidocsrr
|
6840294ae55a2ac503a2f135db4a6006
|
Depth Estimation via Affinity Learned with Convolutional Spatial Propagation Network
|
[
{
"docid": "f7a1ecc5bb377961737c37de02953cf1",
"text": "Surface reconstruction from a point cloud is a standard subproblem in many algorithms for dense 3D reconstruction from RGB images or depth maps. Methods, performing only local operations in the vicinity of individual points, are very fast, but reconstructed models typically contain lots of holes. On the other hand, regularized volumetric approaches, formulated as a global optimization, are typically too slow for real-time interactive applications. We propose to use a regression forest based method, which predicts the projection of a grid point to the surface, depending on the spatial configuration of point density in the grid point neighborhood. We designed a suitable feature vector and efficient oct-tree based GPU evaluation, capable of predicting surface of high resolution 3D models in milliseconds. Our method learns and predicts surfaces from an observed point cloud sparser than the evaluation grid, and therefore effectively acts as a regularizer.",
"title": ""
},
{
"docid": "b3a85b88e4a557fcb7f0efb6ba628418",
"text": "We present the bilateral solver, a novel algorithm for edgeaware smoothing that combines the flexibility and speed of simple filtering approaches with the accuracy of domain-specific optimization algorithms. Our technique is capable of matching or improving upon state-of-the-art results on several different computer vision tasks (stereo, depth superresolution, colorization, and semantic segmentation) while being 10-1000× faster than baseline techniques with comparable accuracy, and producing lower-error output than techniques with comparable runtimes. The bilateral solver is fast, robust, straightforward to generalize to new domains, and simple to integrate into deep learning pipelines.",
"title": ""
}
] |
[
{
"docid": "07fe7ad68e4f7bb1a978cda02a564044",
"text": "Temporomandibular disorders (TMDs) affect 8–12 % of the adolescent and adult population, resulting in patient discomfort and affecting quality of life. Despite the growing incidence of these disorders, an effective screening modality to detect TMDs is still lacking. Although magnetic resonance imaging is the gold standard for imaging of the temporomandibular joint (TMJ), it has a few drawbacks such as cost and its time-consuming nature. High-resolution ultrasonography is a non-invasive and cost-effective imaging modality that enables simultaneous visualization of the hard and soft tissue components of the TMJ. This study aimed to evaluate the correlations between the clinical signs and symptoms of patients with chronic TMJ disorders and their ultrasonographic findings, thereby enabling the use of ultrasonography as an imaging modality for screening of TMDs. Twenty patients with chronic TMDs were selected according to the Research Diagnostic Criteria for TMDs. Ultrasonographic imaging of individual TMJs was performed to assess the destructive changes, effusion, and disc dislocation. Fisher’s exact test was used to examine the correlations between the findings obtained from the ultrasonographic investigation and the clinical signs and symptoms. There was a significant correlation between pain and joint effusion as well as between clicking and surface erosion. The present findings suggest that ultrasonography can be used as a screening modality to assess the hard and soft tissue changes in patients presenting with signs and symptoms of TMDs.",
"title": ""
},
{
"docid": "b389cf1f4274b250039414101cf0cc98",
"text": "We present a framework for analyzing the structure of digital media streams. Though our methods work for video, text, and audio, we concentrate on detecting the structure of digital music files. In the first step, spectral data is used to construct a similarity matrix calculated from inter-frame spectral similarity. The digital audio can be robustly segmented by correlating a kernel along the diagonal of the similarity matrix. Once segmented, spectral statistics of each segment are computed. In the second step, segments are clustered based on the selfsimilarity of their statistics. This reveals the structure of the digital music in a set of segment boundaries and labels. Finally, the music can be summarized by selecting clusters with repeated segments throughout the piece. The summaries can be customized for various applications based on the structure of the original music.",
"title": ""
},
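A sketch of the segmentation step described above: build a cosine self-similarity matrix over spectral frames and correlate a Gaussian-tapered checkerboard kernel along its diagonal to obtain a novelty curve whose peaks mark candidate segment boundaries. The feature extraction, kernel width, and taper shape below are illustrative choices.

```python
# Checkerboard-kernel novelty over a frame-wise self-similarity matrix.
import numpy as np

def novelty_curve(frames, kernel_size=32):
    """frames: (n_frames, n_features) spectral features, n_frames > kernel_size."""
    F = frames / (np.linalg.norm(frames, axis=1, keepdims=True) + 1e-12)
    S = F @ F.T                                    # cosine self-similarity matrix

    half = kernel_size // 2
    g = np.exp(-0.5 * (np.arange(-half, half) / (0.4 * half)) ** 2)
    taper = np.outer(g, g)                         # separable Gaussian taper
    sign = np.outer(np.r_[-np.ones(half), np.ones(half)],
                    np.r_[-np.ones(half), np.ones(half)])
    kernel = taper * sign                          # Gaussian-tapered checkerboard

    nov = np.zeros(S.shape[0])
    for t in range(half, S.shape[0] - half):
        nov[t] = np.sum(S[t - half:t + half, t - half:t + half] * kernel)
    return nov                                     # segment boundaries = peaks of nov

# Synthetic check: two 60-frame blocks with different spectra -> peak near frame 60.
rng = np.random.default_rng(0)
a, b = rng.standard_normal(40), rng.standard_normal(40)
frames = np.vstack([a + 0.05 * rng.standard_normal((60, 40)),
                    b + 0.05 * rng.standard_normal((60, 40))])
print(int(np.argmax(novelty_curve(frames))))
```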
{
"docid": "33b8475e5149ce08e50a346401f2542b",
"text": "Emerging non-volatile memory (NVM) technologies, such as PCRAM and STT-RAM, have demonstrated great potentials to be the candidates as replacement for DRAM-based main memory design for computer systems. It is important for computer architects to model such emerging memory technologies at the architecture level, to understand the benefits and limitations for better utilizing them to improve the performance/energy/reliability of future computing systems. In this paper, we introduce an architectural-level simulator called NV Main, which can model main memory design with both DRAM and emerging non-volatile memory technologies, and can facilitate designers to perform design space explorations utilizing these emerging memory technologies. We discuss design points of the simulator and provide validation of the model, along with case studies on using the tool for design space explorations.",
"title": ""
},
{
"docid": "5b01c2e7bba6ab1abdda9b1a23568d2a",
"text": "First, we theoretically analyze the MMD-based estimates. Our analysis establishes that, under some mild conditions, the estimate is statistically consistent. More importantly, it provides an upper bound on the error in the estimate in terms of intuitive geometric quantities like class separation and data spread. Next, we use the insights obtained from the theoretical analysis, to propose a novel convex formulation that automatically learns the kernel to be employed in the MMD-based estimation. We design an efficient cutting plane algorithm for solving this formulation. Finally, we empirically compare our estimator with several existing methods, and show significantly improved performance under varying datasets, class ratios, and training sizes.",
"title": ""
},
{
"docid": "74a58d57501b25e8378628fef6471ea9",
"text": "Employee engagement leads to commitment and psychological attachment and reflects in the form of high retention (low attrition) of employees. The level of engagement in employees can be enhanced by identifying its drivers (influential factors) and work on them.For the purpose of study, the drivers of the employee engagement are identified and hypotheses have been formulated. The relationship between employee engagement and employee retention is examined from the response to separate questionnaires from 185 employees who are chosen based on random sampling. The study finds that the employee retention can be improved by increasing the level of employee engagement and focusing on few non-financial drivers.Practical implication of this study is the retention can be improved without financial expenditure when there are economic constraints. Organizations can design good practices in the light of findings to retain their besttalent (highly skilled and specialized human resources) without much financial burden.",
"title": ""
},
{
"docid": "fc5a2b6f5258e59afff3f910010b1f9a",
"text": "This paper proposes a novel isolated bidirectional converter, which can efficiently transfer energy between 400 V DC micro grid and 48 V DC batteries. The proposed structure includes primary windings of two flyback transformers, which are connected in series and sharing the high DC micro grid voltage equally, and secondary windings, which are connected in parallel to batteries. Few decoupling diodes are added into the proposed circuit on both sides, which can let the leakage inductance energy of flyback transformers be recycled easily and reduce the voltage stress as well as power losses during bidirectional power transfer. Therefore, low voltage rating and low conduction resistance switches can be selected to improve system efficiency. A laboratory prototype of the proposed converter with an input/output nominal voltage of 400 V/48 V and the maximum capacity of 500 W is implemented. The highest power conversion efficiency is 93.1 % in step-down function, and near 93 % in step-up function.",
"title": ""
},
{
"docid": "83d98a92aa02ac6f21385e964a690a43",
"text": "This research work highlights the effects of acoustic emission (AE) signals emitted during the milling of H13 tool steel as an important parameter in the identification of tool wear. These generated AE signals provide information on the chip formation, wear, fracture and general deformation. Furthermore, it is aimed at implementing an online monitoring system for machine tools, using a sensor fusion approach to adequately determine process parameters necessary for creating an adequate tool change timing schedule for machining operations. Keywords-Tool Wear Monitoring, acoustic emission, milling",
"title": ""
},
{
"docid": "ac07682e0fa700a8f0c9df025feb2c53",
"text": "Today's web applications run inside a complex browser environment that is buggy, ill-specified, and implemented in different ways by different browsers. Thus, web applications that desire robustness must use a variety of conditional code paths and ugly hacks to deal with the vagaries of their runtime. Our new exokernel browser, called Atlantis, solves this problem by providing pages with an extensible execution environment. Atlantis defines a narrow API for basic services like collecting user input, exchanging network data, and rendering images. By composing these primitives, web pages can define custom, high-level execution environments. Thus, an application which does not want a dependence on Atlantis'predefined web stack can selectively redefine components of that stack, or define markup formats and scripting languages that look nothing like the current browser runtime. Unlike prior microkernel browsers like OP, and unlike compile-to-JavaScript frameworks like GWT, Atlantis is the first browsing system to truly minimize a web page's dependence on black box browser code. This makes it much easier to develop robust, secure web applications.",
"title": ""
},
{
"docid": "333800eb8bb529aa724dd43abffd88d8",
"text": "The efficiency of Boolean function manipulation depends on the form of representation of Boolean functions. Binary Decision Diagrams (BDD's) are graph representations proposed by Akers and Bryant. BDD's have some properties which can be used to enable efficient Boolean function manipulation.\nIn this paper, we describe a technique of more efficient Boolean function manipulation that uses Shared Binary Decision Diagrams (SBDD's) with attributed edges. Our implements include an ordering algorithm of input variables and a method of handling don't care. We show experimental results produced by the implementation of the Boolean function manipulator.",
"title": ""
},
{
"docid": "43e5146e4a7723cf391b013979a1da32",
"text": "The notions of disintegration and Bayesian inversion are fundamental in conditional probability theory. They produce channels, as conditional probabilities, from a joint state, or from an already given channel (in opposite direction). These notions exist in the literature, in concrete situations, but are presented here in abstract graphical formulations. The resulting abstract descriptions are used for proving basic results in conditional probability theory. The existence of disintegration and Bayesian inversion is discussed for discrete probability, and also for measure-theoretic probability — via standard Borel spaces and via likelihoods. Finally, the usefulness of disintegration and Bayesian inversion is illustrated in several examples.",
"title": ""
},
{
"docid": "59a66eef643c02fe5e6c22108a6b983e",
"text": "Corrosion is a prime concern for active implantable devices. In this paper we review the principles underlying the concepts of hermetic packages and encapsulation, used to protect implanted electronics, some of which remain widely overlooked. We discuss how technological advances have created a need to update the way we evaluate the suitability of both protection methods. We demonstrate how lifetime predictability is lost for very small hermetic packages and introduce a single parameter to compare different packages, with an equation to calculate the minimum sensitivity required from a test method to guarantee a given lifetime. In the second part of this paper, we review the literature on the corrosion of encapsulated integrated circuits (ICs) and, following a new analysis of published data, we propose an equation for the pre-corrosion lifetime of implanted ICs, and discuss the influence of the temperature, relative humidity, encapsulation and field-strength. As any new protection will be tested under accelerated conditions, we demonstrate the sensitivity of acceleration factors to some inaccurately known parameters. These results are relevant for any application of electronics working in a moist environment. Our comparison of encapsulation and hermetic packages suggests that both concepts may be suitable for future implants.",
"title": ""
},
{
"docid": "4a6e382b9db87bf5915fec8de4a67b55",
"text": "BACKGROUND\nThe aim of the study is to analyze the nature, extensions, and dural relationships of hormonally inactive giant pituitary tumors. The relevance of the anatomic relationships to surgery is analyzed.\n\n\nMETHODS\nThere were 118 cases of hormonally inactive pituitary tumors analyzed with the maximum dimension of more than 4 cm. These cases were surgically treated in our neurosurgical department from 1995 to 2002. Depending on the anatomic extensions and the nature of their meningeal coverings, these tumors were divided into 4 grades. The grades reflected an increasing order of invasiveness of adjacent dural and arachnoidal compartments. The strategy and outcome of surgery and radiotherapy was analyzed for these 4 groups. Average duration of follow-up was 31 months.\n\n\nRESULTS\nThere were 54 giant pituitary tumors, which remained within the confines of sellar dura and under the diaphragma sellae and did not enter into the compartment of cavernous sinus (Grade I). Transgression of the medial wall and invasion into the compartment of the cavernous sinus (Grade II) was seen in 38 cases. Elevation of the dura of the superior wall of the cavernous sinus and extension of this elevation into various compartments of brain (Grade III) was observed in 24 cases. Supradiaphragmatic-subarachnoid extension (Grade IV) was seen in 2 patients. The majority of patients were treated by transsphenoidal route.\n\n\nCONCLUSIONS\nGiant pituitary tumors usually have a meningeal cover and extend into well-defined anatomic pathways. Radical surgery by a transsphenoidal route is indicated and possible in Grade I-III pituitary tumors. Such a strategy offers a reasonable opportunity for recovery in vision and a satisfactory postoperative and long-term outcome. Biopsy of the tumor followed by radiotherapy could be suitable for Grade IV pituitary tumors.",
"title": ""
},
{
"docid": "a03d0772d8c3e1fd5c954df2b93757e3",
"text": "The tumor microenvironment is a complex system, playing an important role in tumor development and progression. Besides cellular stromal components, extracellular matrix fibers, cytokines, and other metabolic mediators are also involved. In this review we outline the potential role of hypoxia, a major feature of most solid tumors, within the tumor microenvironment and how it contributes to immune resistance and immune suppression/tolerance and can be detrimental to antitumor effector cell functions. We also outline how hypoxic stress influences immunosuppressive pathways involving macrophages, myeloid-derived suppressor cells, T regulatory cells, and immune checkpoints and how it may confer tumor resistance. Finally, we discuss how microenvironmental hypoxia poses both obstacles and opportunities for new therapeutic immune interventions.",
"title": ""
},
{
"docid": "1ab13d8abe63d25ba5da7f1e19e641fe",
"text": "Recording of patient-reported outcomes (PROs) enables direct measurement of the experiences of patients with cancer. In the past decade, the use of PROs has become a prominent topic in health-care innovation; this trend highlights the role of the patient experience as a key measure of health-care quality. Historically, PROs were used solely in the context of research studies, but a growing body of literature supports the feasibility of electronic collection of PROs, yielding reliable data that are sometimes of better quality than clinician-reported data. The incorporation of electronic PRO (ePRO) assessments into standard health-care settings seems to improve the quality of care delivered to patients with cancer. Such efforts, however, have not been widely adopted, owing to the difficulties of integrating PRO-data collection into clinical workflows and electronic medical-record systems. The collection of ePRO data is expected to enhance the quality of care received by patients with cancer; however, for this approach to become routine practice, uniquely trained people, and appropriate policies and analytical solutions need to be implemented. In this Review, we discuss considerations regarding measurements of PROs, implementation challenges, as well as evidence of outcome improvements associated with the use of PROs, focusing on the centrality of PROs as part of 'big-data' initiatives in learning health-care systems.",
"title": ""
},
{
"docid": "89a1e91c2ab1393f28a6381ba94de12d",
"text": "In this paper, a simulation environment encompassing realistic propagation conditions and system parameters is employed in order to analyze the performance of future multigigabit indoor communication systems at tetrahertz frequencies. The influence of high-gain antennas on transmission aspects is investigated. Transmitter position for optimal signal coverage is also analyzed. Furthermore, signal coverage maps and achievable data rates are calculated for generic indoor scenarios with and without furniture for a variety of possible propagation conditions.",
"title": ""
},
{
"docid": "855eca0df1b44fdbe27236e5ff6c6a68",
"text": "This paper addresses the eye gaze tracking problem using a lowcost andmore convenient web camera in a desktop environment, as opposed to gaze tracking techniques requiring specific hardware, e.g., infrared high-resolution camera and infrared light sources, as well as a cumbersome calibration process. In the proposed method, we first track the human face in a real-time video sequence to extract the eye regions. Then, we combine intensity energy and edge strength to obtain the iris center and utilize the piecewise eye corner detector to detect the eye corner. We adopt a sinusoidal head model to simulate the 3-D head shape, and propose an adaptive weighted facial features embedded in the pose from the orthography and scaling with iterations algorithm, whereby the head pose can be estimated. Finally, the eye gaze tracking is accomplished by integration of the eye vector and the head movement information. Experiments are performed to estimate the eye movement and head pose on the BioID dataset and pose dataset, respectively. In addition, experiments for gaze tracking are performed in real-time video sequences under a desktop environment. The proposed method is not sensitive to the light conditions. Experimental results show that ourmethod achieves an average accuracy of around 1.28◦ without head movement and 2.27◦ with minor movement of the head. INTRODUCTION EYE gaze tracking has many potential attractive applications including human– computer interaction, virtual reality, and eye disease diagnosis. For example, it can help the disabled to control the computer effectively [1]. In addition, it can support controlling the mouse pointer with one’s eyes so that the user can speed up the selection of the focus point.Moreover, the integration of user’s gaze and face information can improve the security of the existing access control systems. Eye gaze has been used to study human cognition [2], memory [3] and multielement target tracking task [4]. Along this line, eye gaze tracking is closely related with the detection of visual saliency, which reveals a person’s International Journal of Research Available at https://edupediapublications.org/journals p-ISSN: 2348-6848 e-ISSN: 2348-795X Volume 03 Issue 13 September 2016 Available online: http://internationaljournalofresearch.org/ P a g e | 1231 focus of attention. The video-based gaze approaches commonly use two types of imaging techniques: infrared imaging and visible imaging. The former needs infrared cameras and infrared light sources to capture the infrared images,while the latter usually utilizes highresolution cameras for images (see Fig. 1). As infrared-imaging techniques utilize invisible infrared light sources to obtain the controlled light and a better contrast image, it can reduce the effects of light conditions, and produce a sharp contrast between the iris and pupil (i.e., bright-dark eye effect), as well as the reflective properties of the pupil and the cornea (PCCR) [9]–[12]. As a result, an infrared imaging-based method is capable of performing eye gaze tracking. Most of video-based approaches belong to this class. Unfortunately, an infrared-imaging-based gaze tracking system can be quite expensive. 
Other shortcomings include: 1) An infraredimaging system will not be reliable under the disturbance of other infrared d sources; 2) not all users produce the bright-dark effect, which can make the gaze tracker fail; and 3) the reflection of infrared light sources on glasses is still an issue Compared with the infrared-imaging approaches, visibleimaging methods circumvent the aforementioned problems without the need for the specific infrared devices and infrared light sources. They are not sensitive to the utilization of glasses and the infrared sources in the environment. Visible-imaging methods should work in a natural environment, where the ambient light is uncontrolled and usually results in lower contrast images. The iris center detection will become more difficult than the pupil center detection because the iris is usually partially occluded by the upper eyelid. In this paper, we concentrate on visibleimaging and present an approach to the eye gaze tracking using a web camera in a desktop environment. First, we track the human face in a realtime video sequence to extract the eye region. Then, we combine intensity energy and edge strength to locate the iris center and utilize the piecewise eye corner detector to detect the eye corner. To compensate for head movement causing gaze error, we adopt a sinusoidal head model (SHM) to simulate the 3-D head shape, and propose an adaptive-weighted facial features embedded in the POSIT algorithm (AWPOSIT), EXISTING SYSTEM: The video-based gaze approaches commonly use two types of imaging International Journal of Research Available at https://edupediapublications.org/journals p-ISSN: 2348-6848 e-ISSN: 2348-795X Volume 03 Issue 13 September 2016 Available online: http://internationaljournalofresearch.org/ P a g e | 1232 techniques: infrared imaging and visible imaging. The former needs infrared cameras and infrared light sources to capture the infrared images,while the latter usually utilizes high resolution cameras for images. Compared with the infrared-imaging approaches, visible imaging methods circumvent the aforementioned problems without the need for the specific infrared devices and infrared light sources. They are not sensitive to the utilization of glasses and the infrared sources in the environment. Visible-imaging methods should work in a natural environment, where the ambient light is uncontrolled and usually results in lower contrast images. Sugano et al. have presented an online learning algorithm within the incremental learning framework for gaze estimation, which utilized the user’s operations (i.e., mouse click) on the PC monitor. Nguyen first utilized a new training model to detect and track the eye, and then employed the cropped image of the eye to train Gaussian process functions for gaze estimation. In their applications, a user has to stabilize the position of his/her head in front of the camera after the training procedure. Williams et al. proposed a sparse and semi-supervised Gaussian process model to infer the gaze, which simplified the process of collecting training data. DISADVANTAGES OF EXISTING SYSTEM: The iris center detection will become more difficult than the pupil center detection because the iris is usually partially occluded by the upper eyelid. The construction of the classifier needs a large number of training samples, which consist of the eye images from subjects looking at different positions on the screen under the different conditions. 
They are sensitive to head motion and light changes, as well as the number of training samples. They are not tolerant to head",
"title": ""
},
{
"docid": "2d0c5f6be15408d4814b22d28b1541af",
"text": "OBJECTIVE\nOur previous study has found that circulating microRNA (miRNA, or miR) -122, -140-3p, -720, -2861, and -3149 are significantly elevated during early stage of acute coronary syndrome (ACS). This study was conducted to determine the origin of these elevated plasma miRNAs in ACS.\n\n\nMETHODS\nqRT-PCR was performed to detect the expression profiles of these 5 miRNAs in liver, spleen, lung, kidney, brain, skeletal muscles, and heart. To determine their origins, these miRNAs were detected in myocardium of acute myocardial infarction (AMI), and as well in platelets and peripheral blood mononuclear cells (PBMCs, including monocytes, circulating endothelial cells (CECs) and lymphocytes) of the AMI pigs and ACS patients.\n\n\nRESULTS\nMiR-122 was specifically expressed in liver, and miR-140-3p, -720, -2861, and -3149 were highly expressed in heart. Compared with the sham pigs, miR-122 was highly expressed in the border zone of the ischemic myocardium in the AMI pigs without ventricular fibrillation (P < 0.01), miR-122 and -720 were decreased in platelets of the AMI pigs, and miR-122, -140-3p, -720, -2861, and -3149 were increased in PBMCs of the AMI pigs (all P < 0.05). Compared with the non-ACS patients, platelets miR-720 was decreased and PBMCs miR-122, -140-3p, -720, -2861, and -3149 were increased in the ACS patients (all P < 0.01). Furthermore, PBMCs miR-122, -720, and -3149 were increased in the AMI patients compared with the unstable angina (UA) patients (all P < 0.05). Further origin identification revealed that the expression levels of miR-122 in CECs and lymphocytes, miR-140-3p and -2861 in monocytes and CECs, miR-720 in monocytes, and miR-3149 in CECs were greatly up-regulated in the ACS patients compared with the non-ACS patients, and were higher as well in the AMI patients than that in the UA patients except for the miR-122 in CECs (all P < 0.05).\n\n\nCONCLUSION\nThe elevated plasma miR-122, -140-3p, -720, -2861, and -3149 in the ACS patients were mainly originated from CECs and monocytes.",
"title": ""
},
{
"docid": "5646238d9ad52b6b96193c401e39ca50",
"text": "In this study, we present WindTalker, a novel and practical keystroke inference framework that allows an attacker to infer the sensitive keystrokes on a mobile device through WiFi-based side-channel information. WindTalker is motivated from the observation that keystrokes on mobile devices will lead to different hand coverage and the finger motions, which will introduce a unique interference to the multi-path signals and can be reflected by the channel state information (CSI). The adversary can exploit the strong correlation between the CSI fluctuation and the keystrokes to infer the user's number input. WindTalker presents a novel approach to collect the target's CSI data by deploying a public WiFi hotspot. Compared with the previous keystroke inference approach, WindTalker neither deploys external devices close to the target device nor compromises the target device. Instead, it utilizes the public WiFi to collect user's CSI data, which is easy-to-deploy and difficult-to-detect. In addition, it jointly analyzes the traffic and the CSI to launch the keystroke inference only for the sensitive period where password entering occurs. WindTalker can be launched without the requirement of visually seeing the smart phone user's input process, backside motion, or installing any malware on the tablet. We implemented Windtalker on several mobile phones and performed a detailed case study to evaluate the practicality of the password inference towards Alipay, the largest mobile payment platform in the world. The evaluation results show that the attacker can recover the key with a high successful rate.",
"title": ""
},
{
"docid": "a007b0d7b1325d17711ea89a1ac1e926",
"text": "In this paper, we study the design of the transmitter in the downlink of a multiuser and multiantenna wireless communications system, considering the realistic scenario where only an imperfect estimate of the actual channel is available at both communication ends. Precisely, the actual channel is assumed to be inside an uncertainty region around the channel estimate, which models the imperfections of the channel knowledge that may arise from, e.g., estimation Gaussian errors, quantization effects, or combinations of both sources of errors. In this context, our objective is to design a robust power allocation among the information symbols that are to be sent to the users such that the total transmitted power is minimized, while maintaining the necessary quality of service to obtain reliable communication links between the base station and the users for any possible realization of the actual channel inside the uncertainty region. This robust power allocation is obtained as the solution to a convex optimization problem, which, in general, can be numerically solved in a very efficient way, and even for a particular case of the uncertainty region, a quasi-closed form solution can be found. Finally, the goodness of the robust proposed transmission scheme is presented through numerical results. Robust designs, imperfect CSI, multiantenna systems, broadcast channel, convex optimization.",
"title": ""
},
{
"docid": "a0080a7751287b2ec32409c3cd2e3803",
"text": "Semantic Complex Event Processing (CEP) is a promising approach for analysing streams of social media data in crisis situations. Traditional CEP approaches lack the capability to semantically interpret and analyse data, which Semantic CEP attempts to address, but current approaches have a number of limitations. In this paper we survey four semantic stream processing engines, and discuss them with the specific requirements of CEP for social media monitoring in mind. Current approaches assume well-structured data, known streams and vocabularies, and mainly static event patterns and ontologies, neither of which are realistic assumptions in our scenario. Additionally, the languages commonly used for event pattern detection, i.e., SPARQL extensions, lack several important features that would facilitate more advanced statistical and textual analyses, as well as adequate support for temporal and spatial reasoning. Being able to utilize external tools for processing specific tasks would also be of great value in processing social data streams.",
"title": ""
}
] |
scidocsrr
|
a435164708a9e52139061514d0f71a56
|
The use of NARX neural networks to predict chaotic time series
|
[
{
"docid": "303548167773a86d20a3ea13209a0ef3",
"text": "This paper reports empirical evidence that a neural network model is applicable to the prediction of foreign exchange rates. Time series data and technical indicators, such as moving average, are fed to neural networks to capture the underlying `rulesa of the movement in currency exchange rates. The exchange rates between American Dollar and \"ve other major currencies, Japanese Yen, Deutsch Mark, British Pound, Swiss Franc and Australian Dollar are forecast by the trained neural networks. The traditional rescaled range analysis is used to test the `e$ciencya of each market before using historical data to train the neural networks. The results presented here show that without the use of extensive market data or knowledge, useful prediction can be made and signi\"cant paper pro\"ts can be achieved for out-of-sample data with simple technical indicators. A further research on exchange rates between Swiss Franc and American Dollar is also conducted. However, the experiments show that with e$cient market it is not easy to make pro\"ts using technical indicators or time series input neural networks. This article also discusses several issues on the frequency of sampling, choice of network architecture, forecasting periods, and measures for evaluating the model's predictive power. After presenting the experimental results, a discussion on future research concludes the paper. ( 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "64a731c3e7d98f90729afc838ccd032c",
"text": "It has previously been shown that gradient-descent learning algorithms for recurrent neural networks can perform poorly on tasks that involve long-term dependencies, i.e. those problems for which the desired output depends on inputs presented at times far in the past. We show that the long-term dependencies problem is lessened for a class of architectures called nonlinear autoregressive models with exogenous (NARX) recurrent neural networks, which have powerful representational capabilities. We have previously reported that gradient descent learning can be more effective in NARX networks than in recurrent neural network architectures that have \"hidden states\" on problems including grammatical inference and nonlinear system identification. Typically, the network converges much faster and generalizes better than other networks. The results in this paper are consistent with this phenomenon. We present some experimental results which show that NARX networks can often retain information for two to three times as long as conventional recurrent neural networks. We show that although NARX networks do not circumvent the problem of long-term dependencies, they can greatly improve performance on long-term dependency problems. We also describe in detail some of the assumptions regarding what it means to latch information robustly and suggest possible ways to loosen these assumptions.",
"title": ""
}
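Because the query and the passage above both concern NARX networks, a brief illustrative sketch may help: it builds the tapped-delay regressor y(t−1…t−dy), x(t−1…t−dx) and fits a small feedforward network as the nonlinear map. The toy series, lag orders, and use of scikit-learn's MLPRegressor are assumptions, not the setup used in the cited papers.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Minimal NARX-style one-step-ahead predictor (illustrative sketch):
# y(t) = f(y(t-1..t-dy), x(t-1..t-dx)) with f approximated by a small MLP.
rng = np.random.default_rng(0)
t = np.arange(2000)
x = np.sin(0.02 * t)                           # exogenous input (toy)
y = np.zeros_like(x)
for k in range(1, len(t)):                     # simple nonlinear AR process
    y[k] = 0.6 * y[k - 1] + np.tanh(x[k - 1]) + 0.01 * rng.standard_normal()

dy, dx = 3, 3                                  # output and input memory orders
X, Y = [], []
for k in range(max(dy, dx), len(t)):
    X.append(np.r_[y[k - dy:k], x[k - dx:k]])  # tapped-delay feature vector
    Y.append(y[k])
X, Y = np.array(X), np.array(Y)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:1500], Y[:1500])
print("held-out R^2:", model.score(X[1500:], Y[1500:]))
```

For multi-step prediction of a chaotic series, predicted outputs would be fed back into the delay line (parallel mode), which is exactly where the long-term-dependency behaviour discussed in the passage matters.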
] |
[
{
"docid": "c538390f75ae57ab65e6f9388fbfd1a0",
"text": "Deep Deterministic Policy Gradient (DDPG) algorithm has been successful for state-of-the-art performance in high-dimensional continuous control tasks. However, due to the complexity and randomness of the environment, DDPG tends to suffer from inefficient exploration and unstable training. In this work, we propose Self-Adaptive Double Bootstrapped DDPG (SOUP), an algorithm that extends DDPG to bootstrapped actor-critic architecture. SOUP improves the efficiency of exploration by multiple actor heads capturing more potential actions and multiple critic heads evaluating more reasonable Q-values collaboratively. The crux of double bootstrapped architecture is to tackle the fluctuations in performance, caused by multiple heads of spotty capacity varying throughout training. To alleviate the instability, a self-adaptive confidence mechanism is introduced to dynamically adjust the weights of bootstrapped heads and enhance the ensemble performance effectively and efficiently. We demonstrate that SOUP achieves faster learning by at least 45% while improving cumulative reward and stability substantially in comparison to vanilla DDPG on OpenAI Gym’s MuJoCo environments.",
"title": ""
},
{
"docid": "490df7bfea3338d98cbc0bd945463606",
"text": "This study examined perceived coping (perceived problem-solving ability and progress in coping with problems) as a mediator between adult attachment (anxiety and avoidance) and psychological distress (depression, hopelessness, anxiety, anger, and interpersonal problems). Survey data from 515 undergraduate students were analyzed using structural equation modeling. Results indicated that perceived coping fully mediated the relationship between attachment anxiety and psychological distress and partially mediated the relationship between attachment avoidance and psychological distress. These findings suggest not only that it is important to consider attachment anxiety or avoidance in understanding distress but also that perceived coping plays an important role in these relationships. Implications for these more complex relations are discussed for both counseling interventions and further research.",
"title": ""
},
{
"docid": "ea8450e8e1a217f1af596bb70051f5e7",
"text": "Supplier selection is nowadays one of the critical topics in supply chain management. This paper presents a new decision making approach for group multi-criteria supplier selection problem, which clubs supplier selection process with order allocation for dynamic supply chains to cope market variations. More specifically, the developed approach imitates the knowledge acquisition and manipulation in a manner similar to the decision makers who have gathered considerable knowledge and expertise in procurement domain. Nevertheless, under many conditions, exact data are inadequate to model real-life situation and fuzzy logic can be incorporated to handle the vagueness of the decision makers. As per this concept, fuzzy-AHP method is used first for supplier selection through four classes (CLASS I: Performance strategy, CLASS II: Quality of service, CLASS III: Innovation and CLASS IV: Risk), which are qualitatively meaningful. Thereafter, using simulation based fuzzy TOPSIS technique, the criteria application is quantitatively evaluated for order allocation among the selected suppliers. As a result, the approach generates decision-making knowledge, and thereafter, the developed combination of rules order allocation can easily be interpreted, adopted and at the same time if necessary, modified by decision makers. To demonstrate the applicability of the proposed approach, an illustrative example is presented and the results analyzed. & 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
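As a concrete, non-fuzzy illustration of the TOPSIS ranking step mentioned in the passage above, the sketch below scores a few hypothetical suppliers by closeness to the ideal solution. The paper itself uses fuzzy AHP weights and a simulation-based fuzzy TOPSIS; the crisp scores, weights, and benefit/cost flags here are assumptions for illustration only.

```python
import numpy as np

# Crisp TOPSIS ranking sketch (the fuzzy variant in the paper extends this).
def topsis(X, w, benefit):
    R = X / np.linalg.norm(X, axis=0)          # vector-normalize each criterion
    V = R * w                                  # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to ideal solution
    d_neg = np.linalg.norm(V - anti, axis=1)   # distance to anti-ideal solution
    return d_neg / (d_pos + d_neg)             # closeness coefficient per supplier

X = np.array([[7, 9, 9, 8],                    # hypothetical supplier scores
              [8, 7, 8, 7],                    # rows: suppliers, columns: criteria
              [9, 6, 8, 9]], dtype=float)
w = np.array([0.3, 0.3, 0.2, 0.2])             # assumed criterion weights
cc = topsis(X, w, benefit=np.array([True, True, True, False]))
print(np.argsort(-cc))                         # suppliers ranked for order allocation
```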
{
"docid": "f90860e5dff49c3c2bc59871cf60fbe9",
"text": "Recent academic procedures have depicted that work involving scientific research tends to be more prolific through collaboration and cooperation among researchers and research groups. On the other hand, discovering new collaborators who are smart enough to conduct joint-research work is accompanied with both difficulties and opportunities. One notable difficulty as well as opportunity is the big scholarly data. In this paper, we satisfy the demand of collaboration recommendation through co-authorship in an academic network. We propose a random walk model using three academic metrics as basics for recommending new collaborations. Each metric is studied through mutual paper co-authoring information and serves to compute the link importance such that a random walker is more likely to visit the valuable nodes. Our experiments on DBLP dataset show that our approach can improve the precision, recall rate and coverage rate of recommendation, compared with other state-of-the-art approaches.",
"title": ""
},
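A minimal sketch of the random-walk idea in the preceding passage: a random walk with restart over a co-authorship adjacency matrix, whose visit scores rank candidate collaborators. The paper weights links with three academic metrics; the plain co-author counts, restart probability, and iteration count below are illustrative assumptions.

```python
import numpy as np

# Random walk with restart over a co-authorship graph (illustrative sketch).
def random_walk_with_restart(A, seed, alpha=0.15, iters=100):
    P = A / A.sum(axis=0, keepdims=True)       # column-stochastic transition matrix
    r = np.zeros(len(A))
    r[seed] = 1.0                              # restart at the target researcher
    p = r.copy()
    for _ in range(iters):
        p = (1 - alpha) * P @ p + alpha * r    # walk step plus restart
    return p                                   # visit scores over all researchers

A = np.array([[0, 3, 1, 0],                    # toy co-authorship counts
              [3, 0, 2, 1],
              [1, 2, 0, 4],
              [0, 1, 4, 0]], dtype=float)
scores = random_walk_with_restart(A, seed=0)
print(np.argsort(-scores))                     # candidate collaborators by score
```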
{
"docid": "e1366b0128c4d76addd57bb2b02a19b5",
"text": "OBJECTIVE\nThe present study examined the association between child sexual abuse (CSA) and sexual health outcomes in young adult women. Maladaptive coping strategies and optimism were investigated as possible mediators and moderators of this relationship.\n\n\nMETHOD\nData regarding sexual abuse, coping, optimism and various sexual health outcomes were collected using self-report and computerized questionnaires with a sample of 889 young adult women from the province of Quebec aged 20-23 years old.\n\n\nRESULTS\nA total of 31% of adult women reported a history of CSA. Women reporting a severe CSA were more likely to report more adverse sexual health outcomes including suffering from sexual problems and engaging in more high-risk sexual behaviors. CSA survivors involving touching only were at greater risk of reporting more negative sexual self-concept such as experiencing negative feelings during sex than were non-abused participants. Results indicated that emotion-oriented coping mediated outcomes related to negative sexual self-concept while optimism mediated outcomes related to both, negative sexual self-concept and high-risk sexual behaviors. No support was found for any of the proposed moderation models.\n\n\nCONCLUSIONS\nSurvivors of more severe CSA are more likely to engage in high-risk sexual behaviors that are potentially harmful to their health as well as to experience more sexual problems than women without a history of sexual victimization. Personal factors, namely emotion-oriented coping and optimism, mediated some sexual health outcomes in sexually abused women. The results suggest that maladaptive coping strategies and optimism regarding the future may be important targets for interventions optimizing sexual health and sexual well-being in CSA survivors.",
"title": ""
},
{
"docid": "1289f47ea43ddd72fc90977b0a538d1c",
"text": "This study identifies evaluative, attitudinal, and behavioral factors that enhance or reduce the likelihood of consumers aborting intended online transactions (transaction abort likelihood). Path analyses show that risk perceptions associated with eshopping have direct influence on the transaction abort likelihood, whereas benefit perceptions do not. In addition, consumers who have favorable attitudes toward e-shopping, purchasing experiences from the Internet, and high purchasing frequencies from catalogs are less likely to abort intended transactions. The results also show that attitude toward e-shopping mediate relationships between the transaction abort likelihood and other predictors (i.e., effort saving, product offering, control in the information search, and time spent on the Internet per visit). # 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "37249acdade38893c6d026ba1961ccc1",
"text": "High performance PMOSFETs with gate length as short as 18-nm are reported. A self-aligned double-gate MOSFET structure (FinFET) is used to suppress the short channel effect. A 45 nm gate-length PMOS FinEET has an I/sub dsat/ of 410 /spl mu/A//spl mu/m (or 820 /spl mu/A//spl mu/m depending on the definition of the width of a double-gate device) at Vd=Vg=1.2 V and Tox=2.5 nm. The quasi-planar nature of this variant of the double-gate MOSFETs makes device fabrication relatively easy using the conventional planar MOSFET process technologies. Simulation shows possible scaling to 10-nm gate length.",
"title": ""
},
{
"docid": "65d2cb9f55ee169347df6dc957c36629",
"text": "This paper presents data driven control system of DC motor by using system identification process. In this paper we use component base modeling similar to real DC motor by using simscape electronic systems for obtaining the input voltage and output speed of DC motor, the system identification toolbox and the nonlinear autoregressive with exogenous input (NARX) neural network for identification and obtaining the model of an object. The object model and training the neural network for data driven control system are developed by using MATLAB/SIMULINK platform. So, simulation results of this paper present the advantage of the suggested control method and the acceptable accuracy with respect to dynamic characteristics of the system.",
"title": ""
},
{
"docid": "96d2a6082de66034759b521547e8c8d2",
"text": "Recent developments in deep convolutional neural networks (DCNNs) have shown impressive performance improvements on various object detection/recognition problems. This has been made possible due to the availability of large annotated data and a better understanding of the nonlinear mapping between images and class labels, as well as the affordability of powerful graphics processing units (GPUs). These developments in deep learning have also improved the capabilities of machines in understanding faces and automatically executing the tasks of face detection, pose estimation, landmark localization, and face recognition from unconstrained images and videos. In this article, we provide an overview of deep-learning methods used for face recognition. We discuss different modules involved in designing an automatic face recognition system and the role of deep learning for each of them. Some open issues regarding DCNNs for face recognition problems are then discussed. This article should prove valuable to scientists, engineers, and end users working in the fields of face recognition, security, visual surveillance, and biometrics.",
"title": ""
},
{
"docid": "c3cba3dd3fd219fb2c025b8d8b40a480",
"text": "We analyze the sum-rate performance of a multi- antenna downlink system carrying more users than transmit antennas, with partial channel knowledge at the transmitter due to finite rate feedback. In order to exploit multiuser diversity, we show that the transmitter must have, in addition to directional information, information regarding the quality of each channel. Such information should reflect both the channel magnitude and the quantization error. Expressions for the SINR distribution and the sum-rate are derived, and tradeoffs between the number of feedback bits, the number of users, and the SNR are observed. In particular, for a target performance, having more users reduces feedback load.",
"title": ""
},
{
"docid": "79c4578c1383233c48f9cf6943070892",
"text": "Performance optimization, system reliability and operational efficiency are key characteristics of smart grid systems. In this paper a novel model of smart grid-connected PV/WT hybrid system is developed. It comprises photovoltaic array, wind turbine, asynchronous (induction) generator, controller and converters. The model is implemented using MATLAB/SIMULINK software package. Perturb and observe (P&O) algorithm is used for maximizing the generated power based on maximum power point tracker (MPPT) implementation. The dynamic behavior of the proposed model is examined under different operating conditions. Solar irradiance, temperature and wind speed data is gathered from a grid connected, 28.8kW solar power system located in central Manchester. Real-time measured parameters are used as inputs for the developed system. The proposed model and its control strategy offer a proper tool for smart grid performance optimization.",
"title": ""
},
{
"docid": "d4c7493c755a3fde5da02e3f3c873d92",
"text": "Edge-directed image super resolution (SR) focuses on ways to remove edge artifacts in upsampled images. Under large magnification, however, textured regions become blurred and appear homogenous, resulting in a super-resolution image that looks unnatural. Alternatively, learning-based SR approaches use a large database of exemplar images for “hallucinating” detail. The quality of the upsampled image, especially about edges, is dependent on the suitability of the training images. This paper aims to combine the benefits of edge-directed SR with those of learning-based SR. In particular, we propose an approach to extend edge-directed super-resolution to include detail from an image/texture example provided by the user (e.g., from the Internet). A significant benefit of our approach is that only a single exemplar image is required to supply the missing detail – strong edges are obtained in the SR image even if they are not present in the example image due to the combination of the edge-directed approach. In addition, we can achieve quality results at very large magnification, which is often problematic for both edge-directed and learning-based approaches.",
"title": ""
},
{
"docid": "21130eded44790720e79a750ecdf3847",
"text": "Enabled by Web 2.0 technologies social media provide an unparalleled platform for consumers to share their product experiences and opinions---through word-of-mouth (WOM) or consumer reviews. It has become increasingly important to understand how WOM content and metrics thereof are related to consumer purchases and product sales. By integrating network analysis with text sentiment mining techniques, we propose product comparison networks as a novel construct, computed from consumer product reviews. To test the validity of these product ranking measures, we conduct an empirical study based on a digital camera dataset from Amazon.com. The results demonstrate significant linkage between network-based measures and product sales, which is not fully captured by existing review measures such as numerical ratings. The findings provide important insights into the business impact of social media and user-generated content, an emerging problem in business intelligence research. From a managerial perspective, our results suggest that WOM in social media also constitutes a competitive landscape for firms to understand and manipulate.",
"title": ""
},
{
"docid": "4bf253b2349978d17fd9c2400df61d21",
"text": "This paper proposes an architecture for the mapping between syntax and phonology – in particular, that aspect of phonology that determines the linear ordering of words. We propose that linearization is restricted in two key ways. (1) the relative ordering of words is fixed at the end of each phase, or ‘‘Spell-out domain’’; and (2) ordering established in an earlier phase may not be revised or contradicted in a later phase. As a consequence, overt extraction out of a phase P may apply only if the result leaves unchanged the precedence relations established in P. We argue first that this architecture (‘‘cyclic linearization’’) gives us a means of understanding the reasons for successive-cyclic movement. We then turn our attention to more specific predictions of the proposal: in particular, the e¤ects of Holmberg’s Generalization on Scandinavian Object Shift; and also the Inverse Holmberg Effects found in Scandinavian ‘‘Quantifier Movement’’ constructions (Rögnvaldsson (1987); Jónsson (1996); Svenonius (2000)) and in Korean scrambling configurations (Ko (2003, 2004)). The cyclic linearization proposal makes predictions that cross-cut the details of particular syntactic configurations. For example, whether an apparent case of verb fronting results from V-to-C movement or from ‘‘remnant movement’’ of a VP whose complements have been removed by other processes, the verb should still be required to precede its complements after fronting if it preceded them before fronting according to an ordering established at an earlier phase. We argue that ‘‘cross-construction’’ consistency of this sort is in fact found.",
"title": ""
},
{
"docid": "094e09f2d7d7ce91b9bbf30f31825eb3",
"text": "・ This leads to the problem of structured matching of regions and phrases: (1) individual regions agree with their corresponding phrases. (2) visual relations among regions agree with textual relations among corresponding phrases. ・ For the task of phrase localization, we propose a structured matching of phrases and regions that encourages the semantic relations between phrases to agree with the visual relations between regions.",
"title": ""
},
{
"docid": "ec271fa90e4eb72cdda63e0dfddc5b80",
"text": "One property of electromagnetic waves that has been recently explored is the ability to multiplex multiple beams, such that each beam has a unique helical phase front. The amount of phase front 'twisting' indicates the orbital angular momentum state number, and beams with different orbital angular momentum are orthogonal. Such orbital angular momentum based multiplexing can potentially increase the system capacity and spectral efficiency of millimetre-wave wireless communication links with a single aperture pair by transmitting multiple coaxial data streams. Here we demonstrate a 32-Gbit s(-1) millimetre-wave link over 2.5 metres with a spectral efficiency of ~16 bit s(-1) Hz(-1) using four independent orbital-angular momentum beams on each of two polarizations. All eight orbital angular momentum channels are recovered with bit-error rates below 3.8 × 10(-3). In addition, we demonstrate a millimetre-wave orbital angular momentum mode demultiplexer to demultiplex four orbital angular momentum channels with crosstalk less than -12.5 dB and show an 8-Gbit s(-1) link containing two orbital angular momentum beams on each of two polarizations.",
"title": ""
},
{
"docid": "b8ac61e2026f3dd7e775d440dcb43772",
"text": "This paper presents a design methodology of a highly efficient power link based on Class-E driven, inductively coupled coil pair. An optimal power link design for retinal prosthesis and/or other implants must take into consideration the allowable safety limits of magnetic fields, which in turn govern the inductances of the primary and secondary coils. In retinal prosthesis, the optimal coil inductances have to deal with the constraints of the coil sizes, the tradeoffs between the losses, H-field limitation and dc supply voltage required by the Class-E driver. Our design procedure starts with the formation of equivalent circuits, followed by the analysis of the loss of the rectifier and coils and the H-field for induced voltage and current. Both linear and nonlinear models for the analysis are presented. Based on the procedure, an experimental power link is implemented with an overall efficiency of 67% at the optimal distance of 7 mm between the coils. In addition to the coil design methodology, we are also presenting a closed-loop control of Class-E amplifier for any duty cycle and any value of the systemQ.",
"title": ""
},
{
"docid": "2b0969dd0089bd2a2054957477ea4ce1",
"text": "A self-signaling action is an action chosen partly to secure good news about one’s traits or abilities, even when the action has no causal impact on these traits and abilities. We discuss some of the odd things that happen when self-signaling is introduced into an otherwise rational conception of action. We employ a signaling game perspective in which the diagnostic signals are an endogenous part of the equilibrium choice. We are interested (1) in pure self-signaling, separate from any desire to be regarded well by others, and (2) purely diagnostic motivation, that is, caring about what an action might reveal about a trait even when that action has no causal impact on it. When diagnostic motivation is strong, the person’s actions exhibit a rigidity characteristic of personal rules. Our model also predicts that a boost in self-image positively affects actions even though it leaves true preferences unchanged — we call this a “moral placebo effect.” 1 The chapter draws on (co-authored) Chapter 3 of Bodner’s doctoral dissertation (Bodner, 1995) and an unpublished MIT working paper (Bodner and Prelec, 1997). The authors thank Bodner’s dissertation advisors France Leclerc and Richard Thaler, workshop discussants Thomas Schelling, Russell Winer, and Mathias Dewatripont, and George Ainslie, Michael Bratman, Juan Carillo, Itzakh Gilboa, George Loewenstein, Al Mela, Matthew Rabin, Duncan Simester and Florian Zettelmeyer for comments on these ideas (with the usual disclaimer). We are grateful to Birger Wernerfelt for drawing attention to Bernheim's work on social conformity. Author addresses: Bodner – Director, Learning Innovations, 13\\4 Shimshon St., Jerusalem, 93501, Israel, learning@netvision.net.il; Prelec — E56-320, MIT, Sloan School, 38 Memorial Drive, Cambridge, MA 02139, dprelec@mit.edu. 1 Psychological evidence When we make a choice we reveal something of our inner traits or dispositions, not only to others, but also to ourselves. After the fact, this can be a source of pleasure or pain, depending on whether we were impressed or disappointed by our actions. Before the fact, the anticipation of future pride or remorse can influence what we choose to do. In a previous paper (Bodner and Prelec, 1997), we described how the model of a utility maximizing individual could be expanded to include diagnostic utility as a separate motive for action. We review the basic elements of that proposal here. The inspiration comes directly from signaling games in which actions of one person provide an informative signal to others, which in turn affects esteem (Bernheim, 1994). Here, however, actions provide a signal to ourselves, that is, actions are selfsignaling. For example, a person who takes the daily jog in spite of the rain may see that as a gratifying signal of willpower, dedication, or future well being. For someone uncertain about where he or she stands with respect to these dispositions, each new choice can provide a bit of good or bad \"news.” We incorporate the value of such \"news\" into the person's utility function. The notion that a person may draw inferences from an action he enacted partially in order to gain that inference has been posed as a philosophical paradox (e.g. Campbell and Sawden, 1985; Elster, 1985, 1989). A key problem is the following: Suppose that the disposition in question is altruism, and a person interprets a 25¢ donation to a panhandler as evidence of altruism. 
If the boost in self-esteem makes it worth giving the quarter even when there is no concern for the poor, then clearly such a donation is not valid evidence of altruism. Logically, giving is valid evidence of high altruism only if a person with low altruism would not have given the quarter. This reasoning motivates our equilibrium approach, in which inferences from actions are an endogenous part of the equilibrium choice. As an empirical matter, several studies have demonstrated that diagnostic considerations do indeed affect behavior (Quattrone and Tversky, 1984; Shafir and Tversky, 1992; Bodner, 1995). An elegant experiment by Quattrone and Tversky (1984) both defines the self-signaling phenomenon and demonstrates its existence. Quattrone and Tversky first asked each subject to take a cold pressor pain test in which the subject's arm is submerged in a container of cold water until the subject can no longer tolerate the pain. Subsequently the subject was told that recent medical studies had discovered a certain inborn heart condition, and that people with this condition are “frequently ill, prone to heart-disease, and have shorter-than-average life expectancy.” Subjects were also told that this type could be identified by the effect of exercise on the cold pressor test. Subjects were randomly assigned to one of two conditions in which they were told that the bad type of heart was associated with either increases or with decreases in tolerance to the cold water after exercise. Subjects then repeated the cold pressor test, after riding an Exercycle for one minute. As predicted, the vast majority of subjects showed changes in tolerance on the second cold pressor trial in the direction correlated with “good news”—if told that decreased tolerance is diagnostic of a bad heart they endured the near-freezing water longer (and vice versa). The result shows that people are willing to bear painful consequences for a behavior that is a signal, though not a cause, of a medical diagnosis. An experiment by Shafir and Tversky (1992) on “Newcomb's paradox” reinforces the same point. In the philosophical version of the paradox, a person is (hypothetically) presented with two boxes, A and B. Box A contains either nothing or some large amount of money deposited by an “omniscient being.” Box B contains a small amount of money for sure. The decision-maker doesn’t know what Box A contains, and has to choose whether to take the contents of that box (A) or of both boxes (A+B). What makes the problem a paradox is that the person is asked to believe that the omniscient being has already predicted her choice, and on that basis has already either “punished” a greedy choice of (A+B) with no deposit in A or “rewarded” a choice of (A) with a large deposit. The dominance principle argues in favor of choosing both boxes, because the deposits are fixed at the moment of choice. This is the philosophical statement of the problem. In the actual experiment, Shafir and Tversky presented a variant of Newcomb’s problem at the end of another, longer experiment, in which subjects repeatedly played a Prisoner’s Dilemma game against (virtual) opponents via computer terminals. After finishing these games, a final “bonus” problem appeared, with the two Newcomb boxes, and subjects had to choose whether to take money from one box or from both boxes. 
The experimental cover story did not mention an omniscient being but instead informed the subjects that \"a program developed at MIT recently was applied during the entire session [of Prisoner’s Dilemma choices] to analyze the pattern of your preference.” Ostensibly, this mighty program could predict choices, one or two boxes, with 85% accuracy, and, of course, if the program predicted a choice of both boxes it would then put nothing in Box A. Although it was evident that the money amounts were already set at the moment of choice, most experimental subjects opted for the single box. It is “as if” they believed that by declining to take the money in Box B, they could change the amount of money already deposited in box A. Although these are relatively recent experiments, their results are consistent with a long stream of psychological research, going back at least to the James-Lange theory of emotions which claimed that people infer their own states from behavior (e.g., they feel afraid if they see themselves running). The notion that people adopt the perspective of an outside observer when interpreting their own actions has been extensively explored in the research on self-perception (Bem, 1972). In a similar vein, there is an extensive literature confirming the existence of “self-handicapping” strategies, where a person might get too little sleep or under-prepare for an examination. In such a case, a successful performance could be attributed to ability while unsuccessful performance could be externalized as due to the lack of proper preparation (e.g. Berglas and Jones, 1978; Berglas and Baumeister, 1993). This broader context of psychological research suggests that we should view the results of Quattrone and Tversky, and Shafir and Tversky not as mere curiosities, applying to only contrived experimental situations, but instead as evidence of a general motivational “short circuit.” Motivation does not require causality, even when the lack of causality is utterly transparent. If anything, these experiments probably underestimate the impact of diagnosticity in realistic decisions, where the absence of causal links between actions and dispositions is less evident. Formally, our model distinguishes between outcome utility — the utility of the anticipated causal consequences of choice — and diagnostic utility — the value of the adjusted estimate of one’s disposition, adjusted in light of the choice. Individuals act so as to maximize some combination of the two sources of utility, and (in one version of the model) make correct inferences about what their choices imply about their dispositions. When diagnostic utility is sufficiently important, the individual chooses the same action independent of disposition. We interpret this as a personal rule. We describe other ways in which the behavior of self-signaling individuals is qualitatively different from that of standard economic agents. First, a self-signaling person will be more likely to reveal discrepancies between resolutions and actions when resolutions pertain to actions that are contingent or delayed. Thus she might honestly commit to do some worthy action if the circumstances requiring t",
"title": ""
},
{
"docid": "c8bbc713aecbc6682d21268ee58ca258",
"text": "Traditional approaches to knowledge base completion have been based on symbolic representations. Lowdimensional vector embedding models proposed recently for this task are attractive since they generalize to possibly unlimited sets of relations. A significant drawback of previous embedding models for KB completion is that they merely support reasoning on individual relations (e.g., bornIn(X,Y )⇒ nationality(X,Y )). In this work, we develop models for KB completion that support chains of reasoning on paths of any length using compositional vector space models. We construct compositional vector representations for the paths in the KB graph from the semantic vector representations of the binary relations in that path and perform inference directly in the vector space. Unlike previous methods, our approach can generalize to paths that are unseen in training and, in a zero-shot setting, predict target relations without supervised training data for that relation.",
"title": ""
}
] |
scidocsrr
|
b27fbe92b8d05ff3f505035c84ac2873
|
Fritzing: a tool for advancing electronic prototyping for designers
|
[
{
"docid": "4bd345055dc13160b4bff4e245757763",
"text": "Processing: A Programming Handbook for Visual Designers and Artists With this completely revised edition, Casey Reas and Ben Fry show readers how. Processing: A Programming Handbook for Visual Designers and Artists by Ben Fry, Casey Reas, John Maeda download pdf book. Jun 28, 2010 – All right,. With Ben Fry, Reas initiated Processing in 2001. Reas and Fry published Processing: A Programming Handbook for Visual Designers and Artists, a comprehensive introduction to programming within the context of visual media (MIT Press.",
"title": ""
}
] |
[
{
"docid": "f0242a2a54b1c4538abdd374c74f69f6",
"text": "Background: An increasing research effort has devoted to just-in-time (JIT) defect prediction. A recent study by Yang et al. at FSE'16 leveraged individual change metrics to build unsupervised JIT defect prediction model. They found that many unsupervised models performed similarly to or better than the state-of-the-art supervised models in effort-aware JIT defect prediction. Goal: In Yang et al.'s study, code churn (i.e. the change size of a code change) was neglected when building unsupervised defect prediction models. In this study, we aim to investigate the effectiveness of code churn based unsupervised defect prediction model in effort-aware JIT defect prediction. Methods: Consistent with Yang et al.'s work, we first use code churn to build a code churn based unsupervised model (CCUM). Then, we evaluate the prediction performance of CCUM against the state-of-the-art supervised and unsupervised models under the following three prediction settings: cross-validation, time-wise cross-validation, and cross-project prediction. Results: In our experiment, we compare CCUM against the state-of-the-art supervised and unsupervised JIT defect prediction models. Based on six open-source projects, our experimental results show that CCUM performs better than all the prior supervised and unsupervised models. Conclusions: The result suggests that future JIT defect prediction studies should use CCUM as a baseline model for comparison when a novel model is proposed.",
"title": ""
},
{
"docid": "9f5e077550650f3ecf9fd5f25e33330c",
"text": "We study a random graph model called the “stochastic block model” in statistics and the “planted partition model” in theoretical computer science. In its simplest form, this is a random graph with two equal-sized classes of vertices, with a within-class edge probability of q and a between-class edge probability of q′. A striking conjecture of Decelle, Krzkala, Moore and Zdeborová [9], based on deep, nonrigorous ideas from statistical physics, gave a precise prediction for the algorithmic threshold of clustering in the sparse planted partition model. In particular, if q = a/n and q′ = b/n, s = (a − b)/2 and d = (a + b)/2 then Decelle et al. conjectured that it is possible to efficiently cluster in a way correlated with the true partition if s > d and impossible if s < d. By comparison, until recently the best-known rigorous result showed that clustering is possible if s > Cd lnd for sufficiently large C. In a previous work, we proved that indeed it is information theoretically impossible to cluster if s ≤ d and moreover that it is information theoretically impossible to even estimate the model parameters from the graph when s < d. Here we prove the rest of the conjecture by providing an efficient algorithm for clustering in a way that is correlated with the true partition when s > d. A different independent proof of the same result was recently obtained by Massoulié [21]. U.C. Berkeley. Supported by NSF grant DMS-1106999, NSF grant CCF 1320105 and DOD ONR grant N000141110140 U.T. Austin and the University of Bonn. Supported by NSF grant DMS-1106999 and DOD ONR grant N000141110140 U.C. Berkeley and the Australian National University. Supported by an Alfred Sloan Fellowship and NSF grant DMS-1208338.",
"title": ""
},
{
"docid": "cc37744c95e5e41cb46b166132da53f6",
"text": "This work is part of research to build a system to combine facial and prosodic information to recognize commonly occurring user states such as delight and frustration. We create two experimental situations to elicit two emotional states: the first involves recalling situations while expressing either delight or frustration; the second experiment tries to elicit these states directly through a frustrating experience and through a delightful video. We find two significant differences in the nature of the acted vs. natural occurrences of expressions. First, the acted ones are much easier for the computer to recognize. Second, in 90% of the acted cases, participants did not smile when frustrated, whereas in 90% of the natural cases, participants smiled during the frustrating interaction, despite self-reporting significant frustration with the experience. This paper begins to explore the differences in the patterns of smiling that are seen under natural frustration and delight conditions, to see if there might be something measurably different about the smiles in these two cases, which could ultimately improve the performance of classifiers applied to natural expressions.",
"title": ""
},
{
"docid": "58a016629de2a2556fae9ca3fa81040a",
"text": "This paper studies a type of image priors that are constructed implicitly through the alternating direction method of multiplier (ADMM) algorithm, called the algorithm-induced prior. Different from classical image priors which are defined before running the reconstruction algorithm, algorithm-induced priors are defined by the denoising procedure used to replace one of the two modules in the ADMM algorithm. Since such prior is not explicitly defined, analyzing the performance has been difficult in the past. Focusing on the class of symmetric smoothing filters, this paper presents an explicit expression of the prior induced by the ADMM algorithm. The new prior is reminiscent to the conventional graph Laplacian but with stronger reconstruction performance. It can also be shown that the overall reconstruction has an efficient closed-form implementation if the associated symmetric smoothing filter is low rank. The results are validated with experiments on image inpainting.",
"title": ""
},
{
"docid": "fd35019f37ea3b05b7b6a14bf74d5ad1",
"text": "Given the tremendous growth of sport fans, the “Intelligent Arena”, which can greatly improve the fun of traditional sports, becomes one of the new-emerging applications and research topics. The development of multimedia computing and artificial intelligence technologies support intelligent sport video analysis to add live video broadcast, score detection, highlight video generation, and online sharing functions to the intelligent arena applications. In this paper, we have proposed a deep learning based video analysis scheme for intelligent basketball arena applications. First of all, with multiple cameras or mobile devices capturing the activities in arena, the proposed scheme can automatically select the camera to give high-quality broadcast in real-time. Furthermore, with basketball energy image based deep conventional neural network, we can detect the scoring clips as the highlight video reels to support the wonderful actions replay and online sharing functions. Finally, evaluations on a built real-world basketball match dataset demonstrate that the proposed system can obtain 94.59% accuracy with only less than 45m s processing time (i.e., 10m s broadcast camera selection, and 35m s for scoring detection) for each frame. As the outstanding performance, the proposed deep learning based basketball video analysis scheme is implemented into a commercial intelligent basketball arena application named “Standz Basketball”. Although the application had been only released for one month, it achieves the 85t h day download ranking place in the sport category of Chinese iTunes market.",
"title": ""
},
{
"docid": "e723f76f4c9b264cbf4361b72c7cbf10",
"text": "With the constant growth in Information and Communication Technology (ICT) in the last 50 years or so, electronic communication has become part of the present day system of living. Equally, smileys or emoticons were innovated in 1982, and today the genre has attained a substantial patronage in various aspects of computer-mediated communication (CMC). Ever since written forms of electronic communication lack the face-to-face (F2F) situation attributes, emoticons are seen as socio-emotional suppliers to the CMC. This article reviews scholarly research in that field in order to compile variety of investigations on the application of emoticons in some facets of CMC, i.e. Facebook, Instant Messaging (IM), and Short Messaging Service (SMS). Key findings of the review show that emoticons do not just serve as paralanguage elements rather they are compared to word morphemes with distinctive significative functions. In other words, they are morpheme-like units and could be derivational, inflectional, or abbreviations but not unbound. The findings also indicate that emoticons could be conventionalized as well as being paralinguistic elements, therefore, they should be approached as contributory to conversation itself not mere compensatory to language.",
"title": ""
},
{
"docid": "8318d49318f442749bfe3a33a3394f42",
"text": "Driving Scene understanding is a key ingredient for intelligent transportation systems. To achieve systems that can operate in a complex physical and social environment, they need to understand and learn how humans drive and interact with traffic scenes. We present the Honda Research Institute Driving Dataset (HDD), a challenging dataset to enable research on learning driver behavior in real-life environments. The dataset includes 104 hours of real human driving in the San Francisco Bay Area collected using an instrumented vehicle equipped with different sensors. We provide a detailed analysis of HDD with a comparison to other driving datasets. A novel annotation methodology is introduced to enable research on driver behavior understanding from untrimmed data sequences. As the first step, baseline algorithms for driver behavior detection are trained and tested to demonstrate the feasibility of the proposed task.",
"title": ""
},
{
"docid": "a8ddaed8209d09998159014307233874",
"text": "Traditional image-based 3D reconstruction methods use multiple images to extract 3D geometry. However, it is not always possible to obtain such images, for example when reconstructing destroyed structures using existing photographs or paintings with proper perspective (figure 1), and reconstructing objects without actually visiting the site using images from the web or postcards (figure 2). Even when multiple images are possible, parts of the scene appear in only one image due to occlusions and/or lack of features to match between images. Methods for 3D reconstruction from a single image do exist (e.g. [1] and [2]). We present a new method that is more accurate and more flexible so that it can model a wider variety of sites and structures than existing methods. Using this approach, we reconstructed in 3D many destroyed structures using old photographs and paintings. Sites all over the world have been reconstructed from tourist pictures, web pages, and postcards.",
"title": ""
},
{
"docid": "d76e649c6daeb71baf377c2b36623e29",
"text": "The somatic marker hypothesis proposes that decision-making is a process that depends on emotion. Studies have shown that damage of the ventromedial prefrontal (VMF) cortex precludes the ability to use somatic (emotional) signals that are necessary for guiding decisions in the advantageous direction. However, given the role of the amygdala in emotional processing, we asked whether amygdala damage also would interfere with decision-making. Furthermore, we asked whether there might be a difference between the roles that the amygdala and VMF cortex play in decision-making. To address these two questions, we studied a group of patients with bilateral amygdala, but not VMF, damage and a group of patients with bilateral VMF, but not amygdala, damage. We used the \"gambling task\" to measure decision-making performance and electrodermal activity (skin conductance responses, SCR) as an index of somatic state activation. All patients, those with amygdala damage as well as those with VMF damage, were (1) impaired on the gambling task and (2) unable to develop anticipatory SCRs while they pondered risky choices. However, VMF patients were able to generate SCRs when they received a reward or a punishment (play money), whereas amygdala patients failed to do so. In a Pavlovian conditioning experiment the VMF patients acquired a conditioned SCR to visual stimuli paired with an aversive loud sound, whereas amygdala patients failed to do so. The results suggest that amygdala damage is associated with impairment in decision-making and that the roles played by the amygdala and VMF in decision-making are different.",
"title": ""
},
{
"docid": "2cc86f9445e09d966bc582f99bc068f6",
"text": "The paper presents possibilities and concepts of a university student professional profile forming with a possible implementation in form of a decision support system (DSS). Depending on the obligatory courses, requested by law or university standards, the general background of the profile is created. The variability of profile forming is assured by voluntary and partially voluntary choice of courses that students take each semester. The novelty of proposed concept lies in the possibilities to use various decision-making algorithms and methods for profile forming. The concept is based on empirical observations, theoretical assumptions, strongly on the theory of decision making and its properties.",
"title": ""
},
{
"docid": "748996944ebd52a7d82c5ca19b90656b",
"text": "The experiment was conducted with three biofloc treatments and one control in triplicate in 500 L capacity indoor tanks. Biofloc tanks, filled with 350 L of water, were fed with sugarcane molasses (BFTS), tapioca flour (BFTT), wheat flour (BFTW) and clean water as control without biofloc and allowed to stand for 30 days. The postlarvae of Litopenaeus vannamei (Boone, 1931) with an Average body weight of 0.15 0.02 g were stocked at the rate of 130 PL m 2 and cultured for a period of 60 days fed with pelleted feed at the rate of 1.5% of biomass. The total suspended solids (TSS) level was maintained at around 500 mg L 1 in BFT tanks. The addition of carbohydrate significantly reduced the total ammoniaN (TAN), nitrite-N and nitrate-N in water and it significantly increased the total heterotrophic bacteria (THB) population in the biofloc treatments. There was a significant difference in the final average body weight (8.49 0.09 g) in the wheat flour treatment (BFTW) than those treatment and control group of the shrimp. Survival of the shrimps was not affected by the treatments and ranged between 82.02% and 90.3%. The proximate and chemical composition of biofloc and proximate composition of the shrimp was significantly different between the biofloc treatments and control. Tintinids, ciliates, copepods, cyanobacteria and nematodes were identified in all the biofloc treatments, nematodes being the most dominant group of organisms in the biofloc. It could be concluded that the use of wheat flour (BFTW) effectively enhanced the biofloc production and contributed towards better water quality which resulted in higher production of shrimp.",
"title": ""
},
{
"docid": "cdbae854801ba0eda33a88a35246814f",
"text": "BACKGROUND\nThis study evaluates the dose distribution of reversed planned tangential beam intensity modulated radiotherapy (IMRT) compared to standard wedged tangential beam three-dimensionally planned conformal radiotherapy (3D-CRT) of the chest wall in unselected postmastectomy breast cancer patients\n\n\nMETHODS\nFor 20 unselected subsequent postmastectomy breast cancer patients tangential beam IMRT and tangential beam 3D-CRT plans were generated for the radiotherapy of the chest wall. The prescribed dose was 50 Gy in 25 fractions. Dose-volume histograms were evaluated for the PTV and organs at risk. Parameters of the dose distribution were compared using the Wilcoxon matched pairs test.\n\n\nRESULTS\nTangential beam IMRT statistically significantly reduced the ipsilateral mean lung dose by an average of 21% (1129 cGy versus 1437 cGy). In all patients treated on the left side, the heart volume encompassed by the 70% isodose line (V70%; 35 Gy) was reduced by an average of 43% (5.7% versus 10.6%), and the mean heart dose by an average of 20% (704 cGy versus 877 cGy). The PTV showed a significantly better conformity index with IMRT; the homogeneity index was not significantly different.\n\n\nCONCLUSIONS\nTangential beam IMRT significantly reduced the dose-volume of the ipsilateral lung and heart in unselected postmastectomy breast cancer patients.",
"title": ""
},
{
"docid": "70be8e5a26cb56fdd2c230cf36e00364",
"text": "If investors are not fully rational, what can smart money do? This paper provides an example in which smart money can strategically take advantage of investors’ behavioral biases and manipulate the price process to make profit. The paper considers three types of traders, behavior-driven investors who are less willing to sell losers than to sell winners (dispositional effect), arbitrageurs, and a manipulator who can influence asset prices. We show that, due to the investors’ behavioral biases and the limit of arbitrage, the manipulator can profit from a “pump-and-dump” trading strategy by accumulating the speculative asset while pushing the asset price up, and then selling the asset at high prices. Since nobody has private information, manipulation here is completely trade-based. The paper also endogenously derives several asset-pricing anomalies, including excess volatility, momentum and reversal. As an empirical test, the paper presents some empirical evidence from the U.S. SEC prosecution of “pump-and-dump” manipulation cases that are consistent with our model. JEL: G12, G18",
"title": ""
},
{
"docid": "e02e9a4347ce290ed1ce5780014cae35",
"text": "Objective: Our report describes a case of nonpuerperal induced lactation in a transgender woman. Methods: We present the relevant clinical and laboratory findings, along with a review of the relevant literature. Results: A 30-year-old transgender woman who had been receiving feminizing hormone therapy for the past 6 years presented to our clinic with the goal of being able to breastfeed her adopted infant. After implementing a regimen of domperidone, estradiol, progesterone, and breast pumping, she was able to achieve sufficient breast milk volume to be the sole source of nourishment for her child for 6 weeks. This case illustrates that, in some circumstances, modest but functional lactation can be induced in transgender women.",
"title": ""
},
{
"docid": "a93833a6ad41bdc5011a992509e77c9a",
"text": "We present the implementation of a largevocabulary continuous speech recognition (LVCSR) system on NVIDIA’s Tegra K1 hyprid GPU-CPU embedded platform. The system is trained on a standard 1000hour corpus, LibriSpeech, features a trigram WFST-based language model, and achieves state-of-the-art recognition accuracy. The fact that the system is realtime-able and consumes less than 7.5 watts peak makes the system perfectly suitable for fast, but precise, offline spoken dialog applications, such as in robotics, portable gaming devices, or in-car systems.",
"title": ""
},
{
"docid": "cb3d1448269b29807dc62aa96ff6ad1a",
"text": "OBJECTIVES\nInformation overload in electronic medical records can impede providers' ability to identify important clinical data and may contribute to medical error. An understanding of the information requirements of ICU providers will facilitate the development of information systems that prioritize the presentation of high-value data and reduce information overload. Our objective was to determine the clinical information needs of ICU physicians, compared to the data available within an electronic medical record.\n\n\nDESIGN\nProspective observational study and retrospective chart review.\n\n\nSETTING\nThree ICUs (surgical, medical, and mixed) at an academic referral center.\n\n\nSUBJECTS\nNewly admitted ICU patients and physicians (residents, fellows, and attending staff).\n\n\nMEASUREMENTS AND MAIN RESULTS\nThe clinical information used by physicians during the initial diagnosis and treatment of admitted patients was captured using a questionnaire. Clinical information concepts were ranked according to the frequency of reported use (primary outcome) and were compared to information availability in the electronic medical record (secondary outcome). Nine hundred twenty-five of 1,277 study questionnaires (408 patients) were completed. Fifty-one clinical information concepts were identified as being useful during ICU admission. A median (interquartile range) of 11 concepts (6-16) was used by physicians per patient admission encounter with four used greater than 50% of the time. Over 25% of the clinical data available in the electronic medical record was never used, and only 33% was used greater than 50% of the time by admitting physicians.\n\n\nCONCLUSIONS\nPhysicians use a limited number of clinical information concepts at the time of patient admission to the ICU. The electronic medical record contains an abundance of unused data. Better electronic data management strategies are needed, including the priority display of frequently used clinical concepts within the electronic medical record, to improve the efficiency of ICU care.",
"title": ""
},
{
"docid": "33c113db245fb36c3ce8304be9909be6",
"text": "Bring Your Own Device (BYOD) is growing in popularity. In fact, this inevitable and unstoppable trend poses new security risks and challenges to control and manage corporate networks and data. BYOD may be infected by viruses, spyware or malware that gain access to sensitive data. This unwanted access led to the disclosure of information, modify access policy, disruption of service, loss of productivity, financial issues, and legal implications. This paper provides a review of existing literature concerning the access control and management issues, with a focus on recent trends in the use of BYOD. This article provides an overview of existing research articles which involve access control and management issues, which constitute of the recent rise of usage of BYOD devices. This review explores a broad area concerning information security research, ranging from management to technical solution of access control in BYOD. The main aim for this is to investigate the most recent trends touching on the access control issues in BYOD concerning information security and also to analyze the essential and comprehensive requirements needed to develop an access control framework in the future. Keywords— Bring Your Own Device, BYOD, access control, policy, security.",
"title": ""
},
{
"docid": "fcd30a667cb2f4e89d9174cc37ac698c",
"text": "v TABLE OF CONTENTS vii",
"title": ""
},
{
"docid": "dd271275654da4bae73ee41d76fe165c",
"text": "BACKGROUND\nThe recovery period for patients who have been in an intensive care unitis often prolonged and suboptimal. Anxiety, depression and post-traumatic stress disorder are common psychological problems. Intensive care staff offer various types of intensive aftercare. Intensive care follow-up aftercare services are not standard clinical practice in Norway.\n\n\nOBJECTIVE\nThe overall aim of this study is to investigate how adult patients experience theirintensive care stay their recovery period, and the usefulness of an information pamphlet.\n\n\nMETHOD\nA qualitative, exploratory research with semi-structured interviews of 29 survivors after discharge from intensive care and three months after discharge from the hospital.\n\n\nRESULTS\nTwo main themes emerged: \"Being on an unreal, strange journey\" and \"Balancing between who I was and who I am\" Patients' recollection of their intensive care stay differed greatly. Continuity of care and the nurse's ability to see and value individual differences was highlighted. The information pamphlet helped intensive care survivors understand that what they went through was normal.\n\n\nCONCLUSIONS\nContinuity of care and an individual approach is crucial to meet patients' uniqueness and different coping mechanisms. Intensive care survivors and their families must be included when information material and rehabilitation programs are designed and evaluated.",
"title": ""
},
{
"docid": "27c47b97f67dae335b3bc1a09ad78778",
"text": "State-of-charge (SOC) determination is an increasingly important issue in battery technology. In addition to the immediate display of the remaining battery capacity to the user, precise knowledge of SOC exerts additional control over the charging/discharging process, which can be employed to increase battery life. This reduces the risk of overvoltage and gassing, which degrade the chemical composition of the electrolyte and plates. The proposed model in this paper determines the SOC by incorporating the changes occurring due to terminal voltage, current load, and internal resistance, which mitigate the disadvantages of using impedance only. Electromotive force (EMF) voltage is predicted while the battery is under load conditions; from the estimated EMF voltage, the SOC is then determined. The method divides the battery voltage curve into two regions: 1) the linear region for full to partial SOC and 2) the hyperbolic region from partial to low SOC. Algorithms are developed to correspond to the different characteristic changes occurring within each region. In the hyperbolic region, the rate of change in impedance and terminal voltage is greater than that in the linear region. The magnitude of current discharge causes varying rates of change to the terminal voltage and impedance. Experimental tests and results are presented to validate the new models.",
"title": ""
}
] |
scidocsrr
|
91e6d9418158b8076c68703d93b4d782
|
RIEMANN’S ZETA FUNCTION AND BEYOND
|
[
{
"docid": "0c45c5ee2433578fbc29d29820042abe",
"text": "When Andrew John Wiles was 10 years old, he read Eric Temple Bell’s The Last Problem and was so impressed by it that he decided that he would be the first person to prove Fermat’s Last Theorem. This theorem states that there are no nonzero integers a, b, c, n with n > 2 such that an + bn = cn. This object of this paper is to prove that all semistable elliptic curves over the set of rational numbers are modular. Fermat’s Last Theorem follows as a corollary by virtue of work by Frey, Serre and Ribet.",
"title": ""
}
] |
[
{
"docid": "02a7675468d8e02aaf43ddcd2c36e3fd",
"text": "Speech synthesis is the artificial production of human voice. A computer system used for this task is called a speech synthesizer. Anyone can use this synthesizer in software or hardware products. The main aim of text-to-speech (TTS) system is to convert normal language text into speech. Synthesized speech can be produced by concatenating pieces of recorded speech that are stored in a database. TTS Systems differ in size of the stored speech units. A system which stores phones or diphones provides the largest output range, but this may give low clarity. For specific application domains, the storage of entire words or sentences allows for highquality output. Alternatively, a synthesizer can constitute a model of the vocal tract and other human voice characteristics to create a fully synthetic voice output. The quality of a speech synthesizer is decided by its naturalness or simillarity to the human voice and by its ability to be understood clearly. This paper summarizes the published literatures on Text to Speech (TTS), with discussing about the efforts taken in each paper. This system will be more helpful for an illiterate and visually impaired people to hear and understand the text.",
"title": ""
},
{
"docid": "ef98966f79d5c725b33e227f86e610a2",
"text": "We introduce adaptive input representations for neural language modeling which extend the adaptive softmax of Grave et al. (2017) to input representations of variable capacity. There are several choices on how to factorize the input and output layers, and whether to model words, characters or sub-word units. We perform a systematic comparison of popular choices for a self-attentional architecture. Our experiments show that models equipped with adaptive embeddings are more than twice as fast to train than the popular character input CNN while having a lower number of parameters. We achieve a new state of the art on the WIKITEXT-103 benchmark of 20.51 perplexity, improving the next best known result by 8.7 perplexity. On the BILLION WORD benchmark, we achieve a state of the art of 24.14 perplexity.1",
"title": ""
},
{
"docid": "775080dc04241460f585c49752850148",
"text": "In this paper, we propose a general purpose approach to handwriting beautification using online input from a stylus. Given a sample of writings, drawings, or sketches from the same user, our method improves a user's strokes in real-time as they are drawn. Our approach relies on one main insight. The appearance of the average of multiple instances of the same written word or shape is better than most of the individual instances. We utilize this observation using a two-stage approach. First, we propose an efficient real-time method for finding matching sets of stroke samples called tokens in a potentially large database of writings from a user. Second, we refine the user's most recently written strokes by averaging them with the matching tokens. Our approach works without handwriting recognition, and does not require a database of predefined letters, words, or shapes. Our results show improved results for a wide range of writing styles and drawings.",
"title": ""
},
{
"docid": "123f5d93d0b7c483a50d73ba04762550",
"text": "Chemistry and biology are intimately connected sciences yet the chemistry-biology interface remains problematic and central issues regarding the very essence of living systems remain unresolved. In this essay we build on a kinetic theory of replicating systems that encompasses the idea that there are two distinct kinds of stability in nature-thermodynamic stability, associated with \"regular\" chemical systems, and dynamic kinetic stability, associated with replicating systems. That fundamental distinction is utilized to bridge between chemistry and biology by demonstrating that within the parallel world of replicating systems there is a second law analogue to the second law of thermodynamics, and that Darwinian theory may, through scientific reductionism, be related to that second law analogue. Possible implications of these ideas to the origin of life problem and the relationship between chemical emergence and biological evolution are discussed.",
"title": ""
},
{
"docid": "4db9cf56991edae0f5ca34546a8052c4",
"text": "This chapter presents a survey of interpolation and resampling techniques in the context of exact, separable interpolation of regularly sampled data. In this context, the traditional view of interpolation is to represent an arbitrary continuous function as a discrete sum of weighted and shifted synthesis functions—in other words, a mixed convolution equation. An important issue is the choice of adequate synthesis functions that satisfy interpolation properties. Examples of finite-support ones are the square pulse (nearest-neighbor interpolation), the hat function (linear interpolation), the cubic Keys' function, and various truncated or windowed versions of the sinc function. On the other hand, splines provide examples of infinite-support interpolation functions that can be realized exactly at a finite, surprisingly small computational cost. We discuss implementation issues and illustrate the performance of each synthesis function. We also highlight several artifacts that may arise when performing interpolation, such as ringing, aliasing, blocking and blurring. We explain why the approximation order inherent in the synthesis function is important to limit these interpolation artifacts, which motivates the use of splines as a tunable way to keep them in check without any significant cost penalty. I. I NTRODUCTION Interpolation is a technique that pervades many an application. Interpolation is almost never the goal in itself, yet it affects both the desired results and the ways to obtain them. Notwithstanding its nearly universal relevance, some authors give it less importance than it deserves, perhaps because considerations on interpolation are felt as being paltry when compared to the description of a more inspiring grand scheme of things of some algorithm or method. Due to this indifference, it appears as if the basic principles that underlie interpolation might be sometimes cast aside, or even misunderstood. The goal of this chapter is to refresh the notions encountered in classical interpolation, as well as to introduce the reader to more general approaches. 1.1. Definition What is interpolation? Several answers coexist. One of them defines interpolation as an informed estimate of the unknown [1]. We prefer the following—admittedly less concise—definition: modelbased recovery of continuous data from discrete data within a known range of abscissa. The reason for this preference is to allow for a clearer distinction between interpolation and extrapolation. The former postulates the existence of a known range where the model applies, and asserts that the deterministicallyrecovered continuous data is entirely described by the discrete data, while the latter authorizes the use of the model outside of the known range, with the implicit assumption that the model is \"good\" near data samples, and possibly less good elsewhere. Finally, the three most important hypothesis for interpolation are:",
"title": ""
},
{
"docid": "35d11265d367c6eeca6f3dfb8ef67a36",
"text": "A synthetic aperture radar (SAR) can produce high-resolution two-dimensional images of mapped areas. The SAR comprises a pulsed transmitter, an antenna, and a phase-coherent receiver. The SAR is borne by a constant velocity vehicle such as an aircraft or satellite, with the antenna beam axis oriented obliquely to the velocity vector. The image plane is defined by the velocity vector and antenna beam axis. The image orthogonal coordinates are range and cross range (azimuth). The amplitude and phase of the received signals are collected for the duration of an integration time after which the signal is processed. High range resolution is achieved by the use of wide bandwidth transmitted pulses. High azimuth resolution is achieved by focusing, with a signal processing technique, an extremely long antenna that is synthesized from the coherent phase history. The pulse repetition frequency of the SAR is constrained within bounds established by the geometry and signal ambiguity limits. SAR operation requires relative motion between radar and target. Nominal velocity values are assumed for signal processing and measurable deviations are used for error compensation. Residual uncertainties and high-order derivatives of the velocity which are difficult to compensate may cause image smearing, defocusing, and increased image sidelobes. The SAR transforms the ocean surface into numerous small cells, each with dimensions of range and azimuth resolution. An image of a cell can be produced provided the radar cross section of the cell is sufficiently large and the cell phase history is deterministic. Ocean waves evidently move sufficiently uniformly to produce SAR images which correlate well with optical photographs and visual observations. The relationship between SAR images and oceanic physical features is not completely understood, and more analyses and investigations are desired.",
"title": ""
},
{
"docid": "acd0450b78a83819bf54b82efdf7668f",
"text": "Localization of mult i-agent systems is a fundamental requirement for multi-agent systems to operate and cooperate properly. The problem of localization can be divided into two categories; one in which a -priori informat ion is available and the second where the global position is to be asce rtained without a-priori informat ion. This paper gives a comprehensive survey of localization techniques that exist in the literature for both the categories with the objectives of knowing the current state-of-the-art, helping in selecting the proper approach in a given scenario and promoting research in this area. A detailed description of methods that exist in the literature are provided in considerable detail. Then these methods are compared, and their weaknesses and strengths are discussed. Finally, some future research recommendations are drawn out of this survey.",
"title": ""
},
{
"docid": "f119b0ee9a237ab1e9acdae19664df0f",
"text": "Recent editorials in this journal have defended the right of eminent biologist James Watson to raise the unpopular hypothesis that people of sub-Saharan African descent score lower, on average, than people of European or East Asian descent on tests of general intelligence. As those editorials imply, the scientific evidence is substantial in showing a genetic contribution to these differences. The unjustified ill treatment meted out to Watson therefore requires setting the record straight about the current state of the evidence on intelligence, race, and genetics. In this paper, we summarize our own previous reviews based on 10 categories of evidence: The worldwide distribution of test scores; the g factor of mental ability; heritability differences; brain size differences; trans-racial adoption studies; racial admixture studies; regression-to-the-mean effects; related life-history traits; human origins research; and the poverty of predictions from culture-only explanations. The preponderance of evidence demonstrates that in intelligence, brain size, and other life-history variables, East Asians average a higher IQ and larger brain than Europeans who average a higher IQ and larger brain than Africans. Further, these group differences are 50–80% heritable. These are facts, not opinions and science must be governed by data. There is no place for the ‘‘moralistic fallacy’’ that reality must conform to our social, political, or ethical desires. !c 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4fa99994915bba8621e186a7e6804743",
"text": "We address the problem of synthesizing a robust data-extractor from a family of websites that contain the same kind of information. This problem is common when trying to aggregate information from many web sites, for example, when extracting information for a price-comparison site.\n Given a set of example annotated web pages from multiple sites in a family, our goal is to synthesize a robust data extractor that performs well on all sites in the family (not only on the provided example pages). The main challenge is the need to trade off precision for generality and robustness. Our key contribution is the introduction of forgiving extractors that dynamically adjust their precision to handle structural changes, without sacrificing precision on the training set.\n Our approach uses decision tree learning to create a generalized extractor and converts it into a forgiving extractor, inthe form of an XPath query. The forgiving extractor captures a series of pruned decision trees with monotonically decreasing precision, and monotonically increasing recall, and dynamically adjusts precision to guarantee sufficient recall. We have implemented our approach in a tool called TREEX and applied it to synthesize extractors for real-world large scale web sites. We evaluate the robustness and generality of the forgiving extractors by evaluating their precision and recall on: (i) different pages from sites in the training set (ii) pages from different versions of sites in the training set (iii) pages from different (unseen) sites. We compare the results of our synthesized extractor to those of classifier-based extractors, and pattern-based extractors, and show that TREEX significantly improves extraction accuracy.",
"title": ""
},
{
"docid": "6cd5b8ef199d926bccc583b7e058d9ee",
"text": "Over the last three decades, a large number of evolutionary algorithms have been developed for solving multi-objective optimization problems. However, there lacks an upto-date and comprehensive software platform for researchers to properly benchmark existing algorithms and for practitioners to apply selected algorithms to solve their real-world problems. The demand of such a common tool becomes even more urgent, when the source code of many proposed algorithms has not been made publicly available. To address these issues, we have developed a MATLAB platform for evolutionary multi-objective optimization in this paper, called PlatEMO, which includes more than 50 multiobjective evolutionary algorithms and more than 100 multi-objective test problems, along with several widely used performance indicators. With a user-friendly graphical user interface, PlatEMO enables users to easily compare several evolutionary algorithms at one time and collect statistical results in Excel or LaTeX files. More importantly, PlatEMO is completely open source, such that users are able to develop new algorithms on the basis of it. This paper introduces the main features of PlatEMO and illustrates how to use it for performing comparative experiments, embedding new algorithms, creating new test problems, and developing performance indicators. Source code of PlatEMO is now available at: http://bimk.ahu.edu.cn/index.php?s=/Index/Software/index.html.",
"title": ""
},
{
"docid": "ac9fb08fd12fc776138b2735cd370118",
"text": "In this paper we study 3D convolutional networks for video understanding tasks. Our starting point is the stateof-the-art I3D model of [3], which “inflates” all the 2D filters of the Inception architecture to 3D. We first consider “deflating” the I3D model at various levels to understand the role of 3D convolutions. Interestingly, we found that 3D convolutions at the top layers of the network contribute more than 3D convolutions at the bottom layers, while also being computationally more efficient. This indicates that I3D is better at capturing high-level temporal patterns than low-level motion signals. We also consider replacing 3D convolutions with spatiotemporal-separable 3D convolutions (i.e., replacing convolution using a kt×k×k filter with 1× k× k followed by kt× 1× 1 filters); we show that such a model, which we call S3D, is 1.5x more computationally efficient (in terms of FLOPS) than I3D, and achieves better accuracy. Finally, we explore spatiotemporal feature gating on top of S3D. The resulting model, which we call S3D-G, outperforms the state-of-the-art I3D model by 3.5% accuracy on Kinetics and reduces the FLOPS by 34%. It also achieves a new state-of-the-art performance when transferred to other action classification (UCF-101 and HMDB51) and detection (UCF-101 and JHMDB) datasets.",
"title": ""
},
{
"docid": "49ffd8624fc677ce51d0c079ca2e52f3",
"text": "Chatbots have been around since the 1960's, but recently they have risen in popularity especially due to new compatibility with social networks and messenger applications. Chatbots are different from traditional user interfaces, for they unveil themselves to the user one sentence at a time. Because of that, users may struggle to interact with them and to understand what they can do. Hence, it is important to support designers in deciding how to convey chatbots' features to users, as this might determine whether the user continues to chat or not. As a first step in this direction, in this paper our goal is to analyze the communicative strategies that have been used by popular chatbots to convey their features to users. To perform this analysis we use the Semiotic Inspection Method (SIM). As a result we identify and discuss the different strategies used by the analyzed chatbots to present their features to users. We also discuss the challenges and limitations of using SIM on such interfaces.",
"title": ""
},
{
"docid": "3910a3317ea9ff4ea6c621e562b1accc",
"text": "Compaction of agricultural soils is a concern for many agricultural soil scientists and farmers since soil compaction, due to heavy field traffic, has resulted in yield reduction of most agronomic crops throughout the world. Soil compaction is a physical form of soil degradation that alters soil structure, limits water and air infiltration, and reduces root penetration in the soil. Consequences of soil compaction are still underestimated. A complete understanding of processes involved in soil compaction is necessary to meet the future global challenge of food security. We review here the advances in understanding, quantification, and prediction of the effects of soil compaction. We found the following major points: (1) When a soil is exposed to a vehicular traffic load, soil water contents, soil texture and structure, and soil organic matter are the three main factors which determine the degree of compactness in that soil. (2) Soil compaction has direct effects on soil physical properties such as bulk density, strength, and porosity; therefore, these parameters can be used to quantify the soil compactness. (3) Modified soil physical properties due to soil compaction can alter elements mobility and change nitrogen and carbon cycles in favour of more emissions of greenhouse gases under wet conditions. (4) Severe soil compaction induces root deformation, stunted shoot growth, late germination, low germination rate, and high mortality rate. (5) Soil compaction decreases soil biodiversity by decreasing microbial biomass, enzymatic activity, soil fauna, and ground flora. (6) Boussinesq equations and finite element method models, that predict the effects of the soil compaction, are restricted to elastic domain and do not consider existence of preferential paths of stress propagation and localization of deformation in compacted soils. (7) Recent advances in physics of granular media and soil mechanics relevant to soil compaction should be used to progress in modelling soil compaction.",
"title": ""
},
{
"docid": "aa03d917910a3da1f22ceea8f5b8d1c8",
"text": "We train a language-universal dependency parser on a multilingual collection of treebanks. The parsing model uses multilingual word embeddings alongside learned and specified typological information, enabling generalization based on linguistic universals and based on typological similarities. We evaluate our parser’s performance on languages in the training set as well as on the unsupervised scenario where the target language has no trees in the training data, and find that multilingual training outperforms standard supervised training on a single language, and that generalization to unseen languages is competitive with existing model-transfer approaches.",
"title": ""
},
{
"docid": "ce83a16a6ccce5ccc58577b25ab33788",
"text": "In this paper, we address the problem of automatically extracting disease-symptom relationships from health question-answer forums due to its usefulness for medical question answering system. To cope with the problem, we divide our main task into two subtasks since they exhibit different challenges: (1) disease-symptom extraction across sentences, (2) disease-symptom extraction within a sentence. For both subtasks, we employed machine learning approach leveraging several hand-crafted features, such as syntactic features (i.e., information from part-of-speech tags) and pre-trained word vectors. Furthermore, we basically formulate our problem as a binary classification task, in which we classify the \"indicating\" relation between a pair of Symptom and Disease entity. To evaluate the performance, we also collected and annotated corpus containing 463 pairs of question-answer threads from several Indonesian health consultation websites. Our experiment shows that, as our expected, the first subtask is relatively more difficult than the second subtask. For the first subtask, the extraction of disease-symptom relation only achieved 36% in terms of F1 measure, while the second one was 76%. To the best of our knowledge, this is the first work addressing such relation extraction task for both \"across\" and \"within\" sentence, especially in Indonesia.",
"title": ""
},
{
"docid": "75567866ec1a72c48d78658a0b3115f9",
"text": "BACKGROUND\nImpingement is a common cause of shoulder pain. Impingement mechanisms may occur subacromially (under the coraco-acromial arch) or internally (within the shoulder joint), and a number of secondary pathologies may be associated. These include subacromial-subdeltoid bursitis (inflammation of the subacromial portion of the bursa, the subdeltoid portion, or both), tendinopathy or tears affecting the rotator cuff or the long head of biceps tendon, and glenoid labral damage. Accurate diagnosis based on physical tests would facilitate early optimisation of the clinical management approach. Most people with shoulder pain are diagnosed and managed in the primary care setting.\n\n\nOBJECTIVES\nTo evaluate the diagnostic accuracy of physical tests for shoulder impingements (subacromial or internal) or local lesions of bursa, rotator cuff or labrum that may accompany impingement, in people whose symptoms and/or history suggest any of these disorders.\n\n\nSEARCH METHODS\nWe searched electronic databases for primary studies in two stages. In the first stage, we searched MEDLINE, EMBASE, CINAHL, AMED and DARE (all from inception to November 2005). In the second stage, we searched MEDLINE, EMBASE and AMED (2005 to 15 February 2010). Searches were delimited to articles written in English.\n\n\nSELECTION CRITERIA\nWe considered for inclusion diagnostic test accuracy studies that directly compared the accuracy of one or more physical index tests for shoulder impingement against a reference test in any clinical setting. We considered diagnostic test accuracy studies with cross-sectional or cohort designs (retrospective or prospective), case-control studies and randomised controlled trials.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo pairs of review authors independently performed study selection, assessed the study quality using QUADAS, and extracted data onto a purpose-designed form, noting patient characteristics (including care setting), study design, index tests and reference standard, and the diagnostic 2 x 2 table. We presented information on sensitivities and specificities with 95% confidence intervals (95% CI) for the index tests. Meta-analysis was not performed.\n\n\nMAIN RESULTS\nWe included 33 studies involving 4002 shoulders in 3852 patients. Although 28 studies were prospective, study quality was still generally poor. Mainly reflecting the use of surgery as a reference test in most studies, all but two studies were judged as not meeting the criteria for having a representative spectrum of patients. However, even these two studies only partly recruited from primary care.The target conditions assessed in the 33 studies were grouped under five main categories: subacromial or internal impingement, rotator cuff tendinopathy or tears, long head of biceps tendinopathy or tears, glenoid labral lesions and multiple undifferentiated target conditions. The majority of studies used arthroscopic surgery as the reference standard. Eight studies utilised reference standards which were potentially applicable to primary care (local anaesthesia, one study; ultrasound, three studies) or the hospital outpatient setting (magnetic resonance imaging, four studies). One study used a variety of reference standards, some applicable to primary care or the hospital outpatient setting. In two of these studies the reference standard used was acceptable for identifying the target condition, but in six it was only partially so. 
The studies evaluated numerous standard, modified, or combination index tests and 14 novel index tests. There were 170 target condition/index test combinations, but only six instances of any index test being performed and interpreted similarly in two studies. Only two studies of a modified empty can test for full thickness tear of the rotator cuff, and two studies of a modified anterior slide test for type II superior labrum anterior to posterior (SLAP) lesions, were clinically homogenous. Due to the limited number of studies, meta-analyses were considered inappropriate. Sensitivity and specificity estimates from each study are presented on forest plots for the 170 target condition/index test combinations grouped according to target condition.\n\n\nAUTHORS' CONCLUSIONS\nThere is insufficient evidence upon which to base selection of physical tests for shoulder impingements, and local lesions of bursa, tendon or labrum that may accompany impingement, in primary care. The large body of literature revealed extreme diversity in the performance and interpretation of tests, which hinders synthesis of the evidence and/or clinical applicability.",
"title": ""
},
{
"docid": "4df7522303220444651f85b38b1a120f",
"text": "An efficient and novel technique is developed for detecting and localizing corners of planar curves. This paper discusses the gradient feature distribution of planar curves and constructs gradient correlation matrices (GCMs) over the region of support (ROS) of these planar curves. It is shown that the eigenstructure and determinant of the GCMs encode the geometric features of these curves, such as curvature features and the dominant points. The determinant of the GCMs is shown to have a strong corner response, and is used as a ‘‘cornerness’’ measure of planar curves. A comprehensive performance evaluation of the proposed detector is performed, using the ACU and localization error criteria. Experimental results demonstrate that the GCM detector has a strong corner position response, along with a high detection rate and good localization performance. & 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5998f63a6670a961dfd987a22c6aaff8",
"text": "A novel conical horn antenna loaded with ball cone dielectric is designed in this paper. The design method of the dielectric surface equation is given. The comparison between the smooth wall conical horn antenna and the four types of horn antennas, the dielectric lens-corrected horn antenna, the dielectric loaded horn antenna, the cone dielectric loaded horn antenna, and the ball cone dielectric loaded horn antenna is well discussed. Simulations show that the ball cone dielectric loaded horn antenna has advantages of wide frequency band, high gain, small reflection, low peck cross-polarization level, symmetrical beam, sharp lobe, and low sidelobes. The antenna is characterized by simple structure, low cost, manufacture easily. Both as a direct radiator, and used as a feed for reflector antennas, the antenna has very good performance.",
"title": ""
},
{
"docid": "1bdf406fd827af2dddcecef934e291d4",
"text": "This study was conducted to collect data on specific volatile fatty acids (produced from soft tissue decomposition) and various anions and cations (liberated from soft tissue and bone), deposited in soil solution underneath decomposing human cadavers as an aid in determining the \"time since death.\" Seven nude subjects (two black males, a white female and four white males) were placed within a decay research facility at various times of the year and allowed to decompose naturally. Data were amassed every three days in the spring and summer, and weekly in the fall and winter. Analyses of the data reveal distinct patterns in the soil solution for volatile fatty acids during soft tissue decomposition and for specific anions and cations once skeletonized, when based on accumulated degree days. Decompositional rates were also obtained, providing valuable information for estimating the \"maximum time since death.\" Melanin concentrations observed in soil solution during this study also yields information directed at discerning racial affinities. Application of these data can significantly enhance \"time since death\" determinations currently in use.",
"title": ""
},
{
"docid": "f79472b17396fd180821b0c02fe92939",
"text": "Bull breeds are commonly kept as companion animals, but the pit bull terrier is restricted by breed-specific legislation (BSL) in parts of the United States and throughout the United Kingdom. Shelter workers must decide which breed(s) a dog is. This decision may influence the dog's fate, particularly in places with BSL. In this study, shelter workers in the United States and United Kingdom were shown pictures of 20 dogs and were asked what breed each dog was, how they determined each dog's breed, whether each dog was a pit bull, and what they expected the fate of each dog to be. There was much variation in responses both between and within the United States and United Kingdom. UK participants frequently labeled dogs commonly considered by U.S. participants to be pit bulls as Staffordshire bull terriers. UK participants were more likely to say their shelters would euthanize dogs deemed to be pit bulls. Most participants noted using dogs' physical features to determine breed, and 41% affected by BSL indicated they would knowingly mislabel a dog of a restricted breed, presumably to increase the dog's adoption chances.",
"title": ""
}
] |
scidocsrr
|
b77e5d2818be0d62e2d93a771cf09d5a
|
Ultra-Wideband Printed Slot Antenna With Graded Index Superstrate
|
[
{
"docid": "2da44919966d841d4a1d6f3cc2a648e9",
"text": "A composite cavity-backed folded sectorial bowtie antenna (FSBA) is proposed and investigated in this paper, which is differentially fed by an SMA connector through a balun, i.e. a transition from a microstrip line to a parallel stripline. The composite cavity as a general case, consisting of a conical part and a cylindrical rim, can be tuned freely from a cylindrical to a cup-shaped one. Parametric studies are performed to optimize the antenna performance. Experimental results reveal that it can achieve an impedance bandwidth of 143% for SWR les 2, a broadside gain of 8-15.3 dBi, and stable radiation pattern over the whole operating band. The total electrical dimensions are 0.66lambdam in diameter and 0.16lambdam in height, where lambdam is the free-space wavelength at lower edge of the operating frequency band. The problem about the distorted patterns in the upper frequency band for wideband cavity-backed antennas is solved in our work.",
"title": ""
}
] |
[
{
"docid": "8e13ed9649f794a81c93926166afd888",
"text": "Nowadays, with the advance of technology, many applications generate huge amounts of data streams at very high speed. Examples include network traffic, web click streams, video surveillance, and sensor networks. Data stream mining has become a hot research topic. Its goal is to extract hidden knowledge/patterns from continuous data streams. Unlike traditional data mining where the dataset is static and can be repeatedly read many times, data stream mining algorithms face many challenges and have to satisfy constraints such as bounded memory, single-pass, real-time response, and concept-drift detection. This paper presents a comprehensive survey of the state-of-the-art data stream mining algorithms with a focus on clustering and classification because of their ubiquitous usage. It identifies mining constraints, proposes a general model for data stream mining, and depicts the relationship between traditional data mining and data stream mining. Furthermore, it analyzes the advantages as well as limitations of data stream algorithms and suggests potential areas for future research.",
"title": ""
},
{
"docid": "d477e2a2678de720c57895bf1d047c4b",
"text": "Interpreting predictions from tree ensemble methods such as gradient boosting machines and random forests is important, yet feature attribution for trees is often heuristic and not individualized for each prediction. Here we show that popular feature attribution methods are inconsistent, meaning they can lower a feature’s assigned importance when the true impact of that feature actually increases. This is a fundamental problem that casts doubt on any comparison between features. To address it we turn to recent applications of game theory and develop fast exact tree solutions for SHAP (SHapley Additive exPlanation) values, which are the unique consistent and locally accurate attribution values. We then extend SHAP values to interaction effects and define SHAP interaction values. We propose a rich visualization of individualized feature attributions that improves over classic attribution summaries and partial dependence plots, and a unique “supervised” clustering (clustering based on feature attributions). We demonstrate better agreement with human intuition through a user study, exponential improvements in run time, improved clustering performance, and better identification of influential features. An implementation of our algorithm has also been merged into XGBoost and LightGBM, see http://github.com/slundberg/shap for details. ACM Reference Format: Scott M. Lundberg, Gabriel G. Erion, and Su-In Lee. 2018. Consistent Individualized Feature Attribution for Tree Ensembles. In Proceedings of ACM (KDD’18). ACM, New York, NY, USA, 9 pages. https://doi.org/none",
"title": ""
},
{
"docid": "89ae8d70488aae3ad8eccc3aa12d5ea2",
"text": "Quality of e-learning systems is one of the important topics that the researchers are investigating in the last years. This paper refines the concept of quality of e-learning systems and proposes a new framework, called TICS (Technology, Interaction, Content, Services), which focuses on the most important aspects to be considered when designing or evaluating an e-learning system. Our proposal emphasizes user-system interaction as one of such important aspects. Guidelines that address the TICS aspects and an evaluation methodology, called eLSE (e-Learning Systematic Evaluation) have been derived. eLSE methodology combines a specific inspection technique with user-testing. This inspection, called AT inspection, uses evaluation patterns, called Abstract Tasks (ATs), that precisely describe the activities to be performed during inspection. The results of an empirical validation of the AT inspection technique, carried out to validate this technique, have shown an advantage of the AT inspection over the other two usability evaluation methods, demonstrating that Abstract Tasks are effective and efficient tools to drive evaluators and improve their performance.",
"title": ""
},
{
"docid": "29cada31c1c7feff4f58a908cb940b7b",
"text": "Data seems cheap to get, and in many ways it is, but the process of creating a high quality labeled dataset from a mass of data is time-consuming and expensive. With the advent of rich 3D repositories, photo-realistic rendering systems offer the opportunity to provide nearly limitless data. Yet, their primary value for visual learning may be the quality of the data they can provide rather than the quantity. Rendering engines offer the promise of perfect labels in addition to the data: what the precise camera pose is; what the precise lighting location, temperature, and distribution is; what the geometry of the object is. In this work we focus on semi-automating dataset creation through use of synthetic data and apply this method to an important task – object viewpoint estimation. Using state-of-the-art rendering software we generate a large labeled dataset of cars rendered densely in viewpoint space. We investigate the effect of rendering parameters on estimation performance and show realism is important. We show that generalizing from synthetic data is not harder than the domain adaptation required between two real-image datasets and that combining synthetic images with a small amount of real data improves estimation accuracy.",
"title": ""
},
{
"docid": "447da1069d3b258a324c606a3d44612d",
"text": "Objective: We studied what are the characteristics of high performing software testers in the industry. Method: We conducted an exploratory case study, collecting data through recorded interviews of one development manager and three testers in each of the three companies, analysis of the defect database, and informal communication within our research partnership with the companies. Results: We found that experience, reflection, motivation and personal characteristics were the top level themes. Experience related to the domain, e.g. processes of the customer, and on the other hand, specialized technical skills, e.g. performance testing, were seen more important than skills of test case design and test planning.",
"title": ""
},
{
"docid": "6b57940d379cf06b3f68b1e3a68eb4fe",
"text": "This paper presents a temperature compensated logarithmic amplifier (log-amp) RF power detector implemented in CMOS 0.18μm technology. The input power can range from -50 to +10 dBm for RF signals ranging from 100MHz to 1.5 GHz. This design attains a typical DR of 39 dB for a ±1 dB log-conformance error (LCE). Up to 900MHz the temperature drift is never larger than ±1.1 dB for all 24 measured samples over a temperature range from -40 to +85°C. The current consumption is 6.3mA from a 1.8V power supply and the chip area is 0.76mm2.",
"title": ""
},
{
"docid": "56f619d7bd02a61cad2ed7c6f481cafb",
"text": "Personnel evaluation and selection is a very important activity for the enterprises. Different job needs different ability and the requirement of criteria which can measure ability is different. It needs a suitable and flexible method to evaluate the performance of each candidate according to different requirements of different jobs in relation to each criterion. Analytic Hierarchy Process (AHP) is one of Multi Criteria decision making methods derived from paired comparisons. Simple Additive Weighting (SAW) is most frequently used multi attribute decision technique. The method is based on the weighted average. It successfully models the ambiguity and imprecision associated with the pair wise comparison process and reduces the personal biasness. This study tries to analyze the Analytic Hierarchy Process in order to make the recruitment process more reasonable, based on the fuzzy multiple criteria decision making model to achieve the goal of personnel selection. Finally, an example is implemented to demonstrate the practicability of the proposed method.",
"title": ""
},
{
"docid": "7c1c0e74fcd2fb36c60915a6947fcdac",
"text": "Modern deep transfer learning approaches have mainly focused on learning generic feature vectors from one task that are transferable to other tasks, such as word embeddings in language and pretrained convolutional features in vision. However, these approaches usually transfer unary features and largely ignore more structured graphical representations. This work explores the possibility of learning generic latent relational graphs that capture dependencies between pairs of data units (e.g., words or pixels) from large-scale unlabeled data and transferring the graphs to downstream tasks. Our proposed transfer learning framework improves performance on various tasks including question answering, natural language inference, sentiment analysis, and image classification. We also show that the learned graphs are generic enough to be transferred to different embeddings on which the graphs have not been trained (including GloVe embeddings, ELMo embeddings, and task-specific RNN hidden units), or embedding-free units such as image pixels.",
"title": ""
},
{
"docid": "8c9fa849be0d462fcce5974814400768",
"text": "The integrin family of cell adhesion receptors regulates a diverse array of cellular functions crucial to the initiation, progression and metastasis of solid tumours. The importance of integrins in several cell types that affect tumour progression has made them an appealing target for cancer therapy. Integrin antagonists, including the αvβ3 and αvβ5 inhibitor cilengitide, have shown encouraging activity in Phase II clinical trials and cilengitide is currently being tested in a Phase III trial in patients with glioblastoma. These exciting clinical developments emphasize the need to identify how integrin antagonists influence the tumour and its microenvironment.",
"title": ""
},
{
"docid": "b2a9264030e56595024ce0e02da6c73f",
"text": "Traditional citation analysis has been widely applied to detect patterns of scientific collaboration, map the landscapes of scholarly disciplines, assess the impact of research outputs, and observe knowledge transfer across domains. It is, however, limited, as it assumes all citations are of similar value and weights each equally. Content-based citation analysis (CCA) addresses a citation’s value by interpreting each one based on its context at both the syntactic and semantic levels. This paper provides a comprehensive overview of CAA research in terms of its theoretical foundations, methodical approaches, and example applications. In addition, we highlight how increased computational capabilities and publicly available full-text resources have opened this area of research to vast possibilities, which enable deeper citation analysis, more accurate citation prediction, and increased knowledge discovery.",
"title": ""
},
{
"docid": "0ec8872c972335c11a63380fe1f1c51f",
"text": "MOTIVATION\nMany complex disease syndromes such as asthma consist of a large number of highly related, rather than independent, clinical phenotypes, raising a new technical challenge in identifying genetic variations associated simultaneously with correlated traits. Although a causal genetic variation may influence a group of highly correlated traits jointly, most of the previous association analyses considered each phenotype separately, or combined results from a set of single-phenotype analyses.\n\n\nRESULTS\nWe propose a new statistical framework called graph-guided fused lasso to address this issue in a principled way. Our approach represents the dependency structure among the quantitative traits explicitly as a network, and leverages this trait network to encode structured regularizations in a multivariate regression model over the genotypes and traits, so that the genetic markers that jointly influence subgroups of highly correlated traits can be detected with high sensitivity and specificity. While most of the traditional methods examined each phenotype independently, our approach analyzes all of the traits jointly in a single statistical method to discover the genetic markers that perturb a subset of correlated traits jointly rather than a single trait. Using simulated datasets based on the HapMap consortium data and an asthma dataset, we compare the performance of our method with the single-marker analysis, and other sparse regression methods that do not use any structural information in the traits. Our results show that there is a significant advantage in detecting the true causal single nucleotide polymorphisms when we incorporate the correlation pattern in traits using our proposed methods.\n\n\nAVAILABILITY\nSoftware for GFlasso is available at http://www.sailing.cs.cmu.edu/gflasso.html.",
"title": ""
},
{
"docid": "32c405ebed87b4e1ca47cd15b7b9b61b",
"text": "Video cameras are pervasively deployed for security and smart city scenarios, with millions of them in large cities worldwide. Achieving the potential of these cameras requires efficiently analyzing the live videos in realtime. We describe VideoStorm, a video analytics system that processes thousands of video analytics queries on live video streams over large clusters. Given the high costs of vision processing, resource management is crucial. We consider two key characteristics of video analytics: resource-quality tradeoff with multi-dimensional configurations, and variety in quality and lag goals. VideoStorm’s offline profiler generates query resourcequality profile, while its online scheduler allocates resources to queries to maximize performance on quality and lag, in contrast to the commonly used fair sharing of resources in clusters. Deployment on an Azure cluster of 101 machines shows improvement by as much as 80% in quality of real-world queries and 7× better lag, processing video from operational traffic cameras.",
"title": ""
},
{
"docid": "75a1c22e950ccb135c054353acb8571a",
"text": "We study the problem of building generative models of natural source code (NSC); that is, source code written and understood by humans. Our primary contribution is to describe a family of generative models for NSC that have three key properties: First, they incorporate both sequential and hierarchical structure. Second, we learn a distributed representation of source code elements. Finally, they integrate closely with a compiler, which allows leveraging compiler logic and abstractions when building structure into the model. We also develop an extension that includes more complex structure, refining how the model generates identifier tokens based on what variables are currently in scope. Our models can be learned efficiently, and we show empirically that including appropriate structure greatly improves the models, measured by the probability of generating test programs.",
"title": ""
},
{
"docid": "8180c0bb869da12f32a847f70846807e",
"text": "Large-scale adaptive radiations might explain the runaway success of a minority of extant vertebrate clades. This hypothesis predicts, among other things, rapid rates of morphological evolution during the early history of major groups, as lineages invade disparate ecological niches. However, few studies of adaptive radiation have included deep time data, so the links between extant diversity and major extinct radiations are unclear. The intensively studied Mesozoic dinosaur record provides a model system for such investigation, representing an ecologically diverse group that dominated terrestrial ecosystems for 170 million years. Furthermore, with 10,000 species, extant dinosaurs (birds) are the most speciose living tetrapod clade. We assembled composite trees of 614-622 Mesozoic dinosaurs/birds, and a comprehensive body mass dataset using the scaling relationship of limb bone robustness. Maximum-likelihood modelling and the node height test reveal rapid evolutionary rates and a predominance of rapid shifts among size classes in early (Triassic) dinosaurs. This indicates an early burst niche-filling pattern and contrasts with previous studies that favoured gradualistic rates. Subsequently, rates declined in most lineages, which rarely exploited new ecological niches. However, feathered maniraptoran dinosaurs (including Mesozoic birds) sustained rapid evolution from at least the Middle Jurassic, suggesting that these taxa evaded the effects of niche saturation. This indicates that a long evolutionary history of continuing ecological innovation paved the way for a second great radiation of dinosaurs, in birds. We therefore demonstrate links between the predominantly extinct deep time adaptive radiation of non-avian dinosaurs and the phenomenal diversification of birds, via continuing rapid rates of evolution along the phylogenetic stem lineage. This raises the possibility that the uneven distribution of biodiversity results not just from large-scale extrapolation of the process of adaptive radiation in a few extant clades, but also from the maintenance of evolvability on vast time scales across the history of life, in key lineages.",
"title": ""
},
{
"docid": "b7870b788d7951602a97380daf91cf4c",
"text": "Aims: Pain due to the removal of the chest tube is one of the important complications after open heart surgery. In the case of inadequate pain management, sympathetic system is stimulated and can lead to irreversible complications. Studies showed the effect of reflexology massage on pain relief in other cases; so this study had been done with the aim of “determining the effect of foot reflexology on the pain of the patients under open heart surgery during chest tube removal”. Methods: This randomized clinical study with control group was done in the hospitals covered by Baqiyatallah Medical Sciences University in 2013. Ninety samples were divided into three experimental, control and placebo-treated groups based on randomized allocation. Pain level was measured through Numerical Rating Scale (NRS) in all the three groups before intervention. In the experimental group center of the anterior one-third and in the placebotreated group, Posterior one-third of the left foot was being massaged for ten minutes before chest tube removal. There was no measurement in the control group. After removal of the chest tube, the level of the pain was measured and documented immediately. Data were analyzed by SPSS18 software by the help of descriptive and inferential statistics. Results: Difference in the mean of the quantitative variables including; age, height, weight and level of the body and demographic qualitative variables of the patients including; education status, occupation and marital status was not significant (p>0.05). Expected increasing of the pain due to the chest pain tube removal was not significant in the experimental group (p=0.08), while placebo-treated and control groups had significant increase of the pain (p=0.001 and p=0.000",
"title": ""
},
{
"docid": "d5bc3147e23f95a070bce0f37a96c2a8",
"text": "This paper presents a fully integrated wideband current-mode digital polar power amplifier (DPA) in CMOS with built-in AM–PM distortion self-compensation. Feedforward capacitors are implemented in each differential cascode digital power cell. These feedforward capacitors operate together with a proposed DPA biasing scheme to minimize the DPA output device capacitance <inline-formula> <tex-math notation=\"LaTeX\">$C_{d}$ </tex-math></inline-formula> variations over a wide output power range and a wide carrier frequency bandwidth, resulting in DPA AM–PM distortion reduction. A three-coil transformer-based DPA output passive network is implemented within a single transformer footprint (330 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m} \\,\\, \\times $ </tex-math></inline-formula> 330 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula>) and provides parallel power combining and load impedance transformation with a low loss, an octave bandwidth, and a large impedance transformation ratio. Moreover, this proposed power amplifier (PA) output passive network shows a desensitized phase response to <inline-formula> <tex-math notation=\"LaTeX\">$C_{d}$ </tex-math></inline-formula> variations and further suppresses the DPA AM–PM distortion. Both proposed AM–PM distortion self-compensation techniques are effective for a large carrier frequency range and a wide modulation bandwidth, and are independent of the DPA AM control codes. This results in a superior inherent DPA phase linearity and reduces or even eliminates the need for phase pre-distortion, which dramatically simplifies the DPA pre-distortion computations. As a proof-of-concept, a 2–4.3 GHz wideband DPA is implemented in a standard 28-nm bulk CMOS process. Operating with a low supply voltage of 1.4 V for enhanced reliability, the DPA demonstrates ±0.5 dB PA output power bandwidth from 2 to 4.3 GHz with +24.9 dBm peak output power at 3.1 GHz. The measured peak PA drain efficiency is 42.7% at 2.5 GHz and is more than 27% from 2 to 4.3 GHz. The measured PA AM–PM distortion is within 6.8° at 2.8 GHz over the PA output power dynamic range of 25 dB, achieving the lowest AM–PM distortion among recently reported current-mode DPAs in the same frequency range. Without any phase pre-distortion, modulation measurements with a 20-MHz 802.11n standard compliant signal demonstrate 2.95% rms error vector magnitude, −33.5 dBc adjacent channel leakage ratio, 15.6% PA drain efficiency, and +14.6 dBm PA average output power at 2.8 GHz.",
"title": ""
},
{
"docid": "ed5fd1bf16317256b56f4fa0db37a0f9",
"text": "In this paper we pursue an approach to scaling life-long learning using parallel off-policy reinforcement learning algorithms. In life-long learning a robot continually learns from a life-time of experience, slowly acquiring and applying skills and knowledge to new situations. Many of the benefits of life-long learning are a results of scaling the amount of training data, processed by the robot, to long sensorimotor streams. Another dimension of scaling can be added by allowing off-policy sampling from the unending stream of sensorimotor data generated by a long-lived robot. Recent algorithmic developments have made it possible to apply off-policy algorithms to life-long learning, in a sound way, for the first time. We assess the scalability of these off-policy algorithms on a physical robot. We show that hundreds of accurate multi-step predictions can be learned about several policies in parallel and in realtime. We present the first online measures of off-policy learning progress. Finally we demonstrate that our robot, using the new off-policy measures, can learn 8000 predictions about 300 distinct policies, a substantial increase in scale compared to previous simulated and robotic life-long learning systems.",
"title": ""
},
{
"docid": "b31676e958e8345132780499e5dd968d",
"text": "Following triggered corporate bankruptcies, an increasing number of prediction models have emerged since 1960s. This study provides a critical analysis of methodologies and empirical findings of applications of these models across 10 different countries. The study’s empirical exercise finds that predictive accuracies of different corporate bankruptcy prediction models are, generally, comparable. Artificially Intelligent Expert System (AIES) models perform marginally better than statistical and theoretical models. Overall, use of Multiple Discriminant Analysis (MDA) dominates the research followed by logit models. Study deduces useful observations and recommendations for future research in this field. JEL classification: G33; C49; C88",
"title": ""
},
{
"docid": "d3d5f135cc2a09bf0dfc1ef88c6089b5",
"text": "In this paper, we present the Expert Hub System, which was designed to help governmental structures find the best experts in different areas of expertise for better reviewing of the incoming grant proposals. In order to define the areas of expertise with topic modeling and clustering, and then to relate experts to corresponding areas of expertise and rank them according to their proficiency in certain areas of expertise, the Expert Hub approach uses the data from the Directorate of Science and Technology Programmes. Furthermore, the paper discusses the use of Big Data and Machine Learning in the Russian government",
"title": ""
},
{
"docid": "9f469cdc1864aad2026630a29c210c1f",
"text": "This paper proposes an asymptotically optimal hybrid beamforming solution for large antenna arrays by exploiting the properties of the singular vectors of the channel matrix. It is shown that the elements of the channel matrix with Rayleigh fading follow a normal distribution when large antenna arrays are employed. The proposed beamforming algorithm is effective in both sparse and rich propagation environments, and is applicable for both point-to-point and multiuser scenarios. In addition, a closed-form expression and a lower bound for the achievable rates are derived when analog and digital phase shifters are employed. It is shown that the performance of the hybrid beamformers using phase shifters with more than 2-bit resolution is comparable with analog phase shifting. A novel phase shifter selection scheme that reduces the power consumption at the phase shifter network is proposed when the wireless channel is modeled by Rayleigh fading. Using this selection scheme, the spectral efficiency can be increased as the power consumption in the phase shifter network reduces. Compared with the scenario that all of the phase shifters are in operation, the simulation results indicate that the spectral efficiency increases when up to 50% of phase shifters are turned OFF.",
"title": ""
}
] |
scidocsrr
|
586b8f0da04528aec5988e977a916bbf
|
Concept Development and Design of a Spherical Wheel Motor (SWM)
|
[
{
"docid": "b6f09a89a16474860091ddb325d49017",
"text": "This paper addresses the design and commutation of a novel kind of spherical stepper motor in which the poles of the stator are electromagnets and the poles of the rotor (rotating ball) are permanent magnets. Due to the fact that points on a sphere can only be arranged with equal spacing in a limited number of cases (corresponding to the Platonic solids), design of spherical stepper motors with fine rotational increments is fundamentally geometrical in nature. We address this problem and the related problem of how rotor and stator poles should be arranged in order to interact to cause motion. The resulting design has a much wider range of unhindered motion than other spherical stepper motor designs in the literature. We also address the problem of commutation, i.e., we determine the sequence of stator polarities in time that approximate a desired spherical motion.",
"title": ""
}
] |
[
{
"docid": "ad5943b20597be07646cca1af9d23660",
"text": "Defects in safety critical processes can lead to accidents that result in harm to people or damage to property. Therefore, it is important to find ways to detect and remove defects from such processes. Earlier work has shown that Fault Tree Analysis (FTA) [3] can be effective in detecting safety critical process defects. Unfortunately, it is difficult to build a comprehensive set of Fault Trees for a complex process, especially if this process is not completely welldefined. The Little-JIL process definition language has been shown to be effective for defining complex processes clearly and precisely at whatever level of granularity is desired [1]. In this work, we present an algorithm for generating Fault Trees from Little-JIL process definitions. We demonstrate the value of this work by showing how FTA can identify safety defects in the process from which the Fault Trees were automatically derived.",
"title": ""
},
{
"docid": "f79090002d75e922e272c44391ddb6f0",
"text": "Nowadays, coffee beans are almost exclusively used for the preparation of the beverage. The sustainability of coffee production can be achieved introducing new applications for the valorization of coffee by-products. Coffee silverskin is the by-product generated during roasting, and because of its powerful antioxidant capacity, coffee silverskin aqueous extract (CSE) may be used for other applications, such as antiaging cosmetics and dermaceutics. This study aims to contribute to the coffee sector's sustainability through the application of CSE to preserve skin health. Preclinical data regarding the antiaging properties of CSE employing human keratinocytes and Caenorhabditis elegans are collected during the present study. Accelerated aging was induced by tert-butyl hydroperoxide (t-BOOH) in HaCaT cells and by ultraviolet radiation C (UVC) in C. elegans. Results suggest that the tested concentrations of coffee extracts were not cytotoxic, and CSE 1 mg/mL gave resistance to skin cells when oxidative damage was induced by t-BOOH. On the other hand, nematodes treated with CSE (1 mg/mL) showed a significant increased longevity compared to those cultured on a standard diet. In conclusion, our results support the antiaging properties of the CSE and its great potential for improving skin health due to its antioxidant character associated with phenols among other bioactive compounds present in the botanical material.",
"title": ""
},
{
"docid": "aa4d12547a6b85a34ee818f1cc71d1da",
"text": "OBJECTIVE\nDevelopment of a new framework for the National Institute on Aging (NIA) to assess progress and opportunities toward stimulating and supporting rigorous research to address health disparities.\n\n\nDESIGN\nPortfolio review of NIA's health disparities research portfolio to evaluate NIA's progress in addressing priority health disparities areas.\n\n\nRESULTS\nThe NIA Health Disparities Research Framework highlights important factors for health disparities research related to aging, provides an organizing structure for tracking progress, stimulates opportunities to better delineate causal pathways and broadens the scope for malleable targets for intervention, aiding in our efforts to address health disparities in the aging population.\n\n\nCONCLUSIONS\nThe promise of health disparities research depends largely on scientific rigor that builds on past findings and aggressively pursues new approaches. The NIA Health Disparities Framework provides a landscape for stimulating interdisciplinary approaches, evaluating research productivity and identifying opportunities for innovative health disparities research related to aging.",
"title": ""
},
{
"docid": "a094fe8de029646a408bbb685824581c",
"text": "Will reading habit influence your life? Many say yes. Reading computational intelligence principles techniques and applications is a good habit; you can develop this habit to be such interesting way. Yeah, reading habit will not only make you have any favourite activity. It will be one of guidance of your life. When reading has become a habit, you will not make it as disturbing activities or as boring activity. You can gain many benefits and importances of reading.",
"title": ""
},
{
"docid": "284c7292bd7e79c5c907fc2aa21fb52c",
"text": "Monte Carlo Tree Search (MCTS) is an AI technique that has been successfully applied to many deterministic games of perfect information, leading to large advances in a number of domains, such as Go and General Game Playing. Imperfect information games are less well studied in the field of AI despite being popular and of significant commercial interest, for example in the case of computer and mobile adaptations of turn based board and card games. This is largely because hidden information and uncertainty leads to a large increase in complexity compared to perfect information games. In this thesis MCTS is extended to games with hidden information and uncertainty through the introduction of the Information Set MCTS (ISMCTS) family of algorithms. It is demonstrated that ISMCTS can handle hidden information and uncertainty in a variety of complex board and card games. This is achieved whilst preserving the general applicability of MCTS and using computational budgets appropriate for use in a commercial game. The ISMCTS algorithm is shown to outperform the existing approach of Perfect Information Monte Carlo (PIMC) search. Additionally it is shown that ISMCTS can be used to solve two known issues with PIMC search, namely strategy fusion and non-locality. ISMCTS has been integrated into a commercial game, Spades by AI Factory, with over 2.5 million downloads. The Information Capture And ReUSe (ICARUS) framework is also introduced in this thesis. The ICARUS framework generalises MCTS enhancements in terms of information capture (from MCTS simulations) and reuse (to improve MCTS tree and simulation policies). The ICARUS framework is used to express existing enhancements, to provide a tool to design new ones, and to rigorously define how MCTS enhancements can be combined. The ICARUS framework is tested across a wide variety of games.",
"title": ""
},
{
"docid": "f7276b8fee4bc0633348ce64594817b2",
"text": "Meta-modelling is at the core of Model-Driven Engineering, where it is used for language engineering and domain modelling. The OMG’s Meta-Object Facility is the standard framework for building and instantiating meta-models. However, in the last few years, several researchers have identified limitations and rigidities in such scheme, most notably concerning the consideration of only two meta-modelling levels at the same time. In this paper we present MetaDepth, a novel framework that supports a dual linguistic/ontological instantiation and permits building systems with an arbitrary number of meta-levels through deep meta-modelling. The framework implements advanced modelling concepts allowing the specification and evaluation of derived attributes and constraints across multiple meta-levels, linguistic extensions of ontological instance models, transactions, and hosting different constraint and action languages.",
"title": ""
},
{
"docid": "33431760dfc16c095a4f0b8d4ed94790",
"text": "Millions of individuals worldwide are afflicted with acute and chronic respiratory diseases, causing temporary and permanent disabilities and even death. Oftentimes, these diseases occur as a result of altered immune responses. The aryl hydrocarbon receptor (AhR), a ligand-activated transcription factor, acts as a regulator of mucosal barrier function and may influence immune responsiveness in the lungs through changes in gene expression, cell–cell adhesion, mucin production, and cytokine expression. This review updates the basic immunobiology of the AhR signaling pathway with regards to inflammatory lung diseases such as asthma, chronic obstructive pulmonary disease, and silicosis following data in rodent models and humans. Finally, we address the therapeutic potential of targeting the AhR in regulating inflammation during acute and chronic respiratory diseases.",
"title": ""
},
{
"docid": "9a7016a02eda7fcae628197b0625832b",
"text": "We present a vertical-silicon-nanowire-based p-type tunneling field-effect transistor (TFET) using CMOS-compatible process flow. Following our recently reported n-TFET , a low-temperature dopant segregation technique was employed on the source side to achieve steep dopant gradient, leading to excellent tunneling performance. The fabricated p-TFET devices demonstrate a subthreshold swing (SS) of 30 mV/decade averaged over a decade of drain current and an Ion/Ioff ratio of >; 105. Moreover, an SS of 50 mV/decade is maintained for three orders of drain current. This demonstration completes the complementary pair of TFETs to implement CMOS-like circuits.",
"title": ""
},
{
"docid": "f24fb451d6ee013a6bbc8737c0eae689",
"text": "Data on health literacy (HL) in the population is limited for Asian countries. This study aimed to test the validity of the Mandarin version of the European Health Literacy Survey Questionnaire (HLS-EU-Q) for use in the general public in Taiwan. Multistage stratification random sampling resulted in a sample of 2989 people aged 15 years and above. The HLS-EU-Q was validated by confirmatory factor analysis with excellent model data fit indices. The general HL of the Taiwanese population was 34.4 ± 6.6 on a scale of 50. Multivariate regression analysis showed that higher general HL is significantly associated with the higher ability to pay for medication, higher self-perceived social status, higher frequency of watching health-related TV, and community involvement but associated with younger age. HL is also associated with health status, health behaviors, and health care accessibility and use. The HLS-EU-Q was found to be a useful tool to assess HL and its associated factors in the general population.",
"title": ""
},
{
"docid": "b5e66fbded6c7be46a8d7c724fd18be9",
"text": "In augmented reality (AR), virtual objects and information are overlaid onto the user’s view of the physical world and can appear to become part of the real-world. Accurate registration of virtual objects is a key requirement for an effective and natural AR system, but misregistration can break the illusion of virtual objects being part of the real-world and disrupt immersion. End-to-end system latency severely impacts the quality of AR registration. In this research, we present a controlled study that aims at a deeper understanding of the effects of latency on virtual and real-world imagery and its influences on task performance in an AR training task. We utilize an AR simulation approach, in which an outdoor AR training task is simulated in a high-fidelity virtual reality (VR) system. The real and augmented portions of the AR training scenarios are simulated in VR, affording us detailed control over a variety of immersion parameters and the ability to explore the effects of different types of simulated latency. We utilized a representative task inspired by outdoor AR military training systems to compare various AR system configurations, including optical see-through and video see-through setups with both matched and unmatched levels of real and virtual objects latency. Our findings indicate that users are able to perform significantly better when virtual and real-world latencies are matched (as in the case of simulated video see-through AR with perfect augmentation-to-real-world registration). Unequal levels of latency led to reduction in performance, even when overall latency levels were lower compared to the matched case. The relative results hold up with increased overall latency.",
"title": ""
},
{
"docid": "c25d5fbbf26956d25334f66dbae61c94",
"text": "Roman seals associated with collyria (Latin expression for eye drops/washes and lotions for eye maintenance) provide valuable information about eye care in the antiquity. These small, usually stone-made pieces bore engravings with the names of eye doctors and also the collyria used to treat an eye disease. The collyria seals have been found all over the Roman empire and Celtic territories in particular and were usually associated with military camps. In Hispania (Iberian Peninsula), only three collyria seals have been found. These findings speak about eye care in this ancient Roman province as well as about of the life of the time. This article takes a look at the utility and social significance of the collyria seals and seeks to give an insight in the ophthalmological practice of in the Roman Empire.",
"title": ""
},
{
"docid": "f9880427e28ddfd4877be78e613d603a",
"text": "There is mounting evidence that mindfulness meditation is beneficial for the treatment of mood and anxiety disorders, yet little is known regarding the neural mechanisms through which mindfulness modulates emotional responses. Thus, a central objective of this functional magnetic resonance imaging study was to investigate the effects of mindfulness on the neural responses to emotionally laden stimuli. Another major goal of this study was to examine the impact of the extent of mindfulness training on the brain mechanisms supporting the processing of emotional stimuli. Twelve experienced (with over 1000 h of practice) and 10 beginner meditators were scanned as they viewed negative, positive, and neutral pictures in a mindful state and a non-mindful state of awareness. Results indicated that the Mindful condition attenuated emotional intensity perceived from pictures, while brain imaging data suggested that this effect was achieved through distinct neural mechanisms for each group of participants. For experienced meditators compared with beginners, mindfulness induced a deactivation of default mode network areas (medial prefrontal and posterior cingulate cortices) across all valence categories and did not influence responses in brain regions involved in emotional reactivity during emotional processing. On the other hand, for beginners relative to experienced meditators, mindfulness induced a down-regulation of the left amygdala during emotional processing. These findings suggest that the long-term practice of mindfulness leads to emotional stability by promoting acceptance of emotional states and enhanced present-moment awareness, rather than by eliciting control over low-level affective cerebral systems from higher-order cortical brain regions. These results have implications for affect-related psychological disorders.",
"title": ""
},
{
"docid": "799573bf08fb91b1ac644c979741e7d2",
"text": "This short paper reports the method and the evaluation results of Casio and Shinshu University joint team for the ISBI Challenge 2017 – Skin Lesion Analysis Towards Melanoma Detection – Part 3: Lesion Classification hosted by ISIC. Our online validation score was 0.958 with melanoma classifier AUC 0.924 and seborrheic keratosis classifier AUC 0.993.",
"title": ""
},
{
"docid": "9548bd2e37fdd42d09dc6828ac4675f9",
"text": "Recent years have seen increasing interest in ranking elite athletes and teams in professional sports leagues, and in predicting the outcomes of games. In this work, we draw an analogy between this problem and one in the field of search engine optimization, namely, that of ranking webpages on the Internet. Motivated by the famous PageRank algorithm, our TeamRank methods define directed graphs of sports teams based on the observed outcomes of individual games, and use these networks to infer the importance of teams that determines their rankings. In evaluating these methods on data from recent seasons in the National Football League (NFL) and National Basketball Association (NBA), we find that they can predict the outcomes of games with up to 70% accuracy, and that they provide useful rankings of teams that cluster by league divisions. We also propose some extensions to TeamRank that consider overall team win records and shifts in momentum over time.",
"title": ""
},
{
"docid": "da3e64f908cf068b1af2e7492fe52ac4",
"text": "Image tagging, also known as image annotation and image conception detection, has been extensively studied in the literature. However, most existing approaches can hardly achieve satisfactory performance owing to the deficiency and unreliability of the manually-labeled training data. In this paper, we propose a new image tagging scheme, termed social assisted media tagging (SAMT), which leverages the abundant user-generated images and the associated tags as the \"social assistance\" to learn the classifiers. We focus on addressing the following major challenges: (a) the noisy tags associated to the web images; and (b) the desirable robustness of the tagging model. We present a joint image tagging framework which simultaneously refines the erroneous tags of the web images as well as learns the reliable image classifiers. In particular, we devise a novel tag refinement module for identifying and eliminating the noisy tags by substantially exploring and preserving the low-rank nature of the tag matrix and the structured sparse property of the tag errors. We develop a robust image tagging module based on the l2,p-norm for learning the reliable image classifiers. The correlation of the two modules is well explored within the joint framework to reinforce each other. Extensive experiments on two real-world social image databases illustrate the superiority of the proposed approach as compared to the existing methods.",
"title": ""
},
{
"docid": "01a649c8115810c8318e572742d9bd00",
"text": "In this effort we propose a data-driven learning framework for reduced order modeling of fluid dynamics. Designing accurate and efficient reduced order models for nonlinear fluid dynamic problems is challenging for many practical engineering applications. Classical projection-based model reduction methods generate reduced systems by projecting full-order differential operators into low-dimensional subspaces. However, these techniques usually lead to severe instabilities in the presence of highly nonlinear dynamics, which dramatically deteriorates the accuracy of the reduced-order models. In contrast, our new framework exploits linear multistep networks, based on implicit Adams-Moulton schemes, to construct the reduced system. The advantage is that the method optimally approximates the full order model in the low-dimensional space with a given supervised learning task. Moreover, our approach is non-intrusive, such that it can be applied to other complex nonlinear dynamical systems with sophisticated legacy codes. We demonstrate the performance of our method through the numerical simulation of a twodimensional flow past a circular cylinder with Reynolds number Re = 100. The results reveal that the new data-driven model is significantly more accurate than standard projectionbased approaches.",
"title": ""
},
{
"docid": "945dea6576c6131fc33cd14e5a2a0be8",
"text": "■ This article recounts the development of radar signal processing at Lincoln Laboratory. The Laboratory’s significant efforts in this field were initially driven by the need to provide detected and processed signals for air and ballistic missile defense systems. The first processing work was on the Semi-Automatic Ground Environment (SAGE) air-defense system, which led to algorithms and techniques for detection of aircraft in the presence of clutter. This work was quickly followed by processing efforts in ballistic missile defense, first in surface-acoustic-wave technology, in concurrence with the initiation of radar measurements at the Kwajalein Missile Range, and then by exploitation of the newly evolving technology of digital signal processing, which led to important contributions for ballistic missile defense and Federal Aviation Administration applications. More recently, the Laboratory has pursued the computationally challenging application of adaptive processing for the suppression of jamming and clutter signals. This article discusses several important programs in these areas.",
"title": ""
},
{
"docid": "e4ea761d48fafeeea1f143833d7362fe",
"text": "This paper proposes a novel approach to help computing system administrators in monitoring the security of their systems. This approach is based on modeling the system as a privilege graph exhibiting operational security vulnerabilities and on transforming this privilege graph into a Markov chain corresponding to all possible successful attack scenarios. A set of tools has been developed to generate automatically the privilege graph of a Unix system, to transform it into the corresponding Markov chain and to compute characteristic measures of the operational system security.",
"title": ""
},
{
"docid": "8ca8d0bb6ef41b10392e5d64ca96d2ab",
"text": "This longitudinal study provides an analysis of the relationship between personality traits and work experiences with a special focus on the relationship between changes in personality and work experiences in young adulthood. Longitudinal analyses uncovered 3 findings. First, measures of personality taken at age 18 predicted both objective and subjective work experiences at age 26. Second, work experiences were related to changes in personality traits from age 18 to 26. Third, the predictive and change relations between personality traits and work experiences were corresponsive: Traits that \"selected\" people into specific work experiences were the same traits that changed in response to those same work experiences. The relevance of the findings to theories of personality development is discussed.",
"title": ""
}
] |
scidocsrr
|
44359f25250f7677a3c0846f60130a92
|
Supervised Neural Models Revitalize the Open Relation Extraction
|
[
{
"docid": "96669cea810d2918f2d35875f87d45f2",
"text": "In this paper, a new probabilistic tagging method is presented which avoids problems that Markov Model based taggers face, when they have to estimate transition probabilities from sparse data. In this tagging method, transition probabilities are estimated using a decision tree. Based on this method, a part-of-speech tagger (called TreeTagger) has been implemented which achieves 96.36 % accuracy on Penn-Treebank data which is better than that of a trigram tagger (96.06 %) on the same data.",
"title": ""
},
{
"docid": "be79f036d17e26a3df61a6712b169c50",
"text": "We introduce Question-Answer Meaning Representations (QAMRs), which represent the predicate-argument structure of a sentence as a set of question-answer pairs. We also develop a crowdsourcing scheme to show that QAMRs can be labeled with very little training, and gather a dataset with over 5,000 sentences and 100,000 questions. A detailed qualitative analysis demonstrates that the crowd-generated question-answer pairs cover the vast majority of predicate-argument relationships in existing datasets (including PropBank, NomBank, QASRL, and AMR) along with many previously under-resourced ones, including implicit arguments and relations. The QAMR data and annotation code is made publicly available1 to enable future work on how best to model these complex phenomena.",
"title": ""
},
{
"docid": "8057cddc406a90177fda5f3d4ee7c375",
"text": "This paper introduces the task of questionanswer driven semantic role labeling (QA-SRL), where question-answer pairs are used to represent predicate-argument structure. For example, the verb “introduce” in the previous sentence would be labeled with the questions “What is introduced?”, and “What introduces something?”, each paired with the phrase from the sentence that gives the correct answer. Posing the problem this way allows the questions themselves to define the set of possible roles, without the need for predefined frame or thematic role ontologies. It also allows for scalable data collection by annotators with very little training and no linguistic expertise. We gather data in two domains, newswire text and Wikipedia articles, and introduce simple classifierbased models for predicting which questions to ask and what their answers should be. Our results show that non-expert annotators can produce high quality QA-SRL data, and also establish baseline performance levels for future work on this task.",
"title": ""
}
] |
[
{
"docid": "839de75206c99c88fbc10f9f322235be",
"text": "This paper proposes a new fault-tolerant sensor network architecture for monitoring pipeline infrastructures. This architecture is an integrated wired and wireless network. The wired part of the network is considered the primary network while the wireless part is used as a backup among sensor nodes when there is any failure in the wired network. This architecture solves the current reliability issues of wired networks for pipelines monitoring and control. This includes the problem of disabling the network by disconnecting the network cables due to artificial or natural reasons. In addition, it solves the issues raised in recently proposed network architectures using wireless sensor networks for pipeline monitoring. These issues include the issues of power management and efficient routing for wireless sensor nodes to extend the life of the network. Detailed advantages of the proposed integrated network architecture are discussed under different application and fault scenarios.",
"title": ""
},
{
"docid": "69cbe1970732eeb5546decc250941179",
"text": "There is confusion and misunderstanding about the concepts of knowledge translation, knowledge transfer, knowledge exchange, research utilization, implementation, diffusion, and dissemination. We review the terms and definitions used to describe the concept of moving knowledge into action. We also offer a conceptual framework for thinking about the process and integrate the roles of knowledge creation and knowledge application. The implications of knowledge translation for continuing education in the health professions include the need to base continuing education on the best available knowledge, the use of educational and other transfer strategies that are known to be effective, and the value of learning about planned-action theories to be better able to understand and influence change in practice settings.",
"title": ""
},
{
"docid": "8b63800da2019180d266297647e3dbc0",
"text": "Most of the work in machine learning assume that examples are generated at random according to some stationary probability distribution. In this work we study the problem of learning when the class-probability distribution that generate the examples changes over time. We present a method for detection of changes in the probability distribution of examples. A central idea is the concept of context: a set of contiguous examples where the distribution is stationary. The idea behind the drift detection method is to control the online error-rate of the algorithm. The training examples are presented in sequence. When a new training example is available, it is classified using the actual model. Statistical theory guarantees that while the distribution is stationary, the error wil decrease. When the distribution changes, the error will increase. The method controls the trace of the online error of the algorithm. For the actual context we define a warning level, and a drift level. A new context is declared, if in a sequence of examples, the error increases reaching the warning level at example kw, and the drift level at example kd. This is an indication of a change in the distribution of the examples. The algorithm learns a new model using only the examples since kw. The method was tested with a set of eight artificial datasets and a real world dataset. We used three learning algorithms: a perceptron, a neural network and a decision tree. The experimental results show a good performance detecting drift and also with learning the new concept. We also observe that the method is independent of the learning algorithm.",
"title": ""
},
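The passage above describes the warning/drift rule in enough detail to sketch it. Below is a minimal Python sketch, not taken from the paper: it tracks the online error rate of a stream of 0/1 prediction errors and flags a warning or a drift when the rate degrades past the best point seen so far; the 2-sigma and 3-sigma thresholds are assumptions chosen for illustration.

```python
import math

class DriftDetector:
    """Track the online error rate p and its std s; flag a warning/drift when
    p + s exceeds the best (minimum) p_min + k * s_min observed so far."""

    def __init__(self, warn_k=2.0, drift_k=3.0):
        self.warn_k, self.drift_k = warn_k, drift_k
        self.reset()

    def reset(self):
        self.n = 0
        self.p = 1.0                     # running error rate
        self.p_min = float("inf")
        self.s_min = float("inf")

    def update(self, error):             # error: 1 if the current model misclassified, else 0
        self.n += 1
        self.p += (error - self.p) / self.n                  # incremental mean
        s = math.sqrt(self.p * (1 - self.p) / self.n)
        if self.p + s < self.p_min + self.s_min:             # remember the best point seen
            self.p_min, self.s_min = self.p, s
        level = self.p + s
        if level > self.p_min + self.drift_k * self.s_min:
            return "drift"               # retrain from the examples stored since the warning
        if level > self.p_min + self.warn_k * self.s_min:
            return "warning"
        return "stable"
```

On a "drift" signal the caller would learn a new model from the examples collected since the most recent "warning", mirroring the kw/kd description in the abstract.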
{
"docid": "dc766a7a35720b3337ed7006bc510c49",
"text": "This chapter presents the realization of second order active low pass, high pass and band pass filter using fully differential difference amplifier (FDDA). The fully differential difference amplifier is a balanced output differential difference amplifier. It provides low output distortion and high output voltage swing as compared to the DDA. The filters realized with FDDA possess attractive features that do not exist in both traditional (discrete) and modern fully integrated Op-amp circuits. However the frequency range of operation of FDDA is same as that made of the DDA or Op-Amp. The proposed filters possess orthogonality between the cutoff/central frequency and the quality factor. In view of the orthogonality property, the proposed circuits have wide applications in the instrumentation, control systems and signal processing. All the filter realizations have low sensitivity to parameter variations. The first two sections of this chapter present the implementation of DDA and FDDA. Subsequent sections are devoted to the proposed realization filters and main results. The Differential Difference Amplifier (DDA) is an emerging CMOS analog building block. It has been used in a number of applications such as instrumentation amplifier, continuous time filter, implementation of fully differential switch MOS capacitor circuit, common mode detection circuit, telephone line adaption circuit, biomedical applications, floating resistors, sample and hold circuits and MEMs. [61-89]. Differential Difference Amplifier is an extension of the conventional operational amplifier. An operational amplifier employs only one differential input, whereas two differential inputs in DDA. The schematic diagram of DDA is shown in Figure 3.1. It is a five terminal device with four input terminals named as V pp , V pn , V np , and V nn and the output terminal denoted by V 0 .",
"title": ""
},
{
"docid": "f256a37bbbcceadfbcef1471c78c6834",
"text": "PREFACE Several computational analytic tools have matured in the last 10 to 15 years that facilitate solving problems that were previously difficult or impossible to solve. These new analytical tools, known collectively as computational intelligence tools, include artificial neural networks, fuzzy systems, and evolutionary computation. They have recently been combined among themselves as well as with more traditional approaches, such as statistical analysis, to solve extremely challenging problems. Diagnostic systems, for example, are being developed that include Bayesian, neural network, and rule-based diagnostic modules, evolutionary algorithm-based explanation facilities, and expert system shells. All of these components work together in a \" seamless \" way that is transparent to the user, and they deliver results that significantly exceed what is available with any single approach. At a system prototype level, computational intelligence tools are capable of yielding results in a relatively short time. For instance, the implementation of a conventional expert system often takes one to three years and requires the active participation of a \" knowledge engineer \" to build the knowledge and rule bases. In contrast, computational intelligence (CI) system solutions can often be prototyped in a few weeks to a few months and are implemented using available engineering and computational resources. Indeed, computational intelligence tools are capable of being applied in many instances by \" domain experts \" rather than solely by \" computer gurus. \" This means that biomedical engineers, for example, can solve problems in biomedical engineering without relying on outside computer science expertise such as is required to build knowledge bases for classical expert systems. Furthermore, innovative ways to combine CI tools are cropping up every day. For example, tools have been developed that incorporate knowledge elements with neural networks, fuzzy logic, and evolutionary computing theory. Such tools are quickly able to solve classification and clustering problems that would otherwise be extremely time consuming using other techniques. The concepts, paradigms, algorithms, and implementation of computational intelligence and its constituent methodologies—evolutionary computation, neural networks, and fuzzy logic—are the focus of this book. In addition, we emphasize practical applications throughout; that is, how to apply the concepts, paradigms, algorithms, and implementations discussed to practical problems in engineering and computer science. This emphasis culminates in the real-world case studies in a final chapter available on the Web. Computational intelligence is closely related to the field called \"soft computing.\" There is, in fact, a significant overlap. According to Lotfi …",
"title": ""
},
{
"docid": "8cbbf630ac46c54b9d5369fa24a50d91",
"text": "We propose a computational method for verifying a state-space safety constraint of a network of interconnected dynamical systems satisfying a dissipativity property. We construct an invariant set as the sublevel set of a Lyapunov function comprised of local storage functions for each subsystem. This approach requires only knowledge of a local dissipativity property for each subsystem and the static interconnection matrix for the network, and we pose the safety verification as a sum-of-squares feasibility problem. In addition to reducing the computational burden of system design, we allow the safety constraint and initial conditions to depend on an unknown equilibrium, thus offering increased flexibility over existing techniques.",
"title": ""
},
{
"docid": "f237b861e72d79008305abea9f53547d",
"text": "Biopharmaceutical companies attempting to increase productivity through novel discovery technologies have fallen short of achieving the desired results. Repositioning existing drugs for new indications could deliver the productivity increases that the industry needs while shifting the locus of production to biotechnology companies. More and more companies are scanning the existing pharmacopoeia for repositioning candidates, and the number of repositioning success stories is increasing.",
"title": ""
},
{
"docid": "0a0dc05f3f34822b71c32a786bf5ccd1",
"text": "Chronic facial paralysis induces degenerative facial muscle changes on the involved side, thus, making the individual seem as older than their actual age. Furthermore, contralateral facial hypertrophy aggravates facial asymmetry. A thread-lifting procedure has been used widely for correction of a drooping or wrinkled face due to the aging process. In addition, botulinum toxin injection can be used to reduce facial hypertrophy. The aim of study was to evaluate the effectiveness of thread lifting with botulinum toxin injection for chronic facial paralysis. A total 34 of patients with chronic facial paralysis were enrolled from March to October 2014. Thread lifting for elevating loose facial muscles on the ipsilateral side and botulinum toxin A for controlling the facial muscle hypertrophy on the contralateral side were conducted. Facial function was evaluated using the Sunnybrook grading system and dynamic facial asymmetry ratios 1 year after treatment. All 34 patients displayed improved facial symmetry and showed improvement in Sunnybrook scores (37.4 vs. 83.3) and dynamic facial asymmetry ratios (0.58 vs 0.92). Of the 34 patients, 28 (82.4%) reported being satisfied with treatment. The application of subdermal suspension with a reabsorbable thread in conjunction with botulinum toxin A to optimize facial rejuvenation of the contralateral side constitutes an effective and safe procedure for face lifting and rejuvenation of a drooping face as a result of long-lasting facial paralysis. Die chronische Fazialisparese induziert degenerative Veränderungen der Gesichtsmuskulatur auf der betroffenen Seite. In der Folge wirkt der Patient älter, als er tatsächlich ist. Des Weiteren verstärkt eine kontralaterale Hypertrophie die Gesichtsasymmetrie. Ein Fadenliftingverfahren findet breite Anwendung zur Korrektur eines durch den Alterungsprozess hängenden oder faltigen Gesichts. Zusätzlich kann Botulinumtoxin injiziert werden, um die Gesichtshypertrophie zu verringern. In der vorliegenden Studie sollte die Wirksamkeit eines Fadenliftings mit Botulinumtoxininjektionen bei chronischer Fazialisparese beurteilt werden. Von März bis Oktober 2014 wurden insgesamt 34 Patienten mit chronischer Fazialisparese eingeschlossen. Ein Fadenlifting zur Hebung schlaffer Gesichtsmuskeln auf der ipsilateralen Seite und Botulinumtoxin-A-Injektionen zur Behandlung der Gesichtsmuskelhypertrophie auf der kontralateralen Seite wurden durchgeführt. Ein Jahr nach Behandlung wurde die Gesichtsfunktion mit dem Sunnybrook Grading System und anhand der dynamischen Gesichtsasymmetrieverhältnisse („dynamic facial asymmetry ratios“) beurteilt. Alle 34 Patienten hatten eine verbesserte Gesichtssymmetrie und zeigten Verbesserungen im Sunnybrook-Score (37,4 vs. 83,3) sowie in den dynamischen Gesichtsasymmetrieverhältnissen (0,58 vs. 0,92). Von den 34 Patienten äußerten 28 (82,4 %) ihre Zufriedenheit mit der Behandlung. Die Applikation einer subdermalen Suspension mit einem resorbierbaren Faden in Kombination mit Botulinumtoxin A, um die Gesichtsverjüngung auf der kontralateralen Seite zu optimieren, stellt ein wirksames und sicheres Verfahren zum Facelift und zur Verjüngung eines Gesichts dar, das bedingt durch eine lange bestehende Fazialisparese hängt.",
"title": ""
},
{
"docid": "c9a28a3d90f6d716643c45ed2c0b47bb",
"text": "A fast, completely automated method to create 3D watertight building models from airborne LiDAR point clouds is presented. The proposed method analyzes the scene content and produces multi-layer rooftops with complex boundaries and vertical walls that connect rooftops to the ground. A graph cuts based method is used to segment vegetative areas from the rest of scene content. The ground terrain and building rooftop patches are then extracted utilizing our technique, the hierarchical Euclidean clustering. Our method adopts a “divide-and-conquer” strategy. Once potential points on rooftops are segmented from terrain and vegetative areas, the whole scene is divided into individual pendent processing units which represent potential building footprints. For each individual building region, significant features on the rooftop are further detected using a specifically designed region growing algorithm with smoothness constraint. Boundaries for all of these features are refined in order to produce strict description. After this refinement, mesh models could be generated using an existing robust dual contouring method.",
"title": ""
},
{
"docid": "50648acbc0ec1d4a8c3c86f2456f4d14",
"text": "We present DKPro Similarity, an open source framework for text similarity. Our goal is to provide a comprehensive repository of text similarity measures which are implemented using standardized interfaces. DKPro Similarity comprises a wide variety of measures ranging from ones based on simple n-grams and common subsequences to high-dimensional vector comparisons and structural, stylistic, and phonetic measures. In order to promote the reproducibility of experimental results and to provide reliable, permanent experimental conditions for future studies, DKPro Similarity additionally comes with a set of full-featured experimental setups which can be run out-of-the-box and be used for future systems to built upon.",
"title": ""
},
{
"docid": "23bf7564a02a2926e39af2ef1d5499ad",
"text": "Welcome to PLoS Computational Biology, a community journal from the Public Library of Science dedicated to reporting biological advances achieved through computation. The journal is published in partnership with the International Society for Computational Biology (ISCB). The importance of this partnership is described in the accompanying letter from Michael Gribskov, ISCB president. What motivates us to start a new journal at this time? Computation, driven in part by the influx of large amounts of data at all biological scales, has become a central feature of research and discovery in the life sciences. This work tends to be published either in methods journals that are not read by experimentalists or in one of the numerous journals reporting novel biology, each of which publishes only small amounts of computational research. Hence, the impact of this research is diluted. PLoS Computational Biology provides a home for important biological research driven by computation—a place where computational biologists can find the best work produced by their colleagues, and where the broader biological community can see the myriad ways computation is advancing our understanding of biological systems. PLoS Computational Biology is governed by one overarching principle: scientific quality. This quality is reflected in the editorial board and the editorial staff. The editorial board members are leaders in their respective scientific areas and have agreed to give their valuable time to support a quality journal in their field. Behind the scenes, through a rigorous presubmission process, three quality reviews for each paper, and an acceptance rate below 20%, the editors and staff already knew in the six months since the journal was launched that we were producing a first-rate product. The scientific content is now here for all of you to see and will continue to build in the months and years to come.",
"title": ""
},
{
"docid": "dc7474e5e82f06eb1feb7c579fd713a7",
"text": "OBJECTIVE\nTo determine the current values and estimate the projected values (to the year 2041) for annual number of proximal femoral fractures (PFFs), age-adjusted rates of fracture, rates of death in the acute care setting, associated length of stay (LOS) in hospital, and seasonal variation by sex and age in elderly Canadians.\n\n\nDESIGN\nHospital discharge data for fiscal year 1993-94 from the Canadian Institute for Health Information were used to determine PFF incidence, and Statistics Canada population projections were used to estimate the rate and number of PFFs to 2041.\n\n\nSETTING\nCanada.\n\n\nPARTICIPANTS\nCanadian patients 65 years of age or older who underwent hip arthroplasty.\n\n\nOUTCOME MEASURES\nPFF rates, death rates and LOS by age, sex and province.\n\n\nRESULTS\nIn 1993-94 the incidence of PFF increased exponentially with increasing age. The age-adjusted rates were 479 per 100,000 for women and 187 per 100,000 for men. The number of PFFs was estimated at 23,375 (17,823 in women and 5552 in men), with a projected increase to 88,124 in 2041. The rate of death during the acute care stay increased exponentially with increasing age. The death rates for men were twice those for women. In 1993-94 an estimated 1570 deaths occurred in the acute care setting, and 7000 deaths were projected for 2041. LOS in the acute care setting increased with advancing age, as did variability in LOS, which suggests a more heterogeneous case mix with advancing age. The LOS for 1993-94 and 2041 was estimated at 465,000 and 1.8 million patient-days respectively. Seasonal variability in the incidence of PFFs by sex was not significant. Significant season-province interactions were seen (p < 0.05); however, the differences in incidence were small (on the order of 2% to 3%) and were not considered to have a large effect on resource use in the acute care setting.\n\n\nCONCLUSIONS\nOn the assumption that current conditions contributing to hip fractures will remain constant, the number of PFFs will rise exponentially over the next 40 years. The results of this study highlight the serious implications for Canadians if incidence rates are not reduced by some form of intervention.",
"title": ""
},
{
"docid": "0c487b9609add0666915411b8b56ba61",
"text": "In order to understand the reasons that lead individuals to practice physical activity, researchers developed the Motives for Physical Activity Measure-Revised (MPAM-R) scale. In 2010, a translation of MPAM-R to Portuguese and its validation was performed. However, psychometric measures were not acceptable. In addition, factor scores in some sports psychology scales are calculated by the mean of scores by items of the factor. Nevertheless, it seems appropriate that items with higher factor loadings, extracted by Factor Analysis, have greater weight in the factor score, as items with lower factor loadings have less weight in the factor score. The aims of the present study are to translate, validate the MPAM-R for Portuguese versions, and investigate agreement between two methods used to calculate factor scores. Three hundred volunteers who were involved in physical activity programs for at least 6 months were collected. Confirmatory Factor Analysis of the 30 items indicated that the version did not fit the model. After excluding four items, the final model with 26 items showed acceptable model fit measures by Exploratory Factor Analysis, as well as it conceptually supports the five factors as the original proposal. When two methods are compared to calculate factors scores, our results showed that only \"Enjoyment\" and \"Appearance\" factors showed agreement between methods to calculate factor scores. So, the Portuguese version of the MPAM-R can be used in a Brazilian context, and a new proposal for the calculation of the factor score seems to be promising.",
"title": ""
},
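As a small illustration of the two scoring rules contrasted above (made-up numbers, not the study's data), the following Python snippet computes a factor score once as the plain mean of a factor's items and once as a mean weighted by hypothetical factor loadings.

```python
import numpy as np

# hypothetical responses of 4 participants to 3 items of one factor (1-7 scale)
items = np.array([[5, 6, 4],
                  [3, 2, 4],
                  [7, 7, 6],
                  [4, 5, 5]], dtype=float)

# hypothetical factor loadings of the 3 items, e.g. as extracted by an EFA
loadings = np.array([0.82, 0.71, 0.45])

mean_scores = items.mean(axis=1)                      # classic rule: simple mean
weighted_scores = items @ loadings / loadings.sum()   # loading-weighted rule

for m, w in zip(mean_scores, weighted_scores):
    print(f"mean = {m:.2f}   loading-weighted = {w:.2f}")
```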
{
"docid": "1ce09062b1ced2cd643c04f7c075c4f1",
"text": "We propose a new approach to the task of fine grained entity type classifications based on label embeddings that allows for information sharing among related labels. Specifically, we learn an embedding for each label and each feature such that labels which frequently co-occur are close in the embedded space. We show that it outperforms state-of-the-art methods on two fine grained entity-classification benchmarks and that the model can exploit the finer-grained labels to improve classification of standard coarse types.",
"title": ""
},
{
"docid": "dbf5fd755e91c4a67446dcce2d8759ba",
"text": "JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact support@jstor.org. .",
"title": ""
},
{
"docid": "6dec69ef6fef78adf66911347aa8b800",
"text": "The use of random perturbations of ground truth data, such as random translation or scaling of bounding boxes, is a common heuristic used for data augmentation that has been shown to prevent overfitting and improve generalization. Since the design of data augmentation is largely guided by reported best practices, it is difficult to understand if those design choices are optimal. To provide a more principled perspective, we develop a game-theoretic interpretation of data augmentation in the context of object detection. We aim to find an optimal adversarial perturbations of the ground truth data (i.e., the worst case perturbations) that forces the object bounding box predictor to learn from the hardest distribution of perturbed examples for better test-time performance. We establish that the game theoretic solution, the Nash equilibrium, provides both an optimal predictor and optimal data augmentation distribution. We show that our adversarial method of training a predictor can significantly improve test time performance for the task of object detection. On the ImageNet object detection task, our adversarial approach improves performance by over 16% compared to the best performing data augmentation method.",
"title": ""
},
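A hedged sketch of the contrast drawn above, assuming axis-aligned boxes in (x, y, w, h) form and a generic loss_fn supplied by the caller: instead of applying one random translation/scaling, it samples several and keeps the one with the highest training loss as a crude stand-in for the worst case (the paper's Nash-equilibrium formulation is more involved than this).

```python
import numpy as np

def perturb_box(box, rng, max_shift=0.1, max_scale=0.1):
    """Randomly translate/scale a ground-truth box (x, y, w, h), relative to its size."""
    x, y, w, h = box
    dx, dy = rng.uniform(-max_shift, max_shift, 2) * (w, h)
    sw, sh = 1 + rng.uniform(-max_scale, max_scale, 2)
    return np.array([x + dx, y + dy, w * sw, h * sh])

def hardest_perturbation(box, loss_fn, n_samples=16, seed=0):
    """Approximate worst-case augmentation: keep the sampled perturbation
    that maximizes the detector's training loss."""
    rng = np.random.default_rng(seed)
    candidates = [perturb_box(box, rng) for _ in range(n_samples)]
    return max(candidates, key=loss_fn)

# toy usage: pretend the loss grows with distance from an "easy" reference box
easy = np.array([10.0, 10.0, 50.0, 80.0])
toy_loss = lambda b: float(np.linalg.norm(b - easy))
print(hardest_perturbation(easy, toy_loss))
```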
{
"docid": "30e798ef3668df14f1625d40c53011a0",
"text": "Classification with big data has become one of the latest trends when talking about learning from the available information. The data growth in the last years has rocketed the interest in effectively acquiring knowledge to analyze and predict trends. The variety and veracity that are related to big data introduce a degree of uncertainty that has to be handled in addition to the volume and velocity requirements. This data usually also presents what is known as the problem of classification with imbalanced datasets, a class distribution where the most important concepts to be learned are presented by a negligible number of examples in relation to the number of examples from the other classes. In order to adequately deal with imbalanced big data we propose the Chi-FRBCS-BigDataCS algorithm, a fuzzy rule based classification system that is able to deal with the uncertainly that is introduced in large volumes of data without disregarding the learning in the underrepresented class. The method uses the MapReduce framework to distribute the computational operations of the fuzzy model while it includes cost-sensitive learning techniques in its design to address the imbalance that is present in the data. The good performance of this approach is supported by the experimental analysis that is carried out over twenty-four imbalanced big data cases of study. The results obtained show that the proposal is able to handle these problems obtaining competitive results both in the classification performance of the model and the time needed for the computation. © 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "656a83e0cc9631a282e81f4143042ae7",
"text": "Mobility Management in future cellular networks is becoming more challenging due to transition from macro only to multi-tier deployments. In this framework, the massive use of small cells rendered traditional Handover algorithms inappropriate to deal effectively with frequent Handovers, especially for fast users, in dense urban scenarios. Studies in this area focus mainly on the adjustment of the Hysteresis Margin and on the Time-to-Trigger (TTT) selection in line with the Self-Organized Networks (SON), concept. In that sense, the ability of each node to adapt its parameters to the actual scenario is considered vital for the smooth operation of the network. This work contributes to the latter by analyzing the dependence of the Handover performance on the inter-site distance between the macro cell and the small cell. Specifically, the most common KPIs (i.e. Handover, Ping Pong and Radio Link Failure probabilities) are analyzed for different inter-site distances and TTT values to provide solid basis for the TTT selection.",
"title": ""
},
{
"docid": "7b08d9e80d61788c9fdd01cdac917f5b",
"text": "Resonant dc-dc converters offer several advantages over the more conventional PWM converters. Some of these advantages include: 1) low switching losses and low transistor stresses; 2) medium speed diodes are sufficient (transistor parasitic, inverse-parallel diodes can be used, even for frequencies in the hundreds of kilohertz); and 3) ability to step the input voltage up or down. This paper presents an analysis of a resonant converter which contains a capacitive-input output filter, rather than the more conventional inductor-input output filter. The switching waveforms are derived and design curves presented along with experimental data. The results are compared to the inductor-input filter case obtained from an earlier paper.",
"title": ""
},
{
"docid": "8115fddcf7bd64ad0976619f0a51e5a8",
"text": "Current research in content-based semantic image understanding is largely confined to exemplar-based approaches built on low-level feature extraction and classification. The ability to extract both low-level and semantic features and perform knowledge integration of different types of features is expected to raise semantic image understanding to a new level. Belief networks, or Bayesian networks (BN), have proven to be an effective knowledge representation and inference engine in artificial intelligence and expert systems research. Their effectiveness is due to the ability to explicitly integrate domain knowledge in the network structure and to reduce a joint probability distribution to conditional independence relationships. In this paper, we present a general-purpose knowledge integration framework that employs BN in integrating both low-level and semantic features. The efficacy of this framework is demonstrated via three applications involving semantic understanding of pictorial images. The first application aims at detecting main photographic subjects in an image, the second aims at selecting the most appealing image in an event, and the third aims at classifying images into indoor or outdoor scenes. With these diverse examples, we demonstrate that effective inference engines can be built within this powerful and flexible framework according to specific domain knowledge and available training data to solve inherently uncertain vision problems. 2005 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
6fdaf65ddf3fd2c7be9f96d314379737
|
Performance analysis of idle programs
|
[
{
"docid": "dcac926ace799d43fedb9c27056a7729",
"text": "Jinsight is a tool for exploring a program’s run-time behavior visually. It is helpful for performance analysis, debugging, and any task in which you need to better understand what your Java program is really doing. Jinsight is designed specifically with object-oriented and multithreaded programs in mind. It exposes many facets of program behavior that elude conventional tools. It reveals object lifetimes and communication, and attendant performance bottlenecks. It shows thread interactions, deadlocks, and garbage collector activity. It can also help you find and fix memory leaks, which remain a hazard despite garbage collection. A user explores program execution through one or more views. Jinsight offers several types of views, each geared toward distinct aspects of object-oriented and multithreaded program behavior. The user has several different perspectives from which to discern performance problems, unexpected behavior, or bugs small and large. Moreover, the views are linked to each other in many ways, allowing navigation from one view to another. Navigation makes the collection of views far more powerful than the sum of their individual strengths.",
"title": ""
},
{
"docid": "e06cc2a4291c800a76fd2a107d2230e4",
"text": "Surprisingly, console logs rarely help operators detect problems in large-scale datacenter services, for they often consist of the voluminous intermixing of messages from many software components written by independent developers. We propose a general methodology to mine this rich source of information to automatically detect system runtime problems. We first parse console logs by combining source code analysis with information retrieval to create composite features. We then analyze these features using machine learning to detect operational problems. We show that our method enables analyses that are impossible with previous methods because of its superior ability to create sophisticated features. We also show how to distill the results of our analysis to an operator-friendly one-page decision tree showing the critical messages associated with the detected problems. We validate our approach using the Darkstar online game server and the Hadoop File System, where we detect numerous real problems with high accuracy and few false positives. In the Hadoop case, we are able to analyze 24 million lines of console logs in 3 minutes. Our methodology works on textual console logs of any size and requires no changes to the service software, no human input, and no knowledge of the software's internals.",
"title": ""
},
{
"docid": "32e92e1be00613e06a7bc03d457704ac",
"text": "Computer systems often fail due to many factors such as software bugs or administrator errors. Diagnosing such production run failures is an important but challenging task since it is difficult to reproduce them in house due to various reasons: (1) unavailability of users' inputs and file content due to privacy concerns; (2) difficulty in building the exact same execution environment; and (3) non-determinism of concurrent executions on multi-processors.\n Therefore, programmers often have to diagnose a production run failure based on logs collected back from customers and the corresponding source code. Such diagnosis requires expert knowledge and is also too time-consuming, tedious to narrow down root causes. To address this problem, we propose a tool, called SherLog, that analyzes source code by leveraging information provided by run-time logs to infer what must or may have happened during the failed production run. It requires neither re-execution of the program nor knowledge on the log's semantics. It infers both control and data value information regarding to the failed execution.\n We evaluate SherLog with 8 representative real world software failures (6 software bugs and 2 configuration errors) from 7 applications including 3 servers. Information inferred by SherLog are very useful for programmers to diagnose these evaluated failures. Our results also show that SherLog can analyze large server applications such as Apache with thousands of logging messages within only 40 minutes.",
"title": ""
},
{
"docid": "7e0c7042c7bc4d1084234f48dd2e0333",
"text": "Many interesting large-scale systems are distributed systems of multiple communicating components. Such systems can be very hard to debug, especially when they exhibit poor performance. The problem becomes much harder when systems are composed of \"black-box\" components: software from many different (perhaps competing) vendors, usually without source code available. Typical solutions-provider employees are not always skilled or experienced enough to debug these systems efficiently. Our goal is to design tools that enable modestly-skilled programmers (and experts, too) to isolate performance bottlenecks in distributed systems composed of black-box nodes.We approach this problem by obtaining message-level traces of system activity, as passively as possible and without any knowledge of node internals or message semantics. We have developed two very different algorithms for inferring the dominant causal paths through a distributed system from these traces. One uses timing information from RPC messages to infer inter-call causality; the other uses signal-processing techniques. Our algorithms can ascribe delay to specific nodes on specific causal paths. Unlike previous approaches to similar problems, our approach requires no modifications to applications, middleware, or messages.",
"title": ""
}
] |
[
{
"docid": "232630ca0d4c30d9fc479e069578ad05",
"text": "Normalization techniques have only recently begun to be exploited in supervised learning tasks. Batch normalization exploits mini-batch statistics to normalize the activations. This was shown to speed up training and result in better models. However its success has been very limited when dealing with recurrent neural networks. On the other hand, layer normalization normalizes the activations across all activities within a layer. This was shown to work well in the recurrent setting. In this paper we propose a unified view of normalization techniques, as forms of divisive normalization, which includes layer and batch normalization as special cases. Our second contribution is the finding that a small modification to these normalization schemes, in conjunction with a sparse regularizer on the activations, leads to significant benefits over standard normalization techniques. We demonstrate the effectiveness of our unified divisive normalization framework in the context of convolutional neural nets and recurrent neural networks, showing improvements over baselines in image classification, language modeling as well as super-resolution.",
"title": ""
},
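The unifying view described above can be illustrated with a few lines of numpy, assuming 2D activations of shape (batch, features) and omitting the learnable gain/bias and the sparse regularizer: the same mean-and-variance normalization reduces to a batch-norm-like or a layer-norm-like scheme depending on which axis the statistics are pooled over.

```python
import numpy as np

def divisive_normalize(x, axis, eps=1e-5):
    """Subtract the mean and divide by the std computed over `axis`.
    axis=0 pools over the mini-batch (batch norm style);
    axis=1 pools over the features of each example (layer norm style)."""
    mu = x.mean(axis=axis, keepdims=True)
    var = x.var(axis=axis, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

x = np.random.randn(4, 8)               # (batch, features)
bn_like = divisive_normalize(x, axis=0)
ln_like = divisive_normalize(x, axis=1)
print(bn_like.mean(axis=0).round(6))    # ~0 per feature
print(ln_like.mean(axis=1).round(6))    # ~0 per example
```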
{
"docid": "2200edf1e0be6412c6c0ecfbb487ca2f",
"text": "Algebraic effects are an interesting way to structure effectful programs and offer new modularity properties. We present the Scala library Effekt, which is implemented in terms of a monad for multi-prompt delimited continuations and centered around capability passing. This makes the newly proposed feature of implicit function types a perfect fit for the syntax of our library. Basing the library design on capability passing and a polymorphic embedding of effect handlers furthermore opens up interesting dimensions of extensibility. Preliminary benchmarks comparing Effekt with an established library suggest significant speedups.",
"title": ""
},
{
"docid": "8ceb8a3f659b18e5d95da60c10ca7ae3",
"text": "In recent years the power systems research community has seen an explosion of work applying operations research techniques to challenging power network optimization problems. Regardless of the application under consideration, all of these works rely on power system test cases for evaluation and validation. However, many of the well established power system test cases were developed as far back as the 1960s with the aim of testing AC power flow algorithms. It is unclear if these power flow test cases are suitable for power system optimization studies. This report surveys all of the publicly available AC transmission system test cases, to the best of our knowledge, and assess their suitability for optimization tasks. It finds that many of the traditional test cases are missing key network operation constraints, such as line thermal limits and generator capability curves. To incorporate these missing constraints, data driven models are developed from a variety of publicly available data sources. The resulting extended test cases form a compressive archive, NESTA, for the evaluation and validation of power system optimization algorithms.",
"title": ""
},
{
"docid": "295decfc6cbfe44ee20455fd551c0a45",
"text": "Ultraviolet (UV) photodetectors have drawn extensive attention owing to their applications in industrial, environmental and even biological fields. Compared to UV-enhanced Si photodetectors, a new generation of wide bandgap semiconductors, such as (Al, In) GaN, diamond, and SiC, have the advantages of high responsivity, high thermal stability, robust radiation hardness and high response speed. On the other hand, one-dimensional (1D) nanostructure semiconductors with a wide bandgap, such as β-Ga2O3, GaN, ZnO, or other metal-oxide nanostructures, also show their potential for high-efficiency UV photodetection. In some cases such as flame detection, high-temperature thermally stable detectors with high performance are required. This article provides a comprehensive review on the state-of-the-art research activities in the UV photodetection field, including not only semiconductor thin films, but also 1D nanostructured materials, which are attracting more and more attention in the detection field. A special focus is given on the thermal stability of the developed devices, which is one of the key characteristics for the real applications.",
"title": ""
},
{
"docid": "ddb77ec8a722c50c28059d03919fb299",
"text": "Among the smart cities applications, optimizing lottery games is one of the urgent needs to ensure their fairness and transparency. The emerging blockchain technology shows a glimpse of solutions to fairness and transparency issues faced by lottery industries. This paper presents the design of a blockchain-based lottery system for smart cities applications. We adopt the smart contracts of blockchain technology and the cryptograph blockchain model, Hawk [8], to design the blockchain-based lottery system, FairLotto, for future smart cities applications. Fairness, transparency, and privacy of the proposed blockchain-based lottery system are discussed and ensured.",
"title": ""
},
{
"docid": "06abf54df209e736ada3a9a951b14300",
"text": "In this paper we present arguments supported by research examples for a fundamental shift of emphasis in education and its relation to technology, in particular AItechnology. No longer the ITS-paradigm dominates the field of AI and Education. New educational and pedagogic paradigms are being proposed and investigated, stressing the importance of learning how to learn instead of merely learning domain facts and rules of application. New uses of technology accompany this shift. We present trends and issues in this area exemplified by research projects and characterise three pedagogical scenarios in order to situate different modelling options for AI & Education.",
"title": ""
},
{
"docid": "8f2a2257948be74b02656bc8e693bd2e",
"text": "Due to the significant advancements in image processing and machine learning algorithms, it is much easier to create, edit, and produce high quality images. However, attackers can maliciously use these tools to create legitimate looking but fake images to harm others, bypass image detection algorithms, or fool image recognition classifiers. In this work, we propose neural network based classifiers to detect fake human faces created by both 1) machines and 2) humans. We use ensemble methods to detect GANs-created fake images and employ pre-processing techniques to improve fake face image detection created by humans. Our approaches focus on image contents for classification and do not use meta-data of images. Our preliminary results show that we can effectively detect both GANs-created images, and human-created fake images with 94% and 74.9% AUROC score.",
"title": ""
},
{
"docid": "afd35c8c8b59d99d5216f2f9999c41b7",
"text": "The paper throws light on Beam Division Multiple Access (BDMA) technique and various possible heirs of Orthogonal Frequency Division Multiplexing (OFDM) as modulation schemes anticipated to be used in futuristic 5G communications. This paper emphasizes on the concept of BDMA, a multiple access technique to overcome the limitations of existing frequency and time resource based access techniques for multiple access of same channel. The possible modulation schemes such as Filter Bank Multicarrier (FBMC), Faster Than Nyquist (FTN) signaling and Single Carrier Modulations (SCMs) that can be heir of OFDM are studied, simulated and a comparison of these techniques is done based on the simulation results. Possible application scenarios of OFDM heirs with respect to 5G implementation requirements are also proposed.",
"title": ""
},
{
"docid": "f551b3d24d1f6083e17ee60b925b0475",
"text": "This paper presents new image descriptors based on color, texture, shape, and wavelets for object and scene image classification. First, a new three Dimensional Local Binary Patterns (3D-LBP) descriptor, which produces three new color images, is proposed for encoding both color and texture information of an image. The 3D-LBP images together with the original color image then undergo the Haar wavelet and local features. Second, a novel H-descriptor, which integrates the 3D-LBP and the HOG of its wavelet transform, is presented to encode color, texture, shape, as well as local information. Feature extraction for the H-descriptor is implemented by means of Principal Component Analysis (PCA) and Enhanced Fisher Model (EFM) and classification by the nearest neighbor rule for object and scene image classification. And finally, an innovative H-fusion descriptor is proposed by fusing the PCA features of the H-descriptors in seven color spaces in order to further incorporate color information. Experimental results using three datasets, the Caltech 256 object categories dataset, the UIUC Sports Event dataset, and the MIT Scene dataset, show that the proposed new image descriptors achieve better image classification performance than other popular image descriptors, such as the Scale Invariant Feature Transform (SIFT), the Pyramid Histograms of visual Words (PHOW), the Pyramid Histograms of Oriented Gradients (PHOG), Spatial Envelope, Color SIFT four Concentric Circles (C4CC), Object Bank, the Hierarchical Matching Pursuit, as well as LBP. & 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "98f246414ecd65785be73b6b95fbd2b4",
"text": "The past few years have seen an enormous progress in the performance of Boolean satisfiability (SAT) solvers. Despite the worst-case exponential run time of all known algorithms, satisfiability solvers are increasingly leaving their mark as a general-purpose tool in areas as diverse as software and hardware verification [29–31, 228], automatic test pattern generation [138, 221], planning [129, 197], scheduling [103], and even challenging problems from algebra [238]. Annual SAT competitions have led to the development of dozens of clever implementations of such solvers [e. and the creation of an extensive suite of real-world instances as well as challenging hand-crafted benchmark problems [cf. 115]. Modern SAT solvers provide a \" black-box \" procedure that can often solve hard structured problems with over a million variables and several million constraints. In essence, SAT solvers provide a generic combinatorial reasoning and search platform. The underlying representational formalism is propositional logic. However, the full potential of SAT solvers only becomes apparent when one considers their use in applications that are not normally viewed as propositional reasoning tasks. For example, consider AI planning, which is a PSPACE-complete problem. By restricting oneself to polynomial size plans, one obtains an NP-complete reasoning problem , easily encoded as a Boolean satisfiability problem, which can be given to a SAT solver [128, 129]. In hardware and software verification, a similar strategy leads one to consider bounded model checking, where one places a bound on the length of possible error traces one is willing to consider [30]. Another example of a recent application of SAT solvers is in computing stable models used in the answer set programming paradigm, a powerful knowledge representation and reasoning approach [81]. In these applications—planning, verification, and answer set programming—the translation into a propositional representation (the \" SAT encoding \") is done automatically",
"title": ""
},
{
"docid": "82815db4a7c126eac340f2310bd73638",
"text": "Torque ripples due to cogging torque, current measurement errors, and flux harmonics restrict the application of the permanent magnet synchronous motor (PMSM) that has a high-precision requirement. The torque pulsation varies periodically along with the rotor position, and it results in speed ripples, which further degrade the performance of the PMSM servo system. Iterative learning control (ILC), in parallel with the classical proportional integral (PI) controller (i.e., PI-ILC), is a conventional method to suppress the torque ripples. However, it is sensitive to the system uncertainties and external disturbances, i.e., it is paralyzed to nonperiodic disturbances. Therefore, this paper proposes a robust ILC scheme achieved by an adaptive sliding mode control (SMC) technique to further reduce the torque ripples and improve the antidisturbance ability of the servo system. ILC is employed to reduce the periodic torque ripples and the SMC is used to guarantee fast response and strong robustness. An adaptive algorithm is utilized to estimate the system lumped disturbances, including parameter variations and external disturbances. The estimated value is utilized to compensate the robust ILC speed controller in order to eliminate the effects of the disturbance, and it can suppress the sliding mode chattering phenomenon simultaneously. Experiments were carried out on a digital signal processor-field programmable gate array based platform. The obtained experimental results demonstrate that the robust ILC scheme has an improved performance with minimized torque ripples and it exhibits a satisfactory antidisturbance performance compared to the PI-ILC method.",
"title": ""
},
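To make the iterative-learning part of the scheme above concrete, here is a minimal sketch of a plain P-type ILC update, u_{k+1}(t) = u_k(t) + L * e_k(t+1), applied over repeated cycles of a toy first-order plant with a periodic disturbance; the plant, the learning gain L and the reference are illustrative assumptions, and the paper's adaptive SMC compensation is not modeled.

```python
import numpy as np

def run_plant(u, a=0.9, b=0.5, d=0.2):
    """Toy plant y[t+1] = a*y[t] + b*u[t] + periodic disturbance (one cycle)."""
    y = np.zeros_like(u)
    for t in range(len(u) - 1):
        y[t + 1] = a * y[t] + b * u[t] + d * np.sin(2 * np.pi * t / len(u))
    return y

T = 200
ref = np.sin(2 * np.pi * np.arange(T) / T)   # periodic reference over one cycle
u = np.zeros(T)
L = 0.8                                      # learning gain (assumption)

for k in range(30):                          # each iteration is one repeated cycle
    y = run_plant(u)
    e = ref - y
    u[:-1] = u[:-1] + L * e[1:]              # P-type ILC update (one-step shift)
    if k % 10 == 0 or k == 29:
        print(f"iter {k:2d}  RMS tracking error = {np.sqrt(np.mean(e**2)):.4f}")
```

Because both the reference and the disturbance repeat identically every cycle, the tracking error shrinks from one iteration to the next, which is the property the abstract exploits for periodic torque ripple.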
{
"docid": "45d6863e54b343d7a081e79c84b81e65",
"text": "In order to obtain optimal 3D structure and viewing parameter estimates, bundle adjustment is often used as the last step of feature-based structure and motion estimation algorithms. Bundle adjustment involves the formulation of a large scale, yet sparse minimization problem, which is traditionally solved using a sparse variant of the Levenberg-Marquardt optimization algorithm that avoids storing and operating on zero entries. This paper argues that considerable computational benefits can be gained by substituting the sparse Levenberg-Marquardt algorithm in the implementation of bundle adjustment with a sparse variant of Powell's dog leg non-linear least squares technique. Detailed comparative experimental results provide strong evidence supporting this claim",
"title": ""
},
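For readers unfamiliar with the dog-leg step mentioned above, the following compact sketch computes a single Powell dog-leg step with dense numpy; J is the Jacobian of the residual vector r and Delta the trust-region radius. The whole point of the paper is to exploit the sparsity of bundle adjustment, which this dense illustration deliberately ignores.

```python
import numpy as np

def dogleg_step(J, r, Delta):
    """Powell's dog-leg step for min ||r + J h||^2 within a trust region of radius Delta."""
    g = J.T @ r                                        # gradient of 0.5*||r||^2
    alpha = (g @ g) / (np.linalg.norm(J @ g) ** 2)     # Cauchy (steepest-descent) step length
    h_sd = -alpha * g
    h_gn = -np.linalg.solve(J.T @ J, g)                # Gauss-Newton step
    if np.linalg.norm(h_gn) <= Delta:
        return h_gn                                    # GN step already inside the region
    if np.linalg.norm(h_sd) >= Delta:
        return (Delta / np.linalg.norm(h_sd)) * h_sd   # clip the Cauchy step
    d = h_gn - h_sd                                    # walk from h_sd toward h_gn
    a, b, c = d @ d, 2 * (h_sd @ d), h_sd @ h_sd - Delta ** 2
    beta = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a) # hits the trust-region boundary
    return h_sd + beta * d

# toy usage on a linear residual r(x) = A x - y at x = 0
rng = np.random.default_rng(0)
A, y = rng.normal(size=(10, 3)), rng.normal(size=10)
print(dogleg_step(A, A @ np.zeros(3) - y, Delta=0.5))
```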
{
"docid": "111b5bfb34a76b0ea78a0fd58311d31f",
"text": "Wireless micro sensor networks have been identified as one of the most important technologies for the 21st century. This paper traces the history of research in sensor networks over the past three decades, including two important programs of the Defense Advanced Research Projects Agency (DARPA) spanning this period: the Distributed Sensor Networks (DSN) and the Sensor Information Technology (SensIT) programs. Technology trends that impact the development of sensor networks are reviewed and new applications such as infrastructure security, habitat monitoring, and traffic control are presented. Technical challenges in sensor network development include network discovery, control and routing, collaborative signal and information processing, tasking and querying, and security. The paper concludes by presenting some recent research results in sensor network algorithms, including localized algorithms and directed diffusion, distributed tracking in wireless ad hoc networks, and distributed classification using local agents. Keywords— Collaborative signal processing, micro sensors, net-work routing and control, querying and tasking, sensor networks, tracking and classification, wireless networks.",
"title": ""
},
{
"docid": "a804d188b4fd2b89efaf072d96ef1023",
"text": "Current state-of-the-art sports statistics compare players and teams to league average performance. For example, metrics such as “Wins-above-Replacement” (WAR) in baseball [1], “Expected Point Value” (EPV) in basketball [2] and “Expected Goal Value” (EGV) in soccer [3] and hockey [4] are now commonplace in performance analysis. Such measures allow us to answer the question “how does this player or team compare to the league average?” Even “personalized metrics” which can answer how a “player’s or team’s current performance compares to its expected performance” have been used to better analyze and improve prediction of future outcomes [5].",
"title": ""
},
{
"docid": "0cdf08bd9c2e63f0c9bb1dd7472a23a8",
"text": "Under natural viewing conditions, human observers shift their gaze to allocate processing resources to subsets of the visual input. Many computational models try to predict such voluntary eye and attentional shifts. Although the important role of high level stimulus properties (e.g., semantic information) in search stands undisputed, most models are based on low-level image properties. We here demonstrate that a combined model of face detection and low-level saliency significantly outperforms a low-level model in predicting locations humans fixate on, based on eye-movement recordings of humans observing photographs of natural scenes, most of which contained at least one person. Observers, even when not instructed to look for anything particular, fixate on a face with a probability of over 80% within their first two fixations; furthermore, they exhibit more similar scanpaths when faces are present. Remarkably, our model’s predictive performance in images that do not contain faces is not impaired, and is even improved in some cases by spurious face detector responses.",
"title": ""
},
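An illustrative sketch, with made-up maps and weights rather than the paper's model, of how a bottom-up saliency map can be blended with a face channel built by dropping Gaussian blobs at detected face boxes; the face detector itself is assumed to exist upstream and is not shown.

```python
import numpy as np

def gaussian_blob(shape, center, sigma):
    ys, xs = np.mgrid[:shape[0], :shape[1]]
    return np.exp(-((xs - center[0]) ** 2 + (ys - center[1]) ** 2) / (2 * sigma ** 2))

def combined_map(saliency, face_boxes, w_face=0.5):
    """Blend a low-level saliency map with Gaussian blobs at face detections."""
    face_map = np.zeros_like(saliency)
    for (x, y, w, h) in face_boxes:
        face_map += gaussian_blob(saliency.shape, (x + w / 2, y + h / 2), sigma=max(w, h) / 2)
    face_map /= face_map.max() + 1e-8
    out = (1 - w_face) * saliency + w_face * face_map
    return out / out.max()

# toy usage with a random "saliency map" and one hypothetical face box (x, y, w, h)
sal = np.random.rand(120, 160)
fix_map = combined_map(sal, face_boxes=[(60, 30, 40, 40)])
print(fix_map.shape, float(fix_map.max()))
```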
{
"docid": "d78609519636e288dae4b1fce36cb7a6",
"text": "Intelligent vehicles have increased their capabilities for highly and, even fully, automated driving under controlled environments. Scene information is received using onboard sensors and communication network systems, i.e., infrastructure and other vehicles. Considering the available information, different motion planning and control techniques have been implemented to autonomously driving on complex environments. The main goal is focused on executing strategies to improve safety, comfort, and energy optimization. However, research challenges such as navigation in urban dynamic environments with obstacle avoidance capabilities, i.e., vulnerable road users (VRU) and vehicles, and cooperative maneuvers among automated and semi-automated vehicles still need further efforts for a real environment implementation. This paper presents a review of motion planning techniques implemented in the intelligent vehicles literature. A description of the technique used by research teams, their contributions in motion planning, and a comparison among these techniques is also presented. Relevant works in the overtaking and obstacle avoidance maneuvers are presented, allowing the understanding of the gaps and challenges to be addressed in the next years. Finally, an overview of future research direction and applications is given.",
"title": ""
},
{
"docid": "ac95ed317bfcde1fd9e146cdd0c50fe5",
"text": "The development of literacy and reading proficiency is a building block of lifelong learning that must be supported both in the classroom and at home. While the promise of interactive learning technologies has widely been demonstrated, little is known about how an interactive robot might play a role in this development. We used eight design features based on recommendations from interest-development and human-robot-interaction literatures to design an in-home learning companion robot for children aged 11--12. The robot was used as a technology probe to explore families' (N=8) habits and views about reading, how a reading technology might be used, and how children perceived reading with the robot. Our results indicate reading with the learning companion to be a way to socially engage with reading, which may promote the development of reading interest and ability. We discuss design and research implications based on our findings.",
"title": ""
},
{
"docid": "a4a56e0647849c22b48e7e5dc3f3049b",
"text": "The paper describes a 2D sound source mapping system for a mobile robot. We developed a multiple sound sources localization method for a mobile robot with a 32 channel concentric microphone array. The system can separate multiple moving sound sources using direction localization. Directional localization and separation of different pressure sound sources is achieved using the delay and sum beam forming (DSBF) and the frequency band selection (FBS) algorithm. Sound sources were mapped by using a wheeled robot equipped with the microphone array. The robot localizes sounds direction on the move and estimates sound sources position using triangulation. Assuming the movement of sound sources, the system set a time limit and uses only the last few seconds data. By using the random sample consensus (RANSAC) algorithm for position estimation, we achieved 2D multiple sound source mapping from time limited data with high accuracy. Also, moving sound source separation is experimentally demonstrated with segments of the DSBF enhanced signal derived from the localization process",
"title": ""
},
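A simplified sketch of the delay-and-sum beamforming (DSBF) step mentioned above, restricted to a small linear array with integer-sample steering delays; the sampling rate, array geometry and source are illustrative, and the real system's 32-channel concentric array, FBS stage and triangulation are not modeled.

```python
import numpy as np

C = 343.0      # speed of sound [m/s]
FS = 48000     # sampling rate [Hz] (assumption)

def delay_and_sum(signals, mic_x, angle_deg):
    """Steer a linear array to `angle_deg` by delaying each channel and summing.
    signals: (n_mics, n_samples); mic_x: mic positions along one axis [m]."""
    delays = mic_x * np.sin(np.deg2rad(angle_deg)) / C        # relative time delays
    shifts = np.round((delays - delays.min()) * FS).astype(int)
    n = signals.shape[1] - shifts.max()
    out = np.zeros(n)
    for sig, s in zip(signals, shifts):
        out += sig[s:s + n]
    return out / len(signals)

def scan_directions(signals, mic_x, angles):
    """Steered-response power for each candidate direction."""
    return np.array([np.mean(delay_and_sum(signals, mic_x, a) ** 2) for a in angles])

# toy usage: 4-mic array, white-noise source simulated as arriving from 30 degrees
mic_x = np.array([0.0, 0.05, 0.10, 0.15])
src = np.random.randn(FS)
true_shifts = np.round(mic_x * np.sin(np.deg2rad(30)) / C * FS).astype(int)
signals = np.stack([np.roll(src, s) for s in true_shifts])
angles = np.arange(-90, 91, 5)
print(angles[np.argmax(scan_directions(signals, mic_x, angles))])   # should peak near 30
```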
{
"docid": "afd378cf5e492a9627e746254586763b",
"text": "Gradient-based optimization has enabled dramatic advances in computational imaging through techniques like deep learning and nonlinear optimization. These methods require gradients not just of simple mathematical functions, but of general programs which encode complex transformations of images and graphical data. Unfortunately, practitioners have traditionally been limited to either hand-deriving gradients of complex computations, or composing programs from a limited set of coarse-grained operators in deep learning frameworks. At the same time, writing programs with the level of performance needed for imaging and deep learning is prohibitively difficult for most programmers.\n We extend the image processing language Halide with general reverse-mode automatic differentiation (AD), and the ability to automatically optimize the implementation of gradient computations. This enables automatic computation of the gradients of arbitrary Halide programs, at high performance, with little programmer effort. A key challenge is to structure the gradient code to retain parallelism. We define a simple algorithm to automatically schedule these pipelines, and show how Halide's existing scheduling primitives can express and extend the key AD optimization of \"checkpointing.\"\n Using this new tool, we show how to easily define new neural network layers which automatically compile to high-performance GPU implementations, and how to solve nonlinear inverse problems from computational imaging. Finally, we show how differentiable programming enables dramatically improving the quality of even traditional, feed-forward image processing algorithms, blurring the distinction between classical and deep methods.",
"title": ""
}
] |
scidocsrr
|
6cd50035535e13f8adc89e56f4f12971
|
Training Bit Fully Convolutional Network for Fast Semantic Segmentation
|
[
{
"docid": "3ff9dbdc3a28a55465121cab38c9ad64",
"text": "Recent advances in deep learning have made the use of large, deep neural networks with tens of millions of parameters suitable for a number of applications that require real-time processing. The sheer size of these networks can represent a challenging computational burden, even for modern CPUs. For this reason, GPUs are routinely used instead to train and run such networks. This paper is a tutorial for students and researchers on some of the techniques that can be used to reduce this computational cost considerably on modern x86 CPUs. We emphasize data layout, batching of the computation, the use of SSE2 instructions, and particularly leverage SSSE3 and SSE4 fixed-point instructions which provide a 3× improvement over an optimized floating-point baseline. We use speech recognition as an example task, and show that a real-time hybrid hidden Markov model / neural network (HMM/NN) large vocabulary system can be built with a 10× speedup over an unoptimized baseline and a 4× speedup over an aggressively optimized floating-point baseline at no cost in accuracy. The techniques described extend readily to neural network training and provide an effective alternative to the use of specialized hardware.",
"title": ""
}
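The core trick described above (quantize weights and activations to 8-bit integers, accumulate the products in 32 bits, then rescale) can be illustrated in plain numpy without any SSE intrinsics; the layer size and the symmetric per-tensor scaling below are arbitrary choices, not the paper's exact scheme.

```python
import numpy as np

def quantize(x, bits=8):
    """Symmetric linear quantization to signed integers; returns (q, scale)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 512)).astype(np.float32)   # weights of one layer
a = rng.normal(size=512).astype(np.float32)          # input activations

qW, sW = quantize(W)
qa, sa = quantize(a)

# int8 x int8 products accumulated in int32, then rescaled back to float
acc = qW.astype(np.int32) @ qa.astype(np.int32)
y_fixed = acc.astype(np.float32) * (sW * sa)
y_float = W @ a

rel_err = np.linalg.norm(y_fixed - y_float) / np.linalg.norm(y_float)
print(f"relative error of the fixed-point path: {rel_err:.4f}")
```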
] |
[
{
"docid": "553719cb1cb8829ceaf8e0f1a40953ff",
"text": "“The distinctive faculties of Man are visibly expressed in his elevated cranial domeda feature which, though much debased in certain savage races, essentially characterises the human species. But, considering that the Neanderthal skull is eminently simial, both in its general and particular characters, I feel myself constrained to believe that the thoughts and desires which once dwelt within it never soared beyond those of a brute. The Andamaner, it is indisputable, possesses but the dimmest conceptions of the existence of the Creator of the Universe: his ideas on this subject, and on his own moral obligations, place him very little above animals of marked sagacity; nevertheless, viewed in connection with the strictly human conformation of his cranium, they are such as to specifically identify him with Homo sapiens. Psychical endowments of a lower grade than those characterising the Andamaner cannot be conceived to exist: they stand next to brute benightedness. (.) Applying the above argument to the Neanderthal skull, and considering . that it more closely conforms to the brain-case of the Chimpanzee, . there seems no reason to believe otherwise than that similar darkness characterised the being to which the fossil belonged” (King, 1864; pp. 96).",
"title": ""
},
{
"docid": "fec345f9a3b2b31bd76507607dd713d4",
"text": "E-government is a relatively new branch of study within the Information Systems (IS) field. This paper examines the factors influencing adoption of e-government services by citizens. Factors that have been explored in the extant literature present inadequate understanding of the relationship that exists between ‘adopter characteristics’ and ‘behavioral intention’ to use e-government services. These inadequacies have been identified through a systematic and thorough review of empirical studies that have considered adoption of government to citizen (G2C) electronic services by citizens. This paper critically assesses key factors that influence e-government service adoption; reviews limitations of the research methodologies; discusses the importance of 'citizen characteristics' and 'organizational factors' in adoption of e-government services; and argues for the need to examine e-government service adoption in the developing world.",
"title": ""
},
{
"docid": "0252e39c527c3694da09dac7f136c403",
"text": "It is a generally accepted fact that Off-the-shelf OCR engines do not perform well in unconstrained scenarios like natural scene imagery, where text appears among the clutter of the scene. However, recent research demonstrates that a conventional shape-based OCR engine would be able to produce competitive results in the end-to-end scene text recognition task when provided with a conveniently preprocessed image. In this paper we confirm this finding with a set of experiments where two off-the-shelf OCR engines are combined with an open implementation of a state-of-the-art scene text detection framework. The obtained results demonstrate that in such pipeline, conventional OCR solutions still perform competitively compared to other solutions specifically designed for scene text recognition.",
"title": ""
},
{
"docid": "6610f89ba1776501d6c0d789703deb4e",
"text": "REVIEW QUESTION/OBJECTIVE\nThe objective of this review is to identify the effectiveness of mindfulness based programs in reducing stress experienced by nurses in adult hospitalized patient care settings.\n\n\nBACKGROUND\nNursing professionals face extraordinary stressors in the medical environment. Many of these stressors have always been inherent to the profession: long work hours, dealing with pain, loss and emotional suffering, caring for dying patients and providing support to families. Recently nurses have been experiencing increased stress related to other factors such as staffing shortages, increasingly complex patients, corporate financial constraints and the increased need for knowledge of ever-changing technology. Stress affects high-level cognitive functions, specifically attention and memory, and this increases the already high stakes for nurses. Nurses are required to cope with very difficult situations that require accurate, timely decisions that affect human lives on a daily basis.Lapses in attention increase the risk of serious consequences such as medication errors, failure to recognize life-threatening signs and symptoms, and other essential patient safety issues. Research has also shown that the stress inherent to health care occupations can lead to depression, reduced job satisfaction, psychological distress and disruptions to personal relationships. These outcomes of stress are factors that create scenarios for risk of patient harm.There are three main effects of stress on nurses: burnout, depression and lateral violence. Burnout has been defined as a syndrome of depersonalization, emotional exhaustion, and a sense of low personal accomplishment, and the occurrence of burnout has been closely linked to perceived stress. Shimizu, Mizoue, Mishima and Nagata state that nurses experience considerable job stress which has been a major factor in the high rates of burnout that has been recorded among nurses. Zangaro and Soeken share this opinion and state that work related stress is largely contributing to the current nursing shortage. They report that work stress leads to a much higher turnover, especially during the first year after graduation, lowering retention rates in general.In a study conducted in Pennsylvania, researchers found that while 43% of the nurses who reported high levels of burnout indicated their intent to leave their current position, only 11% of nurses who were not burned out intended to leave in the following 12 months. In the same study patient-to-nurse ratios were significantly associated with emotional exhaustion and burnout. An increase of one patient per nurse assignment to a hospital's staffing level increased burnout by 23%.Depression can be defined as a mood disorder that causes a persistent feeling of sadness and loss of interest. Wang found that high levels of work stress were associated with higher risk of mood and anxiety disorders. In Canada one out of every 10 nurses have shown depressive symptoms; compared to the average of 5.1% of the nurses' counterparts who do not work in healthcare. High incidences of depression and depressive symptoms were also reported in studies among Chinese nurses (38%) and Taiwanese nurses (27.7%). In the Taiwanese study the occurrence of depression was significantly and positively correlated to job stress experienced by the nurses (p<0.001).In a multivariate logistic regression, Ohler, Kerr and Forbes also found that job stress was significantly correlated to depression in nurses. 
The researchers reported that nurses who experienced a higher degree of job stress were 80% more likely to have suffered a major depressive episode in the previous year. A further finding in this study revealed that 75% of the participants also suffered from at least one chronic disease revealing a strong association between depression and other major health issues.A stressful working environment, such as a hospital, could potentially lead to lateral violence among nurses. Lateral violence is a serious occupational health concern among nurses as evidenced by extensive research and literature available on the topic. The impact of lateral violence has been well studied and documented over the past three decades. Griffin and Clark state that lateral violence is a form of bullying grounded in the theoretical framework of the oppression theory. The bullying behaviors occur among members of an oppressed group as a result of feeling powerless and having a perceived lack of control in their workplace. Griffin identified the ten most common forms of lateral violence among nurses as \"non-verbal innuendo, verbal affront, undermining activities, withholding information, sabotage, infighting, scape-goating, backstabbing, failure to respect privacy, and broken confidences\". Nurse-to-nurse lateral violence leads to negative workplace relationships and disrupts team performance, creating an environment where poor patient outcomes, burnout and high staff turnover rates are prevalent.Work-related stressors have been indicated as a potential cause of lateral violence. According to the Effort Reward Imbalance model (ERI) developed by Siegrist, work stress develops when an imbalance exists between the effort individuals put into their jobs and the rewards they receive in return. The ERI model has been widely used in occupational health settings based on its predictive power for adverse health and well-being outcomes. The model claims that both high efforts with low rewards could lead to negative emotions in the exposed employees. Vegchel, van Jonge, de Bosma & Schaufeli state that, according to the ERI model, occupational rewards mostly consist of money, esteem and job security or career opportunities. A survey conducted by Reineck & Furino indicated that registered nurses had a very high regard for the intrinsic rewards of their profession but that they identified workplace relationships and stress issues as some of the most important contributors to their frustration and exhaustion. Hauge, Skogstad & Einarsen state that work-related stress further increases the potential for lateral violence as it creates a negative environment for both the target and the perpetrator.Mindfulness based programs have proven to be a promising intervention in reducing stress experienced by nurses. Mindfulness was originally defined by Jon Kabat-Zinn in 1979 as \"paying attention on purpose, in the present moment, and nonjudgmentally, to the unfolding of experience moment to moment\". The Mindfulness Based Stress Reduction (MBSR) program is an educationally based program that focuses on training in the contemplative practice of mindfulness. It is an eight-week program where participants meet weekly for two-and-a-half hours and join a one-day long retreat for six hours. The program incorporates a combination of mindfulness meditation, body awareness and yoga to help increase mindfulness in participants. The practice is meant to facilitate relaxation in the body and calming of the mind by focusing on present-moment awareness. 
The program has proven to be effective in reducing stress, improving quality of life and increasing self-compassion in healthcare professionals.Researchers have demonstrated that mindfulness interventions can effectively reduce stress, anxiety and depression in both clinical and non-clinical populations. In a meta-analysis of seven studies conducted with healthy participants from the general public, the reviewers reported a significant reduction in stress when the treatment and control groups were compared. However, there have been limited studies to date that focused specifically on the effectiveness of mindfulness programs to reduce stress experienced by nurses.In addition to stress reduction, mindfulness based interventions can also enhance nurses' capacity for focused attention and concentration by increasing present moment awareness. Mindfulness techniques can be applied in everyday situations as well as stressful situations. According to Kabat-Zinn, work-related stress influences people differently based on their viewpoint and their interpretation of the situation. He states that individuals need to be able to see the whole picture, have perspective on the connectivity of all things and not operate on automatic pilot to effectively cope with stress. The goal of mindfulness meditation is to empower individuals to respond to situations consciously rather than automatically.Prior to the commencement of this systematic review, the Cochrane Library and JBI Database of Systematic Reviews and Implementation Reports were searched. No previous systematic reviews on the topic of reducing stress experienced by nurses through mindfulness programs were identified. Hence, the objective of this systematic review is to evaluate the best research evidence available pertaining to mindfulness-based programs and their effectiveness in reducing perceived stress among nurses.",
"title": ""
},
{
"docid": "b281f1244dbf31c492d34f0314f8b3e2",
"text": "CONTEXT\nThe National Consensus Project for Quality Palliative Care includes spiritual care as one of the eight clinical practice domains. There are very few standardized spirituality history tools.\n\n\nOBJECTIVES\nThe purpose of this pilot study was to test the feasibility for the Faith, Importance and Influence, Community, and Address (FICA) Spiritual History Tool in clinical settings. Correlates between the FICA qualitative data and quality of life (QOL) quantitative data also were examined to provide additional insight into spiritual concerns.\n\n\nMETHODS\nThe framework of the FICA tool includes Faith or belief, Importance of spirituality, individual's spiritual Community, and interventions to Address spiritual needs. Patients with solid tumors were recruited from ambulatory clinics of a comprehensive cancer center. Items assessing aspects of spirituality within the Functional Assessment of Cancer Therapy QOL tools were used, and all patients were assessed using the FICA. The sample (n=76) had a mean age of 57, and almost half were of diverse religions.\n\n\nRESULTS\nMost patients rated faith or belief as very important in their lives (mean 8.4; 0-10 scale). FICA quantitative ratings and qualitative comments were closely correlated with items from the QOL tools assessing aspects of spirituality.\n\n\nCONCLUSION\nFindings suggest that the FICA tool is a feasible tool for clinical assessment of spirituality. Addressing spiritual needs and concerns in clinical settings is critical in enhancing QOL. Additional use and evaluation by clinicians of the FICA Spiritual Assessment Tool in usual practice settings are needed.",
"title": ""
},
{
"docid": "5e95d54ef979a11ad18ec774210eb175",
"text": "Recently, neural network based sentence modeling methods have achieved great progress. Among these methods, the recursive neural networks (RecNNs) can effectively model the combination of the words in sentence. However, RecNNs need a given external topological structure, like syntactic tree. In this paper, we propose a gated recursive neural network (GRNN) to model sentences, which employs a full binary tree (FBT) structure to control the combinations in recursive structure. By introducing two kinds of gates, our model can better model the complicated combinations of features. Experiments on three text classification datasets show the effectiveness of our model.",
"title": ""
},
{
"docid": "35f61df81a2a31f68f2e5dd0501bcca4",
"text": "We present a generative framework for generalized zero-shot learning where the training and test classes are not necessarily disjoint. Built upon a variational autoencoder based architecture, consisting of a probabilistic encoder and a probabilistic conditional decoder, our model can generate novel exemplars from seen/unseen classes, given their respective class attributes. These exemplars can subsequently be used to train any off-the-shelf classification model. One of the key aspects of our encoder-decoder architecture is a feedback-driven mechanism in which a discriminator (a multivariate regressor) learns to map the generated exemplars to the corresponding class attribute vectors, leading to an improved generator. Our model's ability to generate and leverage examples from unseen classes to train the classification model naturally helps to mitigate the bias towards predicting seen classes in generalized zero-shot learning settings. Through a comprehensive set of experiments, we show that our model outperforms several state-of-the-art methods, on several benchmark datasets, for both standard as well as generalized zero-shot learning.",
"title": ""
},
{
"docid": "7c7adec92afb1fc3137de500d00c8c89",
"text": "Automatic service discovery is essential to realizing the full potential of the Internet of Things (IoT). While discovery protocols like Multicast DNS, Apple AirDrop, and Bluetooth Low Energy have gained widespread adoption across both IoT and mobile devices, most of these protocols do not offer any form of privacy control for the service, and often leak sensitive information such as service type, device hostname, device owner’s identity, and more in the clear. To address the need for better privacy in both the IoT and the mobile landscape, we develop two protocols for private service discovery and private mutual authentication. Our protocols provide private and authentic service advertisements, zero round-trip (0-RTT) mutual authentication, and are provably secure in the Canetti-Krawczyk key-exchange model. In contrast to alternatives, our protocols are lightweight and require minimal modification to existing key-exchange protocols. We integrate our protocols into an existing open-source distributed applications framework, and provide benchmarks on multiple hardware platforms: Intel Edisons, Raspberry Pis, smartphones, laptops, and desktops. Finally, we discuss some privacy limitations of the Apple AirDrop protocol (a peer-to-peer file sharing mechanism) and show how to improve the privacy of Apple AirDrop using our private mutual authentication protocol.",
"title": ""
},
{
"docid": "4eb2b9dbdee33f8fb2a45b1a67e119ab",
"text": "Wireless networks have faced increasing demand to cope with the exponential growth of data. Conventional architectures have hindered the evolution of network scalability. However, the introduction of cloud technology has brought tremendous flexible and scalable on demand resources. Thus, cloud radio access networks (C-RANs) have been introduced as a new trend in wireless technologies. Despite the novel advancements that C-RAN offers, remote radio head (RRH)-to-base band unit (BBU) resource allocation can cause significant downgrade in efficiency, particularly the allocation of computational resources in the BBU pool to densely deployed small cells. This causes an increase in power consumption and wasted resources. Consequently, an efficient resource allocation method is vital for achieving efficient resource consumption. In this paper, the optimal allocation of computational resources between RRHs and BBUs is modeled. This is dependent on having an optimal physical resource allocation for users to determine the required computational resources. For this purpose, an optimization problem that models the assignment of resources at these two levels is formulated. A decomposition model is adopted to solve the problem by formulating two binary integer programming subproblems; one for each level. Furthermore, two low complexity heuristic algorithms are developed to solve each subproblem. Results show that the computational resource requirements and the power consumption of BBUs and the physical machines decrease as the channel quality worsens. Moreover, the developed heuristic solution achieves a close to optimal performance while having a lower complexity. Finally, both models achieve high resource utilization, cementing the efficiency of the proposed solutions.",
"title": ""
},
{
"docid": "3d508f86f7f5b91b1d12617833adafdd",
"text": "In this paper, a lip-reading method using a novel dynamic feature of lip images is proposed. The dynamic feature of lip images is calculated as the first-order regression coefficients using a few neighboring frames (images). It constiutes a better representation of the time derivatives to the basic static image. The dynamic feature is processed by using convolution neural networks (CNNs), which are able to reduce the negative influence caused by shaking of the subject and face alignment blurring at the feature-extraction level. Its effectiveness has been confirmed by word-recognition experiments comparing the proposed method with the conventional static (original) image.",
"title": ""
},
{
"docid": "88ae7446c9a63086bda9109a696459bd",
"text": "OBJECTIVES\nTo perform a systematic review of neurologic involvement in Systemic sclerosis (SSc) and Localized Scleroderma (LS), describing clinical features, neuroimaging, and treatment.\n\n\nMETHODS\nWe performed a literature search in PubMed using the following MeSH terms, scleroderma, systemic sclerosis, localized scleroderma, localized scleroderma \"en coup de sabre\", Parry-Romberg syndrome, cognitive impairment, memory, seizures, epilepsy, headache, depression, anxiety, mood disorders, Center for Epidemiologic Studies Depression (CES-D), SF-36, Beck Depression Inventory (BDI), Beck Anxiety Inventory (BAI), Patient Health Questionnaire-9 (PHQ-9), neuropsychiatric, psychosis, neurologic involvement, neuropathy, peripheral nerves, cranial nerves, carpal tunnel syndrome, ulnar entrapment, tarsal tunnel syndrome, mononeuropathy, polyneuropathy, radiculopathy, myelopathy, autonomic nervous system, nervous system, electroencephalography (EEG), electromyography (EMG), magnetic resonance imaging (MRI), and magnetic resonance angiography (MRA). Patients with other connective tissue disease knowingly responsible for nervous system involvement were excluded from the analyses.\n\n\nRESULTS\nA total of 182 case reports/studies addressing SSc and 50 referring to LS were identified. SSc patients totalized 9506, while data on 224 LS patients were available. In LS, seizures (41.58%) and headache (18.81%) predominated. Nonetheless, descriptions of varied cranial nerve involvement and hemiparesis were made. Central nervous system involvement in SSc was characterized by headache (23.73%), seizures (13.56%) and cognitive impairment (8.47%). Depression and anxiety were frequently observed (73.15% and 23.95%, respectively). Myopathy (51.8%), trigeminal neuropathy (16.52%), peripheral sensorimotor polyneuropathy (14.25%), and carpal tunnel syndrome (6.56%) were the most frequent peripheral nervous system involvement in SSc. Autonomic neuropathy involving cardiovascular and gastrointestinal systems was regularly described. Treatment of nervous system involvement, on the other hand, varied in a case-to-case basis. However, corticosteroids and cyclophosphamide were usually prescribed in severe cases.\n\n\nCONCLUSIONS\nPreviously considered a rare event, nervous system involvement in scleroderma has been increasingly recognized. Seizures and headache are the most reported features in LS en coup de sabre, while peripheral and autonomic nervous systems involvement predominate in SSc. Moreover, recently, reports have frequently documented white matter lesions in asymptomatic SSc patients, suggesting smaller branches and perforating arteries involvement.",
"title": ""
},
{
"docid": "288f831e93e83b86d28624e31bb2f16c",
"text": "Deep learning has made significant improvements at many image processing tasks in recent years, such as image classification, object recognition and object detection. Convolutional neural networks (CNN), which is a popular deep learning architecture designed to process data in multiple array form, show great success to almost all detection & recognition problems and computer vision tasks. However, the number of parameters in a CNN is too high such that the computers require more energy and larger memory size. In order to solve this problem, we propose a novel energy efficient model Binary Weight and Hadamard-transformed Image Network (BWHIN), which is a combination of Binary Weight Network (BWN) and Hadamard-transformed Image Network (HIN). It is observed that energy efficiency is achieved with a slight sacrifice at classification accuracy. Among all energy efficient networks, our novel ensemble model outperforms other energy efficient models.",
"title": ""
},
{
"docid": "41b5a2a3cd5b9a338ed54b59ebf3022f",
"text": "Fucoidan is a complex sulfated polysaccharide, derived from marine brown seaweed. In the present study, we investigated the effects of fucoidan on improving learning and memory impairment in rats induced by infusion of Aβ (1-40), and its possible mechanisms. The results indicated that fucoidan could ameliorate Aβ-induced learning and memory impairment in animal behavioral tests. Furthermore, fucoidan reversed the decreased activity of choline acetyl transferase (ChAT), superoxide dismutase (SOD), glutathione peroxidase (GSH-Px) and content of acetylcholine (Ach), as well as the increased activity of acetylcholine esterase (AchE) and content of malondialdehyde (MDA) in hippocampal tissue of Aβ-injected rats. Moreover, these were accompanied by an increase of Bcl-2/Bax ratio and a decrease of caspase-3 activity. These results suggested that fucoidan could ameliorate the learning and memory abilities in Aβ-induced AD rats, and the mechanisms appeared to be due to regulating the cholinergic system, reducing oxidative stress and inhibiting the cell apoptosis.",
"title": ""
},
{
"docid": "eb3c507c88316ab5dce9d36708d1ba02",
"text": "Context: Agile software development with its emphasis on producing working code through frequent releases, extensive client interactions and iterative development has emerged as an alternative to traditional plan-based software development methods. While a number of case studies have provided insights into the use and consequences of agile, few empirical studies have examined the factors that drive the adoption and use of agile. Objective: We draw on intention-based theories and a dialectic perspective to identify factors driving the use of agile practices among adopters of this software development methodology. Method: Data for the study was gathered through an anonymous online survey of software development professionals. We requested participation from members of a selected list of online discussion groups, and received 98 responses. Results: Our analyses reveal that subjective norm and training play a significant role in influencing software developers’ use of agile processes and methods, while perceived benefits and perceived limitations are not primary drivers of agile use among adopters. Interestingly, perceived benefit emerges as a significant predictor of agile use only if adopters face hindrances to their agile practices. Conclusion: We conclude that research in the adoption of software development innovations should examine the effects of both enabling and detracting factors and the interactions between them. Since training, subjective norm, and the interplay between perceived benefits and perceived hindrances appear to be key factors influencing the adoption of agile methods, researchers can focus on how to (a) perform training on agile methods more effectively, (b) facilitate the dialog between developers and managers about perceived benefits and hindrances, and (c) capitalize on subjective norm to publicize the benefits of agile methods within an organization. Further, when managing the transition to new software development methods, we recommend that practitioners adapt their strategies and tactics contingent on the extent of perceived hindrances to the change. 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "69e86a1f6f4d7f1039a3448e06df3725",
"text": "In this paper, a low profile LLC resonant converter with two planar transformers is proposed for a slim SMPS (Switching Mode Power Supply). Design procedures and voltage gain characteristics on the proposed planar transformer and converter are described in detail. Two planar transformers applied to LLC resonant converter are connected in series at primary and in parallel by the center-tap winding at secondary. Based on the theoretical analysis and simulation results of the voltage gain characteristics, a 300W LLC resonant converter for LED TV power module is designed and tested.",
"title": ""
},
{
"docid": "4a607e7c39907111675b4466d611ce7a",
"text": "Identification of the plant diseases manually is very difficult. Hence, image processing is used for the detection of plant diseases. Diseases are caused due to fungi, bacteria, viruses and so on. Diseases in plants reduce both quality and quantity of agricultural products. Hence it is important to identify the diseases. This paper gives the overview of methods used for the detection of plant diseases using their leaves images. Keywords— Image processing, RGB, Image acquisition, Image pre-processing, Image Segmentation, Feature extraction.",
"title": ""
},
{
"docid": "84ae9f9f1dd10a8910ff99d1dd4ec227",
"text": "With the advent of powerful ranging and visual sensors, nowadays, it is convenient to collect sparse 3-D point clouds and aligned high-resolution images. Benefitted from such convenience, this letter proposes a joint method to perform both depth assisted object-level image segmentation and image guided depth upsampling. To this end, we formulate these two tasks together as a bi-task labeling problem, defined in a Markov random field. An alternating direction method (ADM) is adopted for the joint inference, solving each sub-problem alternatively. More specifically, the sub-problem of image segmentation is solved by Graph Cuts, which attains discrete object labels efficiently. Depth upsampling is addressed via solving a linear system that recovers continuous depth values. By this joint scheme, robust object segmentation results and high-quality dense depth maps are achieved. The proposed method is applied to the challenging KITTI vision benchmark suite, as well as the Leuven dataset for validation. Comparative experiments show that our method outperforms stand-alone approaches.",
"title": ""
},
{
"docid": "4d6e9bc0a8c55e65d070d1776e781173",
"text": "As electronic device feature sizes scale-down, the power consumed due to onchip communications as compared to computations will increase dramatically; likewise, the available bandwidth per computational operation will continue to decrease. Integrated photonics can offer savings in power and potential increase in bandwidth for onchip networks. Classical diffraction-limited photonics currently utilized in photonic integrated circuits (PIC) is characterized by bulky and inefficient devices compared to their electronic counterparts due to weak light matter interactions (LMI). Performance critical for the PIC is electro-optic modulators (EOM), whose performances depend inherently on enhancing LMIs. Current EOMs based on diffraction-limited optical modes often deploy ring resonators and are consequently bulky, photon-lifetime modulation limited, and power inefficient due to large electrical...",
"title": ""
},
{
"docid": "8255146164ff42f8755d8e74fd24cfa1",
"text": "We present a named-entity recognition (NER) system for parallel multilingual text. Our system handles three languages (i.e., English, French, and Spanish) and is tailored to the biomedical domain. For each language, we design a supervised knowledge-based CRF model with rich biomedical and general domain information. We use the sentence alignment of the parallel corpora, the word alignment generated by the GIZA++[8] tool, and Wikipedia-based word alignment in order to transfer system predictions made by individual language models to the remaining parallel languages. We re-train each individual language system using the transferred predictions and generate a final enriched NER model for each language. The enriched system performs better than the initial system based on the predictions transferred from the other language systems. Each language model benefits from the external knowledge extracted from biomedical and general domain resources.",
"title": ""
},
{
"docid": "23e07013a82049f0c4e88bd071a083f8",
"text": "A triple-resonance LC network increases the bandwidth of cascaded differential pairs by a factor of 2/spl radic/3, yielding a 40-Gb/s CMOS amplifier with a gain of 15 dB and a power dissipation of 190 mW from a 2.2-V supply. An ESD protection circuit employs negative capacitance along with T-coils and pn junctions to operate at 40 Gb/s while tolerating 700-800 V.",
"title": ""
}
] |
scidocsrr
|
522a8f51e81e334b848d114b05bd6a97
|
Classifier and Exemplar Synthesis for Zero-Shot Learning
|
[
{
"docid": "78bd1c7ea28a4af60991b56ccd658d7f",
"text": "The number of categories for action recognition is growing rapidly. It is thus becoming increasingly hard to collect sufficient training data to learn conventional models for each category. This issue may be ameliorated by the increasingly popular “zero-shot learning” (ZSL) paradigm. In this framework a mapping is constructed between visual features and a human interpretable semantic description of each category, allowing categories to be recognised in the absence of any training data. Existing ZSL studies focus primarily on image data, and attribute-based semantic representations. In this paper, we address zero-shot recognition in contemporary video action recognition tasks, using semantic word vector space as the common space to embed videos and category labels. This is more challenging because the mapping between the semantic space and space-time features of videos containing complex actions is more complex and harder to learn. We demonstrate that a simple self-training and data augmentation strategy can significantly improve the efficacy of this mapping. Experiments on human action datasets including HMDB51 and UCF101 demonstrate that our approach achieves the state-of-the-art zero-shot action recognition performance.",
"title": ""
}
] |
[
{
"docid": "c2558388fb20454fa6f4653b1e4ab676",
"text": "Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks, recursive learning is used to control the model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms state of the art in SISR, while utilizing far fewer parameters. Code is available at https://github.com/tyshiwo/DRRN_CVPR17.",
"title": ""
},
{
"docid": "d11c2dd512f680e79706f73d4cd3d0aa",
"text": "We describe the class of convexified convolutional neural networks (CCNNs), which capture the parameter sharing of convolutional neural networks in a convex manner. By representing the nonlinear convolutional filters as vectors in a reproducing kernel Hilbert space, the CNN parameters can be represented in terms of a lowrank matrix, and the rank constraint can be relaxed so as to obtain a convex optimization problem. For learning two-layer convolutional neural networks, we prove that the generalization error obtained by a convexified CNN converges to that of the best possible CNN. For learning deeper networks, we train CCNNs in a layerwise manner. Empirically, we find that CCNNs achieve competitive or better performance than CNNs trained by backpropagation, SVMs, fully-connected neural networks, stacked denoising auto-encoders, and other baseline methods.",
"title": ""
},
{
"docid": "aeef3eff9578d8bb1efdf3db59f39c16",
"text": "• NOTICE: this is the author's version of a work that was accepted for publication in Industrial Marketing Management. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published at: http://dx.doi.org/10.1016/j.indmarman.2011.09.009",
"title": ""
},
{
"docid": "88077fe7ce2ad4a3c3052a988f9f96c1",
"text": "When collecting patient-level resource use data for statistical analysis, for some patients and in some categories of resource use, the required count will not be observed. Although this problem must arise in most reported economic evaluations containing patient-level data, it is rare for authors to detail how the problem was overcome. Statistical packages may default to handling missing data through a so-called 'complete case analysis', while some recent cost-analyses have appeared to favour an 'available case' approach. Both of these methods are problematic: complete case analysis is inefficient and is likely to be biased; available case analysis, by employing different numbers of observations for each resource use item, generates severe problems for standard statistical inference. Instead we explore imputation methods for generating 'replacement' values for missing data that will permit complete case analysis using the whole data set and we illustrate these methods using two data sets that had incomplete resource use information.",
"title": ""
},
{
"docid": "659cc5b1999c962c9fb0b3544c8b928a",
"text": "During the recent years the mainstream framework for HCI research — the informationprocessing cognitive psychology —has gained more and more criticism because of serious problems in applying it both in research and practical design. In a debate within HCI research the capability of information processing psychology has been questioned and new theoretical frameworks searched. This paper presents an overview of the situation and discusses potentials of Activity Theory as an alternative framework for HCI research and design.",
"title": ""
},
{
"docid": "3d67093ff0885734ca8be9be3b44429c",
"text": "The autoencoder algorithm and its deep version as traditional dimensionality reduction methods have achieved great success via the powerful representability of neural networks. However, they just use each instance to reconstruct itself and ignore to explicitly model the data relation so as to discover the underlying effective manifold structure. In this paper, we propose a dimensionality reduction method by manifold learning, which iteratively explores data relation and use the relation to pursue the manifold structure. The method is realized by a so called \"generalized autoencoder\" (GAE), which extends the traditional autoencoder in two aspects: (1) each instance xi is used to reconstruct a set of instances {xj} rather than itself. (2) The reconstruction error of each instance (||xj -- x'i||2) is weighted by a relational function of xi and xj defined on the learned manifold. Hence, the GAE captures the structure of the data space through minimizing the weighted distances between reconstructed instances and the original ones. The generalized autoencoder provides a general neural network framework for dimensionality reduction. In addition, we propose a multilayer architecture of the generalized autoencoder called deep generalized autoencoder to handle highly complex datasets. Finally, to evaluate the proposed methods, we perform extensive experiments on three datasets. The experiments demonstrate that the proposed methods achieve promising performance.",
"title": ""
},
{
"docid": "50ec4623e6b7c4bf6d9207474e16ae47",
"text": "We resolve a basic problem regarding subspace distances that has arisen considerably often in applications: How could one define a notion of distance between two linear subspaces of different dimensions in a way that generalizes the usual Grassmann distance between equidimensional subspaces? We show that a natural solution is given by the distance of a point to a Schubert variety within the Grassmannian. Aside from reducing to the usual Grassmann distance when the subspaces are equidimensional, this distance is intrinsic and does not depend on any embedding into a larger ambient space. Furthermore, it can be written down as concrete expressions involving principal angles, and is efficiently computable in numerically stable ways. Our results are also largely independent of the Grassmann distance — if desired, it may be substituted by any other common distances between subspaces. Central to our approach to these problems is a concrete algebraic geometric view of the Grassmannian that parallels the differential geometric perspective that is now well-established in applied and computational mathematics. A secondary goal of this article is to demonstrate that the basic algebraic geometry of Grassmannian can be just as accessible and useful to practitioners.",
"title": ""
},
{
"docid": "bd039cbb3b9640e917b9cc15e45e5536",
"text": "We introduce adversarial neural networks for representation learning as a novel approach to transfer learning in brain-computer interfaces (BCIs). The proposed approach aims to learn subject-invariant representations by simultaneously training a conditional variational autoencoder (cVAE) and an adversarial network. We use shallow convolutional architectures to realize the cVAE, and the learned encoder is transferred to extract subject-invariant features from unseen BCI users’ data for decoding. We demonstrate a proof-of-concept of our approach based on analyses of electroencephalographic (EEG) data recorded during a motor imagery BCI experiment.",
"title": ""
},
{
"docid": "6db44a34a5a78c4a65fa7653dbf8ab96",
"text": "Grush and Churchland (1995) attempt to address aspects of the proposal that we have been making concerning a possible physical mechanism underlying the phenomenon of consciousness. Unfortunately, they employ arguments that are highly misleading and, in some important respects, factually incorrect. Their article 'Gaps in Penrose's Toilings' is addressed specifically at the writings of one of us (Penrose), but since the particular model they attack is one put forward by both of us (Hameroff and Penrose, 1995; 1996), it is appropriate that we both reply; but since our individual remarks refer to different aspects of their criticism we are commenting on their article separately. The logical arguments discussed by Grush and Churchland, and the related physics are answered in Part l by Penrose, largely by pointing out precisely where these arguments have already been treated in detail in Shadows of the Mind (Penrose, 1994). In Part 2, Hameroff replies to various points on the biological side, showing for example how they have seriously misunderstood what they refer to as 'physiological evidence' regarding to effects of the drug colchicine. The reply serves also to discuss aspects of our model 'orchestrated objective reduction in brain microtubules – Orch OR' which attempts to deal with the serious problems of consciousness more directly and completely than any previous theory. Logical arguments It has been argued in the books by one of us, The Emperor's New Mind (Penrose, 1989 – henceforth Emperor) and Shadows of the Mind (Penrose, 1994 – henceforth Shadows) that Gödel's theorem shows that there must be something non–computational involved in mathematical thinking. The Grush and Churchland (1995 – henceforth G&C) discussion attempts to dismiss this argument from Gödel's theorem on certain grounds. However, the main points that they put forward are ones which have been amply addressed in Shadows. It is very hard to understand how G&C can make the claims that they do without giving any indication that virtually all their points are explicitly taken into account in Shadows. It might be the case that the",
"title": ""
},
{
"docid": "7f146edc6c98638ee48afd938895c1df",
"text": "Organizational law empowers firms to hold assets and enter contracts as entities that are legally distinct from their owners and managers. Legal scholars and economists have commented extensively on one form of this partitioning between firms and owners: namely, the rule of limited liability that insulates firm owners from business debts. But a less-noticed form of legal partitioning, which we call \"entity shielding,\" is both economically and historically more significant than limited liability. While limited liability shields owners' personal assets from a firm's creditors, entity shielding protects firm assets from the owners' personal creditors (and from creditors of other business ventures), thus reserving those assets for the firm's creditors. Entity shielding creates important economic benefits, including a lower cost of credit for firm owners, reduced bankruptcy administration costs, enhanced stability, and the possibility of a market in shares. But entity shielding also imposes costs by requiring specialized legal and business institutions and inviting opportunism vis-d-vis both personal and business creditors. The changing balance of these benefits and costs illuminates the evolution of legal entities across time and societies. To both illustrate and test this proposition, we describe the development of entity shielding in four historical epochs: ancient Rome, the Italian Middle Ages, England of the seventeenth to nineteenth centuries, and the United States from the nineteenth century to the present.",
"title": ""
},
{
"docid": "3be26bee8adb3e6f2b382a77925ddfcf",
"text": "Memory leaks compromise availability and security by crippling performance and crashing programs. Leaks are difficult to diagnose because they have no immediate symptoms. Online leak detection tools benefit from storing and reporting per-object sites (e.g., allocation sites) for potentially leaking objects. In programs with many small objects, per-object sites add high space overhead, limiting their use in production environments.This paper introduces Bit-Encoding Leak Location (Bell), a statistical approach that encodes per-object sites to a single bit per object. A bit loses information about a site, but given sufficient objects that use the site and a known, finite set of possible sites, Bell uses brute-force decoding to recover the site with high accuracy.We use this approach to encode object allocation and last-use sites in Sleigh, a new leak detection tool. Sleigh detects stale objects (objects unused for a long time) and uses Bell decoding to report their allocation and last-use sites. Our implementation steals four unused bits in the object header and thus incurs no per-object space overhead. Sleigh's instrumentation adds 29% execution time overhead, which adaptive profiling reduces to 11%. Sleigh's output is directly useful for finding and fixing leaks in SPEC JBB2000 and Eclipse, although sufficiently many objects must leak before Bell decoding can report sites with confidence. Bell is suitable for other leak detection approaches that store per-object sites, and for other problems amenable to statistical per-object metadata.",
"title": ""
},
{
"docid": "0c01112649df217074f1422b36420701",
"text": "PURPOSE\nThis study was conducted to evaluate the influence of the implant-abutment connection design and diameter on the screw joint stability.\n\n\nMATERIALS AND METHODS\nRegular and wide-diameter implant systems with three different joint connection designs: an external butt joint, a one-stage internal cone, and a two-stage internal cone were divided into seven groups (n=5, in each group). The initial removal torque values of the abutment screw were measured with a digital torque gauge. The postload removal torque values were measured after 100,000 cycles of a 150 N and a 10 Hz cyclic load had been applied. Subsequently, the rates of the initial and postload removal torque losses were calculated to evaluate the effect of the joint connection design and diameter on the screw joint stability. Each group was compared using Kruskal-Wallis test and Mann-Whitney U test as post-hoc test (α=0.05).\n\n\nRESULTS\nTHE POSTLOAD REMOVAL TORQUE VALUE WAS HIGH IN THE FOLLOWING ORDER WITH REGARD TO MAGNITUDE: two-stage internal cone, one-stage internal cone, and external butt joint systems. In the regular-diameter group, the external butt joint and one-stage internal cone systems showed lower postload removal torque loss rates than the two-stage internal cone system. In the wide-diameter group, the external butt joint system showed a lower loss rate than the one-stage internal cone and two-stage internal cone systems. In the two-stage internal cone system, the wide-diameter group showed a significantly lower loss rate than the regular-diameter group (P<.05).\n\n\nCONCLUSION\nThe results of this study showed that the external butt joint was more advantageous than the internal cone in terms of the postload removal torque loss. For the difference in the implant diameter, a wide diameter was more advantageous in terms of the torque loss rate.",
"title": ""
},
{
"docid": "3e3aee0dc9b21c19335a0d01ed43116d",
"text": "Blockchain is a distributed system with efficient transaction recording and has been widely adopted in sharing economy. Although many existing privacy-preserving methods on the blockchain have been proposed, finding a trade-off between keeping speed and preserving privacy of transactions remain challenging. To address this limitation, we propose a novel Fast and Privacy-preserving method based on the Permissioned Blockchain (FPPB) for fair transactions in sharing economy. Without breaking the verifying protocol and bringing additional off-blockchain interactive communication, FPPB protects the privacy and fairness of transactions. Additionally, experiments are implemented in EthereumJ (a Java implementation of the Ethereum protocol) to measure the performance of FPPB. Compared with normal transactions without cryptographic primitives, FPPB only slows down transactions slightly.",
"title": ""
},
{
"docid": "c002b17f95a154ab394fd345dbfd2fdb",
"text": "This paper presents a method to estimate 3D human pose and body shape from monocular videos. While recent approaches infer the 3D pose from silhouettes and landmarks, we exploit properties of optical flow to temporally constrain the reconstructed motion. We estimate human motion by minimizing the difference between computed flow fields and the output of our novel flow renderer. By just using a single semi-automatic initialization step, we are able to reconstruct monocular sequences without joint annotation. Our test scenarios demonstrate that optical flow effectively regularizes the under-constrained problem of human shape and motion estimation from monocular video. Fig. 1: Following our main idea we compute the optical flow between two consecutive frames and match it to an optical flow field estimated by our proposed optical flow renderer. From left to right: input frame, color-coded observed flow, estimated flow, resulting pose.",
"title": ""
},
{
"docid": "f234abc5c97e7705a7406794c63607ff",
"text": "Sensor Web Enablement (SWE) for health care allows the access of sensor data anytime, anywhere using standard protocol and Application Program Interface (API). In this paper Open Geo-Spatial Consortium (OGC) standard based remote health monitoring system is proposed that allows integration of sensor and web using standard web based interface. The aim is to provide the data in an open & interoperable manner, and reduce data redundancy. Fixed specification is used for exchange of sensor data globally for all sensor networks. OGC SWEis applicable to different sensor systems including medical sensor networks. A standard format is used to document sensor descriptions and encapsulate data. Sensor data is ported on to cloud which provides scalability, centralized user access, persistent data storage and no infrastructure maintenance cost for heavy volumes of sensitive health data. Decision tree pruning algorithm with high confidence factor is proposed for automatic decision making.",
"title": ""
},
{
"docid": "244a517d3a1c456a602ecc01fb99a78f",
"text": "Most literature on time series classification assumes that the beginning and ending points of the pattern of interest can be correctly identified, both during the training phase and later deployment. In this work, we argue that this assumption is unjustified, and this has in many cases led to unwarranted optimism about the performance of the proposed algorithms. As we shall show, the task of correctly extracting individual gait cycles, heartbeats, gestures, behaviors, etc., is generally much more difficult than the task of actually classifying those patterns. We propose to mitigate this problem by introducing an alignment-free time series classification framework. The framework requires only very weakly annotated data, such as “in this ten minutes of data, we see mostly normal heartbeats...,” and by generalizing the classic machine learning idea of data editing to streaming/continuous data, allows us to build robust, fast and accurate classifiers. We demonstrate on several diverse real-world problems that beyond removing unwarranted assumptions and requiring essentially no human intervention, our framework is both significantly faster and significantly more accurate than current state-of-the-art approaches.",
"title": ""
},
{
"docid": "57f3b7130d41a176410015ca03b9c954",
"text": "Sudhausia aristotokia n. gen., n. sp. and S. crassa n. gen., n. sp. (Nematoda: Diplogastridae): viviparous new species with precocious gonad development Matthias HERRMANN 1, Erik J. RAGSDALE 1, Natsumi KANZAKI 2 and Ralf J. SOMMER 1,∗ 1 Max Planck Institute for Developmental Biology, Department of Evolutionary Biology, Spemannstraße 37, Tübingen, Germany 2 Forest Pathology Laboratory, Forestry and Forest Products Research Institute, 1 Matsunosato, Tsukuba, Ibaraki 305-8687, Japan",
"title": ""
},
{
"docid": "a62c1426e09ab304075e70b61773914f",
"text": "Converting a scanned or shot line drawing image into a vector graph can facilitate further editand reuse, making it a hot research topic in computer animation and image processing. Besides avoiding noiseinfluence, its main challenge is to preserve the topological structures of the original line drawings, such as linejunctions, in the procedure of obtaining a smooth vector graph from a rough line drawing. In this paper, wepropose a vectorization method of line drawings based on junction analysis, which retains the original structureunlike done by existing methods. We first combine central line tracking and contour tracking, which allowsus to detect the encounter of line junctions when tracing a single path. Then, a junction analysis approachbased on intensity polar mapping is proposed to compute the number and orientations of junction branches.Finally, we make use of bending degrees of contour paths to compute the smoothness between adjacent branches,which allows us to obtain the topological structures corresponding to the respective ones in the input image.We also introduce a correction mechanism for line tracking based on a quadratic surface fitting, which avoidsaccumulating errors of traditional line tracking and improves the robustness for vectorizing rough line drawings.We demonstrate the validity of our method through comparisons with existing methods, and a large amount ofexperiments on both professional and amateurish line drawing images. 本文提出一种基于交叉点分析的线条矢量化方法, 克服了现有方法难以保持拓扑结构的不足。通过中心路径跟踪和轮廓路径跟踪相结合的方式, 准确检测交叉点的出现提出一种基于极坐标亮度映射的交叉点分析方法, 计算交叉点的分支数量和朝向; 利用轮廓路径的弯曲角度判断交叉点相邻分支间的光顺度, 从而获得与原图一致的拓扑结构。",
"title": ""
},
{
"docid": "ae1b3d2668ed17df54a2cdb758c6b427",
"text": "Word embeddings improve generalization over lexical features by placing each word in a lower-dimensional space, using distributional information obtained from unlabeled data. However, the effectiveness of word embeddings for downstream NLP tasks is limited by out-of-vocabulary (OOV) words, for which embeddings do not exist. In this paper, we present MIMICK, an approach to generating OOV word embeddings compositionally, by learning a function from spellings to distributional embeddings. Unlike prior work, MIMICK does not require re-training on the original word embedding corpus; instead, learning is performed at the type level. Intrinsic and extrinsic evaluations demonstrate the power of this simple approach. On 23 languages, MIMICK improves performance over a word-based baseline for tagging part-of-speech and morphosyntactic attributes. It is competitive with (and complementary to) a supervised characterbased model in low-resource settings.",
"title": ""
},
{
"docid": "cbc81f267b98cc3f3986552515657b0f",
"text": "Multivariate quantitative traits arise naturally in recent neuroimaging genetics studies, in which both structural and functional variability of the human brain is measured non-invasively through techniques such as magnetic resonance imaging (MRI). There is growing interest in detecting genetic variants associated with such multivariate traits, especially in genome-wide studies. Random forests (RFs) classifiers, which are ensembles of decision trees, are amongst the best performing machine learning algorithms and have been successfully employed for the prioritisation of genetic variants in case-control studies. RFs can also be applied to produce gene rankings in association studies with multivariate quantitative traits, and to estimate genetic similarities measures that are predictive of the trait. However, in studies involving hundreds of thousands of SNPs and high-dimensional traits, a very large ensemble of trees must be inferred from the data in order to obtain reliable rankings, which makes the application of these algorithms computationally prohibitive. We have developed a parallel version of the RF algorithm for regression and genetic similarity learning tasks in large-scale population genetic association studies involving multivariate traits, called PaRFR (Parallel Random Forest Regression). Our implementation takes advantage of the MapReduce programming model and is deployed on Hadoop, an open-source software framework that supports data-intensive distributed applications. Notable speed-ups are obtained by introducing a distance-based criterion for node splitting in the tree estimation process. PaRFR has been applied to a genome-wide association study on Alzheimer's disease (AD) in which the quantitative trait consists of a high-dimensional neuroimaging phenotype describing longitudinal changes in the human brain structure. PaRFR provides a ranking of SNPs associated to this trait, and produces pair-wise measures of genetic proximity that can be directly compared to pair-wise measures of phenotypic proximity. Several known AD-related variants have been identified, including APOE4 and TOMM40. We also present experimental evidence supporting the hypothesis of a linear relationship between the number of top-ranked mutated states, or frequent mutation patterns, and an indicator of disease severity. The Java codes are freely available at http://www2.imperial.ac.uk/~gmontana .",
"title": ""
}
] |
scidocsrr
|
69ba2f2a3d820831eaa15b3eeb060988
|
Learning Image Representations Tied to Egomotion from Unlabeled Video
|
[
{
"docid": "cc4c58f1bd6e5eb49044353b2ecfb317",
"text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.",
"title": ""
},
{
"docid": "ed0d82bcc688a0101ae914ee208a6e13",
"text": "Visual recognition systems mounted on autonomous moving agents face the challenge of unconstrained data, but simultaneously have the opportunity to improve their performance by moving to acquire new views of test data. In this work, we first show how a recurrent neural network-based system may be trained to perform end-to-end learning of motion policies suited for the “active recognition” setting. Further, we hypothesize that active vision requires an agent to have the capacity to reason about the effects of its motions on its view of the world. To verify this hypothesis, we attempt to induce this capacity in our active recognition pipeline, by simultaneously learning to forecast the effects of the agent’s motions on its internal representation of its cumulative knowledge obtained from all past views. Results across two challenging datasets confirm both that our end-toend system successfully learns meaningful policies for active recognition, and that “learning to look ahead” further boosts recognition performance.",
"title": ""
}
] |
[
{
"docid": "7aad80319743ac72d2c4e117e5f831fa",
"text": "In this letter, we propose a novel method for classifying ambulatory activities using eight plantar pressure sensors within smart shoes. Using these sensors, pressure data of participants can be collected regarding level walking, stair descent, and stair ascent. Analyzing patterns of the ambulatory activities, we present new features with which to describe the ambulatory activities. After selecting critical features, a multi-class support vector machine algorithm is applied to classify these activities. Applying the proposed method to the experimental database, we obtain recognition rates up to 95.2% after six steps.",
"title": ""
},
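A minimal sketch of the kind of multi-class SVM classifier the passage above describes, assuming eight plantar-pressure features per step; the synthetic data, class labels and kernel choice are illustrative assumptions, not the letter's actual setup.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 8))        # 8 plantar-pressure features per step (synthetic stand-in)
    y = rng.integers(0, 3, size=300)     # 0 = level walking, 1 = stair descent, 2 = stair ascent
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)   # scikit-learn's SVC handles multi-class via one-vs-one
    print("held-out accuracy:", clf.score(X_te, y_te))

With real sensor features in place of the random arrays, the same few lines reproduce the train/evaluate loop the passage implies.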
{
"docid": "9e7c12fbc790314f6897f0b16d43d0af",
"text": "We study in this paper the rate of convergence for learning distributions with the Generative Adversarial Networks (GAN) framework, which subsumes Wasserstein, Sobolev and MMD GANs as special cases. We study a wide range of parametric and nonparametric target distributions, under a collection of objective evaluation metrics. On the nonparametric end, we investigate the minimax optimal rates and fundamental difficulty of the density estimation under the adversarial framework. On the parametric end, we establish theory for neural network classes, that characterizes the interplay between the choice of generator and discriminator. We investigate how to improve the GAN framework with better theoretical guarantee through the lens of regularization. We discover and isolate a new notion of regularization, called the generator/discriminator pair regularization, that sheds light on the advantage of GAN compared to classic parametric and nonparametric approaches for density estimation.",
"title": ""
},
{
"docid": "204ad3064d559c345caa2c6d1a140582",
"text": "In this paper, a face recognition method based on Convolution Neural Network (CNN) is presented. This network consists of three convolution layers, two pooling layers, two full-connected layers and one Softmax regression layer. Stochastic gradient descent algorithm is used to train the feature extractor and the classifier, which can extract the facial features and classify them automatically. The Dropout method is used to solve the over-fitting problem. The Convolution Architecture For Feature Extraction framework (Caffe) is used during the training and testing process. The face recognition rate of the ORL face database and AR face database based on this network is 99.82% and 99.78%.",
"title": ""
},
{
"docid": "fb3cd37ee8f89189753ded802aa42990",
"text": "Bone morphogenetic proteins (BMPs) belong to the TGF-β family, whose 33 members regulate multiple aspects of morphogenesis. TGF-β family members are secreted as procomplexes containing a small growth factor dimer associated with two larger prodomains. As isolated procomplexes, some members are latent, whereas most are active; what determines these differences is unknown. Here, studies on pro-BMP structures and binding to receptors lead to insights into mechanisms that regulate latency in the TGF-β family and into the functions of their highly divergent prodomains. The observed open-armed, nonlatent conformation of pro-BMP9 and pro-BMP7 contrasts with the cross-armed, latent conformation of pro-TGF-β1. Despite markedly different arm orientations in pro-BMP and pro-TGF-β, the arm domain of the prodomain can similarly associate with the growth factor, whereas prodomain elements N- and C-terminal to the arm associate differently with the growth factor and may compete with one another to regulate latency and stepwise displacement by type I and II receptors. Sequence conservation suggests that pro-BMP9 can adopt both cross-armed and open-armed conformations. We propose that interactors in the matrix stabilize a cross-armed pro-BMP conformation and regulate transition between cross-armed, latent and open-armed, nonlatent pro-BMP conformations.",
"title": ""
},
{
"docid": "a973ed3011d9c07ddab4c15ef82fe408",
"text": "OBJECTIVES\nTo assess the efficacy of a 6-week interdisciplinary treatment that combines coordinated psychological, medical, educational, and physiotherapeutic components (PSYMEPHY) over time compared to standard pharmacologic care.\n\n\nMETHODS\nRandomised controlled trial with follow-up at 6 months for the PSYMEPHY and control groups and 12 months for the PSYMEPHY group. Participants were 153 outpatients with FM recruited from a hospital pain management unit. Patients randomly allocated to the control group (CG) received standard pharmacologic therapy. The experimental group (EG) received an interdisciplinary treatment (12 sessions). The main outcome was changes in quality of life, and secondary outcomes were pain, physical function, anxiety, depression, use of pain coping strategies, and satisfaction with treatment as measured by the Fibromyalgia Impact Questionnaire, the Hospital Anxiety and Depression Scale, the Coping with Chronic Pain Questionnaire, and a question regarding satisfaction with the treatment.\n\n\nRESULTS\nSix months after the intervention, significant improvements in quality of life (p=0.04), physical function (p=0.01), and pain (p=0.03) were seen in the PSYMEPHY group (n=54) compared with controls (n=56). Patients receiving the intervention reported greater satisfaction with treatment. Twelve months after the intervention, patients in the PSYMEPHY group (n=58) maintained statistically significant improvements in quality of life, physical functioning, pain, and symptoms of anxiety and depression, and were less likely to use maladaptive passive coping strategies compared to baseline.\n\n\nCONCLUSIONS\nAn interdisciplinary treatment for FM was associated with improvements in quality of life, pain, physical function, anxiety and depression, and pain coping strategies up to 12 months after the intervention.",
"title": ""
},
{
"docid": "5c1183549b10f3e2fe87dc760941893a",
"text": "PURPOSE\nGuidelines for adopting and successfully implementing the requirements of the United States Pharmacopeia (USP) chapter 797 for compounding sterile preparations are presented.\n\n\nSUMMARY\nThe quality of a compounded sterile preparation (CSP) is directly related to the methods used to ensure that the CSP achieves the desired goal of purity, potency, and sterility. A properly designed, constructed, and maintained cleanroom contributes to the quality of CSPs. Design criteria of a sample clean-room are supplied, as are a summary and comparison of the liquid disinfectants that can be used to clean and sanitize the facility and maintain environmental controls. All activities associated with cleaning the cleanroom, including air and surface sampling, must be properly documented in logs, examples of which are provided. A robust employee-training program for properly teaching aseptic technique and a method to verify that personnel have successfully completed the program are integral to compliance with chapter 797 and thoroughly discussed herein. Emerging compounding and testing technology is also discussed.\n\n\nCONCLUSION\nAlthough the task of compliance with the requirements of USP chapter 797 may appear overwhelming, complicated, expensive, and even unattainable, quality can be established via a methodical and organized approach. After the systems have been implemented, maintaining them requires vigilance and follow-up. Compliance with chapter 797 involves up-front and ongoing costs associated with establishing these systems, but the time, energy, and cost required to maintain them are far less than those of retrospective or manual systems of collecting, reviewing, and collating quality assurance data on a monthly basis.",
"title": ""
},
{
"docid": "729581c92155092a82886e58284e8b92",
"text": "We investigate here the capabilities of a 400-element reconfigurable transmitarray antenna to synthesize monopulse radiation patterns for radar applications in X-band. The generation of the sum (Σ) and difference (A) patterns are demonstrated both theoretically and experimentally for broadside as well as tilted beams in different azimuthal planes. Two different feed configurations have been considered, namely, a single focal source and a four-element focal source configuration. The latter enables the simultaneous generation of a Σ- and two A-patterns in orthogonal planes, which is an important advantage for tracking applications with stringent requirements in speed and accuracy.",
"title": ""
},
{
"docid": "b47127a755d7bef1c5baf89253af46e7",
"text": "In an effort to explain pro-environmental behavior, environmental sociologists often study environmental attitudes. While much of this work is atheoretical, the focus on attitudes suggests that researchers are implicitly drawing upon attitude theory in psychology. The present research brings sociological theory to environmental sociology by drawing on identity theory to understand environmentally responsive behavior. We develop an environment identity model of environmental behavior that includes not only the meanings of the environment identity, but also the prominence and salience of the environment identity and commitment to the environment identity. We examine the identity process as it relates to behavior, though not to the exclusion of examining the effects of environmental attitudes. The findings reveal that individual agency is important in influencing environmentally responsive behavior, but this agency is largely through identity processes, rather than attitude processes. This provides an important theoretical and empirical advance over earlier work in environmental sociology.",
"title": ""
},
{
"docid": "c4525bcf7db5540a389b79330061eca6",
"text": "This work addresses design and implementation issues of a 24 GHz rectenna, which is developed to demonstrate the feasibility of wireless power harvesting and transmission (WPT) techniques towards millimeter-wave regime. The proposed structure includes a compact circularly polarized substrate integrated waveguide (SIW) cavity-backed antenna array integrated with a self-biased rectifier using commercial Schottky diodes. The antenna and the rectifier are individually designed, optimized, fabricated and measured. Then they are integrated into one circuit in order to validate the studied rectenna architecture. The maximum measured conversion efficiency and DC voltage are respectively equal to 24% and 0.6 V for an input power density of 10 mW/cm2.",
"title": ""
},
{
"docid": "728ea68ac1a50ae2d1b280b40c480aec",
"text": "This paper presents a new metaprogramming library, CL ARRAY, that offers multiplatform and generic multidimensional data containers for C++ specifically adapted for parallel programming. The CL ARRAY containers are built around a new formalism for representing the multidimensional nature of data as well as the semantics of multidimensional pointers and contiguous data structures. We also present OCL ARRAY VIEW, a concept based on metaprogrammed enveloped objects that supports multidimensional transformations and multidimensional iterators designed to simplify and formalize the interfacing process between OpenCL APIs, standard template library (STL) algorithms and CL ARRAY containers. Our results demonstrate improved performance and energy savings over the three most popular container libraries available to the developer community for use in the context of multi-linear algebraic applications.",
"title": ""
},
{
"docid": "30babb731ac21c11863bddb91a5f7df2",
"text": "V-band clamps are utilised in a wide range of industries to connect together a pair of circular flanges, for ducts, pipes, turbocharger housings and even to form a joint between satellites and their delivery vehicle. In this paper, using a previously developed axisymmetric finite element model, the impact of contact pressure on the contact surface of the V-band clamp was studied and surface roughness measurements were used to investigate the distribution of contact pressure around the circumference of the V-band.",
"title": ""
},
{
"docid": "160fc1d93296ea120ff3545e49f18de6",
"text": "The advent of high-tech journaling tools facilitates an image to be manipulated in a way that can easily evade state-of-the-art image tampering detection approaches. The recent success of the deep learning approaches in different recognition tasks inspires us to develop a high confidence detection framework which can localize manipulated regions in an image. Unlike semantic object segmentation where all meaningful regions (objects) are segmented, the localization of image manipulation focuses only the possible tampered region which makes the problem even more challenging. In order to formulate the framework, we employ a hybrid CNN-LSTM model to capture discriminative features between manipulated and non-manipulated regions. One of the key properties of manipulated regions is that they exhibit discriminative features in boundaries shared with neighboring non-manipulated pixels. Our motivation is to learn the boundary discrepancy, i.e., the spatial structure, between manipulated and non-manipulated regions with the combination of LSTM and convolution layers. We perform end-to-end training of the network to learn the parameters through back-propagation given ground-truth mask information. The overall framework is capable of detecting different types of image manipulations, including copy-move, removal and splicing. Our model shows promising results in localizing manipulated regions, which is demonstrated through rigorous experimentation on three diverse datasets.",
"title": ""
},
{
"docid": "b60474e6e2fa0f08241819bac709d6fd",
"text": "Patriarchy is the prime obstacle to women’s advancement and development. Despite differences in levels of domination the broad principles remain the same, i.e. men are in control. The nature of this control may differ. So it is necessary to understand the system, which keeps women dominated and subordinate, and to unravel its workings in order to work for women’s development in a systematic way. In the modern world where women go ahead by their merit, patriarchy there creates obstacles for women to go forward in society. Because patriarchal institutions and social relations are responsible for the inferior or secondary status of women. Patriarchal society gives absolute priority to men and to some extent limits women’s human rights also. Patriarchy refers to the male domination both in public and private spheres. In this way, feminists use the term ‘patriarchy’ to describe the power relationship between men and women as well as to find out the root cause of women’s subordination. This article, hence, is an attempt to analyse the concept of patriarchy and women’s subordination in a theoretical perspective.",
"title": ""
},
{
"docid": "e2b08d0d14a5561f2d6632d7cec87bcc",
"text": "In recent years, market forecasting by machine learning methods has been flourishing. Most existing works use a past market data set, because they assume that each trader’s individual decisions do not affect market prices at all. Meanwhile, there have been attempts to analyze economic phenomena by constructing virtual market simulators, in which human and artificial traders really make trades. Since prices in a market are, in fact, determined by every trader’s decisions, a virtual market is more realistic, and the above assumption does not apply. In this work, we design several reinforcement learners on the futures market simulator U-Mart (Unreal Market as an Artificial Research Testbed) and compare our learners with the previous champions of U-Mart competitions empirically.",
"title": ""
},
{
"docid": "4e2c4b8fccda7f8c9ca7ffb6ced1ae5a",
"text": "Fog/edge computing, function as a service, and programmable infrastructures, like software-defined networking or network function virtualisation, are becoming ubiquitously used in modern Information Technology infrastructures. These technologies change the characteristics and capabilities of the underlying computational substrate where services run (e.g. higher volatility, scarcer computational power, or programmability). As a consequence, the nature of the services that can be run on them changes too (smaller codebases, more fragmented state, etc.). These changes bring new requirements for service orchestrators, which need to evolve so as to support new scenarios where a close interaction between service and infrastructure becomes essential to deliver a seamless user experience. Here, we present the challenges brought forward by this new breed of technologies and where current orchestration techniques stand with regards to the new challenges. We also present a set of promising technologies that can help tame this brave new world.",
"title": ""
},
{
"docid": "fa07419129af7100fc0bf38746f084aa",
"text": "We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD dual-core and Intel quad-core designs, the heterogeneous STI Cell, as well as the first scientific study of the highly multithreaded Sun Niagara2. We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural tradeoffs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.",
"title": ""
},
{
"docid": "51a6b1868082fc2963dd8bae513f6a9b",
"text": "The red blood cells or erythrocytes are biconcave shaped cells and consist mostly in a membrane delimiting a cytosol with a high concentration in hemoglobin. This membrane is highly deformable and allows the cells to go through narrow passages like the capillaries which diameters can be much smaller than red blood cells one. They carry oxygen thanks to hemoglobin, a complex molecule that have very high affinity for oxygen. The capacity of erythrocytes to load and unload oxygen is thus a determinant factor in their efficacy. In this paper, we will focus on the pulmonary capillary where red blood cells capture oxygen. In order to numerically study the behavior of red blood cells along a whole capillary, we propose a camera method that consists in working in a reference frame that follows the red blood cells. More precisely, the domain of study is reduced to a neighborhood of the red blood cells and moves along at erythrocytes mean velocity. This method avoids too large mesh deformation. Our goal is to understand how erythrocytes geometrical changes along the capillary can affect its capacity to capture oxygen. The first part of this document presents the model chosen for the red blood cells along with the numerical method used to determine and follow their shapes along the capillary. The membrane of the red blood cell is complex and has been modelled by an hyper-elastic approach coming from [16]. This camera method is then validated and confronted with a standard Arbitrary Lagrangian Eulerian (ALE) method in which the displacements of the red blood cells are correlated with the deformation of an initial mesh of the whole capillary with red blood cells at start positions. Some geometrical properties of the red blood cells observed in our simulations are then studied and discussed. The second part of this paper deals with the modeling of oxygen and hemoglobin chemistry in the geometries obtained in the first part. We have implemented a full complex hemoglobin behavior with allosteric states inspired from [4]. 1 Laboratoire MSC, Université Paris 7 / CNRS, 10 rue Alice Domon et Léonie Duquet, F-75205 Paris cedex 13 c © EDP Sciences, SMAI 2008",
"title": ""
},
{
"docid": "b651dab78e39d59e3043cb091b7e4f1b",
"text": "Learning an acoustic model directly from the raw waveform has been an active area of research. However, waveformbased models have not yet matched the performance of logmel trained neural networks. We will show that raw waveform features match the performance of log-mel filterbank energies when used with a state-of-the-art CLDNN acoustic model trained on over 2,000 hours of speech. Specifically, we will show the benefit of the CLDNN, namely the time convolution layer in reducing temporal variations, the frequency convolution layer for preserving locality and reducing frequency variations, as well as the LSTM layers for temporal modeling. In addition, by stacking raw waveform features with log-mel features, we achieve a 3% relative reduction in word error rate.",
"title": ""
}
] |
scidocsrr
|
4a19e25699a909235a4e1dbe84e4efd4
|
Anorexia on Tumblr: A Characterization Study
|
[
{
"docid": "96a79bc015e34db18e32a31bfaaace36",
"text": "We consider social media as a promising tool for public health, focusing on the use of Twitter posts to build predictive models about the forthcoming influence of childbirth on the behavior and mood of new mothers. Using Twitter posts, we quantify postpartum changes in 376 mothers along dimensions of social engagement, emotion, social network, and linguistic style. We then construct statistical models from a training set of observations of these measures before and after the reported childbirth, to forecast significant postpartum changes in mothers. The predictive models can classify mothers who will change significantly following childbirth with an accuracy of 71%, using observations about their prenatal behavior, and as accurately as 80-83% when additionally leveraging the initial 2-3 weeks of postnatal data. The study is motivated by the opportunity to use social media to identify mothers at risk of postpartum depression, an underreported health concern among large populations, and to inform the design of low-cost, privacy-sensitive early-warning systems and intervention programs aimed at promoting wellness postpartum.",
"title": ""
},
{
"docid": "e9d987351816570b29d0144a6a7bd2ae",
"text": "One’s state of mind will influence her perception of the world and people within it. In this paper, we explore attitudes and behaviors toward online social media based on whether one is depressed or not. We conducted semistructured face-to-face interviews with 14 active Twitter users, half of whom were depressed and the other half non-depressed. Our results highlight key differences between the two groups in terms of perception towards online social media and behaviors within such systems. Non-depressed individuals perceived Twitter as an information consuming and sharing tool, while depressed individuals perceived it as a tool for social awareness and emotional interaction. We discuss several design implications for future social networks that could better accommodate users with depression and provide insights towards helping depressed users meet their needs through online social media.",
"title": ""
}
] |
[
{
"docid": "71a9394d995cefb8027bed3c56ec176c",
"text": "A broadband microstrip-fed printed antenna is proposed for phased antenna array systems. The antenna consists of two parallel-modified dipoles of different lengths. The regular dipole shape is modified to a quasi-rhombus shape by adding two triangular patches. Using two dipoles helps maintain stable radiation patterns close to their resonance frequencies. A modified array configuration is proposed to further enhance the antenna radiation characteristics and usable bandwidth. Scanning capabilities are studied for a four-element array. The proposed antenna provides endfire radiation patterns with high gain, high front-to-back (F-to-B) ratio, low cross-polarization level, wide beamwidth, and wide scanning angles in a wide bandwidth of 103%",
"title": ""
},
{
"docid": "42fd4018cbfb098ef8e3957b1cee38f0",
"text": "We propose an algorithm for combinatorial optimization where an explicit check for the repetition of configurations is added to the basic scheme of Tabu search. In our Tabu scheme the appropriate size of the list is learned in an automated way by reacting to the occurrence of cycles. In addition, if the search appears to be repeating an excessive number of solutions excessively often, then the search is diversified by making a number of random moves proportional to a moving average of the cycle length. The reactive scheme is compared to a ”strict” Tabu scheme, that forbids the repetition of configurations and to schemes with a fixed or randomly varying list size. From the implementation point of view we show that the Hashing or Digital Tree techniques can be used in order to search for repetitions in a time that is approximately constant. We present the results obtained for a series of computational tests on a benchmark function, on the 0-1 Knapsack Problem, and on the Quadratic Assignment Problem.",
"title": ""
},
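A minimal sketch of a reactive tabu loop in the spirit of the passage above: visited configurations are hashed, and the tabu tenure grows when a repetition is detected and shrinks otherwise. The toy objective, the escape rule and all names are illustrative assumptions; the paper's full scheme (random-move diversification, digital trees) is not reproduced here.

    def reactive_tabu_search(init, neighbors, cost, max_iters=200):
        current, best = init, init
        tenure = 1                 # tabu tenure, adapted reactively
        last_seen = {}             # hash(config) -> iteration of last visit
        tabu_until = {}            # hash(config) -> iteration until which it stays tabu
        for it in range(max_iters):
            h = hash(current)
            if h in last_seen:                     # repetition detected: react by growing the tenure
                tenure = min(2 * tenure, 50)
            else:
                tenure = max(1, tenure - 1)
            last_seen[h] = it
            tabu_until[h] = it + tenure
            allowed = [n for n in neighbors(current)
                       if tabu_until.get(hash(n), -1) <= it or cost(n) < cost(best)]
            if not allowed:                        # every neighbour tabu: allow any move to escape
                allowed = neighbors(current)
            current = min(allowed, key=cost)
            if cost(current) < cost(best):
                best = current
        return best

    # Toy usage: minimise x^2 over the integers with +/-1 moves.
    print(reactive_tabu_search(17, lambda x: [x - 1, x + 1], lambda x: x * x))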
{
"docid": "889c8754c97db758b474a6f140b39911",
"text": "Herbal toothpaste Salvadora with comprehensive effective materials for dental health ranging from antibacterial, detergent and whitening properties including benzyl isothiocyanate, alkaloids, and anions such as thiocyanate, sulfate, and nitrate with potential antibacterial feature against oral microbial flora, silica and chloride for oral disinfection and bleaching the tooth, fluoride to strengthen tooth enamel, and saponin with appropriate detergent, and resin which protects tooth enamel by placing on it and is aggregated in Salvadora has been formulated. The paste is also from other herbs extract including valerian and chamomile. Current toothpaste has antibacterial, anti-plaque, anti-tartar and whitening, and wood extract of the toothbrush strengthens the tooth and enamel, and prevents the cancellation of enamel.From the other side, resin present in toothbrush wood creates a proper covering on tooth enamel and protects it against decay and benzyl isothiocyanate and also alkaloids present in miswak wood gives Salvadora toothpaste considerable antibacterial and bactericidal effects. Anti-inflammatory effects of the toothpaste are for apigenin and alpha bisabolol available in chamomile extract and seskuiterpen components including valeric acid with sedating features give the paste sedating and calming effect to oral tissues.",
"title": ""
},
{
"docid": "443a4fe9e7484a18aa53a4b142d93956",
"text": "BACKGROUND AND PURPOSE\nFrequency and duration of static stretching have not been extensively examined. Additionally, the effect of multiple stretches per day has not been evaluated. The purpose of this study was to determine the optimal time and frequency of static stretching to increase flexibility of the hamstring muscles, as measured by knee extension range of motion (ROM).\n\n\nSUBJECTS\nNinety-three subjects (61 men, 32 women) ranging in age from 21 to 39 years and who had limited hamstring muscle flexibility were randomly assigned to one of five groups. The four stretching groups stretched 5 days per week for 6 weeks. The fifth group, which served as a control, did not stretch.\n\n\nMETHODS\nData were analyzed with a 5 x 2 (group x test) two-way analysis of variance for repeated measures on one variable (test).\n\n\nRESULTS\nThe change in flexibility appeared to be dependent on the duration and frequency of stretching. Further statistical analysis of the data indicated that the groups that stretched had more ROM than did the control group, but no differences were found among the stretching groups.\n\n\nCONCLUSION AND DISCUSSION\nThe results of this study suggest that a 30-second duration is an effective amount of time to sustain a hamstring muscle stretch in order to increase ROM. No increase in flexibility occurred when the duration of stretching was increased from 30 to 60 seconds or when the frequency of stretching was increased from one to three times per day.",
"title": ""
},
{
"docid": "a7cdfc27dbc704140ef5b3199469898f",
"text": "This technical report updates the 2004 American Academy of Pediatrics technical report on the legalization of marijuana. Current epidemiology of marijuana use is presented, as are definitions and biology of marijuana compounds, side effects of marijuana use, and effects of use on adolescent brain development. Issues concerning medical marijuana specifically are also addressed. Concerning legalization of marijuana, 4 different approaches in the United States are discussed: legalization of marijuana solely for medical purposes, decriminalization of recreational use of marijuana, legalization of recreational use of marijuana, and criminal prosecution of recreational (and medical) use of marijuana. These approaches are compared, and the latest available data are presented to aid in forming public policy. The effects on youth of criminal penalties for marijuana use and possession are also addressed, as are the effects or potential effects of the other 3 policy approaches on adolescent marijuana use. Recommendations are included in the accompanying policy statement.",
"title": ""
},
{
"docid": "bb6f5899c4f1c652e30945c49ce4a2d0",
"text": "This paper reports the piezoelectric properties of ScAlN thin films. We evaluated the piezoelectric coefficients d<sub>33</sub> and d<sub>31</sub> of Sc<sub>x</sub>Al<sub>1-x</sub>N thin films directly deposited onto silicon wafers, as well the radio frequency (RF) electrical characteristics of Sc<sub>0.35</sub>Al<sub>0.65</sub>N bulk acoustic wave (BAW) resonators at around 2 GHz, and found a maximum value for d<sub>33</sub> of 28 pC/N and a maximum -d<sub>31</sub> of 13 pm/V at 40% scandium concentration. In BAW resonators that use Sc<sub>0.35</sub>Al<sub>0.65</sub>N as a piezoelectric film, the electromechanical coupling coefficient k<sup>2</sup> (=15.5%) was found to be 2.6 times that of resonators with AlN films. These experimental results are in very close agreement with first-principles calculations. The large electromechanical coupling coefficient and high sound velocity of these films should make them suitable for high frequency applications.",
"title": ""
},
{
"docid": "acd5879d3d2746e4c6036691e4099f7a",
"text": "Alkamides are fatty acid amides of wide distribution in plants, structurally related to N-acyl-L-homoserine lactones (AHLs) from Gram-negative bacteria and to N- acylethanolamines (NAEs) from plants and mammals. Global analysis of gene expression changes in Arabidopsis thaliana in response to N-isobutyl decanamide, the most highly active alkamide identified to date, revealed an overrepresentation of defense-responsive transcriptional networks. In particular, genes encoding enzymes for jasmonic acid (JA) biosynthesis increased their expression, which occurred in parallel with JA, nitric oxide (NO) and H₂O₂ accumulation. The activity of the alkamide to confer resistance against the necrotizing fungus Botrytis cinerea was tested by inoculating Arabidopsis detached leaves with conidiospores and evaluating disease symptoms and fungal proliferation. N-isobutyl decanamide application significantly reduced necrosis caused by the pathogen and inhibited fungal proliferation. Arabidopsis mutants jar1 and coi1 altered in JA signaling and a MAP kinase mutant (mpk6), unlike salicylic acid- (SA) related mutant eds16/sid2-1, were unable to defend from fungal attack even when N-isobutyl decanamide was supplied, indicating that alkamides could modulate some necrotrophic-associated defense responses through JA-dependent and MPK6-regulated signaling pathways. Our results suggest a role of alkamides in plant immunity induction.",
"title": ""
},
{
"docid": "41a4c88cb1446603f43a4888b6c13f61",
"text": "This paper gives an overview of the ArchWare European Project1. The broad scope of ArchWare is to respond to the ever-present demand for software systems that are capable of accommodating change over their lifetime, and therefore are evolvable. In order to achieve this goal, ArchWare develops an integrated set of architecture-centric languages and tools for the modeldriven engineering of evolvable software systems based on a persistent run-time framework. The ArchWare Integrated Development Environment comprises: (a) innovative formal architecture description, analysis, and refinement languages for describing the architecture of evolvable software systems, verifying their properties and expressing their refinements; (b) tools to support architecture description, analysis, and refinement as well as code generation; (c) enactable processes for supporting model-driven software engineering; (d) a persistent run-time framework including a virtual machine for process enactment. It has been developed using ArchWare itself and is available as Open Source Software.",
"title": ""
},
{
"docid": "455e3f0c6f755d78ecafcdff14c46014",
"text": "BACKGROUND\nIn neonatal and early childhood surgeries such as meningomyelocele repairs, closing deep wounds and oncological treatment, tensor fasciae lata (TFL) flaps are used. However, there are not enough data about structural properties of TFL in foetuses, which can be considered as the closest to neonates in terms of sampling. This study's main objective is to gather data about morphological structures of TFL in human foetuses to be used in newborn surgery.\n\n\nMATERIALS AND METHODS\nFifty formalin-fixed foetuses (24 male, 26 female) with gestational age ranging from 18 to 30 weeks (mean 22.94 ± 3.23 weeks) were included in the study. TFL samples were obtained by bilateral dissection and then surface area, width and length parameters were recorded. Digital callipers were used for length and width measurements whereas surface area was calculated using digital image analysis software.\n\n\nRESULTS\nNo statistically significant differences were found in terms of numerical value of parameters between sides and sexes (p > 0.05). Linear functions for TFL surface area, width, anterior and posterior margin lengths were calculated as y = -225.652 + 14.417 × age (weeks), y = -5.571 + 0.595 × age (weeks), y = -4.276 + 0.909 × age (weeks), and y = -4.468 + 0.779 × age (weeks), respectively.\n\n\nCONCLUSIONS\nLinear functions for TFL surface area, width and lengths can be used in designing TFL flap dimensions in newborn surgery. In addition, using those described linear functions can also be beneficial in prediction of TFL flap dimensions in autopsy studies.",
"title": ""
},
{
"docid": "fdbae668610803991b359702fbd8d430",
"text": "The progress of the social science disciplines depends on conducting relevant research. However, research methodology adopted and choices made during the course of the research project are underpinned by varying ontological, epistemological and axiological positions that may be known or unknown to the researcher. This paper sought to critically explore the philosophical underpinnings of the social science research. It was suggested that a “multiversal” ontological position, positivist-hermeneutic epistemological position and value-laden axiological position should be adopted for social science research by non-western scholars as alternative to the dominant naïve realist, positivist, and value-free orientation. Against the backdrop of producing context-relevant knowledge, non-western scholars are encouraged to re-examine their philosophical positions in the conduct of social science research.",
"title": ""
},
{
"docid": "7b7cb898d6d7f4383489f390a3479b8a",
"text": "Although the Evolved Packet System Authentication and Key Agreement (EPS-AKA) provides security and privacy enhancements in 3rd Generation Partnership Project (3GPP), the International Mobile Subscriber Identity (IMSI) is sent in clear text in order to obtain service. Various efforts to provide security mechanisms to protect this unique private identity have not resulted in methods implemented to protect the disclosure of the IMSI. The exposure of the IMSI brings risk to user privacy, and knowledge of it can lead to several passive and active attacks targeted at specific IMSI's and their respective users. Further, the Temporary Mobile Subscribers Identity (TMSI) generated by the Authentication Center (AuC) have been found to be prone to rainbow and brute force attacks, hence an attacker who gets hold of the TMSI can be able to perform social engineering in tracing the TMSI to the corresponding IMSI of a User Equipment (UE). This paper proposes a change to the EPS-AKA authentication process in 4G Long Term Evolution (LTE) Network by including the use of Public Key Infrastructure (PKI). The change would result in the IMSI never being released in the clear in an untrusted network.",
"title": ""
},
{
"docid": "977a1d6be20dd790e78bd47c8d8d7422",
"text": "Conformation, genetics, and behavioral drive are the major determinants of success in canine athletes, although controllable variables, such as training and nutrition, play an important role. The scope and breadth of canine athletic events has expanded dramatically in the past 30 years, but with limited research on performance nutrition. There are considerable data examining nutritional physiology in endurance dogs and in sprinting dogs; however, nutritional studies for agility, field trial, and detection are rare. This article highlights basic nutritional physiology and interventions for exercise, and reviews newer investigations regarding aging working and service dogs, and canine detection activities.",
"title": ""
},
{
"docid": "42043ee6577d791874c1aa34baf81e64",
"text": "Bagging, boosting and Random Forests are classical ensemble methods used to improve the performance of single classifiers. They obtain superior performance by increasing the accuracy and diversity of the single classifiers. Attempts have been made to reproduce these methods in the more challenging context of evolving data streams. In this paper, we propose a new variant of bagging, called leveraging bagging. This method combines the simplicity of bagging with adding more randomization to the input, and output of the classifiers. We test our method by performing an evaluation study on synthetic and real-world datasets comprising up to ten million examples.",
"title": ""
},
{
"docid": "d4cd0dabcf4caa22ad92fab40844c786",
"text": "NA",
"title": ""
},
{
"docid": "e3b3e4e75580f3dad0f2fb2b9e28fff4",
"text": "The present study introduced an integrated method for the production of biodiesel from microalgal oil. Heterotrophic growth of Chlorella protothecoides resulted in the accumulation of high lipid content (55%) in cells. Large amount of microalgal oil was efficiently extracted from these heterotrophic cells by using n-hexane. Biodiesel comparable to conventional diesel was obtained from heterotrophic microalgal oil by acidic transesterification. The best process combination was 100% catalyst quantity (based on oil weight) with 56:1 molar ratio of methanol to oil at temperature of 30 degrees C, which reduced product specific gravity from an initial value of 0.912 to a final value of 0.8637 in about 4h of reaction time. The results suggested that the new process, which combined bioengineering and transesterification, was a feasible and effective method for the production of high quality biodiesel from microalgal oil.",
"title": ""
},
{
"docid": "8745e21073db143341e376bad1f0afd7",
"text": "The Virtual Reality (VR) user interface style allows natural hand and body motions to manipulate virtual objects in 3D environments using one or more 3D input devices. This style is best suited to application areas where traditional two-dimensional styles fall short, such as scienti c visualization, architectural visualization, and remote manipulation. Currently, the programming e ort required to produce a VR application is too large, and many pitfalls must be avoided in the creation of successful VR programs. In this paper we describe the Decoupled Simulation Model for creating successful VR applications, and a software system that embodies this model. The MR Toolkit simpli es the development of VR applications by providing standard facilities required by a wide range of VR user interfaces. These facilities include support for distributed computing, head-mounted displays, room geometry management, performance monitoring, hand input devices, and sound feedback. The MR Toolkit encourages programmers to structure their applications to take advantage of the distributed computing capabilities of workstation networks improving the application's performance. In this paper, the motivations and the architecture of the toolkit are outlined, the programmer's view is described, and a simple application is brie y described. CR",
"title": ""
},
{
"docid": "8a0c295e620b68c07005d6d96d4acbe9",
"text": "One method of viral marketing involves seeding certain consumers within a population to encourage faster adoption of the product throughout the entire population. However, determining how many and which consumers within a particular social network should be seeded to maximize adoption is challenging. We define a strategy space for consumer seeding by weighting a combination of network characteristics such as average path length, clustering coefficient, and degree. We measure strategy effectiveness by simulating adoption on a Bass-like agent-based model, with five different social network structures: four classic theoretical models (random, lattice, small-world, and preferential attachment) and one empirical (extracted from Twitter friendship data). To discover good seeding strategies, we have developed a new tool, called BehaviorSearch, which uses genetic algorithms to search through the parameter-space of agent-based models. This evolutionary search also provides insight into the interaction between strategies and network structure. Our results show that one simple strategy (ranking by node degree) is near-optimal for the four theoretical networks, but that a more nuanced strategy performs significantly better on the empirical Twitter-based network. We also find a correlation between the optimal seeding budget for a network, and the inequality of the degree distribution.",
"title": ""
},
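A minimal sketch of the "rank by node degree" seeding heuristic discussed above, paired with a deliberately crude adoption spread; the scale-free test graph, influence probability and seed budget are illustrative assumptions, not the paper's Bass-like agent-based model.

    import random
    import networkx as nx

    def degree_seeds(G, budget):
        """Pick the `budget` highest-degree nodes as initial adopters."""
        ranked = sorted(G.degree, key=lambda nd: nd[1], reverse=True)
        return {node for node, _ in ranked[:budget]}

    def simulate_adoption(G, seeds, p_influence=0.05, steps=30, seed=0):
        """Each step, a non-adopter adopts with probability p per adopting neighbour."""
        rng = random.Random(seed)
        adopted = set(seeds)
        for _ in range(steps):
            new = {v for v in G if v not in adopted
                   and any(u in adopted and rng.random() < p_influence
                           for u in G.neighbors(v))}
            if not new:
                break
            adopted |= new
        return len(adopted)

    G = nx.barabasi_albert_graph(1000, 3, seed=1)
    print("adopters reached:", simulate_adoption(G, degree_seeds(G, budget=10)))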
{
"docid": "bb2de14849800861d99b40cb8bfba562",
"text": "In this paper, the problem of time series prediction is studied. A Bayesian procedure based on Gaussian process models using a nonstationary covariance function is proposed. Experiments proved the approach e4ectiveness with an excellent prediction and a good tracking. The conceptual simplicity, and good performance of Gaussian process models should make them very attractive for a wide range of problems. c © 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "1e3585a27b6373685544dc392140a4fb",
"text": "When operating in partially-known environments, autonomous vehicles must constantly update their maps and plans based on new sensor information. Much focus has been placed on developing efficient incremental planning algorithms that are able to efficiently replan when the map and associated cost function changes. However, much less attention has been placed on efficiently updating the cost function used by these planners, which can represent a significant portion of the time spent replanning. In this paper, we present the Limited Incremental Distance Transform algorithm, which can be used to efficiently update the cost function used for planning when changes in the environment are observed. Using this algorithm it is possible to plan paths in a completely incremental way starting from a list of changed obstacle classifications. We present results comparing the algorithm to the Euclidean distance transform and a mask-based incremental distance transform algorithm. Computation time is reduced by an order of magnitude for a UAV application. We also provide example results from an autonomous micro aerial vehicle with on-board sensing and computing.",
"title": ""
},
{
"docid": "cf54c485a54d9b22d06710684061eac2",
"text": "Many threads packages have been proposed for programming wireless sensor platforms. However, many sensor network operating systems still choose to provide an event-driven model, due to efficiency concerns. We present TOS-Threads, a threads package for TinyOS that combines the ease of a threaded programming model with the efficiency of an event-based kernel. TOSThreads is backwards compatible with existing TinyOS code, supports an evolvable, thread-safe kernel API, and enables flexible application development through dynamic linking and loading. In TOS-Threads, TinyOS code runs at a higher priority than application threads and all kernel operations are invoked only via message passing, never directly, ensuring thread-safety while enabling maximal concurrency. The TOSThreads package is non-invasive; it does not require any large-scale changes to existing TinyOS code.\n We demonstrate that TOSThreads context switches and system calls introduce an overhead of less than 0.92% and that dynamic linking and loading takes as little as 90 ms for a representative sensing application. We compare different programming models built using TOSThreads, including standard C with blocking system calls and a reimplementation of Tenet. Additionally, we demonstrate that TOSThreads is able to run computationally intensive tasks without adversely affecting the timing of critical OS services.",
"title": ""
}
] |
scidocsrr
|
21498e70834a40224f4a104d41a7868e
|
The neurobiology of antisocial personality disorder: The quest for rehabilitation and treatment
|
[
{
"docid": "e9621784df5009b241c563a54583bab9",
"text": "CONTEXT\nPsychopathic antisocial individuals have previously been characterized by abnormal interhemispheric processing and callosal functioning, but there have been no studies on the structural characteristics of the corpus callosum in this group.\n\n\nOBJECTIVES\nTo assess whether (1) psychopathic individuals with antisocial personality disorder show structural and functional impairments in the corpus callosum, (2) group differences are mirrored by correlations between dimensional measures of callosal structure and psychopathy, (3) callosal abnormalities are associated with affective deficits, and (4) callosal abnormalities are independent of psychosocial deficits.\n\n\nDESIGN\nCase-control study.\n\n\nSETTING\nCommunity sample.\n\n\nPARTICIPANTS\nFifteen men with antisocial personality disorder and high psychopathy scores and 25 matched controls, all from a larger sample of 83 community volunteers.\n\n\nMAIN OUTCOME MEASURES\nStructural magnetic resonance imaging measures of the corpus callosum (volume estimate of callosal white matter, thickness, length, and genu and splenium area), functional callosal measures (2 divided visual field tasks), electrodermal and cardiovascular activity during a social stressor, personality measures of affective and interpersonal deficits, and verbal and spatial ability.\n\n\nRESULTS\nPsychopathic antisocial individuals compared with controls showed a 22.6% increase in estimated callosal white matter volume (P<.001), a 6.9% increase in callosal length (P =.002), a 15.3% reduction in callosal thickness (P =.04), and increased functional interhemispheric connectivity (P =.02). Correlational analyses in the larger unselected sample confirmed the association between antisocial personality and callosal structural abnormalities. Larger callosal volumes were associated with affective and interpersonal deficits, low autonomic stress reactivity, and low spatial ability. Callosal abnormalities were independent of psychosocial deficits.\n\n\nCONCLUSIONS\nCorpus callosum abnormalities in psychopathic antisocial individuals may reflect atypical neurodevelopmental processes involving an arrest of early axonal pruning or increased white matter myelination. These findings may help explain affective deficits and previous findings of abnormal interhemispheric transfer in psychopathic individuals.",
"title": ""
}
] |
[
{
"docid": "f3ca98a8e0600f0c80ef539cfc58e77e",
"text": "In this paper, we address a real life waste collection vehicle routing problem with time windows (VRPTW) with consideration of multiple disposal trips and drivers’ lunch breaks. Solomon’s well-known insertion algorithm is extended for the problem. While minimizing the number of vehicles and total traveling time is the major objective of vehicle routing problems in the literature, here we also consider the route compactness and workload balancing of a solution since they are very important aspects in practical applications. In order to improve the route compactness and workload balancing, a capacitated clustering-based waste collection VRPTW algorithm is developed. The proposed algorithms have been successfully implemented and deployed for the real life waste collection problems at Waste Management, Inc. A set of waste collection VRPTW benchmark problems is also presented in this paper. Waste collection problems are frequently considered as arc routing problems without time windows. However, that point of view can be applied only to residential waste collection problems. In the waste collection industry, there are three major areas: commercial waste collection, residential waste collection and roll-on-roll-off. In this paper, we mainly focus on the commercial waste collection problem. The problem can be characterized as a variant of VRPTW since commercial waste collection stops may have time windows. The major variation from a standard VRPTW is due to disposal operations and driver’s lunch break. When a vehicle is full, it needs to go to one of the disposal facilities (landfill or transfer station). Each vehicle can, and typically does, make multiple disposal trips per day. The purpose of this paper is to introduce the waste collection VRPTW, benchmark problem sets, and a solution approach for the problem. The proposed algorithms have been successfully implemented and deployed for the real life waste collection problems of Waste Management, the leading provider of comprehensive waste management services in North America with nearly 26,000 collection and transfer vehicles. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ee0ba4a70bfa4f53d33a31b2d9063e89",
"text": "Since the identification of long-range dependence in network traffic ten years ago, its consistent appearance across numerous measurement studies has largely discredited Poisson-based models. However, since that original data set was collected, both link speeds and the number of Internet-connected hosts have increased by more than three orders of magnitude. Thus, we now revisit the Poisson assumption, by studying a combination of historical traces and new measurements obtained from a major backbone link belonging to a Tier 1 ISP. We show that unlike the older data sets, current network traffic can be well represented by the Poisson model for sub-second time scales. At multisecond scales, we find a distinctive piecewise-linear nonstationarity, together with evidence of long-range dependence. Combining our observations across both time scales leads to a time-dependent Poisson characterization of network traffic that, when viewed across very long time scales, exhibits the observed long-range dependence. This traffic characterization reconciliates the seemingly contradicting observations of Poisson and long-memory traffic characteristics. It also seems to be in general agreement with recent theoretical models for large-scale traffic aggregation",
"title": ""
},
{
"docid": "3ce09ec0f516894d027583d27814294f",
"text": "This paper provides a model of the use of computer algebra experimentation in algebraic graph theory. Starting from the semisymmetric cubic graph L on 112 vertices, we embed it into another semisymmetric graph N of valency 15 on the same vertex set. In order to consider systematically the links between L and N a number of combinatorial structures are involved and related coherent configurations are investigated. In particular, the construction of the incidence double cover of directed graphs is exploited. As a natural by-product of the approach presented here, a number of new interesting (mostly non-Schurian) association schemes on 56, 112 and 120 vertices are introduced and briefly discussed. We use computer algebra system GAP (including GRAPE and nauty), as well as computer package COCO.",
"title": ""
},
{
"docid": "8ffc78f24f56e6c3a46b0149a6842663",
"text": "In this paper, we present a hierarchical spatiotemporal blur-based approach to automatically detect contaminants on the camera lens. Contaminants adhering to camera lens corresponds to blur regions in digital image, as camera is focused on scene. We use kurtosis for a first level analysis to detect blur regions and filter them out. Next level of analysis computes lowpass energy and singular values to further validate blur regions. These analyses detect blur regions in an image efficiently and temporal consistency of blur is additionally incorporated to remove false detections. Once the presence of a contaminant is detected, we use an appearance-based classifier to categorize the type of contaminant on the lens. Our results are promising in terms of performance and latency when compared with state-of-the-art methods under a variety of real-world conditions.",
"title": ""
},
{
"docid": "e51d3dda4b53a01fbf12ce033321421f",
"text": "The tremendous growth in electronic data of universities creates the need to have some meaningful information extracted from these large volumes of data. The advancement in the data mining field makes it possible to mine educational data in order to improve the quality of the educational processes. This study, thus, uses data mining methods to study the performance of undergraduate students. Two aspects of students' performance have been focused upon. First, predicting students' academic achievement at the end of a fouryear study programme. Second, studying typical progressions and combining them with prediction results. Two important groups of students have been identified: the low and high achieving students. The results indicate that by focusing on a small number of courses that are indicators of particularly good or poor performance, it is possible to provide timely warning and support to low achieving students, and advice and opportunities to high performing students. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ab7663ef08505e37be080eab491d2607",
"text": "This paper has studied the fatigue and friction of big end bearing on an engine connecting rod by combining the multi-body dynamics and hydrodynamic lubrication model. First, the basic equations and the application on AVL-Excite software platform of multi-body dynamics have been described in detail. Then, introduce the hydrodynamic lubrication model, which is the extended Reynolds equation derived from the Navier-Stokes equation and the equation of continuity. After that, carry out the static calculation of connecting rod assembly. At the same time, multi-body dynamics analysis has been performed and stress history can be obtained by finite element data recovery. Next, execute the fatigue analysis combining the Static stress and dynamic stress, safety factor distribution of connecting rod will be obtained as result. At last, detailed friction analysis of the big-end bearing has been performed. And got a good agreement when contrast the simulation results to the Bearing wear in the experiment.",
"title": ""
},
{
"docid": "673f1315f3699e0fbc3701743a90eb71",
"text": "The majority of learning algorithms available today focus on approximating the state (V ) or state-action (Q) value function and efficient action selection comes as an afterthought. On the other hand, real-world problems tend to have large action spaces, where evaluating every possible action becomes impractical. This mismatch presents a major obstacle in successfully applying reinforcement learning to real-world problems. In this paper we present an effective approach to learning and acting in domains with multidimensional and/or continuous control variables where efficient action selection is embedded in the learning process. Instead of learning and representing the state or state-action value function of the MDP, we learn a value function over an implied augmented MDP, where states represent collections of actions in the original MDP and transitions represent choices eliminating parts of the action space at each step. Action selection in the original MDP is reduced to a binary search by the agent in the transformed MDP, with computational complexity logarithmic in the number of actions, or equivalently linear in the number of action dimensions. Our method can be combined with any discrete-action reinforcement learning algorithm for learning multidimensional continuous-action policies using a state value approximator in the transformed MDP. Our preliminary results with two well-known reinforcement learning algorithms (Least-Squares Policy Iteration and Fitted Q-Iteration) on two continuous action domains (1-dimensional inverted pendulum regulator, 2-dimensional bicycle balancing) demonstrate the viability and the potential of the proposed approach.",
"title": ""
},
{
"docid": "af3addd0c8e9af91eb10131ba0eba406",
"text": "Answering compositional questions requiring multi-step reasoning is challenging. We introduce an end-to-end differentiable model for interpreting questions about a knowledge graph (KG), which is inspired by formal approaches to semantics. Each span of text is represented by a denotation in a KG and a vector that captures ungrounded aspects of meaning. Learned composition modules recursively combine constituent spans, culminating in a grounding for the complete sentence which answers the question. For example, to interpret “not green”, the model represents “green” as a set of KG entities and “not” as a trainable ungrounded vector—and then uses this vector to parameterize a composition function that performs a complement operation. For each sentence, we build a parse chart subsuming all possible parses, allowing the model to jointly learn both the composition operators and output structure by gradient descent from endtask supervision. The model learns a variety of challenging semantic operators, such as quantifiers, disjunctions and composed relations, and infers latent syntactic structure. It also generalizes well to longer questions than seen in its training data, in contrast to RNN, its treebased variants, and semantic parsing baselines.",
"title": ""
},
{
"docid": "d0c5bb905973b3098b06f55232ed9c8f",
"text": "In recent years, theoretical and computational linguistics has paid much attention to linguistic items that form scales. In NLP, much research has focused on ordering adjectives by intensity (tiny < small). Here, we address the task of automatically ordering English adverbs by their intensifying or diminishing effect on adjectives (e.g. extremely small < very small). We experiment with 4 different methods: 1) using the association strength between adverbs and adjectives; 2) exploiting scalar patterns (such as not only X but Y); 3) using the metadata of product reviews; 4) clustering. The method that performs best is based on the use of metadata and ranks adverbs by their scaling factor relative to unmodified adjectives.",
"title": ""
},
{
"docid": "1994429bea369cf4f4395095789b3ec4",
"text": "Since Software-Defined Networking (SDN) gains popularity, mobile/wireless support is mentioned with importance to be handled as one of the crucial aspects in SDN. SDN introduces a centralized entity called SDN controller with the holistic view of the topology on the separated control/data plane architecture. Leveraging the features provided in the SDN controller, mobility management can be simply designed and lightweight, thus there is no need to define and rely on new mobility entities such as given in the traditional IP mobility management architectures. In this paper, we design and implement lightweight IPv6 mobility management in Open Network Operating System (ONOS) that is an open-source SDN control platform for service providers. For the lightweight mobility management, we implement the Neighbor Discovery Proxy (ND Proxy) function into the OpenFlow-enabled AP and switches, and ONOS controller module to handle the receiving ICMPv6 message and to send the unique home network prefix address to an IPv6 host. Thus this approach enables mobility management without bringing or integrating on traditional IP mobility protocols. The proposed idea was experimentally evaluated in the ONOS controller and Raspberry Pi based testbed, identifying the obtained handoff signaling latency is in the acceptable performance range.",
"title": ""
},
{
"docid": "cf70de0c40646e3564b7d04c9dc050c7",
"text": "After segmenting candidate exudates regions in colour retinal images we present and compare two methods for their classification. The Neural Network based approach performs marginally better than the Support Vector Machine based approach, but we show that the latter are more flexible given criteria such as control of sensitivity and specificity rates. We present classification results for different learning algorithms for the Neural Net and use both hard and soft margins for the Support Vector Machines. We also present ROC curves to examine the trade-off between the sensitivity and specificity of the classifiers.",
"title": ""
},
{
"docid": "955ae6e1dffbe580217b812f943b4339",
"text": "Successful applications of reinforcement learning in realworld problems often require dealing with partially observable states. It is in general very challenging to construct and infer hidden states as they often depend on the agent’s entire interaction history and may require substantial domain knowledge. In this work, we investigate a deep-learning approach to learning the representation of states in partially observable tasks, with minimal prior knowledge of the domain. In particular, we study reinforcement learning with deep neural networks, including RNN and LSTM, which are equipped with the desired property of being able to capture long-term dependency on history, and thus providing an effective way of learning the representation of hidden states. We further develop a hybrid approach that combines the strength of both supervised learning (for representing hidden states) and reinforcement learning (for optimizing control) with joint training. Extensive experiments based on a KDD Cup 1998 direct mailing campaign problem demonstrate the effectiveness and advantages of the proposed approach, which performs the best across the board.",
"title": ""
},
{
"docid": "41b8c1b04f11f5ac86d1d6e696007036",
"text": "The neural systems involved in hearing and repeating single words were investigated in a series of experiments using PET. Neuropsychological and psycholinguistic studies implicate the involvement of posterior and anterior left perisylvian regions (Wernicke's and Broca's areas). Although previous functional neuroimaging studies have consistently shown activation of Wernicke's area, there has been only variable implication of Broca's area. This study demonstrates that Broca's area is involved in both auditory word perception and repetition but activation is dependent on task (greater during repetition than hearing) and stimulus presentation (greater when hearing words at a slow rate). The peak of frontal activation in response to hearing words is anterior to that associated with repeating words; the former is probably located in Brodmann's area 45, the latter in Brodmann's area 44 and the adjacent precentral sulcus. As Broca's area activation is more subtle and complex than that in Wernicke's area during these tasks, the likelihood of observing it is influenced by both the study design and the image analysis technique employed. As a secondary outcome from the study, the response of bilateral auditory association cortex to 'own voice' during repetition was shown to be the same as when listening to \"other voice' from a prerecorded tape.",
"title": ""
},
{
"docid": "864ab702d0b45235efe66cd9e3bc5e66",
"text": "In this work we release our extensible and easily configurable neural network training software. It provides a rich set of functional layers with a particular focus on efficient training of recurrent neural network topologies on multiple GPUs. The source of the software package is public and freely available for academic research purposes and can be used as a framework or as a standalone tool which supports a flexible configuration. The software allows to train state-of-the-art deep bidirectional long short-term memory (LSTM) models on both one dimensional data like speech or two dimensional data like handwritten text and was used to develop successful submission systems in several evaluation campaigns.",
"title": ""
},
{
"docid": "9852e00f24fd8f626a018df99bea5f1f",
"text": "Business Process Reengineering is a discipline in which extensive research has been carried out and numerous methodologies churned out. But what seems to be lacking is a structured approach. In this paper we provide a review of BPR and present ‘best of breed ‘ methodologies from contemporary literature and introduce a consolidated, systematic approach to the redesign of a business enterprise. The methodology includes the five activities: Prepare for reengineering, Map and Analyze As-Is process, Design To-be process, Implement reengineered process and Improve continuously.",
"title": ""
},
{
"docid": "b14007d127629d7082d9bb5169140d0e",
"text": "The term \"selection bias\" encompasses various biases in epidemiology. We describe examples of selection bias in case-control studies (eg, inappropriate selection of controls) and cohort studies (eg, informative censoring). We argue that the causal structure underlying the bias in each example is essentially the same: conditioning on a common effect of 2 variables, one of which is either exposure or a cause of exposure and the other is either the outcome or a cause of the outcome. This structure is shared by other biases (eg, adjustment for variables affected by prior exposure). A structural classification of bias distinguishes between biases resulting from conditioning on common effects (\"selection bias\") and those resulting from the existence of common causes of exposure and outcome (\"confounding\"). This classification also leads to a unified approach to adjust for selection bias.",
"title": ""
},
{
"docid": "8bf1793ff3dacec5f88586a980d4f20a",
"text": "A dominant-pole substitution (DPS) technique for low-dropout regulator (LDO) is proposed in this paper. The DPS technique involves signal-current feedforward and amplification such that an ultralow-frequency zero is generated to cancel the dominant pole of LDO, while a higher frequency pole substitutes in and becomes the new dominant pole. With DPS, the loop bandwidth of the proposed LDO can be significantly extended, while a standard value and large output capacitor for transient purpose can still be used. The resultant LDO benefits from both the fast response time due to the wide loop bandwidth and the large charge reservoir from the output capacitor to achieve the significant enhancement in the dynamic performances. Implemented with a commercial 0.18-μm CMOS technology, the proposed LDO with DPS is validated to be capable of delivering 100 mA at 1.0-V output from a 1.2-V supply, with current efficiency of 99.86%. Experimental results also show that the error voltage at the output undergoing 100 mA of load transient in 10-ns edge time is about 25 mV. Line transient responses reveal that no more than 20-mV instantaneous changes at the output when the supply voltage swings between 1.2 and 1.8 V in 100 ns. The power-supply rejection ratio at 3 MHz is -47 dB.",
"title": ""
},
{
"docid": "81cb6b35dcf083fea3973f4ee75a9006",
"text": "We propose frameworks and algorithms for identifying communities in social networks that change over time. Communities are intuitively characterized as \"unusually densely knit\" subsets of a social network. This notion becomes more problematic if the social interactions change over time. Aggregating social networks over time can radically misrepresent the existing and changing community structure. Instead, we propose an optimization-based approach for modeling dynamic community structure. We prove that finding the most explanatory community structure is NP-hard and APX-hard, and propose algorithms based on dynamic programming, exhaustive search, maximum matching, and greedy heuristics. We demonstrate empirically that the heuristics trace developments of community structure accurately for several synthetic and real-world examples.",
"title": ""
},
{
"docid": "881615ecd53c20a93c96defee048f0e1",
"text": "Several research groups have previously constructed short forms of the MacArthur-Bates Communicative Development Inventories (CDI) for different languages. We consider the specific aim of constructing such a short form to be used for language screening in a specific age group. We present a novel strategy for the construction, which is applicable if results from a population-based study using the CDI long form are available for this age group. The basic approach is to select items in a manner implying a left-skewed distribution of the summary score and hence a reliable discrimination among children in the lower end of the distribution despite the measurement error of the instrument. We report on the application of the strategy in constructing a Danish CDI short form and present some results illustrating the validity of the short form. Finally we discuss the choice of the most appropriate age for language screening based on a vocabulary score.",
"title": ""
},
{
"docid": "ebb40b1e228c9f95ce2ea9229a16853c",
"text": "Continuum manipulators attract a lot of interests due to their advantageous properties, such as distal dexterity, design compactness, intrinsic compliance for safe interaction with unstructured environments. However, these manipulators sometimes suffer from the lack of enough stiffness while applied in surgical robotic systems. This paper presents an experimental kinestatic comparison between three continuum manipulators, aiming at revealing how structural variations could alter the manipulators' stiffness properties. These variations not only include modifying the arrangements of elastic components, but also include integrating a passive rigid kinematic chain to form a hybrid continuum-rigid manipulator. Results of this paper could contribute to the development of design guidelines for realizing desired stiffness properties of a continuum or hybrid manipulator.",
"title": ""
}
] |
scidocsrr
|
48abb15ae19b9881b249b646984e9683
|
Customized Regression Model for Airbnb Dynamic Pricing
|
[
{
"docid": "15dbf1ad05c8219be484c01145c09b6c",
"text": "In this paper, we study the contextual bandit problem (also known as the multi-armed bandit problem with expert advice) for linear payoff functions. For T rounds, K actions, and d dimensional feature vectors, we prove an O ( √ Td ln(KT ln(T )/δ) ) regret bound that holds with probability 1− δ for the simplest known (both conceptually and computationally) efficient upper confidence bound algorithm for this problem. We also prove a lower bound of Ω( √ Td) for this setting, matching the upper bound up to logarithmic factors.",
"title": ""
},
{
"docid": "bc7c5ab8ec28e9a5917fc94b776b468a",
"text": "Reasonable house price prediction is a meaningful task, and the house clustering is an important process in the prediction. In this paper, we propose the method of Multi-Scale Affinity Propagation(MSAP) aggregating the house appropriately by the landmark and the facility. Then in each cluster, using Linear Regression model with Normal Noise(LRNN) predicts the reasonable price, which is verified by the increasing number of the renting reviews. Experiments show that the precision of the reasonable price prediction improved greatly via the method of MSAP.",
"title": ""
}
] |
[
{
"docid": "a3a83c8c0592e8335f4687d0e2ee802f",
"text": "The rapid growth and development in technology has made computer as a weapon which can cause great loss if used with wrong intentions. Computer forensics aims at collecting, and analyzing evidences from the seized devices in such ways so that they are admissible in court of law. Anti-forensics, on the other hand, is collection of tricks and techniques that are used and applied with clear aim of forestalling the forensic investigation. Crime and crime prevention go hand in hand. Once a crime surfaces, then a defense is developed, then a new crime counters the new defense. Hence along with continuous developments in forensics, a thorough study and knowledge of developments in anti-forensics is equally important. This paper focuses on understanding different techniques that can be used for anti-forensic purposes with help of open source tools.",
"title": ""
},
{
"docid": "b57377a695ce7c5114d61bbe4f29e7a1",
"text": "Referring to existing illustrations helps novice drawers to realize their ideas. To find such helpful references from a large image collection, we first build a semantic vector representation of illustrations by training convolutional neural networks. As the proposed vector space correctly reflects the semantic meanings of illustrations, users can efficiently search for references with similar attributes. Besides the search with a single query, a semantic morphing algorithm that searches the intermediate illustrations that gradually connect two queries is proposed. Several experiments were conducted to demonstrate the effectiveness of our methods.",
"title": ""
},
{
"docid": "907940110f89714bf20a8395cd8932d5",
"text": "Polyphonic sound event detection (polyphonic SED) is an interesting but challenging task due to the concurrence of multiple sound events. Recently, SED methods based on convolutional neural networks (CNN) and recurrent neural networks (RNN) have shown promising performance. Generally, CNN are designed for local feature extraction while RNN are used to model the temporal dependency among these local features. Despite their success, it is still insufficient for existing deep learning techniques to separate individual sound event from their mixture, largely due to the overlapping characteristic of features. Motivated by the success of Capsule Networks (CapsNet), we propose a more suitable capsule based approach for polyphonic SED. Specifically, several capsule layers are designed to effectively select representative frequency bands for each individual sound event. The temporal dependency of capsule's outputs is then modeled by a RNN. And a dynamic threshold method is proposed for making the final decision based on RNN outputs. Experiments on the TUT-SED Synthetic 2016 dataset show that the proposed approach obtains an F1-score of 68.8% and an error rate of 0.45, outperforming the previous state-of-the-art method of 66.4% and 0.48, respectively.",
"title": ""
},
{
"docid": "ad11946cfb127e19b0ee80f5d77dbe93",
"text": "Air quality has great impact on individual and community health. In this demonstration, we present Citisense: a mobile air quality system that enables users to track their personal air quality exposure for discovery, self-reflection, and sharing within their local communities and online social networks.",
"title": ""
},
{
"docid": "e6289c25323dd5f4b7ff6648201a636e",
"text": "A new wideband differentially fed dual-polarized antenna with stable radiation pattern for base stations is proposed and studied. A cross-shaped feeding structure is specially designed to fit the differentially fed scheme and four parasitic loop elements are employed to achieve a wide impedance bandwidth. A stable antenna gain and a stable radiation pattern are realized by using a rectangular cavity-shaped reflector instead of a planar one. A detailed parametric study was performed to optimize the antenna’s performances. After that, a prototype was fabricated and tested. Measured results show that the antenna achieves a wide impedance bandwidth of 52% with differential standing-wave ratio <1.5 from 1.7 to 2.9 GHz and a high differential port-to-port isolation of better than 26.3 dB within the operating frequency bandwidth. A stable antenna gain ( $\\approx 8$ dBi) and a stable radiation pattern with 3-dB beamwidth of 65° ±5° were also found over the operating frequencies. Moreover, the proposed antenna can be easily built by using printed circuit board fabrication technique due to its compact and planar structure.",
"title": ""
},
{
"docid": "af0097bec55577049b08f2bc9e65dd4d",
"text": "The recent surge in using social media has created a massive amount of unstructured textual complaints about products and services. However, discovering and quantifying potential product defects from large amounts of unstructured text is a nontrivial task. In this paper, we develop a probabilistic defect model (PDM) that identifies the most critical product issues and corresponding product attributes, simultaneously. We facilitate domain-oriented key attributes (e.g., product model, year of production, defective components, symptoms, etc.) of a product to identify and acquire integral information of defect. We conduct comprehensive evaluations including quantitative evaluations and qualitative evaluations to ensure the quality of discovered information. Experimental results demonstrate that our proposed model outperforms existing unsupervised method (K-Means Clustering), and could find more valuable information. Our research has significant managerial implications for mangers, manufacturers, and policy makers. [Category: Data and Text Mining]",
"title": ""
},
{
"docid": "ea2af110b27015b83659182948a32b36",
"text": "BACKGROUND\nDescent of the lateral aspect of the brow is one of the earliest signs of aging. The purpose of this study was to describe an open surgical technique for lateral brow lifts, with the goal of achieving reliable, predictable, and long-lasting results.\n\n\nMETHODS\nAn incision was made behind and parallel to the temporal hairline, and then extended deeper through the temporoparietal fascia to the level of the deep temporal fascia. Dissection was continued anteriorly on the surface of the deep temporal fascia and subperiosteally beyond the temporal crest, to the level of the superolateral orbital rim. Fixation of the lateral brow and tightening of the orbicularis oculi muscle was achieved with the placement of sutures that secured the tissue directly to the galea aponeurotica on the lateral aspect of the incision. An additional fixation was made between the temporoparietal fascia and the deep temporal fascia, as well as between the temporoparietal fascia and the galea aponeurotica. The excess skin in the temporal area was excised and the incision was closed.\n\n\nRESULTS\nA total of 519 patients were included in the study. Satisfactory lateral brow elevation was obtained in most of the patients (94.41%). The following complications were observed: total relapse (n=8), partial relapse (n=21), neurapraxia of the frontal branch of the facial nerve (n=5), and limited alopecia in the temporal incision (n=9).\n\n\nCONCLUSIONS\nWe consider this approach to be a safe and effective procedure, with long-lasting results.",
"title": ""
},
{
"docid": "138ada76eb85092ec527e1265bffa36b",
"text": "Web service discovery is becoming a challenging and time consuming task due to large number of Web services available on the Internet. Organizing the Web services into functionally similar clusters is one of a very efficient approach for reducing the search space. However, similarity calculation methods that are used in current approaches such as string-based, corpus-based, knowledge-based and hybrid methods have problems that include discovering semantic characteristics, loss of semantic information, encoding fine-grained information and shortage of high-quality ontologies. Because of these issues, the approaches couldn't identify the correct clusters for some services and placed them in wrong clusters. As a result of this, cluster performance is reduced. This paper proposes post-filtering approach to increase precision by rearranging services incorrectly clustered. Our approach uses context aware method that learns term similarity by machine learning under domain context. Experimental results show that our post-filtering approach works efficiently.",
"title": ""
},
{
"docid": "ccf7390abc2924e4d2136a2b82639115",
"text": "The proposition of increased innovation in network applications and reduced cost for network operators has won over the networking world to the vision of software-defined networking (SDN). With the excitement of holistic visibility across the network and the ability to program network devices, developers have rushed to present a range of new SDN-compliant hardware, software, and services. However, amidst this frenzy of activity, one key element has only recently entered the debate: Network Security. In this paper, security in SDN is surveyed presenting both the research community and industry advances in this area. The challenges to securing the network from the persistent attacker are discussed, and the holistic approach to the security architecture that is required for SDN is described. Future research directions that will be key to providing network security in SDN are identified.",
"title": ""
},
{
"docid": "2da6c199c7561855fde9be6f4798a4af",
"text": "Ontogenetic development of the digestive system in golden pompano (Trachinotus ovatus, Linnaeus 1758) larvae was histologically and enzymatically studied from hatch to 32 day post-hatch (DPH). The development of digestive system in golden pompano can be divided into three phases: phase I starting from hatching and ending at the onset of exogenous feeding; phase II starting from first feeding (3 DPH) and finishing at the formation of gastric glands; and phase III starting from the appearance of gastric glands on 15 DPH and continuing onward. The specific activities of trypsin, amylase, and lipase increased sharply from the onset of first feeding to 5–7 DPH, followed by irregular fluctuations. Toward the end of this study, the specific activities of trypsin and amylase showed a declining trend, while the lipase activity remained at similar levels as it was at 5 DPH. The specific activity of pepsin was first detected on 15 DPH and increased with fish age. The dynamics of digestive enzymes corresponded to the structural development of the digestive system. The enzyme activities tend to be stable after the formation of the gastric glands in fish stomach on 15 DPH. The composition of digestive enzymes in larval pompano indicates that fish are able to digest protein, lipid and carbohydrate at early developmental stages. Weaning of larval pompano is recommended from 15 DPH onward. Results of the present study lead to a better understanding of the ontogeny of golden pompano during the larval stage and provide a guide to feeding and weaning of this economically important fish in hatcheries.",
"title": ""
},
{
"docid": "7ff2f2057d7e38f0258cd361c978eb70",
"text": "Sustainable production of renewable energy is being hotly debated globally since it is increasingly understood that first generation biofuels, primarily produced from food crops and mostly oil seeds are limited in their ability to achieve targets for biofuel production, climate change mitigation and economic growth. These concerns have increased the interest in developing second generation biofuels produced from non-food feedstocks such as microalgae, which potentially offer greatest opportunities in the longer term. This paper reviews the current status of microalgae use for biodiesel production, including their cultivation, harvesting, and processing. The microalgae species most used for biodiesel production are presented and their main advantages described in comparison with other available biodiesel feedstocks. The various aspects associated with the design of microalgae production units are described, giving an overview of the current state of development of algae cultivation systems (photo-bioreactors and open ponds). Other potential applications and products from microalgae are also presented such as for biological sequestration of CO2, wastewater treatment, in human health, as food additive, and for aquaculture. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "91e0722c00b109d7db137fb3468c088a",
"text": "This paper proposes a novel flexible piezoelectric micro-machined ultrasound transducer, which is based on PZT and a polyimide substrate. The transducer is made on the polyimide substrate and packaged with medical polydimethylsiloxane. Instead of etching the PZT ceramic, this paper proposes a method of putting diced PZT blocks into holes on the polyimide which are pre-etched. The device works in d31 mode and the electromechanical coupling factor is 22.25%. Its flexibility, good conformal contacting with skin surfaces and proper resonant frequency make the device suitable for heart imaging. The flexible packaging ultrasound transducer also has a good waterproof performance after hundreds of ultrasonic electric tests in water. It is a promising ultrasound transducer and will be an effective supplementary ultrasound imaging method in the practical applications.",
"title": ""
},
{
"docid": "620574da26151188171a91eb64de344d",
"text": "Major security issues for banking and financial institutions are Phishing. Phishing is a webpage attack, it pretends a customer web services using tactics and mimics from unauthorized persons or organization. It is an illegitimate act to steals user personal information such as bank details, social security numbers and credit card details, by showcasing itself as a truthful object, in the public network. When users provide confidential information, they are not aware of the fact that the websites they are using are phishing websites. This paper presents a technique for detecting phishing website attacks and also spotting phishing websites by combines source code and URL in the webpage. Keywords—Phishing, Website attacks, Source Code, URL.",
"title": ""
},
{
"docid": "117de8844d5a6c506d69de65ae6b62ae",
"text": "Computer-based conversational agents are becoming ubiquitous. However, for these systems to be engaging and valuable to the user, they must be able to express emotion, in addition to providing informative responses. Humans rely on much more than language during conversations; visual information is key to providing context. We present the first example of an image-grounded conversational agent using visual sentiment, facial expression and scene features. We show that key qualities of the generated dialogue can be manipulated by the features used for training the agent. We evaluate our model on a large and very challenging real-world dataset of conversations from social media (Twitter). The image-grounding leads to significantly more informative, emotional and specific responses, and the exact qualities can be tuned depending on the image features used. Furthermore, our model improves the objective quality of dialogue responses when evaluated on standard natural language metrics.",
"title": ""
},
{
"docid": "8704a4033132a1d26cf2da726a60045e",
"text": "In practical classification, there is often a mix of learnable and unlearnable classes and only a classifier above a minimum performance threshold can be deployed. This problem is exacerbated if the training set is created by active learning. The bias of actively learned training sets makes it hard to determine whether a class has been learned. We give evidence that there is no general and efficient method for reducing the bias and correctly identifying classes that have been learned. However, we characterize a number of scenarios where active learning can succeed despite these difficulties.",
"title": ""
},
{
"docid": "b3556499bf5d788de7c4d46100ac3a9f",
"text": "Reuse has been proposed as a microarchitecture-level mechanism to reduce the amount of executed instructions, collapsing dependencies and freeing resources for other instructions. Previous works have used reuse domains such as memory accesses, integer or not floating point, based on the reusability rate. However, these works have not studied the specific contribution of reusing different subsets of instructions for performance. In this work, we analysed the sensitivity of trace reuse to instruction subsets, comparing their efficiency to their complementary subsets. We also studied the amount of reuse that can be extracted from loops. Our experiments show that disabling trace reuse outside loops does not harm performance but reduces in 12% the number of accesses to the reuse table. Our experiments with reuse subsets show that most of the speedup can be retained even when not reusing all types of instructions previously found in the reuse domain. 1 ar X iv :1 71 1. 06 67 2v 1 [ cs .A R ] 1 7 N ov 2 01 7",
"title": ""
},
{
"docid": "0d65394a132dba6d4d6827be8afda33e",
"text": "PHYSICIANS’ ABILITY TO PROVIDE high-quality care can be adversely affected by many factors, including sleep deprivation. Concerns about the danger of physicians who are sleep deprived and providing care have led state legislatures and academic institutions to try to constrain the work hours of physicians in training (house staff). Unlike commercial aviation, for example, medicine is an industry in which public safety is directly at risk but does not have mandatory restrictions on work hours. Legislation before the US Congress calls for limiting resident work hours to 80 hours per week and no more than 24 hours of continuous work. Shifts of residents working in the emergency department would be limited to 12 hours. The proposed legislation, which includes public disclosure and civil penalties for hospitals that violate the work hour restrictions, does not address extended duty shifts of attending or private practice physicians. There is still substantial controversy within the medical community about the magnitude and significance of the clinical impairment resulting from work schedules that aggravate sleep deprivation. There is extensive literature on the adverse effects of sleep deprivation in laboratory and nonmedical settings. However, studies on sleep deprivation of physicians performing clinically relevant tasks have been less conclusive. Opinions have been further influenced by the potential adverse impact of reduced work schedules on the economics of health care, on continuity of care, and on quality of care. This review focuses on the consequences of sleep loss both in controlled laboratory environments and in clinical studies involving medical personnel.",
"title": ""
},
{
"docid": "fa320a8347093bca4817da2ed7c54e61",
"text": "Gases for electrical insulation are essential for the operation of electric power equipment. This Review gives a brief history of gaseous insulation that involved the emergence of the most potent industrial greenhouse gas known today, namely sulfur hexafluoride. SF6 paved the way to space-saving equipment for the transmission and distribution of electrical energy. Its ever-rising usage in the electrical grid also played a decisive role in the continuous increase of atmospheric SF6 abundance over the last decades. This Review broadly covers the environmental concerns related to SF6 emissions and assesses the latest generation of eco-friendly replacement gases. They offer great potential for reducing greenhouse gas emissions from electrical equipment but at the same time involve technical trade-offs. The rumors of one or the other being superior seem premature, in particular because of the lack of dielectric, environmental, and chemical information for these relatively novel compounds and their dissociation products during operation.",
"title": ""
},
{
"docid": "c2bd5af9470671eabe3a591121cd0ebc",
"text": "Menus are a primary control in current interfaces, but there has been relatively little theoretical work to model their performance. We propose a model of menu performance that goes beyond previous work by incorporating components for Fitts' Law pointing time, visual search time when novice, Hick-Hyman Law decision time when expert, and for the transition from novice to expert behaviour. The model is able to predict performance for many different menu designs, including adaptive split menus, items with different frequencies and sizes, and multi-level menus. We tested the model by comparing predictions for four menu designs (traditional menus, recency and frequency based split menus, and an adaptive 'morphing' design) with empirical measures. The empirical data matched the predictions extremely well, suggesting that the model can be used to explore a wide range of menu possibilities before implementation.",
"title": ""
},
{
"docid": "a3cd3ec70b5d794173db36cb9a219403",
"text": "We consider the problem of grasping novel objects in cluttered environments. If a full 3-d model of the scene were available, one could use the model to estimate the stability and robustness of different grasps (formalized as form/force-closure, etc); in practice, however, a robot facing a novel object will usually be able to perceive only the front (visible) faces of the object. In this paper, we propose an approach to grasping that estimates the stability of different grasps, given only noisy estimates of the shape of visible portions of an object, such as that obtained from a depth sensor. By combining this with a kinematic description of a robot arm and hand, our algorithm is able to compute a specific positioning of the robot’s fingers so as to grasp an object. We test our algorithm on two robots (with very different arms/manipulators, including one with a multi-fingered hand). We report results on the task of grasping objects of significantly different shapes and appearances than ones in the training set, both in highly cluttered and in uncluttered environments. We also apply our algorithm to the problem of unloading items from a dishwasher. Introduction We consider the problem of grasping novel objects, in the presence of significant amounts of clutter. A key challenge in this setting is that a full 3-d model of the scene is typically not available. Instead, a robot’s depth sensors can usually estimate only the shape of the visible portions of the scene. In this paper, we propose an algorithm that, given such partial models of the scene, selects a grasp—that is, a configuration of the robot’s arm and fingers—to try to pick up an object. If a full 3-d model (including the occluded portions of a scene) were available, then methods such as formand forceclosure (Mason and Salisbury 1985; Bicchi and Kumar 2000; Pollard 2004) and other grasp quality metrics (Pelossof et al. 2004; Hsiao, Kaelbling, and Lozano-Perez 2007; Ciocarlie, Goldfeder, and Allen 2007) can be used to try to find a good grasp. However, given only the point cloud returned by stereo vision or other depth sensors, a straightforward application of these ideas is impossible, since we do not have a model of the occluded portions of the scene. Copyright c © 2008, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Image of an environment (left) and the 3-d pointcloud (right) returned by the Swissranger depth sensor. In detail, we will consider a robot that uses a camera, together with a depth sensor, to perceive a scene. The depth sensor returns a “point cloud,” corresponding to 3-d locations that it has found on the front unoccluded surfaces of the objects. (See Fig. 1.) Such point clouds are typically noisy (because of small errors in the depth estimates); but more importantly, they are also incomplete. 1 This work builds on Saxena et al. (2006a; 2006b; 2007; 2008) which applied supervised learning to identify visual properties that indicate good grasps, given a 2-d image of the scene. However, their algorithm only chose a 3-d “grasp point”—that is, the 3-d position (and 3-d orientation; Saxena et al. 2007) of the center of the end-effector. Thus, it did not generalize well to more complex arms and hands, such as to multi-fingered hands where one has to not only choose the 3d position (and orientation) of the hand, but also address the high dof problem of choosing the positions of all the fingers. 
Our approach begins by computing a number of features of grasp quality, using both 2-d image and the 3-d point cloud features. For example, the 3-d data is used to compute a number of grasp quality metrics, such as the degree to which the fingers are exerting forces normal to the surfaces of the object, and the degree to which they enclose the object. Using such features, we then apply a supervised learning algorithm to estimate the degree to which different configurations of the full arm and fingers reflect good grasps. We test our algorithm on two robots, on a variety of objects of shapes very different from ones in the training set, including a ski boot, a coil of wire, a game controller, and Forexample, standard stereo vision fails to return depth values for textureless portions of the object, thus its point clouds are typically very sparse. Further, the Swissranger gives few points only because of its low spatial resolution of 144 × 176. Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008)",
"title": ""
}
] |
scidocsrr
|
24a95d7ec6f14e6d319afb441fbcd4dd
|
Concurrency control algorithm for shared disk cloud DBMS
|
[
{
"docid": "8d197bf27af825b9972a490d3cc9934c",
"text": "The past decade has witnessed an increasing adoption of cloud database technology, which provides better scalability, availability, and fault-tolerance via transparent partitioning and replication, and automatic load balancing and fail-over. However, only a small number of cloud databases provide strong consistency guarantees for distributed transactions, despite decades of research on distributed transaction processing, due to practical challenges that arise in the cloud setting, where failures are the norm, and human administration is minimal. For example, dealing with locks left by transactions initiated by failed machines, and determining a multi-programming level that avoids thrashing without under-utilizing available resources, are some of the challenges that arise when using lock-based transaction processing mechanisms in the cloud context. Even in the case of optimistic concurrency control, most proposals in the literature deal with distributed validation but still require the database to acquire locks during two-phase commit when installing updates of a single transaction on multiple machines. Very little theoretical work has been done to entirely eliminate the need for locking in distributed transactions, including locks acquired during two-phase commit. In this paper, we re-design optimistic concurrency control to eliminate any need for locking even for atomic commitment, while handling the practical issues in earlier theoretical work related to this problem. We conduct an extensive experimental study to evaluate our approach against lock-based methods under various setups and workloads, and demonstrate that our approach provides many practical advantages in the cloud context.",
"title": ""
},
{
"docid": "0105247ab487c2d06f3ffa0d00d4b4f9",
"text": "Many distributed storage systems achieve high data access throughput via partitioning and replication, each system with its own advantages and tradeoffs. In order to achieve high scalability, however, today's systems generally reduce transactional support, disallowing single transactions from spanning multiple partitions. Calvin is a practical transaction scheduling and data replication layer that uses a deterministic ordering guarantee to significantly reduce the normally prohibitive contention costs associated with distributed transactions. Unlike previous deterministic database system prototypes, Calvin supports disk-based storage, scales near-linearly on a cluster of commodity machines, and has no single point of failure. By replicating transaction inputs rather than effects, Calvin is also able to support multiple consistency levels---including Paxos-based strong consistency across geographically distant replicas---at no cost to transactional throughput.",
"title": ""
},
{
"docid": "494fd53f53aa7c5de6536abe14d284ff",
"text": "The Kalray MPPA-256 processor integrates 256 user cores and 32 system cores on a chip with 28nm CMOS technology. Each core implements a 32-bit 5-issue VLIW architecture. These cores are distributed across 16 compute clusters of 16+1 cores, and 4 quad-core I/O subsystems. Each compute cluster and I/O subsystem owns a private address space, while communication and synchronization between them is ensured by data and control Networks-On-Chip (NoC). The MPPA-256 processor is also fitted with a variety of I/O controllers, in particular DDR, PCI, Ethernet, Interlaken and GPIO. We demonstrate that the MPPA-256 processor clustered manycore architecture is effective on two different classes of applications: embedded computing, with the implementation of a professional H.264 video encoder that runs in real-time at low power; and high-performance computing, with the acceleration of a financial option pricing application. In the first case, a cyclostatic dataflow programming environment is utilized, that automates application distribution over the execution resources. In the second case, an explicit parallel programming model based on POSIX processes, threads, and NoC-specific IPC is used.",
"title": ""
}
] |
[
{
"docid": "bbd1111d276e40870bffc3eac16cdd6d",
"text": "The problem of similarity search in large time series databases has attracted much attention recently. It is a non-trivial problem because of the inherent high dimensionality of the data. The most promising solutions involve first performing dimensionality reduction on the data, and then indexing the reduced data with a spatial access method. Three major dimensionality reduction techniques have been proposed: Singular Value Decomposition (SVD), the Discrete Fourier transform (DFT), and more recently the Discrete Wavelet Transform (DWT). In this work we introduce a new dimensionality reduction technique which we call Piecewise Aggregate Approximation (PAA). We theoretically and empirically compare it to the other techniques and demonstrate its superiority. In addition to being competitive with or faster than the other methods, our approach has numerous other advantages. It is simple to understand and to implement, it allows more flexible distance measures, including weighted Euclidean queries, and the index can be built in linear time.",
"title": ""
},
{
"docid": "24151cf5d4481ba03e6ffd1ca29f3441",
"text": "The design, fabrication and characterization of 79 GHz slot antennas based on substrate integrated waveguides (SIW) are presented in this paper. All the prototypes are fabricated in a polyimide flex foil using printed circuit board (PCB) fabrication processes. A novel concept is used to minimize the leakage losses of the SIWs at millimeter wave frequencies. Different losses in the SIWs are analyzed. SIW-based single slot antenna, longitudinal and four-by-four slot array antennas are numerically and experimentally studied. Measurements of the antennas show approximately 4.7%, 5.4% and 10.7% impedance bandwidth (S11=-10 dB) with 2.8 dBi, 6.0 dBi and 11.0 dBi maximum antenna gain around 79 GHz, respectively. The measured results are in good agreement with the numerical simulations.",
"title": ""
},
{
"docid": "addd824407b42f9850f3d29fb58a21f8",
"text": "In this paper we introduce ZhuSuan, a python probabilistic programming library for Bayesian deep learning, which conjoins the complimentary advantages of Bayesian methods and deep learning. ZhuSuan is built upon Tensorflow. Unlike existing deep learning libraries, which are mainly designed for deterministic neural networks and supervised tasks, ZhuSuan is featured for its deep root into Bayesian inference, thus supporting various kinds of probabilistic models, including both the traditional hierarchical Bayesian models and recent deep generative models. We use running examples to illustrate the probabilistic programming on ZhuSuan, including Bayesian logistic regression, variational auto-encoders, deep sigmoid belief networks and Bayesian recurrent neural networks.",
"title": ""
},
{
"docid": "72944a6ad81c2802d0401f9e0c2d8bb5",
"text": "Available online 10 August 2016 Big Data (BD), with their potential to ascertain valued insights for enhanced decision-making process, have recently attracted substantial interest from both academics and practitioners. Big Data Analytics (BDA) is increasingly becoming a trending practice that many organizations are adopting with the purpose of constructing valuable information from BD. The analytics process, including the deployment and use of BDA tools, is seen by organizations as a tool to improve operational efficiency though it has strategic potential, drive new revenue streams and gain competitive advantages over business rivals. However, there are different types of analytic applications to consider. Therefore, prior to hasty use and buying costly BD tools, there is a need for organizations to first understand the BDA landscape. Given the significant nature of theBDandBDA, this paper presents a state-ofthe-art review that presents a holistic view of the BD challenges and BDA methods theorized/proposed/ employed by organizations to help others understand this landscape with the objective of making robust investment decisions. In doing so, systematically analysing and synthesizing the extant research published on BD and BDA area. More specifically, the authors seek to answer the following two principal questions: Q1 –What are the different types of BD challenges theorized/proposed/confronted by organizations? and Q2 – What are the different types of BDA methods theorized/proposed/employed to overcome BD challenges?. This systematic literature review (SLR) is carried out through observing and understanding the past trends and extant patterns/themes in the BDA research area, evaluating contributions, summarizing knowledge, thereby identifying limitations, implications and potential further research avenues to support the academic community in exploring research themes/patterns. Thus, to trace the implementation of BD strategies, a profiling method is employed to analyze articles (published in English-speaking peer-reviewed journals between 1996 and 2015) extracted from the Scopus database. The analysis presented in this paper has identified relevant BD research studies that have contributed both conceptually and empirically to the expansion and accrual of intellectual wealth to the BDA in technology and organizational resource management discipline. © 2016 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).",
"title": ""
},
{
"docid": "0b197f6bcce309812e0300536a266788",
"text": "Cross-Site Scripting (XSS) vulnerability is one of the most widespread security problems for web applications, which has been haunting the web application developers for years. Various approaches to defend against attacks (that use XSS vulnerabilities) are available today but no single approach solves all the loopholes. After investigating this area, we have been motivated to propose an efficient approach to prevent persistent XSS attack by applying pattern filtering method. In this work, along with necessary background, we present case studies to show the effectiveness of our approach.",
"title": ""
},
{
"docid": "b3a9ad04e7df1b2250f0a7b625509efd",
"text": "Emotions are very important in human-human communication but are usually ignored in human-computer interaction. Recent work focuses on recognition and generation of emotions as well as emotion driven behavior. Our work focuses on the use of emotions in dialogue systems that can be used with speech input or as well in multi-modal environments.This paper describes a framework for using emotional cues in a dialogue system and their informational characterization. We describe emotion models that can be integrated into the dialogue system and can be used in different domains and tasks. Our application of the dialogue system is planned to model multi-modal human-computer-interaction with a humanoid robotic system.",
"title": ""
},
{
"docid": "eba769c6246b44d8ed7e5f08aac17731",
"text": "One hundred men, living in three villages in a remote region of the Eastern Highlands of Papua New Guinea were asked to judge the attractiveness of photographs of women who had undergone micrograft surgery to reduce their waist-to-hip ratios (WHRs). Micrograft surgery involves harvesting adipose tissue from the waist and reshaping the buttocks to produce a low WHR and an \"hourglass\" female figure. Men consistently chose postoperative photographs as being more attractive than preoperative photographs of the same women. Some women gained, and some lost weight, postoperatively, with resultant changes in body mass index (BMI). However, changes in BMI were not related to men's judgments of attractiveness. These results show that the hourglass female figure is rated as attractive by men living in a remote, indigenous community, and that when controlling for BMI, WHR plays a crucial role in their attractiveness judgments.",
"title": ""
},
{
"docid": "c346820b43f99aa6714900c5b110db13",
"text": "BACKGROUND\nDiabetes Mellitus (DM) is a chronic disease that is considered a global public health problem. Education and self-monitoring by diabetic patients help to optimize and make possible a satisfactory metabolic control enabling improved management and reduced morbidity and mortality. The global growth in the use of mobile phones makes them a powerful platform to help provide tailored health, delivered conveniently to patients through health apps.\n\n\nOBJECTIVE\nThe aim of our study was to evaluate the efficacy of mobile apps through a systematic review and meta-analysis to assist DM patients in treatment.\n\n\nMETHODS\nWe conducted searches in the electronic databases MEDLINE (Pubmed), Cochrane Register of Controlled Trials (CENTRAL), and LILACS (Latin American and Caribbean Health Sciences Literature), including manual search in references of publications that included systematic reviews, specialized journals, and gray literature. We considered eligible randomized controlled trials (RCTs) conducted after 2008 with participants of all ages, patients with DM, and users of apps to help manage the disease. The meta-analysis of glycated hemoglobin (HbA1c) was performed in Review Manager software version 5.3.\n\n\nRESULTS\nThe literature search identified 1236 publications. Of these, 13 studies were included that evaluated 1263 patients. In 6 RCTs, there were a statistical significant reduction (P<.05) of HbA1c at the end of studies in the intervention group. The HbA1c data were evaluated by meta-analysis with the following results (mean difference, MD -0.44; CI: -0.59 to -0.29; P<.001; I²=32%).The evaluation favored the treatment in patients who used apps without significant heterogeneity.\n\n\nCONCLUSIONS\nThe use of apps by diabetic patients could help improve the control of HbA1c. In addition, the apps seem to strengthen the perception of self-care by contributing better information and health education to patients. Patients also become more self-confident to deal with their diabetes, mainly by reducing their fear of not knowing how to deal with potential hypoglycemic episodes that may occur.",
"title": ""
},
{
"docid": "055b5012a88d5890eb2445600b1e4ad6",
"text": "Wearable devices for fitness tracking and health monitoring have gained considerable popularity and become one of the fastest growing smart devices market. More and more companies are offering integrated health and activity monitoring solutions for fitness trackers. Recently insurances are offering their customers better conditions for health and condition monitoring. However, the extensive sensitive information collected by tracking products and accessibility by third party service providers poses vital security and privacy challenges on the employed solutions. In this paper, we present our security analysis of a representative sample of current fitness tracking products on the market. In particular, we focus on malicious user setting that aims at injecting false data into the cloud-based services leading to erroneous data analytics. We show that none of these products can provide data integrity, authenticity and confidentiality.",
"title": ""
},
{
"docid": "d71ac31768bf1adb80a8011360225443",
"text": "Person re-identification has recently attracted a lot of attention in the computer vision community. This is in part due to the challenging nature of matching people across cameras with different viewpoints and lighting conditions, as well as across human pose variations. The literature has since devised several approaches to tackle these challenges, but the vast majority of the work has been concerned with appearance-based methods. We propose an approach that goes beyond appearance by integrating a semantic aspect into the model. We jointly learn a discriminative projection to a joint appearance-attribute subspace, effectively leveraging the interaction between attributes and appearance for matching. Our experimental results support our model and demonstrate the performance gain yielded by coupling both tasks. Our results outperform several state-of-the-art methods on VIPeR, a standard re-identification dataset. Finally, we report similar results on a new large-scale dataset we collected and labeled for our task.",
"title": ""
},
{
"docid": "a0b9b40328c03cbbe801e027fb793117",
"text": "BACKGROUND\nA better knowledge of the job aspects that may predict home health care nurses' burnout and work engagement is important in view of stress prevention and health promotion. The Job Demands-Resources model predicts that job demands and resources relate to burnout and work engagement but has not previously been tested in the specific context of home health care nursing.\n\n\nPURPOSE\nThe present study offers a comprehensive test of the Job-Demands Resources model in home health care nursing. We investigate the main and interaction effects of distinctive job demands (workload, emotional demands and aggression) and resources (autonomy, social support and learning opportunities) on burnout and work engagement.\n\n\nMETHODS\nAnalyses were conducted using cross-sectional data from 675 Belgian home health care nurses, who participated in a voluntary and anonymous survey.\n\n\nRESULTS\nThe results show that workload and emotional demands were positively associated with burnout, whereas aggression was unrelated to burnout. All job resources were associated with higher levels of work engagement and lower levels of burnout. In addition, social support buffered the positive relationship between workload and burnout.\n\n\nCONCLUSIONS\nHome health care organizations should invest in dealing with workload and emotional demands and stimulating the job resources under study to reduce the risk of burnout and increase their nurses' work engagement.",
"title": ""
},
{
"docid": "87b67f9ed23c27a71b6597c94ccd6147",
"text": "Recently, deep learning approach, especially deep Convolutional Neural Networks (ConvNets), have achieved overwhelming accuracy with fast processing speed for image classification. Incorporating temporal structure with deep ConvNets for video representation becomes a fundamental problem for video content analysis. In this paper, we propose a new approach, namely Hierarchical Recurrent Neural Encoder (HRNE), to exploit temporal information of videos. Compared to recent video representation inference approaches, this paper makes the following three contributions. First, our HRNE is able to efficiently exploit video temporal structure in a longer range by reducing the length of input information flow, and compositing multiple consecutive inputs at a higher level. Second, computation operations are significantly lessened while attaining more non-linearity. Third, HRNE is able to uncover temporal tran-sitions between frame chunks with different granularities, i.e. it can model the temporal transitions between frames as well as the transitions between segments. We apply the new method to video captioning where temporal information plays a crucial role. Experiments demonstrate that our method outperforms the state-of-the-art on video captioning benchmarks.",
"title": ""
},
{
"docid": "58c67e70ce84572151b97731c8ef56a5",
"text": "Shadow detection is a fundamental and challenging task, since it requires an understanding of global image semantics and there are various backgrounds around shadows. This paper presents a novel network for shadow detection by analyzing image context in a direction-aware manner. To achieve this, we first formulate the direction-aware attention mechanism in a spatial recurrent neural network (RNN) by introducing attention weights when aggregating spatial context features in the RNN. By learning these weights through training, we can recover direction-aware spatial context (DSC) for detecting shadows. This design is developed into the DSC module and embedded in a CNN to learn DSC features at different levels. Moreover, a weighted cross entropy loss is designed to make the training more effective. We employ two common shadow detection benchmark datasets and perform various experiments to evaluate our network. Experimental results show that our network outperforms state-of-the-art methods and achieves 97% accuracy and 38% reduction on balance error rate.",
"title": ""
},
{
"docid": "e9a08bbaaa41eb3c4c402488b18fe246",
"text": "This paper presents a two-fingered haptic interface named RML-glove. With this system, the operator can feel the shape and size of virtual 3D objects, and control a robot through force feedback. The tendon driven system makes this haptic glove a lighter and portable system that fits on a bare hand, and adds a haptic sense of force feedback to the fingers without constraining their natural movement. In order to explore the effect of cable friction and frictional losses in this system, experiments were conducted to investigate the impact of different variables including pulleys' active arc, tendon velocity, as well as cable tension and lubrication.",
"title": ""
},
{
"docid": "47d8feb4c7ee6bc6e2b2b9bd21591a3b",
"text": "BACKGROUND\nAlthough local anesthetics (LAs) are hyperbaric at room temperature, density drops within minutes after administration into the subarachnoid space. LAs become hypobaric and therefore may cranially ascend during spinal anesthesia in an uncontrolled manner. The authors hypothesized that temperature and density of LA solutions have a nonlinear relation that may be described by a polynomial equation, and that conversion of this equation may provide the temperature at which individual LAs are isobaric.\n\n\nMETHODS\nDensity of cerebrospinal fluid was measured using a vibrating tube densitometer. Temperature-dependent density data were obtained from all LAs commonly used for spinal anesthesia, at least in triplicate at 5 degrees, 20 degrees, 30 degrees, and 37 degrees C. The hypothesis was tested by fitting the obtained data into polynomial mathematical models allowing calculations of substance-specific isobaric temperatures.\n\n\nRESULTS\nCerebrospinal fluid at 37 degrees C had a density of 1.000646 +/- 0.000086 g/ml. Three groups of local anesthetics with similar temperature (T, degrees C)-dependent density (rho) characteristics were identified: articaine and mepivacaine, rho1(T) = 1.008-5.36 E-06 T2 (heavy LAs, isobaric at body temperature); L-bupivacaine, rho2(T) = 1.007-5.46 E-06 T2 (intermediate LA, less hypobaric than saline); bupivacaine, ropivacaine, prilocaine, and lidocaine, rho3(T) = 1.0063-5.0 E-06 T (light LAs, more hypobaric than saline). Isobaric temperatures (degrees C) were as follows: 5 mg/ml bupivacaine, 35.1; 5 mg/ml L-bupivacaine, 37.0; 5 mg/ml ropivacaine, 35.1; 20 mg/ml articaine, 39.4.\n\n\nCONCLUSION\nSophisticated measurements and mathematic models now allow calculation of the ideal injection temperature of LAs and, thus, even better control of LA distribution within the cerebrospinal fluid. The given formulae allow the adaptation on subpopulations with varying cerebrospinal fluid density.",
"title": ""
},
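As a small worked illustration of how the quoted polynomials can be read, the Python sketch below solves rho1(T) = 1.008 - 5.36E-06 T^2 for the temperature at which the solution density equals the reported CSF density (1.000646 g/ml). This is only an interpretation of the published formula, not the authors' fitting code.

# Illustrative sketch: temperature at which the heavy-LA density polynomial
# matches the reported CSF density at 37 degrees C.
import math

CSF_DENSITY = 1.000646           # g/ml at 37 C, as reported in the passage

def isobaric_temperature(a, b):
    """Return T (deg C) such that a - b*T^2 == CSF_DENSITY."""
    return math.sqrt((a - CSF_DENSITY) / b)

print(isobaric_temperature(1.008, 5.36e-6))   # roughly 37 C for the heavy-LA group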
{
"docid": "5f526d3ac8329fb801ece415f78eb343",
"text": "Usability evaluation is an increasingly important part of the user interface design process. However, usability evaluation can be expensive in terms of time and human resources, and automation is therefore a promising way to augment existing approaches. This article presents an extensive survey of usability evaluation methods, organized according to a new taxonomy that emphasizes the role of automation. The survey analyzes existing techniques, identifies which aspects of usability evaluation automation are likely to be of use in future research, and suggests new ways to expand existing approaches to better support usability evaluation.",
"title": ""
},
{
"docid": "4a2303d673b146dc9c2849d743aaaaa2",
"text": "With the recent advances in information networks, the problem of community detection has attracted much attention in the last decade. While network community detection has been ubiquitous, the task of collecting complete network data remains challenging in many real-world applications. Usually the collected network is incomplete with most of the edges missing. Commonly, in such networks, all nodes with attributes are available while only the edges within a few local regions of the network can be observed. In this paper, we study the problem of detecting communities in incomplete information networks with missing edges. We first learn a distance metric to reproduce the link-based distance between nodes from the observed edges in the local information regions. We then use the learned distance metric to estimate the distance between any pair of nodes in the network. A hierarchical clustering approach is proposed to detect communities within the incomplete information networks. Empirical studies on real-world information networks demonstrate that our proposed method can effectively detect community structures within incomplete information networks.",
"title": ""
},
{
"docid": "2bddeff754c6a21ffdfc644205d349be",
"text": "With a sampled light field acquired from a plenoptic camera, several low-resolution views of the scene are available from which to infer depth. Unlike traditional multiview stereo, these views may be highly aliased due to the sparse sampling lattice in space, which can lead to reconstruction errors. We first analyse the conditions under which aliasing is a problem, and discuss the trade-offs for different parameter choices in plenoptic cameras. We then propose a method to compensate for the aliasing, whilst fusing the information from the multiple views to correctly recover depth maps. We show results on synthetic and real data, demonstrating the effectiveness of our method.",
"title": ""
},
{
"docid": "e0450f09c579ddda37662cbdfac4265c",
"text": "Deep neural networks (DNNs) have recently achieved a great success in various learning task, and have also been used for classification of environmental sounds. While DNNs are showing their potential in the classification task, they cannot fully utilize the temporal information. In this paper, we propose a neural network architecture for the purpose of using sequential information. The proposed structure is composed of two separated lower networks and one upper network. We refer to these as LSTM layers, CNN layers and connected layers, respectively. The LSTM layers extract the sequential information from consecutive audio features. The CNN layers learn the spectro-temporal locality from spectrogram images. Finally, the connected layers summarize the outputs of two networks to take advantage of the complementary features of the LSTM and CNN by combining them. To compare the proposed method with other neural networks, we conducted a number of experiments on the TUT acoustic scenes 2016 dataset which consists of recordings from various acoustic scenes. By using the proposed combination structure, we achieved higher performance compared to the conventional DNN, CNN and LSTM architecture.",
"title": ""
}
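A minimal sketch of the kind of two-branch network described above, written in PyTorch purely for illustration: one branch applies 2-D convolutions to the spectrogram image, the other runs an LSTM over the frame sequence, and a connected layer combines both. The layer sizes, 15-class output, and 40-mel input are arbitrary assumptions, not the paper's configuration.

# Illustrative two-branch CNN + LSTM model (assumed PyTorch implementation).
import torch
import torch.nn as nn

class CnnLstmFusion(nn.Module):
    def __init__(self, n_mels=40, n_classes=15, lstm_hidden=64):
        super().__init__()
        # CNN branch: learns spectro-temporal locality from the spectrogram image
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # -> (batch, 32, 1, 1)
        )
        # LSTM branch: models the sequence of frame-level features
        self.lstm = nn.LSTM(input_size=n_mels, hidden_size=lstm_hidden, batch_first=True)
        # Connected layer: fuses both branches into class scores
        self.classifier = nn.Linear(32 + lstm_hidden, n_classes)

    def forward(self, spec):                              # spec: (batch, frames, n_mels)
        cnn_feat = self.cnn(spec.unsqueeze(1)).flatten(1) # (batch, 32)
        _, (h_n, _) = self.lstm(spec)                     # h_n: (1, batch, hidden)
        lstm_feat = h_n[-1]                               # (batch, hidden)
        return self.classifier(torch.cat([cnn_feat, lstm_feat], dim=1))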
] |
scidocsrr
|
405f45ccd33eb88026e32f8b9a7a1cb4
|
Improving Use Case Point (UCP) Based on Function Point (FP) Mechanism
|
[
{
"docid": "3442445fac9efd3acdd9931739aca189",
"text": "“Avoidable rework” is effort spent fixing difficulties with the software that could have been avoided or discovered earlier and less expensively. This definition implies that there is such thing as “unavoidable rework”. Reducing “avoidable rework” is a major source of software productivity improvement and most effort savings from improving software processes, architectures and risk management are results of reductions in “avoidable rework”.",
"title": ""
}
] |
[
{
"docid": "eca2d0509966e77c8a8445cdb297e7d3",
"text": "Interpretation of regression coefficients is sensitive to the scale of the inputs. One method often used to place input variables on a common scale is to divide each numeric variable by its standard deviation. Here we propose dividing each numeric variable by two times its standard deviation, so that the generic comparison is with inputs equal to the mean +/-1 standard deviation. The resulting coefficients are then directly comparable for untransformed binary predictors. We have implemented the procedure as a function in R. We illustrate the method with two simple analyses that are typical of applied modeling: a linear regression of data from the National Election Study and a multilevel logistic regression of data on the prevalence of rodents in New York City apartments. We recommend our rescaling as a default option--an improvement upon the usual approach of including variables in whatever way they are coded in the data file--so that the magnitudes of coefficients can be directly compared as a matter of routine statistical practice.",
"title": ""
},
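A minimal sketch of the rescaling rule described above, assuming the predictors are held in a NumPy array; the original article provides an R function, and the 0/1 test for binary columns here is a simplifying assumption.

# Illustrative sketch: divide each numeric (non-binary) predictor by two
# standard deviations so its coefficient is comparable to that of an
# untransformed binary predictor.
import numpy as np

def rescale_by_two_sd(X):
    """X: 2-D array of predictors; 0/1 columns are left unchanged."""
    X = np.asarray(X, dtype=float).copy()
    for j in range(X.shape[1]):
        col = X[:, j]
        if set(np.unique(col)) <= {0.0, 1.0}:   # treat 0/1 columns as binary
            continue
        sd = col.std(ddof=1)
        if sd > 0:                              # skip constant columns
            X[:, j] = col / (2.0 * sd)
    return X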
{
"docid": "d5c4e44514186fa1d82545a107e87c94",
"text": "Recent research in computer vision has increasingly focused on building systems for observing humans and understanding their look, activities, and behavior providing advanced interfaces for interacting with humans, and creating sensible models of humans for various purposes. This paper presents a new algorithm for detecting moving objects from a static background scene based on frame difference. Firstly, the first frame is captured through the static camera and after that sequence of frames is captured at regular intervals. Secondly, the absolute difference is calculated between the consecutive frames and the difference image is stored in the system. Thirdly, the difference image is converted into gray image and then translated into binary image. Finally, morphological filtering is done to remove noise.",
"title": ""
},
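A minimal sketch of the frame-difference pipeline described above (absolute difference, grayscale conversion, thresholding to a binary image), assuming frames are available as NumPy arrays; the threshold value is an arbitrary choice and the morphological cleanup step is only indicated in a comment.

# Illustrative frame-difference moving-object mask.
import numpy as np

def moving_object_mask(prev_frame, frame, threshold=25):
    """Absolute difference -> grayscale -> binary mask of moving pixels."""
    def to_gray(img):
        return img if img.ndim == 2 else img.mean(axis=2)
    diff = np.abs(to_gray(frame).astype(float) - to_gray(prev_frame).astype(float))
    binary = (diff > threshold).astype(np.uint8)
    # Morphological filtering could follow here (e.g. scipy.ndimage.binary_opening)
    return binary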
{
"docid": "4581ed383a7c4397f16a67bbfebc4a71",
"text": "Cardiovascular disease (CVD) is the leading cause of morbidity and mortality worldwide. Elevated blood lipids may be a major risk factor for CVD. Due to consistent and robust association of higher low-density lipoprotein (LDL)-cholesterol levels with CVD across experimental and epidemiologic studies, therapeutic strategies to decrease risk have focused on LDL-cholesterol reduction as the primary goal. Current medication options for lipid-lowering therapy include statins, bile acid sequestrants, a cholesterol-absorption inhibitor, fibrates, nicotinic acid, and omega-3 fatty acids, which all have various mechanisms of action and pharmacokinetic properties. The most widely prescribed lipid-lowering agents are the HMG-CoA reductase inhibitors, or statins. Since their introduction in the 1980s, statins have emerged as the one of the best-selling medication classes to date, with numerous trials demonstrating powerful efficacy in preventing cardiovascular outcomes (Kapur and Musunuru, 2008 [1]). The statins are commonly used in the treatment of hypercholesterolemia and mixed hyperlipidemia. This chapter focuses on the biochemistry of statins including their structures, pharmacokinetics, and mechanism of actions as well as the potential adverse reactions linked to their clinical uses.",
"title": ""
},
{
"docid": "8bf514424a07e667cc566614c1f25ec2",
"text": "Clustering is one of the most commonly used data mining techniques. Shared nearest neighbor clustering is an important density-based clustering technique that has been widely adopted in many application domains, such as environmental science and urban computing. As the size of data becomes extremely large nowadays, it is impossible for large-scale data to be processed on a single machine. Therefore, the scalability problem of traditional clustering algorithms running on a single machine must be addressed. In this paper, we improve the traditional density-based clustering algorithm by utilizing powerful programming platform (Spark) and distributed computing clusters. In particular, we design and implement Spark-based shared nearest neighbor clustering algorithm called SparkSNN, a scalable density-based clustering algorithm on Spark for big data analysis. We conduct our experiments using real data, i.e., Maryland crime data, to evaluate the performance of the proposed algorithm with respect to speed-up and scale-up. The experimental results well confirm the effectiveness and efficiency of the proposed SparkSNN clustering algorithm.",
"title": ""
},
{
"docid": "abb43256001147c813d12b89d2f9e67b",
"text": "We study the distributed computing setting in which there are multiple servers, each holding a set of points, who wish to compute functions on the union of their point sets. A key task in this setting is Principal Component Analysis (PCA), in which the servers would like to compute a low dimensional subspace capturing as much of the variance of the union of their point sets as possible. Given a procedure for approximate PCA, one can use it to approximately solve problems such as k-means clustering and low rank approximation. The essential properties of an approximate distributed PCA algorithm are its communication cost and computational efficiency for a given desired accuracy in downstream applications. We give new algorithms and analyses for distributed PCA which lead to improved communication and computational costs for k-means clustering and related problems. Our empirical study on real world data shows a speedup of orders of magnitude, preserving communication with only a negligible degradation in solution quality. Some of these techniques we develop, such as a general transformation from a constant success probability subspace embedding to a high success probability subspace embedding with a dimension and sparsity independent of the success probability, may be of independent interest.",
"title": ""
},
{
"docid": "d6aed6e0504b21717f11db97cbf03368",
"text": "OBJECTIVES\nNeurofeedback is a technique that aims to teach a subject to regulate a brain parameter measured by a technical interface to modulate his/her related brain and cognitive activities. However, the use of neurofeedback as a therapeutic tool for psychiatric disorders remains controversial. The aim of this review is to summarize and to comment the level of evidence of electroencephalogram (EEG) neurofeedback and real-time functional magnetic resonance imaging (fMRI) neurofeedback for therapeutic application in psychiatry.\n\n\nMETHOD\nLiterature on neurofeedback and mental disorders but also on brain computer interfaces (BCI) used in the field of neurocognitive science has been considered by the group of expert of the Neurofeedback evaluation & training (NExT) section of the French Association of biological psychiatry and neuropsychopharmacology (AFPBN).\n\n\nRESULTS\nResults show a potential efficacy of EEG-neurofeedback in the treatment of attentional-deficit/hyperactivity disorder (ADHD) in children, even if this is still debated. For other mental disorders, there is too limited research to warrant the use of EEG-neurofeedback in clinical practice. Regarding fMRI neurofeedback, the level of evidence remains too weak, for now, to justify clinical use. The literature review highlights various unclear points, such as indications (psychiatric disorders, pathophysiologic rationale), protocols (brain signals targeted, learning characteristics) and techniques (EEG, fMRI, signal processing).\n\n\nCONCLUSION\nThe field of neurofeedback involves psychiatrists, neurophysiologists and researchers in the field of brain computer interfaces. Future studies should determine the criteria for optimizing neurofeedback sessions. A better understanding of the learning processes underpinning neurofeedback could be a key element to develop the use of this technique in clinical practice.",
"title": ""
},
{
"docid": "4f6a6f633e512a33fc0b396765adcdf0",
"text": "Interactive systems often require calibration to ensure that input and output are optimally configured. Without calibration, user performance can degrade (e.g., if an input device is not adjusted for the user's abilities), errors can increase (e.g., if color spaces are not matched), and some interactions may not be possible (e.g., use of an eye tracker). The value of calibration is often lost, however, because many calibration processes are tedious and unenjoyable, and many users avoid them altogether. To address this problem, we propose calibration games that gather calibration data in an engaging and entertaining manner. To facilitate the creation of calibration games, we present design guidelines that map common types of calibration to core tasks, and then to well-known game mechanics. To evaluate the approach, we developed three calibration games and compared them to standard procedures. Users found the game versions significantly more enjoyable than regular calibration procedures, without compromising the quality of the data. Calibration games are a novel way to motivate users to carry out calibrations, thereby improving the performance and accuracy of many human-computer systems.",
"title": ""
},
{
"docid": "07db8f037ff720c8b8b242879c14531f",
"text": "PURPOSE\nMatriptase-2 (also known as TMPRSS6) is a critical regulator of the iron-regulatory hormone hepcidin in the liver; matriptase-2 cleaves membrane-bound hemojuvelin and consequently alters bone morphogenetic protein (BMP) signaling. Hemojuvelin and hepcidin are expressed in the retina and play a critical role in retinal iron homeostasis. However, no information on the expression and function of matriptase-2 in the retina is available. The purpose of the present study was to examine the retinal expression of matriptase-2 and its role in retinal iron homeostasis.\n\n\nMETHODS\nRT-PCR, quantitative PCR (qPCR), and immunofluorescence were used to analyze the expression of matriptase-2 and other iron-regulatory proteins in the mouse retina. Polarized localization of matriptase-2 in the RPE was evaluated using markers for the apical and basolateral membranes. Morphometric analysis of retinas from wild-type and matriptase-2 knockout (Tmprss6(msk/msk) ) mice was also performed. Retinal iron status in Tmprss6(msk/msk) mice was evaluated by comparing the expression levels of ferritin and transferrin receptor 1 between wild-type and knockout mice. BMP signaling was monitored by the phosphorylation status of Smads1/5/8 and expression levels of Id1 while interleukin-6 signaling was monitored by the phosphorylation status of STAT3.\n\n\nRESULTS\nMatriptase-2 is expressed in the mouse retina with expression detectable in all retinal cell types. Expression of matriptase-2 is restricted to the apical membrane in the RPE where hemojuvelin, the substrate for matriptase-2, is also present. There is no marked difference in retinal morphology between wild-type mice and Tmprss6(msk/msk) mice, except minor differences in specific retinal layers. The knockout mouse retina is iron-deficient, demonstrable by downregulation of the iron-storage protein ferritin and upregulation of transferrin receptor 1 involved in iron uptake. Hepcidin is upregulated in Tmprss6(msk/msk) mouse retinas, particularly in the neural retina. BMP signaling is downregulated while interleukin-6 signaling is upregulated in Tmprss6(msk/msk) mouse retinas, suggesting that the upregulaton of hepcidin in knockout mouse retinas occurs through interleukin-6 signaling and not through BMP signaling.\n\n\nCONCLUSIONS\nThe iron-regulatory serine protease matriptase-2 is expressed in the retina, and absence of this enzyme leads to iron deficiency and increased expression of hemojuvelin and hepcidin in the retina. The upregulation of hepcidin expression in Tmprss6(msk/msk) mouse retinas does not occur via BMP signaling but likely via the proinflammatory cytokine interleukin-6. We conclude that matriptase-2 is a critical participant in retinal iron homeostasis.",
"title": ""
},
{
"docid": "95b112886d7278a4596c49d5a5360fb5",
"text": "The InfoVis 2004 contest led to the development of several bibliography visualization systems. Even though each of these systems offers some unique views of the bibliography data, there is no single best system offering all the desired views. We have thus studied how to consolidate the desirable functionalities of these systems into a cohesive design. We have also designed a few novel visualization methods. This paper presents our findings and creation: BiblioViz, a bibliography visualization system that gives the maximum number of views of the data using a minimum number of visualization constructs in a unified fashion.",
"title": ""
},
{
"docid": "3e6aac2e0ff6099aabeee97dc1292531",
"text": "A lthough ordinary least-squares (OLS) regression is one of the most familiar statistical tools, far less has been written − especially in the pedagogical literature − on regression through the origin (RTO). Indeed, the subject is surprisingly controversial. The present note highlights situations in which RTO is appropriate, discusses the implementation and evaluation of such models and compares RTO functions among three popular statistical packages. Some examples gleaned from past Teaching Statistics articles are used as illustrations. For expository convenience, OLS and RTO refer here to linear regressions obtained by least-squares methods with and without a constant term, respectively.",
"title": ""
},
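For concreteness, a short sketch of the two fits contrasted above: the regression-through-the-origin slope b = sum(x*y) / sum(x^2) versus ordinary least squares with a constant term. This is textbook material rather than anything specific to the article.

# Illustrative comparison of RTO and OLS slope estimates.
import numpy as np

def rto_slope(x, y):
    """Regression through the origin: b = sum(x*y) / sum(x^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.sum(x * y) / np.sum(x * x)

def ols_fit(x, y):
    """Ordinary least squares with a constant term: returns (intercept, slope)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
    return y.mean() - slope * x.mean(), slope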
{
"docid": "093cea661036e2bfa7f8778545a55b6b",
"text": "State-of-the-art learning based boundary detection methods require extensive training data. Since labelling object boundaries is one of the most expensive types of annotations, there is a need to relax the requirement to carefully annotate images to make both the training more affordable and to extend the amount of training data. In this paper we propose a technique to generate weakly supervised annotations and show that bounding box annotations alone suffice to reach high-quality object boundaries without using any object-specific boundary annotations. With the proposed weak supervision techniques we achieve the top performance on the object boundary detection task, outperforming by a large margin the current fully supervised state-of-theart methods.",
"title": ""
},
{
"docid": "57bebb90000790a1d76a400f69d5736d",
"text": "In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object's 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method is called CLARET (Correction via Limited-Angle Residues in External Beam Therapy) and consists of two stages: registration preceded by shape space and regression learning. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projec-tion(s) and the digitally reconstructed radiograph(s) (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region's motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used in estimating the model parameters from the 2D projection intensity residues in the registration. The method's application to Image-guided Radiation Therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time imaging 2D projection or a small set thereof.",
"title": ""
},
{
"docid": "3ce57a831997e64086821fb30a08cce2",
"text": "This study comprised 4,641 Brånemark dental implants, which were retrospectively followed from stage 1 surgery to completion of the prosthetic restorations. The implants were placed during a 3-year period (1986 to 1988) in 943 jaws, representing 889 patients with complete and partial edentulism. The jaw and sex distribution revealed a predominance of mandibles (564/943) and females (534/943). The mean age of the patients was 57.5 years (range 13 to 88 years) at implant placement. Only 69 (1.5%) fixtures failed to integrate, and most losses were seen in completely edentulous maxillae (46/69), in which the jaw bone exhibited soft quality and severe resorption. A preponderance of failures could also be seen among the shortest fixtures (7 mm). A majority of the mobile implants were recorded at the abutment connection (stage 2) operation (48/69).",
"title": ""
},
{
"docid": "91f89990f9d41d3a92cbff38efc56b57",
"text": "ID3 algorithm was a classic classification of data mining. It always selected the attribute with many values. The attribute with many values wasn't the correct one, and it always created wrong classification. In the application of intrusion detection system, it would created fault alarm and omission alarm. To this fault, an improved decision tree algorithm was proposed. Though improvement of information gain formula, the correct attribute would be got. The decision tree was created after the data collected classified correctly. The tree would be not high and has a few of branches. The rule set would be got based on the decision tree. Experimental results showed the effectiveness of the algorithm, false alarm rate and omission rate decreased, increasing the detection rate and reducing the space consumption.",
"title": ""
},
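The passage above does not give the modified information gain formula, so the sketch below only illustrates the bias being corrected: plain information gain favours many-valued attributes, while a normalized variant (gain ratio, shown here as one standard correction of this kind, not the paper's exact formula) penalizes them.

# Illustrative entropy, information gain, and gain ratio for one attribute.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(values, labels):
    n = len(labels)
    groups = {}
    for v, y in zip(values, labels):
        groups.setdefault(v, []).append(y)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

def gain_ratio(values, labels):
    split_info = entropy(values)          # penalizes many-valued attributes
    ig = information_gain(values, labels)
    return ig / split_info if split_info > 0 else 0.0

# Hypothetical attribute/label data for a tiny intrusion-detection example:
values = ['tcp', 'udp', 'tcp', 'icmp']
labels = ['normal', 'attack', 'normal', 'attack']
print(information_gain(values, labels), gain_ratio(values, labels))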
{
"docid": "e4fb31ebacb093932517719884264b46",
"text": "Monitoring and control the environmental parameters in agricultural constructions are essential to improve energy efficiency and productivity. Real-time monitoring allows the detection and early correction of unfavourable situations, optimizing consumption and protecting crops against diseases. This work describes an automatic system for monitoring farm environments with the aim of increasing efficiency and quality of the agricultural environment. Based on the Internet of Things, the system uses a low-cost wireless sensor network, called Sun Spot, programmed in Java, with the Java VM running on the device itself and the Arduino platform for Internet connection. The data collected is shared through the social network of Facebook. The temperature and brightness parameters are monitored in real time. Other sensors can be added to monitor the issue for specific purposes. The results show that conditions within greenhouses may in some cases be very different from those expected. Therefore, the proposed system can provide an effective tool to improve the quality of agricultural production and energy efficiency.",
"title": ""
},
{
"docid": "8e80d8be3b8ccbc4b8b6b6a0dde4136f",
"text": "When an event occurs, it attracts attention of information sources to publish related documents along its lifespan. The task of event detection is to automatically identify events and their related documents from a document stream, which is a set of chronologically ordered documents collected from various information sources. Generally, each event has a distinct activeness development so that its status changes continuously during its lifespan. When an event is active, there are a lot of related documents from various information sources. In contrast when it is inactive, there are very few documents, but they are focused. Previous works on event detection did not consider the characteristics of the event's activeness, and used rigid thresholds for event detection. We propose a concept called life profile, modeled by a hidden Markov model, to model the activeness trends of events. In addition, a general event detection framework, LIPED, which utilizes the learned life profiles and the burst-and-diverse characteristic to adjust the event detection thresholds adaptively, can be incorporated into existing event detection methods. Based on the official TDT corpus and contest rules, the evaluation results show that existing detection methods that incorporate LIPED achieve better performance in the cost and F1 metrics, than without.",
"title": ""
},
{
"docid": "c84d41e54b12cca847135dfc2e9e13f8",
"text": "PURPOSE\nBaseline restraint prevalence for surgical step-down unit was 5.08%, and for surgical intensive care unit, it was 25.93%, greater than the National Database of Nursing Quality Indicators (NDNQI) mean. Project goal was sustained restraint reduction below the NDNQI mean and maintaining patient safety.\n\n\nBACKGROUND/RATIONALE\nSoft wrist restraints are utilized for falls reduction and preventing device removal but are not universally effective and may put patients at risk of injury. Decreasing use of restrictive devices enhances patient safety and decreases risk of injury.\n\n\nDESCRIPTION\nPhase 1 consisted of advanced practice nurse-facilitated restraint rounds on each restrained patient including multidisciplinary assessment and critical thinking with bedside clinicians including reevaluation for treatable causes of agitation and restraint indications. Phase 2 evaluated less restrictive mitts, padded belts, and elbow splint devices. Following a 4-month trial, phase 3 expanded the restraint initiative including critical care requiring education and collaboration among advanced practice nurses, physician team members, and nurse champions.\n\n\nEVALUATION AND OUTCOMES\nPhase 1 decreased surgical step-down unit restraint prevalence from 5.08% to 3.57%. Phase 2 decreased restraint prevalence from 3.57% to 1.67%, less than the NDNQI mean. Phase 3 expansion in surgical intensive care units resulted in wrist restraint prevalence from 18.19% to 7.12% within the first year, maintained less than the NDNQI benchmarks while preserving patient safety.\n\n\nINTERPRETATION/CONCLUSION\nThe initiative produced sustained reduction in acute/critical care well below the NDNQI mean without corresponding increase in patient medical device removal.\n\n\nIMPLICATIONS\nBy managing causes of agitation, need for restraints is decreased, protecting patients from injury and increasing patient satisfaction. Follow-up research may explore patient experiences with and without restrictive device use.",
"title": ""
},
{
"docid": "678ef706d4cb1c35f6b3d82bf25a4aa7",
"text": "This article is an extremely rapid survey of the modern theory of partial differential equations (PDEs). Sources of PDEs are legion: mathematical physics, geometry, probability theory, continuum mechanics, optimization theory, etc. Indeed, most of the fundamental laws of the physical sciences are partial differential equations and most papers published in applied math concern PDEs. The following discussion is consequently very broad, but also very shallow, and will certainly be inadequate for any given PDE the reader may care about. The goal is rather to highlight some of the many key insights and unifying principles across the entire subject.",
"title": ""
},
{
"docid": "744edec2b92f84dda850de14ddc09972",
"text": "Computing systems are becoming increasingly parallel and heterogeneous, and therefore new applications must be capable of exploiting parallelism in order to continue achieving high performance. However, targeting these emerging devices often requires using multiple disparate programming models and making decisions that can limit forward scalability. In previous work we proposed the use of domain-specific languages (DSLs) to provide high-level abstractions that enable transformations to high performance parallel code without degrading programmer productivity. In this paper we present a new end-to-end system for building, compiling, and executing DSL applications on parallel heterogeneous hardware, the Delite Compiler Framework and Runtime. The framework lifts embedded DSL applications to an intermediate representation (IR), performs generic, parallel, and domain-specific optimizations, and generates an execution graph that targets multiple heterogeneous hardware devices. Finally we present results comparing the performance of several machine learning applications written in OptiML, a DSL for machine learning that utilizes Delite, to C++ and MATLAB implementations. We find that the implicitly parallel OptiML applications achieve single-threaded performance comparable to C++ and outperform explicitly parallel MATLAB in nearly all cases.",
"title": ""
},
{
"docid": "0a55710ae4cb2a4a80a0c8bf58aaeb99",
"text": "Therapeutic footwear with specially-made insoles is often used in people with diabetes and rheumatoid arthritis to relieve ulcer risks and pain due to high pressures from areas beneath bony prominences of the foot, in particular to the metatarsal heads (MTHs). In a three-dimensional finite element study of the foot and footwear with sensitivity analysis, effects of geometrical variations of a therapeutic insole, in terms of insole thicknesses and metatarsal pad (MP) placements, on local peak plantar pressure under MTHs and stress/strain states within various forefoot tissues, were determined. A validated musculoskeletal finite element model of the human foot was employed. Analyses were performed in a simulated muscle-demanding instant in gait. For many design combinations, increasing insole thicknesses consistently reduce peak pressures and internal tissue strain under MTHs, but the effects reach a plateau when insole becomes very thick (e.g., a value of 12.7mm or greater). Altering MP placements, however, showed a proximally- and a distally-placed MP could result in reverse effects on MTH pressure-relief. The unsuccessful outcome due to a distally-placed MP may attribute to the way it interacts with plantar tissue (e.g., plantar fascia) adjacent to the MTH. A uniform pattern of tissue compression under metatarsal shaft is necessary for a most favorable pressure-relief under MTHs. The designated functions of an insole design can best be achieved when the insole is very thick, and when the MP can achieve a uniform tissue compression pattern adjacent to the MTH.",
"title": ""
}
] |
scidocsrr
|
6c7eed47795ef22b59c62c4d136e645d
|
Gaussian Processes for Regression
|
[
{
"docid": "7b806cbde7cd0c2682402441a578ec9c",
"text": "We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Basis Functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to diierent classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of Projection Pursuit Regression and several types of neural networks. We propose to use the term Generalized Regularization Networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the diierent classes of basis functions correspond to diierent classes of prior probabilities on the approximating function spaces, and therefore to diierent types of smoothness assumptions. In summary, diierent multilayer networks with one hidden layer, which we collectively call Generalized Regularization Networks, correspond to diierent classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are a) Radial Basis Functions that can be generalized to Hyper Basis Functions, b) some tensor product splines, and c) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions and several perceptron-like neural networks with one-hidden layer.",
"title": ""
}
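A minimal sketch of the Radial Basis Function approximation scheme discussed above: Gaussian basis functions centred on the training points, with weights obtained from a regularized least-squares solve. The Gaussian choice, bandwidth, and ridge parameter are illustrative assumptions, not the paper's derivation.

# Illustrative RBF regression network with a regularized least-squares solve.
import numpy as np

def rbf_fit(X, y, sigma=1.0, lam=1e-3):
    """X: (n, d) training inputs; y: (n,) targets. Returns basis weights."""
    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))                  # Gaussian design matrix
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def rbf_predict(X_train, w, X_new, sigma=1.0):
    d2 = ((X_new[:, None, :] - X_train[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2)) @ w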
] |
[
{
"docid": "b4833563159839519aaaf38b011e7e10",
"text": "In the past few years, some nonlinear dimensionality reduction (NLDR) or nonlinear manifold learning methods have aroused a great deal of interest in the machine learning community. These methods are promising in that they can automatically discover the low-dimensional nonlinear manifold in a high-dimensional data space and then embed the data points into a low-dimensional embedding space, using tractable linear algebraic techniques that are easy to implement and are not prone to local minima. Despite their appealing properties, these NLDR methods are not robust against outliers in the data, yet so far very little has been done to address the robustness problem. In this paper, we address this problem in the context of an NLDR method called locally linear embedding (LLE). Based on robust estimation techniques, we propose an approach to make LLE more robust. We refer to this approach as robust locally linear embedding (RLLE). We also present several specific methods for realizing this general RLLE approach. Experimental results on both synthetic and real-world data show that RLLE is very robust against outliers. 2005 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a0c1f5a7e283e1deaff38edff2d8a3b2",
"text": "BACKGROUND\nEarly detection of abused children could help decrease mortality and morbidity related to this major public health problem. Several authors have proposed tools to screen for child maltreatment. The aim of this systematic review was to examine the evidence on accuracy of tools proposed to identify abused children before their death and assess if any were adapted to screening.\n\n\nMETHODS\nWe searched in PUBMED, PsycINFO, SCOPUS, FRANCIS and PASCAL for studies estimating diagnostic accuracy of tools identifying neglect, or physical, psychological or sexual abuse of children, published in English or French from 1961 to April 2012. We extracted selected information about study design, patient populations, assessment methods, and the accuracy parameters. Study quality was assessed using QUADAS criteria.\n\n\nRESULTS\nA total of 2 280 articles were identified. Thirteen studies were selected, of which seven dealt with physical abuse, four with sexual abuse, one with emotional abuse, and one with any abuse and physical neglect. Study quality was low, even when not considering the lack of gold standard for detection of abused children. In 11 studies, instruments identified abused children only when they had clinical symptoms. Sensitivity of tests varied between 0.26 (95% confidence interval [0.17-0.36]) and 0.97 [0.84-1], and specificity between 0.51 [0.39-0.63] and 1 [0.95-1]. The sensitivity was greater than 90% only for three tests: the absence of scalp swelling to identify children victims of inflicted head injury; a decision tool to identify physically-abused children among those hospitalized in a Pediatric Intensive Care Unit; and a parental interview integrating twelve child symptoms to identify sexually-abused children. When the sensitivity was high, the specificity was always smaller than 90%.\n\n\nCONCLUSIONS\nIn 2012, there is low-quality evidence on the accuracy of instruments for identifying abused children. Identified tools were not adapted to screening because of low sensitivity and late identification of abused children when they have already serious consequences of maltreatment. Development of valid screening instruments is a pre-requisite before considering screening programs.",
"title": ""
},
{
"docid": "df0da015de06037ec402c3c0732deff6",
"text": "Two studies examined relationships between infants' early speech processing performance and later language and cognitive outcomes. Study 1 found that performance on speech segmentation tasks before 12 months of age related to expressive vocabulary at 24 months. However, performance on other tasks was not related to 2-year vocabulary. Study 2 assessed linguistic and cognitive skills at 4-6 years of age for children who had participated in segmentation studies as infants. Children who had been able to segment words from fluent speech scored higher on language measures, but not general IQ, as preschoolers. Results suggest that speech segmentation ability is an important prerequisite for successful language development, and they offer potential for developing measures to detect language impairment at an earlier age.",
"title": ""
},
{
"docid": "0cd1f01d1b2a5afd8c6eba13ef5082fa",
"text": "Automatic differentiation—the mechanical transformation of numeric computer programs to calculate derivatives efficiently and accurately—dates to the origin of the computer age. Reverse mode automatic differentiation both antedates and generalizes the method of backwards propagation of errors used in machine learning. Despite this, practitioners in a variety of fields, including machine learning, have been little influenced by automatic differentiation, and make scant use of available tools. Here we review the technique of automatic differentiation, describe its two main modes, and explain how it can benefit machine learning practitioners. To reach the widest possible audience our treatment assumes only elementary differential calculus, and does not assume any knowledge of linear algebra.",
"title": ""
},
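A small sketch of forward-mode automatic differentiation via dual numbers, one of the two modes the passage above reviews; the operator coverage is deliberately minimal (add and multiply plus sin) and is not tied to any particular tool.

# Illustrative forward-mode AD with dual numbers.
import math

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)
    __rmul__ = __mul__

def dsin(x):
    return Dual(math.sin(x.value), math.cos(x.value) * x.deriv)

# d/dx [x * sin(x) + 3x] at x = 2.0
x = Dual(2.0, 1.0)
f = x * dsin(x) + 3 * x
print(f.value, f.deriv)    # derivative = sin(2) + 2*cos(2) + 3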
{
"docid": "4f68f2a2ef6a21116a5b0814c4f504e6",
"text": "Biometric fingerprint scanners are positioned to provide improved security in a great span of applications from government to private. However, one highly publicized vulnerability is that it is possible to spoof a variety of fingerprint scanners using artificial fingers made from Play-Doh, gelatin and silicone molds. Therefore, it is necessary to offer protection for fingerprint systems against these threats. In this paper, an anti-spoofing detection method is proposed which is based on ridge signal and valley noise analysis, to quantify perspiration patterns along ridges in live subjects and noise patterns along valleys in spoofs. The signals representing gray level patterns along ridges and valleys are explored in spatial, frequency and wavelet domains. Based on these features, separation (live/spoof) is performed using standard pattern classification tools including classification trees and neural networks. We test this method on a larger dataset than previously considered which contains 644 live fingerprints (81 subjects with 2 fingers for an average of 4 sessions) and 570 spoof fingerprints (made from Play-Doh, gelatin and silicone molds in multiple sessions) collected from the Identix fingerprint scanner. Results show that the performance can reach 99.1% correct classification overall. The proposed anti-spoofing method is purely software based and integration of this method can provide protection for fingerprint scanners against gelatin, Play-Doh and silicone spoof fingers. ∗Phone: (1)315-2686536 ∗∗Fax: (1)315-2687600 Email addresses: tanb@clarkson.edu, sschucke@clarkson.edu (Bozhao Tan, Stephanie Schuckers) Preprint submitted to Pattern Recognition November 12, 2009",
"title": ""
},
{
"docid": "e095b0d96a6c0dcc87efbbc3e730b581",
"text": "In this paper, we present ObSteiner, an exact algorithm for the construction of obstacle-avoiding rectilinear Steiner minimum trees (OARSMTs) among complex rectilinear obstacles. This is the first paper to propose a geometric approach to optimally solve the OARSMT problem among complex obstacles. The optimal solution is constructed by the concatenation of full Steiner trees among complex obstacles, which are proven to be of simple structures in this paper. ObSteiner is able to handle complex obstacles, including both convex and concave ones. Benchmarks with hundreds of terminals among a large number of obstacles are solved optimally in a reasonable amount of time.",
"title": ""
},
{
"docid": "e660f61b47e68b87d8f5769995f09e28",
"text": "In this paper, we combine two ideas: persistence-based clustering and the Heat Kernel Signature (HKS) function to obtain a multi-scale isometry invariant mesh segmentation algorithm. The key advantages of this approach is that it is tunable through a few intuitive parameters and is stable under near-isometric deformations. Indeed the method comes with feedback on the stability of the number of segments in the form of a persistence diagram. There are also spatial guarantees on part of the segments. Finally, we present an extension to the method which first detects regions which are inherently unstable and segments them separately. Both approaches are reasonably scalable and come with strong guarantees. We show numerous examples and a comparison with the segmentation benchmark and the curvature function.",
"title": ""
},
{
"docid": "e28b0ab1bedd60ba83b8a575431ad549",
"text": "The Decision Model and Notation (DMN) is a standard notation to specify decision logic in business applications. A central construct in DMN is a decision table. The rising use of DMN decision tables to capture and to automate everyday business decisions fuels the need to support analysis tasks on decision tables. This paper presents an opensource DMN editor to tackle three analysis tasks: detection of overlapping rules, detection of missing rules and simplification of decision tables via rule merging. The tool has been tested on large decision tables derived from a credit lending data-set.",
"title": ""
},
{
"docid": "0b19bd9604fae55455799c39595c8016",
"text": "Our study concerns an important current problem, that of diffusion of information in social networks. This problem has received significant attention from the Internet research community in the recent times, driven by many potential applications such as viral marketing and sales promotions. In this paper, we focus on the target set selection problem, which involves discovering a small subset of influential players in a given social network, to perform a certain task of information diffusion. The target set selection problem manifests in two forms: 1) top-k nodes problem and 2) λ -coverage problem. In the top-k nodes problem, we are required to find a set of k key nodes that would maximize the number of nodes being influenced in the network. The λ-coverage problem is concerned with finding a set of key nodes having minimal size that can influence a given percentage λ of the nodes in the entire network. We propose a new way of solving these problems using the concept of Shapley value which is a well known solution concept in cooperative game theory. Our approach leads to algorithms which we call the ShaPley value-based Influential Nodes (SPINs) algorithms for solving the top-k nodes problem and the λ -coverage problem. We compare the performance of the proposed SPIN algorithms with well known algorithms in the literature. Through extensive experimentation on four synthetically generated random graphs and six real-world data sets (Celegans, Jazz, NIPS coauthorship data set, Netscience data set, High-Energy Physics data set, and Political Books data set), we show that the proposed SPIN approach is more powerful and computationally efficient.",
"title": ""
},
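A rough sketch of the Shapley-value idea behind the SPIN algorithms described above: estimate each node's average marginal contribution to an influence function by sampling random permutations. The one-hop coverage function and the toy graph below are placeholders, not the paper's diffusion model.

# Illustrative Monte Carlo estimate of Shapley values for an influence function.
import random

def shapley_values(nodes, influence, n_permutations=200):
    phi = {v: 0.0 for v in nodes}
    for _ in range(n_permutations):
        order = list(nodes)
        random.shuffle(order)
        coalition, prev = set(), influence(set())
        for v in order:
            coalition.add(v)
            cur = influence(coalition)
            phi[v] += cur - prev           # marginal contribution of v
            prev = cur
    return {v: s / n_permutations for v, s in phi.items()}

# Toy influence function: number of distinct nodes reached within one hop.
graph = {1: {2, 3}, 2: {1}, 3: {1, 4}, 4: {3}}
cover = lambda S: len(S | set().union(*(graph[v] for v in S))) if S else 0
print(shapley_values(list(graph), cover))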
{
"docid": "5313d913c67668269bc95ccde8a48670",
"text": "A touchscreen can be overlaid on a tablet computer to support asymmetric two-handed interaction in which the preferred hand uses a stylus and the non-preferred hand operates the touchscreen. The result is a portable device that allows both hands to interact directly with the display, easily constructed from commonly available hardware. The method for tracking the independent motions of both hands is described. A wide variety of existing two-handed interaction techniques can be used on this platform, as well as some new ones that exploit the reconfigurability of touchscreen interfaces. Informal tests show that, when the non-preferred hand performs simple actions, users find direct manipulation on the display with both hands to be comfortable, natural, and efficient.",
"title": ""
},
{
"docid": "c9135f79c4516c73e7ba924e00d51218",
"text": "The experimental conditions by which electromagnetic signals (EMS) of low frequency can be emitted by diluted aqueous solutions of some bacterial and viral DNAs are described. That the recorded EMS and nanostructures induced in water carry the DNA information (sequence) is shown by retrieval of that same DNA by classical PCR amplification using the TAQ polymerase, including both primers and nucleotides. Moreover, such a transduction process has also been observed in living human cells exposed to EMS irradiation. These experiments suggest that coherent long-range molecular interaction must be present in water to observe the above-mentioned features. The quantum field theory analysis of the phenomenon is presented in this article.",
"title": ""
},
{
"docid": "4a227bddcaed44777eb7a29dcf940c6c",
"text": "Deep neural networks have achieved great success on a variety of machine learning tasks. There are many fundamental and open questions yet to be answered, however. We introduce the Extended Data Jacobian Matrix (EDJM) as an architecture-independent tool to analyze neural networks at the manifold of interest. The spectrum of the EDJM is found to be highly correlated with the complexity of the learned functions. After studying the effect of dropout, ensembles, and model distillation using EDJM, we propose a novel spectral regularization method, which improves network performance.",
"title": ""
},
{
"docid": "34d668b50d059c941d2e8df9f1aa038e",
"text": "Deep spiking neural networks are becoming increasingly powerful tools for cognitive computing platforms. However, most of the existing studies on such computing models are developed with limited insights on the underlying hardware implementation, resulting in area and power expensive designs. Although several neuromimetic devices emulating neural operations have been proposed recently, their functionality has been limited to very simple neural models that may prove to be inefficient at complex recognition tasks. In this paper, we venture into the relatively unexplored area of utilizing the inherent device stochasticity of such neuromimetic devices to model complex neural functionalities in a probabilistic framework in the time domain. We consider the implementation of a deep spiking neural network capable of performing high-accuracy and lowlatency classification tasks, where the neural computing unit is enabled by the stochastic switching behavior of a magnetic tunnel junction. The simulation studies indicate an energy improvement of 20× over a baseline CMOS design in 45-nm technology.",
"title": ""
},
{
"docid": "807cd6adc45a2adb7943c5a0fb5baa94",
"text": "Reliable performance evaluations require the use of representative workloads. This is no easy task because modern computer systems and their workloads are complex, with many interrelated attributes and complicated structures. Experts often use sophisticated mathematics to analyze and describe workload models, making these models difficult for practitioners to grasp. This book aims to close this gap by emphasizing the intuition and the reasoning behind the definitions and derivations related to the workload models. It provides numerous examples from real production systems, with hundreds of graphs. Using this book, readers will be able to analyze collected workload data and clean it if necessary, derive statistical models that include skewed marginal distributions and correlations, and consider the need for generative models and feedback from the system. The descriptive statistics techniques covered are also useful for other domains.",
"title": ""
},
{
"docid": "cb8658ef7bf2c170741a663bec43c466",
"text": "Carcinogenesis is a multi-step process which result in uncontrolled cell growth. Mutations in DNA that lead to cancer disrupt these orderly processes by disrupting the programming regulating the processes.. This results in uncontrolled cell division leading to carcinogenesis. Oncogenes are genes whose protein products stimulate or enhance the division and viability of cells. Oncogenes arise by activating mutation of their precursors, the proto-oncogenes. Proto-oncogenes are often directly involved in growth regulation of normal cells. Advances in molecular studies had led to the identification of many oncogenes in cancer formation. This will help in early detection of many cancers. The action of drugs on oncogenes will help in specific treatment of different types of cancers. An overview of the functions, properties and clinical importance of oncogenes is discussed in this review.",
"title": ""
},
{
"docid": "9ff912ad71c84cfba286f1be7bd8d4b3",
"text": "This article compares traditional industrial-organizational psychology (I-O) research published in Journal of Applied Psychology (JAP) with organizational behavior management (OBM) research published in Journal of Organizational Behavior Management (JOBM). The purpose of this comparison was to identify similarities and differences with respect to research topics and methodologies, and to offer suggestions for what OBM researchers and practitioners can learn from I-O. Articles published in JAP from 1987-1997 were reviewed and compared to articles published during the same decade in JOBM (Nolan, Jarema, & Austin, 1999). This comparison includes Barbara R. Bucklin, Alicia M. Alvero, Alyce M. Dickinson, John Austin, and Austin K. Jackson are affiliated with Western Michigan University. Address correspondence to Alyce M. Dickinson, Department of Psychology, Western Michigan University, Kalamazoo, MI 49008-5052 (E-mail: alyce.dickinson@ wmich.edu.) Journal of Organizational Behavior Management, Vol. 20(2) 2000 E 2000 by The Haworth Press, Inc. All rights reserved. 27 D ow nl oa de d by [ W es te rn M ic hi ga n U ni ve rs ity ] at 1 1: 14 0 3 Se pt em be r 20 12 JOURNAL OF ORGANIZATIONAL BEHAVIOR MANAGEMENT 28 (a) author characteristics, (b) authors published in both journals, (c) topics addressed, (d) type of article, and (e) research characteristics and methodologies. Among the conclusions are: (a) the primary relative strength of OBM is its practical significance, demonstrated by the proportion of research addressing applied issues; (b) the greatest strength of traditional I-O appears to be the variety and complexity of organizational research topics; and (c) each field could benefit from contact with research published in the other. [Article copies available for a fee from The Haworth Document Delivery Service: 1-800-342-9678. E-mail address: <getinfo@haworthpressinc.com> Website: <http://www.HaworthPress.com>]",
"title": ""
},
{
"docid": "ba8ae795796d9d5c1d33d4e5ce692a13",
"text": "This work presents a type of capacitive sensor for intraocular pressure (IOP) measurement on soft contact lens with Radio Frequency Identification (RFID) module. The flexible capacitive IOP sensor and Rx antenna was designed and fabricated using MEMS fabrication technologies that can be embedded on a soft contact lens. The IOP sensing unit is a sandwich structure composed of parylene C as the substrate and the insulating layer, gold as the top and bottom electrodes of the capacitor, and Hydroxyethylmethacrylate (HEMA) as dielectric material between top plate and bottom plate. The main sensing principle is using wireless IOP contact lenses sensor (CLS) system placed on corneal to detect the corneal deformation caused due to the variations of IOP. The variations of intraocular pressure will be transformed into capacitance change and this change will be transmitted to RFID system and recorded as continuous IOP monitoring. The measurement on in-vitro porcine eyes show the pressure reproducibility and a sensitivity of 0.02 pF/4.5 mmHg.",
"title": ""
},
{
"docid": "0ea07af19fc199f6a9909bd7df0576a1",
"text": "Detection of overlapping communities in complex networks has motivated recent research in the relevant fields. Aiming this problem, we propose a Markov dynamics based algorithm, called UEOC, which means, “unfold and extract overlapping communities”. In UEOC, when identifying each natural community that overlaps, a Markov random walk method combined with a constraint strategy, which is based on the corresponding annealed network (degree conserving random network), is performed to unfold the community. Then, a cutoff criterion with the aid of a local community function, called conductance, which can be thought of as the ratio between the number of edges inside the community and those leaving it, is presented to extract this emerged community from the entire network. The UEOC algorithm depends on only one parameter whose value can be easily set, and it requires no prior knowledge on the hidden community structures. The proposed UEOC has been evaluated both on synthetic benchmarks and on some real-world networks, and was compared with a set of competing algorithms. Experimental result has shown that UEOC is highly effective and efficient for discovering overlapping communities.",
"title": ""
},
{
"docid": "14024a813302548d0bd695077185de1c",
"text": "In this paper, we propose an innovative touch-less palm print recognition system. This project is motivated by the public’s demand for non-invasive and hygienic biometric technology. For various reasons, users are concerned about touching the biometric scanners. Therefore, we propose to use a low-resolution web camera to capture the user’s hand at a distance for recognition. The users do not need to touch any device for their palm print to be acquired. A novel hand tracking and palm print region of interest (ROI) extraction technique are used to track and capture the user’s palm in real-time video stream. The discriminative palm print features are extracted based on a new method that applies local binary pattern (LBP) texture descriptor on the palm print directional gradient responses. Experiments show promising result using the proposed method. Performance can be further improved when a modified probabilistic neural network (PNN) is used for feature matching. Verification can be performed in less than one second in the proposed system. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "103f95f36a5d740bbfa908f25f30514b",
"text": "We present the design, modeling, and implementation of a novel pneumatic actuator, the Pneumatic Reel Actuator (PRA). The PRA is highly extensible, lightweight, capable of operating in compression and tension, compliant, and inexpensive. An initial prototype of the PRA can reach extension ratios greater than 16:1, has a force-to-weight ratio over 28:1, reach speeds of 0.87 meters per second, and can be constructed with parts totaling less than $4 USD. We have developed a model describing the actuator and have conducted experiments characterizing the actuator's performance in regards to force, extension, pressure, and speed. We have implemented two parallel robotic applications in the form of a three degree of freedom robot arm and a tetrahedral robot.",
"title": ""
}
] |
scidocsrr
|
a2275e7a0e063c695851810b46a1bfba
|
Real-time monocular ranging by Bayesian triangulation
|
[
{
"docid": "f3d934a354b44c79dfafb6bbb79b7f7c",
"text": "The large number of rear end collisions due to driver inattention has been identified as a major automotive safety issue. Even a short advance warning can significantly reduce the number and severity of the collisions. This paper describes a vision based forward collision warning (FCW) system for highway safety. The algorithm described in this paper computes time to contact (TTC) and possible collision course directly from the size and position of the vehicles in the image - which are the natural measurements for a vision based system - without having to compute a 3D representation of the scene. The use of a single low cost image sensor results in an affordable system which is simple to install. The system has been implemented on real-time hardware and has been test driven on highways. Collision avoidance tests have also been performed on test tracks.",
"title": ""
}
] |
[
{
"docid": "b3bc34cfbe6729f7ce540a792c32bf4c",
"text": "The employment of MIMO OFDM technique constitutes a cost effective approach to high throughput wireless communications. The system performance is sensitive to frequency offset which increases with the doppler spread and causes Intercarrier interference (ICI). ICI is a major concern in the design as it can potentially cause a severe deterioration of quality of service (QoS) which necessitates the need for a high speed data detection and decoding with ICI cancellation along with the intersymbol interference (ISI) cancellation in MIMO OFDM communication systems. Iterative parallel interference canceller (PIC) with joint detection and decoding is a promising approach which is used in this work. The receiver consists of a two stage interference canceller. The co channel interference cancellation is performed based on Zero Forcing (ZF) Detection method used to suppress the effect of ISI in the first stage. The latter stage consists of a simplified PIC scheme. High bit error rates of wireless communication system require employing forward error correction (FEC) methods on the data transferred in order to avoid burst errors that occur in physical channel. To achieve high capacity with minimum error rate Low Density Parity Check (LDPC) codes which have recently drawn much attention because of their error correction performance is used in this system. The system performance is analyzed for two different values of normalized doppler shift for varying speeds. The bit error rate (BER) is shown to improve in every iteration due to the ICI cancellation. The interference analysis with the use of ICI cancellation is examined for a range of normalized doppler shift which corresponds to mobile speeds varying from 5Km/hr to 250Km/hr.",
"title": ""
},
{
"docid": "e0f635de755925344c9095fd2267dc32",
"text": "Suicide is a leading cause of death in the United States. One of the major challenges to suicide prevention is that those who may be most at risk cannot be relied upon to report their conditions to clinicians. This paper takes an initial step toward the automatic detection of suicidal risk factors through social media activity, with no reliance on self-reporting. We consider the performance of annotators with various degrees of expertise in suicide prevention at annotating microblog data for the purpose of training text-based models for detecting suicide risk behaviors. Consistent with crowdsourcing literature, we found that novice-novice annotator pairs underperform expert annotators and outperform automatic lexical analysis tools, such as Linguistic Inquiry and Word Count.",
"title": ""
},
{
"docid": "3e3953e09f35c418316370f2318550aa",
"text": "Poker is ideal for testing automated reason ing under uncertainty. It introduces un certainty both by physical randomization and by incomplete information about op ponents' hands. Another source of uncer tainty is the limited information available to construct psychological models of opponents, their tendencies to bluff, play conservatively, reveal weakness, etc. and the relation be tween their hand strengths and betting be haviour. All of these uncertainties must be assessed accurately and combined effectively for any reasonable level of skill in the game to be achieved, since good decision making is highly sensitive to those tasks. We de scribe our Bayesian Poker Program (BPP) , which uses a Bayesian network to model the program's poker hand, the opponent's hand and the opponent's playing behaviour con ditioned upon the hand, and betting curves which govern play given a probability of win ning. The history of play with opponents is used to improve BPP's understanding of their behaviour. We compare BPP experimentally with: a simple rule-based system; a program which depends exclusively on hand probabil ities (i.e., without opponent modeling); and with human players. BPP has shown itself to be an effective player against all these opponents, barring the better humans. We also sketch out some likely ways of improv ing play.",
"title": ""
},
{
"docid": "e89a3d1381791a8df39153bb85646926",
"text": "While smart cities have the potential to monitor and control the city in real-time through sensors and actuators, there is still an important road ahead to evolve from isolated smart city experiments to real large-scale deployments. Important research questions remain on how and which wireless technologies should be setup for connecting the city, how the data should be analysed and how the acceptance by users of applications can be assessed. In this paper we present the City of Things testbed, which is a smart city testbed located in the city of Antwerp, Belgium to address these questions. It allows the setup and validation of new smart city experiments both at a technology and user level. City of Things consists of a multi-wireless technology network infrastructure, the capacity to easily perform data experiments on top and a living lab approach to validate the experiments. In comparison to other smart city testbeds, City of Things consists of an integrated approach, allowing experimentation on three different layers: networks, data and living lab while supporting a wide range of wireless technologies. We give an overview of the City of Things architecture, explain how researchers can perform smart city experiments and illustrate this by a case study on air quality.",
"title": ""
},
{
"docid": "ac64e1183d9c53845ed5f98d308bcb4b",
"text": "Robust PCA methods are typically based on batch optimization and have to load all the samples into memory during optimization. This prevents them from efficiently processing big data. In this paper, we develop an Online Robust PCA (OR-PCA) that processes one sample per time instance and hence its memory cost is independent of the number of samples, significantly enhancing the computation and storage efficiency. The proposed OR-PCA is based on stochastic optimization of an equivalent reformulation of the batch RPCA. Indeed, we show that OR-PCA provides a sequence of subspace estimations converging to the optimum of its batch counterpart and hence is provably robust to sparse corruption. Moreover, OR-PCA can naturally be applied for tracking dynamic subspace. Comprehensive simulations on subspace recovering and tracking demonstrate the robustness and efficiency advantages of the OR-PCA over online PCA and batch RPCA methods.",
"title": ""
},
{
"docid": "476c85c8325b1781586646625a313cd1",
"text": "This paper describes a data driven approach to studying the science of cyber security (SoS). It argues that science is driven by data. It then describes issues and approaches towards the following three aspects: (i) Data Driven Science for Attack Detection and Mitigation, (ii) Foundations for Data Trustworthiness and Policy-based Sharing, and (iii) A Risk-based Approach to Security Metrics. We believe that the three aspects addressed in this paper will form the basis for studying the Science of Cyber Security.",
"title": ""
},
{
"docid": "7ae5a31b7d4c1138ec4dad6d2b4efb6a",
"text": "Deep Neural Networks have recently gained lots of success after enabling several breakthroughs in notoriously challenging problems. Training these networks is computationally expensive and requires vast amounts of training data. Selling such pre-trained models can, therefore, be a lucrative business model. Unfortunately, once the models are sold they can be easily copied and redistributed. To avoid this, a tracking mechanism to identify models as the intellectual property of a particular vendor is necessary. In this work, we present an approach for watermarking Deep Neural Networks in a black-box way. Our scheme works for general classification tasks and can easily be combined with current learning algorithms. We show experimentally that such a watermark has no noticeable impact on the primary task that the model is designed for and evaluate the robustness of our proposal against a multitude of practical attacks. Moreover, we provide a theoretical analysis, relating our approach to previous work on backdooring.",
"title": ""
},
{
"docid": "5b134fae94a5cc3a2e1b7cc19c5d29e5",
"text": "We explore making virtual desktops behave in a more physically realistic manner by adding physics simulation and using piling instead of filing as the fundamental organizational structure. Objects can be casually dragged and tossed around, influenced by physical characteristics such as friction and mass, much like we would manipulate lightweight objects in the real world. We present a prototype, called BumpTop, that coherently integrates a variety of interaction and visualization techniques optimized for pen input we have developed to support this new style of desktop organization.",
"title": ""
},
{
"docid": "f4cd7a70a257aea595bf4a26142127ff",
"text": "Recent state-of-the-art performance on human-body pose estimation has been achieved with Deep Convolutional Networks (ConvNets). Traditional ConvNet architectures include pooling and sub-sampling layers which reduce computational requirements, introduce invariance and prevent over-training. These benefits of pooling come at the cost of reduced localization accuracy. We introduce a novel architecture which includes an efficient `position refinement' model that is trained to estimate the joint offset location within a small region of the image. This refinement model is jointly trained in cascade with a state-of-the-art ConvNet model [21] to achieve improved accuracy in human joint location estimation. We show that the variance of our detector approaches the variance of human annotations on the FLIC [20] dataset and outperforms all existing approaches on the MPII-human-pose dataset [1].",
"title": ""
},
{
"docid": "a93320450450dd761ea73dfc395c8b46",
"text": "There has been much discussion recently about the scope and limits of purely symbolic models of the mind and abotlt the proper role of connectionism in cognitive modeling. This paper describes the \"symbol grounding problem\": How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols? The problem is analogous to trying to learn Chinese from a Chinese/Chinese dictionary alone. A candidate solution is sketched: Symbolic representations must be grounded bottom-up in nonsymbolic representations of two kinds: (1) iconic representations, which are analogs of the proximal sensory projections of distal objects and events, and (2) categorical representations, which are learned and innate feature detectors that pick out the invariant features of object and event categories from their sensory projections. Elementary symbols are the names of these object and event categories, assigned on the basis of their (nonsymbolic) categorical representations. Higher-order (3) symbolic representations, grounded in these elementary symbols, consist of symbol strings describing category membership relations (e.g. \"An X is a Y that is Z \"). Connectionism is one natural candidate for the mechanism that learns the invariant features underlying categorical representations, thereby connecting names to the proximal projections of the distal objects they stand for. In this way connectionism can be seen as a complementary component in a hybrid nonsymbolic/symbolic model of the mind, rather than a rival to purely symbolic modeling. Such a hybrid model would not have an autonomous symbolic \"module,\" however; the symbolic functions would emerge as an intrinsically \"dedicated\" symbol system as a consequence of the bottom-up grounding of categories' names in their sensory representations. Symbol manipulation would be governed not just by the arbitrary shapes of the symbol tokens, but by the nonarbitrary shapes of the icons and category invariants in which they are grounded.",
"title": ""
},
{
"docid": "254c0fa363a1eb83901ae16da531f5c2",
"text": "The recently developed variational autoencoders (VAEs) have proved to be an effective confluence of the rich representational power of neural networks with Bayesian methods. However, most work on VAEs use a rather simple prior over the latent variables such as standard normal distribution, thereby restricting its applications to relatively simple phenomena. In this work, we propose hierarchical non-parametric variational autoencoders, which combines tree-structured Bayesian nonparametric priors with VAEs, to enable infinite flexibility of the latent representation space. Both the neural parameters and Bayesian priors are learned jointly using tailored variational inference. The resulting model induces a hierarchical structure of latent semantic concepts underlying the data corpus, and infers accurate representations of data instances. We apply our model in video representation learning. Our method is able to discover highly interpretable activity hierarchies, and obtain improved clustering accuracy and generalization capacity based on the learned rich representations.",
"title": ""
},
{
"docid": "2d5db03d9aea5dfea1e3365a43bf3e8b",
"text": "Tracking a mobile node using a wireless sensor network under non-line of sight (NLOS) conditions, has been considered in this work, which is of interest to indoor positioning applications. A hybrid of time difference of arrival (TDOA) and angle of arrival (AOA) measurements, suitable for tracking asynchronous targets, is exploited. The NLOS biases of the TDOA measurements and the position and velocity of the target are included in the state vector. To track the latter, we use a modified form of the extended Kalman filter (EKF) with bound constraints on the NLOS biases, as derived from geometrical considerations. Through simulations, we show that our technique can outperform the EKF and the memoryless constrained optimization techniques. Keywords—Extended Kalman filter; localization; non-line of sight; ultra wideband.",
"title": ""
},
{
"docid": "04089f1c183a6fb1f42578f2cd619142",
"text": "2014 marks the 15th birthday for CLEF, an evaluation campaign activity which has applied the Cranfield evaluation paradigm to the testing of multilingual and multimodal information access systems in Europe. This paper provides a summary of the motivations which led to the establishment of CLEF, and a description of how it has evolved over the years, the major achievements, and what we see as the next challenges.",
"title": ""
},
{
"docid": "80ce6c8c9fc4bf0382c5f01d1dace337",
"text": "Customer loyalty is viewed as the strength of the relationship between an individual's relative attitude and repeat patronage. The relationship is seen as mediated by social norms and situational factors. Cognitive, affective, and conative antecedents of relative attitude are identified as contributing to loyalty, along with motivational, perceptual, and behavioral consequences. Implications for research and for the management of loyalty are derived.",
"title": ""
},
{
"docid": "07713323e19b00c93a21a3d121c0039b",
"text": "A CMOS nested-chopper instrumentation amplifier is presented with a typical offset of 100 nV. This performance is obtained by nesting an additional low-frequency chopper pair around a conventional chopper amplifier. The inner chopper pair removes the 1/f noise, while the outer chopper pair reduces the residual offset. The test chip is free from 1/f noise and has a thermal noise of 27 nV//spl radic/Hz consuming a total supply current of 200 /spl mu/A.",
"title": ""
},
{
"docid": "43db7c431cac1afd33f48774ee0dbc61",
"text": "We present a diff algorithm for XML data. This work is motivated by the support for change control in the context of the Xyleme project that is investigating dynamic warehouses capable of storing massive volume of XML data. Because of the context, our algorithm has to be very efficient in terms of speed and memory space even at the cost of some loss of “quality”. Also, it considers, besides insertions, deletions and updates (standard in diffs), a move operation on subtrees that is essential in the context of XML. Intuitively, our diff algorithm uses signatures to match (large) subtrees that were left unchanged between the old and new versions. Such exact matchings are then possibly propagated to ancestors and descendants to obtain more matchings. It also uses XML specific information such as ID attributes. We provide a performance analysis of the algorithm. We show that it runs in average in linear time vs. quadratic time for previous algorithms. We present experiments on synthetic data that confirm the analysis. Since this problem is NPhard, the linear time is obtained by trading some quality. We present experiments (again on synthetic data) that show that the output of our algorithm is reasonably close to the “optimal” in terms of quality. Finally we present experiments on a small sample of XML pages found on the Web.",
"title": ""
},
{
"docid": "c006fbd6887c7d080addcf814009bd40",
"text": "Aiming at diagnosing and preventing the cardiovascular disease, a portable ECG monitoring system based on Bluetooth mobile phones is presented. The system consists of some novel dry skin electrodes, an ECG monitoring circuit and a smart phone. The weak ECG signals extracted from the dry electrode can be amplified, band-pass filtered, analog-digital converted and so on. Finally it is sent to the mobile phone by Bluetooth technology for real-time display on screen. The core ECG monitoring circuit is composed of a CMOS preamplifier ASIC designed by ourselves, a band-pass filter, a microcontroller and a Bluetooth module. The volume is 5.5 cm × 3.4 cm × 1.6 cm, weight is only 20.76 g (without batteries), and power consumption is 115 mW. The tests show that the system can operate steadily, precisely and display the ECG in real time.",
"title": ""
},
{
"docid": "a9573ff9ad431673bd8c744093460586",
"text": "When an arc fault occurs in a medium-voltage (MV) metal enclosed switchgear, the arc heats the filling gas, resulting in a pressure rise, which may seriously damage the switchgear, the building it is contained in, or even endanger maintenance personnel. A pressure rise calculation method based on computational fluid dynamics (CFD) has been put forward in this paper. The pressure rise was calculated and the arc tests between the copper electrodes were performed in the container under different gap lengths by the current source. The results show that the calculated pressure rise agrees well with the measurement, and the relative error of the average pressure rise is about 2%. Arc volume has less effect on the pressure distribution in the container. Arc voltage Root-Mean-Square (RMS) has significant randomness with the change of arc current, and increases with the increase of gap length. The average arc voltage gradients measure at about 26, 20 and 16 V/cm when the gap lengths are 5, 10 and 15 cm, respectively. The proportion (thermal transfer coefficient kp) of the arc energy leading to the pressure rise in the container is about 44.9%. The pressure is symmetrically distributed in the container before the pressure wave reaches the walls and the process of the energy release is similar to an explosion. The maximum overpressure in the corner is increased under the reflection and superimposition effects of the pressure wave, but the pressure waves will be of no importance any longer than a few milliseconds in the closed container.",
"title": ""
},
{
"docid": "998e8d7e53b693680b63cd19e4b59cc1",
"text": "In this paper, a new computer tomography (CT) lung nodule computer-aided detection (CAD) method is proposed for detecting both solid nodules and ground-glass opacity (GGO) nodules (part solid and nonsolid). This method consists of several steps. First, the lung region is segmented from the CT data using a fuzzy thresholding method. Then, the volumetric shape index map, which is based on local Gaussian and mean curvatures, and the ldquodotrdquo map, which is based on the eigenvalues of a Hessian matrix, are calculated for each voxel within the lungs to enhance objects of a specific shape with high spherical elements (such as nodule objects). The combination of the shape index (local shape information) and ldquodotrdquo features (local intensity dispersion information) provides a good structure descriptor for the initial nodule candidates generation. Antigeometric diffusion, which diffuses across the image edges, is used as a preprocessing step. The smoothness of image edges enables the accurate calculation of voxel-based geometric features. Adaptive thresholding and modified expectation-maximization methods are employed to segment potential nodule objects. Rule-based filtering is first used to remove easily dismissible nonnodule objects. This is followed by a weighted support vector machine (SVM) classification to further reduce the number of false positive (FP) objects. The proposed method has been trained and validated on a clinical dataset of 108 thoracic CT scans using a wide range of tube dose levels that contain 220 nodules (185 solid nodules and 35 GGO nodules) determined by a ground truth reading process. The data were randomly split into training and testing datasets. The experimental results using the independent dataset indicate an average detection rate of 90.2%, with approximately 8.2 FP/scan. Some challenging nodules such as nonspherical nodules and low-contrast part-solid and nonsolid nodules were identified, while most tissues such as blood vessels were excluded. The method's high detection rate, fast computation, and applicability to different imaging conditions and nodule types shows much promise for clinical applications.",
"title": ""
},
{
"docid": "4b2e4a1bd3c6f6af713e507f1d63ba07",
"text": "Model validation constitutes a very important step in system dynamics methodology. Yet, both published and informal evidence indicates that there has been little effort in system dynamics community explicitly devoted to model validity and validation. Validation is a prolonged and complicated process, involving both formal/quantitative tools and informal/ qualitative ones. This paper focuses on the formal aspects of validation and presents a taxonomy of various aspects and steps of formal model validation. First, there is a very brief discussion of the philosophical issues involved in model validation, followed by a flowchart that describes the logical sequence in which various validation activities must be carried out. The crucial nature of structure validity in system dynamics (causal-descriptive) models is emphasized. Then examples are given of specific validity tests used in each of the three major stages of model validation: Structural tests. Introduction",
"title": ""
}
] |
scidocsrr
|
9b73c97f452d2264b8d13ac92ca36375
|
Classical Structured Prediction Losses for Sequence to Sequence Learning
|
[
{
"docid": "7db2f661465cb18abf68e9148f50ce66",
"text": "When training the parameters for a natural language system, one would prefer to minimize 1-best loss (error) on an evaluation set. Since the error surface for many natural language problems is piecewise constant and riddled with local minima, many systems instead optimize log-likelihood, which is conveniently differentiable and convex. We propose training instead to minimize the expected loss, or risk. We define this expectation using a probability distribution over hypotheses that we gradually sharpen (anneal) to focus on the 1-best hypothesis. Besides the linear loss functions used in previous work, we also describe techniques for optimizing nonlinear functions such as precision or the BLEU metric. We present experiments training log-linear combinations of models for dependency parsing and for machine translation. In machine translation, annealed minimum risk training achieves significant improvements in BLEU over standard minimum error training. We also show improvements in labeled dependency parsing. 1 Direct Minimization of Error Researchers in empirical natural language processing have expended substantial ink and effort in developing metrics to evaluate systems automatically against gold-standard corpora. The ongoing evaluation literature is perhaps most obvious in the machine translation community’s efforts to better BLEU (Papineni et al., 2002). Despite this research, parsing or machine translation systems are often trained using the much simpler and harsher metric of maximum likelihood. One reason is that in supervised training, the log-likelihood objective function is generally convex, meaning that it has a single global maximum that can be easily found (indeed, for supervised generative models, the parameters at this maximum may even have a closed-form solution). In contrast to the likelihood surface, the error surface for discrete structured prediction is not only riddled with local minima, but piecewise constant ∗This work was supported by an NSF graduate research fellowship for the first author and by NSF ITR grant IIS0313193 and ONR grant N00014-01-1-0685. The views expressed are not necessarily endorsed by the sponsors. We thank Sanjeev Khudanpur, Noah Smith, Markus Dreyer, and the reviewers for helpful discussions and comments. and not everywhere differentiable with respect to the model parameters (Figure 1). Despite these difficulties, some work has shown it worthwhile to minimize error directly (Och, 2003; Bahl et al., 1988). We show improvements over previous work on error minimization by minimizing the risk or expected error—a continuous function that can be derived by combining the likelihood with any evaluation metric (§2). Seeking to avoid local minima, deterministic annealing (Rose, 1998) gradually changes the objective function from a convex entropy surface to the more complex risk surface (§3). We also discuss regularizing the objective function to prevent overfitting (§4). We explain how to compute expected loss under some evaluation metrics common in natural language tasks (§5). We then apply this machinery to training log-linear combinations of models for dependency parsing and for machine translation (§6). Finally, we note the connections of minimum risk training to max-margin training and minimum Bayes risk decoding (§7), and recapitulate our results (§8). 2 Training Log-Linear Models In this work, we focus on rescoring with loglinear models. 
In particular, our experiments consider log-linear combinations of a relatively small number of features over entire complex structures, such as trees or translations, known in some previous work as products of experts (Hinton, 1999) or logarithmic opinion pools (Smith et al., 2005). A feature in the combined model might thus be a log probability from an entire submodel. Giving this feature a small or negative weight can discount a submodel that is foolishly structured, badly trained, or redundant with the other features. For each sentence xi in our training corpus S, we are given Ki possible analyses yi,1, . . . yi,Ki . (These may be all of the possible translations or parse trees; or only the Ki most probable under Figure 1: The loss surface for a machine translation system: while other parameters are held constant, we vary the weights on the distortion and word penalty features. Note the piecewise constant regions with several local maxima. some other model; or only a random sample of size Ki.) Each analysis has a vector of real-valued features (i.e., factors, or experts) denoted fi,k. The score of the analysis yi,k is θ · fi,k, the dot product of its features with a parameter vector θ. For each sentence, we obtain a normalized probability distribution over the Ki analyses as pθ(yi,k | xi) = exp θ · fi,k ∑Ki k′=1 exp θ · fi,k′ (1) We wish to adjust this model’s parameters θ to minimize the severity of the errors we make when using it to choose among analyses. A loss function Ly∗(y) assesses a penalty for choosing y when y∗ is correct. We will usually write this simply as L(y) since y∗ is fixed and clear from context. For clearer exposition, we assume below that the total loss over some test corpus is the sum of the losses on individual sentences, although we will revisit that assumption in §5. 2.1 Minimizing Loss or Expected Loss One training criterion directly mimics test conditions. It looks at the loss incurred if we choose the best analysis of each xi according to the model:",
"title": ""
},
{
"docid": "a702269cd9fce037f2f74f895595d573",
"text": "This paper tackles the reduction of redundant repeating generation that is often observed in RNN-based encoder-decoder models. Our basic idea is to jointly estimate the upper-bound frequency of each target vocabulary in the encoder and control the output words based on the estimation in the decoder. Our method shows significant improvement over a strong RNN-based encoder-decoder baseline and achieved its best results on an abstractive summarization benchmark.",
"title": ""
}
] |
[
{
"docid": "f1b137d4ac36e141415963d6fab14918",
"text": "Passive equipments operating in the 30-300 GHz (millimeter wave) band are compared to those in the 300 GHz-3 THz (submillimeter band). Equipments operating in the submillimeter band can measure distance and also spectral information and have been used to address new opportunities in security. Solid state spectral information is available in the submillimeter region making it possible to identify materials, whereas in millimeter region bulk optical properties determine the image contrast. The optical properties in the region from 30 GHz to 3 THz are discussed for some typical inorganic and organic solids. In the millimeter-wave region of the spectrum, obscurants such as poor weather, dust, and smoke can be penetrated and useful imagery generated for surveillance. In the 30 GHz-3 THz region dielectrics such as plastic and cloth are also transparent and the detection of contraband hidden under clothing is possible. A passive millimeter-wave imaging concept based on a folded Schmidt camera has been developed and applied to poor weather navigation and security. The optical design uses a rotating mirror and is folded using polarization techniques. The design is very well corrected over a wide field of view making it ideal for surveillance and security. This produces a relatively compact imager which minimizes the receiver count.",
"title": ""
},
{
"docid": "796ae2d702a66d7af19ac4bb6a52aa6b",
"text": "Methods for embedding secret data are more sophisticated than their ancient predecessors, but the basic principles remain unchanged.",
"title": ""
},
{
"docid": "196ddcefb2c3fcb6edd5e8d108f7e219",
"text": "This paper may be considered as a practical reference for those who wish to add (now sufficiently matured) Agent Based modeling to their analysis toolkit and may or may not have some System Dynamics or Discrete Event modeling background. We focus on systems that contain large numbers of active objects (people, business units, animals, vehicles, or even things like projects, stocks, products, etc. that have timing, event ordering or other kind of individual behavior associated with them). We compare the three major paradigms in simulation modeling: System Dynamics, Discrete Event and Agent Based Modeling with respect to how they approach such systems. We show in detail how an Agent Based model can be built from an existing System Dynamics or a Discrete Event model and then show how easily it can be further enhanced to capture much more complicated behavior, dependencies and interactions thus providing for deeper insight in the system being modeled. Commonly understood examples are used throughout the paper; all models are specified in the visual language supported by AnyLogic tool. We view and present Agent Based modeling not as a substitution to older modeling paradigms but as a useful add-on that can be efficiently combined with System Dynamics and Discrete Event modeling. Several multi-paradigm model architectures are suggested.",
"title": ""
},
{
"docid": "11a28e11ba6e7352713b8ee63291cd9c",
"text": "This review focuses on discussing the main changes on the upcoming fourth edition of the WHO Classification of Tumors of the Pituitary Gland emphasizing histopathological and molecular genetics aspects of pituitary neuroendocrine (i.e., pituitary adenomas) and some of the non-neuroendocrine tumors involving the pituitary gland. Instead of a formal review, we introduced the highlights of the new WHO classification by answering select questions relevant to practising pathologists. The revised classification of pituitary adenomas, in addition to hormone immunohistochemistry, recognizes the role of other immunohistochemical markers including but not limited to pituitary transcription factors. Recognizing this novel approach, the fourth edition of the WHO classification has abandoned the concept of \"a hormone-producing pituitary adenoma\" and adopted a pituitary adenohypophyseal cell lineage designation of the adenomas with subsequent categorization of histological variants according to hormone content and specific histological and immunohistochemical features. This new classification does not require a routine ultrastructural examination of these tumors. The new definition of the Null cell adenoma requires the demonstration of immunonegativity for pituitary transcription factors and adenohypophyseal hormones Moreover, the term of atypical pituitary adenoma is no longer recommended. In addition to the accurate tumor subtyping, assessment of the tumor proliferative potential by mitotic count and Ki-67 index, and other clinical parameters such as tumor invasion, is strongly recommended in individual cases for consideration of clinically aggressive adenomas. This classification also recognizes some subtypes of pituitary neuroendocrine tumors as \"high-risk pituitary adenomas\" due to the clinical aggressive behavior; these include the sparsely granulated somatotroph adenoma, the lactotroph adenoma in men, the Crooke's cell adenoma, the silent corticotroph adenoma, and the newly introduced plurihormonal Pit-1-positive adenoma (previously known as silent subtype III pituitary adenoma). An additional novel aspect of the new WHO classification was also the definition of the spectrum of thyroid transcription factor-1 expressing pituitary tumors of the posterior lobe as representing a morphological spectrum of a single nosological entity. These tumors include the pituicytoma, the spindle cell oncocytoma, the granular cell tumor of the neurohypophysis, and the sellar ependymoma.",
"title": ""
},
{
"docid": "fc5a2b6f5258e59afff3f910010b1f9a",
"text": "This paper proposes a novel isolated bidirectional converter, which can efficiently transfer energy between 400 V DC micro grid and 48 V DC batteries. The proposed structure includes primary windings of two flyback transformers, which are connected in series and sharing the high DC micro grid voltage equally, and secondary windings, which are connected in parallel to batteries. Few decoupling diodes are added into the proposed circuit on both sides, which can let the leakage inductance energy of flyback transformers be recycled easily and reduce the voltage stress as well as power losses during bidirectional power transfer. Therefore, low voltage rating and low conduction resistance switches can be selected to improve system efficiency. A laboratory prototype of the proposed converter with an input/output nominal voltage of 400 V/48 V and the maximum capacity of 500 W is implemented. The highest power conversion efficiency is 93.1 % in step-down function, and near 93 % in step-up function.",
"title": ""
},
{
"docid": "881da6fd2d6c77d9f31ba6237c3d2526",
"text": "Pakistan is a developing country with more than half of its population located in rural areas. These areas neither have sufficient health care facilities nor a strong infrastructure that can address the health needs of the people. The expansion of Information and Communication Technology (ICT) around the globe has set up an unprecedented opportunity for delivery of healthcare facilities and infrastructure in these rural areas of Pakistan as well as in other developing countries. Mobile Health (mHealth)—the provision of health care services through mobile telephony—will revolutionize the way health care is delivered. From messaging campaigns to remote monitoring, mobile technology will impact every aspect of health systems. This paper highlights the growth of ICT sector and status of health care facilities in the developing countries, and explores prospects of mHealth as a transformer for health systems and service delivery especially in the remote rural areas.",
"title": ""
},
{
"docid": "395fc8e1c25be4f1809c77a0088dfa91",
"text": "The recently released Stanford Question Answering Dataset (SQuAD) provides a unique version of the question-answer problem that more closely relates to the complex structure of natural language, and thus lends itself to the expressive power of neural networks. We explore combining tested techniques within an encoder-decoder architecture in an attempt to achieve a model that is both accurate and efficient. We ultimately propose a model that utlizes bidirectional LSTM’s fed into a coattention layer, and a fairly simple decoder consisting of an LSTM with two hidden layers. We find through our experimentation that the model performs better than combinations of coattention with both our simpler and more complex decoders. We also find that it excels at answering questions where the answer can rely on marker words or structural context rather than abstract context.",
"title": ""
},
{
"docid": "51c42a305039d65dc442910c8078a9aa",
"text": "Infants are experts at playing, with an amazing ability to generate novel structured behaviors in unstructured environments that lack clear extrinsic reward signals. We seek to mathematically formalize these abilities using a neural network that implements curiosity-driven intrinsic motivation. Using a simple but ecologically naturalistic simulated environment in which an agent can move and interact with objects it sees, we propose a “world-model” network that learns to predict the dynamic consequences of the agent’s actions. Simultaneously, we train a separate explicit “self-model” that allows the agent to track the error map of its worldmodel. It then uses the self-model to adversarially challenge the developing world-model. We demonstrate that this policy causes the agent to explore novel and informative interactions with its environment, leading to the generation of a spectrum of complex behaviors, including ego-motion prediction, object attention, and object gathering. Moreover, the world-model that the agent learns supports improved performance on object dynamics prediction, detection, localization and recognition tasks. Taken together, our results are initial steps toward creating flexible autonomous agents that self-supervise in realistic physical environments.",
"title": ""
},
{
"docid": "c16428f049cebdc383c4ee24f75da6b0",
"text": "Classification and regression trees are machine-learning methods for constructing prediction models from data. The models are obtained by recursively partitioning the data space and fitting a simple prediction model within each partition. As a result, the partitioning can be represented graphically as a decision tree. Classification trees are designed for dependent variables that take a finite number of unordered values, with prediction error measured in terms of misclassification cost. Regression trees are for dependent variables that take continuous or ordered discrete values, with prediction error typically measured by the squared difference between the observed and predicted values. This article gives an introduction to the subject by reviewing some widely available algorithms and comparing their capabilities, strengths, and weakness in two examples. C © 2011 John Wiley & Sons, Inc. WIREs Data Mining Knowl Discov 2011 1 14–23 DOI: 10.1002/widm.8",
"title": ""
},
{
"docid": "608bf85fa593c7ddff211c5bcc7dd20a",
"text": "We introduce a composite deep neural network architecture for supervised and language independent context sensitive lemmatization. The proposed method considers the task as to identify the correct edit tree representing the transformation between a word-lemma pair. To find the lemma of a surface word, we exploit two successive bidirectional gated recurrent structures the first one is used to extract the character level dependencies and the next one captures the contextual information of the given word. The key advantages of our model compared to the state-of-the-art lemmatizers such as Lemming and Morfette are (i) it is independent of human decided features (ii) except the gold lemma, no other expensive morphological attribute is required for joint learning. We evaluate the lemmatizer on nine languages Bengali, Catalan, Dutch, Hindi, Hungarian, Italian, Latin, Romanian and Spanish. It is found that except Bengali, the proposed method outperforms Lemming and Morfette on the other languages. To train the model on Bengali, we develop a gold lemma annotated dataset1 (having 1, 702 sentences with a total of 20, 257 word tokens), which is an additional contribution of this work.",
"title": ""
},
{
"docid": "9218597308b80bdfa41511e977b42dd1",
"text": "The biophysical characterization of CPX-351, a liposomal formulation of cytarabine and daunorubicin encapsulated in a synergistic 5:1 molar ratio (respectively), is presented. CPX-351 is a promising drug candidate currently in two concurrent Phase 2 trials for treatment of acute myeloid leukemia. Its therapeutic activity is dependent on maintenance of the synergistic 5:1 drug:drug ratio in vivo. CPX-351 liposomes have a mean diameter of 107 nm, a single phase transition temperature of 55.3 degrees C, entrapped volume of 1.5 microL/micromol lipid and a zeta potential of -33 mV. Characterization of these physicochemical properties led to identification of an internal structure within the liposomes, later shown to be produced during the cytarabine loading procedure. Fluorescence labeling studies are presented that definitively show that the structure is composed of lipid and represents a second lamella. Extensive spectroscopic studies of the drug-excipient interactions within the liposome and in solution reveal that interactions of both cytarabine and daunorubicin with the copper(II) gluconate/triethanolamine-based buffer system play a role in maintenance of the 5:1 cytarabine:daunorubicin ratio within the formulation. These studies demonstrate the importance of extensive biophysical study of liposomal drug products to elucidate the key physicochemical properties that may impact their in vivo performance.",
"title": ""
},
{
"docid": "8a28f3ad78a77922fd500b805139de4b",
"text": "Sina Weibo is the most popular and fast growing microblogging social network in China. However, more and more spam messages are also emerging on Sina Weibo. How to detect these spam is essential for the social network security. While most previous studies attempt to detect the microblogging spam by identifying spammers, in this paper, we want to exam whether we can detect the spam by each single Weibo message, because we notice that more and more spam Weibos are posted by normal users or even popular verified users. We propose a Weibo spam detection method based on machine learning algorithm. In addition, different from most existing microblogging spam detection methods which are based on English microblogs, our method is designed to deal with the features of Chinese microblogs. Our extensive empirical study shows the effectiveness of our approach.",
"title": ""
},
{
"docid": "20f98a15433514dc5aa76110f68a71ba",
"text": "We describe a case of secondary syphilis of the tongue in which the main clinical presentation of the disease was similar to oral hairy leukoplakia. In a man who was HIV seronegative, the first symptom was a dryness of the throat followed by a feeling of foreign body in the tongue. Lesions were painful without cutaneous manifestations of secondary syphilis. IgM-fluorescent treponemal antibody test and typical serologic parameters promptly led to the diagnosis of secondary syphilis. We initiated an appropriate antibiotic therapy using benzathine penicillin, which induced healing of the tongue lesions. The differential diagnosis of this lesion may include oral squamous carcinoma, leukoplakia, candidosis, lichen planus, and, especially, hairy oral leukoplakia. This case report emphasizes the importance of considering secondary syphilis in the differential diagnosis of hairy oral leukoplakia. Depending on the clinical picture, the possibility of syphilis should not be overlooked in the differential diagnosis of many diseases of the oral mucosa.",
"title": ""
},
{
"docid": "2223bfd504f5552df290bdaec0553a36",
"text": "Department of Computer Information Systems, J. Mack Robinson College of Business, Georgia State University, Atlanta, USA; Department of Information Technology & Decision Sciences, Strome College of Business, Old Dominion University, Norfolk, USA; Management Science and Information Systems Department, College of Management, University of Massachusetts Boston, 100 Morrissey Blvd., Boston, MA 02125, USA; Department of Computer Information Systems, Zicklin School of Business, The City University of New York, New York, USA; Department of Business Information Technology, Pamplin College of Business, Virginia Polytechnic Institute and State University, Blacksburg, USA",
"title": ""
},
{
"docid": "773d90c215b4c04cf713b1c1266f88d9",
"text": "Electromyography (EMG) is the study of muscles function through analysis of electrical activity produced from muscles. This electrical activity which is displayed in the form of signal is the result of neuromuscular activation associated with muscle contraction. The most common techniques of EMG signal recording are by using surface and needle/wire electrode where the latter is usually used for interest in deep muscle. This paper will focus on surface electromyogram (SEMG) signal. During SEMG recording, several problems had to been countered such as noise, motion artifact and signal instability. Thus, various signal processing techniques had been implemented to produce a reliable signal for analysis. SEMG signal finds broad application particularly in biomedical field. It had been analyzed and studied for various interests such as neuromuscular disease, enhancement of muscular function and human-computer interface. Keywords—Evolvable hardware (EHW), Functional Electrical Simulation (FES), Hidden Markov Model (HMM), Hjorth Time Domain (HTD).",
"title": ""
},
{
"docid": "fa9571673fe848d1d119e2d49f21d28d",
"text": "Convolutional Neural Networks (CNNs) trained on large scale RGB databases have become the secret sauce in the majority of recent approaches for object categorization from RGB-D data. Thanks to colorization techniques, these methods exploit the filters learned from 2D images to extract meaningful representations in 2.5D. Still, the perceptual signature of these two kind of images is very different, with the first usually strongly characterized by textures, and the second mostly by silhouettes of objects. Ideally, one would like to have two CNNs, one for RGB and one for depth, each trained on a suitable data collection, able to capture the perceptual properties of each channel for the task at hand. This has not been possible so far, due to the lack of a suitable depth database. This paper addresses this issue, proposing to opt for synthetically generated images rather than collecting by hand a 2.5D large scale database. While being clearly a proxy for real data, synthetic images allow to trade quality for quantity, making it possible to generate a virtually infinite amount of data. We show that the filters learned from such data collection, using the very same architecture typically used on visual data, learns very different filters, resulting in depth features (a) able to better characterize the different facets of depth images, and (b) complementary with respect to those derived from CNNs pre-trained on 2D datasets. Experiments on two publicly available databases show the power of our approach.",
"title": ""
},
{
"docid": "c006fbd6887c7d080addcf814009bd40",
"text": "Aiming at diagnosing and preventing the cardiovascular disease, a portable ECG monitoring system based on Bluetooth mobile phones is presented. The system consists of some novel dry skin electrodes, an ECG monitoring circuit and a smart phone. The weak ECG signals extracted from the dry electrode can be amplified, band-pass filtered, analog-digital converted and so on. Finally it is sent to the mobile phone by Bluetooth technology for real-time display on screen. The core ECG monitoring circuit is composed of a CMOS preamplifier ASIC designed by ourselves, a band-pass filter, a microcontroller and a Bluetooth module. The volume is 5.5 cm × 3.4 cm × 1.6 cm, weight is only 20.76 g (without batteries), and power consumption is 115 mW. The tests show that the system can operate steadily, precisely and display the ECG in real time.",
"title": ""
},
{
"docid": "2e6a47d8ec4b955992ec344d58984297",
"text": "Businesses increasingly attempt to learn more about their customers, suppliers, and operations by using millions of networked sensors integrated, for example, in mobile phones, cashier systems, automobiles, or weather stations. This development raises the question of how companies manage to cope with these ever-increasing amounts of data, referred to as Big Data. Consequently, the aim of this paper is to identify different Big Data strategies a company may implement and provide a set of organizational contingency factors that influence strategy choice. In order to do so, we reviewed existing literature in the fields of Big Data analytics, data warehousing, and business intelligence and synthesized our findings into a contingency matrix that may support practitioners in choosing a suitable Big Data approach. We find that while every strategy can be beneficial under certain corporate circumstances, the hybrid approach - a combination of traditional relational database structures and MapReduce techniques - is the strategy most often valuable for companies pursuing Big Data analytics.",
"title": ""
},
{
"docid": "80e0a6c270bb146a1a45994d27340639",
"text": "BACKGROUND\nThe promotion of active and healthy ageing is becoming increasingly important as the population ages. Physical activity (PA) significantly reduces all-cause mortality and contributes to the prevention of many chronic illnesses. However, the proportion of people globally who are active enough to gain these health benefits is low and decreases with age. Social support (SS) is a social determinant of health that may improve PA in older adults, but the association has not been systematically reviewed. This review had three aims: 1) Systematically review and summarise studies examining the association between SS, or loneliness, and PA in older adults; 2) clarify if specific types of SS are positively associated with PA; and 3) investigate whether the association between SS and PA differs between PA domains.\n\n\nMETHODS\nQuantitative studies examining a relationship between SS, or loneliness, and PA levels in healthy, older adults over 60 were identified using MEDLINE, PSYCInfo, SportDiscus, CINAHL and PubMed, and through reference lists of included studies. Quality of these studies was rated.\n\n\nRESULTS\nThis review included 27 papers, of which 22 were cross sectional studies, three were prospective/longitudinal and two were intervention studies. Overall, the study quality was moderate. Four articles examined the relation of PA with general SS, 17 with SS specific to PA (SSPA), and six with loneliness. The results suggest that there is a positive association between SSPA and PA levels in older adults, especially when it comes from family members. No clear associations were identified between general SS, SSPA from friends, or loneliness and PA levels. When measured separately, leisure time PA (LTPA) was associated with SS in a greater percentage of studies than when a number of PA domains were measured together.\n\n\nCONCLUSIONS\nThe evidence surrounding the relationship between SS, or loneliness, and PA in older adults suggests that people with greater SS for PA are more likely to do LTPA, especially when the SS comes from family members. However, high variability in measurement methods used to assess both SS and PA in included studies made it difficult to compare studies.",
"title": ""
},
{
"docid": "afb5f0090b11aafada24d056a6fd4f0a",
"text": "It is commonly believed that steganography within TCP/IP is easily achieved by embedding data in header fields seemingly filled with “random” data, such as the IP identifier, TCP initial sequence number (ISN) or the least significant bit of the TCP timestamp. We show that this is not the case; these fields naturally exhibit sufficient structure and non-uniformity to be efficiently and reliably differentiated from unmodified ciphertext. Previous work on TCP/IP steganography does not take this into account and, by examining TCP/IP specifications and open source implementations, we have developed tests to detect the use of näıve embedding. Finally, we describe reversible transforms that map block cipher output onto TCP ISNs, indistinguishable from those generated by Linux and OpenBSD. The techniques used can be extended to other operating systems. A message can thus be hidden so that an attacker cannot demonstrate its existence without knowing a secret key.",
"title": ""
}
] |
scidocsrr
|
4505a5afc603ae540b709f23db4da468
|
Topology optimization for galvanic coupled wireless intra-body communication
|
[
{
"docid": "04e094e8f1e0466248df9c1263285f0b",
"text": "We propose a mathematical formulation for the notion of optimal projective cluster, starting from natural requirements on the density of points in subspaces. This allows us to develop a Monte Carlo algorithm for iteratively computing projective clusters. We prove that the computed clusters are good with high probability. We implemented a modified version of the algorithm, using heuristics to speed up computation. Our extensive experiments show that our method is significantly more accurate than previous approaches. In particular, we use our techniques to build a classifier for detecting rotated human faces in cluttered images.",
"title": ""
}
] |
[
{
"docid": "0070d6e21bdb8bac260178603cfbf67d",
"text": "Sound is a medium that conveys functional and emotional information in a form of multilayered streams. With the use of such advantage, robot sound design can open a way for being more efficient communication in human-robot interaction. As the first step of research, we examined how individuals perceived the functional and emotional intention of robot sounds and whether the perceived information from sound is associated with their previous experience with science fiction movies. The sound clips were selected based on the context of the movie scene (i.e., Wall-E, R2-D2, BB8, Transformer) and classified as functional (i.e., platform, monitoring, alerting, feedback) and emotional (i.e., positive, neutral, negative). A total of 12 participants were asked to identify the perceived properties for each of the 30 items. We found that the perceived emotional and functional messages varied from those originally intended and differed by previous experience.",
"title": ""
},
{
"docid": "789a9d6e2a007938fa8f1715babcabd2",
"text": "We present a novel framework that enables efficient probabilistic inference in large-scale scientific models by allowing the execution of existing domain-specific simulators as probabilistic programs, resulting in highly interpretable posterior inference. Our framework is general purpose and scalable, and is based on a crossplatform probabilistic execution protocol through which an inference engine can control simulators in a language-agnostic way. We demonstrate the technique in particle physics, on a scientifically accurate simulation of the τ (tau) lepton decay, which is a key ingredient in establishing the properties of the Higgs boson. Highenergy physics has a rich set of simulators based on quantum field theory and the interaction of particles in matter. We show how to use probabilistic programming to perform Bayesian inference in these existing simulator codebases directly, in particular conditioning on observable outputs from a simulated particle detector to directly produce an interpretable posterior distribution over decay pathways. Inference efficiency is achieved via inference compilation where a deep recurrent neural network is trained to parameterize proposal distributions and control the stochastic simulator in a sequential importance sampling scheme, at a fraction of the computational cost of Markov chain Monte Carlo sampling.",
"title": ""
},
{
"docid": "5e530aefee0a4b1ef986a086a17078fd",
"text": "One key property of word embeddings currently under study is their capacity to encode hypernymy. Previous works have used supervised models to recover hypernymy structures from embeddings. However, the overall results do not clearly show how well we can recover such structures. We conduct the first dataset-centric analysis that shows how only the Baroni dataset provides consistent results. We empirically show that a possible reason for its good performance is its alignment to dimensions specific of hypernymy: generality and similarity.",
"title": ""
},
{
"docid": "1dad20d7f19e20945e9ad28aa5a70d93",
"text": "Article history: Received 3 January 2016 Received in revised form 9 June 2017 Accepted 26 September 2017 Available online 16 October 2017",
"title": ""
},
{
"docid": "98b908b6d1cddb4290b6c09e482a7745",
"text": "Systems for automated image analysis are useful for a variety of tasks and their importance is still growing due to technological advances and an increase of social acceptance. Especially in the field of driver assistance systems the progress in science has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik methods for analyzing driving relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear view mirror in a car. The approach consists of a sequential and a parallel sensor and information processing. Three main tasks namely the initial segmentation (object detection), the object tracking and the object classification are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach is given by the integrative coupling of different algorithms providing partly redundant information. Keywords— Driver Assistance, Machine Vision, Data",
"title": ""
},
{
"docid": "b66609e66cc9c3844974b3246b8f737e",
"text": "— Inspired by the evolutionary conjecture that sexually selected traits function as indicators of pathogen resistance in animals and humans, we examined the notion that human facial attractiveness provides evidence of health. Using photos of 164 males and 169 females in late adolescence and health data on these individuals in adolescence, middle adulthood, and later adulthood, we found that adolescent facial attractiveness was unrelated to adolescent health for either males or females, and was not predictive of health at the later times. We also asked raters to guess the health of each stimulus person from his or her photo. Relatively attractive stimulus persons were mistakenly rated as healthier than their peers. The correlation between perceived health and medically assessed health increased when attractiveness was statistically controlled, which implies that attractiveness suppressed the accurate recognition of health. These findings may have important implications for evolutionary models. 0 When social psychologists began in earnest to study physical attractiveness , they were startled by the powerful effect of facial attractiveness on choice of romantic partner (Walster, Aronson, Abrahams, & Rott-mann, 1966) and other aspects of human interaction (Berscheid & Wal-ster, 1974; Hatfield & Sprecher, 1986). More recent findings have been startling again in revealing that infants' preferences for viewing images of faces can be predicted from adults' attractiveness ratings of the faces The assumption that perceptions of attractiveness are culturally determined has thus given ground to the suggestion that they are in substantial part biologically based (Langlois et al., 1987). A biological basis for perception of facial attractiveness is aptly viewed as an evolutionary basis. It happens that evolutionists, under the rubric of sexual selection theory, have recently devoted increasing attention to the origin and function of sexually attractive traits in animal species (Andersson, 1994; Hamilton & Zuk, 1982). Sexual selection as a province of evolutionary theory actually goes back to Darwin (1859, 1871), who noted with chagrin that a number of animals sport an appearance that seems to hinder their survival chances. Although the females of numerous birds of prey, for example, are well camouflaged in drab plum-age, their mates wear bright plumage that must be conspicuous to predators. Darwin divined that the evolutionary force that \" bred \" the males' bright plumage was the females' preference for such showiness in a mate. Whereas Darwin saw aesthetic preferences as fundamental and did not seek to give them adaptive functions, other scholars, beginning …",
"title": ""
},
{
"docid": "8b09f1c9e5b20e2bc9c7c82c2cb39cd5",
"text": "Commercial Light-Field cameras provide spatial and angular information, but its limited resolution becomes an important problem in practical use. In this paper, we present a novel method for Light-Field image super-resolution (SR) via a deep convolutional neural network. Rather than the conventional optimization framework, we adopt a datadriven learning method to simultaneously up-sample the angular resolution as well as the spatial resolution of a Light-Field image. We first augment the spatial resolution of each sub-aperture image to enhance details by a spatial SR network. Then, novel views between the sub-aperture images are generated by an angular super-resolution network. These networks are trained independently but finally finetuned via end-to-end training. The proposed method shows the state-of-the-art performance on HCI synthetic dataset, and is further evaluated by challenging real-world applications including refocusing and depth map estimation.",
"title": ""
},
{
"docid": "93ea7c59bad8181b0379f39e00f4d2e8",
"text": "Breadth-First Search (BFS) is a key graph algorithm with many important applications. In this work, we focus on a special class of graph traversal algorithm - concurrent BFS - where multiple breadth-first traversals are performed simultaneously on the same graph. We have designed and developed a new approach called iBFS that is able to run i concurrent BFSes from i distinct source vertices, very efficiently on Graphics Processing Units (GPUs). iBFS consists of three novel designs. First, iBFS develops a single GPU kernel for joint traversal of concurrent BFS to take advantage of shared frontiers across different instances. Second, outdegree-based GroupBy rules enables iBFS to selectively run a group of BFS instances which further maximizes the frontier sharing within such a group. Third, iBFS brings additional performance benefit by utilizing highly optimized bitwise operations on GPUs, which allows a single GPU thread to inspect a vertex for concurrent BFS instances. The evaluation on a wide spectrum of graph benchmarks shows that iBFS on one GPU runs up to 30x faster than executing BFS instances sequentially, and on 112 GPUs achieves near linear speedup with the maximum performance of 57,267 billion traversed edges per second (TEPS).",
"title": ""
},
{
"docid": "6789e2e452a19da3a00b95a27994ee62",
"text": "Reflection in healthcare education is an emerging topic with many recently published studies and reviews. This current systematic review of reviews (umbrella review) of this field explores the following aspects: which definitions and models are currently in use; how reflection impacts design, evaluation, and assessment; and what future challenges must be addressed. Nineteen reviews satisfying the inclusion criteria were identified. Emerging themes include the following: reflection is currently regarded as self-reflection and critical reflection, and the epistemology-of-practice notion is less in tandem with the evidence-based medicine paradigm of modern science than expected. Reflective techniques that are recognised in multiple settings (e.g., summative, formative, group vs. individual) have been associated with learning, but assessment as a research topic, is associated with issues of validity, reliability, and reproducibility. Future challenges include the epistemology of reflection in healthcare education and the development of approaches for practising and assessing reflection without loss of theoretical background.",
"title": ""
},
{
"docid": "ba34f6120b08c57cec8794ec2b9256d2",
"text": "Principles of reconstruction dictate a number of critical points for successful repair. To achieve aesthetic and functional goals, the dermatologic surgeon should avoid deviation of anatomical landmarks and free margins, maintain shape and symmetry, and repair with skin of similar characteristics. Reconstruction of the ear presents a number of unique challenges based on the limited amount of adjacent lax tissue within the cosmetic unit and the structure of the auricle, which consists of a relatively thin skin surface and flexible cartilaginous framework.",
"title": ""
},
{
"docid": "61001cde9bf426f34bdaf290072f03b4",
"text": "Proportional-integral-derivative (PID) controllers are widely used in industrial control systems because of the reduced number of parameters to be tuned. The most popular design technique is the Ziegler–Nichols method. This paper presents the design of PID controller to realize governor action in power generation plant. The conventional PID controller is replaced by Ziegler Nichols tuning PID controller to make them more general and to achieve the minimum steady state error, also to improve the other dynamic behaviour. The performance and design for automatic selection of the PID constants are also discussed. Keywords—Proportional-Integral-Derivative controls, PID hardware, Ziegler-Nichols Tuning, PID controllers design.",
"title": ""
},
{
"docid": "152c11ef8449d53072bbdb28432641fa",
"text": "Flexible intelligent electronic devices (IEDs) are highly desirable to support free allocation of function to IED by means of software reconfiguration without any change of hardware. The application of generic hardware platforms and component-based software technology seems to be a good solution. Due to the advent of IEC 61850, generic hardware platforms with a standard communication interface can be used to implement different kinds of functions with high flexibility. The remaining challenge is the unified function model that specifies various software components with appropriate granularity and provides a framework to integrate them efficiently. This paper proposes the function-block (FB)-based function model for flexible IEDs. The standard FBs are established by combining the IEC 61850 model and the IEC 61499 model. The design of a simplified distance protection IED using standard FBs is described and investigated. The testing results of the prototype system in MATLAB/Simulink demonstrate the feasibility and flexibility of FB-based IEDs.",
"title": ""
},
{
"docid": "0c31ad159095de6057d43534199e1e45",
"text": "We present a novel spatial hashing based data structure to facilitate 3D shape analysis using convolutional neural networks (CNNs). Our method builds hierarchical hash tables for an input model under different resolutions that leverage the sparse occupancy of 3D shape boundary. Based on this data structure, we design two efficient GPU algorithms namely hash2col and col2hash so that the CNN operations like convolution and pooling can be efficiently parallelized. The perfect spatial hashing is employed as our spatial hashing scheme, which is not only free of hash collision but also nearly minimal so that our data structure is almost of the same size as the raw input. Compared with existing 3D CNN methods, our data structure significantly reduces the memory footprint during the CNN training. As the input geometry features are more compactly packed, CNN operations also run faster with our data structure. The experiment shows that, under the same network structure, our method yields comparable or better benchmark results compared with the state-of-the-art while it has only one-third memory consumption when under high resolutions (i.e. 256 3).",
"title": ""
},
{
"docid": "433340f3392257a8ac830215bf5e3ef2",
"text": "A compact Substrate Integrated Waveguide (SIW) Leaky-Wave Antenna (LWA) is proposed. Internal vias are inserted in the SIW in order to have narrow walls, and so reducing the size of the SIW-LWA, the new structure is called Slow Wave - Substrate Integrated Waveguide - Leaky Wave Antenna (SW-SIW-LWA), since inserting the vias induce the SW effect. After designing the antenna and simulating with HFSS a reduction of 30% of the transverse side of the antenna is attained while maintaining an acceptable gain. Other parameters like the radiation efficiency, Gain, directivity, and radiation pattern are analyzed. Finally a Comparison of our miniaturization technique with Half-Mode Substrate Integrated Waveguide (HMSIW) technique realized in recent articles is done, shows that SW-SIW-LWA technique could be a good candidate for SIW miniaturization.",
"title": ""
},
{
"docid": "ee141b7fd5c372fb65d355fe75ad47af",
"text": "As 100-Gb/s coherent systems based on polarization- division multiplexed quadrature phase shift keying (PDM-QPSK), with aggregate wavelength-division multiplexed (WDM) capacities close to 10 Tb/s, are getting widely deployed, the use of high-spectral-efficiency quadrature amplitude modulation (QAM) to increase both per-channel interface rates and aggregate WDM capacities is the next evolutionary step. In this paper we review high-spectral-efficiency optical modulation formats for use in digital coherent systems. We look at fundamental as well as at technological scaling trends and highlight important trade-offs pertaining to the design and performance of coherent higher-order QAM transponders.",
"title": ""
},
{
"docid": "0c12178e7c7d5c66343bb5a152b42fca",
"text": "This study was a randomized controlled trial to investigate the effect of treating women with stress or mixed urinary incontinence (SUI or MUI) by diaphragmatic, deep abdominal and pelvic floor muscle (PFM) retraining. Seventy women were randomly allocated to the training (n = 35) or control group (n = 35). Women in the training group received 8 individual clinical visits and followed a specific exercise program. Women in the control group performed self-monitored PFM exercises at home. The primary outcome measure was self-reported improvement. Secondary outcome measures were 20-min pad test, 3-day voiding diary, maximal vaginal squeeze pressure, holding time and quality of life. After a 4-month intervention period, more participants in the training group reported that they were cured or improved (p < 0.01). The cure/improved rate was above 90%. Both amount of leakage and number of leaks were significantly lower in the training group (p < 0.05) but not in the control group. More aspects of quality of life improved significantly in the training group than in the control group. Maximal vaginal squeeze pressure, however, decreased slightly in both groups. Coordinated retraining diaphragmatic, deep abdominal and PFM function could improve symptoms and quality of life. It may be an alternative management for women with SUI or MUI.",
"title": ""
},
{
"docid": "ef9df30505ee9c593af81284293e58f9",
"text": "The coding by which chromosomes represent candidate solutions is a fundamental design choice in a genetic algorithm. This paper describes a novel coding of spanning trees in a genetic algorithm for the degree-constrained minimum spanning tree problem. For a connected, weighted graph, this problem seeks to identify the shortest spanning tree whose degree does not exceed an upper bound k > 2. In the coding, chromosomes are strings of numerical weights associated with the target graph's vertices. The weights temporarily bias the graph's edge costs, and an extension of Prim's algorithm, applied to the biased costs, identifies the feasible spanning tree a chromosome represents. This decoding algorithm enforces the degree constraint, so that all chromosomes represent valid solutions and there is no need to discard, repair, or penalize invalid chromosomes. On a set of hard graphs whose unconstrained minimum spanning trees are of high degree, a genetic algorithm that uses this coding identifies degree-constrained minimum spanning trees that are on average shorter than those found by several competing algorithms.",
"title": ""
},
{
"docid": "24f110f2b34e9da32fbd78ad242808bc",
"text": "BACKGROUND\nSurvey research including multiple health indicators requires brief indices for use in cross-cultural studies, which have, however, rarely been tested in terms of their psychometric quality. Recently, the EUROHIS-QOL 8-item index was developed as an adaptation of the WHOQOL-100 and the WHOQOL-BREF. The aim of the current study was to test the psychometric properties of the EUROHIS-QOL 8-item index.\n\n\nMETHODS\nIn a survey on 4849 European adults, the EUROHIS-QOL 8-item index was assessed across 10 countries, with equal samples adjusted for selected sociodemographic data. Participants were also investigated with a chronic condition checklist, measures on general health perception, mental health, health-care utilization and social support.\n\n\nRESULTS\nFindings indicated good internal consistencies across a range of countries, showing acceptable convergent validity with physical and mental health measures, and the measure discriminates well between individuals that report having a longstanding condition and healthy individuals across all countries. Differential item functioning was less frequently observed in those countries that were geographically and culturally closer to the UK, but acceptable across all countries. A universal one-factor structure with a good fit in structural equation modelling analyses (SEM) was identified with, however, limitations in model fit for specific countires.\n\n\nCONCLUSIONS\nThe short EUROHIS-QOL 8-item index showed good cross-cultural field study performance and a satisfactory convergent and discriminant validity, and can therefore be recommended for use in public health research. In future studies the measure should also be tested in multinational clinical studies, particularly in order to test its sensitivity.",
"title": ""
},
{
"docid": "228f2487760407daf669676ce3677609",
"text": "The limitation of using low electron doses in non-destructive cryo-electron tomography of biological specimens can be partially offset via averaging of aligned and structurally homogeneous subsets present in tomograms. This type of sub-volume averaging is especially challenging when multiple species are present. Here, we tackle the problem of conformational separation and alignment with a \"collaborative\" approach designed to reduce the effect of the \"curse of dimensionality\" encountered in standard pair-wise comparisons. Our new approach is based on using the nuclear norm as a collaborative similarity measure for alignment of sub-volumes, and by exploiting the presence of symmetry early in the processing. We provide a strict validation of this method by analyzing mixtures of intact simian immunodeficiency viruses SIV mac239 and SIV CP-MAC. Electron microscopic images of these two virus preparations are indistinguishable except for subtle differences in conformation of the envelope glycoproteins displayed on the surface of each virus particle. By using the nuclear norm-based, collaborative alignment method presented here, we demonstrate that the genetic identity of each virus particle present in the mixture can be assigned based solely on the structural information derived from single envelope glycoproteins displayed on the virus surface.",
"title": ""
},
{
"docid": "774d5a1072fc18229975f1886afe2caa",
"text": "Previous studies have shown that with advancing age the size of the dental pulp cavity is reduced as a result of secondary dentine deposit, so that measurements of this reduction can be used as an indicator of age. The aim of the present study was to find a method which could be used to estimate the chronological age of an adult from measurements of the size of the pulp on full mouth dental radiographs. The material consisted of periapical radiographs from 100 dental patients who had attended the clinics of the Dental Faculty in Oslo. The radiographs of six types of teeth from each jaw were measured: maxillary central and lateral incisors and second premolars, and mandibular lateral incisors, canines and first premolars. To compensate for differences in magnification and angulation on the radiographs, the following ratios were calculated: pulp/root length, pulp/tooth length, tooth/root length and pulp/root width at three different levels. Statistical analyses showed that Pearson's correlation coefficient between age and the different ratios for each type of tooth was significant, except for the ratio between tooth and root length, which was, therefore, excluded from further analysis. Principal component analyses were performed on all ratios, followed by regression analyses with age as dependent variable and the principal components as independent variables. The principal component analyses showed that only the two first of them had significant influence on age, and a good and easily calculated approximation to the first component was found to be the mean of all the ratios. A good approximation to the second principal component was found to be the difference between the mean of two width ratios and the mean of two length ratios, and these approximations of the first and second principal components were chosen as predictors in regression analyses with age as the dependent variable. The coefficient of determination (r2) for the estimation was strongest when the ratios of the six teeth were included (r2 = 0.76) and weakest when measurements from the mandibular canines alone were included (r2 = 0.56). Measurement on dental radiographs may be a non-invasive technique for estimating the age of adults, both living and dead, in forensic work and in archaeological studies, but the method ought to be tested on an independent sample.",
"title": ""
}
] |
scidocsrr
|
043e9a62e6874e6f0e3a92f1b5d5cd25
|
Gamifying education: what is known, what is believed and what remains uncertain: a critical review
|
[
{
"docid": "bda419b065c53853f86f7fdbf0e330f2",
"text": "In current e-learning studies, one of the main challenges is to keep learners motivated in performing desirable learning behaviours and achieving learning goals. Towards tackling this challenge, social e-learning contributes favourably, but it requires solutions that can reduce side effects, such as abusing social interaction tools for ‘chitchat’, and further enhance learner motivation. In this paper, we propose a set of contextual gamification strategies, which apply flow and self-determination theory for increasing intrinsic motivation in social e-learning environments. This paper also presents a social e-learning environment that applies these strategies, followed by a user case study, which indicates increased learners’ perceived intrinsic motivation.",
"title": ""
}
] |
[
{
"docid": "c4f0e371ea3950e601f76f8d34b736e3",
"text": "Discretization is an essential preprocessing technique used in many knowledge discovery and data mining tasks. Its main goal is to transform a set of continuous attributes into discrete ones, by associating categorical values to intervals and thus transforming quantitative data into qualitative data. In this manner, symbolic data mining algorithms can be applied over continuous data and the representation of information is simplified, making it more concise and specific. The literature provides numerous proposals of discretization and some attempts to categorize them into a taxonomy can be found. However, in previous papers, there is a lack of consensus in the definition of the properties and no formal categorization has been established yet, which may be confusing for practitioners. Furthermore, only a small set of discretizers have been widely considered, while many other methods have gone unnoticed. With the intention of alleviating these problems, this paper provides a survey of discretization methods proposed in the literature from a theoretical and empirical perspective. From the theoretical perspective, we develop a taxonomy based on the main properties pointed out in previous research, unifying the notation and including all the known methods up to date. Empirically, we conduct an experimental study in supervised classification involving the most representative and newest discretizers, different types of classifiers, and a large number of data sets. The results of their performances measured in terms of accuracy, number of intervals, and inconsistency have been verified by means of nonparametric statistical tests. Additionally, a set of discretizers are highlighted as the best performing ones.",
"title": ""
},
{
"docid": "9f6429ac22b736bd988a4d6347d8475f",
"text": "The purpose of this paper is to defend the systematic introduction of formal ontological principles in the current practice of knowledge engineering, to explore the various relationships between ontology and knowledge representation, and to present the recent trends in this promising research area. According to the \"modelling view\" of knowledge acquisition proposed by Clancey, the modeling activity must establish a correspondence between a knowledge base and two separate subsystems: the agent's behavior (i.e. the problem-solving expertize) and its own environment (the problem domain). Current knowledge modelling methodologies tend to focus on the former subsystem only, viewing domain knowledge as strongly dependent on the particular task at hand: in fact, AI researchers seem to have been much more interested in the nature of reasoning rather than in the nature of the real world. Recently, however, the potential value of task-independent knowlege bases (or \"ontologies\") suitable to large scale integration has been underlined in many ways. In this paper, we compare the dichotomy between reasoning and representation to the philosophical distinction between epistemology and ontology. We introduce the notion of the ontological level, intermediate between the epistemological and the conceptual level discussed by Brachman, as a way to characterize a knowledge representation formalism taking into account the intended meaning of its primitives. We then discuss some formal ontological distinctions which may play an important role for such purpose.",
"title": ""
},
{
"docid": "5967c7705173ee346b4d47eb7422df20",
"text": "A novel learnable dictionary encoding layer is proposed in this paper for end-to-end language identification. It is inline with the conventional GMM i-vector approach both theoretically and practically. We imitate the mechanism of traditional GMM training and Supervector encoding procedure on the top of CNN. The proposed layer can accumulate high-order statistics from variable-length input sequence and generate an utterance level fixed-dimensional vector representation. Unlike the conventional methods, our new approach provides an end-to-end learning framework, where the inherent dictionary are learned directly from the loss function. The dictionaries and the encoding representation for the classifier are learned jointly. The representation is orderless and therefore appropriate for language identification. We conducted a preliminary experiment on NIST LRE07 closed-set task, and the results reveal that our proposed dictionary encoding layer achieves significant error reduction comparing with the simple average pooling.",
"title": ""
},
{
"docid": "41a0b9797c556368f84e2a05b80645f3",
"text": "This paper describes and evaluates log-linear parsing models for Combinatory Categorial Grammar (CCG). A parallel implementation of the L-BFGS optimisation algorithm is described, which runs on a Beowulf cluster allowing the complete Penn Treebank to be used for estimation. We also develop a new efficient parsing algorithm for CCG which maximises expected recall of dependencies. We compare models which use all CCG derivations, including nonstandard derivations, with normal-form models. The performances of the two models are comparable and the results are competitive with existing wide-coverage CCG parsers.",
"title": ""
},
{
"docid": "5006770c9f7a6fb171a060ad3d444095",
"text": "We developed a 56-GHz-bandwidth 2.0-Vppd linear MZM driver in 65-nm CMOS. It consumes only 180 mW for driving a 50-Ω impedance. We demonstrated the feasibility of drivers with less than 1 W for dual-polarization IQ modulation in 400-Gb/s systems.",
"title": ""
},
{
"docid": "881a495a8329c71a0202c3510e21b15d",
"text": "We apply basic statistical reasoning to signal reconstruction by machine learning – learning to map corrupted observations to clean signals – with a simple and powerful conclusion: it is possible to learn to restore images by only looking at corrupted examples, at performance at and sometimes exceeding training using clean data, without explicit image priors or likelihood models of the corruption. In practice, we show that a single model learns photographic noise removal, denoising synthetic Monte Carlo images, and reconstruction of undersampled MRI scans – all corrupted by different processes – based on noisy data only.",
"title": ""
},
{
"docid": "57ab94ce902f4a8b0082cc4f42cd3b3f",
"text": "In this work, we present a novel counter-fitting method which injects antonymy and synonymy constraints into vector space representations in order to improve the vectors’ capability for judging semantic similarity. Applying this method to publicly available pre-trained word vectors leads to a new state of the art performance on the SimLex-999 dataset. We also show how the method can be used to tailor the word vector space for the downstream task of dialogue state tracking, resulting in robust improvements across different dialogue domains.",
"title": ""
},
{
"docid": "0425ba0d95b98409d684b9b07b59b73a",
"text": "With a shift towards usage-based billing, the questions of how data costs affect mobile Internet use and how users manage mobile data arise. In this paper, we describe a mixed-methods study of mobile phone users' data usage practices in South Africa, a country where usage-based billing is prevalent and where data costs are high, to answer these questions. We do so using a large scale survey, in-depth interviews, and logs of actual data usage over time. Our findings suggest that unlike in more developed settings, when data is limited or expensive, mobile Internet users are extremely cost-conscious, and employ various strategies to optimize mobile data usage such as actively disconnecting from the mobile Internet to save data. Based on these findings, we suggest how the Ubicomp and related research communities can better support users that need to carefully manage their data to optimize costs.",
"title": ""
},
{
"docid": "1ac76924d3fae2bbcb7f7b84f1c2ea5e",
"text": "This chapter studies ontology matching : the problem of finding the semantic mappings between two given ontologies. This problem lies at the heart of numerous information processing applications. Virtually any application that involves multiple ontologies must establish semantic mappings among them, to ensure interoperability. Examples of such applications arise in myriad domains, including e-commerce, knowledge management, e-learning, information extraction, bio-informatics, web services, and tourism (see Part D of this book on ontology applications). Despite its pervasiveness, today ontology matching is still largely conducted by hand, in a labor-intensive and error-prone process. The manual matching has now become a key bottleneck in building large-scale information management systems. The advent of technologies such as the WWW, XML, and the emerging Semantic Web will further fuel information sharing applications and exacerbate the problem. Hence, the development of tools to assist in the ontology matching process has become crucial for the success of a wide variety of information management applications. In response to the above challenge, we have developed GLUE, a system that employs learning techniques to semi-automatically create semantic mappings between ontologies. We shall begin the chapter by describing a motivating example: ontology matching on the Semantic Web. Then we present our GLUE solution. Finally, we describe a set of experiments on several real-world domains, and show that GLUE proposes highly accurate semantic mappings.",
"title": ""
},
{
"docid": "b33c7e26d3a0a8fc7fc0fb73b72840d4",
"text": "As the number of Android malicious applications has explosively increased, effectively vetting Android applications (apps) has become an emerging issue. Traditional static analysis is ineffective for vetting apps whose code have been obfuscated or encrypted. Dynamic analysis is suitable to deal with the obfuscation and encryption of codes. However, existing dynamic analysis methods cannot effectively vet the applications, as a limited number of dynamic features have been explored from apps that have become increasingly sophisticated. In this work, we propose an effective dynamic analysis method called DroidWard in the aim to extract most relevant and effective features to characterize malicious behavior and to improve the detection accuracy of malicious apps. In addition to using the existing 9 features, DroidWard extracts 6 novel types of effective features from apps through dynamic analysis. DroidWard runs apps, extracts features and identifies benign and malicious apps with Support Vector Machine (SVM), Decision Tree (DTree) and Random Forest. 666 Android apps are used in the experiments and the evaluation results show that DroidWard correctly classifies 98.54% of malicious apps with 1.55% of false positives. Compared to existing work, DroidWard improves the TPR with 16.07% and suppresses the FPR with 1.31% with SVM, indicating that it is more effective than existing methods.",
"title": ""
},
{
"docid": "de0482515de1d6134b8ff907be49d4dc",
"text": "In this paper, we describe the Adaptive Place Advi sor, a conversational recommendation system designed to he lp users decide on a destination. We view the selection of destinations a an interactive, conversational process, with the advisory system in quiring about desired item characteristics and the human responding. The user model, which contains preferences regarding items, attributes, values and v lue combinations, is also acquired during the conversation. The system enhanc es the user’s requirements with the user model and retrieves suitable items fr om a case-base. If the number of items found by the system is unsuitable (too hig h, too low) the next attribute to be constrained or relaxed is selected based on t he information gain associated with the attributes. We also describe the current s tatu of the system and future work.",
"title": ""
},
{
"docid": "c629dfdd363f1599d397ccde1f7be360",
"text": "We propose a classification taxonomy over a large crawl of HTML tables on the Web, focusing primarily on those tables that express structured knowledge. The taxonomy separates tables into two top-level classes: a) those used for layout purposes, including navigational and formatting; and b) those containing relational knowledge, including listings, attribute/value, matrix, enumeration, and form. We then propose a classification algorithm for automatically detecting a subset of the classes in our taxonomy, namely layout tables and attribute/value tables. We report on the performance of our system over a large sample of manually annotated HTML tables on the Web.",
"title": ""
},
{
"docid": "95fa1dac07ce26c1ccd64a9c86c96a22",
"text": "Eyelid bags are the result of relaxation of lid structures like the skin, the orbicularis muscle, and mainly the septum, with subsequent protrusion or pseudo herniation of intraorbital fat contents. The logical treatment of baggy upper and lower eyelids should therefore include repositioning the herniated fat into the orbit and strengthening the attenuated septum in the form of a septorhaphy as a hernia repair. The preservation of orbital fat results in a more youthful appearance. The operative technique of the orbital septorhaphy is demonstrated for the upper and lower eyelid. A prospective series of 60 patients (50 upper and 90 lower blepharoplasties) with a maximum follow-up of 17 months were analyzed. Pleasing results were achieved in 56 patients. A partial recurrence was noted in 3 patients and widening of the palpebral fissure in 1 patient. Orbital septorhaphy for baggy eyelids is a rational, reliable procedure to correct the herniation of orbital fat in the upper and lower eyelids. Tightening of the orbicularis muscle and skin may be added as usual. The procedure is technically simple and without trauma to the orbital contents. The morbidity is minimal, the rate of complications is low, and the results are pleasing and reliable.",
"title": ""
},
{
"docid": "a0e9e04a3b04c1974951821d44499fa7",
"text": "PURPOSE\nTo examine factors related to turnover of new graduate nurses in their first job.\n\n\nDESIGN\nData were obtained from a 3-year panel survey (2006-2008) of the Graduates Occupational Mobility Survey that followed-up college graduates in South Korea. The sample consisted of 351 new graduates whose first job was as a full-time registered nurse in a hospital.\n\n\nMETHODS\nSurvival analysis was conducted to estimate survival curves and related factors, including individual and family, nursing education, hospital, and job dissatisfaction (overall and 10 specific job aspects).\n\n\nFINDINGS\nThe estimated probabilities of staying in their first job for 1, 2, and 3 years were 0.823, 0.666, and 0.537, respectively. Nurses reporting overall job dissatisfaction had significantly lower survival probabilities than those who reported themselves to be either neutral or satisfied. Nurses were more likely to leave if they were married or worked in small (vs. large), nonmetropolitan, and nonunionized hospitals. Dissatisfaction with interpersonal relationships, work content, and physical work environment was associated with a significant increase in the hazards of leaving the first job.\n\n\nCONCLUSIONS\nHospital characteristics as well as job satisfaction were significantly associated with new graduates' turnover.\n\n\nCLINICAL RELEVANCE\nThe high turnover of new graduates could be reduced by improving their job satisfaction, especially with interpersonal relationships, work content, and the physical work environment.",
"title": ""
},
{
"docid": "7e251f86e41d01778a143c231304aa92",
"text": "Adequate representation of natural language semantics requires access to vast amounts of common sense and domain-specific world knowledge. Prior work in the field was based on purely statistical techniques that did not make use of background knowledge, on limited lexicographic knowledge bases such as WordNet, or on huge manual efforts such as the CYC project. Here we propose a novel method, called Explicit Semantic Analysis (ESA), for fine-grained semantic interpretation of unrestricted natural language texts. Our method represents meaning in a high-dimensional space of concepts derived from Wikipedia, the largest encyclopedia in existence. We explicitly represent the meaning of any text in terms of Wikipedia-based concepts. We evaluate the effectiveness of our method on text categorization and on computing the degree of semantic relatedness between fragments of natural language text. Using ESA results in significant improvements over the previous state of the art in both tasks. Importantly, due to the use of natural concepts, the ESA model is easy to explain to human users.",
"title": ""
},
{
"docid": "b11592d07491ef9e0f67e257bfba6d84",
"text": "Convolutional networks have achieved great success in various vision tasks. This is mainly due to a considerable amount of research on network structure. In this study, instead of focusing on architectures, we focused on the convolution unit itself. The existing convolution unit has a fixed shape and is limited to observing restricted receptive fields. In earlier work, we proposed the active convolution unit (ACU), which can freely define its shape and learn by itself. In this paper, we provide a detailed analysis of the previously proposed unit and show that it is an efficient representation of a sparse weight convolution. Furthermore, we extend an ACU to a grouped ACU, which can observe multiple receptive fields in one layer. We found that the performance of a naive grouped convolution is degraded by increasing the number of groups; however, the proposed unit retains the accuracy even though the number of parameters decreases. Based on this result, we suggest a depthwise ACU, and various experiments have shown that our unit is efficient and can replace the existing convolutions.",
"title": ""
},
{
"docid": "3d5eb503f837adffb4468548b3f76560",
"text": "Purpose This study investigates the impact of such contingency factors as top management support, business vision, and external expertise, on the one hand, and ERP system success, on the other. Design/methodology/approach A conceptual model was developed and relevant hypotheses formulated. Surveys were conducted in two Northern European countries and a structural equation modeling technique was used to analyze the data. Originality/value It is argued that ERP systems are different from other IT implementations; as such, there is a need to provide insights as to how the aforementioned factors play out in the context of ERP system success evaluations for adopting organizations. As was predicted, the results showed that the three contingency factors positively influence ERP system success. More importantly, the relative importance of quality external expertise over the other two factors for ERP initiatives was underscored. The implications of the findings for both practitioners and researchers are discussed.",
"title": ""
},
{
"docid": "a2bd543446fb86da6030ce7f46db9f75",
"text": "This paper presents a risk assessment algorithm for automatic lane change maneuvers on highways. It is capable of reliably assessing a given highway situation in terms of the possibility of collisions and robustly giving a recommendation for lane changes. The algorithm infers potential collision risks of observed vehicles based on Bayesian networks considering uncertainties of its input data. It utilizes two complementary risk metrics (time-to-collision and minimal safety margin) in temporal and spatial aspects to cover all risky situations that can occur for lane changes. In addition, it provides a robust recommendation for lane changes by filtering out uncertain noise data pertaining to vehicle tracking. The validity of the algorithm is tested and evaluated on public highways in real traffic as well as a closed high-speed test track in simulated traffic through in-vehicle testing based on overtaking and overtaken scenarios in order to demonstrate the feasibility of the risk assessment for automatic lane change maneuvers on highways.",
"title": ""
},
{
"docid": "75591d4da0b01f1890022b320cdab705",
"text": "Many lakes in boreal and arctic regions have high concentrations of CDOM (coloured dissolved organic matter). Remote sensing of such lakes is complicated due to very low water leaving signals. There are extreme (black) lakes where the water reflectance values are negligible in almost entire visible part of spectrum (400–700 nm) due to the absorption by CDOM. In these lakes, the only water-leaving signal detectable by remote sensing sensors occurs as two peaks—near 710 nm and 810 nm. The first peak has been widely used in remote sensing of eutrophic waters for more than two decades. We show on the example of field radiometry data collected in Estonian and Swedish lakes that the height of the 810 nm peak can also be used in retrieving water constituents from remote sensing data. This is important especially in black lakes where the height of the 710 nm peak is still affected by CDOM. We have shown that the 810 nm peak can be used also in remote sensing of a wide variety of lakes. The 810 nm peak is caused by combined effect of slight decrease in absorption by water molecules and backscattering from particulate material in the water. Phytoplankton was the dominant particulate material in most of the studied lakes. Therefore, the height of the 810 peak was in good correlation with all proxies of phytoplankton biomass—chlorophyll-a (R2 = 0.77), total suspended matter (R2 = 0.70), and suspended particulate organic matter (R2 = 0.68). There was no correlation between the peak height and the suspended particulate inorganic matter. Satellite sensors with sufficient spatial and radiometric resolution for mapping lake water quality (Landsat 8 OLI and Sentinel-2 MSI) were launched recently. In order to test whether these satellites can capture the 810 nm peak we simulated the spectral performance of these two satellites from field radiometry data. Actual satellite imagery from a black lake was also used to study whether these sensors can detect the peak despite their band configuration. Sentinel 2 MSI has a nearly perfectly positioned band at 705 nm to characterize the 700–720 nm peak. We found that the MSI 783 nm band can be used to detect the 810 nm peak despite the location of this band is not in perfect to capture the peak.",
"title": ""
},
{
"docid": "64fbd2207a383bc4b04c66e8ee867922",
"text": "Ultra compact, short pulse, high voltage, high current pulsers are needed for a variety of non-linear electrical and optical applications. With a fast risetime and short pulse width, these drivers are capable of producing sub-nanosecond electrical and thus optical pulses by gain switching semiconductor laser diodes. Gain-switching of laser diodes requires a sub-nanosecond pulser capable of driving a low output impedance (5 /spl Omega/ or less). Optical pulses obtained had risetimes as fast as 20 ps. The designed pulsers also could be used for triggering photo-conductive semiconductor switches (PCSS), gating high speed optical imaging systems, and providing electrical and optical sources for fast transient sensor applications. Building on concepts from Lawrence Livermore National Laboratory, the development of pulsers based on solid state avalanche transistors was adapted to drive low impedances. As each successive stage is avalanched in the circuit, the amount of overvoltage increases, increasing the switching speed and improving the turn on time of the output pulse at the final stage. The output of the pulser is coupled into the load using a Blumlein configuration.",
"title": ""
}
] |
scidocsrr
|
ee1227e6fb7a9da21f3e3fcaeaac75ed
|
Decomposing Petri nets for process mining: A generic approach
|
[
{
"docid": "2c92948916257d9b164e7d65aa232d3e",
"text": "Contemporary workflow management systems are driven by explicit process models, i.e., a completely specified workflow design is required in order to enact a given workflow process. Creating a workflow design is a complicated time-consuming process and typically, there are discrepancies between the actual workflow processes and the processes as perceived by the management. Therefore, we propose a technique for rediscovering workflow models. This technique uses workflow logs to discover the workflow process as it is actually being executed. The workflow log contains information about events taking place. We assume that these events are totally ordered and each event refers to one task being executed for a single case. This information can easily be extracted from transactional information systems (e.g., Enterprise Resource Planning systems such as SAP and Baan). The rediscovering technique proposed in this paper can deal with noise and can also be used to validate workflow processes by uncovering and measuring the discrepancies between prescriptive models and actual process executions.",
"title": ""
}
] |
[
{
"docid": "fc29f8e0d932140b5f48b35e4175b51a",
"text": "A three-dimensional (3D) geometric model obtained from a 3D device or other approaches is not necessarily watertight due to the presence of geometric deficiencies. These inadequacies must be repaired to create a valid surface mesh on the model as a pre-process of computational engineering analyses. This procedure has been a tedious and labor-intensive step, as there are many kinds of deficiencies that can make the geometry to be nonwatertight, such as gaps and holes. It is still challenging to repair discrete surface models based on available geometric information. The focus of this paper is to develop a new automated method for patching holes on the surface models in order to achieve watertightness. It describes a numerical algorithm utilizing Non-Uniform Rational B-Splines (NURBS) surfaces to generate smooth triangulated surface patches for topologically simple holes on discrete surface models. The Delaunay criterion for point insertion and edge swapping is used in this algorithm to improve the outcome. Surface patches are generated based on existing points surrounding the holes without altering them. The watertight geometry produced can be used in a wide range of engineering applications in the field of computational engineering simulation studies.",
"title": ""
},
{
"docid": "564e1cdb388fd8b1f959dbbc0c8ef302",
"text": "This paper proposes the design optimization procedure of three-phase interior permanent magnet (IPM) synchronous motors with minimum weight, maximum power output, and suitability for wide constant-power region operation. The particular rotor geometry of the IPM synchronous motor and the presence of several variables and constraints make the design problem very complicated. The authors propose to combine an accurate finite-element analysis with a multiobjective optimization procedure using a new algorithm belonging to the class of controlled random search algorithms. The optimization procedure has been employed to design two IPM motors for industrial application and a city electrical scooter. A prototype has been realized and tested. The comparison between the predicted and measured performances shows the reliability of the simulation results and the effectiveness, versatility, and robustness of the proposed procedure.",
"title": ""
},
{
"docid": "c26eabb377db5f1033ec6d354d890a6f",
"text": "Recurrent neural networks have recently shown significant potential in different language applications, ranging from natural language processing to language modelling. This paper introduces a research effort to use such networks to develop and evaluate natural language acquisition on a humanoid robot. Here, the problem is twofold. First, the focus will be put on using the gesture-word combination stage observed in infants to transition from single to multi-word utterances. Secondly, research will be carried out in the domain of connecting action learning with language learning. In the former, the long-short term memory architecture will be implemented, whilst in the latter multiple time-scale recurrent neural networks will be used. This will allow for comparison between the two architectures, whilst highlighting the strengths and shortcomings of both with respect to the language learning problem. Here, the main research efforts, challenges and expected outcomes are described.",
"title": ""
},
{
"docid": "1ff51e3f6b73aa6fe8eee9c1fb404e4e",
"text": "The goal of this article is to review the state-of-the-art tracking methods, classify them into different categories, and identify new trends. Object tracking, in general, is a challenging problem. Difficulties in tracking objects can arise due to abrupt object motion, changing appearance patterns of both the object and the scene, nonrigid object structures, object-to-object and object-to-scene occlusions, and camera motion. Tracking is usually performed in the context of higher-level applications that require the location and/or shape of the object in every frame. Typically, assumptions are made to constrain the tracking problem in the context of a particular application. In this survey, we categorize the tracking methods on the basis of the object and motion representations used, provide detailed descriptions of representative methods in each category, and examine their pros and cons. Moreover, we discuss the important issues related to tracking including the use of appropriate image features, selection of motion models, and detection of objects.",
"title": ""
},
{
"docid": "7c3c06529ae52055de668cbefce39c5f",
"text": "Context-aware recommendation algorithms focus on refining recommendations by considering additional information, available to the system. This topic has gained a lot of attention recently. Among others, several factorization methods were proposed to solve the problem, although most of them assume explicit feedback which strongly limits their real-world applicability. While these algorithms apply various loss functions and optimization strategies, the preference modeling under context is less explored due to the lack of tools allowing for easy experimentation with various models. As context dimensions are introduced beyond users and items, the space of possible preference models and the importance of proper modeling largely increases. In this paper we propose a general factorization framework (GFF), a single flexible algorithm that takes the preference model as an input and computes latent feature matrices for the input dimensions. GFF allows us to easily experiment with various linear models on any context-aware recommendation task, be it explicit or implicit feedback based. The scaling properties makes it usable under real life circumstances as well. We demonstrate the framework’s potential by exploring various preference models on a 4-dimensional context-aware problem with contexts that are available for almost any real life datasets. We show in our experiments—performed on five real life, implicit feedback datasets—that proper preference modelling significantly increases recommendation accuracy, and previously unused models outperform the traditional ones. Novel models in GFF also outperform state-of-the-art factorization algorithms. We also extend the method to be fully compliant to the Multidimensional Dataspace Model, one of the most extensive data models of context-enriched data. Extended GFF allows the seamless incorporation of information into the factorization framework beyond context, like item metadata, social networks, session information, etc. Preliminary experiments show great potential of this capability.",
"title": ""
},
{
"docid": "93550eb8a3e703d54ed627bf2d63d79e",
"text": "This paper presents a comparison of methods for transforming voice quality in neutral synthetic speech to match cheerful, aggressive, and depressed expressive styles. Neutral speech is generated using the unit selection system in the MARY TTS platform and a large neutral database in German. The output is modified using voice conversion techniques to match the target expressive styles, the focus being on spectral envelope conversion for transforming the overall voice quality. Various improvements over the state-of-the-art weighted codebook mapping and GMM based voice conversion frameworks are employed resulting in three algorithms. Objective evaluation results show that all three methods result in comparable reduction in objective distance to target expressive TTS outputs whereas weighted frame mapping and GMM based transformations were perceived slightly better than the weighted codebook mapping outputs in generating the target expressive style in a listening test.",
"title": ""
},
{
"docid": "7cff04976bf78c5d8a1b4338b2107482",
"text": "Classifiers trained on given databases perform poorly when tested on data acquired in different settings. This is explained in domain adaptation through a shift among distributions of the source and target domains. Attempts to align them have traditionally resulted in works reducing the domain shift by introducing appropriate loss terms, measuring the discrepancies between source and target distributions, in the objective function. Here we take a different route, proposing to align the learned representations by embedding in any given network specific Domain Alignment Layers, designed to match the source and target feature distributions to a reference one. Opposite to previous works which define a priori in which layers adaptation should be performed, our method is able to automatically learn the degree of feature alignment required at different levels of the deep network. Thorough experiments on different public benchmarks, in the unsupervised setting, confirm the power of our approach.",
"title": ""
},
{
"docid": "c5dd31facf6d1f7709d58e7b0ddc0bab",
"text": "Website fingerprinting attacks allow a local, passive eavesdropper to identify a web browsing client’s destination web page by extracting noticeable and unique features from her traffic. Such attacks magnify the gap between privacy and security — a client who encrypts her communication traffic may still have her browsing behaviour exposed to lowcost eavesdropping. Previous authors have shown that privacysensitive clients who use anonymity technologies such as Tor are susceptible to website fingerprinting attacks, and some attacks have been shown to outperform others in specific experimental conditions. However, as these attacks differ in data collection, feature extraction and experimental setup, they cannot be compared directly. On the other side of the coin, proposed website fingerprinting defenses (countermeasures) are generally designed and tested only against specific attacks. Some defenses have been shown to fail against more advanced attacks, and it is unclear which defenses would be effective against all attacks. In this paper, we propose a feature-based comparative methodology that allows us to systematize attacks and defenses in order to compare them. We analyze attacks for their sensitivity to different packet sequence features, and analyze the effect of proposed defenses on these features by measuring whether or not the features are hidden. If a defense fails to hide a feature that an attack is sensitive to, then the defense will not work against this attack. Using this methodology, we propose a new network layer defense that can more effectively hide all of the features we consider.",
"title": ""
},
{
"docid": "aa94b089a665049e594c0a66a98ba099",
"text": "Brain endocannabinoid (eCB) signalling influences the motivation for natural rewards (such as palatable food, sexual activity and social interaction) and modulates the rewarding effects of addictive drugs. Pathological forms of natural and drug-induced reward are associated with dysregulated eCB signalling that may derive from pre-existing genetic factors or from prolonged drug exposure. Impaired eCB signalling contributes to dysregulated synaptic plasticity, increased stress responsivity, negative emotional states and cravings that propel addiction. Understanding the contributions of eCB disruptions to behavioural and physiological traits provides insight into the eCB influence on addiction vulnerability.",
"title": ""
},
{
"docid": "71ca5a461ff8eb6fc33c1a272c4acfac",
"text": "We introduce a tree manipulation language, Fast, that overcomes technical limitations of previous tree manipulation languages, such as XPath and XSLT which do not support precise program analysis, or TTT and Tiburon which only support trees over finite alphabets. At the heart of Fast is a combination of SMT solvers and tree transducers, enabling it to model programs whose input and output can range over any decidable theory. The language can express multiple applications. We write an HTML “sanitizer” in Fast and obtain results comparable to leading libraries but with smaller code. Next we show how augmented reality “tagging” applications can be checked for potential overlap in milliseconds using Fast type checking. We show how transducer composition enables deforestation for improved performance. Overall, we strike a balance between expressiveness and precise analysis that works for a large class of important tree-manipulating programs.",
"title": ""
},
{
"docid": "066e0f4902bb4020c6d3fad7c06ee519",
"text": "Automatic traffic light detection (TLD) plays an important role for driver-assistance system and autonomous vehicles. State-of-the-art TLD systems showed remarkable results by exploring visual information from static frames. However, traffic lights from different countries, regions, and manufactures are always visually distinct. The existing large intra-class variance makes the pre-trained detectors perform good on one dataset but fail on the others with different origins. One the other hand, LED traffic lights are widely used because of better energy efficiency. Based on the observation LED traffic light flashes in proportion to the input AC power frequency, we propose a hybrid TLD approach which combines the temporally frequency analysis and visual information using high-speed camera. Exploiting temporal information is shown to be very effective in the experiments. It is considered to be more robust than visual information-only methods.",
"title": ""
},
{
"docid": "7d8b256565f44be75e5d23130573580c",
"text": "Even the support vector machine (SVM) has been proposed to provide a good generalization performance, the classification result of the practically implemented SVM is often far from the theoretically expected level because their implementations are based on the approximated algorithms due to the high complexity of time and space. To improve the limited classification performance of the real SVM, we propose to use the SVM ensembles with bagging (bootstrap aggregating). Each individual SVM is trained independently using the randomly chosen training samples via a bootstrap technique. Then, they are aggregated into to make a collective decision in several ways such as the majority voting, the LSE(least squares estimation)-based weighting, and the double-layer hierarchical combining. Various simulation results for the IRIS data classification and the hand-written digit recognitionshow that the proposed SVM ensembles with bagging outperforms a single SVM in terms of classification accuracy greatly.",
"title": ""
},
{
"docid": "a5107c14ae046fb7742c69b14d223892",
"text": "PURPOSE\nThe tumor microenvironment is formed by many distinct and interacting cell populations, and its composition may predict patients' prognosis and response to therapies. Colorectal cancer is a heterogeneous disease in which immune classifications and four consensus molecular subgroups (CMS) have been described. Our aim was to integrate the composition of the tumor microenvironment with the consensus molecular classification of colorectal cancer.\n\n\nEXPERIMENTAL DESIGN\nWe retrospectively analyzed the composition and the functional orientation of the immune, fibroblastic, and angiogenic microenvironment of 1,388 colorectal cancer tumors from three independent cohorts using transcriptomics. We validated our findings using immunohistochemistry.\n\n\nRESULTS\nWe report that colorectal cancer molecular subgroups and microenvironmental signatures are highly correlated. Out of the four molecular subgroups, two highly express immune-specific genes. The good-prognosis microsatellite instable-enriched subgroup (CMS1) is characterized by overexpression of genes specific to cytotoxic lymphocytes. In contrast, the poor-prognosis mesenchymal subgroup (CMS4) expresses markers of lymphocytes and of cells of monocytic origin. The mesenchymal subgroup also displays an angiogenic, inflammatory, and immunosuppressive signature, a coordinated pattern that we also found in breast (n = 254), ovarian (n = 97), lung (n = 80), and kidney (n = 143) cancers. Pathologic examination revealed that the mesenchymal subtype is characterized by a high density of fibroblasts that likely produce the chemokines and cytokines that favor tumor-associated inflammation and support angiogenesis, resulting in a poor prognosis. In contrast, the canonical (CMS2) and metabolic (CMS3) subtypes with intermediate prognosis exhibit low immune and inflammatory signatures.\n\n\nCONCLUSIONS\nThe distinct immune orientations of the colorectal cancer molecular subtypes pave the way for tailored immunotherapies. Clin Cancer Res; 22(16); 4057-66. ©2016 AACR.",
"title": ""
},
{
"docid": "6cb2e41787378eca0dbbc892f46274e5",
"text": "Both reviews and user-item interactions (i.e., rating scores) have been widely adopted for user rating prediction. However, these existing techniques mainly extract the latent representations for users and items in an independent and static manner. That is, a single static feature vector is derived to encode user preference without considering the particular characteristics of each candidate item. We argue that this static encoding scheme is incapable of fully capturing users’ preferences, because users usually exhibit different preferences when interacting with different items. In this article, we propose a novel context-aware user-item representation learning model for rating prediction, named CARL. CARL derives a joint representation for a given user-item pair based on their individual latent features and latent feature interactions. Then, CARL adopts Factorization Machines to further model higher order feature interactions on the basis of the user-item pair for rating prediction. Specifically, two separate learning components are devised in CARL to exploit review data and interaction data, respectively: review-based feature learning and interaction-based feature learning. In the review-based learning component, with convolution operations and attention mechanism, the pair-based relevant features for the given user-item pair are extracted by jointly considering their corresponding reviews. However, these features are only reivew-driven and may not be comprehensive. Hence, an interaction-based learning component further extracts complementary features from interaction data alone, also on the basis of user-item pairs. The final rating score is then derived with a dynamic linear fusion mechanism. Experiments on seven real-world datasets show that CARL achieves significantly better rating prediction accuracy than existing state-of-the-art alternatives. Also, with the attention mechanism, we show that the pair-based relevant information (i.e., context-aware information) in reviews can be highlighted to interpret the rating prediction for different user-item pairs.",
"title": ""
},
{
"docid": "ef4fe854c263735dba35a45f9058ee05",
"text": "The flexor digitorum profundus, extensor digitorum communis and lumbrical muscle of the human hand play a significant role in the movement of the finger. The structure consisting of these muscles and tendons is important to consider an anthropomorphic tendon-driven finger. However, there are some problems to apply the structure found in humans to robotic fingers using mechanical elements. One of them is that the origin of the lumbrical muscle is not on any bones but on the tendon of the flexor digitorum profundus. Another is the non-constant length of the moment arm of the lateral band at the proximal interphalangeal (PIP) joint. We propose a design based on the kinematic model proposed by Leijnse et al. [1] considering the equalization of the joint torques. The proposed model can be easily realized by a structure consisting of actuators fixed to a base and a tendon-pulley system that maintains the function of those three muscle and their tendons.",
"title": ""
},
{
"docid": "8ccca373252c045107753081db3de051",
"text": "We describe a computer system that provides a real-time musical accompaniment for a live soloist in a piece of non-improvised music for soloist and accompaniment. A Bayesian network is developed that represents the joint distribution on the times at which the solo and accompaniment notes are played, relating the two parts through a layer of hidden variables. The network is first constructed using the rhythmic information contained in the musical score. The network is then trained to capture the musical interpretations of the soloist and accompanist in an off-line rehearsal phase. During live accompaniment the learned distribution of the network is combined with a real-time analysis of the soloist's acoustic signal, performed with a hidden Markov model, to generate a musically principled accompaniment that respects all available sources of knowledge. A live demonstration will be provided.",
"title": ""
},
{
"docid": "2ae1dfeae3c6b8a1ca032198f2989aef",
"text": "This study enhances the existing literature on online trust by integrating the consumers’ product evaluations model and technology adoption model in e-commerce environments. In this study, we investigate how perceived value influences the perceptions of online trust among online buyers and their willingness to repurchase from the same website. This study proposes a research model that compares the relative importance of perceived value and online trust to perceived usefulness in influencing consumers’ repurchase intention. The proposed model is tested using data collected from online consumers of e-commerce. The findings show that although trust and ecommerce adoption components are critical in influencing repurchase intention, product evaluation factors are also important in determining repurchase intention. Perceived quality is influenced by the perceptions of competitive price and website reputation, which in turn influences perceived value; and perceived value, website reputation, and perceived risk influence online trust, which in turn influence repurchase intention. The findings also indicate that the effect of perceived usefulness on repurchase intention is not significant whereas perceived value and online trust are the major determinants of repurchase intention. Major theoretical contributions and practical implications are discussed.",
"title": ""
},
{
"docid": "9a1d8c06cedb5c876515679088f55ab5",
"text": "A 5-axis hybrid computer numerical controlled machine was developed using a 2 degree of freedom spherical parallel mechanism known as the Agile Eye. The hybrid machine design consisted of a 3-axis serial gantry type structure, with the Agile Eye being placed at the end of the Z-Axis to allow for machining on inclined planes. A control system was designed that controlled two kinematic systems, the 3-axis serial kinematic system and the 2-axis parallel kinematic system. This paper details the design of the agile eye, its kinematic models and the integration of the agile eye mechanism to create a functional hybrid machine.",
"title": ""
},
{
"docid": "0deda73c3cb7e87bcf3e1df0716e13d2",
"text": "The continuous development and extensive use of computed tomography (CT) in medical practice has raised a public concern over the associated radiation dose to the patient. Reducing the radiation dose may lead to increased noise and artifacts, which can adversely affect the radiologists’ judgment and confidence. Hence, advanced image reconstruction from low-dose CT data is needed to improve the diagnostic performance, which is a challenging problem due to its ill-posed nature. Over the past years, various low-dose CT methods have produced impressive results. However, most of the algorithms developed for this application, including the recently popularized deep learning techniques, aim for minimizing the mean-squared error (MSE) between a denoised CT image and the ground truth under generic penalties. Although the peak signal-to-noise ratio is improved, MSE- or weighted-MSE-based methods can compromise the visibility of important structural details after aggressive denoising. This paper introduces a new CT image denoising method based on the generative adversarial network (GAN) with Wasserstein distance and perceptual similarity. The Wasserstein distance is a key concept of the optimal transport theory and promises to improve the performance of GAN. The perceptual loss suppresses noise by comparing the perceptual features of a denoised output against those of the ground truth in an established feature space, while the GAN focuses more on migrating the data noise distribution from strong to weak statistically. Therefore, our proposed method transfers our knowledge of visual perception to the image denoising task and is capable of not only reducing the image noise level but also trying to keep the critical information at the same time. Promising results have been obtained in our experiments with clinical CT images.",
"title": ""
},
{
"docid": "acdd0043b764fe8bb9904ea6ca71e5cf",
"text": "We investigate the task of 2D articulated human pose estimation in unconstrained still images. This is extremely challenging because of variation in pose, anatomy, clothing, and imaging conditions. Current methods use simple models of body part appearance and plausible configurations due to limitations of available training data and constraints on computational expense. We show that such models severely limit accuracy. Building on the successful pictorial structure model (PSM) we propose richer models of both appearance and pose, using state-of-the-art discriminative classifiers without introducing unacceptable computational expense. We introduce a new annotated database of challenging consumer images, an order of magnitude larger than currently available datasets, and demonstrate over 50% relative improvement in pose estimation accuracy over a stateof-the-art method.",
"title": ""
}
] |
scidocsrr
|
f525dc14ff98dea41dda09c343b8703c
|
OVERCONFIDENCE IN CURRENCY MARKETS
|
[
{
"docid": "42b9ba3cf10ff879799ae0a4272e68fa",
"text": "This article argues that ( a ) ego, or self, is an organization of knowledge, ( b ) ego is characterized by cognitive biases strikingly analogous to totalitarian information-control strategies, and ( c ) these totalitarian-ego biases junction to preserve organization in cognitive structures. Ego's cognitive biases are egocentricity (self as the focus of knowledge), \"beneffectance\" (perception of responsibility for desired, but not undesired, outcomes), and cognitive conservatism (resistance to cognitive change). In addition to being pervasively evident in recent studies of normal human cognition, these three biases are found in actively functioning, higher level organizations of knowledge, perhaps best exemplified by theoretical paradigms in science. The thesis that egocentricity, beneffectance, and conservatism act to preserve knowledge organizations leads to the proposal of an intrapsychic analog of genetic evolution, which in turn provides an alternative to prevalent motivational and informational interpretations of cognitive biases. The ego rejects the unbearable idea together with its associated affect and behaves as if the idea had never occurred to the person a t all. (Freud, 1894/1959, p. 72) Alike with the individual and the group, the past is being continually re-made, reconstructed in the interests of the present. (Bartlett, 1932, p. 309) As historians of our own lives we seem to be, on the one hand, very inattentive and, on the other, revisionists who will justify the present by changing the past. (Wixon & Laird, 1976, p. 384) \"Who controls the past,\" ran the Party slogan, \"controls the future: who controls the present controls the past.\" (Orwell, 1949, p. 32) totalitarian, was chosen only with substantial reservation because of this label's pejorative connotations. Interestingly, characteristics that seem undesirable in a political system can nonetheless serve adaptively in a personal organization of knowledge. The conception of ego as an organization of knowledge synthesizes influences from three sources --empirical, literary, and theoretical. First, recent empirical demonstrations of self-relevant cognitive biases suggest that the biases play a role in some fundamental aspect of personality. Second, George Orwell's 1984 suggests the analogy between ego's biases and totalitarian information con&ol. Last, the theories of Loevinger (1976) and Epstein ( 1973 ) suggest the additional analogy between ego's organization and theoretical organizations of scientific knowledge. The first part of this article surveys evidence indicating that ego's cognitive biases are pervasive in and characteristic of normal personalities. The second part sets forth arguments for interpreting the biases as manifestations of an effectively functioning organization of knowledge. The last section develops an explanation for the totalitarian-ego biases by analyzing their role in maintaining cognitive organization and in supporting effective behavior. I . Three Cognitive Biases: Fabrication and Revision of Personal History Ego, as an organization of knowledge (a. conclusion to be developed later), serves the functions of What follows is a portrait of self (or ego-the terms observing (perceiving) and recording (rememberare used interchangeably) constructed by intering) personal experience; it can be characterized, weaving strands drawn from several areas of recent therefore, as a perssnal historian. Many findings research. 
The most striking features of the portrait are three cognitive biases, which correspond disturbingly to thought control and propaganda devices Acknowledgments are given at the end of the article. Requests for reprints should be sent to Anthony G. that are to be defining characteristics of Greenwald, Department of Psychology, Ohio State Univera totalitarian political system. The epithet for ego, sity, 404C West 17th Avenue, Columbus, Ohio 43210. Copyright 1980 by the American Psychological Association, Inc. 0003466X/80/3S07-0603$00.75 from recent research in personality, cognitive, and social psychology demonstrate that ego fabricates and revises history, thereby engaging in practices not ordinarily admired in historians. These lapses in personal scholarship, or cognitive biases, are discussed below in three categories: egocentricity (self perceived as more central to events than it is), \"beneffectance\" l (self perceived as selectively responsible for desired, but not undesired, outcomes), and conservatism (resistance to cognitive",
"title": ""
},
{
"docid": "b7944edc9e6704cbf59489f112f46c11",
"text": "The basic paradigm of asset pricing is in vibrant f lux. The purely rational approach is being subsumed by a broader approach based upon the psychology of investors. In this approach, security expected returns are determined by both risk and misvaluation. This survey sketches a framework for understanding decision biases, evaluates the a priori arguments and the capital market evidence bearing on the importance of investor psychology for security prices, and reviews recent models. The best plan is . . . to profit by the folly of others. — Pliny the Elder, from John Bartlett, comp. Familiar Quotations, 9th ed. 1901. IN THE MUDDLED DAYS BEFORE THE RISE of modern finance, some otherwisereputable economists, such as Adam Smith, Irving Fisher, John Maynard Keynes, and Harry Markowitz, thought that individual psychology affects prices.1 What if the creators of asset-pricing theory had followed this thread? Picture a school of sociologists at the University of Chicago proposing the Deficient Markets Hypothesis: that prices inaccurately ref lect all available information. A brilliant Stanford psychologist, call him Bill Blunte, invents the Deranged Anticipation and Perception Model ~or DAPM!, in which proxies for market misvaluation are used to predict security returns. Imagine the euphoria when researchers discovered that these mispricing proxies ~such * Hirshleifer is from the Fisher College of Business, The Ohio State University. This survey was written for presentation at the American Finance Association Annual Meetings in New Orleans, January, 2001. I especially thank the editor, George Constantinides, for valuable comments and suggestions. I also thank Franklin Allen, the discussant, Nicholas Barberis, Robert Bloomfield, Michael Brennan, Markus Brunnermeier, Joshua Coval, Kent Daniel, Ming Dong, Jack Hirshleifer, Harrison Hong, Soeren Hvidkjaer, Ravi Jagannathan, Narasimhan Jegadeesh, Andrew Karolyi, Charles Lee, Seongyeon Lim, Deborah Lucas, Rajnish Mehra, Norbert Schwarz, Jayanta Sen, Tyler Shumway, René Stulz, Avanidhar Subrahmanyam, Siew Hong Teoh, Sheridan Titman, Yue Wang, Ivo Welch, and participants of the Dice Finance Seminar at Ohio State University for very helpful discussions and comments. 1 Smith analyzed how the “overweening conceit” of mankind caused labor to be underpriced in more enterprising pursuits. Young workers do not arbitrage away pay differentials because they are prone to overestimate their ability to succeed. Fisher wrote a book on money illusion; in The Theory of Interest ~~1930!, pp. 493–494! he argued that nominal interest rates systematically fail to adjust sufficiently for inf lation, and explained savings behavior in relation to self-control, foresight, and habits. Keynes ~1936! famously commented on animal spirits in stock markets. Markowitz ~1952! proposed that people focus on gains and losses relative to reference points, and that this helps explain the pricing of insurance and lotteries. THE JOURNAL OF FINANCE • VOL. LVI, NO. 4 • AUGUST 2001",
"title": ""
}
] |
[
{
"docid": "45f75c8d642be90e45abff69b4c6fbcf",
"text": "We describe a method for identifying the speakers of quoted speech in natural-language textual stories. We have assembled a corpus of more than 3,000 quotations, whose speakers (if any) are manually identified, from a collection of 19th and 20th century literature by six authors. Using rule-based and statistical learning, our method identifies candidate characters, determines their genders, and attributes each quote to the most likely speaker. We divide the quotes into syntactic classes in order to leverage common discourse patterns, which enable rapid attribution for many quotes. We apply learning algorithms to the remainder and achieve an overall accuracy of 83%.",
"title": ""
},
{
"docid": "b9147ef0cf66bdb7ecc007a4e3092790",
"text": "This paper is related to the use of social media for disaster management by humanitarian organizations. The past decade has seen a significant increase in the use of social media to manage humanitarian disasters. It seems, however, that it has still not been used to its full potential. In this paper, we examine the use of social media in disaster management through the lens of Attribution Theory. Attribution Theory posits that people look for the causes of events, especially unexpected and negative events. The two major characteristics of disasters are that they are unexpected and have negative outcomes/impacts. Thus, Attribution Theory may be a good fit for explaining social media adoption patterns by emergency managers. We propose a model, based on Attribution Theory, which is designed to understand the use of social media during the mitigation and preparedness phases of disaster management. We also discuss the theoretical contributions and some practical implications. This study is still in its nascent stage and is research in progress.",
"title": ""
},
{
"docid": "38b6660a0f246590ad97b75be074899d",
"text": "Technology has been playing a major role in our lives. One definition for technology is “all the knowledge, products, processes, tools, methods and systems employed in the creation of goods or in providing services”. This makes technological innovations raise the competitiveness between organizations that depend on supply chain and logistics in the global market. With increasing competitiveness, new challenges arise due to lack of information and assets tractability. This paper introduces three scenarios for solving these challenges using the Blockchain technology. In this work, Blockchain technology targets two main issues within the supply chain, namely, data transparency and resource sharing. These issues are reflected into the organization's strategies and",
"title": ""
},
{
"docid": "c10aa68158d3f9c655c17f867dacfd81",
"text": "The phenomenon of empathy entails the ability to share the affective experiences of others. In recent years social neuroscience made considerable progress in revealing the mechanisms that enable a person to feel what another is feeling. The present review provides an in-depth and critical discussion of these findings. Consistent evidence shows that sharing the emotions of others is associated with activation in neural structures that are also active during the first-hand experience of that emotion. Part of the neural activation shared between self- and other-related experiences seems to be rather automatically activated. However, recent studies also show that empathy is a highly flexible phenomenon, and that vicarious responses are malleable with respect to a number of factors--such as contextual appraisal, the interpersonal relationship between empathizer and other, or the perspective adopted during observation of the other. Future investigations are needed to provide more detailed insights into these factors and their neural underpinnings. Questions such as whether individual differences in empathy can be explained by stable personality traits, whether we can train ourselves to be more empathic, and how empathy relates to prosocial behavior are of utmost relevance for both science and society.",
"title": ""
},
{
"docid": "29786d164d0d5e76ea9c098944e27266",
"text": "Future mobile communications systems are likely to be very different to those of today with new service innovations driven by increasing data traffic demand, increasing processing power of smart devices and new innovative applications. To meet these service demands the telecommunications industry is converging on a common set of 5G requirements which includes network speeds as high as 10 Gbps, cell edge rate greater than 100 Mbps, and latency of less than 1 msec. To reach these 5G requirements the industry is looking at new spectrum bands in the range up to 100 GHz where there is spectrum availability for wide bandwidth channels. For the development of new 5G systems to operate in bands up to 100 GHz there is a need for accurate radio propagation models which are not addressed by existing channel models developed for bands below 6 GHz. This paper presents a preliminary overview of the 5G channel models for bands up to 100 GHz in indoor offices and shopping malls, derived from extensive measurements across a multitude of bands. These studies have found some extensibility of the existing 3GPP models (e.g. 3GPP TR36.873) to the higher frequency bands up to 100 GHz. The measurements indicate that the smaller wavelengths introduce an increased sensitivity of the propagation models to the scale of the environment and show some frequency dependence of the path loss as well as increased occurrence of blockage. Further, the penetration loss is highly dependent on the material and tends to increase with frequency. The small-scale characteristics of the channel such as delay spread and angular spread and the multipath richness is somewhat similar over the frequency range, which is encouraging for extending the existing 3GPP models to the wider frequency range. Further work will be carried out to complete these models, but this paper presents the first steps for an initial basis for the model development.",
"title": ""
},
{
"docid": "816771bbbae0a58e9c999b4a7e9320a7",
"text": "The lack of an immediate-release sedative (i.e., one for which no postsedation holding or withdrawal period is required) jeopardizes fish and fisheries research and poses considerable risk to those involved in aquatic resource *Corresponding author: saluski@siu.edu Received March 29, 2012; accepted September 14, 2012 156 D ow nl oa de d by [ So ut he rn I lli no is U ni ve rs ity ] at 0 6: 46 1 2 D ec em be r 20 12 SEDATIVE OPTIONS IN FISHERIES 157 management and the operation of public hatcheries and commercial fish farms. Carbon dioxide may be used as an immediate-release sedative, but it is slow-acting and difficult to apply uniformly and effectively. Tricaine methanesulfonate (MS-222) is easier to apply but requires a 21-d withdrawal period. The lack of an immediate-release sedative approved by the U.S. Food and Drug Administration (FDA) is a consequence of numerous factors, including the complexities of the approval process, the substantial human and monetary resources involved, and the specialized nature of the work. Efforts are currently underway to demonstrate the safety and effectiveness of benzocaineand eugenol-based products as immediate-release sedatives. However, pursuing approvals within the current framework will consume an exorbitant amount of public and private resources and will take years to complete, even though both compounds are “generally recognized as safe” for certain applications by the FDA. We recommend using risk management–based approaches to increase the efficiency of the drug approval process and the availability of safe and effective drugs, including immediate-release sedatives, for use in the fisheries and aquaculture disciplines. Access to safe and effective fish sedatives is a critical need of fisheries researchers, managers, and culturists. Federal, state, private, tribal, and academic fisheries professionals routinely sedate1 fish for transport (e.g., moving them to a captive holding facility, stocking site, or to market), the collection of tissue samples (e.g., scales, spines, gametes, and fin clips) or morphometric data (e.g., length and weight), and the surgical implantation of tags or tracking devices (e.g., for monitoring movement, spawning behavior, or survival). Ideally, a fish sedative will be easy to administer, safe to use, and effective at low doses; provide quick and predictable sedation; offer some analgesia; elicit a state of sedation that is easily managed; have a reasonable margin of safety with respect to oversedation; be usable over a broad range of water chemistries; allow for rapid recovery from sedation and the physiological responses to the sedative; and be inexpensive. Additionally, it is often desirable that the sedative have no withdrawal period, meaning that sedated fish can be immediately released into the wild or taken to market upon recovery (typically referred to as “zero withdrawal” or “immediate release”). Unfortunately, there are few fish sedatives that possess all of these qualities, and at this time there are none that can be legally used in North America without a lengthy withdrawal period. 
Our objectives were to review the need for immediate-release sedatives, describe the current regulatory process for making such compounds available to fisheries professionals in North America, assess the relative risks associated with the use of two candidate immediate-release sedatives (a benzocaine-based product and a eugenol-based product), and provide recommendations to increase “regulatory efficiency” in 1As discussed by Trushenski et al. (2012), the terms “anesthesia,” “sedation,” and “immobilization” are used somewhat interchangeably in fisheries science, but they actually have distinct definitions. Anesthesia is “a reversible, generalized loss of sensory perception accompanied by a sleep-like state induced by drugs or by physical means”; sedation is “a preliminary level of anesthesia, in which the response to stimulation is greatly reduced and some analgesia is achieved but sensory abilities are generally intact and loss of equilibrium does not occur”; and immobilization generally means the prevention of movement only (Ross and Ross 2008). Although these different definitions may be appropriate under different circumstances, most of the scenarios described herein are best described by the terms “sedate,” “sedation,” and “sedative”; for simplicity, we have used these terms throughout. the area of aquatic animal drug approvals as they pertain to fish sedatives. Specifically, we recommend a risk management– based approach to regulating the candidate immediate-release sedatives and outline a semiquantitative risk assessment which indicates that the proposed uses of these compounds have negligible risk. GENERAL NEED FOR SEDATION WHEN HANDLING FISH Unlike most terrestrial vertebrates, which can be handled without causing significant mechanical damage, fish are particularly vulnerable to external and internal injury during physical restraint. Compared with the epithelium of terrestrial vertebrates, that of most fishes is delicate and prone to damage. The epithelium can be damaged by simply disrupting the protective mucus layer, potentially compromising osmoregulation and predisposing the fish to infection or infestation (Shephard 1994). Fish are innately difficult to handle, and when they actively resist restraint, epithelial damage or other physical injury to the fish or the handler is more likely. If fish are sedated prior to handling, the risk to both fish and handler is greatly minimized. In addition to suffering mechanical damage, fish handled without proper sedation may be physiologically compromised as a result of stress. Stress may be defined as a natural reaction to a negative stimulus culminating in the mobilization and redirection of energy to support the “fight or flight” response (Selye 1950). During the stress response, the maintenance of important but not immediately critical functions is often sacrificed as a consequence of stress hormone release (Barton and Iwama 1991; Barton 2002). In fish, noncritical functions can include osmoregulation, reproduction, feeding, and particularly the exclusion and/or clearance of pathogens (Tort et al. 2004). As a result, stressed individuals may become homeostatically compromised and suffer tertiary consequences of stress, such as increased vulnerability to disease, reduced reproductive performance, and reduced growth (Barton and Iwama 1991; Wendelaar Bonga 1997; Barton 2002; Tort et al. 2004). 
Beyond the readily quantified physiological consequences of handling unsedated fish, fisheries professionals must consider D ow nl oa de d by [ So ut he rn I lli no is U ni ve rs ity ] at 0 6: 46 1 2 D ec em be r 20 12 158 TRUSHENSKI ET AL. TABLE 1. Attributes of currently available sedatives. Sedative Approved? Limitations Benzocaine No, but can be used under INADa authorization 3-d withdrawal period CO2 No, but FDAb unlikely to use regulatory authority Cumbersome and not all fish respond well Eugenol No, but can be used under INAD authorization 3-d withdrawal period MS-222 Yes for temporary immobilization 21-d withdrawal period aInvestigational New Animal Drug. bU.S. Food and Drug Administration. animal welfare (Huntingford et al. 2006). There is considerable scientific debate as to whether fish are capable of feeling pain or only exhibit nociception2 (e.g., Rose 2002, 2003; Chandroo et al. 2004; Sneddon 2006); the specifics of this debate and its resolution are largely outside the scope of the present review. Regardless of whether fish perceive pain in the same manner as higher vertebrates, with respect to fisheries research, relevant guidelines advise that “investigators should consider that procedures that cause pain or distress in human beings may cause pain or distress in other animals” (USPHS 1986; CCAC 2005), “prolonged stressful restraint [without appropriate sedation or anesthesia] should be avoided” (UFR 2004), and “procedures with animals that may cause more than momentary or slight pain or distress should be performed with appropriate sedation, analgesia, or anesthesia” (USPHS 1986). CURRENTLY AVAILABLE SEDATIVES AND THEIR LIMITATIONS Currently, there are few sedative options available to fisheries professionals that are safe, effective, and practical to use (Table 1). Perhaps more importantly, MS-222 (tricaine methanesulfonate [3-aminobenzoic acid ethyl ester methanesulfonate]) is the only compound approved by the U.S. Food and Drug Administration (FDA) and Health Canada for such use in these countries. Two MS-222 products (Tricaine-S and Finquel) are approved in the United States for the temporary immobilization of fish and other aquatic, cold-blooded animals, and one MS-222 product (Aqualife TMS) is approved in Canada for veterinary use only for anesthesia or the sedation of salmonids. Like other local anesthetics, MS-222 is rapidly absorbed through the gills and believed to exert its sedative effect by preventing the generation and conduction of nerve impulses (Frazier and Narahashi 1975), though there is some uncertainty regarding this (Popovic et al. 2012). MS-222 has direct actions on the central nervous system, cardiovascular system, neuro2As discussed by Sneddon (2009), the generally accepted definition of “pain” involves two elements: (1) the perception of stimuli associated with actual or potential tissue damage, referred to as nociception; and (2) awareness of an associated negative emotional experience, sometimes described as discomfort or suffering. It is relatively easy to demonstrate nociception in fish. However, it is impossible to demonstrate what a fish “feels” and therefore whether it can experience pain as it is defined. muscular junctions, and ganglion synapses. Lower doses induce tranquilization and sedation, and higher doses result in general/surgical anesthetic planes (Alpharma 2001). In fish, brief tachycardia (elevated heart rate) occurs within 30 s of exposure, follow",
"title": ""
},
{
"docid": "fdab4af34adebd0d682134f3cf13d794",
"text": "Threat evaluation (TE) is a process used to assess the threat values (TVs) of air-breathing threats (ABTs), such as air fighters, that are approaching defended assets (DAs). This study proposes an automatic method for conducting TE using radar information when ABTs infiltrate into territory where DAs are located. The method consists of target asset (TA) prediction and TE. We divide a friendly territory into discrete cells based on the effective range of anti-aircraft missiles. The TA prediction identifies the TA of each ABT by predicting the ABT’s movement through cells in the territory via a Markov chain, and the cell transition is modeled by neural networks. We calculate the TVs of the ABTs based on the TA prediction results. A simulation-based experiment revealed that the proposed method outperformed TE based on the closest point of approach or the radial speed vector methods. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "73905bf74f0f66c7a02aeeb9ab231d7b",
"text": "This paper presents an anthropomorphic robot hand called the Gifu hand II, which has a thumb and four fingers, all the joints of which are driven by servomotors built into the fingers and the palm. The thumb has four joints with four-degrees-of-freedom (DOF); the other fingers have four joints with 3-DOF; and two axes of the joints near the palm cross orthogonally at one point, as is the case in the human hand. The Gifu hand II can be equipped with six-axes force sensor at each fingertip and a developed distributed tactile sensor with 624 detecting points on its surface. The design concepts and the specifications of the Gifu hand II, the basic characteristics of the tactile sensor, and the pressure distributions at the time of object grasping are described and discussed herein. Our results demonstrate that the Gifu hand II has a high potential to perform dexterous object manipulations like the human hand.",
"title": ""
},
{
"docid": "ace30c4ad4a74f1ba526b4868e47b5c5",
"text": "China and India are home to two of the world's largest populations, and both populations are aging rapidly. Our data compare health status, risk factors, and chronic diseases among people age forty-five and older in China and India. By 2030, 65.6 percent of the Chinese and 45.4 percent of the Indian health burden are projected to be borne by older adults, a population with high levels of noncommunicable diseases. Smoking (26 percent in both China and India) and inadequate physical activity (10 percent and 17.7 percent, respectively) are highly prevalent. Health policy and interventions informed by appropriate data will be needed to avert this burden.",
"title": ""
},
{
"docid": "4824f0ffb4a362aa113ced48a164d736",
"text": "This paper presents architectures for active phased arrays for multiple scanning, hopping, and reconfigurable shaped beams for satellite applications. A comparison of the different architectures is discussed. An experimental model which incorporates active monolithic microwave integrated circuit (MMIC) components for beam control is described. The technology for this model is directly applicable to the various architectures presented.",
"title": ""
},
{
"docid": "5a71d766ecd60b8973b965e53ef8ddfd",
"text": "An m-polar fuzzy model is useful for multi-polar information, multi-agent, multi-attribute and multiobject network models which gives more precision, flexibility, and comparability to the system as compared to the classical, fuzzy and bipolar fuzzy models. In this paper, m-polar fuzzy sets are used to introduce the notion of m-polar psi-morphism on product m-polar fuzzy graph (mFG). The action of this morphism is studied and established some results on weak and co-weak isomorphism. d2-degree and total d2-degree of a vertex in product mFG are defined and studied their properties. A real life situation has been modeled as an application of product mFG. c ©2018 World Academic Press, UK. All rights reserved.",
"title": ""
},
{
"docid": "cd891d5ecb9fa6bd8ae23e2a06151882",
"text": "Smart City represents one of the most promising and prominent Internet of Things (IoT) applications. In the last few years, smart city concept has played an important role in academic and industry fields, with the development and deployment of various middleware platforms. However, this expansion has followed distinct approaches creating a fragmented scenario, in which different IoT ecosystems are not able to communicate between them. To fill this gap, there is a need to revisit the smart city IoT semantic and offer a global common approach. To this purpose, this paper browses the semantic annotation of the sensors in the cloud, and innovative services can be implemented and considered by bridging Clouds and IoT. Things-like semantic will be considered to perform the aggregation of heterogeneous resources by defining the Clouds of Things (CoT) paradigm. We survey the smart city vision, providing information on the main requirements and highlighting the benefits of integrating different IoT ecosystems within the cloud under this new CoT vision and discuss relevant challenges in this research area.",
"title": ""
},
{
"docid": "3c9097afea0c4a59acc21fd8e68ebad0",
"text": "A path query aims to find the trajectories that pass a given sequence of connected road segments within a time period. It is very useful in many urban applications, e.g., 1) traffic modeling, 2) frequent path mining, and 3) traffic anomaly detection. Existing solutions for path query are implemented based on single machines, which are not efficient for the following tasks: 1) indexing large-scale historical data; 2) handling real-time trajectory updates; and 3) processing concurrent path queries. In this paper, we design and implement a cloud-based path query processing framework based on Microsoft Azure. We modify the suffix tree structure to index the trajectories using Azure Table. The proposed system consists of two main parts: 1) backend processing, which performs the pre-processing and suffix index building with distributed computing platform (i.e., Storm) used to efficiently handle massive real-time trajectory updates; and 2) query processing, which answers path queries using Azure Storm to improve efficiency and overcome the I/O bottleneck. We evaluate the performance of our proposed system based on a real taxi dataset from Guiyang, China.",
"title": ""
},
{
"docid": "d61094fb93deadb6c5fa2856fca267db",
"text": "We present a new design for a 1-b full adder featuring hybrid-CMOS design style. The quest to achieve a good-drivability, noise-robustness, and low-energy operations for deep submicrometer guided our research to explore hybrid-CMOS style design. Hybrid-CMOS design style utilizes various CMOS logic style circuits to build new full adders with desired performance. This provides the designer a higher degree of design freedom to target a wide range of applications, thus significantly reducing design efforts. We also classify hybrid-CMOS full adders into three broad categories based upon their structure. Using this categorization, many full-adder designs can be conceived. We will present a new full-adder design belonging to one of the proposed categories. The new full adder is based on a novel xor-xnor circuit that generates xor and xnor full-swing outputs simultaneously. This circuit outperforms its counterparts showing 5%-37% improvement in the power-delay product (PDP). A novel hybrid-CMOS output stage that exploits the simultaneous xor-xnor signals is also proposed. This output stage provides good driving capability enabling cascading of adders without the need of buffer insertion between cascaded stages. There is approximately a 40% reduction in PDP when compared to its best counterpart. During our experimentations, we found out that many of the previously reported adders suffered from the problems of low swing and high noise when operated at low supply voltages. The proposed full adder is energy efficient and outperforms several standard full adders without trading off driving capability and reliability. The new full-adder circuit successfully operates at low voltages with excellent signal integrity and driving capability. To evaluate the performance of the new full adder in a real circuit, we embedded it in a 4- and 8-b, 4-operand carry-save array adder with final carry-propagate adder. The new adder displayed better performance as compared to the standard full adders",
"title": ""
},
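The abstract above concerns a transistor-level hybrid-CMOS circuit, but the role of the simultaneous xor-xnor signals is easy to see at the behavioural level. The toy Python sketch below illustrates the logic only (it is not a model of the proposed circuit or of its power-delay behaviour): the xor/xnor pair drives the sum and carry of a 1-b full adder, and such adders cascade.

```python
# Behavioural sketch of the 1-bit full adder logic (not a transistor-level model):
# the xor/xnor pair selects the sum and carry outputs, mirroring the role of the
# simultaneous xor-xnor stage described in the abstract.
def full_adder(a: int, b: int, cin: int):
    x = a ^ b                 # xor output
    xn = x ^ 1                # xnor output, generated alongside xor
    s = cin ^ x               # sum = cin XOR (a XOR b)
    cout = a if xn else cin   # carry: if a == b, carry = a; otherwise carry = cin
    return s, cout

def ripple_add(a_bits, b_bits):
    """Add two little-endian bit lists by cascading full adders."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

print(ripple_add([1, 1, 0, 1], [1, 0, 1, 1]))  # 11 + 13 = 24 -> [0, 0, 0, 1, 1]
```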
{
"docid": "9de8319702d6f1907d096d7da24911f6",
"text": "Provenance metadata has become increasingly important to support scientific discovery reproducibility, result interpretation, and problem diagnosis in scientific workflow environments. The provenance management problem concerns the efficiency and effectiveness of the modeling, recording, representation, integration, storage, and querying of provenance metadata. Our approach to provenance management seamlessly integrates the interoperability, extensibility, and inference advantages of Semantic Web technologies with the storage and querying power of an RDBMS to meet the emerging requirements of scientific workflow provenance management. In this paper, we elaborate on the design of a relational RDF store, called RDFProv, that is optimized for scientific workflow provenance querying and management. Specifically, we propose: i) two schema mapping algorithms to map an OWL provenance ontology to a relational database schema that is optimized for common provenance queries; ii) three efficient data mapping algorithms to map provenance RDF metadata to relational data according to the generated relational database schema, and iii) a schema-independent SPARQL-to-SQL translation algorithm that is optimized on-the-fly by using the type information of an instance available from the input provenance ontology and the statistics of the sizes of the tables in the database. Experimental results are presented to show that our algorithms are efficient and scalable. The comparison with two popular relational RDF stores, Jena and Sesame, and two commercial native RDF stores, AllegroGraph and BigOWLIM, showed that our optimizations result in improved performance and scalability for provenance metadata management. Finally, our case study for provenance management in a real-life biological simulation workflow showed the production quality and capability of the RDFProv system. Although presented in the context of scientific workflow provenance management, many of our proposed techniques apply to general RDF data management as well.",
"title": ""
},
{
"docid": "3faad0857fb0e355c7846d52bd2f5e8c",
"text": "The issue of cultural universality of waist-to-hip ratio (WHR) attractiveness in women is currently under debate. We tested men's preferences for female WHR in traditional society of Tsimane'(Native Amazonians) of the Bolivian rainforest (N = 66). Previous studies showed preferences for high WHR in traditional populations, but they did not control for the women's body mass.We used a method of stimulus creation that enabled us to overcome this problem. We found that WHR lower than the average WHR in the population is preferred independent of cultural conditions. Our participants preferred the silhouettes of low WHR, but high body mass index (BMI), which might suggest that previous results could be an artifact related to employed stimuli. We found also that preferences for female BMI are changeable and depend on environmental conditions and probably acculturation (distance from the city). Interestingly, the Tsimane' men did not associate female WHR with age, health, physical strength or fertility. This suggests that men do not have to be aware of the benefits associated with certain body proportions - an issue that requires further investigation.",
"title": ""
},
{
"docid": "e35933fd7f6a108e2473cc6a0e9d1182",
"text": "Web usage mining is a main research area in Web mining focused on learning about Web users and their interactions with Web sites. Main challenges in Web usage mining are the application of data mining techniques to Web data in an efficient way and the discovery of non trivial user behaviour patterns. In this paper we focus the attention on search engines analyzing query log data and showing several models about how users search and how users use search engine results.",
"title": ""
},
{
"docid": "cfee5bd5aaee1e8ea40ce6ce88746902",
"text": "A CPW-fed planar monopole antenna for triple band operation is presented. The antenna consists of an elliptical radiating patch with a curved ground plane with embedded slots. When two narrow slots are introduced on a wideband elliptical monopole antenna (2.2-7 GHz), two bands are rejected without affecting the antenna properties at the rest of the operating frequencies. By properly choosing the length and location of the slots, a triple band antenna design is achieved. Impedance and radiation characteristics of the antenna are studied and results indicate that it is suitable for the 2.5-2.69 GHz, 3.4-3.69 GHz, and 5.25-5.85 GHz WiMAX applications and also the 2.4-2.484 GHz, 5.15-5.35 GHz, and 5.725-5.825 GHz WLAN applications. The antenna exhibits omnidirectional radiation coverage with its gain significantly reduced at the notched frequency bands.",
"title": ""
},
{
"docid": "aa6bb8b62c3ee21655607e15d26bfcb5",
"text": "Elena Campione1, Chiara Centonze2, Laura Diluvio1, Augusto Orlandi3, Cesidio Cipriani4, Alessandro Di Stefani3, Emilio Piccione2, Sergio Chimenti1 and Luca Bianchi1 Departments of 1Dermatology, 2Gynaecology and 3Anatomic Pathology, University of Rome, Tor Vergata, Viale Oxford, 81, IT-00133 Rome, and 4Department of Nuclear Medicine, S. Eugenio Hospital, Rome, Italy. E-mail: campioneelena@hotmail.com Accepted Nov 7, 2011; Epub ahead of print Jun 21, 2012",
"title": ""
},
{
"docid": "3115c716a065334dc0cdec9e33e24149",
"text": "With the recent advances in the field of artificial intelligence, an increasing number of decision-making tasks are delegated to software systems. A key requirement for the success and adoption of such systems is that users must trust system choices or even fully automated decisions. To achieve this, explanation facilities have been widely investigated as a means of establishing trust in these systems since the early years of expert systems. With today’s increasingly sophisticated machine learning algorithms, new challenges in the context of explanations, accountability, and trust towards such systems constantly arise. In this work, we systematically review the literature on explanations in advice-giving systems. This is a family of systems that includes recommender systems, which is one of the most successful classes of advice-giving software in practice. We investigate the purposes of explanations as well as how they are generated, presented to users, and evaluated. As a result, we derive a novel comprehensive taxonomy of aspects to be considered when designing explanation facilities for current and future decision support systems. The taxonomy includes a variety of different facets, such as explanation objective, responsiveness, content and presentation. Moreover, we identified several challenges that remain unaddressed so far, for example related to fine-grained issues associated with the presentation of explanations and how explanation facilities are evaluated.",
"title": ""
}
] |
scidocsrr
|
2e0f343d907ea3312234a79373dbad3f
|
Distributing learning over time: the spacing effect in children's acquisition and generalization of science concepts.
|
[
{
"docid": "fedfacfc850aeec1313043051a66e35b",
"text": "BACKGROUND\nKnowledge of concepts and procedures seems to develop in an iterative fashion, with increases in one type of knowledge leading to increases in the other type of knowledge. This suggests that iterating between lessons on concepts and procedures may improve learning.\n\n\nAIMS\nThe purpose of the current study was to evaluate the instructional benefits of an iterative lesson sequence compared to a concepts-before-procedures sequence for students learning decimal place-value concepts and arithmetic procedures.\n\n\nSAMPLES\nIn two classroom experiments, sixth-grade students from two schools participated (N=77 and 26).\n\n\nMETHOD\nStudents completed six decimal lessons on an intelligent-tutoring systems. In the iterative condition, lessons cycled between concept and procedure lessons. In the concepts-first condition, all concept lessons were presented before introducing the procedure lessons.\n\n\nRESULTS\nIn both experiments, students in the iterative condition gained more knowledge of arithmetic procedures, including ability to transfer the procedures to problems with novel features. Knowledge of concepts was fairly comparable across conditions. Finally, pre-test knowledge of one type predicted gains in knowledge of the other type across experiments.\n\n\nCONCLUSIONS\nAn iterative sequencing of lessons seems to facilitate learning and transfer, particularly of mathematical procedures. The findings support an iterative perspective for the development of knowledge of concepts and procedures.",
"title": ""
},
{
"docid": "277bdeccc25baa31ba222ff80a341ef2",
"text": "Teaching by examples and cases is widely used to promote learning, but it varies widely in its effectiveness. The authors test an adaptation to case-based learning that facilitates abstracting problemsolving schemas from examples and using them to solve further problems: analogical encoding, or learning by drawing a comparison across examples. In 3 studies, the authors examined schema abstraction and transfer among novices learning negotiation strategies. Experiment 1 showed a benefit for analogical learning relative to no case study. Experiment 2 showed a marked advantage for comparing two cases over studying the 2 cases separately. Experiment 3 showed that increasing the degree of comparison support increased the rate of transfer in a face-to-face dynamic negotiation exercise.",
"title": ""
}
] |
[
{
"docid": "d763198d3bfb1d30b153e13245c90c08",
"text": "Inspired by the aerial maneuvering ability of lizards, we present the design and control of MSU (Michigan State University) tailbot - a miniature-tailed jumping robot. The robot can not only wheel on the ground, but also jump up to overcome obstacles. Moreover, once leaping into the air, it can control its body angle using an active tail to dynamically maneuver in midair for safe landings. We derive the midair dynamics equation and design controllers, such as a sliding mode controller, to stabilize the body at desired angles. To the best of our knowledge, this is the first miniature (maximum size 7.5 cm) and lightweight (26.5 g) robot that can wheel on the ground, jump to overcome obstacles, and maneuver in midair. Furthermore, tailbot is equipped with on-board energy, sensing, control, and wireless communication capabilities, enabling tetherless or autonomous operations. The robot in this paper exemplifies the integration of mechanical design, embedded system, and advanced control methods that will inspire the next-generation agile robots mimicking their biological counterparts. Moreover, it can serve as mobile sensor platforms for wireless sensor networks with many field applications.",
"title": ""
},
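The abstract above mentions a sliding mode controller for stabilizing the body angle in midair. The toy simulation below is only a generic illustration of such a controller on a double-integrator model; the inertia, gain and time-step values are invented and are not the tailbot's parameters.

```python
# Toy sketch of sliding-mode attitude control in midair (not the tailbot's actual
# dynamics or gains): the tail torque u drives the body angle error onto the sliding
# surface s = c*e + e_dot and then to zero. All parameters are illustrative.
def simulate(theta0=0.8, theta_ref=0.0, I=2e-4, c=8.0, k=5e-3, dt=1e-3, steps=500):
    theta, omega = theta0, 0.0
    for _ in range(steps):
        e, e_dot = theta - theta_ref, omega
        s = c * e + e_dot                                 # sliding surface
        u = -k * (1 if s > 0 else -1 if s < 0 else 0)     # bang-bang reaching law
        alpha = u / I                                     # body angular acceleration from tail torque
        omega += alpha * dt
        theta += omega * dt
    return theta

print(f"final body angle: {simulate():+.3f} rad")  # settles near the 0 rad setpoint
```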
{
"docid": "007a42bdf781074a2d00d792d32df312",
"text": "This paper presents a new approach for road lane classification using an onboard camera. Initially, lane boundaries are detected using a linear-parabolic lane model, and an automatic on-the-fly camera calibration procedure is applied. Then, an adaptive smoothing scheme is applied to reduce noise while keeping close edges separated, and pairs of local maxima-minima of the gradient are used as cues to identify lane markings. Finally, a Bayesian classifier based on mixtures of Gaussians is applied to classify the lane markings present at each frame of a video sequence as dashed, solid, dashed solid, solid dashed, or double solid. Experimental results indicate an overall accuracy of over 96% using a variety of video sequences acquired with different devices and resolutions.",
"title": ""
},
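The abstract above classifies lane markings with a Bayesian classifier based on mixtures of Gaussians. A minimal sketch of that classification step, using synthetic two-dimensional features and scikit-learn (not the paper's actual features or training data), might look like this:

```python
# Illustrative sketch (synthetic features, not the paper's pipeline): fit one
# Gaussian mixture per lane-marking class and label a new sample by the class
# with the highest mixture likelihood (a Bayes classifier with equal priors).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
classes = ["dashed", "solid", "double_solid"]
train = {  # toy 2-D features, e.g. (gradient-pair spacing, marking duty cycle)
    "dashed": rng.normal([3.0, 0.4], 0.3, size=(200, 2)),
    "solid": rng.normal([3.0, 1.0], 0.3, size=(200, 2)),
    "double_solid": rng.normal([6.0, 1.0], 0.3, size=(200, 2)),
}
models = {c: GaussianMixture(n_components=2, random_state=0).fit(X) for c, X in train.items()}

def classify(x):
    x = np.asarray(x).reshape(1, -1)
    return max(classes, key=lambda c: models[c].score_samples(x)[0])

print(classify([2.9, 0.45]))  # expected: dashed
```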
{
"docid": "c7d9353fe149c95ae0b3f1c7fa38def9",
"text": "BACKGROUND\nCutaneous melanoma is often characterized by its pigmented appearance; however, up to 8.1% of such lesions contain little or no pigmentation. Amelanotic melanomas, lesions devoid of visible pigment, present a diagnostic quandary because they can masquerade as many other skin pathologies. Recognizing amelanotic melanoma is even more clinically challenging when it is first detected as a metastasis to the secondary tissue.\n\n\nMETHODS\nWe report a rare case of metastasis of an amelanotic melanoma to the parotid gland.\n\n\nRESULTS\nA 75-year-old man presented with an 8-month history of a painless, mobile, hardened mass in the right parotid region. Histopathological analysis of a fine-needle aspiration biopsy of the parotid mass indicated that the mass was melanoma. Careful clinical and radiological examination revealed an 8 mm erythematous papule in the right temporal scalp, initially diagnosed by visual examination as basal cell carcinoma. After right superficial parotidectomy, neck dissection, and excision of the temporal scalp lesion, histological examination revealed the scalp lesion to be amelanotic melanoma.\n\n\nCONCLUSION\nAlthough metastatic amelanotic melanoma to the parotid gland is a rare diagnosis, the clinician should be familiar with this presentation to increase the likelihood of making the correct diagnosis and delivering prompt treatment.",
"title": ""
},
{
"docid": "f172ad1f92b81f5d8b19fc4687ce2853",
"text": "Research conclusions in the social sciences are increasingly based on meta-analysis, making questions of the accuracy of meta-analysis critical to the integrity of the base of cumulative knowledge. Both fixed effects (FE) and random effects (RE) meta-analysis models have been used widely in published meta-analyses. This article shows that FE models typically manifest a substantial Type I bias in significance tests for mean effect sizes and for moderator variables (interactions), while RE models do not. Likewise, FE models, but not RE models, yield confidence intervals for mean effect sizes that are narrower than their nominal width, thereby overstating the degree of precision in meta-analysis findings. This article demonstrates analytically that these biases in FE procedures are large enough to create serious distortions in conclusions about cumulative knowledge in the research literature. We therefore recommend that RE methods routinely be employed in meta-analysis in preference to FE methods.",
"title": ""
},
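The abstract above argues for random-effects (RE) over fixed-effects (FE) meta-analysis. The sketch below contrasts the two pooling schemes with the DerSimonian-Laird estimator on made-up effect sizes; it illustrates the standard formulas and is not a reproduction of the paper's simulations.

```python
# Sketch of fixed- vs random-effects pooling (DerSimonian-Laird) on synthetic effect
# sizes, illustrating why RE confidence intervals widen when studies are heterogeneous.
import numpy as np

y = np.array([0.10, 0.35, 0.52, 0.05, 0.41])   # per-study effect sizes (synthetic)
v = np.array([0.02, 0.03, 0.02, 0.04, 0.03])   # per-study sampling variances

w_fe = 1.0 / v
mean_fe = np.sum(w_fe * y) / np.sum(w_fe)
se_fe = np.sqrt(1.0 / np.sum(w_fe))

# DerSimonian-Laird between-study variance tau^2
Q = np.sum(w_fe * (y - mean_fe) ** 2)
c = np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)
tau2 = max(0.0, (Q - (len(y) - 1)) / c)

w_re = 1.0 / (v + tau2)
mean_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

print(f"FE: {mean_fe:.3f} +/- {1.96*se_fe:.3f}   RE: {mean_re:.3f} +/- {1.96*se_re:.3f}")
```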
{
"docid": "2107e4efdf7de92a850fc0142bf8c8c3",
"text": "Throughout the wide range of aerial robot related applications, selecting a particular airframe is often a trade-off. Fixed-wing small-scale unmanned aerial vehicles (UAVs) typically have difficulty surveying at low altitudes while quadrotor UAVs, having more maneuverability, suffer from limited flight time. Recent prior work [1] proposes a solar-powered small-scale aerial vehicle designed to transform between fixed-wing and quad-rotor configurations. Surplus energy collected and stored while in a fixed-wing configuration is utilized while in a quad-rotor configuration. This paper presents an improvement to the robot's design in [1] by pursuing a modular airframe, an optimization of the hybrid propulsion system, and solar power electronics. Two prototypes of the robot have been fabricated for independent testing of the airframe in fixed-wing and quad-rotor states. Validation of the solar power electronics and hybrid propulsion system designs were demonstrated through a combination of simulation and empirical data from prototype hardware.",
"title": ""
},
{
"docid": "8e3ced84f384192cfe742294dcee74bc",
"text": "The construction of software cost estimation models remains an active topic of research. The basic premise of cost modelling is that a historical database of software project cost data can be used to develop a quantitative model to predict the cost of future projects. One of the difficulties faced by workers in this area is that many of these historical databases contain substantial amounts of missing data. Thus far, the common practice has been to ignore observations with missing data. In principle, such a practice can lead to gross biases, and may be detrimental to the accuracy of cost estimation models. In this paper we describe an extensive simulation where we evaluate different techniques for dealing with missing data in the context of software cost modelling. Three techniques are evaluated: listwise deletion, mean imputation and eight different types of hot-deck imputation. Our results indicate that all the missing data techniques perform well, with small biases and high precision. This suggests that the simplest technique, listwise deletion, is a reasonable choice. However, this will not necessarily provide the best performance. Consistent best performance (minimal bias and highest precision) can be obtained by using hot-deck imputation with Euclidean distance and a z-score standardisation.",
"title": ""
},
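The abstract above evaluates hot-deck imputation with Euclidean distance and z-score standardisation. A minimal sketch of that particular imputation scheme on toy project data (not the paper's simulation design) could be:

```python
# Minimal sketch of hot-deck imputation: standardise features (z-scores), find the
# nearest complete "donor" project by Euclidean distance over the observed features,
# and copy its value into the missing cell. Data here are toy numbers.
import numpy as np

def hot_deck_impute(X):
    X = np.array(X, dtype=float)
    mu = np.nanmean(X, axis=0)
    sd = np.nanstd(X, axis=0)
    Z = (X - mu) / np.where(sd == 0, 1, sd)
    complete = ~np.isnan(X).any(axis=1)
    for i in np.argwhere(np.isnan(X).any(axis=1)).ravel():
        obs = ~np.isnan(X[i])
        d = np.sqrt(np.nansum((Z[complete][:, obs] - Z[i, obs]) ** 2, axis=1))
        donor = np.where(complete)[0][np.argmin(d)]
        X[i, ~obs] = X[donor, ~obs]
    return X

projects = [[10, 3.0, 120], [12, np.nan, 150], [40, 9.0, 600], [11, 3.2, 130]]
print(hot_deck_impute(projects))  # row 1 borrows its missing value from the most similar project
```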
{
"docid": "a393f05d29b6d8ff011ee079154e7e58",
"text": "This report provides a short survey of the field of virtual reality, highlighting application domains, technological requirements, and currently available solutions. The report is organized as follows: section 1 presents the background and motivation of virtual environment research and identifies typical application domain, section 2 discusses the characteristics a virtual reality system must have in order to exploit the perceptual and spatial skills of users, section 3 surveys current input/output devices for virtual reality, section 4 surveys current software approaches to support the creation of virtual reality systems, and section 5 summarizes the report.",
"title": ""
},
{
"docid": "1eba8eccf88ddb44a88bfa4a937f648f",
"text": "We present a deep learning framework for probabilistic pixel-wise semantic segmentation, which we term Bayesian SegNet. Semantic segmentation is an important tool for visual scene understanding and a meaningful measure of uncertainty is essential for decision making. Our contribution is a practical system which is able to predict pixelwise class labels with a measure of model uncertainty using Bayesian deep learning. We achieve this by Monte Carlo sampling with dropout at test time to generate a posterior distribution of pixel class labels. In addition, we show that modelling uncertainty improves segmentation performance by 2-3% across a number of datasets and architectures such as SegNet, FCN, Dilation Network and DenseNet.",
"title": ""
},
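The abstract above obtains pixel-wise uncertainty by Monte Carlo sampling with dropout at test time. The generic PyTorch sketch below applies the same idea to a tiny stand-in network rather than SegNet; the architecture and the number of samples T are arbitrary choices for illustration.

```python
# Generic PyTorch sketch of Monte Carlo dropout at test time (a tiny classifier
# stands in for SegNet): keep dropout layers stochastic, draw T forward passes,
# and use the sample mean as the prediction and the predictive entropy as uncertainty.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.5),
                      nn.Conv2d(8, 4, 1))          # 4 "classes", per-pixel logits

def mc_dropout_predict(model, x, T=20):
    model.eval()
    for m in model.modules():                       # re-enable dropout only
        if isinstance(m, (nn.Dropout, nn.Dropout2d)):
            m.train()
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(T)])
    mean = probs.mean(dim=0)                        # per-pixel class probabilities
    entropy = -(mean * (mean + 1e-12).log()).sum(dim=1)   # per-pixel uncertainty
    return mean.argmax(dim=1), entropy

x = torch.randn(1, 3, 32, 32)
labels, uncertainty = mc_dropout_predict(model, x)
print(labels.shape, uncertainty.shape)  # torch.Size([1, 32, 32]) torch.Size([1, 32, 32])
```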
{
"docid": "5f45659c16ca98f991a31d62fd70cdab",
"text": "Iris recognition has legendary resistance to false matches, and the tools of information theory can help to explain why. The concept of entropy is fundamental to understanding biometric collision avoidance. This paper analyses the bit sequences of IrisCodes computed both from real iris images and from synthetic white noise iris images, whose pixel values are random and uncorrelated. The capacity of the IrisCode as a channel is found to be 0.566 bits per bit encoded, of which 0.469 bits of entropy per bit is encoded from natural iris images. The difference between these two rates reflects the existence of anatomical correlations within a natural iris, and the remaining gap from one full bit of entropy per bit encoded reflects the correlations in both phase and amplitude introduced by the Gabor wavelets underlying the IrisCode. A simple two-state hidden Markov model is shown to emulate exactly the statistics of bit sequences generated both from natural and white noise iris images, including their imposter distributions, and may be useful for generating large synthetic IrisCode databases.",
"title": ""
},
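The abstract above notes that a simple two-state hidden Markov model can emulate the statistics of IrisCode bit sequences. The toy sketch below generates correlated bits from a two-state Markov chain and estimates the entropy per bit; the transition probability is illustrative, not the value fitted in the paper.

```python
# Toy sketch: a two-state Markov chain emits correlated bits (a stand-in for
# IrisCode bit streams) and we compare its empirical entropy per bit with the
# 1.0 bit/bit of i.i.d. fair bits. The transition probability p_stay is illustrative.
import numpy as np

def markov_bits(n, p_stay=0.75, seed=0):
    rng = np.random.default_rng(seed)
    bits = np.empty(n, dtype=int)
    bits[0] = rng.integers(2)
    for i in range(1, n):
        bits[i] = bits[i - 1] if rng.random() < p_stay else 1 - bits[i - 1]
    return bits

def entropy_rate(bits):
    """Plug-in conditional entropy H(X_t | X_{t-1}) in bits per bit."""
    h = 0.0
    for prev in (0, 1):
        nxt = bits[1:][bits[:-1] == prev]
        p = np.mean(nxt) if len(nxt) else 0.5
        for q in (p, 1 - p):
            if q > 0:
                h -= 0.5 * q * np.log2(q)   # the two states are equiprobable by symmetry
    return h

print(entropy_rate(markov_bits(100_000)))   # ~0.81 bits/bit, below the 1.0 of white noise
```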
{
"docid": "10b3c67f99ea41185f262ddd8ba50ed4",
"text": "OBJECTIVE\nTo test the feasibility of short message service (SMS) usage between the clinic visits and to evaluate its effect on glycemic control in uncontrolled type 2 Diabetes Mellitus (DM) subjects.\n\n\nRESEARCH DESIGN AND METHODS\n34 cases with type 2 Diabetes were followed after fulfilling the inclusion criteria. The interventional group (n=12) had the same conventional approach of the control group but had two mobile numbers (physician and diabetic educator) provided for the SMS support until their next visit in 3 months. Both groups of age, BMI and the pre-study A1c were comparable.\n\n\nRESULTS\nBoth groups had a significant reduction in their A1c compared to their baseline visit. However, the interventional group had significantly greater reduction in A1c (p=0.001), 1.16% lower than controls. The service was highly satisfactory to the group.\n\n\nCONCLUSION\nThe results indicate effectiveness in lowering A1c and acceptance by the patients. Further research and large-scale studies are needed.",
"title": ""
},
{
"docid": "84f2072f32d2a29d372eef0f4622ddce",
"text": "This paper presents a new methodology for synthesis of broadband equivalent circuits for multi-port high speed interconnect systems from numerically obtained and/or measured frequency-domain and time-domain response data. The equivalent circuit synthesis is based on the rational function fitting of admittance matrix, which combines the frequency-domain vector fitting process, VECTFIT with its time-domain analog, TDVF to yield a robust and versatile fitting algorithm. The generated rational fit is directly converted into a SPICE-compatible circuit after passivity enforcement. The accuracy of the resulting algorithm is demonstrated through its application to the fitting of the admittance matrix of a power/ground plane structure",
"title": ""
},
{
"docid": "f355ed837561186cff4e7492470d6ae7",
"text": "Notions of Bayesian analysis are reviewed, with emphasis on Bayesian modeling and Bayesian calculation. A general hierarchical model for time series analysis is then presented and discussed. Both discrete time and continuous time formulations are discussed. An brief overview of generalizations of the fundamental hierarchical time series model concludes the article. Much of the Bayesian viewpoint can be argued (as by Jeereys and Jaynes, for examples) as direct application of the theory of probability. In this article the suggested approach for the construction of Bayesian time series models relies on probability theory to provide decompositions of complex joint probability distributions. Speciically, I refer to the familiar factorization of a joint density into an appropriate product of conditionals. Let x and y represent two random variables. I will not diierentiate between random variables and their realizations. Also, I will use an increasingly popular generic notation for probability densities: x] represents the density of x, xjy] is the conditional density of x given y, and x; y] denotes the joint density of x and y. In this notation we can write \\Bayes's Theorem\" as yjx] = xjy]]y]=x]: (1) y",
"title": ""
},
{
"docid": "09ecaf2cb56296c8097525b2c1ffb7dc",
"text": "Fruit and vegetables classification and recognition are still challenging in daily production and life. In this paper, we propose an efficient fruit and vegetables classification system using image saliency to draw the object regions and convolutional neural network (CNN) model to extract image features and implement classification. Image saliency is utilized to select main saliency regions according to saliency map. A VGG model is chosen to train for fruit and vegetables classification. Another contribution in this paper is that we establish a fruit and vegetables images database spanning 26 categories, which covers the major types in real life. Experiments are conducted on our own database, and the results show that our classification system achieves an excellent accuracy rate of 95.6%.",
"title": ""
},
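The abstract above combines an image-saliency step with a VGG classifier. The sketch below only illustrates the shape of such a pipeline: a crude centre-surround contrast map stands in for a real saliency detector, and a torchvision VGG16 with a 26-way output head (26 categories, as in the abstract) classifies the cropped region. The crop size, pooling kernel and untrained weights are assumptions, not the paper's setup.

```python
# Pipeline sketch only: crude local-contrast "saliency", crop around the most salient
# point, then classify the crop with a VGG16 whose final layer is replaced by a 26-way head.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF

def crude_saliency_crop(img, crop=160):
    """img: float tensor (3, H, W) in [0, 1]. Crop around the max local-contrast point."""
    gray = img.mean(dim=0, keepdim=True)
    blur = F.avg_pool2d(gray.unsqueeze(0), 31, stride=1, padding=15)[0]
    sal = (gray - blur).abs()[0]
    y, x = divmod(int(sal.argmax()), sal.shape[1])
    top = min(max(y - crop // 2, 0), img.shape[1] - crop)
    left = min(max(x - crop // 2, 0), img.shape[2] - crop)
    return img[:, top:top + crop, left:left + crop]

vgg = models.vgg16(weights=None)            # pretrained weights could be loaded instead
vgg.classifier[6] = nn.Linear(4096, 26)     # 26 fruit/vegetable categories
vgg.eval()

img = torch.rand(3, 256, 256)               # placeholder image
patch = TF.resize(crude_saliency_crop(img), [224, 224])
with torch.no_grad():
    pred = vgg(patch.unsqueeze(0)).argmax(dim=1)
print(pred.item())
```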
{
"docid": "1404323d435b1b7999feda249f817f36",
"text": "The Process of Encryption and Decryption is performed by using Symmetric key cryptography and public key cryptography for Secure Communication. In this paper, we studied that how the process of Encryption and Decryption is perform in case of Symmetric key and public key cryptography using AES and DES algorithms and modified RSA algorithm.",
"title": ""
},
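The abstract above discusses AES, DES and a modified RSA at a high level. As a reminder of how the public/private key relationship works in plain RSA, here is a textbook toy example with tiny primes; it is not secure and is not the paper's modified algorithm.

```python
# Textbook-toy RSA with tiny primes, purely to illustrate how public-key
# encryption/decryption works (NOT secure, and not the paper's "modified RSA").
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: modular inverse of e mod phi (Python 3.8+)

def encrypt(m): return pow(m, e, n)
def decrypt(c): return pow(c, d, n)

msg = 1234                     # must be < n for this toy example
cipher = encrypt(msg)
print(cipher, decrypt(cipher))  # decrypt(encrypt(m)) == m
```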
{
"docid": "0c61bfbb7106c5592ecb9677e617f83f",
"text": "BACKGROUND\nAcute exacerbations of chronic obstructive pulmonary disease (COPD) are associated with accelerated decline in lung function, diminished quality of life, and higher mortality. Proactively monitoring patients for early signs of an exacerbation and treating them early could prevent these outcomes. The emergence of affordable wearable technology allows for nearly continuous monitoring of heart rate and physical activity as well as recording of audio which can detect features such as coughing. These signals may be able to be used with predictive analytics to detect early exacerbations. Prior to full development, however, it is important to determine the feasibility of using wearable devices such as smartwatches to intensively monitor patients with COPD.\n\n\nOBJECTIVE\nWe conducted a feasibility study to determine if patients with COPD would wear and maintain a smartwatch consistently and whether they would reliably collect and transmit sensor data.\n\n\nMETHODS\nPatients with COPD were recruited from 3 hospitals and were provided with a smartwatch that recorded audio, heart rate, and accelerations. They were asked to wear and charge it daily for 90 days. They were also asked to complete a daily symptom diary. At the end of the study period, participants were asked what would motivate them to regularly use a wearable for monitoring of their COPD.\n\n\nRESULTS\nOf 28 patients enrolled, 16 participants completed the full 90 days. The average age of participants was 68.5 years, and 36% (10/28) were women. Survey, heart rate, and activity data were available for an average of 64.5, 65.1, and 60.2 days respectively. Technical issues caused heart rate and activity data to be unavailable for approximately 13 and 17 days, respectively. Feedback provided by participants indicated that they wanted to actively engage with the smartwatch and receive feedback about their activity, heart rate, and how to better manage their COPD.\n\n\nCONCLUSIONS\nSome patients with COPD will wear and maintain smartwatches that passively monitor audio, heart rate, and physical activity, and wearables were able to reliably capture near-continuous patient data. Further work is necessary to increase acceptability and improve the patient experience.",
"title": ""
},
{
"docid": "137eb8a6a90f628353b854995f88a46c",
"text": "How should we gather information to make effective decisions? We address Bayesian active learning and experimental design problems, where we sequentially select tests to reduce uncertainty about a set of hypotheses. Instead ofminimizing uncertainty per se, we consider a set of overlapping decision regions of these hypotheses. Our goal is to drive uncertainty into a single decision region as quickly as possible. We identify necessary and sufficient conditions for correctly identifying a decision region that contains all hypotheses consistent with observations. We develop a novel Hyperedge Cutting (HEC) algorithm for this problem, and prove that is competitive with the intractable optimal policy. Our efficient implementation of the algorithm relies on computing subsets of the complete homogeneous symmetric polynomials. Finally, we demonstrate its effectiveness on two practical applications: approximate comparison-based learning and active localization using a robotmanipulator.",
"title": ""
},
{
"docid": "eccbc87e4b5ce2fe28308fd9f2a7baf3",
"text": "3",
"title": ""
},
{
"docid": "d6976dd4280c0534049c33ff9efb2058",
"text": "Bitcoin, as well as many of its successors, require the whole transaction record to be reliably acquired by all nodes to prevent double-spending. Recently, many blockchains have been proposed to achieve scale-out throughput by letting nodes only acquire a fraction of the whole transaction set. However, these schemes, e.g., sharding and off-chain techniques, suffer from a degradation in decentralization or the capacity of fault tolerance. In this paper, we show that the complete set of transactions is not a necessity for the prevention of double-spending if the properties of value transfers is fully explored. In other words, we show that a value-transfer ledger like Bitcoin has the potential to scale-out by its nature without sacrificing security or decentralization. Firstly, we give a formal definition for the value-transfer ledger and its distinct features from a generic database. Then, we introduce the blockchain structure with a shared main chain for consensus and an individual chain for each node for recording transactions. A locally executable validation scheme is proposed with uncompromising validity and consistency. A beneficial consequence of our design is that nodes will spontaneously try to reduce their transmission cost by only providing the transactions needed to show that their transactions are not double spend. As a result, the network is sharded as each node only acquires part of the transaction record and a scale-out throughput could be achieved, which we call \"spontaneous sharding\".",
"title": ""
},
{
"docid": "435fdb671cc12959d2d971b847f851a4",
"text": "In volume data visualization, the classification step is used to determine voxel visibility and is usually carried out through the interactive editing of a transfer function that defines a mapping between voxel value and color/opacity. This approach is limited by the difficulties in working effectively in the transfer function space beyond two dimensions. We present a new approach to the volume classification problem which couples machine learning and a painting metaphor to allow more sophisticated classification in an intuitive manner. The user works in the volume data space by directly painting on sample slices of the volume and the painted voxels are used in an iterative training process. The trained system can then classify the entire volume. Both classification and rendering can be hardware accelerated, providing immediate visual feedback as painting progresses. Such an intelligent system approach enables the user to perform classification in a much higher dimensional space without explicitly specifying the mapping for every dimension used. Furthermore, the trained system for one data set may be reused to classify other data sets with similar characteristics.",
"title": ""
},
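The abstract above trains a classifier from voxels the user has painted and then classifies the whole volume. The sketch below mimics that loop with a scikit-learn random forest standing in for the paper's learner, synthetic "painted" labels, and simple per-voxel features (intensity, gradient magnitude, z position).

```python
# Sketch of the painting-to-classification loop with a scikit-learn random forest
# standing in for the paper's learner: per-voxel features are (intensity, gradient
# magnitude, z position); the "painted" labels here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

vol = np.random.rand(32, 32, 32).astype(np.float32)          # toy volume
gz, gy, gx = np.gradient(vol)
grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
zz = np.broadcast_to(np.arange(32)[:, None, None], vol.shape)
features = np.stack([vol, grad_mag, zz], axis=-1).reshape(-1, 3)

# Pretend the user painted two slices: slice 5 as "background", slice 20 as "tissue".
painted_idx = np.concatenate([np.arange(5*32*32, 6*32*32), np.arange(20*32*32, 21*32*32)])
painted_lbl = np.concatenate([np.zeros(32*32, int), np.ones(32*32, int)])

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features[painted_idx], painted_lbl)
visibility = clf.predict_proba(features)[:, 1].reshape(vol.shape)  # per-voxel opacity proxy
print(visibility.shape, visibility.min(), visibility.max())
```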
{
"docid": "a09cfa27c7e5492c6d09b3dff7171588",
"text": "This paper aims to provide a basis for the improvement of software-estimation research through a systematic review of previous work. The review identifies 304 software cost estimation papers in 76 journals and classifies the papers according to research topic, estimation approach, research approach, study context and data set. A Web-based library of these cost estimation papers is provided to ease the identification of relevant estimation research results. The review results combined with other knowledge provide support for recommendations for future software cost estimation research, including: 1) increase the breadth of the search for relevant studies, 2) search manually for relevant papers within a carefully selected set of journals when completeness is essential, 3) conduct more studies on estimation methods commonly used by the software industry, and 4) increase the awareness of how properties of the data sets impact the results when evaluating estimation methods",
"title": ""
}
] |
scidocsrr
|
f1cd6ca7a4182e30b7fc0a88c0815f23
|
Stating the Obvious: Extracting Visual Common Sense Knowledge
|
[
{
"docid": "5d79d7e9498d7d41fbc7c70d94e6a9ae",
"text": "Reasoning about objects and their affordances is a fundamental problem for visual intelligence. Most of the previous work casts this problem as a classification task where separate classifiers are trained to label objects, recognize attributes, or assign affordances. In this work, we consider the problem of object affordance reasoning using a knowledge base representation. Diverse information of objects are first harvested from images and other meta-data sources. We then learn a knowledge base (KB) using a Markov Logic Network (MLN). Given the learned KB, we show that a diverse set of visual inference tasks can be done in this unified framework without training separate classifiers, including zeroshot affordance prediction and object recognition given human poses.",
"title": ""
}
] |
[
{
"docid": "70991373ae71f233b0facd2b5dd1a0d3",
"text": "Information communications technology systems are facing an increasing number of cyber security threats, the majority of which are originated by insiders. As insiders reside behind the enterprise-level security defence mechanisms and often have privileged access to the network, detecting and preventing insider threats is a complex and challenging problem. In fact, many schemes and systems have been proposed to address insider threats from different perspectives, such as intent, type of threat, or available audit data source. This survey attempts to line up these works together with only three most common types of insider namely traitor, masquerader, and unintentional perpetrator, while reviewing the countermeasures from a data analytics perspective. Uniquely, this survey takes into account the early stage threats which may lead to a malicious insider rising up. When direct and indirect threats are put on the same page, all the relevant works can be categorised as host, network, or contextual data-based according to audit data source and each work is reviewed for its capability against insider threats, how the information is extracted from the engaged data sources, and what the decision-making algorithm is. The works are also compared and contrasted. Finally, some issues are raised based on the observations from the reviewed works and new research gaps and challenges identified.",
"title": ""
},
{
"docid": "204f7f8282954de4d6b725f5cce0b00f",
"text": "Traffic classification plays an important and basic role in network management and cyberspace security. With the widespread use of encryption techniques in network applications, encrypted traffic has recently become a great challenge for the traditional traffic classification methods. In this paper we proposed an end-to-end encrypted traffic classification method with one-dimensional convolution neural networks. This method integrates feature extraction, feature selection and classifier into a unified end-to-end framework, intending to automatically learning nonlinear relationship between raw input and expected output. To the best of our knowledge, it is the first time to apply an end-to-end method to the encrypted traffic classification domain. The method is validated with the public ISCX VPN-nonVPN traffic dataset. Among all of the four experiments, with the best traffic representation and the fine-tuned model, 11 of 12 evaluation metrics of the experiment results outperform the state-of-the-art method, which indicates the effectiveness of the proposed method.",
"title": ""
},
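The abstract above uses a one-dimensional CNN over raw traffic bytes. The PyTorch sketch below shows one plausible shape for such a model; the layer sizes, the flow length of 784 bytes and the 12-class output are illustrative assumptions, not the paper's exact architecture or the ISCX VPN-nonVPN setup.

```python
# Minimal end-to-end 1-D CNN sketch over raw flow bytes (fixed length, scaled to [0, 1]).
import torch
import torch.nn as nn

class Traffic1DCNN(nn.Module):
    def __init__(self, n_classes=12, flow_len=784):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=25, padding=12), nn.ReLU(), nn.MaxPool1d(3),
            nn.Conv1d(32, 64, kernel_size=25, padding=12), nn.ReLU(), nn.MaxPool1d(3),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * (flow_len // 9), 128), nn.ReLU(), nn.Linear(128, n_classes)
        )

    def forward(self, x):          # x: (batch, 1, flow_len) raw bytes scaled to [0, 1]
        return self.classifier(self.features(x))

model = Traffic1DCNN()
flows = torch.randint(0, 256, (8, 1, 784)).float() / 255.0   # 8 fake flows of 784 bytes each
print(model(flows).shape)                                     # torch.Size([8, 12])
```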
{
"docid": "c7d54d4932792f9f1f4e08361716050f",
"text": "In this paper, we address several puzzles concerning speech acts,particularly indirect speech acts. We show how a formal semantictheory of discourse interpretation can be used to define speech actsand to avoid murky issues concerning the metaphysics of action. Weprovide a formally precise definition of indirect speech acts, includingthe subclass of so-called conventionalized indirect speech acts. Thisanalysis draws heavily on parallels between phenomena at the speechact level and the lexical level. First, we argue that, just as co-predicationshows that some words can behave linguistically as if they're `simultaneously'of incompatible semantic types, certain speech acts behave this way too.Secondly, as Horn and Bayer (1984) and others have suggested, both thelexicon and speech acts are subject to a principle of blocking or ``preemptionby synonymy'': Conventionalized indirect speech acts can block their`paraphrases' from being interpreted as indirect speech acts, even ifthis interpretation is calculable from Gricean-style principles. Weprovide a formal model of this blocking, and compare it withexisting accounts of lexical blocking.",
"title": ""
},
{
"docid": "8fe95ffa1989c458c9955faad48df195",
"text": "While ever more companies use Enterprise Social Networks for knowledge management, there is still a lack of understanding of users’ knowledge exchanging behavior. In this context, it is important to be able to identify and characterize users who contribute and communicate their knowledge in the network and help others to get their work done. In this paper, we propose a new methodological approach consisting of three steps, namely ―message classification‖, ―identification of users’ roles‖ as well as ―characterization of users’ roles‖. We apply the approach to a dataset from a multinational consulting company, which allows us to identify three user roles based on their knowledge contribution in messages: givers, takers, and matchers. Going beyond this categorization, our data shows that whereas the majority of messages aims to share knowledge, matchers, that means people that give and take, are a central element of the network. In conclusion, the development and application of a new methodological approach allows us to contribute to a more refined understanding of users’ knowledge exchanging behavior in Enterprise Social Networks which can ultimately help companies to take measures to improve their knowledge management.",
"title": ""
},
{
"docid": "7cf8e2555cfccc1fc091272559ad78d7",
"text": "This paper presents a multimodal emotion recognition method that uses a feature-level combination of three-dimensional (3D) geometric features (coordinates, distance and angle of joints), kinematic features such as velocity and displacement of joints, and features extracted from daily behavioral patterns such as frequency of head nod, hand wave, and body gestures that represent specific emotions. Head, face, hand, body, and speech data were captured from 15 participants using an infrared sensor (Microsoft Kinect). The 3D geometric and kinematic features were developed using raw feature data from the visual channel. Human emotional behavior-based features were developed using inter-annotator agreement and commonly observed expressions, movements and postures associated to specific emotions. The features from each modality and the behavioral pattern-based features (head shake, arm retraction, body forward movement depicting anger) were combined to train the multimodal classifier for the emotion recognition system. The classifier was trained using 10-fold cross validation and support vector machine (SVM) to predict six basic emotions. The results showed improvement in emotion recognition accuracy (The precision increased by 3.28% and the recall rate by 3.17%) when the 3D geometric, kinematic, and human behavioral pattern-based features were combined for multimodal emotion recognition using supervised classification.",
"title": ""
},
{
"docid": "564872511b110238b1a2d755700fdf12",
"text": "The present paper makes use of factorial experiments to assess software complexity using insertion sort as a trivial example. We next propose to implement the methodology in quicksort and other advanced algorithms.",
"title": ""
},
{
"docid": "9b7e83fbcb9c725fbcc42cc082825f4f",
"text": "Amazon is well-known for personalization and recommendations, which help customers discover items they might otherwise not have found. In this update to their original paper, the authors discuss some of the changes as Amazon has grown.",
"title": ""
},
{
"docid": "dc812a89cadb88ec6cfc5d75f68052ff",
"text": "The recent advancements in sensor technology have made it possible to collect enormous amounts of data in real time. How to find out unusual pattern from time series data plays a very important role in data mining. In this paper, we focus on the abnormal subsequence detection. The original definition of discord subsequences is defective for some kind of time series, in this paper we give a more robust definition which is based on the k nearest neighbors. We also donate a novel method for time series representation, it has better performance than traditional methods (like PAA/SAX) to represent the characteristic of some special time series. To speed up the process of abnormal subsequence detection, we used the clustering method to optimize the outer loop ordering and early abandon subsequence which is impossible to be abnormal. The experiment results validate that the algorithm is correct and has a high efficiency.",
"title": ""
},
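The abstract above redefines discords via the k nearest neighbors. The brute-force sketch below scores each z-normalised sliding window by its mean distance to its k nearest non-overlapping windows; it omits the paper's representation, clustering and early-abandoning speed-ups.

```python
# Brute-force sketch of k-nearest-neighbour discord scoring (no clustering or early
# abandoning): each z-normalised window is scored by its mean distance to its k
# nearest non-overlapping windows; the highest score marks the abnormal subsequence.
import numpy as np

def knn_discord(series, w=50, k=3):
    X = np.array([series[i:i + w] for i in range(len(series) - w + 1)], dtype=float)
    X = (X - X.mean(axis=1, keepdims=True)) / (X.std(axis=1, keepdims=True) + 1e-9)
    scores = np.empty(len(X))
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[max(0, i - w + 1):i + w] = np.inf          # exclude trivially overlapping matches
        scores[i] = np.sort(d)[:k].mean()
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(1)
ts = np.sin(np.linspace(0, 40 * np.pi, 2000)) + 0.1 * rng.standard_normal(2000)
ts[1200:1210] += 3.0                                  # injected anomaly
pos, _ = knn_discord(ts)
print("discord starts near index", pos)               # expected: within ~w of index 1200
```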
{
"docid": "ce4a19ccb75c82a0afde6b531776a23f",
"text": "This article describes posterior maximization for topic models, identifying computational and conceptual gains from inference under a non-standard parametrization. We then show that fitted parameters can be used as the basis for a novel approach to marginal likelihood estimation, via block-diagonal approximation to the information matrix, that facilitates choosing the number of latent topics. This likelihood-based model selection is complemented with a goodness-of-fit analysis built around estimated residual dispersion. Examples are provided to illustrate model selection as well as to compare our estimation against standard alternative techniques.",
"title": ""
},
{
"docid": "5c4f20fcde1cc7927d359fd2d79c2ba5",
"text": "There are different interpretations of user experience that lead to different scopes of measure. The ISO definition suggests measures of user experience are similar to measures of satisfaction in usability. A survey at Nokia showed that user experience was interpreted in a similar way to usability, but with the addition of anticipation and hedonic responses. CHI 2009 SIG participants identified not just measurement methods, but methods that help understanding of how and why people use products. A distinction can be made between usability methods that have the objective of improving human performance, and user experience methods that have the objective of improving user satisfaction with achieving both pragmatic and hedonic goals. Sometimes the term “user experience” is used to refer to both approaches. DEFINITIONS OF USABILITY AND USER EXPERIENCE There has been a lot of recent debate about the scope of user experience, and how it should be defined [5]. The definition of user experience in ISO FDIS 9241-210 is: A person's perceptions and responses that result from the use and/or anticipated use of a product, system or service. This contrasts with the revised definition of usability in ISO FDIS 9241-210: Extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use. Both these definitions suggest that usability or user experience can be measured during or after use of a product, system or service. A person's “perceptions and responses” in the definition of user experience are similar to the concept of satisfaction in usability. From this perspective, measures of user experience can be encompassed within the 3-component model of usability [1], particularly when the experience is task-related. A weakness of both definitions is that they are not explicitly concerned with time. Just as the ISO 9241-11 definition of usability has nothing to say about learnability (where usability changes over time), so the ISO 9241-210 definition of user experience has nothing to say about the way user experience evolves from expectation, through actual interaction, to a total experience that includes reflection on the experience [7]. USER EXPERIENCE NEEDS IN DESIGN AND DEVELOPMENT Ketola and Roto [4] surveyed the needs for information on user experience in Nokia, asking senior staff: Which User Experience information (measurable data gained from our target users directly or indirectly), is useful for your organization? How? 21 needs were identified from 18 respondents who worked in Research, Development, Care, and Quality. Ketola and Roto categorised the responses in terms of the area measured: UX lifecycle, retention, use of functions, breakdowns, customer care, localization, device performance and new technology. In Table 1, the needs have been recategorized by type of measure. It is clear that most of the measures are common to conventional approaches to user centred design, but three measures are specific to user experience: • The impact of expected UX to purchase decisions • Continuous excitement • Why and when the user experiences frustration? USER EXPERIENCE EVALUATION METHODS At the CHI 2009 SIG: “User Experience Evaluation – Do You Know Which Method to Use?” [6] [8], participants were asked to describe user experience evaluation methods that they used. 36 methods were collected (including the example methods presented by the organizers). 
These have been categorised in Table 2 by the type of evaluation context, and the type of data collected. There was very little mention of using measures specific to user experience, particularly from industry participants. It seems that industry's interpretation of user experience evaluation methods is much broader, going beyond conventional evaluation to encompass methods that collect information that helps design for user experience. In that sense user experience evaluation seems to be interpreted as user centred design methods for achieving user experience. The differentiating factor from more traditional usability work is thus a wider end goal: not just achieving effectiveness, efficiency and satisfaction, but optimising the whole user experience from expectation through actual interaction to reflection on the experience. DIFFERENCES BETWEEN USABILITY AND USER EXPERIENCE Although there is no fundamental difference between measures of usability and measures of user experience at a particular point in time, the difference in emphasis between task performance and pleasure leads to different concerns during development. In the context of user centred design, typical usability concerns include: 1. Designing for and evaluating overall effectiveness and efficiency. 2. Designing for and evaluating user comfort and satisfaction. 3. Designing to make the product easy to use, and evaluating the product in order to identify and fix usability problems. 4. When relevant, the temporal aspect leads to a concern for learnability. In the context of user centred design, typical user experience concerns include: 1. Understanding and designing the user's experience with a product: the way in which people interact with a product over time: what they do and why. 2. Maximising the achievement of the hedonic goals of stimulation, identification and evocation and associated emotional responses. Sometimes the two sets of issues are contrasted as usability and user experience. But some organisations would include both under the common umbrella of user experience.
Table 1. Categorisation of usability measures reported in [4] (table flattened by text extraction; columns: Measurement category, Measurement type, Measure, Area measured): Anticipation Pre-purchase Anticipated use The impact of expected UX to purchase decisions UX lifecycle Overall usability First use Effectiveness Success of taking the product into use UX lifecycle Product upgrade Effectiveness Success in transferring content from old device to the new device UX lifecycle Expectations vs. reality Satisfaction Has the device met your expectations? Retention Long term experience Satisfaction Are you satisfied with the product quality (after 3 months of use) Retention Hedonic Engagement Pleasure Continuous excitement Retention UX Obstacles Frustration Why and when the user experiences frustration? Breakdowns Detailed usability Use of device functions How used What functions are used, how often, why, how, when, where? Use of functions Malfunction Technical problems Amount of "reboots" and severe technical problems experienced. Breakdowns Usability problems Usability problems Top 10 usability problems experienced by the customers. Breakdowns Effect of localization Satisfaction with localisation How do users perceive content in their local language? Localization Latencies Satisfaction with device performance Perceived latencies in key tasks. Device performance Performance Satisfaction with device performance Perceived UX on device performance Device performance Perceived complexity Satisfaction with task complexity Actual and perceived complexity of task accomplishments. Device performance User differences Previous devices Previous user experience Which device you had previously? Retention Differences in user groups User differences How different user groups access features? Use of functions Reliability of product planning User differences Comparison of target users vs. actual buyers? Use of functions Support Customer experience in "touchpoints" Satisfaction with support How does customer think & feel about the interaction in the touch points? Customer care Accuracy of support information Consequences of poor support Does inaccurate support information result in product returns? How? Customer care Innovation feedback User wish list New user ideas & innovations triggered by new experiences New technologies Impact of use Change in user behaviour How the device affects user behaviour How are usage patterns changing when new technologies are introduced New technologies. Table 2. User experience evaluation methods (CHI 2009 SIG) (table flattened by text extraction): Evaluation context Lab tests Lab study with mind maps Paper prototyping Field tests Product / Tool Comparison Competitive evaluation of prototypes in the wild Field observation Long term pilot study Longitudinal comparison Contextual Inquiry Observation/Post Interview Activity Experience Sampling Longitudinal Evaluation Ethnography Field observations Longitudinal Studies Evaluation of groups Evaluating collaborative user experiences, Instrumented product TRUE Tracking Realtime User Experience Domain specific Nintendi Wii Children OPOS Outdoor Play Observation Scheme This-or-that Approaches Evaluating UX jointly with usability Evaluation data User opinion/interview Lab study with mind maps Quick and dirty evaluation Audio narrative Retrospective interview Contextual Inquiry Focus groups evaluation Observation \\ Post Interview Activity Experience Sampling Sensual Evaluation Instrument Contextual Laddering Interview ESM User questionnaire Survey Questions Emocards Experience sampling triggered by events, SAM Magnitude Estimation TRUE Tracking Realtime User Experience Questionnaire (e.g. AttrakDiff) Human responses PURE preverbal user reaction evaluation Psycho-physiological measurements Expert evaluation Expert evaluation Heuristic matrix Perspective-Based Inspection. CONCLUSIONS The scope of user experience
"title": ""
},
{
"docid": "61615f5aefb0aa6de2dd1ab207a966d5",
"text": "Wikipedia provides an enormous amount of background knowledge to reason about the semantic relatedness between two entities. We propose Wikipedia-based Distributional Semantics for Entity Relatedness (DiSER), which represents the semantics of an entity by its distribution in the high dimensional concept space derived from Wikipedia. DiSER measures the semantic relatedness between two entities by quantifying the distance between the corresponding high-dimensional vectors. DiSER builds the model by taking the annotated entities only, therefore it improves over existing approaches, which do not distinguish between an entity and its surface form. We evaluate the approach on a benchmark that contains the relative entity relatedness scores for 420 entity pairs. Our approach improves the accuracy by 12% on state of the art methods for computing entity relatedness. We also show an evaluation of DiSER in the Entity Disambiguation task on a dataset of 50 sentences with highly ambiguous entity mentions. It shows an improvement of 10% in precision over the best performing methods. In order to provide the resource that can be used to find out all the related entities for a given entity, a graph is constructed, where the nodes represent Wikipedia entities and the relatedness scores are reflected by the edges. Wikipedia contains more than 4.1 millions entities, which required efficient computation of the relatedness scores between the corresponding 17 trillions of entity-pairs.",
"title": ""
},
{
"docid": "3fa0ab962ec54cea182a293810cf7ce8",
"text": "Peer review is at the heart of the processes of not just medical journals but of all of science. It is the method by which grants are allocated, papers published, academics promoted, and Nobel prizes won. Yet it is hard to define. It has until recently been unstudied. And its defects are easier to identify than its attributes. Yet it shows no sign of going away. Famously, it is compared with democracy: a system full of problems but the least worst we have. When something is peer reviewed it is in some sense blessed. Even journalists recognize this. When the BMJ published a highly controversial paper that argued that a new ‘disease’, female sexual dysfunction, was in some ways being created by pharmaceutical companies, a friend who is a journalist was very excited—not least because reporting it gave him a chance to get sex onto the front page of a highly respectable but somewhat priggish newspaper (the Financial Times). ‘But,’ the news editor wanted to know, ‘was this paper peer reviewed?’. The implication was that if it had been it was good enough for the front page and if it had not been it was not. Well, had it been? I had read it much more carefully than I read many papers and had asked the author, who happened to be a journalist, to revise the paper and produce more evidence. But this was not peer review, even though I was a peer of the author and had reviewed the paper. Or was it? (I told my friend that it had not been peer reviewed, but it was too late to pull the story from the front page.)",
"title": ""
},
{
"docid": "fd9db865b26556e99923346a5eb51938",
"text": "Optogenetic approaches promise to revolutionize neuroscience by using light to manipulate neural activity in genetically or functionally defined neurons with millisecond precision. Harnessing the full potential of optogenetic tools, however, requires light to be targeted to the right neurons at the right time. Here we discuss some barriers and potential solutions to this problem. We review methods for targeting the expression of light-activatable molecules to specific cell types, under genetic, viral or activity-dependent control. Next we explore new ways to target light to individual neurons to allow their precise activation and inactivation. These techniques provide a precision in the temporal and spatial activation of neurons that was not achievable in previous experiments. In combination with simultaneous recording and imaging techniques, these strategies will allow us to mimic the natural activity patterns of neurons in vivo, enabling previously impossible 'dream experiments'.",
"title": ""
},
{
"docid": "35830166ddf17086a61ab07ec41be6b0",
"text": "As the need for Human Computer Interaction (HCI) designers increases so does the need for courses that best prepare students for their future work life. Multidisciplinary teamwork is what very frequently meets the graduates in their new work situations. Preparing students for such multidisciplinary work through education is not easy to achieve. In this paper, we investigate ways to engage computer science students, majoring in design, use, and interaction (with technology), in design practices through an advanced graduate course in interaction design. Here, we take a closer look at how prior embodied and explicit knowledge of HCI that all of the students have, combined with understanding of design practice through the course, shape them as human-computer interaction designers. We evaluate the results of the effort in terms of increase in creativity, novelty of ideas, body language when engaged in design activities, and in terms of perceptions of how well this course prepared the students for the work practice outside of the university. Keywords—HCI education; interaction design; studio; design education; multidisciplinary teamwork.",
"title": ""
},
{
"docid": "e9bc802e8ce6a823526084c82aa89c95",
"text": "Non-orthogonal multiple access (NOMA) is a promising radio access technique for further cellular enhancements toward 5G. Single-user multiple-input multiple-output (SU-MIMO) is one of the key technologies in LTE /LTE-Advanced systems. Thus, it is of great interest to study how to efficiently and effectively combine NOMA and SU-MIMO techniques together for further system performance improvement. This paper investigates the combination of NOMA with open-loop and closed-loop SU-MIMO. The key issues involved in the combination are presented and discussed, including scheduling algorithm, successive interference canceller (SIC) order determination, transmission power assignment and feedback design. The performances of NOMA with SU-MIMO are investigated by system-level simulations with very practical assumptions. Simulation results show that compared to orthogonal multiple access system, NOMA can achieve large performance gains both open-loop and closed-loop SU-MIMO, which are about 23% for cell average throughput and 33% for cell-edge user throughput.",
"title": ""
},
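The entry above treats NOMA plus SU-MIMO at the system level; the underlying mechanism is power-domain superposition with successive interference cancellation (SIC). Below is a minimal two-user, single-antenna sketch of the SIC rate computation. The flat-fading model, the variable names, and the fixed power split are illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np

def noma_two_user_rates(p_total, alpha, g_near, g_far, noise=1.0):
    """Achievable rates (bits/s/Hz) for a two-user downlink NOMA pair with SIC.

    alpha is the fraction of power given to the far (weak-channel) user; the
    near user cancels the far user's signal before decoding its own. This is a
    toy flat-fading model, not the system-level setup evaluated in the paper.
    """
    p_far, p_near = alpha * p_total, (1.0 - alpha) * p_total
    # Far user decodes its own signal, treating the near user's as interference
    r_far = np.log2(1.0 + p_far * g_far / (p_near * g_far + noise))
    # Near user first removes the far user's signal via SIC, then decodes its own
    r_near = np.log2(1.0 + p_near * g_near / noise)
    return r_near, r_far
```

With `g_near > g_far` and `alpha > 0.5`, most power goes to the weak user while the strong user still gains from cancelling it first, which is the basic trade-off behind the scheduling, SIC-ordering and power-assignment issues discussed above.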
{
"docid": "7a3573bfb32dc1e081d43fe9eb35a23b",
"text": "Collections of relational paraphrases have been automatically constructed from large text corpora, as a WordNet counterpart for the realm of binary predicates and their surface forms. However, these resources fall short in their coverage of hypernymy links (subsumptions) among the synsets of phrases. This paper closes this gap by computing a high-quality alignment between the relational phrases of the Patty taxonomy, one of the largest collections of this kind, and the verb senses of WordNet. To this end, we devise judicious features and develop a graph-based alignment algorithm by adapting and extending the SimRank random-walk method. The resulting taxonomy of relational phrases and verb senses, coined HARPY, contains 20,812 synsets organized into a Directed Acyclic Graph (DAG) with 616,792 hypernymy links. Our empirical assessment, indicates that the alignment links between Patty and WordNet have high accuracy, with Mean Reciprocal Rank (MRR) score 0.7 and Normalized Discounted Cumulative Gain (NDCG) score 0.73. As an additional extrinsic value, HARPY provides fine-grained lexical types for the arguments of verb senses in WordNet.",
"title": ""
},
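The HARPY entry above adapts and extends the SimRank random-walk method; as a point of reference, the plain textbook SimRank iteration looks like the sketch below. The adjacency-matrix representation, the decay constant, and the iteration count are illustrative, and the paper's adapted graph-based variant is not reproduced here.

```python
import numpy as np

def simrank(adj, c=0.8, n_iter=10):
    """Textbook SimRank over a directed graph given as an adjacency matrix.

    adj[i, j] == 1 means an edge i -> j. Two nodes are similar when their
    in-neighbors are similar; s(a, a) is fixed at 1.
    """
    n = adj.shape[0]
    in_nbrs = [np.nonzero(adj[:, j])[0] for j in range(n)]
    sim = np.eye(n)
    for _ in range(n_iter):
        new = np.eye(n)
        for a in range(n):
            for b in range(n):
                if a == b or len(in_nbrs[a]) == 0 or len(in_nbrs[b]) == 0:
                    continue
                s = sim[np.ix_(in_nbrs[a], in_nbrs[b])].sum()
                new[a, b] = c * s / (len(in_nbrs[a]) * len(in_nbrs[b]))
        sim = new
    return sim
```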
{
"docid": "e871e2b5bd1ed95fd5302e71f42208bf",
"text": "Chapters 2–7 make up Part II of the book: artificial neural networks. After introducing the basic concepts of neurons and artificial neuron learning rules in Chapter 2, Chapter 3 describes a particular formalism, based on signal-plus-noise, for the learning problem in general. After presenting the basic neural network types this chapter reviews the principal algorithms for error function minimization/optimization and shows how these learning issues are addressed in various supervised models. Chapter 4 deals with issues in unsupervised learning networks, such as the Hebbian learning rule, principal component learning, and learning vector quantization. Various techniques and learning paradigms are covered in Chapters 3–6, and especially the properties and relative merits of the multilayer perceptron networks, radial basis function networks, self-organizing feature maps and reinforcement learning are discussed in the respective four chapters. Chapter 7 presents an in-depth examination of performance issues in supervised learning, such as accuracy, complexity, convergence, weight initialization, architecture selection, and active learning. Par III (Chapters 8–15) offers an extensive presentation of techniques and issues in evolutionary computing. Besides the introduction to the basic concepts in evolutionary computing, it elaborates on the more important and most frequently used techniques on evolutionary computing paradigm, such as genetic algorithms, genetic programming, evolutionary programming, evolutionary strategies, differential evolution, cultural evolution, and co-evolution, including design aspects, representation, operators and performance issues of each paradigm. The differences between evolutionary computing and classical optimization are also explained. Part IV (Chapters 16 and 17) introduces swarm intelligence. It provides a representative selection of recent literature on swarm intelligence in a coherent and readable form. It illustrates the similarities and differences between swarm optimization and evolutionary computing. Both particle swarm optimization and ant colonies optimization are discussed in the two chapters, which serve as a guide to bringing together existing work to enlighten the readers, and to lay a foundation for any further studies. Part V (Chapters 18–21) presents fuzzy systems, with topics ranging from fuzzy sets, fuzzy inference systems, fuzzy controllers, to rough sets. The basic terminology, underlying motivation and key mathematical models used in the field are covered to illustrate how these mathematical tools can be used to handle vagueness and uncertainty. This book is clearly written and it brings together the latest concepts in computational intelligence in a friendly and complete format for undergraduate/postgraduate students as well as professionals new to the field. With about 250 pages covering such a wide variety of topics, it would be impossible to handle everything at a great length. Nonetheless, this book is an excellent choice for readers who wish to familiarize themselves with computational intelligence techniques or for an overview/introductory course in the field of computational intelligence. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond—Bernhard Schölkopf and Alexander Smola, (MIT Press, Cambridge, MA, 2002, ISBN 0-262-19475-9). Reviewed by Amir F. Atiya.",
"title": ""
},
{
"docid": "7ec5faf2081790e7baa1832d5f9ab5bd",
"text": "Text detection in complex background images is a challenging task for intelligent vehicles. Actually, almost all the widely-used systems focus on commonly used languages while for some minority languages, such as the Uyghur language, text detection is paid less attention. In this paper, we propose an effective Uyghur language text detection system in complex background images. First, a new channel-enhanced maximally stable extremal regions (MSERs) algorithm is put forward to detect component candidates. Second, a two-layer filtering mechanism is designed to remove most non-character regions. Third, the remaining component regions are connected into short chains, and the short chains are extended by a novel extension algorithm to connect the missed MSERs. Finally, a two-layer chain elimination filter is proposed to prune the non-text chains. To evaluate the system, we build a new data set by various Uyghur texts with complex backgrounds. Extensive experimental comparisons show that our system is obviously effective for Uyghur language text detection in complex background images. The F-measure is 85%, which is much better than the state-of-the-art performance of 75.5%.",
"title": ""
},
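The detection pipeline in the preceding entry starts from MSER component candidates followed by geometric filtering. A rough feel for that first stage can be had with stock OpenCV, as in the sketch below; the channel-enhanced MSER, the two-layer filters, and the chain extension from the paper are not part of OpenCV, and the file name and thresholds here are made up for illustration.

```python
import cv2

# Hypothetical input file; plain MSER only approximates the paper's first stage.
img = cv2.imread("scene.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

mser = cv2.MSER_create()                  # default parameters
regions, boxes = mser.detectRegions(gray)

kept = []
for x, y, w, h in boxes:
    aspect = w / float(h)
    # Crude geometric filter standing in for the paper's component filtering
    if 0.1 < aspect < 10 and 20 < w * h < 0.2 * gray.size:
        kept.append((x, y, w, h))
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 1)

cv2.imwrite("candidates.jpg", img)
```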
{
"docid": "9679713ae8ab7e939afba18223086128",
"text": "If, as many psychologists seem to believe, im mediate memory represents a distinct system or set of processes from long-term memory (L TM), then what might· it be for? This fundamental, functional question was surprisingly unanswer able in the 1970s, given the volume of research that had explored short-term memory (STM), and given the ostensible role that STM was thought to play in cognitive control (Atkinson & Shiffrin, 1971 ). Indeed, failed attempts to link STM to complex cognitive· functions, such as reading comprehension, loomed large in Crow der's (1982) obituary for the concept. Baddeley and Hitch ( 197 4) tried to validate immediate memory's functions by testing sub jects in reasoning, comprehension, and list learning tasks at the same time their memory was occupied by irrelevant material. Generally, small memory loads (i.e., three or fewer items) were retained with virtually no effect on the primary tasks, whereas memory loads of six items consistently impaired reasoning, compre hension, and learning. Baddeley and Hitch therefore argued that \"working memory\" (WM)",
"title": ""
}
] |
scidocsrr
|
e5bbc787e841e3c470de98a90b382bed
|
Video segmentation by tracing discontinuities in a trajectory embedding
|
[
{
"docid": "fea6d5cffd6b2943fac155231e7e9d89",
"text": "We propose a principled account on multiclass spectral clustering. Given a discrete clustering formulation, we first solve a relaxed continuous optimization problem by eigendecomposition. We clarify the role of eigenvectors as a generator of all optimal solutions through orthonormal transforms. We then solve an optimal discretization problem, which seeks a discrete solution closest to the continuous optima. The discretization is efficiently computed in an iterative fashion using singular value decomposition and nonmaximum suppression. The resulting discrete solutions are nearly global-optimal. Our method is robust to random initialization and converges faster than other clustering methods. Experiments on real image segmentation are reported. Spectral graph partitioning methods have been successfully applied to circuit layout [3, 1], load balancing [4] and image segmentation [10, 6]. As a discriminative approach, they do not make assumptions about the global structure of data. Instead, local evidence on how likely two data points belong to the same class is first collected and a global decision is then made to divide all data points into disjunct sets according to some criterion. Often, such a criterion can be interpreted in an embedding framework, where the grouping relationships among data points are preserved as much as possible in a lower-dimensional representation. What makes spectral methods appealing is that their global-optima in the relaxed continuous domain are obtained by eigendecomposition. However, to get a discrete solution from eigenvectors often requires solving another clustering problem, albeit in a lower-dimensional space. That is, eigenvectors are treated as geometrical coordinates of a point set. Various clustering heuristics such as Kmeans [10, 9], transportation [2], dynamic programming [1], greedy pruning or exhaustive search [3, 10] are subsequently employed on the new point set to retrieve partitions. We show that there is a principled way to recover a discrete optimum. This is based on a fact that the continuous optima consist not only of the eigenvectors, but of a whole family spanned by the eigenvectors through orthonormal transforms. The goal is to find the right orthonormal transform that leads to a discretization.",
"title": ""
}
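The positive passage above recovers a discrete clustering from the relaxed eigenvector solution by alternating non-maximum suppression with an SVD-based orthonormal transform. A compact sketch of that discretization loop follows (Python/NumPy); it assumes the k row-normalized eigenvectors of the normalized affinity matrix are already computed, and it simplifies the initialization and convergence details of the original method.

```python
import numpy as np

def discretize(eigvecs, n_iter=100, tol=1e-10):
    """Turn continuous spectral embeddings into near-binary cluster indicators."""
    X = eigvecs / np.linalg.norm(eigvecs, axis=1, keepdims=True)  # row-normalize
    n, k = X.shape
    R = np.eye(k)                       # orthonormal transform to be estimated
    prev_obj = 0.0
    for _ in range(n_iter):
        Y = X @ R                       # rotate the continuous optimum
        D = np.zeros_like(Y)            # non-maximum suppression per row
        D[np.arange(n), Y.argmax(axis=1)] = 1.0
        # Orthonormal R closest to the discrete D (maximizes tr(R^T X^T D))
        U, s, Vt = np.linalg.svd(X.T @ D)
        if abs(s.sum() - prev_obj) < tol:
            break
        prev_obj = s.sum()
        R = U @ Vt
    return D.argmax(axis=1)             # cluster label per data point
```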
] |
[
{
"docid": "aa32bff910ce6c7b438dc709b28eefe3",
"text": "Here we sketch the rudiments of what constitutes a smart city which we define as a city in which ICT is merged with traditional infrastructures, coordinated and integrated using new digital technologies. We first sketch our vision defining seven goals which concern: developing a new understanding of urban problems; effective and feasible ways to coordinate urban technologies; models and methods for using urban data across spatial and temporal scales; developing new technologies for communication and dissemination; developing new forms of urban governance and organisation; defining critical problems relating to cities, transport, and energy; and identifying risk, uncertainty, and hazards in the smart city. To this, we add six research challenges: to relate the infrastructure of smart cities to their operational functioning and planning through management, control and optimisation; to explore the notion of the city as a laboratory for innovation; to provide portfolios of urban simulation which inform future designs; to develop technologies that ensure equity, fairness and realise a better quality of city life; to develop technologies that ensure informed participation and create shared knowledge for democratic city governance; and to ensure greater and more effective mobility and access to opportunities for a e-mail: m.batty@ucl.ac.uk 482 The European Physical Journal Special Topics urban populations. We begin by defining the state of the art, explaining the science of smart cities. We define six scenarios based on new cities badging themselves as smart, older cities regenerating themselves as smart, the development of science parks, tech cities, and technopoles focused on high technologies, the development of urban services using contemporary ICT, the use of ICT to develop new urban intelligence functions, and the development of online and mobile forms of participation. Seven project areas are then proposed: Integrated Databases for the Smart City, Sensing, Networking and the Impact of New Social Media, Modelling Network Performance, Mobility and Travel Behaviour, Modelling Urban Land Use, Transport and Economic Interactions, Modelling Urban Transactional Activities in Labour and Housing Markets, Decision Support as Urban Intelligence, Participatory Governance and Planning Structures for the Smart City. Finally we anticipate the paradigm shifts that will occur in this research and define a series of key demonstrators which we believe are important to progressing a science",
"title": ""
},
{
"docid": "7eebeb133a9881e69bf3c367b9e20751",
"text": "Advanced driver assistance systems or highly automated driving systems for lane change maneuvers are expected to enhance highway traffic safety, transport efficiency, and driver comfort. To extend the capability of current advanced driver assistance systems, and eventually progress to highly automated highway driving, the task of automatically determine if, when, and how to perform a lane change maneuver, is essential. This paper thereby presents a low-complexity lane change maneuver algorithm which determines whether a lane change maneuver is desirable, and if so, selects an appropriate inter-vehicle traffic gap and time instance to perform the maneuver, and calculates the corresponding longitudinal and lateral control trajectory. The ability of the proposed lane change maneuver algorithm to make appropriate maneuver decisions and generate smooth and safe lane change trajectories in various traffic situations is demonstrated by simulation and experimental results.",
"title": ""
},
{
"docid": "e0fb10bf5f0206c8cf3f97f5daa33fc0",
"text": "Existing techniques on adversarial malware generation employ feature mutations based on feature vectors extracted from malware. However, most (if not all) of these techniques suffer from a common limitation: feasibility of these attacks is unknown. The synthesized mutations may break the inherent constraints posed by code structures of the malware, causing either crashes or malfunctioning of malicious payloads. To address the limitation, we present Malware Recomposition Variation (MRV), an approach that conducts semantic analysis of existing malware to systematically construct new malware variants for malware detectors to test and strengthen their detection signatures/models. In particular, we use two variation strategies (i.e., malware evolution attack and malware confusion attack) following structures of existing malware to enhance feasibility of the attacks. Upon the given malware, we conduct semantic-feature mutation analysis and phylogenetic analysis to synthesize mutation strategies. Based on these strategies, we perform program transplantation to automatically mutate malware bytecode to generate new malware variants. We evaluate our MRV approach on actual malware variants, and our empirical evaluation on 1,935 Android benign apps and 1,917 malware shows that MRV produces malware variants that can have high likelihood to evade detection while still retaining their malicious behaviors. We also propose and evaluate three defense mechanisms to counter MRV.",
"title": ""
},
{
"docid": "5e6c24f5f3a2a3c3b0aff67e747757cb",
"text": "Traps have been used extensively to provide early warning of hidden pest infestations. To date, however, there is only one type of trap on the market in the U.K. for storage mites, namely the BT mite trap, or monitor. Laboratory studies have shown that under the test conditions (20 °C, 65% RH) the BT trap is effective at detecting mites for at least 10 days for all three species tested: Lepidoglyphus destructor, Tyrophagus longior and Acarus siro. Further tests showed that all three species reached a trap at a distance of approximately 80 cm in a 24 h period. In experiments using 100 mites of each species, and regardless of either temperature (15 or 20 °C) or relative humidity (65 or 80% RH), the most abundant species in the traps was T. longior, followed by A. siro then L. destructor. Trap catches were highest at 20 °C and 65% RH. Temperature had a greater effect on mite numbers than humidity. Tests using different densities of each mite species showed that the number of L. destructor found in/on the trap was significantly reduced when either of the other two species was dominant. It would appear that there is an interaction between L. destructor and the other two mite species which affects relative numbers found within the trap.",
"title": ""
},
{
"docid": "da4ec6dcf7f47b8ec0261195db7af5ca",
"text": "Smart factories are on the verge of becoming the new industrial paradigm, wherein optimization permeates all aspects of production, from concept generation to sales. To fully pursue this paradigm, flexibility in the production means as well as in their timely organization is of paramount importance. AI is planning a major role in this transition, but the scenarios encountered in practice might be challenging for current tools. Task planning is one example where AI enables more efficient and flexible operation through an online automated adaptation and rescheduling of the activities to cope with new operational constraints and demands. In this paper we present SMarTplan, a task planner specifically conceived to deal with real-world scenarios in the emerging smart factory paradigm. Including both special-purpose and general-purpose algorithms, SMarTplan is based on current automated reasoning technology and it is designed to tackle complex application domains. In particular, we show its effectiveness on a logistic scenario, by comparing its specialized version with the general purpose one, and extending the comparison to other state-of-the-art task planners.",
"title": ""
},
{
"docid": "4193bd310422b555faa5f6de8a1a94cd",
"text": "Although hundreds of chemical compounds have been identified in grapes and wines, only a few compounds actually contribute to sensory perception of wine flavor. This critical review focuses on volatile compounds that contribute to wine aroma and provides an overview of recent developments in analytical techniques for volatiles analysis, including methods used to identify the compounds that make the greatest contributions to the overall aroma. Knowledge of volatile composition alone is not enough to completely understand the overall wine aroma, however, due to complex interactions of odorants with each other and with other nonvolatile matrix components. These interactions and their impact on aroma volatility are the focus of much current research and are also reviewed here. Finally, the sequencing of the grapevine and yeast genomes in the past approximately 10 years provides the opportunity for exciting multidisciplinary studies aimed at understanding the influences of multiple genetic and environmental factors on grape and wine flavor biochemistry and metabolism (147 references).",
"title": ""
},
{
"docid": "f3a89c01dbbd40663811817ef7ba4be3",
"text": "In order to address the mental health disparities that exist for Latino adolescents in the United States, psychologists must understand specific factors that contribute to the high risk of mental health problems in Latino youth. Given the significant percentage of Latino youth who are immigrants or the children of immigrants, acculturation is a key factor in understanding mental health among this population. However, limitations in the conceptualization and measurement of acculturation have led to conflicting findings in the literature. Thus, the goal of the current review is to examine and critique research linking acculturation and mental health outcomes for Latino youth, as well as to integrate individual, environmental, and family influences of this relationship. An integrated theoretical model is presented and implications for clinical practice and future directions are discussed.",
"title": ""
},
{
"docid": "12adb5e324d971d2c752f2193cec3126",
"text": "Despite recent excitement generated by the P2P paradigm and despite surprisingly fast deployment of some P2P applications, there are few quantitative evaluations of P2P systems behavior. Due to its open architecture and achieved scale, Gnutella is an interesting P2P architecture case study. Gnutella, like most other P2P applications, builds at the application level a virtual network with its own routing mechanisms. The topology of this overlay network and the routing mechanisms used have a significant influence on application properties such as performance, reliability, and scalability. We built a ‘crawler’ to extract the topology of Gnutella’s application level network, we analyze the topology graph and evaluate generated network traffic. We find that although Gnutella is not a pure power-law network, its current configuration has the benefits and drawbacks of a power-law structure. These findings lead us to propose changes to Gnutella protocol and implementations that bring significant performance and scalability improvements.",
"title": ""
},
{
"docid": "eeff8964179ebd51745fece9b2fd50f3",
"text": "In this paper, we present a novel structure-preserving image completion approach equipped with dynamic patches. We formulate the image completion problem into an energy minimization framework that accounts for coherence within the hole and global coherence simultaneously. The completion of the hole is achieved through iterative optimizations combined with a multi-scale solution. In order to avoid abnormal structure and disordered texture, we utilize a dynamic patch system to achieve efficient structure restoration. Our dynamic patch system functions in both horizontal and vertical directions of the image pyramid. In the horizontal direction, we conduct a parallel search for multi-size patches in each pyramid level and design a competitive mechanism to select the most suitable patch. In the vertical direction, we use large patches in higher pyramid level to maximize the structure restoration and use small patches in lower pyramid level to reduce computational workload. We test our approach on massive images with complex structure and texture. The results are visually pleasing and preserve nice structure. Apart from effective structure preservation, our approach outperforms previous state-of-the-art methods in time consumption.",
"title": ""
},
{
"docid": "5096194bcbfebd136c74c30b998fb1f3",
"text": "This present study is designed to propose a conceptual framework extended from the previously advanced Theory of Acceptance Model (TAM). The framework makes it possible to examine the effects of social media, and perceived risk as the moderating effects between intention and actual purchase to be able to advance the Theory of Acceptance Model (TAM). 400 samples will be randomly selected among Saudi in Jeddah, Dammam and Riyadh. Data will be collected using questionnaire survey. As the research involves the analysis of numerical data, the assessment is carried out using Structural Equation Model (SEM). The hypothesis will be tested and the result is used to explain the proposed TAM. The findings from the present study will be beneficial for marketers to understand the intrinsic behavioral factors that influence consumers' selection hence avoid trial and errors in their advertising drives.",
"title": ""
},
{
"docid": "c3112126fa386710fb478dcfe978630e",
"text": "In recent years, distributed intelligent microelectromechanical systems (DiMEMSs) have appeared as a new form of distributed embedded systems. DiMEMSs contain thousands or millions of removable autonomous devices, which will collaborate with each other to achieve the final target of the whole system. Programming such systems is becoming an extremely difficult problem. The difficulty is due not only to their inherent nature of distributed collaboration, mobility, large scale, and limited resources of their devices (e.g., in terms of energy, memory, communication, and computation) but also to the requirements of real-time control and tolerance for uncertainties such as inaccurate actuation and unreliable communications. As a result, existing programming languages for traditional distributed and embedded systems are not suitable for DiMEMSs. In this article, we first introduce the origin and characteristics of DiMEMSs and then survey typical implementations of DiMEMSs and related research hotspots. Finally, we propose a real-time programming framework that can be used to design new real-time programming languages for DiMEMSs. The framework is composed of three layers: a real-time programming model layer, a compilation layer, and a runtime system layer. The design challenges and requirements of these layers are investigated. The framework is then discussed in further detail and suggestions for future research are given.",
"title": ""
},
{
"docid": "69e90a5882bdea0055bb61463687b0c1",
"text": "www.frontiersinecology.org © The Ecological Society of America E generate a range of goods and services important for human well-being, collectively called ecosystem services. Over the past decade, progress has been made in understanding how ecosystems provide services and how service provision translates into economic value (Daily 1997; MA 2005; NRC 2005). Yet, it has proven difficult to move from general pronouncements about the tremendous benefits nature provides to people to credible, quantitative estimates of ecosystem service values. Spatially explicit values of services across landscapes that might inform land-use and management decisions are still lacking (Balmford et al. 2002; MA 2005). Without quantitative assessments, and some incentives for landowners to provide them, these services tend to be ignored by those making land-use and land-management decisions. Currently, there are two paradigms for generating ecosystem service assessments that are meant to influence policy decisions. Under the first paradigm, researchers use broad-scale assessments of multiple services to extrapolate a few estimates of values, based on habitat types, to entire regions or the entire planet (eg Costanza et al. 1997; Troy and Wilson 2006; Turner et al. 2007). Although simple, this “benefits transfer” approach incorrectly assumes that every hectare of a given habitat type is of equal value – regardless of its quality, rarity, spatial configuration, size, proximity to population centers, or the prevailing social practices and values. Furthermore, this approach does not allow for analyses of service provision and changes in value under new conditions. For example, if a wetland is converted to agricultural land, how will this affect the provision of clean drinking water, downstream flooding, climate regulation, and soil fertility? Without information on the impacts of land-use management practices on ecosystem services production, it is impossible to design policies or payment programs that will provide the desired ecosystem services. In contrast, under the second paradigm for generating policy-relevant ecosystem service assessments, researchers carefully model the production of a single service in a small area with an “ecological production function” – how provision of that service depends on local ecological variables (eg Kaiser and Roumasset 2002; Ricketts et al. 2004). Some of these production function approaches also use market prices and non-market valuation methods to estimate the economic value of the service and how that value changes under different ecological conditions. Although these methods are superior to the habitat assessment benefits transfer approach, these studies lack both the scope (number of services) and scale (geographic and temporal) to be relevant for most policy questions. What is needed are approaches that combine the rigor of the small-scale studies with the breadth of broad-scale assessments (see Boody et al. 2005; Jackson et al. 2005; ECOSYSTEM SERVICES ECOSYSTEM SERVICES ECOSYSTEM SERVICES",
"title": ""
},
{
"docid": "a3ae9af5962d5df8a001da8964edfe3b",
"text": "The problem of blind demodulation of multiuser information symbols in a high-rate code-division multiple-access (CDMA) network in the presence of both multiple-access interference (MAI) and intersymbol interference (ISI) is considered. The dispersive CDMA channel is first cast into a multipleinput multiple-output (MIMO) signal model framework. By applying the theory of blind MIMO channel identification and equalization, it is then shown that under certain conditions the multiuser information symbols can be recovered without any prior knowledge of the channel or the users’ signature waveforms (including the desired user’s signature waveform), although the algorithmic complexity of such an approach is prohibitively high. However, in practice, the signature waveform of the user of interest is always available at the receiver. It is shown that by incorporating this knowledge, the impulse response of each user’s dispersive channel can be identified using a subspace method. It is further shown that based on the identified signal subspace parameters and the channel response, two linear detectors that are capable of suppressing both MAI and ISI, i.e., a zeroforcing detector and a minimum-mean-square-errror (MMSE) detector, can be constructed in closed form, at almost no extra computational cost. Data detection can then be furnished by applying these linear detectors (obtained blindly) to the received signal. The major contribution of this paper is the development of these subspace-based blind techniques for joint suppression of MAI and ISI in the dispersive CDMA channels.",
"title": ""
},
{
"docid": "c9a78279a2dfb2b8ed7ab2424aa41c34",
"text": "It is widely recognized that people sometimes use theory-of-mind judgments in moral cognition. A series of recent studies shows that the connection can also work in the opposite direction: moral judgments can sometimes be used in theory-of-mind cognition. Thus, there appear to be cases in which people's moral judgments actually serve as input to the process underlying their application of theory-of-mind concepts.",
"title": ""
},
{
"docid": "9ba1b3b31d077ad9a8b05e3736cb8716",
"text": "This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on handcrafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. Using a frame by frame labeling, we obtain nearly state-of-the-art performance on the NYU-v2 depth dataset with an accuracy of 64.5%. We then show that the labeling can be further improved by exploiting the temporal consistency in the video sequence of the scene. To that goal, we present a method producing temporally consistent superpixels from a streaming video. Among the different methods producing superpixel segmentations of an image, the graph-based approach of Felzenszwalb and Huttenlocher is broadly employed. One of its interesting properties is that the regions are computed in a greedy manner in quasi-linear time by using a minimum spanning tree. In a framework exploiting minimum spanning trees all along, we propose an efficient video segmentation approach that computes temporally consistent pixels in a causal manner, filling the need for causal and real-time applications. We illustrate the labeling of indoor scenes in video sequences that could be processed in real-time using appropriate hardware such as an FPGA.",
"title": ""
},
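The entry above builds its temporally consistent superpixels on Felzenszwalb–Huttenlocher graph-based segmentation. The per-frame (non-causal, non-temporal) baseline is available off the shelf, for example in scikit-image; the parameter values in the sketch below are arbitrary illustrations, and the streaming, temporally consistent variant described in the paper is not part of the library.

```python
from skimage import data, segmentation

# Single-frame Felzenszwalb-Huttenlocher superpixels on a stand-in RGB frame.
img = data.astronaut()
labels = segmentation.felzenszwalb(img, scale=100, sigma=0.8, min_size=50)
print("superpixels:", labels.max() + 1)
```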
{
"docid": "187fe997bb78bf60c5aaf935719df867",
"text": "Access to clean, affordable and reliable energy has been a cornerstone of the world's increasing prosperity and economic growth since the beginning of the industrial revolution. Our use of energy in the twenty–first century must also be sustainable. Solar and water–based energy generation, and engineering of microbes to produce biofuels are a few examples of the alternatives. This Perspective puts these opportunities into a larger context by relating them to a number of aspects in the transportation and electricity generation sectors. It also provides a snapshot of the current energy landscape and discusses several research and development opportunities and pathways that could lead to a prosperous, sustainable and secure energy future for the world.",
"title": ""
},
{
"docid": "a5391753b4ac2b7cab9f58f28348ab8d",
"text": "We present a temporal map of key processes that occur during decision making, which consists of three stages: 1) formation of preferences among options, 2) selection and execution of an action, and 3) experience or evaluation of an outcome. This framework can be used to integrate findings of traditional choice psychology, neuropsychology, brain lesion studies, and functional neuroimaging. Decision making is distributed across various brain centers, which are differentially active across these stages of decision making. This approach can be used to follow developmental trajectories of the different stages of decision making and to identify unique deficits associated with distinct psychiatric disorders.",
"title": ""
},
{
"docid": "2a44dc875eac50b8fa08ea98ab5ca463",
"text": "Next-generation e-Science features large-scale, compute-intensive workflows of many computing modules that are typically executed in a distributed manner. With the recent emergence of cloud computing and the rapid deployment of cloud infrastructures, an increasing number of scientific workflows have been shifted or are in active transition to cloud environments. As cloud computing makes computing a utility, scientists across different application domains are facing the same challenge of reducing financial cost in addition to meeting the traditional goal of performance optimization. We develop a prototype generic workflow system by leveraging existing technologies for a quick evaluation of scientific workflow optimization strategies. We construct analytical models to quantify the network performance of scientific workflows using cloud-based computing resources, and formulate a task scheduling problem to minimize the workflow end-to-end delay under a user-specified financial constraint. We rigorously prove that the proposed problem is not only NP-complete but also non-approximable. We design a heuristic solution to this problem, and illustrate its performance superiority over existing methods through extensive simulations and real-life workflow experiments based on proof-of-concept implementation and deployment in a local cloud testbed.",
"title": ""
},
{
"docid": "47faebfa7d65ebf277e57436cf7c2ca4",
"text": "Steganography is a method which can put data into a media without a tangible impact on the cover media. In addition, the hidden data can be extracted with minimal differences. In this paper, twodimensional discrete wavelet transform is used for steganography in 24-bit color images. This steganography is of blind type that has no need for original images to extract the secret image. In this algorithm, by the help of a structural similarity and a two-dimensional correlation coefficient, it is tried to select part of sub-band cover image instead of embedding location. These sub-bands are obtained by 3levels of applying the DWT. Also to increase the steganography resistance against cropping or insert visible watermark, two channels of color image is used simultaneously. In order to raise the security, an encryption algorithm based on Arnold transform was also added to the steganography operation. Because diversity of chaos scenarios is limited in Arnold transform, it could be improved by its mirror in order to increase the diversity of key. Additionally, an ability is added to encryption algorithm that can still maintain its efficiency against image crop. Transparency of steganography image is measured by the peak signalto-noise ratio that indicates the adequate transparency of steganography process. Extracted image similarity is also measured by two-dimensional correlation coefficient with more than 99% similarity. Moreover, steganography resistance against increasing and decreasing brightness and contrast, lossy compression, cropping image, changing scale and adding noise is acceptable",
"title": ""
},
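The scheme above operates on 3-level 2-D DWT sub-bands of a color channel. The sketch below only illustrates that decomposition/reconstruction step with PyWavelets and a toy additive embedding; the sub-band selection by SSIM/correlation, the two-channel redundancy, and the Arnold-transform encryption from the paper are not reproduced, and the data and scale factor are made up.

```python
import numpy as np
import pywt

cover = np.random.rand(512, 512)              # stand-in for one cover-image channel
coeffs = pywt.wavedec2(cover, "haar", level=3)
cA3, details = coeffs[0], list(coeffs[1:])    # approximation + 3 levels of (cH, cV, cD)

# Toy embedding: add a scaled secret into one level-3 detail sub-band
cH3, cV3, cD3 = details[0]
secret = np.random.rand(*cH3.shape)
details[0] = (cH3 + 0.01 * secret, cV3, cD3)

stego = pywt.waverec2([cA3] + details, "haar")  # reconstruct the stego channel
```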
{
"docid": "7677f90e0d949488958b27422bdffeb5",
"text": "This vignette is a slightly modified version of Koenker (2008a). It was written in plain latex not Sweave, but all data and code for the examples described in the text are available from either the JSS website or from my webpages. Quantile regression for censored survival (duration) data offers a more flexible alternative to the Cox proportional hazard model for some applications. We describe three estimation methods for such applications that have been recently incorporated into the R package quantreg: the Powell (1986) estimator for fixed censoring, and two methods for random censoring, one introduced by Portnoy (2003), and the other by Peng and Huang (2008). The Portnoy and Peng-Huang estimators can be viewed, respectively, as generalizations to regression of the Kaplan-Meier and NelsonAalen estimators of univariate quantiles for censored observations. Some asymptotic and simulation comparisons are made to highlight advantages and disadvantages of the three methods.",
"title": ""
}
] |
scidocsrr
|
9263a2626034f037af2424c5c2c6b5cc
|
Design Decisions: The Bridge between Rationale and Architecture
|
[
{
"docid": "5ae890862d844ce03359624c3cb2012b",
"text": "Spend your time even for only few minutes to read a book. Reading a book will never reduce and waste your time to be useless. Reading, for some people become a need that is to do every day such as spending time for eating. Now, what about you? Do you like to read a book? Now, we will show you a new book enPDFd software architecture in practice second edition that can be a new way to explore the knowledge. When reading this book, you can get one thing to always remember in every reading time, even step by step.",
"title": ""
}
] |
[
{
"docid": "3c4a8623330c48558ca178a82b68f06c",
"text": "Humans assimilate information from the traffic environment mainly through visual perception. Obviously, the dominant information required to conduct a vehicle can be acquired with visual sensors. However, in contrast to most other sensor principles, video signals contain relevant information in a highly indirect manner and hence visual sensing requires sophisticated machine vision and image understanding techniques. This paper provides an overview on the state of research in the field of machine vision for intelligent vehicles. The functional spectrum addressed covers the range from advanced driver assistance systems to autonomous driving. The organization of the article adopts the typical order in image processing pipelines that successively condense the rich information and vast amount of data in video sequences. Data-intensive low-level “early vision” techniques first extract features that are later grouped and further processed to obtain information of direct relevance for vehicle guidance. Recognition and classification schemes allow to identify specific objects in a traffic scene. Recently, semantic labeling techniques using convolutional neural networks have achieved impressive results in this field. High-level decisions of intelligent vehicles are often influenced by map data. The emerging role of machine vision in the mapping and localization process is illustrated at the example of autonomous driving. Scene representation methods are discussed that organize the information from all sensors and data sources and thus build the interface between perception and planning. Recently, vision benchmarks have been tailored to various tasks in traffic scene perception that provide a metric for the rich diversity of machine vision methods. Finally, the paper addresses computing architectures suited to real-time implementation. Throughout the paper, numerous specific examples and real world experiments with prototype vehicles are presented.",
"title": ""
},
{
"docid": "b8124460ac2eeab0a5afa88ba6f92804",
"text": "Evidence from diverse literatures supports the viewpoint that two modes of self-regulation exist, a lower-order system that responds quickly to associative cues of the moment and a higher-order system that responds more reflectively and planfully; that low serotonergic function is linked to relative dominance of the lower-order system; that how dominance of the lower-order system is manifested depends on additional variables; and that low serotonergic function therefore can promote behavioral patterns as divergent as impulsive aggression and lethargic depression. Literatures reviewed include work on two-mode models; studies of brain function supporting the biological plausibility of the two-mode view and the involvement of serotonergic pathways in functions pertaining to it; and studies relating low serotonergic function to impulsiveness, aggression (including extreme violence), aspects of personality, and depression vulnerability. Substantial differences between depression and other phenomena reviewed are interpreted by proposing that depression reflects both low serotonergic function and low reward sensitivity. The article closes with brief consideration of the idea that low serotonergic function relates to even more diverse phenomena, whose natures depend in part on sensitivities of other systems.",
"title": ""
},
{
"docid": "e791574d97507c1ecc9912e6d5c5f1b0",
"text": "Sequential pattern mining is an important data mining problem with broad applications. However, it is also a difficult problem since the mining may have to generate or examine a combinatorially explosive number of intermediate subsequences. Most of the previously developed sequential pattern mining methods, such as GSP, explore a candidate generation-and-test approach [R. Agrawal et al. (1994)] to reduce the number of candidates to be examined. However, this approach may not be efficient in mining large sequence databases having numerous patterns and/or long patterns. In this paper, we propose a projection-based, sequential pattern-growth approach for efficient mining of sequential patterns. In this approach, a sequence database is recursively projected into a set of smaller projected databases, and sequential patterns are grown in each projected database by exploring only locally frequent fragments. Based on an initial study of the pattern growth-based sequential pattern mining, FreeSpan [J. Han et al. (2000)], we propose a more efficient method, called PSP, which offers ordered growth and reduced projected databases. To further improve the performance, a pseudoprojection technique is developed in PrefixSpan. A comprehensive performance study shows that PrefixSpan, in most cases, outperforms the a priori-based algorithm GSP, FreeSpan, and SPADE [M. Zaki, (2001)] (a sequential pattern mining algorithm that adopts vertical data format), and PrefixSpan integrated with pseudoprojection is the fastest among all the tested algorithms. Furthermore, this mining methodology can be extended to mining sequential patterns with user-specified constraints. The high promise of the pattern-growth approach may lead to its further extension toward efficient mining of other kinds of frequent patterns, such as frequent substructures.",
"title": ""
},
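The pattern-growth idea described above—grow a prefix, project the database onto the suffixes, and recurse on locally frequent items—can be illustrated with a toy version restricted to sequences of single items. This sketch omits itemset elements and the pseudoprojection optimization that the full algorithm supports.

```python
def prefixspan(db, min_sup, prefix=None, patterns=None):
    """Tiny PrefixSpan-style pattern growth over sequences of single items."""
    prefix = prefix or []
    patterns = patterns if patterns is not None else []
    # Count, per candidate item, how many sequences in the (projected) db contain it
    counts = {}
    for seq in db:
        for item in set(seq):
            counts[item] = counts.get(item, 0) + 1
    for item, sup in counts.items():
        if sup < min_sup:
            continue
        new_prefix = prefix + [item]
        patterns.append((new_prefix, sup))
        # Project: keep the suffix after the first occurrence of the item
        projected = []
        for seq in db:
            if item in seq:
                suffix = seq[seq.index(item) + 1:]
                if suffix:
                    projected.append(suffix)
        prefixspan(projected, min_sup, new_prefix, patterns)
    return patterns
```

For example, `prefixspan([list("abcd"), list("acbd"), list("abd")], min_sup=2)` returns patterns such as `(['a', 'b', 'd'], 3)` alongside their shorter prefixes.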
{
"docid": "de2f36b553a1b7d53659fd5d42a051d9",
"text": "In order to fit the diverse scenes in life, more and more people choose to join different types of social networks simultaneously. These different networks often contain the information that people leave in a particular scene. Under the circumstances, identifying the same person across different social networks is a crucial way to help us understand the user from multiple aspects. The current solution to this problem focuses on using only profile matching or relational matching method. Some other methods take the two aspect of information into consideration, but they associate the profile similarity with relation similarity simply by a parameter. The matching results on two dimensions may have large difference, directly link them may reduce the overall similarity. Unlike the most of the previous work, we propose to utilize collaborative training method to tackle this problem. We run experiments on two real-world social network datasets, and the experimental results confirmed the effectiveness of our method.",
"title": ""
},
{
"docid": "4e32fd1e49dc82e84df6de2714e918b2",
"text": "Chronic obstructive pulmonary disease (COPD), a disabling combination of emphysema and chronic bronchitis, relies on spirometric lung function measurements for clinical diagnosis and treatment. Because spirometers are unavailable in most of the developing world, this project developed a low cost point of care spirometer prototype for the mobile phone called the “TeleSpiro.” The key contributions of this work are the design of a novel repeat-use, sterilisable, low cost, phone-powered prototype meeting developing world user requirements. A differential pressure sensor, dual humidity/pressure sensor, microcontroller and USB hardware were mounted on a printed circuit board for measurement of air flow in a custom machine-lathed respiratory air flow tube. The embedded circuit electronics were programmed to transmit data to and receive power directly from either a computer or Android smartphone without the use of batteries. Software was written to filter and extract respiratory cycles from the digitised data. Differential pressure signals from Telespiro showed robust, reproducible responses to the delivery of physiologic lung volumes. The designed device satisfied the stringent design criteria of resource-limited settings and makes substantial inroads in providing evidence-based chronic respiratory disease management.",
"title": ""
},
{
"docid": "65ba10afbe83c269530e0779499c653c",
"text": "The subjective nature of qualitative research necessitates scrupulous scientific methods to ensure valid results. Although qualitative methods such as grounded theory, phenomenology, and ethnography yield rich data, consumers of research need to be able to trust the findings reported in such studies. Researchers are responsible for establishing the trustworthiness of qualitative research through a variety of ways. Specific challenges faced in the field can seriously threaten the dependability of the data. However, by minimizing potential errors that can occur when doing fieldwork, researchers can increase the trustworthiness of the study. The purpose of this article is to present three of the pitfalls that can occur in qualitative research during data collection and transcription: equipment failure, environmental hazards, and transcription errors. Specific strategies to minimize the risk for avoidable errors will be discussed.",
"title": ""
},
{
"docid": "d0ea12e1aa134a1b033d78e9c94ff59e",
"text": "This introductory tutorial presents an over view of the process of conducting a simulation study of any discrete system. The basic viewpoint is that conducting such a study requires both art and science. Some of the issues addressed are how to get started, the steps to be followed, the issues to be faced at each step, the potential pitfalls occurring at each step, and the most common causes of failures.",
"title": ""
},
{
"docid": "57d162c64d93b28f6be1e086b5a1c134",
"text": "Making deep convolutional neural networks more accurate typically comes at the cost of increased computational and memory resources. In this paper, we reduce this cost by exploiting the fact that the importance of features computed by convolutional layers is highly input-dependent, and propose feature boosting and suppression (FBS), a new method to predictively amplify salient convolutional channels and skip unimportant ones at run-time. FBS introduces small auxiliary connections to existing convolutional layers. In contrast to channel pruning methods which permanently remove channels, it preserves the full network structures and accelerates convolution by dynamically skipping unimportant input and output channels. FBS-augmented networks are trained with conventional stochastic gradient descent, making it readily available for many state-of-the-art CNNs. We compare FBS to a range of existing channel pruning and dynamic execution schemes and demonstrate large improvements on ImageNet classification. Experiments show that FBS can respectively provide 5× and 2× savings in compute on VGG-16 and ResNet-18, both with less than 0.6% top-5 accuracy loss.",
"title": ""
},
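The mechanism in the preceding entry—an auxiliary branch that predicts per-channel saliency from the input and keeps only the most salient output channels at run time—can be sketched in a few lines of PyTorch. This is an illustrative approximation: the gate here multiplies the output instead of actually skipping the pruned channels' computation, and the layer sizes, pooling-based descriptor, and keep ratio are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FBSConv(nn.Module):
    """Sketch of feature boosting and suppression around one conv layer."""
    def __init__(self, c_in, c_out, k=3, keep_ratio=0.5):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, padding=k // 2)
        self.saliency = nn.Linear(c_in, c_out)   # small auxiliary predictor
        self.keep = max(1, int(keep_ratio * c_out))

    def forward(self, x):
        desc = x.mean(dim=(2, 3))                # channel-wise input descriptor
        g = F.relu(self.saliency(desc))          # predicted channel saliency
        # Winner-take-all gate: zero all but the top-k salient output channels
        kth = torch.topk(g, self.keep, dim=1).values[:, -1:]
        gate = g * (g >= kth).float()
        y = self.conv(x)                         # in a real implementation the
        return y * gate.unsqueeze(-1).unsqueeze(-1)  # suppressed channels are skipped
```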
{
"docid": "69dfe6a7a3738a00d32eb18491a83f5c",
"text": "Real-time transmission of video data in network environments, such as wireless and Internet, is a challenging task, as it requires high compression efficiency and network friendly design. H.264/AVC is the newest international video coding standard, jointly developed by groups from ISO/IEC and ITU-T, which aims at achieving improved compression performance and a network-friendly video representation for different types of applications, such as conversational, storage, and streaming. In this paper, we discuss various error resiliency schemes employed by H.264/AVC. The related topics such as nonnormative error concealment and network environment are also described. Some experimental results are discussed to show the performance of error resiliency schemes.",
"title": ""
},
{
"docid": "34baa9b0e77f6ef290ab54889edf293d",
"text": "Concurrent with the enactment of metric conversion legislation by the U. S. Congress in 1975, the Motor and Generator Section of the National Electrical Manufacturer Association (NEMA) voted to proceed with the development of a Guide for the Development of Metric Standards for Motors and Generators, referred to as the \" IMetric Guide\" or \"the Guide.\" The first edition was published in 1978, followed by a second, more extensive, edition in November 1980. A summary of the Metric Guide, is given, including comparison with NEMA and International Electrotechnical Commission (IEC) standards.",
"title": ""
},
{
"docid": "a839016be99c3cb93d30fa48403086d8",
"text": "At synapses of the mammalian central nervous system, release of neurotransmitter occurs at rates transiently as high as 100 Hz, putting extreme demands on nerve terminals with only tens of functional vesicles at their disposal. Thus, the presynaptic vesicle cycle is particularly critical to maintain neurotransmission. To understand vesicle cycling at the most fundamental level, we studied single vesicles undergoing exo/endocytosis and tracked the fate of newly retrieved vesicles. This was accomplished by minimally stimulating boutons in the presence of the membrane-fluorescent styryl dye FM1-43, then selecting for terminals that contained only one dye-filled vesicle. We then observed the kinetics of dye release during single action potential stimulation. We found that most vesicles lost only a portion of their total dye during a single fusion event, but were able to fuse again soon thereafter. We interpret this as direct evidence of \"kiss-and-run\" followed by rapid reuse. Other interpretations such as \"partial loading\" and \"endosomal splitting\" were largely excluded on the basis of multiple lines of evidence. Our data placed an upper bound of <1.4 s on the lifetime of the kiss-and-run fusion event, based on the assumption that aqueous departitioning is rate limiting. The repeated use of individual vesicles held over a range of stimulus frequencies up to 30 Hz and was associated with neurotransmitter release. A small percentage of fusion events did release a whole vesicle's worth of dye in one action potential, consistent with a classical picture of exocytosis as fusion followed by complete collapse or at least very slow retrieval.",
"title": ""
},
{
"docid": "1547a67fd88ac720f4521a206a26dff3",
"text": "A core business in the fashion industry is the understanding and prediction of customer needs and trends. Search engines and social networks are at the same time a fundamental bridge and a costly middleman between the customer’s purchase intention and the retailer. To better exploit Europe’s distinctive characteristics e.g., multiple languages, fashion and cultural differences, it is pivotal to reduce retailers’ dependence to search engines. This goal can be achieved by harnessing various data channels (manufacturers and distribution networks, online shops, large retailers, social media, market observers, call centers, press/magazines etc.) that retailers can leverage in order to gain more insight about potential buyers, and on the industry trends as a whole. This can enable the creation of novel on-line shopping experiences, the detection of influencers, and the prediction of upcoming fashion trends. In this paper, we provide an overview of the main research challenges and an analysis of the most promising technological solutions that we are investigating in the FashionBrain project.",
"title": ""
},
{
"docid": "98f814584c555baa05a1292e7e14f45a",
"text": "This paper presents two types of dual band (2.4 and 5.8 GHz) wearable planar dipole antennas, one printed on a conventional substrate and the other on a two-dimensional metamaterial surface (Electromagnetic Bandgap (EBG) structure). The operation of both antennas is investigated and compared under different bending conditions (in E and H-planes) around human arm and leg of different radii. A dual band, Electromagnetic Band Gap (EBG) structure on a wearable substrate is used as a high impedance surface to control the Specific Absorption Rate (SAR) as well as to improve the antenna gain up to 4.45 dBi. The EBG inspired antenna has reduced the SAR effects on human body to a safe level (< 2W/Kg). I.e. the SAR is reduced by 83.3% for lower band and 92.8% for higher band as compared to the conventional antenna. The proposed antenna can be used for wearable applications with least health hazard to human body in Industrial, Scientific and Medical (ISM) band (2.4 GHz, 5.2 GHz) applications. The antennas on human body are simulated and analyzed in CST Microwave Studio (CST MWS).",
"title": ""
},
{
"docid": "c82f4117c7c96d0650eff810f539c424",
"text": "The Stock Market is known for its volatile and unstable nature. A particular stock could be thriving in one period and declining in the next. Stock traders make money from buying equity when they are at their lowest and selling when they are at their highest. The logical question would be: \"What Causes Stock Prices To Change?\". At the most fundamental level, the answer to this would be the demand and supply. In reality, there are many theories as to why stock prices fluctuate, but there is no generic theory that explains all, simply because not all stocks are identical, and one theory that may apply for today, may not necessarily apply for tomorrow. This paper covers various approaches taken to attempt to predict the stock market without extensive prior knowledge or experience in the subject area, highlighting the advantages and limitations of the different techniques such as regression and classification. We formulate both short term and long term predictions. Through experimentation we achieve 81% accuracy for future trend direction using classification, 0.0117 RMSE for next day price and 0.0613 RMSE for next day change in price using regression techniques. The results obtained in this paper are achieved using only historic prices and technical indicators. Various methods, tools and evaluation techniques will be assessed throughout the course of this paper, the result of this contributes as to which techniques will be selected and enhanced in the final artefact of a stock prediction model. Further work will be conducted utilising deep learning techniques to approach the problem. This paper will serve as a preliminary guide to researchers wishing to expose themselves to this area.",
"title": ""
},
{
"docid": "afae709279cd8adeda2888089872d70e",
"text": "One-class classification problemhas been investigated thoroughly for past decades. Among one of themost effective neural network approaches for one-class classification, autoencoder has been successfully applied for many applications. However, this classifier relies on traditional learning algorithms such as backpropagation to train the network, which is quite time-consuming. To tackle the slow learning speed in autoencoder neural network, we propose a simple and efficient one-class classifier based on extreme learning machine (ELM).The essence of ELM is that the hidden layer need not be tuned and the output weights can be analytically determined, which leads to much faster learning speed.The experimental evaluation conducted on several real-world benchmarks shows that the ELM based one-class classifier can learn hundreds of times faster than autoencoder and it is competitive over a variety of one-class classification methods.",
"title": ""
},
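Below is a minimal numpy sketch of the kind of ELM-based one-class classifier the abstract above describes: a random, untrained hidden layer, output weights solved analytically to reconstruct the target-class inputs, and a reconstruction-error threshold deciding membership. The class name, the sigmoid activation, the ridge term and the rejection fraction are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

class OneClassELM:
    """One-class classifier: random hidden layer, analytic output weights,
    reconstruction error thresholded on the (target-class) training data."""

    def __init__(self, n_hidden=100, reject_fraction=0.05, reg=1e-3, seed=0):
        self.n_hidden = n_hidden
        self.reject_fraction = reject_fraction  # expected outlier share in training data
        self.reg = reg                          # ridge term for the analytic solve
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Random, untrained hidden layer (the core ELM idea): sigmoid(X W + b)
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Output weights in closed form (regularized least squares), targets = inputs
        self.beta = np.linalg.solve(H.T @ H + self.reg * np.eye(self.n_hidden), H.T @ X)
        errors = np.linalg.norm(X - H @ self.beta, axis=1)
        # Threshold set so that reject_fraction of the training points fall outside
        self.threshold = np.quantile(errors, 1.0 - self.reject_fraction)
        return self

    def predict(self, X):
        errors = np.linalg.norm(X - self._hidden(X) @ self.beta, axis=1)
        return np.where(errors <= self.threshold, 1, -1)  # 1 = target class, -1 = outlier

# Usage: train on target-class data only, then score new samples
X_train = np.random.rand(200, 10)
model = OneClassELM(n_hidden=50).fit(X_train)
labels = model.predict(np.random.rand(5, 10))
```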
{
"docid": "d455f379442de99caaccc312737546df",
"text": "Research suggests that rumination increases anger and aggression. Mindfulness, or present-focused and intentional awareness, may counteract rumination. Using structural equation modeling, we examined the relations between mindfulness, rumination, and aggression. In a pair of studies, we found a pattern of correlations consistent with rumination partially mediating a causal link between mindfulness and hostility, anger, and verbal aggression. The pattern was not consistent with rumination mediating the association between mindfulness and physical aggression. Although it is impossible with the current nonexperimental data to test causal mediation, these correlations support the idea that mindfulness could reduce rumination, which in turn could reduce aggression. These results suggest that longitudinal work and experimental manipulations mindfulness would be worthwhile approaches for further study of rumination and aggression. We discuss possible implications of these results.",
"title": ""
},
{
"docid": "a8b8f36f7093c79759806559fb0f0cf4",
"text": "Cooperative adaptive cruise control (CACC) is an extension of ACC. In addition to measuring the distance to a predecessor, a vehicle can also exchange information with a predecessor by wireless communication. This enables a vehicle to follow its predecessor at a closer distance under tighter control. This paper focuses on the impact of CACC on traffic-flow characteristics. It uses the traffic-flow simulation model MIXIC that was specially designed to study the impact of intelligent vehicles on traffic flow. The authors study the impacts of CACC for a highway-merging scenario from four to three lanes. The results show an improvement of traffic-flow stability and a slight increase in traffic-flow efficiency compared with the merging scenario without equipped vehicles",
"title": ""
},
{
"docid": "e3fe0379cf84f2834eeee6e29d6206b3",
"text": "A method is presented in this paper for the design of a high frequency CMOS operational amplifier (OpAmp) which operates at 3V power supply using tsmc 0.18 micron CMOS technology. The OPAMP designed is a two-stage CMOS OPAMP followed by an output buffer. This Operational Transconductance Amplifier (OTA) employs a Miller capacitor and is compensated with a current buffer compensation technique. The unique behaviour of the MOS transistors in saturation region not only allows a designer to work at a low voltage, but also at a high frequency. Designing of two-stage op-amps is a multi-dimensional-optimization problem where optimization of one or more parameters may easily result into degradation of others. The OPAMP is designed to exhibit a unity gain frequency of 2.02GHz and exhibits a gain of 49.02dB with a 60.5 0 phase margin. As compared to the conventional approach, the proposed compensation method results in a higher unity gain frequency under the same load condition. Design has been carried out in Tanner tools. Simulation results are verified using S-edit and W-edit.",
"title": ""
},
{
"docid": "a2eee3cd0e8ee3e97af54f11b8a29fc9",
"text": "Internet Service Providers (ISPs) are responsible for transmitting and delivering their customers’ data requests, ranging from requests for data from websites, to that from filesharing applications, to that from participants in Voice over Internet Protocol (VoIP) chat sessions. Using contemporary packet inspection and capture technologies, ISPs can investigate and record the content of unencrypted digital communications data packets. This paper explains the structure of these packets, and then proceeds to describe the packet inspection technologies that monitor their movement and extract information from the packets as they flow across ISP networks. After discussing the potency of contemporary deep packet inspection devices, in relation to their earlier packet inspection predecessors, and their potential uses in improving network operators’ network management systems, I argue that they should be identified as surveillance technologies that can potentially be incredibly invasive. Drawing on Canadian examples, I argue that Canadian ISPs are using DPI technologies to implicitly ‘teach’ their customers norms about what are ‘inappropriate’ data transfer programs, and the appropriate levels of ISP manipulation of customer data traffic. Version 1.2 :: January 10, 2008. * Doctoral student in the University of Victoria’s Political Science department. Thanks to Colin Bennett, Andrew Clement, Fenwick Mckelvey and Joyce Parsons for comments.",
"title": ""
}
] |
scidocsrr
|
59e961dd5a4db454129f31cd2e85e782
|
Probabilistic risk analysis and terrorism risk.
|
[
{
"docid": "7adb0a3079fb3b64f7a503bd8eae623e",
"text": "Attack trees have found their way to practice because they have proved to be an intuitive aid in threat analysis. Despite, or perhaps thanks to, their apparent simplicity, they have not yet been provided with an unambiguous semantics. We argue that such a formal interpretation is indispensable to precisely understand how attack trees can be manipulated during construction and analysis. We provide a denotational semantics, based on a mapping to attack suites, which abstracts from the internal structure of an attack tree, we study transformations between attack trees, and we study the attribution and projection of an attack tree.",
"title": ""
}
] |
[
{
"docid": "a2189a6b0cf23e40e2d1948e86330466",
"text": "Evolutionary psychology is an approach to the psychological sciences in which principles and results drawn from evolutionary biology, cognitive science, anthropology, and neuroscience are integrated with the rest of psychology in order to map human nature. By human nature, evolutionary psychologists mean the evolved, reliably developing, species-typical computational and neural architecture of the human mind and brain. According to this view, the functional components that comprise this architecture were designed by natural selection to solve adaptive problems faced by our hunter-gatherer ancestors, and to regulate behavior so that these adaptive problems were successfully addressed (for discussion, see Cosmides & Tooby, 1987, Tooby & Cosmides, 1992). Evolutionary psychology is not a specific subfield of psychology, such as the study of vision, reasoning, or social behavior. It is a way of thinking about psychology that can be applied to any topic within it including the emotions.",
"title": ""
},
{
"docid": "f555a50f629bd9868e1be92ebdcbc154",
"text": "The transformation of traditional energy networks to smart grids revolutionizes the energy industry in terms of reliability, performance, and manageability by providing bi-directional communications to operate, monitor, and control power flow and measurements. However, communication networks in smart grid bring increased connectivity with increased severe security vulnerabilities and challenges. Smart grid can be a prime target for cyber terrorism because of its critical nature. As a result, smart grid security is already getting a lot of attention from governments, energy industries, and consumers. There have been several research efforts for securing smart grid systems in academia, government and industries. This article provides a comprehensive study of challenges in smart grid security, which we concentrate on the problems and proposed solutions. Then, we outline current state of the research and future perspectives.With this article, readers can have a more thorough understanding of smart grid security and the research trends in this topic.",
"title": ""
},
{
"docid": "60fbaecc398f04bdb428ccec061a15a5",
"text": "A decade earlier, work on modeling and analyzing social network, was primarily focused on manually collected datasets where the friendship links were sparse but relatively noise free (i.e. all links represented strong physical relation). With the popularity of online social networks, the notion of “friendship” changed dramatically. The data collection, now although automated, contains dense friendship links but the links contain noisier information (i.e. some weaker relationships). The aim of this study is to identify these weaker links and suggest how these links (identification) play a vital role in improving social media design elements such as privacy control, detection of auto-bots, friend introductions, information prioritization and so on. The binary metric used so far for modeling links in social network (i.e. friends or not) is of little importance as it groups all our relatives, close friends and acquaintances in the same category. Therefore a popular notion of tie-strength has been incorporated for modeling links. In this paper, a predictive model is presented that helps evaluate tie-strength for each link in network based on transactional features (e.g. communication, file transfer, photos). The model predicts tie strength with 76.4% efficiency. This work also suggests that important link properties manifest similarly across different social media sites.",
"title": ""
},
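As a rough illustration of the tie-strength model described above, the sketch below fits a linear regression from per-link transactional features to a tie-strength score. The feature set, the synthetic data and the 0.3 weak-tie cutoff are all hypothetical; the original study would use real interaction logs and survey-based labels.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Illustrative transactional features per link: [messages, file_transfers, shared_photos, wall_posts]
rng = np.random.default_rng(42)
X = rng.poisson(lam=[20, 3, 8, 5], size=(500, 4)).astype(float)

# Synthetic tie-strength labels in [0, 1] (in a real study these would come from user surveys)
true_w = np.array([0.02, 0.05, 0.03, 0.01])
y = np.clip(X @ true_w + rng.normal(scale=0.05, size=500), 0.0, 1.0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)

print("R^2 on held-out links:", model.score(X_te, y_te))
print("feature weights:", dict(zip(["messages", "files", "photos", "posts"], model.coef_)))

# A predicted tie strength below some cutoff flags the link as a 'weak tie'
weak = model.predict(X_te) < 0.3
```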
{
"docid": "b8fe5687c8b18a8cfdac14a198b77033",
"text": "1 Sia Siew Kien, Michael Rosemann and Phillip Yetton are the accepting senior editors for this article. 2 This research was partly funded by an Australian Research Council Discovery grant. The authors are grateful to the interviewees, whose willingness to share their valuable insights and experiences made this study possible, and to the senior editors and reviewers for their very helpful feedback and advice throughout the review process. 3 All quotes in this article are from employees of “RetailCo,” the subject of this case study. The names of the organization and its business divisions have been anonymized. 4 A digital business platform is “an integrated set of electronic business processes and the technologies, applications and data supporting those processes” Weill, P. and Ross, J. W. IT Savvy: What Top Executives Must Know to Go from Pain to Gain, Harvard Business School Publishing, 2009, p. 4; for more on digitized platforms, see pp. 67-87 of this publication. How an Australian Retailer Enabled Business Transformation Through Enterprise Architecture",
"title": ""
},
{
"docid": "de17b1fcae6336947e82adab0881b5ba",
"text": "Presence of duplicate documents in the World Wide Web adversely affects crawling, indexing and relevance, which are the core building blocks of web search. In this paper, we present a set of techniques to mine rules from URLs and utilize these learnt rules for de-duplication using just URL strings without fetching the content explicitly. Our technique is composed of mining the crawl logs and utilizing clusters of similar pages to extract specific rules from URLs belonging to each cluster. Preserving each mined rules for de-duplication is not efficient due to the large number of specific rules. We present a machine learning technique to generalize the set of rules, which reduces the resource footprint to be usable at web-scale. The rule extraction techniques are robust against web-site specific URL conventions. We demonstrate the effectiveness of our techniques through experimental evaluation.",
"title": ""
},
{
"docid": "171d9acd0e2cb86a02d5ff56d4515f0d",
"text": "We explore two solutions to the problem of mistranslating rare words in neural machine translation. First, we argue that the standard output layer, which computes the inner product of a vector representing the context with all possible output word embeddings, rewards frequent words disproportionately, and we propose to fix the norms of both vectors to a constant value. Second, we integrate a simple lexical module which is jointly trained with the rest of the model. We evaluate our approaches on eight language pairs with data sizes ranging from 100k to 8M words, and achieve improvements of up to +4.3 BLEU, surpassing phrasebased translation in nearly all settings.1",
"title": ""
},
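The fixed-norm idea in the abstract above can be sketched as an output layer that rescales both the context vector and every output embedding to a constant norm before taking inner products. The PyTorch fragment below is an assumption-laden illustration; the norm value, the bias term and the initialization are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FixedNormOutput(nn.Module):
    """Output layer where both the context vector and every output word
    embedding are rescaled to a constant norm before the inner product,
    so frequent words cannot win simply by growing large embeddings."""

    def __init__(self, hidden_size, vocab_size, norm_value=5.0):
        super().__init__()
        self.embed = nn.Parameter(torch.randn(vocab_size, hidden_size) * 0.01)
        self.bias = nn.Parameter(torch.zeros(vocab_size))
        self.norm_value = norm_value  # the fixed norm (a hyperparameter)

    def forward(self, context):
        # context: (batch, hidden_size)
        c = self.norm_value * F.normalize(context, dim=-1)
        e = self.norm_value * F.normalize(self.embed, dim=-1)
        return c @ e.t() + self.bias          # (batch, vocab_size) logits

# Usage: logits from a batch of decoder context vectors
logits = FixedNormOutput(hidden_size=256, vocab_size=10000)(torch.randn(8, 256))
log_probs = F.log_softmax(logits, dim=-1)
```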
{
"docid": "2d6523ef6609c11274449d3b9a57c53c",
"text": "Performing information retrieval tasks while preserving data confidentiality is a desirable capability when a database is stored on a server maintained by a third-party service provider. This paper addresses the problem of enabling content-based retrieval over encrypted multimedia databases. Search indexes, along with multimedia documents, are first encrypted by the content owner and then stored onto the server. Through jointly applying cryptographic techniques, such as order preserving encryption and randomized hash functions, with image processing and information retrieval techniques, secure indexing schemes are designed to provide both privacy protection and rank-ordered search capability. Retrieval results on an encrypted color image database and security analysis of the secure indexing schemes under different attack models show that data confidentiality can be preserved while retaining very good retrieval performance. This work has promising applications in secure multimedia management.",
"title": ""
},
{
"docid": "3caa8fc1ea07fcf8442705c3b0f775c5",
"text": "Recent research in the field of computational social science have shown how data resulting from the widespread adoption and use of social media channels such as twitter can be used to predict outcomes such as movie revenues, election winners, localized moods, and epidemic outbreaks. Underlying assumptions for this research stream on predictive analytics are that social media actions such as tweeting, liking, commenting and rating are proxies for user/consumer's attention to a particular object/product and that the shared digital artefact that is persistent can create social influence. In this paper, we demonstrate how social media data from twitter can be used to predict the sales of iPhones. Based on a conceptual model of social data consisting of social graph (actors, actions, activities, and artefacts) and social text (topics, keywords, pronouns, and sentiments), we develop and evaluate a linear regression model that transforms iPhone tweets into a prediction of the quarterly iPhone sales with an average error close to the established prediction models from investment banks. This strong correlation between iPhone tweets and iPhone sales becomes marginally stronger after incorporating sentiments of tweets. We discuss the findings and conclude with implications for predictive analytics with big social data.",
"title": ""
},
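A toy version of the described pipeline is sketched below: a linear regression from tweet volume and a sentiment-weighted term to quarterly sales, with the latest quarter held out. All numbers are made-up placeholders used only to show the shape of the computation, not results from the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical quarterly data: tweet volume (millions), mean tweet sentiment, unit sales (millions)
tweets    = np.array([1.1, 1.4, 2.0, 2.6, 3.1, 3.9, 4.8, 5.5])
sentiment = np.array([0.12, 0.15, 0.10, 0.22, 0.18, 0.25, 0.21, 0.27])
sales     = np.array([8.7, 9.2, 14.1, 17.0, 20.3, 26.9, 35.1, 37.4])

X = np.column_stack([tweets, tweets * sentiment])   # volume plus a sentiment-weighted term
model = LinearRegression().fit(X[:-1], sales[:-1])  # hold out the latest quarter

pred = model.predict(X[-1:])[0]
print(f"predicted sales: {pred:.1f}M, actual: {sales[-1]:.1f}M")
print(f"relative error: {abs(pred - sales[-1]) / sales[-1]:.1%}")
```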
{
"docid": "52b1c306355e6bf8ba10ea7e3cf1d05e",
"text": "QUESTION\nIs there a means of assessing research impact beyond citation analysis?\n\n\nSETTING\nThe case study took place at the Washington University School of Medicine Becker Medical Library.\n\n\nMETHOD\nThis case study analyzed the research study process to identify indicators beyond citation count that demonstrate research impact.\n\n\nMAIN RESULTS\nThe authors discovered a number of indicators that can be documented for assessment of research impact, as well as resources to locate evidence of impact. As a result of the project, the authors developed a model for assessment of research impact, the Becker Medical Library Model for Assessment of Research.\n\n\nCONCLUSION\nAssessment of research impact using traditional citation analysis alone is not a sufficient tool for assessing the impact of research findings, and it is not predictive of subsequent clinical applications resulting in meaningful health outcomes. The Becker Model can be used by both researchers and librarians to document research impact to supplement citation analysis.",
"title": ""
},
{
"docid": "5e85b2fedd9fc66b198ccfc5b010da54",
"text": "a r t i c l e i n f o Keywords: Theory of planned behaviour Post-adoption Perceived value Facebook Social networking sites TPB SNS This study examines the continuance participation intentions and behaviour on Facebook, as a representative of Social Networking Sites (SNSs), from a social and behavioural perspective. The study extends the Theory of Planned Behaviour (TPB) through the inclusion of perceived value construct and utilizes the extended theory to explain users' continuance participation intentions and behaviour on Facebook. Despite the recent massive uptake of Facebook, our review of the related-literature revealed that very few studies tackled such technologies from the context of post-adoption as in this research. Using data from surveys of undergraduate and postgraduate students in Jordan (n=403), the extended theory was tested using statistical analysis methods. The results show that attitude, subjective norm, perceived behavioural control, and perceived value have significant effect on the continuance participation intention of post-adopters. Further, the results show that continuance participation intention and perceived value have significant effect on continuance participation behaviour. However, the results show that perceived be-havioural control has no significant effect on continuance participation behaviour of post-adopters. When comparing the extended theory developed in this study with the standard TPB, it was found that the inclusion of the perceived value construct in the extended theory is fruitful; as such an extension explained an additional 11.6% of the variance in continuance participation intention and 4.5% of the variance in continuance participation behaviour over the standard TPB constructs. Consistent with the research on value-driven post-adoption behaviour, these findings suggest that continuance intentions and behaviour of users of Facebook are likely to be greater when they perceive the behaviour to be associated with significant added-value (i.e. benefits outperform sacrifices). Since its introduction, the Internet has enabled entirely new forms of social interaction and activities, thanks to its basic features such as the prevalent usability and access. As the Internet is massively evolving over time, the World Wide Web or otherwise referred to as Web 1.0 has been transformed to the so-called Web 2.0. In fact, Web 2.0 refers to the second generation of the World Wide Web that facilitates information sharing, interoperability, user-centred design and collaboration. The advent of Web 2.0 has led to the development and evolution of Web-based communities, hosted services, and Web applications that work as a mainstream medium for value creation and exchange. Examples of Web …",
"title": ""
},
{
"docid": "11a9d7a218d1293878522252e1f62778",
"text": "This paper presents a wideband circularly polarized millimeter-wave (mmw) antenna design. We introduce a novel 3-D-printed polarizer, which consists of several air and dielectric slabs to transform the polarization of the antenna radiation from linear to circular. The proposed polarizer is placed above a radiating aperture operating at the center frequency of 60 GHz. An electric field, <inline-formula> <tex-math notation=\"LaTeX\">${E}$ </tex-math></inline-formula>, radiated from the aperture generates two components of electric fields, <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {x}}$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {y}}$ </tex-math></inline-formula>. After passing through the polarizer, both <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {x}}$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {y}}$ </tex-math></inline-formula> fields can be degenerated with an orthogonal phase difference which results in having a wide axial ratio bandwidth. The phase difference between <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {x}}$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {y}}$ </tex-math></inline-formula> is determined by the incident angle <inline-formula> <tex-math notation=\"LaTeX\">$\\phi $ </tex-math></inline-formula>, of the polarization of the electric field to the polarizer as well as the thickness, <inline-formula> <tex-math notation=\"LaTeX\">${h}$ </tex-math></inline-formula>, of the dielectric slabs. With the help of the thickness of the polarizer, the directivity of the radiation pattern is increased so as to devote high-gain and wideband characteristics to the antenna. To verify our concept, an intensive parametric study and an experiment were carried out. Three antenna sources, including dipole, patch, and aperture antennas, were investigated with the proposed 3-D-printed polarizer. All measured results agree with the theoretical analysis. The proposed antenna with the polarizer achieves a wide impedance bandwidth of 50% from 45 to 75 GHz for the reflection coefficient less than or equal −10 dB, and yields an overlapped axial ratio bandwidth of 30% from 49 to 67 GHz for the axial ratio ≤ 3 dB. The maximum gain of the antenna reaches to 15 dBic. The proposed methodology of this design can apply to applications related to mmw wireless communication systems. The ultimate goal of this paper is to develop a wideband, high-gain, and low-cost antenna for the mmw frequency band.",
"title": ""
},
{
"docid": "289b67247b109ee0de851c0cd4e76ec3",
"text": "User engagement is a key concept in designing user-centred web applications. It refers to the quality of the user experience that emphasises the positive aspects of the interaction, and in particular the phenomena associated with being captivated by technology. This definition is motivated by the observation that successful technologies are not just used, but they are engaged with. Numerous methods have been proposed in the literature to measure engagement, however, little has been done to validate and relate these measures and so provide a firm basis for assessing the quality of the user experience. Engagement is heavily influenced, for example, by the user interface and its associated process flow, the user’s context, value system and incentives. In this paper we propose an approach to relating and developing unified measures of user engagement. Our ultimate aim is to define a framework in which user engagement can be studied, measured, and explained, leading to recommendations and guidelines for user interface and interaction design for front-end web technology. Towards this aim, in this paper, we consider how existing user engagement metrics, web analytics, information retrieval metrics, and measures from immersion in gaming can bring new perspective to defining, measuring and explaining user engagement.",
"title": ""
},
{
"docid": "00602badbfba6bc97dffbdd6c5a2ae2d",
"text": "Accurately drawing 3D objects is difficult for untrained individuals, as it requires an understanding of perspective and its effects on geometry and proportions. Step-by-step tutorials break the complex task of sketching an entire object down into easy-to-follow steps that even a novice can follow. However, creating such tutorials requires expert knowledge and is time-consuming. As a result, the availability of tutorials for a given object or viewpoint is limited. How2Sketch (H2S) addresses this problem by automatically generating easy-to-follow tutorials for arbitrary 3D objects. Given a segmented 3D model and a camera viewpoint, H2S computes a sequence of steps for constructing a drawing scaffold comprised of geometric primitives, which helps the user draw the final contours in correct perspective and proportion. To make the drawing scaffold easy to construct, the algorithm solves for an ordering among the scaffolding primitives and explicitly makes small geometric modifications to the size and location of the object parts to simplify relative positioning. Technically, we formulate this scaffold construction as a single selection problem that simultaneously solves for the ordering and geometric changes of the primitives. We generate different tutorials on man-made objects using our method and evaluate how easily the tutorials can be followed with a user study.",
"title": ""
},
{
"docid": "d19e825235b5fbb759ff49a1c8398cea",
"text": "Febrile seizures are common and mostly benign. They are the most common cause of seizures in children less than five years of age. There are two categories of febrile seizures, simple and complex. Both the International League against Epilepsy and the National Institute of Health has published definitions on the classification of febrile seizures. Simple febrile seizures are mostly benign, but a prolonged (complex) febrile seizure can have long term consequences. Most children who have a febrile seizure have normal health and development after the event, but there is recent evidence that suggests a small subset of children that present with seizures and fever may have recurrent seizure or develop epilepsy. This review will give an overview of the definition of febrile seizures, epidemiology, evaluation, treatment, outcomes and recent research.",
"title": ""
},
{
"docid": "bb799a3aac27f4ac764649e1f58ee9fb",
"text": "White grubs (larvae of Coleoptera: Scarabaeidae) are abundant in below-ground systems and can cause considerable damage to a wide variety of crops by feeding on roots. White grub populations may be controlled by natural enemies, but the predator guild of the European species is barely known. Trophic interactions within soil food webs are difficult to study with conventional methods. Therefore, a polymerase chain reaction (PCR)-based approach was developed to investigate, for the first time, a soil insect predator-prey system. Can, however, highly sensitive detection methods identify carrion prey in predators, as has been shown for fresh prey? Fresh Melolontha melolontha (L.) larvae and 1- to 9-day-old carcasses were presented to Poecilus versicolor Sturm larvae. Mitochondrial cytochrome oxidase subunit I fragments of the prey, 175, 327 and 387 bp long, were detectable in 50% of the predators 32 h after feeding. Detectability decreased to 18% when a 585 bp sequence was amplified. Meal size and digestion capacity of individual predators had no influence on prey detection. Although prey consumption was negatively correlated with cadaver age, carrion prey could be detected by PCR as efficiently as fresh prey irrespective of carrion age. This is the first proof that PCR-based techniques are highly efficient and sensitive, both in fresh and carrion prey detection. Thus, if active predation has to be distinguished from scavenging, then additional approaches are needed to interpret the picture of prey choice derived by highly sensitive detection methods.",
"title": ""
},
{
"docid": "1255c63b8fc0406b1f3a0161f59ebfb1",
"text": "This paper proposes an EMI filter design software which can serve as an aid to the designer to quickly arrive at optimal filter sizes based on off-line measurement data or simulation results. The software covers different operating conditions-such as: different switching devices, different types of switching techniques, different load conditions and layout of the test setup. The proposed software design works for both silicon based and WBG based power converters.",
"title": ""
},
{
"docid": "0c41de0df5dd88c87061c57ae26c5b32",
"text": "Context. The share and importance of software within automotive vehicles is growing steadily. Most functionalities in modern vehicles, especially safety related functions like advanced emergency braking, are controlled by software. A complex and common phenomenon in today’s automotive vehicles is the distribution of such software functions across several Electronic Control Units (ECUs) and consequently across several ECU system software modules. As a result, integration testing of these distributed software functions has been found to be a challenge. The automotive industry neither has infinite resources, nor has the time to carry out exhaustive testing of these functions. On the other hand, the traditional approach of implementing an ad-hoc selection of test scenarios based on the tester’s experience, can lead to test gaps and test redundancies. Hence, there is a pressing need within the automotive industry for a feasible and effective verification strategy for testing distributed software functions. Objectives. Firstly, to identify the current approach used to test the distributed automotive embedded software functions in literature and in a case company. Secondly, propose and validate a feasible and effective verification strategy for testing the distributed software functions that would help improve test coverage while reducing test redundancies and test gaps. Methods. To accomplish the objectives, a case study was conducted at Scania CV AB, Södertälje, Sweden. One of the data collection methods was through conducting interviews of different employees involved in the software testing activities. Based on the research objectives, an interview questionnaire with open-ended and close-ended questions has been used. Apart from interviews, data from relevant artifacts in databases and archived documents has been used to achieve data triangulation. Moreover, to further strengthen the validity of the results obtained, adequate literature support has been presented throughout. Towards the end, a verification strategy has been proposed and validated using existing historical data at Scania. Conclusions. The proposed verification strategy to test distributed automotive embedded software functions has given promising results by providing means to identify test gaps and test redundancies. It helps establish an effective and feasible approach to capture function test coverage information that helps enhance the effectiveness of integration testing of the distributed software functions.",
"title": ""
},
{
"docid": "5bb98a6655f823b38c3866e6d95471e9",
"text": "This article describes the HR Management System in place at Sears. Key emphases of Sears' HR management infrastructure include : (1) formulating and communicating a corporate mission, vision, and goals, (2) employee education and development through the Sears University, (3) performance management and incentive compensation systems linked closely to the firm's strategy, (4) validated employee selection systems, and (5) delivering the \"HR Basics\" very competently. Key challenges for the future include : (1) maintaining momentum in the performance improvement process, (2) identifying barriers to success, and (3) clearly articulating HR's role in the change management process . © 1999 John Wiley & Sons, Inc .",
"title": ""
},
{
"docid": "f14b2dda47ff1eed966a3dad44514334",
"text": "Diced cartilage rolled up in a fascia (DC-F) is a recent technique developed by Rollin K Daniel. It consists to tailor make a composite graft composed by pieces of cartilage cut in small dices wrapped in a layer of deep temporal aponeurosis. This initially malleable graft allows an effective dorsum augmentation (1 to 10 mm), adjustable until the end of the operation and even post operatively. The indications are all the primary and secondary augmentation rhinoplasties. However, the elective indications are the secondary augmentation rhinoplasties with cartilaginous donor site depletion, or when cartilaginous grafts are of poor quality (insufficient length, multifragmented...), or finally when the recipient site is uneven or asymmetrical. We report our experience of 20 patients operated in 2006 and 2007, with one year minimal follow-up. All the cases are relative or absolute saddle noses, idiopathic, post-traumatic or iatrogenic. Moreover, two patients also had a concomitant chin augmentation with DC-F. No case of displacement or resorption was noted. We modified certain technical points in order to make this technique even more powerful and predictable.",
"title": ""
},
{
"docid": "fb9bbd096fa29cbb0abf646b33f7693b",
"text": "This paper presents a new parameter extraction methodology, based on an accurate and continuous MOS model dedicated to low-voltage and low-current analog circuit design and simulation (EKV MOST Model). The extraction procedure provides the key parameters from the pinch-off versus gate voltage characteristic, measured at constant current from a device biased in moderate inversion. Unique parameter sets, suitable for statistical analysis, describe the device behavior in all operating regions and over all device geometries. This efficient and simple method is shown to be accurate for both submicron bulk CMOS and fully depleted SOI technologies. INTRODUCTION The requirements for good MOS analog simulation models such as accuracy and continuity of the largeand small-signal characteristics are well established [1][2]. Continuity of the largeand small-signal characteristics from weak to strong inversion is one of the main features of the Enz-Krummenacher-Vittoz or EKV MOS transistor model [3][4][5]. One of the basic concepts of this model is the pinch-off voltage. A constant current bias is used to measure the pinch-off voltage versus gate voltage characteristic in moderate inversion (MI). This measure allows for an efficient and simple characterization method to be formulated for the most important model parameters as the threshold voltage and the other parameters related to the channel doping, using a single measured characteristic. The same principle is applied for various geometries, including shortand narrow-channel devices, and forms the major part of the complete characterization methodology. The simplicity of the model and the relatively small number of parameters to be extracted eases the parameter extraction. This is of particular importance if large statistical data are to be gathered. This method has been validated on a large number of different CMOS processes. To show its flexibility as well as the abilities of the model, results are presented for submicron bulk and fully depleted SOI technologies. SHORT DESCRIPTION OF THE STATIC MODEL A detailed description of the model formulation can be found in [3]; important concepts are shortly recalled here since they form the basis of the parameter extraction. A set of 13 intrinsic parameters is used for first and second order effects, listed in Table I. Unlike most other MOS simulation models, in the EKV model the gate, source and drain voltages, VG , VS and VD , are all referred to the substrate in order to preserve the intrinsic symmetry of the device. The Pinch-off Voltage The threshold voltage VTO, which is consequently also referred to the bulk, is defined as the gate voltage for which the inversion charge forming the channel is zero at equilibrium. The pinch-off voltage VP corresponds to the value of the channel potential Vch for which the inversion charge becomes zero in a non-equilibrium situation. VP can be directly related to VG :",
"title": ""
}
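The pinch-off-based extraction can be illustrated with a short numpy sketch: since the EKV model gives VP roughly proportional to (VG - VTO)/n over the fitted range, VTO is the gate voltage at which VP crosses zero and the slope gives 1/n. Treating the slope factor n as constant over the range is a simplification, and the sample characteristic below is synthetic, not measured data from the paper.

```python
import numpy as np

def extract_vto_and_n(vg, vp):
    """Estimate threshold voltage VTO and slope factor n from a measured
    pinch-off voltage vs. gate voltage characteristic (moderate inversion).
    Approximation used here: VP ~ (VG - VTO)/n, so VTO is the VG at which
    VP = 0 and 1/n is the slope dVP/dVG."""
    a, b = np.polyfit(vg, vp, deg=1)          # straight-line fit: VP = a*VG + b
    vto = -b / a                              # VP(VTO) = 0
    n = 1.0 / a                               # slope factor
    return vto, n

# Illustrative characteristic (volts); real data would come from the
# constant-current measurement described in the abstract
vg = np.linspace(0.6, 2.0, 15)
vp = (vg - 0.55) / 1.35 + np.random.normal(scale=0.003, size=vg.size)

vto, n = extract_vto_and_n(vg, vp)
print(f"VTO ~ {vto:.3f} V, slope factor n ~ {n:.2f}")
```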
] |
scidocsrr
|
28e09f044884c7dabe3fcc4bac387d16
|
Real time streaming pattern detection for eCommerce
|
[
{
"docid": "172561db4f6d4bfe2b15c8d26adc3d91",
"text": "\"Big Data\" in map-reduce (M-R) clusters is often fundamentally temporal in nature, as are many analytics tasks over such data. For instance, display advertising uses Behavioral Targeting (BT) to select ads for users based on prior searches, page views, etc. Previous work on BT has focused on techniques that scale well for offline data using M-R. However, this approach has limitations for BT-style applications that deal with temporal data: (1) many queries are temporal and not easily expressible in M-R, and moreover, the set-oriented nature of M-R front-ends such as SCOPE is not suitable for temporal processing, (2) as commercial systems mature, they may need to also directly analyze and react to real-time data feeds since a high turnaround time can result in missed opportunities, but it is difficult for current solutions to naturally also operate over real-time streams. Our contributions are twofold. First, we propose a novel framework called TiMR (pronounced timer), that combines a time-oriented data processing system with a M-R framework. Users write and submit analysis algorithms as temporal queries - these queries are succinct, scale-out-agnostic, and easy to write. They scale well on large-scale offline data using TiMR, and can work unmodified over real-time streams. We also propose new cost-based query fragmentation and temporal partitioning schemes for improving efficiency with TiMR. Second, we show the feasibility of this approach for BT, with new temporal algorithms that exploit new targeting opportunities. Experiments using real data from a commercial ad platform show that TiMR is very efficient and incurs orders-of-magnitude lower development effort. Our BT solution is easy and succinct, and performs up to several times better than current schemes in terms of memory, learning time, and click-through-rate/coverage.",
"title": ""
},
{
"docid": "06db3ede44c48a09f8d280cf13bd8fd2",
"text": "An increasing number of distributed applications requires processing continuously flowing data from geographically distributed sources at unpredictable rate to obtain timely responses to complex queries. Examples of such applications come from the most disparate fields: from wireless sensor networks to financial tickers, from traffic management to click stream inspection.\n These requirements led to the development of a number of systems specifically designed to process information as a flow according to a set of pre-deployed processing rules. We collectively call them Information Flow Processing (IFP) Systems. Despite having a common goal, IFP systems differ in a wide range of aspects, including architectures, data models, rule languages, and processing mechanisms.\n In this tutorial we draw a general framework to analyze and compare the results achieved so far in the area of IFP systems. This allows us to offer a systematic overview of the topic, favoring the communication between different communities, and highlighting a number of open issue that still need to be addressed in research.",
"title": ""
}
] |
[
{
"docid": "eb0eec2fe000511a37e6487ff51ddb68",
"text": "We report on a laboratory study that compares reading from paper to reading on-line. Critical differences have to do with the major advantages paper offers in supporting annotation while reading, quick navigation, and flexibility of spatial layout. These, in turn, allow readers to deepen their understanding of the text, extract a sense of its structure, create a plan for writing, cross-refer to other documents, and interleave reading and writing. We discuss the design implications of these findings for the development of better reading technologies.",
"title": ""
},
{
"docid": "9cce96168421bf0a220ea9302df6cd3a",
"text": "An overview is given on the fundamental problems when combining acoustic echo cancellation (AEC) with adaptive beamforming microphone array (ABMA) in fullduplex communications. For reasons of computational complexity and functionality, a compromise is necessary between one echo canceller per microphone signal and a single echo canceller for the beamforming output. Here, the decomposition of the adaptive beamforming into a time-invariant beam-steering and a time-variant voting is proposed. Some synergies arising from the combination of ABMA and AEC are highlighted.",
"title": ""
},
{
"docid": "8c8891c2e0d4a10deb2c91af6397447f",
"text": "One of important cues of deception detection is micro-expression. It has three characteristics: short duration, low intensity and usually local movements. These characteristics imply that micro-expression is sparse. In this paper, we use the sparse part of Robust PCA (RPCA) to extract the subtle motion information of micro-expression. The local texture features of the information are extracted by Local Spatiotemporal Directional Features (LSTD). In order to extract more effective local features, 16 Regions of Interest (ROIs) are assigned based on the Facial Action Coding System (FACS). The experimental results on two micro-expression databases show the proposed method gain better performance. Moreover, the proposed method may further be used to extract other subtle motion information (such as lip-reading, the human pulse, and micro-gesture etc.) from video.",
"title": ""
},
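One common way to compute the RPCA decomposition the abstract relies on is principal component pursuit via an inexact augmented Lagrangian scheme; a compact numpy sketch follows. The default λ and μ follow widely used conventions rather than values from the paper, and the frame matrix at the end is a random placeholder standing in for vectorized face frames.

```python
import numpy as np

def robust_pca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Decompose M into a low-rank part L and a sparse part S (principal
    component pursuit). For micro-expression analysis each column of M
    would be a vectorized frame and S would carry the subtle sparse motion."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
    Y = np.zeros_like(M)
    S = np.zeros_like(M)
    norm_M = np.linalg.norm(M, 'fro')

    def shrink(X, tau):                      # elementwise soft threshold
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    for _ in range(max_iter):
        # Singular value thresholding gives the low-rank update
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(sig, 1.0 / mu)) @ Vt
        # Soft thresholding gives the sparse update
        S = shrink(M - L + Y / mu, lam / mu)
        resid = M - L - S
        Y = Y + mu * resid
        if np.linalg.norm(resid, 'fro') / norm_M < tol:
            break
    return L, S

# Placeholder: 50 vectorized "frames" of 100 pixels each
frames = np.random.rand(100, 50)
low_rank, sparse_motion = robust_pca(frames)
```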
{
"docid": "8a1ba356c34935a2f3a14656138f0414",
"text": "We propose an efficient approach to exploiting motion information from consecutive frames of a video sequence to recover the 3D pose of people. Previous approaches typically compute candidate poses in individual frames and then link them in a post-processing step to resolve ambiguities. By contrast, we directly regress from a spatio-temporal volume of bounding boxes to a 3D pose in the central frame. We further show that, for this approach to achieve its full potential, it is essential to compensate for the motion in consecutive frames so that the subject remains centered. This then allows us to effectively overcome ambiguities and improve upon the state-of-the-art by a large margin on the Human3.6m, HumanEva, and KTH Multiview Football 3D human pose estimation benchmarks.",
"title": ""
},
{
"docid": "087f9c2abb99d8576645a2460298c1b5",
"text": "In a community cloud, multiple user groups dynamically share a massive number of data blocks. The authors present a new associative data sharing method that uses virtual disks in the MeePo cloud, a research storage cloud built at Tsinghua University. Innovations in the MeePo cloud design include big data metering, associative data sharing, data block prefetching, privileged access control (PAC), and privacy preservation. These features are improved or extended from competing features implemented in DropBox, CloudViews, and MySpace. The reported results support the effectiveness of the MeePo cloud.",
"title": ""
},
{
"docid": "1b844eb4aeaac878ebffaaf5b4d6e3ab",
"text": "Recently, deep residual networks have been successfully applied in many computer vision and natural language processing tasks, pushing the state-of-the-art performance with deeper and wider architectures. In this work, we interpret deep residual networks as ordinary differential equations (ODEs), which have long been studied in mathematics and physics with rich theoretical and empirical success. From this interpretation, we develop a theoretical framework on stability and reversibility of deep neural networks, and derive three reversible neural network architectures that can go arbitrarily deep in theory. The reversibility property allows a memoryefficient implementation, which does not need to store the activations for most hidden layers. Together with the stability of our architectures, this enables training deeper networks using only modest computational resources. We provide both theoretical analyses and empirical results. Experimental results demonstrate the efficacy of our architectures against several strong baselines on CIFAR-10, CIFAR-100 and STL-10 with superior or on-par state-of-the-art performance. Furthermore, we show our architectures yield superior results when trained using fewer training data.",
"title": ""
},
{
"docid": "36e3fc3b9a24277a8eb5a736047f9525",
"text": "The quantitative analysis of a randomized system, modeled by a Markov decision process, against an LTL formula can be performed by a combination of graph algorithms, automata-theoretic concepts and numerical methods to compute maximal or minimal reachability probabilities. In this paper, we present various reduction techniques that serve to improve the performance of the quantitative analysis, and report on their implementation on the top of the probabilistic model checker \\LiQuor. Although our techniques are purely heuristic and cannot improve the worst-case time complexity of standard algorithms for the quantitative analysis, a series of examples illustrates that the proposed methods can yield a major speed-up.",
"title": ""
},
{
"docid": "57a2ef4a644f0fc385185a381f309fcd",
"text": "Despite recent emergence of adversarial based methods for video prediction, existing algorithms often produce unsatisfied results in image regions with rich structural information (i.e., object boundary) and detailed motion (i.e., articulated body movement). To this end, we present a structure preserving video prediction framework to explicitly address above issues and enhance video prediction quality. On one hand, our framework contains a two-stream generation architecture which deals with high frequency video content (i.e., detailed object or articulated motion structure) and low frequency video content (i.e., location or moving directions) in two separate streams. On the other hand, we propose a RNN structure for video prediction, which employs temporal-adaptive convolutional kernels to capture time-varying motion patterns as well as tiny objects within a scene. Extensive experiments on diverse scenes, ranging from human motion to semantic layout prediction, demonstrate the effectiveness of the proposed video prediction approach.",
"title": ""
},
{
"docid": "397036a265637f5a84256bdba80d93a2",
"text": "0167-4730/$ see front matter 2008 Elsevier Ltd. A doi:10.1016/j.strusafe.2008.06.002 * Corresponding author. E-mail address: abliel@stanford.edu (A.B. Liel). The primary goal of seismic provisions in building codes is to protect life safety through the prevention of structural collapse. To evaluate the extent to which current and past building code provisions meet this objective, the authors have conducted detailed assessments of collapse risk of reinforced-concrete moment frame buildings, including both ‘ductile’ frames that conform to current building code requirements, and ‘non-ductile’ frames that are designed according to out-dated (pre-1975) building codes. Many aspects of the assessment process can have a significant impact on the evaluated collapse performance; this study focuses on methods of representing modeling parameter uncertainties in the collapse assessment process. Uncertainties in structural component strength, stiffness, deformation capacity, and cyclic deterioration are considered for non-ductile and ductile frame structures of varying heights. To practically incorporate these uncertainties in the face of the computationally intensive nonlinear response analyses needed to simulate collapse, the modeling uncertainties are assessed through a response surface, which describes the median collapse capacity as a function of the model random variables. The response surface is then used in conjunction with Monte Carlo methods to quantify the effect of these modeling uncertainties on the calculated collapse fragilities. Comparisons of the response surface based approach and a simpler approach, namely the first-order second-moment (FOSM) method, indicate that FOSM can lead to inaccurate results in some cases, particularly when the modeling uncertainties cause a shift in the prediction of the median collapse point. An alternate simplified procedure is proposed that combines aspects of the response surface and FOSM methods, providing an efficient yet accurate technique to characterize model uncertainties, accounting for the shift in median response. The methodology for incorporating uncertainties is presented here with emphasis on the collapse limit state, but is also appropriate for examining the effects of modeling uncertainties on other structural limit states. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "66cd10e39a91fb421d1145b2ebe7246c",
"text": "Previous research suggests that heterosexual women's sexual arousal patterns are nonspecific; heterosexual women demonstrate genital arousal to both preferred and nonpreferred sexual stimuli. These patterns may, however, be related to the intense and impersonal nature of the audiovisual stimuli used. The current study investigated the gender specificity of heterosexual women's sexual arousal in response to less intense sexual stimuli, and also examined the role of relationship context on both women's and men's genital and subjective sexual responses. Assessments were made of 43 heterosexual women's and 9 heterosexual men's genital and subjective sexual arousal to audio narratives describing sexual or neutral encounters with female and male strangers, friends, or long-term relationship partners. Consistent with research employing audiovisual sexual stimuli, men demonstrated a category-specific pattern of genital and subjective arousal with respect to gender, while women showed a nonspecific pattern of genital arousal, yet reported a category-specific pattern of subjective arousal. Heterosexual women's nonspecific genital response to gender cues is not a function of stimulus intensity or relationship context. Relationship context did significantly affect women's genital sexual arousal--arousal to both female and male friends was significantly lower than to the stranger and long-term relationship contexts--but not men's. These results suggest that relationship context may be a more important factor in heterosexual women's physiological sexual response than gender cues.",
"title": ""
},
{
"docid": "f21850cde63b844e95db5b9916db1c30",
"text": "Foreign Exchange (Forex) market is a complex and challenging task for prediction due to uncertainty movement of exchange rate. However, these movements over timeframe also known as historical Forex data that offered a generic repeated trend patterns. This paper uses the features extracted from trend patterns to model and predict the next day trend. Hidden Markov Models (HMMs) is applied to learn the historical trend patterns, and use to predict the next day movement trends. We use the 2011 Forex historical data of Australian Dollar (AUS) and European Union Dollar (EUD) against the United State Dollar (USD) for modeling, and the 2012 and 2013 Forex historical data for validating the proposed model. The experimental results show outperforms prediction result for both years.",
"title": ""
},
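A minimal sketch of HMM-based trend prediction in the spirit of the abstract above, assuming the third-party hmmlearn library and using daily returns as the observed feature (the paper's own trend-pattern features are not reproduced here); the synthetic price series is only a placeholder for the historical data.

```python
import numpy as np
from hmmlearn import hmm

# Placeholder daily close prices; in the study these would be the 2011
# AUD/USD or EUR/USD historical series
rng = np.random.default_rng(7)
prices = 1.0 + np.cumsum(rng.normal(scale=0.004, size=500))
returns = np.diff(prices).reshape(-1, 1)        # daily change as the observed feature

# Fit a Gaussian HMM whose hidden states play the role of trend regimes
model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=200, random_state=0)
model.fit(returns)

# Decode the most likely state sequence, then use the last state's transition
# row to pick the most probable regime for the next day
states = model.predict(returns)
next_state = np.argmax(model.transmat_[states[-1]])

# Crude up/down call from the sign of that regime's mean return
next_trend = "up" if model.means_[next_state, 0] > 0 else "down"
print("predicted next-day trend:", next_trend)
```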
{
"docid": "b32e1d3474c5db96f188981b29cbb9c0",
"text": "An adversarial example is an example that has been adjusted to produce a wrong label when presented to a system at test time. To date, adversarial example constructions have been demonstrated for classifiers, but not for detectors. If adversarial examples that could fool a detector exist, they could be used to (for example) maliciously create security hazards on roads populated with smart vehicles. In this paper, we demonstrate a construction that successfully fools two standard detectors, Faster RCNN and YOLO. The existence of such examples is surprising, as attacking a classifier is very different from attacking a detector, and that the structure of detectors – which must search for their own bounding box, and which cannot estimate that box very accurately – makes it quite likely that adversarial patterns are strongly disrupted. We show that our construction produces adversarial examples that generalize well across sequences digitally, even though large perturbations are needed. We also show that our construction yields physical objects that are adversarial.",
"title": ""
},
{
"docid": "a7bbf188c7219ff48af391a5f8b140b8",
"text": "The paper presents the results of studies concerning the designation of COD fraction in raw wastewater. The research was conducted in three mechanical-biological sewage treatment plants. The results were compared with data assumed in the ASM models. During the investigation, the following fractions of COD were determined: dissolved non-biodegradable SI, dissolved easily biodegradable SS, in organic suspension slowly degradable XS, and in organic suspension non-biodegradable XI. The methodology for determining the COD fraction was based on the ATVA 131guidelines. The real concentration of fractions in raw wastewater and the percentage of each fraction in total COD are different from data reported in the literature.",
"title": ""
},
{
"docid": "6e2ecc13dc0a1151c8e921dc6a2b2b97",
"text": "A Continuous Integration system is often considered one of the key elements involved in supporting an agile software development and testing environment. As a traditional software tester transitioning to an agile development environment it became clear to me that I would need to put this essential infrastructure in place and promote improved development practices in order to make the transition to agile testing possible. This experience report discusses a continuous integration implementation I led last year. The initial motivations for implementing continuous integration are discussed and a pre and post-assessment using Martin Fowler's\" Practices of Continuous Integration\" is provided along with the technical specifics of the implementation. The report concludes with a retrospective of my experiences implementing and promoting continuous integration within the context of agile testing.",
"title": ""
},
{
"docid": "83525470a770a036e9c7bb737dfe0535",
"text": "It is known that the performance of the i-vectors/PLDA based speaker verification systems is affected in the cases of short utterances and limited training data. The performance degradation appears because the shorter the utterance, the less reliable the extracted i-vector is, and because the total variability covariance matrix and the underlying PLDA matrices need a significant amount of data to be robustly estimated. Considering the “MIT Mobile Device Speaker Verification Corpus” (MIT-MDSVC) as a representative dataset for robust speaker verification tasks on limited amount of training data, this paper investigates which configuration and which parameters lead to the best performance of an i-vectors/PLDA based speaker verification. The i-vectors/PLDA based system achieved good performance only when the total variability matrix and the underlying PLDA matrices were trained with data belonging to the enrolled speakers. This way of training means that the system should be fully retrained when new enrolled speakers were added. The performance of the system was more sensitive to the amount of training data of the underlying PLDA matrices than to the amount of training data of the total variability matrix. Overall, the Equal Error Rate performance of the i-vectors/PLDA based system was around 1% below the performance of a GMM-UBM system on the chosen dataset. The paper presents at the end some preliminary experiments in which the utterances comprised in the CSTR VCTK corpus were used besides utterances from MIT-MDSVC for training the total variability covariance matrix and the underlying PLDA matrices.",
"title": ""
},
{
"docid": "a39fb4e8c15878ba4fdac54f02451789",
"text": "The Cloud computing system can be easily threatened by various attacks, because most of the cloud computing systems provide service to so many people who are not proven to be trustworthy. Due to their distributed nature, cloud computing environment are easy targets for intruders[1]. There are various Intrusion Detection Systems having various specifications to each. Cloud computing have two approaches i. e. Knowledge-based IDS and Behavior-Based IDS to detect intrusions in cloud computing. Behavior-Based IDS assumes that an intrusion can be detected by observing a deviation from normal to expected behavior of the system or user[2]s. Knowledge-based IDS techniques apply knowledge",
"title": ""
},
{
"docid": "38301e7db178d7072baf0226a1747c03",
"text": "We present an algorithm for ray tracing displacement maps that requires no additional storage over the base model. Displacement maps are rarely used in ray tracing due to the cost associated with storing and intersecting the displaced geometry. This is unfortunate because displacement maps allow the addition of large amounts of geometric complexity into models. Our method works for models composed of triangles with normals at the vertices. In addition, we discuss a special purpose displacement that creates a smooth surface that interpolates the triangle vertices and normals of a mesh. The combination allows relatively coarse models to be displacement mapped and ray traced effectively.",
"title": ""
},
{
"docid": "e2fad89377936c0a576164998a24ace8",
"text": "Probabilistic neural networks (PNNs) are artificial neural network algorithms widely used in pattern recognition and classification problems. In the traditional PNN algorithm, the probability density function (PDF) is approximated using the entire training dataset for each class. In some complex datasets, classmate clusters may be located far from each other and these distances between clusters may cause a reduction in the correct class’s posterior probability and lead to misclassification. This paper presents a novel PNN algorithm, the competitive probabilistic neural network (CPNN). In the CPNN, a competitive layer ranks kernels for each class and an optimum fraction of kernels are selected to estimate the class-conditional probability. Using a stratified, repeated, random subsampling cross-validation procedure and 9 benchmark classification datasets, CPNN is compared to both traditional PNN and the state of the art (e.g. enhanced probabilistic neural network, EPNN). These datasets are examined with and without noise and the algorithm is evaluated with several ratios of training to testing data. In all datasets (225 simulation categories), performance percentages of both CPNN and EPNN are greater than or equivalent to that of the traditional PNN; in 73% of simulation categories, the CPNN analyses show modest improvement in performance over the state of the art.",
"title": ""
},
{
"docid": "07ff0274408e9ba5d6cd2b1a2cb7cbf8",
"text": "Though tremendous strides have been made in object recognition, one of the remaining open challenges is detecting small objects. We explore three aspects of the problem in the context of finding small faces: the role of scale invariance, image resolution, and contextual reasoning. While most recognition approaches aim to be scale-invariant, the cues for recognizing a 3px tall face are fundamentally different than those for recognizing a 300px tall face. We take a different approach and train separate detectors for different scales. To maintain efficiency, detectors are trained in a multi-task fashion: they make use of features extracted from multiple layers of single (deep) feature hierarchy. While training detectors for large objects is straightforward, the crucial challenge remains training detectors for small objects. We show that context is crucial, and define templates that make use of massively-large receptive fields (where 99% of the template extends beyond the object of interest). Finally, we explore the role of scale in pre-trained deep networks, providing ways to extrapolate networks tuned for limited scales to rather extreme ranges. We demonstrate state-of-the-art results on massively-benchmarked face datasets (FDDB and WIDER FACE). In particular, when compared to prior art on WIDER FACE, our results reduce error by a factor of 2 (our models produce an AP of 82% while prior art ranges from 29-64%).",
"title": ""
},
{
"docid": "5846c9761ec90040feaf71656401d6dd",
"text": "Internet of Things (IoT) is an emergent technology that provides a promising opportunity to improve industrial systems by the smartly use of physical objects, systems, platforms and applications that contain embedded technology to communicate and share intelligence with each other. In recent years, a great range of industrial IoT applications have been developed and deployed. Among these applications, the Water and Oil & Gas Distribution System is tremendously important considering the huge amount of fluid loss caused by leakages and other possible hydraulic failures. Accordingly, to design an accurate Fluid Distribution Monitoring System (FDMS) represents a critical task that imposes a serious study and an adequate planning. This paper reviews the current state-of-the-art of IoT, major IoT applications in industries and focus more on the Industrial IoT FDMS (IIoT FDMS).",
"title": ""
}
] |
scidocsrr
|
8e523f8a0ce8ba6d9cf26ea375f9341f
|
User Personalized Satisfaction Prediction via Multiple Instance Deep Learning
|
[
{
"docid": "1768ecf6a2d8a42ea701d7f242edb472",
"text": "Satisfaction prediction is one of the prime concerns in search performance evaluation. It is a non-trivial task for two major reasons: (1) The definition of satisfaction is rather subjective and different users may have different opinions in satisfaction judgement. (2) Most existing studies on satisfaction prediction mainly rely on users' click-through or query reformulation behaviors but there are many sessions without such kind of interactions. To shed light on these research questions, we construct an experimental search engine that could collect users' satisfaction feedback as well as mouse click-through/movement data. Different from existing studies, we compare for the first time search users' and external assessors' opinions on satisfaction. We find that search users pay more attention to the utility of results while external assessors emphasize on the efforts spent in search sessions. Inspired by recent studies in predicting result relevance based on mouse movement patterns (namely motifs), we propose to estimate the utilities of search results and the efforts in search sessions with motifs extracted from mouse movement data on search result pages (SERPs). Besides the existing frequency-based motif selection method, two novel selection strategies (distance-based and distribution-based) are also adopted to extract high quality motifs for satisfaction prediction. Experimental results on over 1,000 user sessions show that the proposed strategies outperform existing methods and also have promising generalization capability for different users and queries.",
"title": ""
}
] |
[
{
"docid": "70e34d4ccd294d7811e344616638a3af",
"text": "The rapid development of social media sharing has created a huge demand for automatic media classification and annotation techniques. Attribute learning has emerged as a promising paradigm for bridging the semantic gap and addressing data sparsity via transferring attribute knowledge in object recognition and relatively simple action classification. In this paper, we address the task of attribute learning for understanding multimedia data with sparse and incomplete labels. In particular, we focus on videos of social group activities, which are particularly challenging and topical examples of this task because of their multimodal content and complex and unstructured nature relative to the density of annotations. To solve this problem, we 1) introduce a concept of semilatent attribute space, expressing user-defined and latent attributes in a unified framework, and 2) propose a novel scalable probabilistic topic model for learning multimodal semilatent attributes, which dramatically reduces requirements for an exhaustive accurate attribute ontology and expensive annotation effort. We show that our framework is able to exploit latent attributes to outperform contemporary approaches for addressing a variety of realistic multimedia sparse data learning tasks including: multitask learning, learning with label noise, N-shot transfer learning, and importantly zero-shot learning.",
"title": ""
},
{
"docid": "078f875d35d61689475a1507c5525eaa",
"text": "This paper discusses the actuator-level control of Valkyrie, a new humanoid robot designed by NASA’s Johnson Space Center in collaboration with several external partners. We focus on several topics pertaining to Valkyrie’s series elastic actuators including control architecture, controller design, and implementation in hardware. A decentralized approach is taken in controlling Valkyrie’s many series elastic degrees of freedom. By conceptually decoupling actuator dynamics from robot limb dynamics, we simplify the problem of controlling a highly complex system and streamline the controller development process compared to other approaches. This hierarchical control abstraction is realized by leveraging disturbance observers in the robot’s joint-level torque controllers. We apply a novel analysis technique to understand the ability of a disturbance observer to attenuate the effects of unmodeled dynamics. The performance of our control approach is demonstrated in two ways. First, we characterize torque tracking performance of a single Valkyrie actuator in terms of controllable torque resolution, tracking error, bandwidth, and power consumption. Second, we perform tests on Valkyrie’s arm, a serial chain of actuators, and demonstrate its ability to accurately track torques with our decentralized control approach.",
"title": ""
},
{
"docid": "87eed2ab66bd9bda90cf2a838b990207",
"text": "We present a new framework for compositional distributional semantics in which the distributional contexts of lexemes are expressed in terms of anchored packed dependency trees. We show that these structures have the potential to capture the full sentential contexts of a lexeme and provide a uniform basis for the composition of distributional knowledge in a way that captures both mutual disambiguation and generalization.",
"title": ""
},
{
"docid": "15da6453d3580a9f26ecb79f9bc8e270",
"text": "In 2005 the Commission for Africa noted that ‘Tackling HIV and AIDS requires a holistic response that recognises the wider cultural and social context’ (p. 197). Cultural factors that range from beliefs and values regarding courtship, sexual networking, contraceptive use, perspectives on sexual orientation, explanatory models for disease and misfortune and norms for gender and marital relations have all been shown to be factors in the various ways that HIV/AIDS has impacted on African societies (UNESCO, 2002). Increasingly the centrality of culture is being recognised as important to HIV/AIDS prevention, treatment, care and support. With culture having both positive and negative influences on health behaviour, international donors and policy makers are beginning to acknowledge the need for cultural approaches to the AIDS crisis (Nguyen et al., 2008). The development of cultural approaches to HIV/AIDS presents two major challenges for South Africa. First, the multi-cultural nature of the country means that there is no single sociocultural context in which the HIV/AIDS epidemic is occurring. South Africa is home to a rich tapestry of racial, ethnic, religious and linguistic groups. As a result of colonial history and more recent migration, indigenous Africans have come to live alongside large populations of people with European, Asian and mixed descent, all of whom could lay claim to distinctive cultural practices and spiritual beliefs. Whilst all South Africans are affected by the spread of HIV, the burden of the disease lies with the majority black African population (see Shisana et al., 2005; UNAIDS, 2007). Therefore, this chapter will focus on some sociocultural aspects of life within the majority black African population of South Africa, most of whom speak languages that are classified within the broad linguistic grouping of Bantu languages. This large family of linguistically related ethnic groups span across southern Africa and comprise the bulk of the African people who reside in South Africa today (Hammond-Tooke, 1974). A second challenge involves the legitimacy of the culture concept. Whilst race was used in apartheid as the rationale for discrimination, notions of culture and cultural differences were legitimised by segregating the country into various ‘homelands’. Within the homelands, the majority black South Africans could presumably",
"title": ""
},
{
"docid": "05f3d2097efffb3e1adcbede16ec41d2",
"text": "BACKGROUND\nDialysis patients with uraemic pruritus (UP) have significantly impaired quality of life. To assess the therapeutic effect of UP treatments, a well-validated comprehensive and multidimensional instrument needed to be established.\n\n\nOBJECTIVES\nTo develop and validate a multidimensional scale assessing UP in patients on dialysis: the Uraemic Pruritus in Dialysis Patients (UP-Dial).\n\n\nMETHODS\nThe development and validation of the UP-Dial instrument were conducted in four phases: (i) item generation, (ii) development of a pilot questionnaire, (iii) refinement of the questionnaire with patient recruitment and (iv) psychometric validation. Participants completed the UP-Dial, the visual analogue scale (VAS) of UP, the Dermatology Life Quality Index (DLQI), the Kidney Disease Quality of Life-36 (KDQOL-36), the Pittsburgh Sleep Quality Index (PSQI) and the Beck Depression Inventory (BDI) between 15 May 2012 and 30 November 2015.\n\n\nRESULTS\nThe 27-item pilot UP-Dial was generated, with 168 participants completing the pilot scale. After factor analysis was performed, the final 14-item UP-Dial encompassed three domains: signs and symptoms, psychosocial, and sleep. Face and content validity were satisfied through the item generation process and expert review. Psychometric analysis demonstrated that the UP-Dial had good convergent and discriminant validity. The UP-Dial was significantly correlated [Spearman rank coefficient, 95% confidence interval (CI)] with the VAS-UP (0·76, 0·69-0·83), DLQI (0·78, 0·71-0·85), KDQOL-36 (-0·86, -0·91 to -0·81), PSQI (0·85, 0·80-0·89) and BDI (0·70, 0·61-0·79). The UP-Dial revealed excellent internal consistency (Cronbach's α 0·90, 95% CI 0·87-0·92) and reproducibility (intraclass correlation 0·95, 95% CI 0·90-0·98).\n\n\nCONCLUSIONS\nThe UP-Dial is valid and reliable for assessing UP among patients on dialysis. Future research should focus on the cross-cultural adaptation and translation of the scale to other languages.",
"title": ""
},
{
"docid": "7645c6a0089ab537cb3f0f82743ce452",
"text": "Behavioral studies of facial emotion recognition (FER) in autism spectrum disorders (ASD) have yielded mixed results. Here we address demographic and experiment-related factors that may account for these inconsistent findings. We also discuss the possibility that compensatory mechanisms might enable some individuals with ASD to perform well on certain types of FER tasks in spite of atypical processing of the stimuli, and difficulties with real-life emotion recognition. Evidence for such mechanisms comes in part from eye-tracking, electrophysiological, and brain imaging studies, which often show abnormal eye gaze patterns, delayed event-related-potential components in response to face stimuli, and anomalous activity in emotion-processing circuitry in ASD, in spite of intact behavioral performance during FER tasks. We suggest that future studies of FER in ASD: 1) incorporate longitudinal (or cross-sectional) designs to examine the developmental trajectory of (or age-related changes in) FER in ASD and 2) employ behavioral and brain imaging paradigms that can identify and characterize compensatory mechanisms or atypical processing styles in these individuals.",
"title": ""
},
{
"docid": "93625a1cc77929e98a3bdbf30ac16f3a",
"text": "The performance of rasterization-based rendering on current GPUs strongly depends on the abilities to avoid overdraw and to prevent rendering triangles smaller than the pixel size. Otherwise, the rates at which highresolution polygon models can be displayed are affected significantly. Instead of trying to build these abilities into the rasterization-based rendering pipeline, we propose an alternative rendering pipeline implementation that uses rasterization and ray-casting in every frame simultaneously to determine eye-ray intersections. To make ray-casting competitive with rasterization, we introduce a memory-efficient sample-based data structure which gives rise to an efficient ray traversal procedure. In combination with a regular model subdivision, the most optimal rendering technique can be selected at run-time for each part. For very large triangle meshes our method can outperform pure rasterization and requires a considerably smaller memory budget on the GPU. Since the proposed data structure can be constructed from any renderable surface representation, it can also be used to efficiently render isosurfaces in scalar volume fields. The compactness of the data structure allows rendering from GPU memory when alternative techniques already require exhaustive paging.",
"title": ""
},
{
"docid": "c59e72c374b3134e347674dccb86b0a4",
"text": "Lane detection and tracking and departure warning systems are important components of Intelligent Transportation Systems. They have particularly attracted great interest from industry and academia. Many architectures and commercial systems have been proposed in the literature. In this paper, we discuss the design of such systems regarding the following stages: pre-processing, detection, and tracking. For each stage, a short description of its working principle as well as their advantages and shortcomings are introduced. Our paper may possibly help in designing new systems that overcome and improve the shortcomings of current architectures.",
"title": ""
},
{
"docid": "684378877ed9bd19ef9b02ba3974eb85",
"text": "Digital forensics is essential for the successful opposition of computer crime. It is associated with many challenges, including rapid changes in computer and digital devices, and more sophisticated attacks on computer systems and networks and the rapid increase in abuse of ICT systems. For a forensic investigation to be performed successfully there are a number of important steps that have to be considered and taken. Since digital forensics is a relatively new field compared to other forensic disciplines, there are ongoing efforts to develop examination standards and to provide structure to digital forensic examinations. This paper attempts to address the diversity of methodologies applied in digital forensic investigations.",
"title": ""
},
{
"docid": "2b00c07248c468447e12aff67c52a192",
"text": "Video fluoroscopy is commonly used in the study of swallowing kinematics. However, various procedures used in linear measurements obtained from video fluoroscopy may contribute to increased variability or measurement error. This study evaluated the influence of calibration referent and image rotation on measurement variability for hyoid and laryngeal displacement during swallowing. Inter- and intrarater reliabilities were also estimated for hyoid and laryngeal displacement measurements across conditions. The use of different calibration referents did not contribute significantly to variability in measures of hyoid and laryngeal displacement but image rotation affected horizontal measures for both structures. Inter- and intrarater reliabilities were high. Using the 95% confidence interval as the error index, measurement error was estimated to range from 2.48 to 3.06 mm. These results address procedural decisions for measuring hyoid and laryngeal displacement in video fluoroscopic swallowing studies.",
"title": ""
},
{
"docid": "57fd4b59ffb27c35faa6a5ee80001756",
"text": "This paper describes a novel method for motion generation and reactive collision avoidance. The algorithm performs arbitrary desired velocity profiles in absence of external disturbances and reacts if virtual or physical contact is made in a unified fashion with a clear physically interpretable behavior. The method uses physical analogies for defining attractor dynamics in order to generate smooth paths even in presence of virtual and physical objects. The proposed algorithm can, due to its low complexity, run in the inner most control loop of the robot, which is absolutely crucial for safe Human Robot Interaction. The method is thought as the locally reactive real-time motion generator connecting control, collision detection and reaction, and global path planning.",
"title": ""
},
{
"docid": "5cf444f83a8b4b3f9482e18cea796348",
"text": "This paper investigates L-shaped iris (LSI) embedded in substrate integrated waveguide (SIW) structures. A lumped element equivalent circuit is utilized to thoroughly discuss the iris behavior in a wide frequency band. This structure has one more degree of freedom and design parameter compared with the conventional iris structures; therefore, it enables design flexibility with enhanced performance. The LSI is utilized to realize a two-pole evanescent-mode filter with an enhanced stopband and a dual-band filter combining evanescent and ordinary modes excitation. Moreover, a prescribed filtering function is demonstrated using the lumped element analysis not only including evanescent-mode pole, but also close-in transmission zero. The proposed LSI promises to substitute the conventional posts in (SIW) filter design.",
"title": ""
},
{
"docid": "73128099f3ddd19e4f88d10cdafbd506",
"text": "BACKGROUND\nRecently, there has been an increased interest in the effects of essential oils on athletic performances and other physiological effects. This study aimed to assess the effects of Citrus sinensis flower and Mentha spicata leaves essential oils inhalation in two different groups of athlete male students on their exercise performance and lung function.\n\n\nMETHODS\nTwenty physical education students volunteered to participate in the study. The subjects were randomly assigned into two groups: Mentha spicata and Citrus sinensis (ten participants each). One group was nebulized by Citrus sinensis flower oil and the other by Mentha spicata leaves oil in a concentration of (0.02 ml/kg of body mass) which was mixed with 2 ml of normal saline for 5 min before a 1500 m running tests. Lung function tests were measured using a spirometer for each student pre and post nebulization giving the same running distance pre and post oils inhalation.\n\n\nRESULTS\nA lung function tests showed an improvement on the lung status for the students after inhaling of the oils. Interestingly, there was a significant increase in Forced Expiratory Volume in the first second and Forced Vital Capacity after inhalation for the both oils. Moreover significant reductions in the means of the running time were observed among these two groups. The normal spirometry results were 50 %, while after inhalation with M. spicata oil the ratio were 60 %.\n\n\nCONCLUSION\nOur findings support the effectiveness of M. spicata and C. sinensis essential oils on the exercise performance and respiratory function parameters. However, our conclusion and generalisability of our results should be interpreted with caution due to small sample size and lack of control groups, randomization or masking. We recommend further investigations to explain the mechanism of actions for these two essential oils on exercise performance and respiratory parameters.\n\n\nTRIAL REGISTRATION\nISRCTN10133422, Registered: May 3, 2016.",
"title": ""
},
{
"docid": "fdfcf2f910884bf899623d2711386db2",
"text": "A number of vehicles may be controlled and supervised by traffic security and its management. The License Plate Recognition is broadly employed in traffic management to recognize a vehicle whose owner has despoiled traffic laws or to find stolen vehicles. Vehicle License Plate Detection and Recognition is a key technique in most of the traffic related applications such as searching of stolen vehicles, road traffic monitoring, airport gate monitoring, speed monitoring and automatic parking lots access control. It is simply the ability of automatically extract and recognition of the vehicle license number plate's character from a captured image. Number Plate Recognition method suffered from problem of feature selection process. The current method of number plate recognition system only focus on local, global and Neural Network process of Feature Extraction and process for detection. The Optimized Feature Selection process improves the detection ratio of number plate recognition. In this paper, it is proposed a new methodology for `License Plate Recognition' based on wavelet transform function. This proposed methodology compare with Correlation based method for detection of number plate. Empirical result shows that better performance in comparison of correlation based technique for number plate recognition. Here, it is modified the Matching Technique for numberplate recognition by using Multi-Class RBF Neural Network Optimization.",
"title": ""
},
{
"docid": "6a3cc8319b7a195ce7ec05a70ad48c7a",
"text": "Image caption generation is the problem of generating a descriptive sentence of an image. Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. This paper presents a brief survey of some technical aspects and methods for description-generation of images. As there has been great interest in research community, to come up with automatic ways to retrieve images based on content. There are numbers of techniques, that, have been used to solve this problem, and purpose of this paper is to have an overview of many of these approaches and databases used for description generation purpose. Finally, we discuss open challenges and future directions for upcoming researchers.",
"title": ""
},
{
"docid": "531aad1188cb41024ce0e3f397e35252",
"text": "CMF is a technique for simultaneously learning low-rank representations based on a collection of matrices with shared entities. A typical example is the joint modeling of useritem, item-property, and user-feature matrices in a recommender system. The key idea in CMF is that the embeddings are shared across the matrices, which enables transferring information between them. The existing solutions, however, break down when the individual matrices have low-rank structure not shared with others. In this work we present a novel CMF solution that allows each of the matrices to have a separate low-rank structure that is independent of the other matrices, as well as structures that are shared only by a subset of them. We compare MAP and variational Bayesian solutions based on alternating optimization algorithms and show that the model automatically infers the nature of each factor using group-wise sparsity. Our approach supports in a principled way continuous, binary and count observations and is efficient for sparse matrices involving missing data. We illustrate the solution on a number of examples, focusing in particular on an interesting use-case of augmented multi-view learning.",
"title": ""
},
{
"docid": "beb90397ff3d1ef0d71463fb2d9b1b97",
"text": "Due to the strong competition that exists today, most manufacturing organizations are in a continuous effort for increasing their profits and reducing their costs. Accurate sales forecasting is certainly an inexpensive way to meet the aforementioned goals, since this leads to improved customer service, reduced lost sales and product returns and more efficient production planning. Especially for the food industry, successful sales forecasting systems can be very beneficial, due to the short shelf-life of many food products and the importance of the product quality which is closely related to human health. In this paper we present a complete framework that can be used for developing nonlinear time series sales forecasting models. The method is a combination of two artificial intelligence technologies, namely the radial basis function (RBF) neural network architecture and a specially designed genetic algorithm (GA). The methodology is applied successfully to sales data of fresh milk provided by a major manufacturing company of dairy products. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0869a75f158b04513c848bc7bfb10e37",
"text": "Tracking of multiple objects is an important application in AI City geared towards solving salient problems related to safety and congestion in an urban environment. Frequent occlusion in traffic surveillance has been a major problem in this research field. In this challenge, we propose a model-based vehicle localization method, which builds a kernel at each patch of the 3D deformable vehicle model and associates them with constraints in 3D space. The proposed method utilizes shape fitness evaluation besides color information to track vehicle objects robustly and efficiently. To build 3D car models in a fully unsupervised manner, we also implement evolutionary camera self-calibration from tracking of walking humans to automatically compute camera parameters. Additionally, the segmented foreground masks which are crucial to 3D modeling and camera self-calibration are adaptively refined by multiple-kernel feedback from tracking. For object detection/ classification, the state-of-theart single shot multibox detector (SSD) is adopted to train and test on the NVIDIA AI City Dataset. To improve the accuracy on categories with only few objects, like bus, bicycle and motorcycle, we also employ the pretrained model from YOLO9000 with multiscale testing. We combine the results from SSD and YOLO9000 based on ensemble learning. Experiments show that our proposed tracking system outperforms both state-of-the-art of tracking by segmentation and tracking by detection. Keywords—multiple object tracking, constrained multiple kernels, 3D deformable model, camera self-calibration, adaptive segmentation, object detection, object classification",
"title": ""
},
{
"docid": "3c0b072b1b2c5082552aff2379bbeeee",
"text": "Big Data is a recent research style which brings up challenges in decision making process. The size of the dataset turn intotremendously big, the process of extracting valuablefacts by analyzing these data also has become tedious. To solve this problem of information extraction with Big Data, parallel programming models can be used. Parallel Programming model achieves information extraction by partitioning the huge data into smaller chunks. MapReduce is one of the parallel programming models which works well with Hadoop Distributed File System(HDFS) that can be used to partition the data in a more efficient and effective way. In MapReduce, once the data is partitioned based on the <key, value> pair, it is ready for data analytics. Time Series data play an important role in Big Data Analytics where Time Series analysis can be performed with many machine learning algorithms as well as traditional algorithmic concepts such as regression, exponential smoothing, moving average, classification, clustering and model-based recommendation. For Big Data, these algorithms can be used with MapReduce programming model on Hadoop clusters by translating their data analytics logic to the MapReduce job which is to be run over Hadoop clusters. But Time Series data are sequential in nature so that the partitioning of Time Series data must be carefully done to retain its prediction accuracy.In this paper, a novel parallel approach to forecast Time Series data with Holt-Winters model (PAFHW) is proposed and the proposed approach PAFHW is enhanced by combining K-means clusteringfor forecasting the Time Series data in distributed environment.",
"title": ""
}
] |
scidocsrr
|
08f22c1d84c5e40a7ce26f5715f362a2
|
International Call Fraud Detection Systems and Techniques
|
[
{
"docid": "1a13a0d13e0925e327c9b151b3e5b32d",
"text": "The topic of this thesis is fraud detection in mobile communications networks by means of user profiling and classification techniques. The goal is to first identify relevant user groups based on call data and then to assign a user to a relevant group. Fraud may be defined as a dishonest or illegal use of services, with the intention to avoid service charges. Fraud detection is an important application, since network operators lose a relevant portion of their revenue to fraud. Whereas the intentions of the mobile phone users cannot be observed, it is assumed that the intentions are reflected in the call data. The call data is subsequently used in describing behavioral patterns of users. Neural networks and probabilistic models are employed in learning these usage patterns from call data. These models are used either to detect abrupt changes in established usage patterns or to recognize typical usage patterns of fraud. The methods are shown to be effective in detecting fraudulent behavior by empirically testing the methods with data from real mobile communications networks. © All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of the author.",
"title": ""
}
] |
[
{
"docid": "7786fac57e0c1392c6a5101681baecb0",
"text": "We deployed 72 sensors of 10 modalities in 15 wireless and wired networked sensor systems in the environment, in objects, and on the body to create a sensor-rich environment for the machine recognition of human activities. We acquired data from 12 subjects performing morning activities, yielding over 25 hours of sensor data. We report the number of activity occurrences observed during post-processing, and estimate that over 13000 and 14000 object and environment interactions occurred. We describe the networked sensor setup and the methodology for data acquisition, synchronization and curation. We report on the challenges and outline lessons learned and best practice for similar large scale deployments of heterogeneous networked sensor systems. We evaluate data acquisition quality for on-body and object integrated wireless sensors; there is less than 2.5% packet loss after tuning. We outline our use of the dataset to develop new sensor network self-organization principles and machine learning techniques for activity recognition in opportunistic sensor configurations. Eventually this dataset will be made public.",
"title": ""
},
{
"docid": "d49c30d24333c263b43000f268a8f20d",
"text": "Give us 5 minutes and we will show you the best book to read today. This is it, the handbook of blind source separation independent component analysis and applications that will be your best choice for better reading book. Your five times will not spend wasted by reading this website. You can take the book as a source to make better concept. Referring the books that can be situated with your needs is sometime difficult. But here, this is so easy. You can find the best thing of book that you can read.",
"title": ""
},
{
"docid": "82304a1d1c48da0932b8426e4eaba95f",
"text": "Mobile payment normally occurs as a wireless transaction of monetary value and includes the initiation, authorization and the realization of the payment. Such transactions are facilitated by purpose-built mobile payment systems that are part of the service infrastructure supporting the functioning of mobile business applications. A number of stakeholder groups may be involved in concluding a mobile payment transaction, among them customers, mobile operators, financial institutions, merchants, and intermediaries. In this paper, mobile payment systems are characterised from the point of view of the stakeholder groups. Building on existing work, a supply and demand model for the investigation of mPayment services is presented, and applied to a case study.",
"title": ""
},
{
"docid": "ae89e4fbe0bf0269d215f0fb00511ff0",
"text": "Concept location identifies parts of a software system that implement a specific concept that originates from the problem or the solution domain. Concept location is a very common software engineering activity that directly supports software maintenance and evolution tasks such as incremental change and reverse engineering. This work addresses the problem of concept location using an advanced information retrieval method, Latent Semantic Indexing (LSI). LSI is used to map concepts expressed in natural language by the programmer to the relevant parts of the source code. Results of a case study on NCSA Mosaic are presented and compared with previously published results of other static methods for concept location.",
"title": ""
},
{
"docid": "a1ed4c514380fb0d7b7083fb1cee520d",
"text": "We show two important findings on the use of deep convolutional neural networks (CNN) in medical image analysis. First, we show that CNN models that are pre-trained using computer vision databases (e.g., Imagenet) are useful in medical image applications, despite the significant differences in image appearance. Second, we show that multiview classification is possible without the pre-registration of the input images. Rather, we use the high-level features produced by the CNNs trained in each view separately. Focusing on the classification of mammograms using craniocaudal (CC) and mediolateral oblique (MLO) views and their respective mass and micro-calcification segmentations of the same breast, we initially train a separate CNN model for each view and each segmentation map using an Imagenet pre-trained model. Then, using the features learned from each segmentation map and unregistered views, we train a final CNN classifier that estimates the patient’s risk of developing breast cancer using the Breast Imaging-Reporting and Data System (BI-RADS) score. We test our methodology in two publicly available datasets (InBreast and DDSM), containing hundreds of cases, and show that it produces a volume under ROC surface of over 0.9 and an area under ROC curve (for a 2-class problem benign and malignant) of over 0.9. In general, our approach shows state-of-the-art classification results and demonstrates a new comprehensive way of addressing this challenging classification problem.",
"title": ""
},
{
"docid": "0cd4102eca54c7b0da4e798fa4bf5509",
"text": "The amount of images being uploaded to the internet is rapidly increasing, with Facebook users uploading over 2.5 billion new photos every month [Facebook 2010], however, applications that make use of this data are severely lacking. Current computer vision applications use a small number of input images because of the difficulty is in acquiring computational resources and storage options for large amounts of data [Guo. . . 2005; White et al. 2010]. As such, development of vision applications that use a large set of images has been limited [Ghemawat and Gobioff. . . 2003]. The Hadoop Mapreduce platform provides a system for large and computationally intensive distributed processing (Dean, 2004), though use of Hadoops system is severely limited by the technical complexities of developing useful applications [Ghemawat and Gobioff. . . 2003; White et al. 2010]. To immediately address this, we propose an open-source Hadoop Image Processing Interface (HIPI) that aims to create an interface for computer vision with MapReduce technology. HIPI abstracts the highly technical details of Hadoop’s system and is flexible enough to implement many techniques in current computer vision literature. This paper describes the HIPI framework, and describes two example applications that have been implemented with HIPI. The goal of HIPI is to create a tool that will make development of large-scale image processing and vision projects extremely accessible in hopes that it will empower researchers and students to create applications with ease.",
"title": ""
},
{
"docid": "b6c69ee2b9bce4c60c3ef9eaff07f93f",
"text": "Videos taken in the wild sometimes contain unexpected rain streaks, which brings difficulty in subsequent video processing tasks. Rain streak removal in a video (RSRV) is thus an important issue and has been attracting much attention in computer vision. Different from previous RSRV methods formulating rain streaks as a deterministic message, this work first encodes the rains in a stochastic manner, i.e., a patch-based mixture of Gaussians. Such modification makes the proposed model capable of finely adapting a wider range of rain variations instead of certain types of rain configurations as traditional. By integrating with the spatiotemporal smoothness configuration of moving objects and low-rank structure of background scene, we propose a concise model for RSRV, containing one likelihood term imposed on the rain streak layer and two prior terms on the moving object and background scene layers of the video. Experiments implemented on videos with synthetic and real rains verify the superiority of the proposed method, as compared with the state-of-the-art methods, both visually and quantitatively in various performance metrics.",
"title": ""
},
{
"docid": "d8567a34caacdb22a0aea281a1dbbccb",
"text": "Traditionally, interference protection is guaranteed through a policy of spectrum licensing, whereby wireless systems get exclusive access to spectrum. This is an effective way to prevent interference, but it leads to highly inefficient use of spectrum. Cognitive radio along with software radio, spectrum sensors, mesh networks, and other emerging technologies can facilitate new forms of spectrum sharing that greatly improve spectral efficiency and alleviate scarcity, if policies are in place that support these forms of sharing. On the other hand, new technology that is inconsistent with spectrum policy will have little impact. This paper discusses policies that can enable or facilitate use of many spectrum-sharing arrangements, where the arrangements are categorized as being based on coexistence or cooperation and as sharing among equals or primary-secondary sharing. A shared spectrum band may be managed directly by the regulator, or this responsibility may be delegated in large part to a license-holder. The type of sharing arrangement and the entity that manages it have a great impact on which technical approaches are viable and effective. The most efficient and cost-effective form of spectrum sharing will depend on the type of systems involved, where systems under current consideration are as diverse as television broadcasters, cellular carriers, public safety systems, point-to-point links, and personal and local-area networks. In addition, while cognitive radio offers policy-makers the opportunity to improve spectral efficiency, cognitive radio also provides new challenges for policy enforcement. A responsible regulator will not allow a device into the marketplace that might harm other systems. Thus, designers must seek innovative ways to assure regulators that new devices will comply with policy requirements and will not cause harmful interference.",
"title": ""
},
{
"docid": "acfe7531f67a40e27390575a69dcd165",
"text": "This paper reviews the relationship between attention deficit hyperactivity disorder (ADHD) and academic performance. First, the relationship at different developmental stages is examined, focusing on pre-schoolers, children, adolescents and adults. Second, the review examines the factors underpinning the relationship between ADHD and academic underperformance: the literature suggests that it is the symptoms of ADHD and underlying cognitive deficits not co-morbid conduct problems that are at the root of academic impairment. The review concludes with an overview of the literature examining strategies that are directed towards remediating the academic impairment of individuals with ADHD.",
"title": ""
},
{
"docid": "76d10dc3b823d7cae01269b2b7f15745",
"text": "The new challenge for designers and HCI researchers is to develop software tools for effective e-learning. Learner-Centered Design (LCD) provides guidelines to make new learning domains accessible in an educationally productive manner. A number of new issues have been raised because of the new \"vehicle\" for education. Effective e-learning systems should include sophisticated and advanced functions, yet their interface should hide their complexity, providing an easy and flexible interaction suited to catch students' interest. In particular, personalization and integration of learning paths and communication media should be provided.It is first necessary to dwell upon the difference between attributes for platforms (containers) and for educational modules provided by a platform (contents). In both cases, it is hard to go deeply into pedagogical issues of the provided knowledge content. This work is a first step towards identifying specific usability attributes for e-learning systems, capturing the peculiar features of this kind of applications. We report about a preliminary users study involving a group of e-students, observed during their interaction with an e-learning system in a real situation. We then propose to adapt to the e-learning domain the so called SUE (Systematic Usability Evaluation) inspection, providing evaluation patterns able to drive inspectors' activities in the evaluation of an e-learning tool.",
"title": ""
},
{
"docid": "be3ffc29a165b37b47d3ea28285a86a1",
"text": "(11.1) Here we describe a mathematical model in the field of cellular biology. It is a model for two similar cells which interact via diffusion past a membrane. Each cell by itself is inert or dead in the sense that the concentrations of its enzymes achieve a constant equilibrium. In interaction however, the cellular system pulses (or expressed perhaps over dramatically, becomes alive:) in the sense that the concentrations of the enzymes in each cell will oscillate indefinitely. Of course we are using an extremely simplified picture of actual cells. The model is an example of Turing's equations of cellular biology [1] which are described in the next section. I would like to thank H. Hartman for bringing to my attention Reprinted with permission of the publisher, American Mathematical Society,",
"title": ""
},
{
"docid": "86d58f4196ceb48e29cb143e6a157c22",
"text": "In this paper, we challenge a form of paragraph-to-question generation task. We propose a question generation system which can generate a set of comprehensive questions from a body of text. Besides the tree kernel functions to assess the grammatically of the generated questions, our goal is to rank them by using community-based question answering systems to calculate the importance of the generated questions. The main assumption behind our work is that each body of text is related to a topic of interest and it has a comprehensive information about the topic.",
"title": ""
},
{
"docid": "4b6da0b9c88f4d94abfbbcb08bb0fc43",
"text": "In this paper we show how word embeddings can be used to increase the effectiveness of a state-of-the art Locality Sensitive Hashing (LSH) based first story detection (FSD) system over a standard tweet corpus. Vocabulary mismatch, in which related tweets use different words, is a serious hindrance to the effectiveness of a modern FSD system. In this case, a tweet could be flagged as a first story even if a related tweet, which uses different but synonymous words, was already returned as a first story. In this work, we propose a novel approach to mitigate this problem of lexical variation, based on tweet expansion. In particular, we propose to expand tweets with semantically related paraphrases identified via automatically mined word embeddings over a background tweet corpus. Through experimentation on a large data stream comprised of 50 million tweets, we show that FSD effectiveness can be improved by 9.5% over a state-of-the-art FSD system.",
"title": ""
},
{
"docid": "6eb85c1a42dd2e4eaa6835e924fdfebf",
"text": "The concept of ‘sleeping on a problem’ is familiar to most of us. But with myriad stages of sleep, forms of memory and processes of memory encoding and consolidation, sorting out how sleep contributes to memory has been anything but straightforward. Nevertheless, converging evidence, from the molecular to the phenomenological, leaves little doubt that offline memory reprocessing during sleep is an important component of how our memories are formed and ultimately shaped.",
"title": ""
},
{
"docid": "9e8cf31a711a77fa5c5dcc932473dc27",
"text": "The opening book is an important component of a chess engine, and thus computer chess programmers have been developing automated methods to improve the quality of their books. For chess, which has a very rich opening theory, large databases of highquality games can be used as the basis of an opening book, from which statistics relating to move choices from given positions can be collected. In order to nd out whether the opening books used by modern chess engines in machine versus machine competitions are \\comparable\" to those used by chess players in human versus human competitions, we carried out analysis on 26 test positions using statistics from two opening books one compiled from humans’ games and the other from machines’ games. Our analysis using several nonparametric measures, shows that, overall, there is a strong association between humans’ and machines’ choices of opening moves when using a book to guide their choices.",
"title": ""
},
{
"docid": "1360ab7fef48f6913b188447aa3841b5",
"text": "Optical music recognition (OMR) systems are used to convert music scanned from paper into a format suitable for playing or editing on a computer. These systems generally have two phases: recognizing the graphical symbols (such as note-heads and lines) and determining the musical meaning and relationships of the symbols (such as the pitch and rhythm of the notes). In this paper we explore the second phase and give a two-step approach that admits an economical representation of the parsing rules for the system. The approach is flexible and allows the system to be extended to new notations with little effort—the current system can parse common music notation, Sacred Harp notation and plainsong. It is based on a string grammar and a customizable graph that specifies relationships between musical objects. We observe that this graph can be related to printing as well as recognizing music notation, bringing the opportunity for cross-fertilization between the two areas of research. Copyright c © 2003 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "82dcbecb4c1c6bb61ac9b029fc2f9871",
"text": "A complete list of the titles in this series appears at the end of this volume. No part of this publication may he reproduced, stored in a retrieval system, or transmitted in any form or by any means Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic format. For information about Wiley products, visit our web site at www.wiley.com.",
"title": ""
},
{
"docid": "6eda9275b33c107dc207fa06f6d1a26e",
"text": "For many image processing and computer vision problems, data points are in matrix form. Traditional methods often convert a matrix into a vector and then use vector-based approaches. They will ignore the location of matrix elements and the converted vector often has high dimensionality. How to select features for 2D matrix data directly is still an uninvestigated important issue. In this paper, we propose an algorithm named sparse matrix regression (SMR) for direct feature selection on matrix data. It employs the matrix regression model to accept matrix as input and bridges each matrix to its label. Based on the intrinsic property of regression coefficients, we design some sparse constraints on the coefficients to perform feature selection. An effective optimization method with provable convergence behavior is also proposed. We reveal that the number of regression vectors can be regarded as a tradeoff parameter to balance the capacity of learning and generalization in essence. To examine the effectiveness of SMR, we have compared it with several vector-based approaches on some benchmark data sets. Furthermore, we have also evaluated SMR in the application of scene classification. They all validate the effectiveness of our method.",
"title": ""
},
{
"docid": "5d3977c0a7e3e1a4129693342c6be3d3",
"text": "With the fast advances in nextgen sequencing technology, high-throughput RNA sequencing has emerged as a powerful and cost-effective way for transcriptome study. De novo assembly of transcripts provides an important solution to transcriptome analysis for organisms with no reference genome. However, there lacked understanding on how the different variables affected assembly outcomes, and there was no consensus on how to approach an optimal solution by selecting software tool and suitable strategy based on the properties of RNA-Seq data. To reveal the performance of different programs for transcriptome assembly, this work analyzed some important factors, including k-mer values, genome complexity, coverage depth, directional reads, etc. Seven program conditions, four single k-mer assemblers (SK: SOAPdenovo, ABySS, Oases and Trinity) and three multiple k-mer methods (MK: SOAPdenovo-MK, trans-ABySS and Oases-MK) were tested. While small and large k-mer values performed better for reconstructing lowly and highly expressed transcripts, respectively, MK strategy worked well for almost all ranges of expression quintiles. Among SK tools, Trinity performed well across various conditions but took the longest running time. Oases consumed the most memory whereas SOAPdenovo required the shortest runtime but worked poorly to reconstruct full-length CDS. ABySS showed some good balance between resource usage and quality of assemblies. Our work compared the performance of publicly available transcriptome assemblers, and analyzed important factors affecting de novo assembly. Some practical guidelines for transcript reconstruction from short-read RNA-Seq data were proposed. De novo assembly of C. sinensis transcriptome was greatly improved using some optimized methods.",
"title": ""
},
{
"docid": "a527c7205e06531645a2ec3bb926f296",
"text": "The quantum Hall effect (QHE), one example of a quantum phenomenon that occurs on a truly macroscopic scale, has attracted intense interest since its discovery in 1980 and has helped elucidate many important aspects of quantum physics. It has also led to the establishment of a new metrological standard, the resistance quantum. Disappointingly, however, the QHE has been observed only at liquid-helium temperatures. We show that in graphene, in a single atomic layer of carbon, the QHE can be measured reliably even at room temperature, which makes possible QHE resistance standards becoming available to a broader community, outside a few national institutions.",
"title": ""
}
] |
scidocsrr
|
aee49cbedf5f859063f88dd1f31944b5
|
Face Synthesis for Eyeglass-Robust Face Recognition
|
[
{
"docid": "7c799fdfde40289ba4e0ce549f02a5ad",
"text": "In this paper, we design a benchmark task and provide the associated datasets for recognizing face images and link them to corresponding entity keys in a knowledge base. More specifically, we propose a benchmark task to recognize one million celebrities from their face images, by using all the possibly collected face images of this individual on the web as training data. The rich information provided by the knowledge base helps to conduct disambiguation and improve the recognition accuracy, and contributes to various real-world applications, such as image captioning and news video analysis. Associated with this task, we design and provide concrete measurement set, evaluation protocol, as well as training data. We also present in details our experiment setup and report promising baseline results. Our benchmark task could lead to one of the largest classification problems in computer vision. To the best of our knowledge, our training dataset, which contains 10M images in version 1, is the largest publicly available one in the world.",
"title": ""
}
] |
[
{
"docid": "4f8822deb045eec9e8fca676353f1d1d",
"text": "Data mining plays an important role in the business world and it helps to the educational institution to predict and make decisions related to the students' academic status. With a higher education, now a days dropping out of students' has been increasing, it affects not only the students' career but also on the reputation of the institute. The existing system is a system which maintains the student information in the form of numerical values and it just stores and retrieve the information what it contains. So the system has no intelligence to analyze the data. The proposed system is a web based application which makes use of the Naive Bayesian mining technique for the extraction of useful information. The experiment is conducted on 700 students' with 19 attributes in Amrita Vishwa Vidyapeetham, Mysuru. Result proves that Naive Bayesian algorithm provides more accuracy over other methods like Regression, Decision Tree, Neural networks etc., for comparison and prediction. The system aims at increasing the success graph of students using Naive Bayesian and the system which maintains all student admission details, course details, subject details, student marks details, attendance details, etc. It takes student's academic history as input and gives students' upcoming performances on the basis of semester.",
"title": ""
},
{
"docid": "c9ea42872164e65424498c6a5c5e0c6d",
"text": "Inverse problems appear in many applications, such as image deblurring and inpainting. The common approach to address them is to design a specific algorithm for each problem. The Plug-and-Play (P&P) framework, which has been recently introduced, allows solving general inverse problems by leveraging the impressive capabilities of existing denoising algorithms. While this fresh strategy has found many applications, a burdensome parameter tuning is often required in order to obtain high-quality results. In this paper, we propose an alternative method for solving inverse problems using off-the-shelf denoisers, which requires less parameter tuning. First, we transform a typical cost function, composed of fidelity and prior terms, into a closely related, novel optimization problem. Then, we propose an efficient minimization scheme with a P&P property, i.e., the prior term is handled solely by a denoising operation. Finally, we present an automatic tuning mechanism to set the method’s parameters. We provide a theoretical analysis of the method and empirically demonstrate its competitiveness with task-specific techniques and the P&P approach for image inpainting and deblurring.",
"title": ""
},
{
"docid": "98e3279056e9bc15ce4b32c6dc027af9",
"text": "Publication Information Bazrafkan, Shabab , Javidnia, Hossein , Lemley, Joseph , & Corcoran, Peter (2018). Semiparallel deep neural network hybrid architecture: first application on depth from monocular camera. Journal of Electronic Imaging, 27(4), 19. doi: 10.1117/1.JEI.27.4.043041 Publisher Society of Photo-optical Instrumentation Engineers (SPIE) Link to publisher's version https://dx.doi.org/10.1117/1.JEI.27.4.043041",
"title": ""
},
{
"docid": "9d49f9b508b06d1481488b23af48785e",
"text": "Modern network interfaces demand highly intelligent traffic management in addition to the basic requirement of wire speed packet forwarding. Several vendors are releasing network processors in order to handle these demands. Network workloads can be classified into data plane and control plane workloads, however most network processors are optimized for data plane. Also, existing benchmark suites for network processors primarily contain data plane workloads, which perform packet processing for a forwarding function. In this paper, we present a set of benchmarks, called NpBench, targeted towards control plane (e.g., traffic management, quality of service, etc.) as well as data plane workloads. The characteristics of NpBench workloads, such as instruction mix, parallelism, cache behavior and required processing capability per packet, are presented and compared with CommBench, an existing network processor benchmark suite [9]. We also discuss the architectural characteristics of the benchmarks having control plane functions, their implications to designing network processors and the significance of Instruction Level Parallelism (ILP) in network processors.",
"title": ""
},
{
"docid": "ffba4650ec3349c096c35779775d350d",
"text": "Massively parallel short-read sequencing technologies, coupled with powerful software platforms, are enabling investigators to analyse tens of thousands of genetic markers. This wealth of data is rapidly expanding and allowing biological questions to be addressed with unprecedented scope and precision. The sizes of the data sets are now posing significant data processing and analysis challenges. Here we describe an extension of the Stacks software package to efficiently use genotype-by-sequencing data for studies of populations of organisms. Stacks now produces core population genomic summary statistics and SNP-by-SNP statistical tests. These statistics can be analysed across a reference genome using a smoothed sliding window. Stacks also now provides several output formats for several commonly used downstream analysis packages. The expanded population genomics functions in Stacks will make it a useful tool to harness the newest generation of massively parallel genotyping data for ecological and evolutionary genetics.",
"title": ""
},
{
"docid": "d87295095ef11648890b19cd0608d5da",
"text": "Link prediction and recommendation is a fundamental problem in social network analysis. The key challenge of link prediction comes from the sparsity of networks due to the strong disproportion of links that they have potential to form to links that do form. Most previous work tries to solve the problem in single network, few research focus on capturing the general principles of link formation across heterogeneous networks. In this work, we give a formal definition of link recommendation across heterogeneous networks. Then we propose a ranking factor graph model (RFG) for predicting links in social networks, which effectively improves the predictive performance. Motivated by the intuition that people make friends in different networks with similar principles, we find several social patterns that are general across heterogeneous networks. With the general social patterns, we develop a transfer-based RFG model that combines them with network structure information. This model provides us insight into fundamental principles that drive the link formation and network evolution. Finally, we verify the predictive performance of the presented transfer model on 12 pairs of transfer cases. Our experimental results demonstrate that the transfer of general social patterns indeed help the prediction of links.",
"title": ""
},
{
"docid": "46c754d52ccda0e334cd691e10f8aeac",
"text": "This study examines the development of technology, pedagogy, and content knowledge (TPACK) in four in-service secondary science teachers as they participated in a professional development program focusing on technology integration into K-12 classrooms to support science as inquiry teaching. In the program, probeware, mind-mapping tools (CMaps), and Internet applications ― computer simulations, digital images, and movies — were introduced to the science teachers. A descriptive multicase study design was employed to track teachers’ development over the yearlong program. Data included interviews, surveys, classroom observations, teachers’ technology integration plans, and action research study reports. The program was found to have positive impacts to varying degrees on teachers’ development of TPACK. Contextual factors and teachers’ pedagogical reasoning affected teachers’ ability to enact in their classrooms what they learned in the program. Suggestions for designing effective professional development programs to improve science teachers’ TPACK are discussed. Contemporary Issues in Technology and Teacher Education, 9(1) 26 Science teaching is such a complex, dynamic profession that it is difficult for a teacher to stay up-to-date. For a teacher to grow professionally and become better as a teacher of science, a special, continuous effort is required (Showalter, 1984, p. 21). To better prepare students for the science and technology of the 21st century, the current science education reforms ask science teachers to integrate technology and inquiry-based teaching into their instruction (American Association for the Advancement of Science, 1993; National Research Council [NRC], 1996, 2000). The National Science Education Standards (NSES) define inquiry as “the diverse ways in which scientists study the natural world and propose explanations based on the evidence derived from their work” (NRC, 1996, p. 23). The NSES encourage teachers to apply “a variety of technologies, such as hand tools, measuring instruments, and calculators [as] an integral component of scientific investigations” to support student inquiry (p.175). Utilizing technology tools in inquiry-based science classrooms allows students to work as scientists (Novak & Krajcik, 2006, p. 76). Teaching science as emphasized in the reform documents, however, is not easy. Science teachers experience various constraints, such as lack of time, equipment, pedagogical content knowledge, and pedagogical skills in implementing reform-based teaching strategies (Crawford, 1999, 2000; Roehrig & Luft, 2004, 2006). One way to overcome the barriers and to reform teaching is to participate in professional development programs that provide opportunities for social, personal, and professional development (Bell & Gilbert, 2004). Professional development programs in which teachers collaborate with other teachers, reflect on their classroom practices, and receive support and feedback have been shown to foster teachers’ professional development (Grossman, Wineburg, & Woolworth, 2001; Huffman, 2006; Loucks-Horsley, Love, Stiles, Mundry, & Hewson, 2003). In this light, the professional development program, Technology Enhanced Communities (TEC), which is presented in this paper, was designed to create a learning community where science teachers can learn to integrate technology into their teaching to support student inquiry. 
TEC has drawn heavily on situated learning theory, which defines learning as situated, social, and distributed (Brown, Collins, & Duguid, 1989; Lave & Wenger, 1991; Putnam & Borko, 2000). Since a situated learning environment supports collaboration among participants (Brown et al., 1989; Lave & Wenger, 1991; Putnam & Borko, 2000), and the collaboration among teachers enhances teacher learning (CochranSmith & Lytle, 1999; Krajcik, Blumenfeld, Marx, & Soloway, 1994; Little, 1990), TEC was designed to provide teachers with opportunities to build a community that enables learning and is distributed among teachers. The situated learning theory was used as a design framework for TEC, but technology, pedagogy, and content knowledge (TPACK) was employed as a theoretical framework for the present study. Since the concept of TPACK has emerged recently, there has been no consensus on the nature and development of TPACK among researchers and teacher educators. As suggested by many authors in the Handbook of Technological Pedagogical Content Knowledge (AACTE Committee on Innovation and Technology, 2008), more research needs to examine the role of teacher preparation programs teachers’ beliefs (Niess, 2008), and specific student and school contexts (McCrory, 2008) regarding the nature and development of TPACK. Thus, this study was conducted to investigate the effects of an in-service teacher education program (TEC) on science teachers’ development of Contemporary Issues in Technology and Teacher Education, 9(1) 27 TPACK. The research question guiding this study was: How does the professional development program, TEC, enhance science teachers’ TPACK? Review of the Relevant Literature Technology Integration Into Science Classrooms Educational technology tools such as computers, probeware, data collection and analysis software, digital microscopes, hypermedia/multimedia, student response systems, and interactive white boards can help students actively engage in the acquisition of scientific knowledge and development of the nature of science and inquiry. When educational technology tools are used appropriately and effectively in science classrooms, students actively engage in their knowledge construction and improve their thinking and problem solving skills (Trowbridge, Bybee, & Powell, 2008). Many new educational technology tools are now available for science teachers. However, integrating technology into instruction is still challenging for most teachers (Norris, Sullivan, Poirot, & Soloway, 2003; Office of Technology Assessment [OTA], 1995). The existing studies demonstrate that technology integration is a long-term process requiring commitment (Doering, Hughes, & Huffman, 2003; Hughes, Kerr, & Ooms, 2005; Sandholtz, Ringstaff, & Dwyer, 1997). Teachers need ongoing support while they make efforts to develop and sustain effective technology integration. Professional learning communities, where teachers collaborate with other teachers to improve and support their learning and teaching, are effective for incorporating technology into teaching (Krajcik et al., 1994; Little, 1990). As a part of a community, teachers share their knowledge, practices, and experiences; discuss issues related to student learning; and critique and support each others’ knowledge and pedagogical growth while they are learning about new technologies (Hughes et al., 2005). Technology integration is most commonly associated with professional development opportunities. 
The need for participant-driven professional development programs in which teachers engage in inquiry and reflect on their practices to improve their learning about technology has been emphasized by many researchers (Loucks-Horsley et al., 2003; Zeichner, 2003). Zeichner, for example, argued that teacher action research is an important aspect of effective professional development. According to Zeichner, to improve their learning and practices, teachers should become teacher researchers, conduct self-study research, and engage in teacher research groups. These collaborative groups provide teachers with support and opportunities to deeply analyze their learning and practices. Pedagogical Content Knowledge Shulman (1987) defined seven knowledge bases for teachers: content knowledge, general pedagogical knowledge, curriculum knowledge, pedagogical content knowledge (PCK), knowledge of learners and their characteristics, knowledge of educational context, and knowledge of educational ends, goals, and values. According to Shulman, among these knowledge bases, PCK plays the most important role in effective teaching. He argued that teachers should develop PCK, which is “the particular form of content knowledge that embodies the aspects of content most germane to its teachability” (Shulman, 1986, p. 9). PCK is not only a special form of content knowledge but also a “blending of content and pedagogy into an understanding of how particular topics, problems, or issues are Contemporary Issues in Technology and Teacher Education, 9(1) 28 organized, presented, and adapted to the diverse interests and abilities of learners, and presented for instruction” (Shulman, 1987, p. 8). Shulman argued that teachers not only need to know their content but also need to know how to present it effectively. Good teaching “begins with an act of reason, continues with a process of reasoning, culminates in performances of imparting, eliciting, involving, or enticing, and is then thought about some more until the process begins again” (Shulman, 1987, p. 13). Thus, to make effective pedagogical decisions about what to teach and how to teach it, teachers should develop both their PCK and pedagogical reasoning skills. Since Shulman’s initial conceptualization of PCK, researchers have developed new forms and components of PCK (e.g., Cochran, DeRuiter, & King, 1993; Grossman, 1990; Marks, 1990; Magnusson, Borko, & Krajcik, 1994; Tamir, 1988). Some researchers while following Shulman’s original classification have added new components (Grossman, 1990; Marks 1990; Fernandez-Balboa & Stiehl, 1995), while others have developed different conceptions of PCK and argued about the blurry borders between PCK and content knowledge (Cochran et al., 1993). Building on Shulman’s groundbreaking work, these researchers have generated a myriad of versions of PCK. In a recent review of the PCK literature, Lee, Brown, Luft, and Roehrig (2007) identified a consensus among researchers on the following two components of PCK: (a) teachers’ knowledge of student learning to translate and transform content to",
"title": ""
},
{
"docid": "1f72fad6fd2394011f608f7f80a96d2b",
"text": "Flooding Peer-to-Peer (P2P) networks form the basis of services such as the electronic currency system Bitcoin. The decentralized architecture enables robustness against failure. However, knowledge of the network's topology can allow adversaries to attack specific peers in order to, e.g., isolate certain peers or even partition the network. Knowledge of the topology might be gained by observing the flooding process, which is inherently possible in such networks,, performing a timing analysis on the observations. In this paper we present a timing analysis method that targets flooding P2P networks, show its theoretical, practical feasibility. A validation in the real-world Bitcoin network proves the possibility of inferring network links of actively participating peers with substantial precision, recall (both ~ 40%), potentially enabling attacks on the network. Additionally, we analyze the countermeasure of trickling, quantify the tradeoff between the effectiveness of the countermeasure, the expected performance penalty. The analysis shows that inappropriate parametrization can actually facilitate inference attacks.",
"title": ""
},
{
"docid": "68c7cf8a10382fab04a7c851a9caebb0",
"text": "Circular economy (CE) is a term that exists since the 1970s and has acquired greater importance in the past few years, partly due to the scarcity of natural resources available in the environment and changes in consumer behavior. Cutting-edge technologies such as big data and internet of things (IoT) have the potential to leverage the adoption of CE concepts by organizations and society, becoming more present in our daily lives. Therefore, it is fundamentally important for researchers interested in this subject to understand the status quo of studies being undertaken worldwide and to have the overall picture of it. We conducted a bibliometric literature review from the Scopus Database over the period of 2006–2015 focusing on the application of big data/IoT on the context of CE. This produced the combination of 30,557 CE documents with 32,550 unique big data/IoT studies resulting in 70 matching publications that went through content and social network analysis with the use of ‘R’ statistical tool. We then compared it to some current industry initiatives. Bibliometrics findings indicate China and USA are the most interested countries in the area and reveal a context with significant opportunities for research. In addition, large producers of greenhouse gas emissions, such as Brazil and Russia, still lack studies in the area. Also, a disconnection between important industry initiatives and scientific research seems to exist. The results can be useful for institutions and researchers worldwide to understand potential research gaps and to focus future investments/studies in the field.",
"title": ""
},
{
"docid": "db0e61e6988106203f6780023ba6902b",
"text": "In first stage of each microwave receiver there is Low Noise Amplifier (LNA) circuit, and this stage has important rule in quality factor of the receiver. The design of a LNA in Radio Frequency (RF) circuit requires the trade-off many importance characteristics such as gain, Noise Figure (NF), stability, power consumption and complexity. This situation Forces desingners to make choices in the desing of RF circuits. In this paper the aim is to design and simulate a single stage LNA circuit with high gain and low noise using MESFET for frequency range of 5 GHz to 6 GHz. The desing simulation process is down using Advance Design System (ADS). A single stage LNA has successfully designed with 15.83 dB forward gain and 1.26 dB noise figure in frequency of 5.3 GHz. Also the designed LNA should be working stably In a frequency range of 5 GHz to 6 GHz. Keywords—Advance Design System, Low Noise Amplifier, Radio Frequency, Noise Figure.",
"title": ""
},
{
"docid": "fe0acb0df485e08c9a6cab4859173668",
"text": "Objective: To report a review of various machine learning and hybrid algorithms for detecting SMS spam messages and comparing them according to accuracy criterion. Data sources: Original articles written in English found in Sciencedirect.com, Google-scholar.com, Search.com, IEEE explorer, and the ACM library. Study selection: Those articles dealing with machine learning and hybrid approaches for SMS spam filtering. Data extraction: Many articles extracted by searching a predefined string and the outcome was reviewed by one author and checked by the second. The primary paper was reviewed and edited by the third author. Results: A total of 44 articles were selected which were concerned machine learning and hybrid methods for detecting SMS spam messages. 28 methods and algorithms were extracted from these papers and studied and finally 15 algorithms among them have been compared in one table according to their accuracy, strengths, and weaknesses in detecting spam messages of the Tiago dataset of spam message. Actually, among the proposed methods DCA algorithm, the large cellular network method and graph-based KNN are three most accurate in filtering SMS spams of Tiago data set. Moreover, Hybrid methods are discussed in this paper.",
"title": ""
},
{
"docid": "78b7987361afd8c7814ee416c81a311b",
"text": "This paper presents the characterization of various types of SubMiniature version A (SMA) connectors. The characterization is performed by measurements in frequency and time domain. The SMA connectors are mounted on microstrip (MS) and conductor-backed coplanar waveguide (CPW-CB) manufactured on high-frequency (HF) laminates. The designed characteristic impedance of the transmission lines is 50 Ω and deviation from the designed characteristic impedance is measured. The measurement results suggest that for a given combination of the transmission line and SMA connector, the discontinuity in terms of characteristic impedance can be significantly improved by choosing the right connector type.",
"title": ""
},
{
"docid": "0f753c9122f71dc4558afcc8a60bef54",
"text": "Humanity has just crossed a major landmark in its history with the majority of people now living in cities. Cities have long been known to be society's predominant engine of innovation and wealth creation, yet they are also its main source of crime, pollution, and disease. The inexorable trend toward urbanization worldwide presents an urgent challenge for developing a predictive, quantitative theory of urban organization and sustainable development. Here we present empirical evidence indicating that the processes relating urbanization to economic development and knowledge creation are very general, being shared by all cities belonging to the same urban system and sustained across different nations and times. Many diverse properties of cities from patent production and personal income to electrical cable length are shown to be power law functions of population size with scaling exponents, beta, that fall into distinct universality classes. Quantities reflecting wealth creation and innovation have beta approximately 1.2 >1 (increasing returns), whereas those accounting for infrastructure display beta approximately 0.8 <1 (economies of scale). We predict that the pace of social life in the city increases with population size, in quantitative agreement with data, and we discuss how cities are similar to, and differ from, biological organisms, for which beta<1. Finally, we explore possible consequences of these scaling relations by deriving growth equations, which quantify the dramatic difference between growth fueled by innovation versus that driven by economies of scale. This difference suggests that, as population grows, major innovation cycles must be generated at a continually accelerating rate to sustain growth and avoid stagnation or collapse.",
"title": ""
},
{
"docid": "18243a9ac4961caef5434d3f043b5d78",
"text": "There is a number of automated sign language recognition systems proposed in the computer vision literature. The biggest drawback of all these systems is that every nation has their own culture oriented sign language. In other words, everyone needs to develop a specific sign language recognition system for their nation. Although the main building blocks of all signs are gestures and facial expressions in all sign languages, the nation specific requirements make it difficult to design a multinational recognition framework. In this paper, we focus on the advancements in computer assisted sign language recognition systems. More specifically, we discuss if the ongoing research may trigger the start of an international sign language design. We categorize and present a summary of the current sign language recognition systems. In addition, we present a list of publicly available databases that can be used for designing sign language recognition systems.",
"title": ""
},
{
"docid": "f81261c4a64359778fd3d399ba3fe749",
"text": "Credit card frauds are increasing day by day regardless of the various techniques developed for its detection. Fraudsters are so expert that they engender new ways for committing fraudulent transactions each day which demands constant innovation for its detection techniques as well. Many techniques based on Artificial Intelligence, Data mining, Fuzzy logic, Machine learning, Sequence Alignment, decision tree, neural network, logistic regression, naïve Bayesian, Bayesian network, metalearning, Genetic Programming etc., has evolved in detecting various credit card fraudulent transactions. A steady indulgent on all these approaches will positively lead to an efficient credit card fraud detection system. This paper presents a survey of various techniques used in credit card fraud detection mechanisms and Hidden Markov Model (HMM) in detail. HMM categorizes card holder’s profile as low, medium and high spending based on their spending behavior in terms of amount. A set of probabilities for amount of transaction is being assigned to each cardholder. Amount of each incoming transaction is then matched with card owner’s category, if it justifies a predefined threshold value then the transaction is decided to be legitimate else declared as fraudulent. Index Terms — Credit card, fraud detection, Hidden Markov Model, online shopping",
"title": ""
},
{
"docid": "42a518270a12cccc4e775b6e215c6cb4",
"text": "Performance of a dual-band coplanar patch antenna integrated with an electromagnetic band gap substrate is described. The antenna structure is made from common clothing fabrics and operates at the 2.45 and 5 GHz wireless bands. The design of the coplanar antenna, band gap substrate, and their integration is presented. The band gap array consists of just 3 times 3 elements but reduces radiation into the body by over 10 dB and improves the antenna gain by 3 dB. The performance of the antenna under bending conditions and when placed on the human body are presented.",
"title": ""
},
{
"docid": "b19fa7fa211e36b0049fd5745e30f0c3",
"text": "Multilevel clock-and-data recovery (CDR) systems are analyzed, modeled, and designed. A stochastic analysis provides probability density functions that are used to estimate the effect of intersymbol interference (ISI) and additive white noise on the characteristics of the phase detector (PD) in the CDR. A slope detector based novel multilevel bang-bang CDR architecture is proposed and modeled using the stochastic analysis and its performance compared with a typical multilevel Alexander PD-based CDR for equal-loop bandwidths. The rms jitter of the CDRs are predicted using a linear jitter model and a Markov chain and verified using behavioral simulations. Jitter tolerance simulations are also employed to compare the two CDRs. Both analytical calculations and behavioral simulations predict that at equal-loop bandwidths, the proposed architecture is superior to the Alexander type CDR at large ISI and low signal-to-noise ratios.",
"title": ""
},
{
"docid": "25be81188b38af7ec939b881706fdc2f",
"text": "OBJECTIVES\nTo outline the prevalence and disparities of inattention and hyperactivity among school-aged urban minority youth, causal pathways through which inattention and hyperactivity adversely affects academic achievement, and proven or promising approaches for schools to address these problems.\n\n\nMETHODS\nLiterature review.\n\n\nRESULTS\nApproximately 4.6 million (8.4%) of American youth aged 6-17 have received a diagnosis of attention deficit/hyperactivity disorder (ADHD), and almost two thirds of these youth are reportedly under treatment with prescription medications. Urban minority youth are not only more likely to be affected but also less likely to receive accurate diagnosis and treatment. Causal pathways through which ADHD may affect academic achievement include sensory perceptions, cognition, school connectedness, absenteeism, and dropping out. In one study, youth with diagnosed ADHD were 2.7 times as likely to drop out (10.0% vs. 22.9%). A similar odds ratio for not graduating from high school was found in another prospective study, with an 8-year follow-up period (odds ratio = 2.4). There are many children who are below the clinical diagnostic threshold for ADHD but who exhibit signs and symptoms that interfere with learning. Evidence-based programs emphasizing functional academic and social outcomes are available.\n\n\nCONCLUSIONS\nInattention and hyperactivity are highly and disproportionately prevalent among school-aged urban minority youth, have a negative impact on academic achievement through their effects on sensory perceptions, cognition, school connectedness, absenteeism, and dropping out, and effective practices are available for schools to address these problems. This prevalent and complex syndrome has very powerful effects on academic achievement and educational attainment, and should be a high priority in efforts to help close the achievement gap.",
"title": ""
},
{
"docid": "d0e7edc4bee2ae952ac4d2d711a4c23b",
"text": "Edmodo is a free and safe virtual learning environment, helping students and teachers connect and collaborate outside the face-to-face learning time, which makes it an ideal tool to be explored and adopted by teachers of English as a foreign language (EFL). Our case study will reflect on using the Edmodo Assignment feature as an ePortfolio of EFL student productions and progress. Written productions, speaking and listening contributions, which would be otherwise rather difficult to process and assess in real time, are accommodated by the platform and contribute to a finer assessment process. The outcomes and qualitative results of employing the Edmodo EFL portfolio for a mixed-ability group of undergraduate Geography of Tourism students, for two-semesters, are presented.",
"title": ""
},
{
"docid": "a06256ecdfffd0295bfb462a045749c8",
"text": "Many important NLP problems can be posed as dual-sequence or sequence-tosequence modeling tasks. Recent advances in building end-to-end neural architectures have been highly successful in solving such tasks. In this work we propose a new architecture for dual-sequence modeling that is based on associative memory. We derive AM-RNNs, a recurrent associative memory (AM) which augments generic recurrent neural networks (RNN). This architecture is extended to the Dual AM-RNN which operates on two AMs at once. Our models achieve very competitive results on textual entailment. A qualitative analysis demonstrates that long range dependencies between source and target-sequence can be bridged effectively using Dual AM-RNNs. However, an initial experiment on autoencoding reveals that these benefits are not exploited by the system when learning to solve sequence-to-sequence tasks which indicates that additional supervision or regularization is needed.",
"title": ""
}
] |
scidocsrr
|
ca48c9c0014753549bd29a61a5924f01
|
Design of a High-Performance System for Secure Image Communication in the Internet of Things
|
[
{
"docid": "adc9e237e2ca2467a85f54011b688378",
"text": "Quadrotors are rapidly emerging as a popular platform for unmanned aerial vehicle (UAV) research, due to the simplicity of their construction and maintenance, their ability to hover, and their vertical take off and landing (VTOL) capability. Current designs have often considered only nominal operating conditions for vehicle control design. This work seeks to address issues that arise when deviating significantly from the hover flight regime. Aided by well established research for helicopter flight control, four separate aerodynamic effects are investigated as they pertain to quadrotor flight. The effects result from either translational or vertical vehicular velocity components, and cause both moments that affect attitude control and thrust variation that affects altitude control. Where possible, a theoretical development is first presented, and is then validated through both thrust test stand measurements and vehicle flight tests using the Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC) quadrotor helicopter. The results have enabled improved controller tracking throughout the flight envelope, including at higher speeds and in gusting winds.",
"title": ""
}
] |
[
{
"docid": "339b405d32b9afb4a36f2a8f9bba485d",
"text": "Inspired by the recent advances in generative models, we introduce a human action generation model in order to generate a consecutive sequence of human motions to formulate novel actions. We propose a framework of an autoencoder and a generative adversarial network (GAN) to produce multiple and consecutive human actions conditioned on the initial state and the given class label. The proposed model is trained in an end-to-end fashion, where the autoencoder is jointly trained with the GAN. The model is trained on the NTU RGB+D dataset and we show that the proposed model can generate different styles of actions. Moreover, the model can successfully generate a sequence of novel actions given different action labels as conditions. The conventional human action prediction and generation models lack those features, which are essential for practical applications.",
"title": ""
},
{
"docid": "c64dd1051c5b6892df08813e38285843",
"text": "Diabetes has emerged as a major healthcare problem in India. Today Approximately 8.3 % of global adult population is suffering from Diabetes. India is one of the most diabetic populated country in the world. Today the technologies available in the market are invasive methods. Since invasive methods cause pain, time consuming, expensive and there is a potential risk of infectious diseases like Hepatitis & HIV spreading and continuous monitoring is therefore not possible. Now a days there is a tremendous increase in the use of electrical and electronic equipment in the medical field for clinical and research purposes. Thus biomedical equipment’s have a greater role in solving medical problems and enhance quality of life. Hence there is a great demand to have a reliable, instantaneous, cost effective and comfortable measurement system for the detection of blood glucose concentration. Non-invasive blood glucose measurement device is one such which can be used for continuous monitoring of glucose levels in human body.",
"title": ""
},
{
"docid": "317f1a01a8df4becdb3611c63cef618f",
"text": "High brightness white LED has attracted a lot of attention for its high efficacy, simple to drive, environmentally friendly, long lifespan and small size. The power supply for LED lighting also requires long life while maintaining high efficiency, high power factor and low cost. However, a typical design employs electrolytic capacitor as storage capacitor, which is not only bulky, but also with short lifespan, thus hampering the entire LED lighting system. To prolong the lifespan of power supply, it has to use film capacitor with small capacitance to replace electrolytic capacitor. In this paper, a universal input high efficiency, high power factor LED driver is proposed based on the modified SEPIC converter. Along with a relatively large voltage ripple allowable in a PFC design, the proposal of LED lamp driver is able to eliminate the electrolytic capacitor while maintaining high power factor. To increase the efficiency of LED driver, the presented SEPIC-derived converter is modified further as the twin-bus output stage for matching ultra-high efficiency twin-bus LED current regulator. The operation principle and related analysis is described in detail. A 50-W prototype has been built and tested to verify the proposed LED Driver.",
"title": ""
},
{
"docid": "5a1cdadf05fc4c5ae6f7fa3142e7ed16",
"text": "One major obstacle towards AI is the poor ability of models to solve new problems quicker, and without forgetting previously acquired knowledge. To better understand this issue, we study the problem of continual learning, where the model observes, once and one by one, examples concerning a sequence of tasks. First, we propose a set of metrics to evaluate models learning over a continuum of data. These metrics characterize models not only by their test accuracy, but also in terms of their ability to transfer knowledge across tasks. Second, we propose a model for continual learning, called Gradient Episodic Memory (GEM) that alleviates forgetting, while allowing beneficial transfer of knowledge to previous tasks. Our experiments on variants of the MNIST and CIFAR-100 datasets demonstrate the strong performance of GEM when compared to the state-of-the-art.",
"title": ""
},
{
"docid": "00bd0665891eb9cd9c865074dcf89e9a",
"text": "This case report presents the treatment of a patient with skeletal Cl II malocclusion and anterior open-bite who was treated with zygomatic miniplates through the intrusion of maxillary posterior teeth. A 16-year-old female patient with a chief complaint of anterior open-bite had a symmetric face, incompetent lips, convex profile, retrusive lower lip and chin. Intraoral examination showed that the buccal segments were in Class II relationship, and there was anterior open-bite (overbite -6.5 mm). The cephalometric analysis showed Class II skeletal relationship with increased lower facial height. The treatment plan included intrusion of the maxillary posterior teeth using zygomatic miniplates followed by fixed orthodontic treatment. At the end of treatment Class I canine and molar relationships were achieved, anterior open-bite was corrected and normal smile line was obtained. Skeletal anchorage using zygomatic miniplates is an effective method for open-bite treatment through the intrusion of maxillary posterior teeth.",
"title": ""
},
{
"docid": "cf32fb173182e8bd64150019f9fa36bb",
"text": "LEARNING OBJECTIVES\nAfter reading this article, the participant should be able to: 1. Identify and describe the anatomy of and changes to the aging face, including changes in bone mass and structure and changes to the skin, tissue, and muscles. 2. Assess each individual's unique anatomy before embarking on face-lift surgery and incorporate various surgical techniques, including fat grafting and other corrective procedures in addition to shifting existing fat to a higher position on the face, into discussions with patients. 3. Identify risk factors and potential complications in prospective patients. 4. Describe the benefits and risks of various techniques.\n\n\nSUMMARY\nThe ability to surgically rejuvenate the aging face has progressed in parallel with plastic surgeons' understanding of facial anatomy. In turn, a more clear explanation now exists for the visible changes seen in the aging face. This article and its associated video content review the current understanding of facial anatomy as it relates to facial aging. The standard face-lift techniques are explained and their various features, both good and bad, are reviewed. The objective is for surgeons to make a better aesthetic diagnosis before embarking on face-lift surgery, and to have the ability to use the appropriate technique depending on the clinical situation.",
"title": ""
},
{
"docid": "543099ac1bb00e14f4fc757a25d9487c",
"text": "With the development of personalized services, collaborative filtering techniques have been successfully applied to the network recommendation system. But sparse data seriously affect the performance of collaborative filtering algorithms. To alleviate the impact of data sparseness, using user interest information, an improved user-based clustering Collaborative Filtering (CF) algorithm is proposed in this paper, which improves the algorithm by two ways: user similarity calculating method and user-item rating matrix extended. The experimental results show that the algorithm could describe the user similarity more accurately and alleviate the impact of data sparseness in collaborative filtering algorithm. Also the results show that it can improve the accuracy of the collaborative recommendation algorithm.",
"title": ""
},
{
"docid": "66878197b06f3fac98f867d5457acafe",
"text": "As a result of disparities in the educational system, numerous scholars and educators across disciplines currently support the STEAM (Science, Technology, Engineering, Art, and Mathematics) movement for arts integration. An educational approach to learning focusing on guiding student inquiry, dialogue, and critical thinking through interdisciplinary instruction, STEAM values proficiency, knowledge, and understanding. Despite extant literature urging for this integration, the trend has yet to significantly influence federal or state standards for K-12 education in the United States. This paper provides a brief and focused review of key theories and research from the fields of cognitive psychology and neuroscience outlining the benefits of arts integrative curricula in the classroom. Cognitive psychologists have found that the arts improve participant retention and recall through semantic elaboration, generation of information, enactment, oral production, effort after meaning, emotional arousal, and pictorial representation. Additionally, creativity is considered a higher-order cognitive skill and EEG results show novel brain patterns associated with creative thinking. Furthermore, cognitive neuroscientists have found that long-term artistic training can augment these patterns as well as lead to greater plasticity and neurogenesis in associated brain regions. Research suggests that artistic training increases retention and recall, generates new patterns of thinking, induces plasticity, and results in strengthened higher-order cognitive functions related to creativity. These benefits of arts integration, particularly as approached in the STEAM movement, are what develops students into adaptive experts that have the skills to then contribute to innovation in a variety of disciplines.",
"title": ""
},
{
"docid": "17ceaef57bfa8bf97a75f4f341c58783",
"text": "Slip is the major cause of falls in human locomotion. We present a new bipedal modeling approach to capture and predict human walking locomotion with slips. Compared with the existing bipedal models, the proposed slip walking model includes the human foot rolling effects, the existence of the double-stance gait and active ankle joints. One of the major developments is the relaxation of the nonslip assumption that is used in the existing bipedal models. We conduct extensive experiments to optimize the gait profile parameters and to validate the proposed walking model with slips. The experimental results demonstrate that the model successfully predicts the human recovery gaits with slips.",
"title": ""
},
{
"docid": "a2842352924cbd1deff52976425a0bd6",
"text": "Content-based music information retrieval tasks have traditionally been solved using engineered features and shallow processing architectures. In recent years, there has been increasing interest in using feature learning and deep architectures instead, thus reducing the required engineering effort and the need for prior knowledge. However, this new approach typically still relies on mid-level representations of music audio, e.g. spectrograms, instead of raw audio signals. In this paper, we investigate whether it is possible to apply feature learning directly to raw audio signals. We train convolutional neural networks using both approaches and compare their performance on an automatic tagging task. Although they do not outperform a spectrogram-based approach, the networks are able to autonomously discover frequency decompositions from raw audio, as well as phase-and translation-invariant feature representations.",
"title": ""
},
{
"docid": "c949e051cbfd9cff13d939a7b594e6e6",
"text": "Propagation measurements at 28 GHz were conducted in outdoor urban environments in New York City using four different transmitter locations and 83 receiver locations with distances of up to 500 m. A 400 mega- chip per second channel sounder with steerable 24.5 dBi horn antennas at the transmitter and receiver was used to measure the angular distributions of received multipath power over a wide range of propagation distances and urban settings. Measurements were also made to study the small-scale fading of closely-spaced power delay profiles recorded at half-wavelength (5.35 mm) increments along a small-scale linear track (10 wavelengths, or 107 mm) at two different receiver locations. Our measurements indicate that power levels for small- scale fading do not significantly fluctuate from the mean power level at a fixed angle of arrival. We propose here a new lobe modeling technique that can be used to create a statistical channel model for lobe path loss and shadow fading, and we provide many model statistics as a function of transmitter- receiver separation distance. Our work shows that New York City is a multipath-rich environment when using highly directional steerable horn antennas, and that an average of 2.5 signal lobes exists at any receiver location, where each lobe has an average total angle spread of 40.3° and an RMS angle spread of 7.8°. This work aims to create a 28 GHz statistical spatial channel model for future 5G cellular networks.",
"title": ""
},
{
"docid": "799043a0617a8a9e5aa22fdb1501084d",
"text": "Test case prioritization is a crucial element in software quality assurance in practice, specially, in the context of regression testing. Typically, test cases are prioritized in a way that they detect the potential faults earlier. The effectiveness of test cases, in terms of fault detection, is estimated using quality metrics, such as code coverage, size, and historical fault detection. Prior studies have shown that previously failing test cases are highly likely to fail again in the next releases, therefore, they are highly ranked, while prioritizing. However, in practice, a failing test case may not be exactly the same as a previously failed test case, but quite similar, e.g., when the new failing test is a slightly modified version of an old failing one to catch an undetected fault. In this paper, we define a class of metrics that estimate the test cases quality using their similarity to the previously failing test cases. We have conducted several experiments with five real world open source software systems, with real faults, to evaluate the effectiveness of these quality metrics. The results of our study show that our proposed similarity-based quality measure is significantly more effective for prioritizing test cases compared to existing test case quality measures.",
"title": ""
},
{
"docid": "58920ab34e358c13612d793bb3127c9f",
"text": "We revisit the problem of interval estimation of a binomial proportion. The erratic behavior of the coverage probability of the standard Wald confidence interval has previously been remarked on in the literature (Blyth and Still, Agresti and Coull, Santner and others). We begin by showing that the chaotic coverage properties of the Wald interval are far more persistent than is appreciated. Furthermore, common textbook prescriptions regarding its safety are misleading and defective in several respects and cannot be trusted. This leads us to consideration of alternative intervals. A number of natural alternatives are presented, each with its motivation and context. Each interval is examined for its coverage probability and its length. Based on this analysis, we recommend the Wilson interval or the equal-tailed Jeffreys prior interval for small n and the interval suggested in Agresti and Coull for larger n. We also provide an additional frequentist justification for use of the Jeffreys interval.",
"title": ""
},
{
"docid": "81ef390009fb64bf235147bc0e186bab",
"text": "In this paper, we show how to calibrate a camera and to recover the geometry and the photometry (textures) of objects from a single image. The aim of this work is to make it possible walkthrough and augment reality in a 3D model reconstructed from a single image. The calibration step does not need any calibration target and makes only four assumptions: (1) the single image contains at least two vanishing points, (2) the length (in 3D space) of one line segment (for determining the translation vector) in the image is known, (3) the principle point is the center of the image, and (4) the aspect ratio is fixed by the user. Each vanishing point is determined from a set of parallel lines. These vanishing points help determine a 3D world coordinate system R o. After having computed the focal length, the rotation matrix and the translation vector are evaluated in turn for describing the rigid motion between R o and the camera coordinate system R c. Next, the reconstruction step consists in placing, rotating, scaling, and translating a rectangular 3D box that must fit at best with the potential objects within the scene as seen through the single image. With each face of a rectangular box, a texture that may contain holes due to invisible parts of certain objects is assigned. We show how the textures are extracted and how these holes are located and filled. Our method has been applied to various real images (pictures scanned from books, photographs) and synthetic images.",
"title": ""
},
{
"docid": "611eacd767f1ea709c1c4aca7acdfcdb",
"text": "This paper presents a bi-directional converter applied in electric bike. The main structure is a cascade buck-boost converter, which transfers the energy stored in battery for driving motor, and can recycle the energy resulted from the back electromotive force (BEMF) to charge battery by changing the operation mode. Moreover, the proposed converter can also serve as a charger by connecting with AC line directly. Besides, the single-chip DSP TMS320F2812 is adopted as a control core to manage the switching behaviors of each mode and to detect the battery capacity. In this paper, the equivalent models of each mode and complete design considerations are all detailed. All the experimental results are used to demonstrate the feasibility.",
"title": ""
},
{
"docid": "cf5f3db56feb7d46c4806be434f6a665",
"text": "Computational propaganda has recently exploded into public consciousness. The U.S. presidential campaign of 2016 was marred by evidence, which continues to emerge, of targeted political propaganda and the use of bots to distribute political messages on social media. This computational propaganda is both a social and technical phenomenon. Technical knowledge is necessary to work with the massive databases used for audience targeting; it is necessary to create the bots and algorithms that distribute propaganda; it is necessary to monitor and evaluate the results of these efforts in agile campaigning. Thus, a technical knowledge comparable to those who create and distribute this propaganda is necessary to investigate the phenomenon. However, viewing computational propaganda only from a technical perspective—as a set of variables, models, codes, and algorithms—plays into the hands of those who create it, the platforms that serve it, and the firms that profit from it. The very act of making something technical and impartial makes it seem inevitable and unbiased. This undermines the opportunities to argue for change in the social value and meaning of this content and the structures in which it exists. Bigdata research is necessary to understand the sociotechnical issue of computational propaganda and the influence of technology in politics. However, big data researchers must maintain a critical stance toward the data being used and analyzed so as to ensure that we are critiquing as we go about describing, predicting, or recommending changes. If research studies of computational propaganda and political big data do not engage with the forms of power and knowledge that produce it, then the very possibility for improving the role of social-media platforms in public life evaporates. Definitionally, computational propaganda has two important parts: the technical and the social. Focusing on the technical, Woolley and Howard define computational propaganda as the assemblage of social-media platforms, autonomous agents, and big data tasked with the manipulation of public opinion. In contrast, the social definition of computational propaganda derives from the definition of propaganda—communications that deliberately misrepresent symbols, appealing to emotions and prejudices and bypassing rational thought, to achieve a specific goal of its creators—with computational propaganda understood as propaganda created or disseminated using computational (technical) means. Propaganda has a long history. Scholars who study propaganda as an offline or historical phenomenon have long been split over whether the existence of propaganda is necessarily detrimental to the functioning of democracies. However, the rise of the Internet and, in particular, social media has profoundly changed the landscape of propaganda. It has opened the creation and dissemination of propaganda messages, which were once the province of states and large institutions, to a wide variety of individuals and groups. It has allowed cross-border computational propaganda and interference in domestic political processes by foreign states. The anonymity of the Internet has allowed stateproduced propaganda to be presented as if it were not produced by state actors. The Internet has also provided new affordances for the efficient dissemination of propaganda, through the manipulation of the algorithms and processes that govern online information and through audience targeting based on big data analytics. 
The social effects of the changing nature of propaganda are only just beginning to be understood, and the advancement of this understanding is complicated by the unprecedented marrying of the social and the technical that the Internet age has enabled. The articles in this special issue showcase the state of the art in the use of big data in the study of computational propaganda and the influence of social media on politics. This rapidly emerging field represents a new clash of the highly social and highly technical in both",
"title": ""
},
{
"docid": "99a4fc6540802ff820fef9ca312cdc1c",
"text": "Problem diagnosis is one crucial aspect in the cloud operation that is becoming increasingly challenging. On the one hand, the volume of logs generated in today's cloud is overwhelmingly large. On the other hand, cloud architecture becomes more distributed and complex, which makes it more difficult to troubleshoot failures. In order to address these challenges, we have developed a tool, called LOGAN, that enables operators to quickly identify the log entries that potentially lead to the root cause of a problem. It constructs behavioral reference models from logs that represent the normal patterns. When problem occurs, our tool enables operators to inspect the divergence of current logs from the reference model and highlight logs likely to contain the hints to the root cause. To support these capabilities we have designed and developed several mechanisms. First, we developed log correlation algorithms using various IDs embedded in logs to help identify and isolate log entries that belong to the failed request. Second, we provide efficient log comparison to help understand the differences between different executions. Finally we designed mechanisms to highlight critical log entries that are likely to contain information pertaining to the root cause of the problem. We have implemented the proposed approach in a popular cloud management system, OpenStack, and through case studies, we demonstrate this tool can help operators perform problem diagnosis quickly and effectively.",
"title": ""
},
{
"docid": "6aab23ee181e8db06cc4ca3f7f7367be",
"text": "In their original article, Ericsson, Krampe, and Tesch-Römer (1993) reviewed the evidence concerning the conditions of optimal learning and found that individualized practice with training tasks (selected by a supervising teacher) with a clear performance goal and immediate informative feedback was associated with marked improvement. We found that this type of deliberate practice was prevalent when advanced musicians practice alone and found its accumulated duration related to attained music performance. In contrast, Macnamara, Moreau, and Hambrick's (2016, this issue) main meta-analysis examines the use of the term deliberate practice to refer to a much broader and less defined concept including virtually any type of sport-specific activity, such as group activities, watching games on television, and even play and competitions. Summing up every hour of any type of practice during an individual's career implies that the impact of all types of practice activity on performance is equal-an assumption that I show is inconsistent with the evidence. Future research should collect objective measures of representative performance with a longitudinal description of all the changes in different aspects of the performance so that any proximal conditions of deliberate practice related to effective improvements can be identified and analyzed experimentally.",
"title": ""
},
{
"docid": "29734bed659764e167beac93c81ce0a7",
"text": "Fashion classification encompasses the identification of clothing items in an image. The field has applications in social media, e-commerce, and criminal law. In our work, we focus on four tasks within the fashion classification umbrella: (1) multiclass classification of clothing type; (2) clothing attribute classification; (3) clothing retrieval of nearest neighbors; and (4) clothing object detection. We report accuracy measurements for clothing style classification (50.2%) and clothing attribute classification (74.5%) that outperform baselines in the literature for the associated datasets. We additionally report promising qualitative results for our clothing retrieval and clothing object detection tasks.",
"title": ""
},
{
"docid": "2657bb2a6b2fb59714417aa9e6c6c5eb",
"text": "Mash extends the MinHash dimensionality-reduction technique to include a pairwise mutation distance and P value significance test, enabling the efficient clustering and search of massive sequence collections. Mash reduces large sequences and sequence sets to small, representative sketches, from which global mutation distances can be rapidly estimated. We demonstrate several use cases, including the clustering of all 54,118 NCBI RefSeq genomes in 33 CPU h; real-time database search using assembled or unassembled Illumina, Pacific Biosciences, and Oxford Nanopore data; and the scalable clustering of hundreds of metagenomic samples by composition. Mash is freely released under a BSD license ( https://github.com/marbl/mash ).",
"title": ""
}
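As a rough illustration of the MinHash idea behind Mash described above, the sketch below hashes k-mers, keeps the smallest hash values, estimates the Jaccard index from the merged sketches, and converts it with Mash's mutation-distance formula. The sketch size, k, and the random test sequences are arbitrary choices for illustration; this is not Mash's implementation.

```python
import hashlib
import math
import random

def kmers(seq, k):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def sketch(seq, k, s):
    """MinHash sketch: keep the s smallest 64-bit hashes of the sequence's k-mers."""
    hashes = sorted(int(hashlib.sha1(m.encode()).hexdigest()[:16], 16) for m in kmers(seq, k))
    return hashes[:s]

def jaccard_estimate(sk_a, sk_b, s):
    """Estimate Jaccard similarity from the s smallest values of the merged sketches."""
    set_a, set_b = set(sk_a), set(sk_b)
    merged = sorted(set_a | set_b)[:s]
    shared = sum(1 for h in merged if h in set_a and h in set_b)
    return shared / len(merged)

def mash_distance(j, k):
    """Mash's mutation-distance formula: d = -(1/k) * ln(2j / (1 + j))."""
    if j <= 0:
        return 1.0
    return min(1.0, -math.log(2 * j / (1 + j)) / k)

random.seed(0)
a = "".join(random.choice("ACGT") for _ in range(2000))
b = a[:1500] + "".join(random.choice("ACGT") for _ in range(500))
sa, sb = sketch(a, 16, 200), sketch(b, 16, 200)
j = jaccard_estimate(sa, sb, 200)
print(round(j, 3), round(mash_distance(j, 16), 4))
```

Sorting every hash, as done here, is only for clarity; a real implementation would maintain just the bottom s hashes incrementally.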
] |
scidocsrr
|
f6d8b57317b9b054453e22c65e37e879
|
5G cellular: key enabling technologies and research challenges
|
[
{
"docid": "f84c399ff746a8721640e115fd20745e",
"text": "Self-interference cancellation invalidates a long-held fundamental assumption in wireless network design that radios can only operate in half duplex mode on the same channel. Beyond enabling true in-band full duplex, which effectively doubles spectral efficiency, self-interference cancellation tremendously simplifies spectrum management. Not only does it render entire ecosystems like TD-LTE obsolete, it enables future networks to leverage fragmented spectrum, a pressing global issue that will continue to worsen in 5G networks. Self-interference cancellation offers the potential to complement and sustain the evolution of 5G technologies toward denser heterogeneous networks and can be utilized in wireless communication systems in multiple ways, including increased link capacity, spectrum virtualization, any-division duplexing (ADD), novel relay solutions, and enhanced interference coordination. By virtue of its fundamental nature, self-interference cancellation will have a tremendous impact on 5G networks and beyond.",
"title": ""
}
] |
[
{
"docid": "bfd19a8b2c11c9c3083b358f72314fc5",
"text": "Changes in temperature, precipitation, and other climatic drivers and sea-level rise will affect populations of existing native and non-native aquatic species and the vulnerability of aquatic environments to new invasions. Monitoring surveys provide the foundation for assessing the combined effects of climate change and invasions by providing baseline biotic and environmental conditions, although the utility of a survey depends on whether the results are quantitative or qualitative, and other design considerations. The results from a variety of monitoring programs in the United States are available in integrated biological information systems, although many include only non-native species, not native species. Besides including natives, we suggest these systems could be improved through the development of standardized methods that capture habitat and physiological requirements and link regional and national biological databases into distributed Web portals that allow drawing information from multiple sources. Combining the outputs from these biological information systems with environmental data would allow the development of ecological-niche models that predict the potential distribution or abundance of native and non-native species on the basis of current environmental conditions. Environmental projections from climate models can be used in these niche models to project changes in species distributions or abundances under altered climatic conditions and to identify potential high-risk invaders. There are, however, a number of challenges, such as uncertainties associated with projections from climate and niche models and difficulty in integrating data with different temporal and spatial granularity. Even with these uncertainties, integration of biological and environmental information systems, niche models, and climate projections would improve management of aquatic ecosystems under the dual threats of biotic invasions and climate change.",
"title": ""
},
{
"docid": "f20c08bd1194f8589d6e56e66951a7f8",
"text": "The computational complexity grows exponentially for multi-level thresholding (MT) with the increase of the number of thresholds. Taking Kapur’s entropy as the optimized objective function, the paper puts forward the modified quick artificial bee colony algorithm (MQABC), which employs a new distance strategy for neighborhood searches. The experimental results show that MQABC can search out the optimal thresholds efficiently, precisely, and speedily, and the thresholds are very close to the results examined by exhaustive searches. In comparison to the EMO (Electro-Magnetism optimization), which is based on Kapur’s entropy, the classical ABC algorithm, and MDGWO (modified discrete grey wolf optimizer) respectively, the experimental results demonstrate that MQABC has exciting advantages over the latter three in terms of the running time in image thesholding, while maintaining the efficient segmentation quality.",
"title": ""
},
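The objective that MQABC maximizes, Kapur's entropy for multi-level thresholding, can be written down compactly. The sketch below evaluates it on a synthetic three-mode histogram and finds two thresholds by a coarse brute-force scan standing in for the bee-colony search, purely to show what the objective measures.

```python
import numpy as np

def kapur_entropy(hist, thresholds):
    """Sum of Shannon entropies of the histogram regions cut out by the thresholds."""
    p = hist / hist.sum()
    bounds = [0] + sorted(thresholds) + [len(p)]
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()
        if w <= 0:
            continue
        q = p[lo:hi] / w
        q = q[q > 0]
        total -= (q * np.log(q)).sum()
    return total

# Synthetic three-mode grey-level histogram with 256 bins.
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(60, 10, 4000),
                         rng.normal(128, 12, 4000),
                         rng.normal(200, 8, 4000)])
hist, _ = np.histogram(np.clip(pixels, 0, 255), bins=256, range=(0, 256))
hist = hist.astype(float)

# Coarse brute-force scan over two thresholds, in place of the MQABC search.
best = max(((t1, t2) for t1 in range(10, 250, 5) for t2 in range(t1 + 5, 255, 5)),
           key=lambda ts: kapur_entropy(hist, list(ts)))
print("thresholds maximizing Kapur's entropy:", best)
```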
{
"docid": "80114263a722c25125803c7c8ecebb91",
"text": "features suggest that this patient is an atypical presentation of chemotherapy-induced acral erythema, sparing the classic palmar location. The suggestion for an overlapping spectrum of chemotherapyinduced toxic injury of the skin helps resolve the clinicopathological challenge of this case. Toxic erythema of chemotherapy describes a particular category of toxin-associated diseases, some of which are specific, eg, chemotherapyassociated neutrophilic hidradenitis, and others, such as the eruption presented, defy further classification. Although dermatologists will likely preserve some of their preferred appellations, the field of dermatology will benefit from including toxic erythema of chemotherapy within the conceptual framework of chemotherapy-associated dermatoses.",
"title": ""
},
{
"docid": "5168f7f952d937460d250c44b43f43c0",
"text": "This letter presents the design of a coplanar waveguide (CPW) circularly polarized antenna for the central frequency 900 MHz, it comes in handy for radio frequency identification (RFID) short-range reading applications within the band of 902-928 MHz where the axial ratio of proposed antenna model is less than 3 dB. The proposed design has an axial-ratio bandwidth of 36 MHz (4%) and impedance bandwidth of 256 MHz (28.5%).",
"title": ""
},
{
"docid": "b0e316e2efe4b408985216a33492897b",
"text": "Human activity detection within smart homes is one of the basis of unobtrusive wellness monitoring of a rapidly aging population in developed countries. Most works in this area use the concept of \"activity\" as the building block with which to construct applications such as healthcare monitoring or ambient assisted living. The process of identifying a specific activity encompasses the selection of the appropriate set of sensors, the correct preprocessing of their provided raw data and the learning/reasoning using this information. If the selection of the sensors and the data processing methods are wrongly performed, the whole activity detection process may fail, leading to the consequent failure of the whole application. Related to this, the main contributions of this review are the following: first, we propose a classification of the main activities considered in smart home scenarios which are targeted to older people's independent living, as well as their characterization and formalized context representation; second, we perform a classification of sensors and data processing methods that are suitable for the detection of the aforementioned activities. Our aim is to help researchers and developers in these lower-level technical aspects that are nevertheless fundamental for the success of the complete application.",
"title": ""
},
{
"docid": "2b30506690acbae9240ef867e961bc6c",
"text": "Background Breast milk can turn pink with Serratia marcescens colonization, this bacterium has been associated with several diseases and even death. It is seen most commonly in the intensive care settings. Discoloration of the breast milk can lead to premature termination of nursing. We describe two cases of pink-colored breast milk in which S. marsescens was isolated from both the expressed breast milk. Antimicrobial treatment was administered to the mothers. Return to breastfeeding was successful in both the cases. Conclusions Pink breast milk is caused by S. marsescens colonization. In such cases,early recognition and treatment before the development of infection is recommended to return to breastfeeding.",
"title": ""
},
{
"docid": "b169a813dcaa659555f082911bcc843f",
"text": "Pharmacogenomics studies the impact of genetic variation of patients on drug responses and searches for correlations between gene expression or Single Nucleotide Polymorphisms (SNPs) of patient's genome and the toxicity or efficacy of a drug. SNPs data, produced by microarray platforms, need to be preprocessed and analyzed in order to find correlation between the presence/absence of SNPs and the toxicity or efficacy of a drug. Due to the large number of samples and the high resolution of instruments, the data to be analyzed can be very huge, requiring high performance computing. The paper presents the design and experimentation of Cloud4SNP, a novel Cloud-based bioinformatics tool for the parallel preprocessing and statistical analysis of pharmacogenomics SNP microarray data. Experimental evaluation shows good speed-up and scalability. Moreover, the availability on the Cloud platform allows to face in an elastic way the requirements of small as well as very large pharmacogenomics studies.",
"title": ""
},
{
"docid": "fd0cfef7be75a9aa98229c25ffaea864",
"text": "A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discrimininatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule.",
"title": ""
},
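The squashing non-linearity and the iterative routing-by-agreement loop mentioned in the abstract above can be sketched in a few lines of NumPy. The capsule counts and dimensions are illustrative, and the discriminative training of the transformation matrices is omitted.

```python
import numpy as np

def squash(v, axis=-1, eps=1e-9):
    """Shrink a vector's length into (0, 1) while preserving its orientation."""
    norm2 = (v ** 2).sum(axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * v / np.sqrt(norm2 + eps)

def routing_by_agreement(u_hat, iterations=3):
    """u_hat: predictions from lower-level capsules, shape (n_lower, n_upper, dim)."""
    n_lower, n_upper, _ = u_hat.shape
    b = np.zeros((n_lower, n_upper))                            # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)    # coupling coefficients (softmax)
        s = (c[..., None] * u_hat).sum(axis=0)                  # weighted sum per upper capsule
        v = squash(s)                                           # upper-capsule output vectors
        b += (u_hat * v[None, :, :]).sum(axis=-1)               # agreement increases the logit
    return v

rng = np.random.default_rng(0)
u_hat = rng.normal(size=(1152, 10, 16))   # e.g. 1152 primary capsules routed to 10 class capsules
v = routing_by_agreement(u_hat)
print(np.linalg.norm(v, axis=-1))          # vector lengths act as existence probabilities
```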
{
"docid": "945b8c26961fb3a2329b6356b853b358",
"text": "This paper presents a synteny visualization and analysis tool developed in connection with IMAS - the Interactive Multigenomic Analysis System. This visual analysis tool enables biologists to analyze the relationships among genomes of closely related organisms in terms of the locations of genes and clusters of genes. A biologist starts IMAS with the DNA sequence, uses BLAST to find similar genes in related sequences, and uses these similarity linkages to create an enhanced node-link diagram of syntenic sequences. We refer to this as Spring Synteny visualization, which is aimed at helping a biologist discover similar gene ordering relationships across species. The paper describes the techniques that are used to support synteny visualization, in terms of computation, visual design, and interaction design.",
"title": ""
},
{
"docid": "0ce05b9c26df484fc59366762d31465a",
"text": "This paper presents an algorithm that extracts the tempo of a musical excerpt. The proposed system assumes a constant tempo and deals directly with the audio signal. A sliding window is applied to the signal and two feature classes are extracted. The first class is the log-energy of each band of a mel-scale triangular filterbank, a common feature vector used in various MIR applications. For the second class, a novel feature for the tempo induction task is presented; the strengths of the twelve western musical tones at all octaves are calculated for each audio frame, in a similar fashion with Pitch Class Profile. The timeevolving feature vectors are convolved with a bank of resonators, each resonator corresponding to a target tempo. Then the results of each feature class are combined to give the final output. The algorithm was evaluated on the popular ISMIR 2004 Tempo Induction Evaluation Exchange Dataset. Results demonstrate that the superposition of the different types of features enhance the performance of the algorithm, which is in the current state-of-the-art algorithms of the tempo induction task.",
"title": ""
},
{
"docid": "712be4d6aabf8e76b050c30e6241ad0f",
"text": "The United States, like many nations, continues to experience rapid growth in its racial minority population and is projected to attain so-called majority-minority status by 2050. Along with these demographic changes, staggering racial disparities persist in health, wealth, and overall well-being. In this article, we review the social psychological literature on race and race relations, beginning with the seemingly simple question: What is race? Drawing on research from different fields, we forward a model of race as dynamic, malleable, and socially constructed, shifting across time, place, perceiver, and target. We then use classic theoretical perspectives on intergroup relations to frame and then consider new questions regarding contemporary racial dynamics. We next consider research on racial diversity, focusing on its effects during interpersonal encounters and for groups. We close by highlighting emerging topics that should top the research agenda for the social psychology of race and race relations in the twenty-first century.",
"title": ""
},
{
"docid": "8a538c63adfd618d8967f736d8c59761",
"text": "Skyline queries ask for a set of interesting points from a potentially large set of data points. If we are traveling, for instance, a restaurant might be interesting if there is no other restaurant which is nearer, cheaper, and has better food. Skyline queries retrieve all such interesting restaurants so that the user can choose the most promising one. In this paper, we present a new online algorithm that computes the Skyline. Unlike most existing algorithms that compute the Skyline in a batch, this algorithm returns the first results immediately, produces more and more results continuously, and allows the user to give preferences during the running time of the algorithm so that the user can control what kind of results are produced next (e.g., rather cheap or rather near restaurants).",
"title": ""
},
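A minimal sketch of the dominance test that underlies skyline queries follows (here, smaller is better in every dimension). The online, user-steerable behaviour described in the abstract above is not reproduced; this only shows which restaurants survive as "interesting".

```python
def dominates(a, b):
    """a dominates b if it is no worse in every dimension and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(points):
    result = []
    for p in points:
        if any(dominates(q, p) for q in result):
            continue                                       # p is dominated, discard it
        result = [q for q in result if not dominates(p, q)] + [p]
    return result

# (price, distance) of restaurants: keep the ones not beaten on both criteria at once.
restaurants = [(30, 2.0), (10, 5.0), (25, 1.0), (40, 0.5), (12, 4.5), (28, 1.2)]
print(skyline(restaurants))   # [(10, 5.0), (25, 1.0), (40, 0.5), (12, 4.5)]
```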
{
"docid": "0ce5f897c55f40451878e37a4da1c91c",
"text": "The analysis of drainage morphometry is usually a prerequisite to the assessment of hydrological characteristics of surface water basin. In this study, the western region of the Arabian Peninsula was selected for detailed morphometric analysis. In this region, there are a large number of drainage systems that are originated from the mountain chains of the Arabian Shield to the east and outlet into the Red Sea. As a typical type of these drainage systems, the morphometry of Wadi Aurnah was analyzed. The study performed manual and computerized delineation and drainage sampling, which enables applying detailed morphological measures. Topographic maps in combination with remotely sensed data, (i.e. different types of satellite images) were utilized to delineate the existing drainage system, thus to identify precisely water divides. This was achieved using Geographic Information System (GIS) to provide computerized data that can be manipulated for different calculations and hydrological measures. The obtained morhpometric analysis in this study tackled: 1) stream behavior, 2) morphometric setting of streams within the drainage system and 3) interrelation between connected streams. The study introduces an imperial approach of morphometric analysis that can be utilized in different hydrological assessments (e.g., surface water harvesting, flood mitigation, etc). As well as, the applied analysis using remote sensing and GIS can be followed in the rest drainage systems of the Western Arabian Peninsula.",
"title": ""
},
{
"docid": "0251f38f48c470e2e04fb14fc7ba34b2",
"text": "The fast development of Internet of Things (IoT) and cyber-physical systems (CPS) has triggered a large demand of smart devices which are loaded with sensors collecting information from their surroundings, processing it and relaying it to remote locations for further analysis. The wide deployment of IoT devices and the pressure of time to market of device development have raised security and privacy concerns. In order to help better understand the security vulnerabilities of existing IoT devices and promote the development of low-cost IoT security methods, in this paper, we use both commercial and industrial IoT devices as examples from which the security of hardware, software, and networks are analyzed and backdoors are identified. A detailed security analysis procedure will be elaborated on a home automation system and a smart meter proving that security vulnerabilities are a common problem for most devices. Security solutions and mitigation methods will also be discussed to help IoT manufacturers secure their products.",
"title": ""
},
{
"docid": "91e4994a20bb3b48ef3d70c3affa5c0c",
"text": "In this paper, we address the challenging task of simultaneous recognition of overlapping sound events from single channel audio. Conventional frame-based methods aren’t well suited to the problem, as each time frame contains a mixture of information from multiple sources. Missing feature masks are able to improve the recognition in such cases, but are limited by the accuracy of the mask, which is a non-trivial problem. In this paper, we propose an approach based on Local Spectrogram Features (LSFs) which represent local spectral information that is extracted from the two-dimensional region surrounding “keypoints” detected in the spectrogram. The keypoints are designed to locate the sparse, discriminative peaks in the spectrogram, such that we can model sound events through a set of representative LSF clusters and their occurrences in the spectrogram. To recognise overlapping sound events, we use a Generalised Hough Transform (GHT) voting system, which sums the information over many independent keypoints to produce onset hypotheses, that can detect any arbitrary combination of sound events in the spectrogram. Each hypothesis is then scored against the class distribution models to recognise the existence of the sound in the spectrogram. Experiments on a set of five overlapping sound events, in the presence of non-stationary background noise, demonstrates the potential of our approach.",
"title": ""
},
{
"docid": "1dc0d5c7dbc0ae85a424b17e463bd7a4",
"text": "Plasma protein binding (PPB) strongly affects drug distribution and pharmacokinetic behavior with consequences in overall pharmacological action. Extended plasma protein binding may be associated with drug safety issues and several adverse effects, like low clearance, low brain penetration, drug-drug interactions, loss of efficacy, while influencing the fate of enantiomers and diastereoisomers by stereoselective binding within the body. Therefore in holistic drug design approaches, where ADME(T) properties are considered in parallel with target affinity, considerable efforts are focused in early estimation of PPB mainly in regard to human serum albumin (HSA), which is the most abundant and most important plasma protein. The second critical serum protein α1-acid glycoprotein (AGP), although often underscored, plays also an important and complicated role in clinical therapy and thus the last years it has been studied thoroughly too. In the present review, after an overview of the principles of HSA and AGP binding as well as the structure topology of the proteins, the current trends and perspectives in the field of PPB predictions are presented and discussed considering both HSA and AGP binding. Since however for the latter protein systematic studies have started only the last years, the review focuses mainly to HSA. One part of the review highlights the challenge to develop rapid techniques for HSA and AGP binding simulation and their performance in assessment of PPB. The second part focuses on in silico approaches to predict HSA and AGP binding, analyzing and evaluating structure-based and ligand-based methods, as well as combination of both methods in the aim to exploit the different information and overcome the limitations of each individual approach. Ligand-based methods use the Quantitative Structure-Activity Relationships (QSAR) methodology to establish quantitate models for the prediction of binding constants from molecular descriptors, while they provide only indirect information on binding mechanism. Efforts for the establishment of global models, automated workflows and web-based platforms for PPB predictions are presented and discussed. Structure-based methods relying on the crystal structures of drug-protein complexes provide detailed information on the underlying mechanism but are usually restricted to specific compounds. They are useful to identify the specific binding site while they may be important in investigating drug-drug interactions, related to PPB. Moreover, chemometrics or structure-based modeling may be supported by experimental data a promising integrated alternative strategy for ADME(T) properties optimization. In the case of PPB the use of molecular modeling combined with bioanalytical techniques is frequently used for the investigation of AGP binding.",
"title": ""
},
{
"docid": "604b46c973be0a277faa96a407dc845f",
"text": "A nonlinear dynamic model for a quadrotor unmanned aerial vehicle is presented with a new vision of state parameter control which is based on Euler angles and open loop positions state observer. This method emphasizes on the control of roll, pitch and yaw angle rather than the translational motions of the UAV. For this reason the system has been presented into two cascade partial parts, the first one relates the rotational motion whose the control law is applied in a closed loop form and the other one reflects the translational motion. A dynamic feedback controller is developed to transform the closed loop part of the system into linear, controllable and decoupled subsystem. The wind parameters estimation of the quadrotor is used to avoid more sensors. Hence an estimator of resulting aerodynamic moments via Lyapunov function is developed. Performance and robustness of the proposed controller are tested in simulation.",
"title": ""
},
{
"docid": "221970fad528f2538930556dde7a0062",
"text": "The recent explosive growth in convolutional neural network (CNN) research has produced a variety of new architectures for deep learning. One intriguing new architecture is the bilinear CNN (B-CNN), which has shown dramatic performance gains on certain fine-grained recognition problems [15]. We apply this new CNN to the challenging new face recognition benchmark, the IARPA Janus Benchmark A (IJB-A) [12]. It features faces from a large number of identities in challenging real-world conditions. Because the face images were not identified automatically using a computerized face detection system, it does not have the bias inherent in such a database. We demonstrate the performance of the B-CNN model beginning from an AlexNet-style network pre-trained on ImageNet. We then show results for fine-tuning using a moderate-sized and public external database, FaceScrub [17]. We also present results with additional fine-tuning on the limited training data provided by the protocol. In each case, the fine-tuned bilinear model shows substantial improvements over the standard CNN. Finally, we demonstrate how a standard CNN pre-trained on a large face database, the recently released VGG-Face model [20], can be converted into a B-CNN without any additional feature training. This B-CNN improves upon the CNN performance on the IJB-A benchmark, achieving 89.5% rank-1 recall.",
"title": ""
},
{
"docid": "c3ae2b20405aa932bb5ada3874cdd29c",
"text": "In this letter, a novel compact quadrature hybrid using low-pass and high-pass lumped elements is proposed. This proposed topology enables significant circuit size reduction in comparison with former approaches applying microstrip branch line or Lange couplers. In addition, it provides wider bandwidth in terms of operational frequency, and provides more convenience to the monolithic microwave integrated circuit layout since it does not have any bulky via holes as compared to those with lumped elements that have been published. In addition, the simulation and measurement of the fabricated hybrid implemented using PHEMT processes are evidently good. With the operational bandwidth ranging from 25 to 30 GHz, the measured results of the return loss are better than 17.6 dB, and the insertion losses of coupled and direct ports are approximately 3.4plusmn0.7 dB, while the relative phase difference is approximately 92.3plusmn1.4deg. The core dimension of the circuit is 0.4 mm times 0.15 mm.",
"title": ""
},
{
"docid": "be1b9731df45408571e75d1add5dfe9c",
"text": "We investigate a new commonsense inference task: given an event described in a short free-form text (“X drinks coffee in the morning”), a system reasons about the likely intents (“X wants to stay awake”) and reactions (“X feels alert”) of the event’s participants. To support this study, we construct a new crowdsourced corpus of 25,000 event phrases covering a diverse range of everyday events and situations. We report baseline performance on this task, demonstrating that neural encoder-decoder models can successfully compose embedding representations of previously unseen events and reason about the likely intents and reactions of the event participants. In addition, we demonstrate how commonsense inference on people’s intents and reactions can help unveil the implicit gender inequality prevalent in modern movie scripts.",
"title": ""
}
] |
scidocsrr
|
55d3c195e80b46d3de5dff8f2a53b16c
|
Attributing Fake Images to GANs: Analyzing Fingerprints in Generated Images
|
[
{
"docid": "0ff96a055763aa3af122c42723b7c140",
"text": "In this paper, we propose the Self-Attention Generative Adversarial Network (SAGAN) which allows attention-driven, long-range dependency modeling for image generation tasks. Traditional convolutional GANs generate high-resolution details as a function of only spatially local points in lower-resolution feature maps. In SAGAN, details can be generated using cues from all feature locations. Moreover, the discriminator can check that highly detailed features in distant portions of the image are consistent with each other. Furthermore, recent work has shown that generator conditioning affects GAN performance. Leveraging this insight, we apply spectral normalization to the GAN generator and find that this improves training dynamics. The proposed SAGAN achieves the state-ofthe-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing Fréchet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset. Visualization of the attention layers shows that the generator leverages neighborhoods that correspond to object shapes rather than local regions of fixed shape.",
"title": ""
}
] |
[
{
"docid": "d65e4b79ae580d3b8572c1746357f854",
"text": "We present a large-scale object detection system by team PFDet. Our system enables training with huge datasets using 512 GPUs, handles sparsely verified classes, and massive class imbalance. Using our method, we achieved 2nd place in the Google AI Open Images Object Detection Track 2018 on Kaggle. 1",
"title": ""
},
{
"docid": "31da7acfb9d98421bbf7e70a508ba5df",
"text": "Habronema muscae (Spirurida: Habronematidae) occurs in the stomach of equids, is transmitted by adult muscid dipterans and causes gastric habronemiasis. Scanning electron microscopy (SEM) was used to study the morphological aspects of adult worms of this nematode in detail. The worms possess two trilobed lateral lips. The buccal cavity was cylindrical, with thick walls and without teeth. Around the mouth, four submedian cephalic papillae and two amphids were seen. A pair of lateral cervical papillae was present. There was a single lateral ala and in the female the vulva was situated in the middle of the body. In the male, there were wide caudal alae, and the spicules were unequal and dissimilar. At the posterior end of the male, four pairs of stalked precloacal papillae, unpaired post-cloacal papillae and a cluster of small papillae were present. In one case, the anterior end showed abnormal features.",
"title": ""
},
{
"docid": "c6d84be944630cec1b19d84db2ace2ee",
"text": "This paper describes an effort to model a student’s changing knowledge state during skill acquisition. Dynamic Bayes Nets (DBNs) provide a powerful way to represent and reason about uncertainty in time series data, and are therefore well-suited to model student knowledge. Many general-purpose Bayes net packages have been implemented and distributed; however, constructing DBNs often involves complicated coding effort. To address this problem, we introduce a tool called BNTSM. BNT-SM inputs a data set and a compact XML specification of a Bayes net model hypothesized by a researcher to describe causal relationships among student knowledge and observed behavior. BNT-SM generates and executes the code to train and test the model using the Bayes Net Toolbox [1]. Compared to the BNT code it outputs, BNT-SM reduces the number of lines of code required to use a DBN by a factor of 5. In addition to supporting more flexible models, we illustrate how to use BNT-SM to simulate Knowledge Tracing (KT) [2], an established technique for student modeling. The trained DBN does a better job of modeling and predicting student performance than the original KT code (Area Under Curve = 0.610 > 0.568), due to differences in how it estimates parameters.",
"title": ""
},
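For context on the Knowledge Tracing model that BNT-SM is used to simulate, the classic two-state update can be written directly; the guess, slip, and learning probabilities below are arbitrary illustrative values, not parameters fitted by BNT-SM.

```python
def kt_update(p_know, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    """One Knowledge Tracing step: Bayesian posterior on 'knows the skill', then a learning transition."""
    if correct:
        evidence = p_know * (1 - p_slip) + (1 - p_know) * p_guess
        posterior = p_know * (1 - p_slip) / evidence
    else:
        evidence = p_know * p_slip + (1 - p_know) * (1 - p_guess)
        posterior = p_know * p_slip / evidence
    return posterior + (1 - posterior) * p_learn

p = 0.3                       # prior probability that the student already knows the skill
for outcome in [1, 1, 0, 1]:  # observed correctness on four practice opportunities
    p = kt_update(p, outcome)
    print(round(p, 3))        # the estimate rises with correct answers and drops after the error
```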
{
"docid": "a13a302e7e2fd5e09a054f1bf23f1702",
"text": "A number of machine learning (ML) techniques have recently been proposed to solve color constancy problem in computer vision. Neural networks (NNs) and support vector regression (SVR) in particular, have been shown to outperform many traditional color constancy algorithms. However, neither neural networks nor SVR were compared to simpler regression tools in those studies. In this article, we present results obtained with a linear technique known as ridge regression (RR) and show that it performs better than NNs, SVR, and gray world (GW) algorithm on the same dataset. We also perform uncertainty analysis for NNs, SVR, and RR using bootstrapping and show that ridge regression and SVR are more consistent than neural networks. The shorter training time and single parameter optimization of the proposed approach provides a potential scope for real time video tracking application.",
"title": ""
},
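A minimal closed-form ridge regression of the kind compared against neural networks and SVR in the abstract above is sketched below. The features and targets are synthetic stand-ins; the paper's illuminant-estimation setup is not reproduced.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge solution: w = (X^T X + lam * I)^(-1) X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                        # stand-in image colour statistics
true_w = rng.normal(size=(8, 3))
y = X @ true_w + 0.05 * rng.normal(size=(200, 3))    # stand-in RGB illuminant targets

w = ridge_fit(X, y, lam=0.5)
print(np.abs(w - true_w).max())                      # small recovery error on this synthetic data
```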
{
"docid": "50e081b178a1a308c61aae4a29789816",
"text": "The ability to engineer enzymes and other proteins to any desired stability would have wide-ranging applications. Here, we demonstrate that computational design of a library with chemically diverse stabilizing mutations allows the engineering of drastically stabilized and fully functional variants of the mesostable enzyme limonene epoxide hydrolase. First, point mutations were selected if they significantly improved the predicted free energy of protein folding. Disulfide bonds were designed using sampling of backbone conformational space, which tripled the number of experimentally stabilizing disulfide bridges. Next, orthogonal in silico screening steps were used to remove chemically unreasonable mutations and mutations that are predicted to increase protein flexibility. The resulting library of 64 variants was experimentally screened, which revealed 21 (pairs of) stabilizing mutations located both in relatively rigid and in flexible areas of the enzyme. Finally, combining 10-12 of these confirmed mutations resulted in multi-site mutants with an increase in apparent melting temperature from 50 to 85°C, enhanced catalytic activity, preserved regioselectivity and a >250-fold longer half-life. The developed Framework for Rapid Enzyme Stabilization by Computational libraries (FRESCO) requires far less screening than conventional directed evolution.",
"title": ""
},
{
"docid": "4f527bddf622c901a7894ce7cc381ee1",
"text": "Most popular programming languages support situations where a value of one type is converted into a value of another type without any explicit cast. Such implicit type conversions, or type coercions, are a highly controversial language feature. Proponents argue that type coercions enable writing concise code. Opponents argue that type coercions are error-prone and that they reduce the understandability of programs. This paper studies the use of type coercions in JavaScript, a language notorious for its widespread use of coercions. We dynamically analyze hundreds of programs, including real-world web applications and popular benchmark programs. We find that coercions are widely used (in 80.42% of all function executions) and that most coercions are likely to be harmless (98.85%). Furthermore, we identify a set of rarely occurring and potentially harmful coercions that safer subsets of JavaScript or future language designs may want to disallow. Our results suggest that type coercions are significantly less evil than commonly assumed and that analyses targeted at real-world JavaScript programs must consider coercions. 1998 ACM Subject Classification D.3.3 Language Constructs and Features, F.3.2 Semantics of Programming Languages, D.2.8 Metrics",
"title": ""
},
{
"docid": "a4154317f6bb6af635edb1b2ef012d09",
"text": "The pulp industry in Taiwan discharges tons of wood waste and pulp sludge (i.e., wastewater-derived secondary sludge) per year. The mixture of these two bio-wastes, denoted as wood waste with pulp sludge (WPS), has been commonly converted to organic fertilizers for agriculture application or to soil conditioners. However, due to energy demand, the WPS can be utilized in a beneficial way to mitigate an energy shortage. This study elucidated the performance of applying torrefaction, a bio-waste to energy method, to transform the WPS into solid bio-fuel. Two batches of the tested WPS (i.e., WPS1 and WPS2) were generated from a virgin pulp factory in eastern Taiwan. The WPS1 and WPS2 samples contained a large amount of organics and had high heating values (HHV) on a dry-basis (HHD) of 18.30 and 15.72 MJ/kg, respectively, exhibiting a potential for their use as a solid bio-fuel. However, the wet WPS as received bears high water and volatile matter content and required de-watering, drying, and upgrading. After a 20 min torrefaction time (tT), the HHD of torrefied WPS1 (WPST1) can be enhanced to 27.49 MJ/kg at a torrefaction temperature (TT) of 573 K, while that of torrefied WPS2 (WPST2) increased to 19.74 MJ/kg at a TT of 593 K. The corresponding values of the energy densification ratio of torrefied solid bio-fuels of WPST1 and WPST2 can respectively rise to 1.50 and 1.25 times that of the raw bio-waste. The HHD of WPST1 of 27.49 MJ/kg is within the range of 24–35 MJ/kg for bituminous coal. In addition, the wet-basis HHV of WPST1 with an equilibrium moisture content of 5.91 wt % is 25.87 MJ/kg, which satisfies the Quality D coal specification of the Taiwan Power Co., requiring a value of above 20.92 MJ/kg.",
"title": ""
},
{
"docid": "54b11a906e212a34320d6bbed2cac0fc",
"text": "PURPOSE\nThis study aimed to compare strategies for assessing nutritional adequacy in the dietary intake of elite female athletes.\n\n\nMETHODS\nDietary intake was assessed using an adapted food-frequency questionnaire in 72 elite female athletes from a variety of sports. Nutritional adequacy was evaluated and compared using mean intake; the proportion of participants with intakes below Australian nutrient reference values (NRV), U.S. military dietary reference intakes (MDRI), and current sports nutrition recommendations; and probability estimates of nutrient inadequacy.\n\n\nRESULTS\nMean energy intake was 10,551 +/- 3,836 kJ/day with macronutrient distribution 18% protein, 31% fat, and 46% carbohydrate, consistent with Australian acceptable macronutrient distribution ranges. Mean protein intake (1.6 g . kg(-1) . d(-1)) was consistent with (>1.2 g . kg(-1) . d(-1)), and carbohydrate intake (4.5 g . kg(-1) . d(-1)), below, current sports nutrition recommendations (>5 g . kg(-1) . d(-1)), with 30% and 65% of individuals not meeting these levels, respectively. Mean micronutrient intake met the relevant NRV and MDRI except for vitamin D and folate. A proportion of participants failed to meet the estimated average requirement for folate (48%), calcium (24%), magnesium (19%), and iron (4%). Probability estimates of inadequacy identified intake of folate (44%), calcium (22%), iron (19%), and magnesium (15%) as inadequate.\n\n\nCONCLUSION\nInterpretation of dietary adequacy is complex and varies depending on whether the mean, proportion of participants below the relevant NRV, or statistical probability estimate of inadequacy is used. Further research on methods to determine dietary adequacy in athlete populations is required.",
"title": ""
},
{
"docid": "dbbd98ed1a7ee32ab9626a923925c45d",
"text": "In this paper, we present the gated selfmatching networks for reading comprehension style question answering, which aims to answer questions from a given passage. We first match the question and passage with gated attention-based recurrent networks to obtain the question-aware passage representation. Then we propose a self-matching attention mechanism to refine the representation by matching the passage against itself, which effectively encodes information from the whole passage. We finally employ the pointer networks to locate the positions of answers from the passages. We conduct extensive experiments on the SQuAD dataset. The single model achieves 71.3% on the evaluation metrics of exact match on the hidden test set, while the ensemble model further boosts the results to 75.9%. At the time of submission of the paper, our model holds the first place on the SQuAD leaderboard for both single and ensemble model.",
"title": ""
},
{
"docid": "1d8cd32e2a2748b9abd53cf32169d798",
"text": "Optimizing the weights of Artificial Neural Networks (ANNs) is a great important of a complex task in the research of machine learning due to dependence of its performance to the success of learning process and the training method. This paper reviews the implementation of meta-heuristic algorithms in ANNs’ weight optimization by studying their advantages and disadvantages giving consideration to some meta-heuristic members such as Genetic algorithim, Particle Swarm Optimization and recently introduced meta-heuristic algorithm called Harmony Search Algorithm (HSA). Also, the application of local search based algorithms to optimize the ANNs weights and their benefits as well as their limitations are briefly elaborated. Finally, a comparison between local search methods and global optimization methods is carried out to speculate the trends in the progresses of ANNs’ weight optimization in the current resrearch.",
"title": ""
},
{
"docid": "7c30377fffb3c154240f330db8b8756f",
"text": "Geneticists and breeders are positioned to breed plants with root traits that improve productivity under drought. However, a better understanding of root functional traits and how traits are related to whole plant strategies to increase crop productivity under different drought conditions is needed. Root traits associated with maintaining plant productivity under drought include small fine root diameters, long specific root length, and considerable root length density, especially at depths in soil with available water. In environments with late season water deficits, small xylem diameters in targeted seminal roots save soil water deep in the soil profile for use during crop maturation and result in improved yields. Capacity for deep root growth and large xylem diameters in deep roots may also improve root acquisition of water when ample water at depth is available. Xylem pit anatomy that makes xylem less \"leaky\" and prone to cavitation warrants further exploration holding promise that such traits may improve plant productivity in water-limited environments without negatively impacting yield under adequate water conditions. Rapid resumption of root growth following soil rewetting may improve plant productivity under episodic drought. Genetic control of many of these traits through breeding appears feasible. Several recent reviews have covered methods for screening root traits but an appreciation for the complexity of root systems (e.g., functional differences between fine and coarse roots) needs to be paired with these methods to successfully identify relevant traits for crop improvement. Screening of root traits at early stages in plant development can proxy traits at mature stages but verification is needed on a case by case basis that traits are linked to increased crop productivity under drought. Examples in lesquerella (Physaria) and rice (Oryza) show approaches to phenotyping of root traits and current understanding of root trait genetics for breeding.",
"title": ""
},
{
"docid": "7f9b7f50432d04968a1fb62855481eda",
"text": "BACKGROUND/PURPOSE\nAccurate prenatal diagnosis of complex anatomic connections and associated anomalies has only been possible recently with the use of ultrasonography, echocardiography, and fetal magnetic resonance imaging (MRI). To assess the impact of improved antenatal diagnosis in the management and outcome of conjoined twins, the authors reviewed their experience with 14 cases.\n\n\nMETHODS\nA retrospective review of prenatally diagnosed conjoined twins referred to our institution from 1996 to present was conducted.\n\n\nRESULTS\nIn 14 sets of conjoined twins, there were 10 thoracoomphalopagus, 2 dicephalus tribrachius dipus, 1 ischiopagus, and 1 ischioomphalopagus. The earliest age at diagnosis was 9 weeks' gestation (range, 9 to 29; mean, 20). Prenatal imaging with ultrasonography, echocardiography, and ultrafast fetal MRI accurately defined the shared anatomy in all cases. Associated anomalies included cardiac malformations (11 of 14), congenital diaphragmatic hernia (4 of 14), abdominal wall defects (2 of 14), and imperforate anus (2 of 14). Three sets of twins underwent therapeutic abortion, 1 set of twins died in utero, and 10 were delivered via cesarean section at a mean gestational age of 34 weeks. There were 5 individual survivors in the series after separation (18%). In one case, in which a twin with a normal heart perfused the cotwin with a rudimentary heart, the ex utero intrapartum treatment procedure (EXIT) was utilized because of concern that the normal twin would suffer immediate cardiac decompensation at birth. This EXIT-to-separation strategy allowed prompt control of the airway and circulation before clamping the umbilical cord and optimized control over a potentially emergent situation, leading to survival of the normal cotwin. In 2 sets of twins in which each twin had a normal heart, tissue expanders were inserted before separation.\n\n\nCONCLUSIONS\nAdvances in prenatal diagnosis allow detailed, accurate evaluations of conjoined twins. Careful prenatal studies may uncover cases in which emergent separation at birth is lifesaving.",
"title": ""
},
{
"docid": "762559c49626834fadb0256e1d9365bc",
"text": "NB-IoT is the 3GPP standard for machine-tomachine communications, recently finalized within LTE release 13. This article gives a brief overview about this new LTE-based radio access technology and presents a implementation developed using the srsLTE software radio suite. We also carry out a performance study in which we compare a theoretical analysis with experimental results obtained in our testbed. Furthermore, we provide some interesting details and share our experience in exploring one of the worldwide first commercial NB-IoT deployments. Keywords—NB-IoT, LTE, Software Defined Radio, srsLTE",
"title": ""
},
{
"docid": "2e94ce662010f06c57891b3be62a0fad",
"text": "This paper discusses conceptual frameworks for actively involving highly distributed loads in power system control actions. The context for load control is established by providing an overview of system control objectives, including economic dispatch, automatic generation control, and spinning reserve. The paper then reviews existing initiatives that seek to develop load control programs for the provision of power system services. We then discuss some of the challenges to achieving a load control scheme that balances device-level objectives with power system-level objectives. One of the central premises of the paper is that, in order to achieve full responsiveness, direct load control (as opposed to price response) is required to enable fast time scale, predictable control opportunities, especially for the provision of ancillary services such as regulation and contingency reserves. Centralized, hierarchical, and distributed control architectures are discussed along with benefits and disadvantages, especially in relation to integration with the legacy power system control architecture. Implications for the supporting communications infrastructure are also considered. Fully responsive load control is illustrated in the context of thermostatically controlled loads and plug-in electric vehicles.",
"title": ""
},
{
"docid": "3bf5eaa6400ae63000a1d100114fe8fd",
"text": "In Fig. 4e of this Article, the labels for ‘Control’ and ‘HFD’ were reversed (‘Control’ should have been labelled blue rather than purple, and ‘HFD’ should have been labelled purple rather than blue). Similarly, in Fig. 4f of this Article, the labels for ‘V’ and ‘GW’ were reversed (‘V’ should have been labelled blue rather than purple, and ‘GW’ should have been labelled purple instead of blue). The original figure has been corrected online.",
"title": ""
},
{
"docid": "1cf3ee00f638ca44a3b9772a2df60585",
"text": "Navigation has been a popular area of research in both academia and industry. Combined with maps, and different localization technologies, navigation systems have become robust and more usable. By combining navigation with augmented reality, it can be improved further to become realistic and user friendly. This paper surveys existing researches carried out in this area, describes existing techniques for building augmented reality navigation systems, and the problems faced.",
"title": ""
},
{
"docid": "3c4aaea63fd829828c75b85509cceac8",
"text": "When maintaining equilibrium in upright stance, humans use sensory feedback control to cope with unforeseen external disturbances such as support surface motion, this despite biological 'complications' such as noisy and inaccurate sensor signals and considerable neural, motor, and processing time delays. The control method they use apparently differs from established methods one normally finds in technical fields. System identification recently led us design a control model that we currently test in our laboratory. The tests include hardware-in-the-loop simulations after the model's embodiment into a robot. The model is called disturbance estimation and compensation (DEC) model. Disturbance estimation is performed by on-line multisensory interactions using joint angle, joint torque, and vestibular cues. For disturbance compensation, the method of direct disturbance rejection is used (\" Störgrös-senaufschaltung \"). So far, biomechanics of a single inverted pendulum (SIP) were applied. Here we extend the DEC concept to the control of a double inverted pendulum (DIP; moving links: trunk on hip joint and legs on ankle joints). The aim is that the model copes in addition with inter-link torques and still describes human experimental data. As concerns the inter-link torque arising during leg motion in the hip joint (support base of the outer link, the trunk), it is already covered by the DEC concept we so far used for the SIP. The inter-link torque arising from trunk motion in the ankle joint is largely neutralized by the concept's whole-body COM control through the ankle joint (due to the fact that body geometry and thus COM location changes with the inter-link motion). Experimentally, we applied pseudorandom support surface tilt stimuli in the sagittal plane to healthy human subjects who were standing with eyes closed on a motion platform (frequency range, 0.16 – 2.2 Hz). Angular excursions of trunk, leg, and whole-body COM (center of mass) with respect to the space vertical as well as COP (center of pressure) shifts were recorded and analyzed. The human data was compared to corresponding model and robot simulation data. The human findings were well described by the model and robot simulations. This shows that the DIP biomechanics of human reactive stance can be controlled using a purely sensor-based control.",
"title": ""
},
{
"docid": "9223330ceb0b0575379c238672b8afc2",
"text": "Contact networks are often used in epidemiological studies to describe the patterns of interactions within a population. Often, such networks merely indicate which individuals interact, without giving any indication of the strength or intensity of interactions. Here, we use weighted networks, in which every connection has an associated weight, to explore the influence of heterogeneous contact strengths on the effectiveness of control measures. We show that, by using contact weights to evaluate an individual's influence on an epidemic, individual infection risk can be estimated and targeted interventions such as preventative vaccination can be applied effectively. We use a diary study of social mixing behaviour to indicate the patterns of contact weights displayed by a real population in a range of different contexts, including physical interactions; we use these data to show that considerations of link weight can in some cases lead to improved interventions in the case of infections that spread through close contact interactions. However, we also see that simpler measures, such as an individual's total number of social contacts or even just their number of contacts during a single day, can lead to great improvements on random vaccination. We therefore conclude that, for many infections, enhanced social contact data can be simply used to improve disease control but that it is not necessary to have full social mixing information in order to enhance interventions.",
"title": ""
},
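One simple way to use contact weights for targeted interventions, as discussed above, is to rank individuals by their total contact strength and vaccinate the strongest first. The toy edge list and the choice of "top two" are invented for illustration.

```python
from collections import defaultdict

# (person, person, weight) edges; the weight could be, say, hours of contact per day.
edges = [("A", "B", 5.0), ("A", "C", 1.0), ("B", "C", 2.0),
         ("C", "D", 0.5), ("D", "E", 4.0), ("B", "E", 3.0)]

strength = defaultdict(float)      # node 'strength' = sum of incident contact weights
for u, v, w in edges:
    strength[u] += w
    strength[v] += w

targets = sorted(strength, key=strength.get, reverse=True)[:2]
print(dict(strength))              # {'A': 6.0, 'B': 10.0, 'C': 3.5, 'D': 4.5, 'E': 7.0}
print("vaccinate first:", targets) # ['B', 'E']
```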
{
"docid": "081390fc7a870821295a4a8b5341658b",
"text": "Sound events often occur in unstructured environments where they exhibit wide variations in their frequency content and temporal structure. Convolutional neural networks CNNs are able to extract higher level features that are invariant to local spectral and temporal variations. Recurrent neural networks RNNs are powerful in learning the longer term temporal context in the audio signals. CNNs and RNNs as classifiers have recently shown improved performances over established methods in various sound recognition tasks. We combine these two approaches in a convolutional recurrent neural network CRNN and apply it on a polyphonic sound event detection task. We compare the performance of the proposed CRNN method with CNN, RNN, and other established methods, and observe a considerable improvement for four different datasets consisting of everyday sound events.",
"title": ""
},
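A compact PyTorch sketch of the convolutional recurrent architecture described above follows: convolutions over log-mel features, a bidirectional GRU for temporal context, and per-frame sigmoid outputs for polyphonic detection. The layer sizes, pooling factors, and six-class output are assumptions for illustration, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Convolutions extract shift-invariant spectral features, a bidirectional GRU models
    longer-term temporal context, and per-frame sigmoids give multi-label (polyphonic) outputs."""
    def __init__(self, n_mels=40, n_classes=6):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d((1, 4)),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d((1, 5)),
        )
        # The frequency axis is pooled by 4 * 5 = 20, leaving 64 channels * (n_mels // 20) per frame.
        self.gru = nn.GRU(64 * (n_mels // 20), 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):                      # x: (batch, time, n_mels) log-mel features
        z = self.conv(x.unsqueeze(1))          # (batch, channels, time, reduced_mels)
        z = z.permute(0, 2, 1, 3).flatten(2)   # (batch, time, channels * reduced_mels)
        out, _ = self.gru(z)
        return torch.sigmoid(self.head(out))   # per-frame event activity probabilities

probs = CRNN()(torch.randn(2, 500, 40))
print(probs.shape)                             # torch.Size([2, 500, 6])
```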
{
"docid": "39168bcf3cd49c13c86b13e89197ce7d",
"text": "An unprecedented booming has been witnessed in the research area of artistic style transfer ever since Gatys et al. introduced the neural method. One of the remaining challenges is to balance a trade-off among three critical aspects—speed, flexibility, and quality: (i) the vanilla optimization-based algorithm produces impressive results for arbitrary styles, but is unsatisfyingly slow due to its iterative nature, (ii) the fast approximation methods based on feed-forward neural networks generate satisfactory artistic effects but bound to only a limited number of styles, and (iii) feature-matching methods like AdaIN achieve arbitrary style transfer in a real-time manner but at a cost of the compromised quality. We find it considerably difficult to balance the trade-off well merely using a single feed-forward step and ask, instead, whether there exists an algorithm that could adapt quickly to any style, while the adapted model maintains high efficiency and good image quality. Motivated by this idea, we propose a novel method, coined MetaStyle, which formulates the neural style transfer as a bilevel optimization problem and combines learning with only a few post-processing update steps to adapt to a fast approximation model with satisfying artistic effects, comparable to the optimization-based methods for an arbitrary style. The qualitative and quantitative analysis in the experiments demonstrates that the proposed approach achieves high-quality arbitrary artistic style transfer effectively, with a good trade-off among speed, flexibility, and quality.",
"title": ""
}
] |
scidocsrr
|
aae57b583448dfb7466fc99318286ef8
|
Social Identity Link Across Incomplete Social Information Sources Using Anchor Link Expansion
|
[
{
"docid": "e31b5b120d485d77e8743132f028d8b3",
"text": "In this paper, we consider the problem of linking users across multiple online communities. Specifically, we focus on the alias-disambiguation step of this user linking task, which is meant to differentiate users with the same usernames. We start quantitatively analyzing the importance of the alias-disambiguation step by conducting a survey on 153 volunteers and an experimental analysis on a large dataset of About.me (75,472 users). The analysis shows that the alias-disambiguation solution can address a major part of the user linking problem in terms of the coverage of true pairwise decisions (46.8%). To the best of our knowledge, this is the first study on human behaviors with regards to the usages of online usernames. We then cast the alias-disambiguation step as a pairwise classification problem and propose a novel unsupervised approach. The key idea of our approach is to automatically label training instances based on two observations: (a) rare usernames are likely owned by a single natural person, e.g. pennystar88 as a positive instance; (b) common usernames are likely owned by different natural persons, e.g. tank as a negative instance. We propose using the n-gram probabilities of usernames to estimate the rareness or commonness of usernames. Moreover, these two observations are verified by using the dataset of Yahoo! Answers. The empirical evaluations on 53 forums verify: (a) the effectiveness of the classifiers with the automatically generated training data and (b) that the rareness and commonness of usernames can help user linking. We also analyze the cases where the classifiers fail.",
"title": ""
},
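The rarity heuristic used above to label training pairs automatically can be approximated with a character-bigram language model over usernames: rare usernames (low per-character probability) would become positive, same-person pairs and common ones negative. The tiny corpus, smoothing constant, and vocabulary size below are illustrative assumptions.

```python
import math
from collections import Counter

def train_bigram_model(usernames):
    counts, context = Counter(), Counter()
    for name in usernames:
        padded = "^" + name.lower() + "$"
        for a, b in zip(padded, padded[1:]):
            counts[(a, b)] += 1
            context[a] += 1
    return counts, context

def per_char_log_prob(name, counts, context, alpha=1.0, vocab=40):
    """Add-alpha smoothed per-character log-probability of a username under the bigram model."""
    padded = "^" + name.lower() + "$"
    lp = sum(math.log((counts[(a, b)] + alpha) / (context[a] + alpha * vocab))
             for a, b in zip(padded, padded[1:]))
    return lp / len(name)

corpus = ["tank", "star", "pennystar88", "gamer", "tanker", "stargazer", "anna", "penny"]
counts, context = train_bigram_model(corpus)

for name in ["tank", "pennystar88"]:
    # Lower values indicate rarer usernames; a threshold on this score would decide whether
    # a cross-site username pair is labeled as the same person or as different people.
    print(name, round(per_char_log_prob(name, counts, context), 2))
```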
{
"docid": "8cddb1fed30976de82d62de5066a5ce6",
"text": "Today, more and more people have their virtual identities on the web. It is common that people are users of more than one social network and also their friends may be registered on multiple websites. A facility to aggregate our online friends into a single integrated environment would enable the user to keep up-to-date with their virtual contacts more easily, as well as to provide improved facility to search for people across different websites. In this paper, we propose a method to identify users based on profile matching. We use data from two popular social networks to study the similarity of profile definition. We evaluate the importance of fields in the web profile and develop a profile comparison tool. We demonstrate the effectiveness and efficiency of our tool in identifying and consolidating duplicated users on different websites.",
"title": ""
},
{
"docid": "27513d1309f370e9bd8426d0d9971447",
"text": "Online social networks can often be represented as heterogeneous information networks containing abundant information about: who, where, when and what. Nowadays, people are usually involved in multiple social networks simultaneously. The multiple accounts of the same user in different networks are mostly isolated from each other without any connection between them. Discovering the correspondence of these accounts across multiple social networks is a crucial prerequisite for many interesting inter-network applications, such as link recommendation and community analysis using information from multiple networks. In this paper, we study the problem of anchor link prediction across multiple heterogeneous social networks, i.e., discovering the correspondence among different accounts of the same user. Unlike most prior work on link prediction and network alignment, we assume that the anchor links are one-to-one relationships (i.e., no two edges share a common endpoint) between the accounts in two social networks, and a small number of anchor links are known beforehand. We propose to extract heterogeneous features from multiple heterogeneous networks for anchor link prediction, including user's social, spatial, temporal and text information. Then we formulate the inference problem for anchor links as a stable matching problem between the two sets of user accounts in two different networks. An effective solution, MNA (Multi-Network Anchoring), is derived to infer anchor links w.r.t. the one-to-one constraint. Extensive experiments on two real-world heterogeneous social networks show that our MNA model consistently outperform other commonly-used baselines on anchor link prediction.",
"title": ""
}
] |
[
{
"docid": "caa6f0769cc62cbde30b96ae31dabb3f",
"text": "ThyssenKrupp Transrapid developed a new motor winding for synchronous long stator propulsion with optimized grounding system. The motor winding using a cable without metallic screen is presented. The function as well as the mechanical and electrical design of the grounding system is illustrated. The new design guarantees a much lower electrical stress than the load capacity of the system. The main design parameters, simulation and testing results as well as calculations of the electrical stress of the grounding system are described.",
"title": ""
},
{
"docid": "e645deb8bfd17dd8ef657ef0a0e0e960",
"text": "HR Tool Employee engagement refers to the level of commitment workers make to their employer, seen in their willingness to stay at the firm and to go beyond the call of duty.1 Firms want employees that are highly motivated and feel they have a real stake in the company’s success. Such employees are willing to finish tasks in their own time and see a strong link between the firm’s success and their own career prospects. In short, motivated, empowered employees work hand in hand with employers in an atmosphere of mutual trust. Companies with engaged workforces have also reported less absenteeism, more engagement with customers, greater employee satisfaction, less mistakes, fewer employees leaving, and naturally higher profits. Such is the power of this concept that former Secretary of State for Business, Peter Mandelson, commissioned David McLeod and Nita Clarke to investigate how much UK competitiveness could be enhanced by wider use of employee engagement. David and Nita concluded that in a world where work tasks have become increasingly similar, engaged employees could give some companies the edge over their rivals. They also identified significant barriers to engagement such as a lack of appreciation for the concept of employee engagement by some companies and managers. Full participation by line managers is particularly crucial. From the employee point of view, it is easy to view engagement as a management fad, particularly if the company fails to demonstrate the necessary commitment. Some also feel that in a recession, employee engagement becomes less of a priority when in Performance Management and Appraisal 8 CHATE R",
"title": ""
},
{
"docid": "d51408ad40bdc9a3a846aaf7da907cef",
"text": "Accessing online information from various data sources has become a necessary part of our everyday life. Unfortunately such information is not always trustworthy, as different sources are of very different qualities and often provide inaccurate and conflicting information. Existing approaches attack this problem using unsupervised learning methods, and try to infer the confidence of the data value and trustworthiness of each source from each other by assuming values provided by more sources are more accurate. However, because false values can be widespread through copying among different sources and out-of-date data often overwhelm up-to-date data, such bootstrapping methods are often ineffective.\n In this paper we propose a semi-supervised approach that finds true values with the help of ground truth data. Such ground truth data, even in very small amount, can greatly help us identify trustworthy data sources. Unlike existing studies that only provide iterative algorithms, we derive the optimal solution to our problem and provide an iterative algorithm that converges to it. Experiments show our method achieves higher accuracy than existing approaches, and it can be applied on very huge data sets when implemented with MapReduce.",
"title": ""
},
{
"docid": "5ca1c503cba0db452d0e5969e678db97",
"text": "Deep neural network models have recently achieved state-of-the-art performance gains in a variety of natural language processing (NLP) tasks (Young, Hazarika, Poria, & Cambria, 2017). However, these gains rely on the availability of large amounts of annotated examples, without which state-of-the-art performance is rarely achievable. This is especially inconvenient for the many NLP fields where annotated examples are scarce, such as medical text. To improve NLP models in this situation, we evaluate five improvements on named entity recognition (NER) tasks when only ten annotated examples are available: (1) layer-wise initialization with pre-trained weights, (2) hyperparameter tuning, (3) combining pre-training data, (4) custom word embeddings, and (5) optimizing out-of-vocabulary (OOV) words. Experimental results show that the F1 score of 69.3% achievable by state-of-the-art models can be improved to 78.87%.",
"title": ""
},
{
"docid": "3b886932b4b036ec4e9ceafc5066397b",
"text": "0957-4174/$ see front matter 2009 Elsevier Ltd. A doi:10.1016/j.eswa.2009.11.083 E-mail address: mo23628@sgh.waw.pl 1 PLN is the abbreviation of the Polish currency unit In this article, we test the usefulness of the popular data mining models to predict churn of the clients of the Polish cellular telecommunication company. When comparing to previous studies on this topic, our research is novel in the following areas: (1) we deal with prepaid clients (previous studies dealt with postpaid clients) who are far more likely to churn, are less stable and much less is known about them (no application, demographical or personal data), (2) we have 1381 potential variables derived from the clients’ usage (previous studies dealt with data with at least tens of variables) and (3) we test the stability of models across time for all the percentiles of the lift curve – our test sample is collected six months after the estimation of the model. The main finding from our research is that linear models, especially logistic regression, are a very good choice when modelling churn of the prepaid clients. Decision trees are unstable in high percentiles of the lift curve, and we do not recommend their usage. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6ef6c1b19d8c82f500ea2b1e213d750d",
"text": "Video summarization aims to facilitate large-scale video browsing by producing short, concise summaries that are diverse and representative of original videos. In this paper, we formulate video summarization as a sequential decisionmaking process and develop a deep summarization network (DSN) to summarize videos. DSN predicts for each video frame a probability, which indicates how likely a frame is selected, and then takes actions based on the probability distributions to select frames, forming video summaries. To train our DSN, we propose an end-to-end, reinforcement learningbased framework, where we design a novel reward function that jointly accounts for diversity and representativeness of generated summaries and does not rely on labels or user interactions at all. During training, the reward function judges how diverse and representative the generated summaries are, while DSN strives for earning higher rewards by learning to produce more diverse and more representative summaries. Since labels are not required, our method can be fully unsupervised. Extensive experiments on two benchmark datasets show that our unsupervised method not only outperforms other stateof-the-art unsupervised methods, but also is comparable to or even superior than most of published supervised approaches.",
"title": ""
},
{
"docid": "bbcd0a157ee615d5a7c45e688c49aa8f",
"text": "The study of brain networks by resting-state functional magnetic resonance imaging (rs-fMRI) is a promising method for identifying patients with dementia from healthy controls (HC). Using graph theory, different aspects of the brain network can be efficiently characterized by calculating measures of integration and segregation. In this study, we combined a graph theoretical approach with advanced machine learning methods to study the brain network in 89 patients with mild cognitive impairment (MCI), 34 patients with Alzheimer’s disease (AD), and 45 age-matched HC. The rs-fMRI connectivity matrix was constructed using a brain parcellation based on a 264 putative functional areas. Using the optimal features extracted from the graph measures, we were able to accurately classify three groups (i.e., HC, MCI, and AD) with accuracy of 88.4 %. We also investigated performance of our proposed method for a binary classification of a group (e.g., MCI) from two other groups (e.g., HC and AD). The classification accuracies for identifying HC from AD and MCI, AD from HC and MCI, and MCI from HC and AD, were 87.3, 97.5, and 72.0 %, respectively. In addition, results based on the parcellation of 264 regions were compared to that of the automated anatomical labeling atlas (AAL), consisted of 90 regions. The accuracy of classification of three groups using AAL was degraded to 83.2 %. Our results show that combining the graph measures with the machine learning approach, on the basis of the rs-fMRI connectivity analysis, may assist in diagnosis of AD and MCI.",
"title": ""
},
{
"docid": "a7b0f0455482765efd3801c3ae9f85b7",
"text": "The Business Process Modelling Notation (BPMN) is a standard for capturing business processes in the early phases of systems development. The mix of constructs found in BPMN makes it possible to create models with semantic errors. Such errors are especially serious, because errors in the early phases of systems development are among the most costly and hardest to correct. The ability to statically check the semantic correctness of models is thus a desirable feature for modelling tools based on BPMN. Accordingly, this paper proposes a mapping from BPMN to a formal language, namely Petri nets, for which efficient analysis techniques are available. The proposed mapping has been implemented as a tool that, in conjunction with existing Petri net-based tools, enables the static analysis of BPMN models. The formalisation also led to the identification of deficiencies in the BPMN standard specification.",
"title": ""
},
{
"docid": "acb2177446deb8e279deca87724dbdca",
"text": "All teachers acknowledge appropriate student behaviors and desired social skills and provide differential attention/response to inappropriate behaviors. (CL22) Evidence Review: Research demonstrates that teachers who establish an orderly and positive classroom environment by teaching and reinforcing rules and routines reduce behavior problems. Teacher's acknowledgement of appropriate behavior is related to both initial and long-term academic engagement and social success (Akin-Little et al. (2004); Cameron et al.(2001). Rewards (such as approval, praise, recognition, special privileges, points, or other incentives) are most effective in reinforcing students' appropriate behavior when teachers: Use small rewards frequently, rather than large rewards infrequently; Deliver rewards quickly after the desired behavior is exhibited; Reward behavior, not the individual, and communicate to students the specific behavior that led to the reward; Use several different kinds of rewards selected carefully to ensure that they are reinforcing positive behavior; and Gradually begin to reduce and then eliminate rewards. Research also shows that the amount of praise that students receive for appropriate behavior should exceed the amount of times they are corrected or reprimanded by a ratio of four to one to improve student academic and behavioral outcomes. Evidence Review: Deci, Koestner, and Ryan (2001) conducted a meta-analysis in which they examined the effect of extrinsic rewards on intrinsic motivation. They found that verbal rewards can enhance intrinsic motivation; however, verbal rewards are less likely to have a positive effect for children than for older individuals (i.e., college students). Verbal rewards can have a negative effect on intrinsic motivation if they are administered in a controlling rather than informational way. When presenting high-level interest tasks, the use of tangible rewards can have negative consequences for subsequent interest, persistence and preference for challenge, especially for children. Evidence Review: There is compelling meta-analytic evidence that appropriate disciplinary interventions including teacher reaction to students appropriate and inappropriate behavior produce positive change in student behavior. Simple and often subtle teacher reactions have been shown to decrease student misbehavior including eye contact, moving closer to the student, a shake of the head, a simple verbal reminder-ideally as privately and subtly as possible, reminder of the desired appropriate behavior, and simply telling the student to stop the inappropriate behavior (Madsen, Becker, & Thomas, 1968). Teachers should also quietly and privately acknowledge appropriate behavior.",
"title": ""
},
{
"docid": "78a2bf1c2edec7ec9eb48f8b07dc9e04",
"text": "The performance of the most commonly used metal antennas close to the human body is one of the limiting factors of the performance of bio-sensors and wireless body area networks (WBAN). Due to the high dielectric and conductivity contrast with respect to most parts of the human body (blood, skin, …), the range of most of the wireless sensors operating in RF and microwave frequencies is limited to 1–2 cm when attached to the body. In this paper, we introduce the very novel idea of liquid antennas, that is based on engineering the properties of liquids. This approach allows for the improvement of the range by a factor of 5–10 in a very easy-to-realize way, just modifying the salinity of the aqueous solution of the antenna. A similar methodology can be extended to the development of liquid RF electronics for implantable devices and wearable real-time bio-signal monitoring, since it can potentially lead to very flexible antenna and electronic configurations.",
"title": ""
},
{
"docid": "2b53e3494d58b2208f95d5bb67589677",
"text": "In his paper ‘Logic and conversation’ Grice (1989: 37) introduced a distinction between generalized and particularized conversational implicatures. His notion of a generalized conversational implicature (GCI) has been developed in two competing directions, by neo-Griceans such as Horn (1989) and Levinson (1983, 1987b, 1995, 2000) on the one hand, and relevance theorists such as Sperber & Wilson (1986) and Carston (1988, 1993, 1995, 1997, 1998a,b) on the other. Levinson defends the claim that GCIs are inferred on the basis of a set of default heuristics that are triggered by the presence of certain sorts of lexical items. These default inferences will be drawn unless something unusual in the context blocks them. Carston reconceives GCIs as contents that a speaker directly communicates, rather than as contents that are merely conversationally implicated. GCIs are treated as pragmatic developments of semantically underspecified logical forms. They are not the products of default inferences, since what is communicated depends heavily on the specific context, and not merely on the presence or absence of certain lexical items. We introduce two processing models, the Default Model and the Underspecified Model, that are inspired by these rival theoretical views. This paper describes an eye monitoring experiment that is intended to test the predictions of these two models. Our primary concern is to make a case for the claim that it is fruitful to apply an eye tracking methodology to an area of pragmatic research that has not previously been explored from a processing perspective.",
"title": ""
},
{
"docid": "78b6312b787b015fb9d238b03840566e",
"text": "Recent advances in computed tomography (CT) technology allow for acquisition of two CT datasets with different X-ray spectra. There are different dual-energy computed tomography (DECT) technical approaches such as: the dual-source CT, the fast kilovoltage-switching method, and the sandwich detectors technique. There are various postprocessing algorithms that are available to provide clinically relevant spectral information. There are several clinical applications of DECT that are easily accessible in the emergency setting. In this review article, we aim to provide the emergency radiologist with a discussion on how this new technology works and how some of its applications can be useful in the emergency room setting.",
"title": ""
},
{
"docid": "fd184f271a487aba70025218fd8c76e4",
"text": "BACKGROUND\nIron deficiency anaemia is common in patients with chronic kidney disease, and intravenous iron is the preferred treatment for those on haemodialysis. The aim of this trial was to compare the efficacy and safety of iron isomaltoside 1000 (Monofer®) with iron sucrose (Venofer®) in haemodialysis patients.\n\n\nMETHODS\nThis was an open-label, randomized, multicentre, non-inferiority trial conducted in 351 haemodialysis subjects randomized 2:1 to either iron isomaltoside 1000 (Group A) or iron sucrose (Group B). Subjects in Group A were equally divided into A1 (500 mg single bolus injection) and A2 (500 mg split dose). Group B were also treated with 500 mg split dose. The primary end point was the proportion of subjects with haemoglobin (Hb) in the target range 9.5-12.5 g/dL at 6 weeks. Secondary outcome measures included haematology parameters and safety parameters.\n\n\nRESULTS\nA total of 351 subjects were enrolled. Both treatments showed similar efficacy with >82% of subjects with Hb in the target range (non-inferiority, P = 0.01). Similar results were found when comparing subgroups A1 and A2 with Group B. No statistical significant change in Hb concentration was found between any of the groups. There was a significant increase in ferritin from baseline to Weeks 1, 2 and 4 in Group A compared with Group B (Weeks 1 and 2: P < 0.001; Week 4: P = 0.002). There was a significant higher increase in reticulocyte count in Group A compared with Group B at Week 1 (P < 0.001). The frequency, type and severity of adverse events were similar.\n\n\nCONCLUSIONS\nIron isomaltoside 1000 and iron sucrose have comparative efficacy in maintaining Hb concentrations in haemodialysis subjects and both preparations were well tolerated with a similar short-term safety profile.",
"title": ""
},
{
"docid": "1ee444fda98b312b0462786f5420f359",
"text": "After years of banning consumer devices (e.g., iPads and iPhone) and applications (e.g., DropBox, Evernote, iTunes) organizations are allowing employees to use their consumer tools in the workplace. This IT consumerization phenomenon will have serious consequences on IT departments which have historically valued control, security, standardization and support (Harris et al. 2012). Based on case studies of three organizations in different stages of embracing IT consumerization, this study identifies the conflicts IT consumerization creates for IT departments. All three organizations experienced similar goal and behavior conflicts, while identity conflict varied depending upon the organizations’ stage implementing consumer tools (e.g., embryonic, initiating or institutionalized). Theoretically, this study advances IT consumerization research by applying a role conflict perspective to understand consumerization’s impact on the IT department.",
"title": ""
},
{
"docid": "8a695d5913c3b87fb21864c0bdd3d522",
"text": "Environmental topics have gained much consideration in corporate green operations. Globalization, stakeholder pressures, and stricter environmental regulations have made organizations develop environmental practices. Thus, green supply chain management (GSCM) is now a proactive approach for organizations to enhance their environmental performance and achieve competitive advantages. This study pioneers using the decision-making trial and evaluation laboratory (DEMATEL) method with intuitionistic fuzzy sets to handle the important and causal relationships between GSCM practices and performances. DEMATEL evaluates GSCM practices to find the main practices to improve both environmental and economic performances. This study uses intuitionistic fuzzy set theory to handle the linguistic imprecision and the ambiguity of human being’s judgment. A case study from the automotive industry is presented to evaluate the efficiency of the proposed method. The results reveal ‘‘internal management support’’, ‘‘green purchasing’’ and ‘‘ISO 14001 certification’’ are the most significant GSCM practices. The practical results of this study offer useful insights for managers to become more environmentally responsible, while improving their economic and environmental performance goals. Further, a sensitivity analysis of results, managerial implications, conclusions, limitations and future research opportunities are provided. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "295d94e49b08e5a4bae0ba3cdcd3ba05",
"text": "Imitation learning (IL) consists of a set of tools that leverage expert demonstrations to quickly learn policies. However, if the expert is suboptimal, IL can yield policies with inferior performance compared to reinforcement learning (RL). In this paper, we aim to provide an algorithm that combines the best aspects of RL and IL. We accomplish this by formulating several popular RL and IL algorithms in a common mirror descent framework, showing that these algorithms can be viewed as a variation on a single approach. We then propose LOKI, a strategy for policy learning that first performs a small but random number of IL iterations before switching to a policy gradient RL method. We show that if the switching time is properly randomized, LOKI can learn to outperform a suboptimal expert and converge faster than running policy gradient from scratch. Finally, we evaluate the performance of LOKI experimentally in several simulated environments.",
"title": ""
},
{
"docid": "70e89d5d0b886b1c32b1f1b8c01db99b",
"text": "In clinical dictation, speakers try to be as concise as possible to save time, often resulting in utterances without explicit punctuation commands. Since the end product of a dictated report, e.g. an out-patient letter, does require correct orthography, including exact punctuation, the latter need to be restored, preferably by automated means. This paper describes a method for punctuation restoration based on a stateof-the-art stack of NLP and machine learning techniques including B-RNNs with an attention mechanism and late fusion, as well as a feature extraction technique tailored to the processing of medical terminology using a novel vocabulary reduction model. To the best of our knowledge, the resulting performance is superior to that reported in prior art on similar tasks.",
"title": ""
},
{
"docid": "0cbc2eb794f44b178a54d97aeff69c19",
"text": "Automatic identification of predatory conversations i chat logs helps the law enforcement agencies act proactively through early detection of predatory acts in cyberspace. In this paper, we describe the novel application of a deep learnin g method to the automatic identification of predatory chat conversations in large volumes of ch at logs. We present a classifier based on Convolutional Neural Network (CNN) to address this problem domain. The proposed CNN architecture outperforms other classification techn iques that are common in this domain including Support Vector Machine (SVM) and regular Neural Network (NN) in terms of classification performance, which is measured by F 1-score. In addition, our experiments show that using existing pre-trained word vectors are no t suitable for this specific domain. Furthermore, since the learning algorithm runs in a m ssively parallel environment (i.e., general-purpose GPU), the approach can benefit a la rge number of computation units (neurons) compared to when CPU is used. To the best of our knowledge, this is the first tim e that CNNs are adapted and applied to this application do main.",
"title": ""
},
{
"docid": "b17e8d67c08934d89a58833898bf7955",
"text": "The Mandelbrot set is a famous fractal. It serves as the source of a large number of complex mathematical images. Evolutionary computation can be used to search the Mandelbrot set for interesting views. This study compares the results of using several different fitness functions for this search. Some of the fitness functions give substantial control over the appearance of the resulting views while others simply locate parts of the Mandelbrot set in which there are complicated structures. All of the fitness functions are based on finding desirable patterns in the number of iterations of the basic Mandelbrot formula to diverge on a set of points arranged in a regular grid near the boundary of the set. It is shown that using different fitness functions causes an evolutionary algorithm to locate difference types of views into the Mandelbrot set.",
"title": ""
},
{
"docid": "4bc7687ba89699a537329f37dda4e74d",
"text": "At the same time as cities are growing, their share of older residents is increasing. To engage and assist cities to become more “age-friendly,” the World Health Organization (WHO) prepared the Global Age-Friendly Cities Guide and a companion “Checklist of Essential Features of Age-Friendly Cities”. In collaboration with partners in 35 cities from developed and developing countries, WHO determined the features of age-friendly cities in eight domains of urban life: outdoor spaces and buildings; transportation; housing; social participation; respect and social inclusion; civic participation and employment; communication and information; and community support and health services. In 33 cities, partners conducted 158 focus groups with persons aged 60 years and older from lower- and middle-income areas of a locally defined geographic area (n = 1,485). Additional focus groups were held in most sites with caregivers of older persons (n = 250 caregivers) and with service providers from the public, voluntary, and commercial sectors (n = 515). No systematic differences in focus group themes were noted between cities in developed and developing countries, although the positive, age-friendly features were more numerous in cities in developed countries. Physical accessibility, service proximity, security, affordability, and inclusiveness were important characteristics everywhere. Based on the recurring issues, a set of core features of an age-friendly city was identified. The Global Age-Friendly Cities Guide and companion “Checklist of Essential Features of Age-Friendly Cities” released by WHO serve as reference for other communities to assess their age readiness and plan change.",
"title": ""
}
] |
scidocsrr
|
bd9696dbeb9f275fa10f67a6205f3393
|
Managing RFID Data: Challenges, Opportunities and Solutions
|
[
{
"docid": "564f9c0a1e1f395d59837e1a4b7f08ef",
"text": "To compensate for the inherent unreliability of RFID data streams, most RFID middleware systems employ a \"smoothing filter\", a sliding-window aggregate that interpolates for lost readings. In this paper, we propose SMURF, the first declarative, adaptive smoothing filter for RFID data cleaning. SMURF models the unreliability of RFID readings by viewing RFID streams as a statistical sample of tags in the physical world, and exploits techniques grounded in sampling theory to drive its cleaning processes. Through the use of tools such as binomial sampling and π-estimators, SMURF continuously adapts the smoothing window size in a principled manner to provide accurate RFID data to applications.",
"title": ""
},
{
"docid": "8ae12d8ef6e58cb1ac376eb8c11cd15a",
"text": "This paper surveys recent technical research on the problems of privacy and security for radio frequency identification (RFID). RFID tags are small, wireless devices that help identify objects and people. Thanks to dropping cost, they are likely to proliferate into the billions in the next several years-and eventually into the trillions. RFID tags track objects in supply chains, and are working their way into the pockets, belongings, and even the bodies of consumers. This survey examines approaches proposed by scientists for privacy protection and integrity assurance in RFID systems, and treats the social and technical context of their work. While geared toward the nonspecialist, the survey may also serve as a reference for specialist readers.",
"title": ""
}
] |
[
{
"docid": "36c4b2ab451c24d2d0d6abcbec491116",
"text": "A key advantage of scientific workflow systems over traditional scripting approaches is their ability to automatically record data and process dependencies introduced during workflow runs. This information is often represented through provenance graphs, which can be used by scientists to better understand, reproduce, and verify scientific results. However, while most systems record and store data and process dependencies, few provide easy-to-use and efficient approaches for accessing and querying provenance information. Instead, users formulate provenance graph queries directly against physical data representations (e.g., relational, XML, or RDF), leading to queries that are difficult to express and expensive to evaluate. We address these problems through a high-level query language tailored for expressing provenance graph queries. The language is based on a general model of provenance supporting scientific workflows that process XML data and employ update semantics. Query constructs are provided for querying both structure and lineage information. Unlike other languages that return sets of nodes as answers, our query language is closed, i.e., answers to lineage queries are sets of lineage dependencies (edges) allowing answers to be further queried. We provide a formal semantics for the language and present novel techniques for efficiently evaluating lineage queries. Experimental results on real and synthetic provenance traces demonstrate that our lineage based optimizations outperform an in-memory and standard database implementation by orders of magnitude. We also show that our strategies are feasible and can significantly reduce both provenance storage size and query execution time when compared with standard approaches.",
"title": ""
},
{
"docid": "8b1734f040031e22c50b6b2a573ff58a",
"text": "Is it permissible to harm one to save many? Classic moral dilemmas are often defined by the conflict between a putatively rational response to maximize aggregate welfare (i.e., the utilitarian judgment) and an emotional aversion to harm (i.e., the non-utilitarian judgment). Here, we address two questions. First, what specific aspect of emotional responding is relevant for these judgments? Second, is this aspect of emotional responding selectively reduced in utilitarians or enhanced in non-utilitarians? The results reveal a key relationship between moral judgment and empathic concern in particular (i.e., feelings of warmth and compassion in response to someone in distress). Utilitarian participants showed significantly reduced empathic concern on an independent empathy measure. These findings therefore reveal diminished empathic concern in utilitarian moral judges.",
"title": ""
},
{
"docid": "d13ce7762aeded7a40a7fbe89f1beccf",
"text": "[Purpose] This study aims to examined the effect of the self-myofascial release induced with a foam roller on the reduction of stress by measuring the serum concentration of cortisol. [Subjects and Methods] The subjects of this study were healthy females in their 20s. They were divided into the experimental and control groups. Both groups, each consisting of 12 subjects, were directed to walk for 30 minutes on a treadmill. The control group rested for 30 minutes of rest by lying down, whereas the experimental group was performed a 30 minutes of self-myofascial release program. [Results] Statistically significant levels of cortisol concentration reduction were observed in both the experimental group, which used the foam roller, and the control group. There was no statistically significant difference between the two groups. [Conclusion] The Self-myofascial release induced with a foam roller did not affect the reduction of stress.",
"title": ""
},
{
"docid": "94a35547a45c06a90f5f50246968b77e",
"text": "In this paper we present a process called color transfer which can borrow one image's color characteristics from another. Recently Reinhard and his colleagues reported a pioneering work of color transfer. Their technology can produce very believable results, but has to transform pixel values from RGB to lαβ. Inspired by their work, we advise an approach which can directly deal with the color transfer in any 3D space.From the view of statistics, we consider pixel's value as a three-dimension stochastic variable and an image as a set of samples, so the correlations between three components can be measured by covariance. Our method imports covariance between three components of pixel values while calculate the mean along each of the three axes. Then we decompose the covariance matrix using SVD algorithm and get a rotation matrix. Finally we can scale, rotate and shift pixel data of target image to fit data points' cluster of source image in the current color space and get resultant image which takes on source image's look and feel. Besides the global processing, a swatch-based method is introduced in order to manipulate images' color more elaborately. Experimental results confirm the validity and usefulness of our method.",
"title": ""
},
{
"docid": "c507ce14998e9ef9e574b1b4cc021dec",
"text": "There are no scientific publications on a electric motor in Tesla cars, so let's try to deduce something. Tesla's induction motor is very enigmatic so the paper tries to introduce a basic model. This secrecy could be interesting for the engineering and physics students. Multidisciplinary problem is considered: kinematics, mechanics, electric motors, numerical methods, control of electric drives. Identification based on three points in the steady-state torque-speed curve of the induction motor is presented. The field weakening mode of operation of the motor is analyzed. The Kloss' formula is obtained. The main aim of the article is determination of a mathematical description of the torque vs. speed curve of induction motor and its application for vehicle motion modeling. Additionally, the moment of inertia of the motor rotor and the electric vehicle mass are considered in one equation as electromechanical system. Presented approach may seem like speculation, but it allows to understand the problem of a vehicle motion. The article composition is different from classical approach - studying should be intriguing.",
"title": ""
},
{
"docid": "25751673cedf36c5e8b7ae310b66a8f2",
"text": "BACKGROUND\nMuscle dysmorphia (MD) describes a condition characterised by a misconstrued body image in which individuals who interpret their body size as both small or weak even though they may look normal or highly muscular.MD has been conceptualized as a type of body dysmorphic disorder, an eating disorder, and obsessive–compulsive disorder symptomatology. METHOD AND AIM: Through a review of the most salient literature on MD, this paper proposes an alternative classification of MD--the ‘Addiction to Body Image’ (ABI) model--using Griffiths (2005)addiction components model as the framework in which to define MD as an addiction.\n\n\nRESULTS\nIt is argued the addictive activity in MD is the maintaining of body image via a number of different activities such as bodybuilding, exercise,eating certain foods, taking specific drugs (e.g., anabolic steroids), shopping for certain foods, food supplements,and the use or purchase of physical exercise accessories). In the ABI model, the perception of the positive effects on the self-body image is accounted for as a critical aspect of the MD condition (rather than addiction to exercise or certain types of eating disorder).\n\n\nCONCLUSIONS\nBased on empirical evidence to date, it is proposed that MD could be re-classified as an addiction due to the individual continuing to engage in maintenance behaviours that may cause long-term harm.",
"title": ""
},
{
"docid": "3e177f8b02a5d67c7f4d93ce601c4539",
"text": "This research proposes an approach for text classification that uses a simple neural network called Dynamic Text Classifier Neural Network (DTCNN). The neural network uses as input vectors of words with variable dimension without information loss called Dynamic Token Vectors (DTV). The proposed neural network is designed for the classification of large and short text into categories. The learning process combines competitive and Hebbian learning. Due to the combination of these learning rules the neural network is able to work in a supervised or semi-supervised mode. In addition, it provides transparency in the classification. The network used in this paper is quite simple, and that is what makes enough for its task. The results of evaluation the proposed method shows an improvement in the text classification problem using the DTCNN compared to baseline approaches.",
"title": ""
},
{
"docid": "fbddd20271cf134e15b33e7d6201c374",
"text": "Authors and publishers who wish their publications to be considered for review in Computational Linguistics should send a copy to the book review editor, Graeme Hirst, Department of Computer Science, University of Toronto, Toronto, Canada M5S 3G4. All relevant books received will be listed, but not all can be reviewed. Technical reports (other than dissertations) will not be listed or reviewed. Authors should be aware that some publishers will not send books for review (even when instructed to do so); authors wishing to enquire as to whether their book has been received for review may contact the book review editor.",
"title": ""
},
{
"docid": "2d254443a7cbe748250acc0070c4a08b",
"text": "This paper introduces a new supervised Bayesian approach to hyperspectral image segmentation with active learning, which consists of two main steps. First, we use a multinomial logistic regression (MLR) model to learn the class posterior probability distributions. This is done by using a recently introduced logistic regression via splitting and augmented Lagrangian algorithm. Second, we use the information acquired in the previous step to segment the hyperspectral image using a multilevel logistic prior that encodes the spatial information. In order to reduce the cost of acquiring large training sets, active learning is performed based on the MLR posterior probabilities. Another contribution of this paper is the introduction of a new active sampling approach, called modified breaking ties, which is able to provide an unbiased sampling. Furthermore, we have implemented our proposed method in an efficient way. For instance, in order to obtain the time-consuming maximum a posteriori segmentation, we use the α-expansion min-cut-based integer optimization algorithm. The state-of-the-art performance of the proposed approach is illustrated using both simulated and real hyperspectral data sets in a number of experimental comparisons with recently introduced hyperspectral image analysis methods.",
"title": ""
},
{
"docid": "3e3dc575858c21806edbe6149475f5e0",
"text": "This paper describes a new model for understanding natural language commands given to autonomous systems that perform navigation and mobile manipulation in semi-structured environments. Previous approaches have used models with fixed structure to infer the likelihood of a sequence of actions given the environment and the command. In contrast, our framework, called Generalized Grounding Graphs (G), dynamically instantiates a probabilistic graphical model for a particular natural language command according to the command’s hierarchical and compositional semantic structure. Our system performs inference in the model to successfully find and execute plans corresponding to natural language commands such as “Put the tire pallet on the truck.” The model is trained using a corpus of commands collected using crowdsourcing. We pair each command with robot actions and use the corpus to learn the parameters of the model. We evaluate the robot’s performance by inferring plans from natural language commands, executing each plan in a realistic robot simulator, and asking users to evaluate the system’s performance. We demonstrate that our system can successfully follow many natural language commands from the corpus.",
"title": ""
},
{
"docid": "6cd317113158241a98517ad5a8247174",
"text": "Feature Oriented Programming (FOP) is an emerging paradigmfor application synthesis, analysis, and optimization. Atarget application is specified declaratively as a set of features,like many consumer products (e.g., personal computers,automobiles). FOP technology translates suchdeclarative specifications into efficient programs.",
"title": ""
},
{
"docid": "619af7dc39e21690c1d164772711d7ed",
"text": "The prevalence of smart mobile devices has promoted the popularity of mobile applications (a.k.a. apps). Supporting mobility has become a promising trend in software engineering research. This article presents an empirical study of behavioral service profiles collected from millions of users whose devices are deployed with Wandoujia, a leading Android app-store service in China. The dataset of Wandoujia service profiles consists of two kinds of user behavioral data from using 0.28 million free Android apps, including (1) app management activities (i.e., downloading, updating, and uninstalling apps) from over 17 million unique users and (2) app network usage from over 6 million unique users. We explore multiple aspects of such behavioral data and present patterns of app usage. Based on the findings as well as derived knowledge, we also suggest some new open opportunities and challenges that can be explored by the research community, including app development, deployment, delivery, revenue, etc.",
"title": ""
},
{
"docid": "44e374587f199b4161315850b58fe2fa",
"text": "This paper discusses a new kind of distortion mechanism found in transistorized audio power amplifiers. It is shown that this distortion arises from the multistage feedback loop found in most high-quality amplifiers, provided that the open-loop transient response of the power amplifier is slower than the transient response of the preamplifier. The results of the analysis are verified by measurements from a simulated power amplifier, and a number of constructional rules for eliminating this distortion are derived. Manuscript received December 3, 1969; revised January 23, 1970. introduction An ordinary transistorized audio amplifier consists of a preamplifier and a power amplifier. The typical preamplifier incorporates two to eight stages with local feedback. The power amplifier has, however, usually a feedback loop enclosing three to four stages. The power amplifier generally determines the frequency response and the distortion of the whole amplifier, For stationary signals, the harmonic distortion of the power amplifier decreases proportionally with increasing feedback, provided that the transfer function of the amplifier is monotonically continuous and that the gain is always greater than zero. (These assumptions are not valid, of course, in case of overload or crossover distortion.) With the same assumptions, the intermodulation distortion decreases similarly. The frequency response is also enhanced in proportion with the feedback. It would seem, then, that feedback is highly beneficial to the power amplifier. The purpose of this paper is, however, to show that the usable frequency response of the amplifier does not necessarily become better due to feedback, and that, under certain circumstances, the feedback can cause severe transient distortion resembling intermodulation distortion. These facts are well known among amplifier designers and have been discussed on a phenomenological basis (for instance [l]). They have not, however, received a. quantitative ,treatment except in some special cases [2], [3 I. Transient Signals in Amplifiers Sound in general, and especially music, consists largely of sudden variations. The steep rise portion of these transient signals can be approximated with a unit step function, provided that the transfer functions of the microphone and the amplifiers are considered separately. We may, therefore, divide the amplifier as in Fig. 1. A is the preamplifier including the microphone, C is the power amplifier, and B is the feedback loop around it. If resistive feedback is to be applied in the power amplifier, stability criteria necessitate its transfer function to have not more than two poles and a single zero in the usable frequency range. The transfer function without feedback can thus be approximated to be of the form F,(s) = d o SD? 1 (1) (s + wo)(s + 4 where A. is the midband gain without feedback, and w1 and w0 are the upper and lower cutoff angular frequencies, respectively. The transfer function of the signal source and the preamplifier can be arbitrary. Usually, however, it can be considered as having several poles and zeros, often multiple. In the following we will consider two special cases. Case a: The transfer function is flat in the midband and has a 12 dB per octave rolloff in both the high-frequency 234 lEEE TRANSACTIONS ON AUDIO AND ELECTROACOUSTICS VOL. AU-18, NO. 3 SEPTEMBER 1970 V1 A v 2 + v 3 0 C 1 ; O v4 Fig. 1. The analyzed circuit. A is the preamplifier which includes the transfer function of the signal source. 
B is the feedback path around the power amplifier C. Fig. 2. The preamplifier f equency response asymptotes used in the analysis. Asymptote o corresponds to a flot response and asymptote b corresponds to o cose where the high-frequency tone control has been turned to maximum. and the low-frequency ranges. This characteristic is shown in Fig. 2 with asymptote a. Case b. The transfer function in the low-frequency range is similar to Case a. A $6 dB/octave emphasis is applied in the high-frequency range starting at an angular frequency w4 and resulting in asymptote b in Fig. 2 . These two cases are, of course, arbitrary, but are con-. sidered as being representative: the first for the flat response case, and the second, for the worst case where the high-frequency tone control has been turned to maximum. The transfer functions of the preamplifier are then for Case a",
"title": ""
},
{
"docid": "aac360802c767fb9594e033341883578",
"text": "The protection mechanisms of computer systems control the access to objects, especially information objects. The range of responsibilities of these mechanisms includes at one extreme completely isolating executing programs from each other, and at the other extreme permitting complete cooperation and shared access among executing programs. Within this range one can identify at least seven levels at which protection mechanisms can be conceived as being required, each level being more difficult than its predecessor to implement:\n 1. No sharing at all (complete isolation).\n 2. Sharing copies of programs or data files.\n 3. Sharing originals of programs or data files.\n 4. Sharing programming systems or subsystems.\n 5. Permitting the cooperation of mutually suspicious subsystems---e.g., as with debugging or proprietary subsystems.\n 6. Providing \"memoryless\" subsystems---i.e., systems which, having performed their tasks, are guaranteed to have kept no secret record of the task performed (an income-tax computing service, for example, must be allowed to keep billing information on its use by customers but not to store information secretly on customers' incomes).\n 7. Providing \"certified\" subsystems---i.e., those whose correctness has been completely validated and is guaranteed a priori.",
"title": ""
},
{
"docid": "41cc4f54df2533897cc678db9818902b",
"text": "Financial statement fraud has reached the epidemic proportion globally. Recently, financial statement fraud has dominated the corporate news causing debacle at number of companies worldwide. In the wake of failure of many organisations, there is a dire need of prevention and detection of financial statement fraud. Prevention of financial statement fraud is a measure to stop its occurrence initially whereas detection means the identification of such fraud as soon as possible. Fraud detection is required only if prevention has failed. Therefore, a continuous fraud detection mechanism should be in place because management may be unaware about the failure of prevention mechanism. In this paper we propose a data mining framework for prevention and detection of financial statement fraud.",
"title": ""
},
{
"docid": "fec2b6b7cdef1ddf88dffd674fe7111a",
"text": "This paper introduces Dex, a reinforcement learning environment toolkit specialized for training and evaluation of continual learning methods as well as general reinforcement learning problems. We also present the novel continual learning method of incremental learning, where a challenging environment is solved using optimal weight initialization learned from first solving a similar easier environment. We show that incremental learning can produce vastly superior results than standard methods by providing a strong baseline method across ten Dex environments. We finally develop a saliency method for qualitative analysis of reinforcement learning, which shows the impact incremental learning has on network attention.",
"title": ""
},
{
"docid": "10f1e89998a7e463f2996270099bebdc",
"text": "This paper proposes an effective algorithm for recognizing objects and accurately estimating their 6DOF pose in scenes acquired by a RGB-D sensor. The proposed method is based on a combination of different recognition pipelines, each exploiting the data in a diverse manner and generating object hypotheses that are ultimately fused together in an Hypothesis Verification stage that globally enforces geometrical consistency between model hypotheses and the scene. Such a scheme boosts the overall recognition performance as it enhances the strength of the different recognition pipelines while diminishing the impact of their specific weaknesses. The proposed method outperforms the state-of-the-art on two challenging benchmark datasets for object recognition comprising 35 object models and, respectively, 176 and 353 scenes.",
"title": ""
},
{
"docid": "a6b6fd9beb4e8d640e7afdd6086a2552",
"text": "Automatic and accurate estimation of disease severity is essential for food security, disease management, and yield loss prediction. Deep learning, the latest breakthrough in computer vision, is promising for fine-grained disease severity classification, as the method avoids the labor-intensive feature engineering and threshold-based segmentation. Using the apple black rot images in the PlantVillage dataset, which are further annotated by botanists with four severity stages as ground truth, a series of deep convolutional neural networks are trained to diagnose the severity of the disease. The performances of shallow networks trained from scratch and deep models fine-tuned by transfer learning are evaluated systemically in this paper. The best model is the deep VGG16 model trained with transfer learning, which yields an overall accuracy of 90.4% on the hold-out test set. The proposed deep learning model may have great potential in disease control for modern agriculture.",
"title": ""
},
{
"docid": "be1bfd488f90deca658937dd20ee0915",
"text": "This research examined the effects of hands-free cell phone conversations on simulated driving. The authors found that these conversations impaired driver's reactions to vehicles braking in front of them. The authors assessed whether this impairment could be attributed to a withdrawal of attention from the visual scene, yielding a form of inattention blindness. Cell phone conversations impaired explicit recognition memory for roadside billboards. Eye-tracking data indicated that this was due to reduced attention to foveal information. This interpretation was bolstered by data showing that cell phone conversations impaired implicit perceptual memory for items presented at fixation. The data suggest that the impairment of driving performance produced by cell phone conversations is mediated, at least in part, by reduced attention to visual inputs.",
"title": ""
},
{
"docid": "60f9aaa5e3814a9f41218255a17eab1d",
"text": "The constant demand to scale down transistors and improve device performance has led to material as well as process changes in the formation of IC interconnect. Traditionally, aluminum has been used to form the IC interconnects. The process involved subtractive etching of blanket aluminum as defined by the patterned photo resist. However, the scaling and performance demands have led to transition from Aluminum to Copper interconnects. The primary motivation behind the introduction of copper for forming interconnects is the advantages that copper offers over Aluminum. The table 1 below gives a comparison between Aluminum and Copper properties.",
"title": ""
}
] |
scidocsrr
|
86cf5b7c33c66b58bd9240d95967ff13
|
Semi-supervised Learning with Encoder-Decoder Recurrent Neural Networks: Experiments with Motion Capture Sequences
|
[
{
"docid": "711daac04e27d0a413c99dd20f6f82e1",
"text": "The gesture recognition using motion capture data and depth sensors has recently drawn more attention in vision recognition. Currently most systems only classify dataset with a couple of dozens different actions. Moreover, feature extraction from the data is often computational complex. In this paper, we propose a novel system to recognize the actions from skeleton data with simple, but effective, features using deep neural networks. Features are extracted for each frame based on the relative positions of joints (PO), temporal differences (TD), and normalized trajectories of motion (NT). Given these features a hybrid multi-layer perceptron is trained, which simultaneously classifies and reconstructs input data. We use deep autoencoder to visualize learnt features. The experiments show that deep neural networks can capture more discriminative information than, for instance, principal component analysis can. We test our system on a public database with 65 classes and more than 2,000 motion sequences. We obtain an accuracy above 95% which is, to our knowledge, the state of the art result for such a large dataset.",
"title": ""
}
] |
[
{
"docid": "9e05a37d781d8a3ee0ecca27510f1ae9",
"text": "Context: Evidence-based software engineering (EBSE) provides a process for solving practical problems based on a rigorous research approach. The primary focus so far was on mapping and aggregating evidence through systematic reviews. Objectives: We extend existing work on evidence-based software engineering by using the EBSE process in an industrial case to help an organization to improve its automotive testing process. With this we contribute in (1) providing experiences on using evidence based processes to analyze a real world automotive test process; and (2) provide evidence of challenges and related solutions for automotive software testing processes. Methods: In this study we perform an in-depth investigation of an automotive test process using an extended EBSE process including case study research (gain an understanding of practical questions to define a research scope), systematic literature review (identify solutions through systematic literature), and value stream mapping (map out an improved automotive test process based on the current situation and improvement suggestions identified). These are followed by reflections on the EBSE process used. Results: In the first step of the EBSE process we identified 10 challenge areas with a total of 26 individual challenges. For 15 out of those 26 challenges our domain specific systematic literature review identified solutions. Based on the input from the challenges and the solutions, we created a value stream map of the current and future process. Conclusions: Overall, we found that the evidence-based process as presented in this study helps in technology transfer of research results to industry, but at the same time some challenges lie ahead (e.g. scoping systematic reviews to focus more on concrete industry problems, and understanding strategies of conducting EBSE with respect to effort and quality of the evidence).",
"title": ""
},
{
"docid": "80394c124d823e7639af06fd33ef99c1",
"text": "We investigate whether income inequality affects subsequent growth in a cross-country sample for 1965-90, using the models of Barro (1997), Bleaney and Nishiyama (2002) and Sachs and Warner (1997), with negative results. We then investigate the evolution of income inequality over the same period and its correlation with growth. The dominating feature is inequality convergence across countries. This convergence has been significantly faster amongst developed countries. Growth does not appear to influence the evolution of inequality over time. Outline",
"title": ""
},
{
"docid": "0ba907b893e3017dd55a67ae7c43b276",
"text": "Android applications (apps for short) can send out users' sensitive information against users' intention. Based on the stats from Genome and Mobile-Sandboxing, 55.8% and 59.7% Android malware families feature privacy leakage. Prior approaches to detecting privacy leakage on smartphones primarily focused on the discovery of sensitive information flows. However, Android apps also send out users' sensitive information for legitimate functions. Due to the fuzzy nature of the privacy leakage detection problem, we formulate it as a justification problem, which aims to justify if a sensitive information transmission in an app serves any purpose, either for intended functions of the app itself or for other related functions. This formulation makes the problem more distinct and objective, and therefore more feasible to solve than before. We propose DroidJust, an automated approach to justifying an app's sensitive information transmission by bridging the gap between the sensitive information transmission and application functions. We also implement a prototype of DroidJust and evaluate it with over 6000 Google Play apps and over 300 known malware collected from VirusTotal. Our experiments show that our tool can effectively and efficiently analyze Android apps w.r.t their sensitive information flows and functionalities, and can greatly assist in detecting privacy leakage.",
"title": ""
},
{
"docid": "da74e402f4542b6cbfb27f04c7640eb4",
"text": "Hand-built verb clusters such as the widely used Levin classes (Levin, 1993) have proved useful, but have limited coverage. Verb classes automatically induced from corpus data such as those from VerbKB (Wijaya, 2016), on the other hand, can give clusters with much larger coverage, and can be adapted to specific corpora such as Twitter. We present a method for clustering the outputs of VerbKB: verbs with their multiple argument types, e.g.“marry(person, person)”, “feel(person, emotion).” We make use of a novel lowdimensional embedding of verbs and their arguments to produce high quality clusters in which the same verb can be in different clusters depending on its argument type. The resulting verb clusters do a better job than hand-built clusters of predicting sarcasm, sentiment, and locus of control in tweets.",
"title": ""
},
{
"docid": "d79d6dd8267c66ad98f33bd54ff68693",
"text": "We propose a multigrid extension of convolutional neural networks (CNNs). Rather than manipulating representations living on a single spatial grid, our network layers operate across scale space, on a pyramid of grids. They consume multigrid inputs and produce multigrid outputs, convolutional filters themselves have both within-scale and cross-scale extent. This aspect is distinct from simple multiscale designs, which only process the input at different scales. Viewed in terms of information flow, a multigrid network passes messages across a spatial pyramid. As a consequence, receptive field size grows exponentially with depth, facilitating rapid integration of context. Most critically, multigrid structure enables networks to learn internal attention and dynamic routing mechanisms, and use them to accomplish tasks on which modern CNNs fail. Experiments demonstrate wide-ranging performance advantages of multigrid. On CIFAR and ImageNet classification tasks, flipping from a single grid to multigrid within the standard CNN paradigm improves accuracy, while being compute and parameter efficient. Multigrid is independent of other architectural choices, we show synergy in combination with residual connections. Multigrid yields dramatic improvement on a synthetic semantic segmentation dataset. Most strikingly, relatively shallow multigrid networks can learn to directly perform spatial transformation tasks, where, in contrast, current CNNs fail. Together, our results suggest that continuous evolution of features on a multigrid pyramid is a more powerful alternative to existing CNN designs on a flat grid.",
"title": ""
},
{
"docid": "8aff34c5a9f80fab499d4014cafba278",
"text": "Social influence is the behavioral change of a person because of the perceived relationship with other people, organizations and society in general. Social influence has been a widely accepted phenomenon in social networks for decades. Many applications have been built based around the implicit notation of social influence between people, such as marketing, advertisement and recommendations. With the exponential growth of online social network services such as Facebook and Twitter, social influence can for the first time be measured over a large population. In this tutorial, we survey the research on social influence analysis with a focus on the computational aspects. First, we introduce how to verify the existence of social influence in various social networks. Second, we present computational models for quantifying social influence. Third, we describe how social influence can help real applications. In particular, we will focus on opinion leader finding and influence maximization for viral marketing. Finally, we apply the selected algorithms of social influence analysis on different social network data, such as twitter, arnetminer data, weibo, and slashdot forum.",
"title": ""
},
{
"docid": "ecb146ae27419d9ca1911dc4f13214c1",
"text": "In this paper, a simple mix integer programming for distribution center location is proposed. Based on this simple model, we introduce two important factors, transport mode and carbon emission, and extend it a model to describe the location problem for green supply chain. Sequently, IBM Watson implosion technologh (WIT) tool was introduced to describe them and solve them. By changing the price of crude oil, we illustrate the its impact on distribution center locations and transportation mode option for green supply chain. From the cases studies, we have known that, as the crude oil price increasing, the profits of the whole supply chain will decrease, carbon emission will also decrease to some degree, while the number of opened distribution center will increase.",
"title": ""
},
{
"docid": "427d0d445985ac4eb31c7adbaf6f1e22",
"text": "In this work, we jointly address the problem of text detection and recognition in natural scene images based on convolutional recurrent neural networks. We propose a unified network that simultaneously localizes and recognizes text with a single forward pass, avoiding intermediate processes, such as image cropping, feature re-calculation, word separation, and character grouping. In contrast to existing approaches that consider text detection and recognition as two distinct tasks and tackle them one by one, the proposed framework settles these two tasks concurrently. The whole framework can be trained end-to-end, requiring only images, ground-truth bounding boxes and text labels. The convolutional features are calculated only once and shared by both detection and recognition, which saves processing time. Through multi-task training, the learned features become more informative and improves the overall performance. Our proposed method has achieved competitive performance on several benchmark datasets.",
"title": ""
},
{
"docid": "a1eff890cfc0d1334ebea1d90d152ae5",
"text": "The purpose of this research was to develop understanding about how vendor firms make choice about agile methodologies in software projects and their fit. Two analytical frameworks were developed from extant literature and the findings were compared with real world decisions. Framework 1 showed that the choice of XP for one project was not supported by the guidelines given by the framework. The choices of SCRUM for other two projects, were partially supported. Analysis using the framework 2 showed that except one XP project, all others had sufficient project management support, limited scope for adaptability and had prominence for rules.",
"title": ""
},
{
"docid": "8c26ab9cb2b5bc30c29b722ab7efe135",
"text": "Conscious \"free will\" is problematic because (1) brain mechanisms causing consciousness are unknown, (2) measurable brain activity correlating with conscious perception apparently occurs too late for real-time conscious response, consciousness thus being considered \"epiphenomenal illusion,\" and (3) determinism, i.e., our actions and the world around us seem algorithmic and inevitable. The Penrose-Hameroff theory of \"orchestrated objective reduction (Orch OR)\" identifies discrete conscious moments with quantum computations in microtubules inside brain neurons, e.g., 40/s in concert with gamma synchrony EEG. Microtubules organize neuronal interiors and regulate synapses. In Orch OR, microtubule quantum computations occur in integration phases in dendrites and cell bodies of integrate-and-fire brain neurons connected and synchronized by gap junctions, allowing entanglement of microtubules among many neurons. Quantum computations in entangled microtubules terminate by Penrose \"objective reduction (OR),\" a proposal for quantum state reduction and conscious moments linked to fundamental spacetime geometry. Each OR reduction selects microtubule states which can trigger axonal firings, and control behavior. The quantum computations are \"orchestrated\" by synaptic inputs and memory (thus \"Orch OR\"). If correct, Orch OR can account for conscious causal agency, resolving problem 1. Regarding problem 2, Orch OR can cause temporal non-locality, sending quantum information backward in classical time, enabling conscious control of behavior. Three lines of evidence for brain backward time effects are presented. Regarding problem 3, Penrose OR (and Orch OR) invokes non-computable influences from information embedded in spacetime geometry, potentially avoiding algorithmic determinism. In summary, Orch OR can account for real-time conscious causal agency, avoiding the need for consciousness to be seen as epiphenomenal illusion. Orch OR can rescue conscious free will.",
"title": ""
},
{
"docid": "714c06da1a728663afd8dbb1cd2d472d",
"text": "This paper proposes hybrid semiMarkov conditional random fields (SCRFs) for neural sequence labeling in natural language processing. Based on conventional conditional random fields (CRFs), SCRFs have been designed for the tasks of assigning labels to segments by extracting features from and describing transitions between segments instead of words. In this paper, we improve the existing SCRF methods by employing word-level and segment-level information simultaneously. First, word-level labels are utilized to derive the segment scores in SCRFs. Second, a CRF output layer and an SCRF output layer are integrated into an unified neural network and trained jointly. Experimental results on CoNLL 2003 named entity recognition (NER) shared task show that our model achieves state-of-the-art performance when no external knowledge is used.",
"title": ""
},
{
"docid": "41131af8c79ddfde932ecb5cff0c274d",
"text": "We investigated whether experts can objectively focus on feature information in fingerprints without being misled by extraneous information, such as context. We took fingerprints that have previously been examined and assessed by latent print experts to make positive identification of suspects. Then we presented these same fingerprints again, to the same experts, but gave a context that suggested that they were a no-match, and hence the suspects could not be identified. Within this new context, most of the fingerprint experts made different judgements, thus contradicting their own previous identification decisions. Cognitive aspects involved in biometric identification can explain why experts are vulnerable to make erroneous identifications.",
"title": ""
},
{
"docid": "4cb94c63d5c32a15977ed08553f8a80c",
"text": "In the machine learning community it is generally believed that graph Laplacians corresponding to a finite sample of data points converge to a continuous Laplace operator if the sample size increases. Even though this assertion serves as a justification for many Laplacianbased algorithms, so far only some aspects of this claim have been rigorously proved. In this paper we close this gap by establishing the strong pointwise consistency of a family of graph Laplacians with datadependent weights to some weighted Laplace operator. Our investigation also includes the important case where the data lies on a submanifold of R.",
"title": ""
},
{
"docid": "18b744209b3918d6636a87feed2597c6",
"text": "Robot learning is critically enabled by the availability of appropriate state representations. We propose a robotics-specific approach to learning such state representations. As robots accomplish tasks by interacting with the physical world, we can facilitate representation learning by considering the structure imposed by physics; this structure is reflected in the changes that occur in the world and in the way a robot can effect them. By exploiting this structure in learning, robots can obtain state representations consistent with the aspects of physics relevant to the learning task. We name this prior knowledge about the structure of interactions with the physical world robotic priors. We identify five robotic priors and explain how they can be used to learn pertinent state representations. We demonstrate the effectiveness of this approach in simulated and real robotic experiments with distracting moving objects. We show that our method extracts task-relevant state representations from high-dimensional observations, even in the presence of taskirrelevant distractions. We also show that the state representations learned by our method greatly improve generalization in reinforcement learning.",
"title": ""
},
{
"docid": "41cfa26891e28a76c1d4508ab7b60dfb",
"text": "This paper analyses the digital simulation of a buck converter to emulate the photovoltaic (PV) system with focus on fuzzy logic control of buck converter. A PV emulator is a DC-DC converter (buck converter in the present case) having same electrical characteristics as that of a PV panel. The emulator helps in the real analysis of PV system in an environment where using actual PV systems can produce inconsistent results due to variation in weather conditions. The paper describes the application of fuzzy algorithms to the control of dynamic processes. The complete system is modelled in MATLAB® Simulink SimPowerSystem software package. The results obtained from the simulation studies are presented and the steady state and dynamic stability of the PV emulator system is discussed.",
"title": ""
},
{
"docid": "e53b56da0d9221528a8020bf422522ce",
"text": "This paper proposed a design of a modern FPGA-based Traffic Light Control (TLC) System to manage the road traffic. The approach is by controlling the access to areas shared among multiple intersections and allocating effective time between various users; during peak and off-peak hours. The implementation is based on real location in a city in Malaysia where the existing traffic light controller is a basic fixed-time method. This method is inefficient and almost always leads to traffic congestion during peak hours while drivers are given unnecessary waiting time during off-peak hours. The proposed design is a more universal and intelligent approach to the situation and has been implemented using FPGA. The system is implemented on ALTERA FLEX10K chip and simulation results are proven to be successful. Theoretically the waiting time for drivers during off-peak hours has been reduced further, therefore making the system better than the one being used at the moment. Future improvements include addition of other functions to the proposed design to suit various traffic conditions at different locations.",
"title": ""
},
{
"docid": "1f88243ef61c52941208a9e92eb1a420",
"text": "The maximum operating distance and the optimum performance (good coupling, lower/moderate power consumption) of even the well-designed NFC-reader-antenna in an RFID system depend largely on the good matching circuit. With the aforementioned objective, the paper presents here a modeling and computer aided design and then parameter extraction technique of a NFC-Reader Antenna. The 3D geometry model of the antenna is then simulated in frequency domain using Comsol multiphysics tool in order to extract the Reader-Antenna parameters. The extracted parameters at 13.56 MHz frequency are required for further RFsimulation, based on which matching circuit components (damping resistance and series & parallel capacitances etc.) of the Reader-Antenna at above frequency have been selected to achieve the best performance of the antenna.",
"title": ""
},
{
"docid": "a5be27d89874b1dfcad85206ad7403ba",
"text": "The upcoming Fifth Generation (5G) networks can provide ultra-reliable ultra-low latency vehicle-to-everything for vehicular ad hoc networks (VANET) to promote road safety, traffic management, information dissemination, and automatic driving for drivers and passengers. However, 5G-VANET also attracts tremendous security and privacy concerns. Although several pseudonymous authentication schemes have been proposed for VANET, the expensive cost for their initial authentication may cause serious denial of service (DoS) attacks, which furthermore enables to do great harm to real space via VANET. Motivated by this, a puzzle-based co-authentication (PCA) scheme is proposed here. In the PCA scheme, the Hash puzzle is carefully designed to mitigate DoS attacks against the pseudonymous authentication process, which is facilitated through collaborative verification. The effectiveness and efficiency of the proposed scheme is approved by performance analysis based on theory and experimental results.",
"title": ""
},
{
"docid": "f65c027ab5baa981667955cc300d2f34",
"text": "In-band full-duplex (FD) wireless communication, i.e. simultaneous transmission and reception at the same frequency, in the same channel, promises up to 2x spectral efficiency, along with advantages in higher network layers [1]. the main challenge is dealing with strong in-band leakage from the transmitter to the receiver (i.e. self-interference (SI)), as TX powers are typically >100dB stronger than the weakest signal to be received, necessitating TX-RX isolation and SI cancellation. Performing this SI-cancellation solely in the digital domain, if at all possible, would require extremely clean (low-EVM) transmission and a huge dynamic range in the RX and ADC, which is currently not feasible [2]. Cancelling SI entirely in analog is not feasible either, since the SI contains delayed TX components reflected by the environment. Cancelling these requires impractically large amounts of tunable analog delay. Hence, FD-solutions proposed thus far combine SI-rejection at RF, analog BB, digital BB and cross-domain.",
"title": ""
},
{
"docid": "3bca1dd8dc1326693f5ebbe0eaf10183",
"text": "This paper presents a novel multi-way multi-stage power divider design method based on the theory of small reflections. Firstly, the application of the theory of small reflections is extended from transmission line to microwave network. Secondly, an explicit closed-form analytical formula of the input reflection coefficient, which consists of the scattering parameters of power divider elements and the lengths of interconnection lines between each element, is derived. Thirdly, the proposed formula is applied to determine the lengths of interconnection lines. A prototype of a 16-way 4-stage power divider working at 4 GHz is designed and fabricated. Both the simulation and measurement results demonstrate the validity of the proposed method.",
"title": ""
}
] |
scidocsrr
|
d39a5d5448ac60f24deca84a42b4f03c
|
Multi-objective Cross-Project Defect Prediction
|
[
{
"docid": "3b7ac492add26938636ae694ebb14b65",
"text": "This paper presents the results of a study conducted at the University of Maryland in which we experimentally investigated the suite of Object-Oriented (OO) design metrics introduced by [Chidamber&Kemerer, 1994]. In order to do this, we assessed these metrics as predictors of fault-prone classes. This study is complementary to [Li&Henry, 1993] where the same suite of metrics had been used to assess frequencies of maintenance changes to clas es. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on experimental results, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber&Kemerer’s OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle. We also showed that they are, on our data set, better predictors than “traditional” code metrics, which can only be collected at a later phase of the software development processes. Key-words: Object-Oriented Design Metrics; Error Prediction Model; Object-Oriented Software Development; C++ Programming Language. * V. Basili and W. Melo are with the University of Maryland, Institute for Advanced Computer Studies and Computer Science Dept., A. V. Williams Bldg., College Park, MD 20742 USA. {basili | melo}@cs.umd.edu L. Briand is with the CRIM, 1801 McGill College Av., Montréal (Québec), H3A 2N4, Canada. lbriand@crim.ca Technical Report, Univ. of Maryland, Dep. of Computer Science, College Park, MD, 20742 USA. April 1995. CS-TR-3443 2 UMIACS-TR-95-40 1 . Introduction",
"title": ""
},
{
"docid": "a4790fdc5f6469b45fa4a22a871f3501",
"text": "NSGA ( [5]) is a popular non-domination based genetic algorithm for multiobjective optimization. It is a very effective algorithm but has been generally criticized for its computational complexity, lack of elitism and for choosing the optimal parameter value for sharing parameter σshare. A modified version, NSGAII ( [3]) was developed, which has a better sorting algorithm , incorporates elitism and no sharing parameter needs to be chosen a priori. NSGA-II is discussed in detail in this.",
"title": ""
},
{
"docid": "dc66c80a5031c203c41c7b2908c941a3",
"text": "There has been a great deal of interest in defect prediction: using prediction models trained on historical data to help focus quality-control resources in ongoing development. Since most new projects don't have historical data, there is interest in cross-project prediction: using data from one project to predict defects in another. Sadly, results in this area have largely been disheartening. Most experiments in cross-project defect prediction report poor performance, using the standard measures of precision, recall and F-score. We argue that these IR-based measures, while broadly applicable, are not as well suited for the quality-control settings in which defect prediction models are used. Specifically, these measures are taken at specific threshold settings (typically thresholds of the predicted probability of defectiveness returned by a logistic regression model). However, in practice, software quality control processes choose from a range of time-and-cost vs quality tradeoffs: how many files shall we test? how many shall we inspect? Thus, we argue that measures based on a variety of tradeoffs, viz., 5%, 10% or 20% of files tested/inspected would be more suitable. We study cross-project defect prediction from this perspective. We find that cross-project prediction performance is no worse than within-project performance, and substantially better than random prediction!",
"title": ""
}
] |
[
{
"docid": "44bb8c5202edadc2f14fa27c0fbb9705",
"text": "In this paper, a new Near Field Communication (NFC) antenna solution that can be used for portable devices with metal back cover is proposed. In particular, there are two holes on metal back cover, a slit between the two holes, and antenna coil located behind the metal cover. With such an arrangement, the shielding effect of the metal cover can be totally eliminated. Simulated and measured results of the proposed antenna are presented.",
"title": ""
},
{
"docid": "45390290974f347d559cd7e28c33c993",
"text": "Text ambiguity is one of the most interesting phenomenon in human communication and a difficult problem in Natural Language Processing (NLP). Identification of text ambiguities is an important task for evaluating the quality of text and uncovering its vulnerable points. There exist several types of ambiguity. In the present work we review and compare different approaches to ambiguity identification task. We also propose our own approach to this problem. Moreover, we present the prototype of a tool for ambiguity identification and measurement in natural language text. The tool is intended to support the process of writing high quality documents.",
"title": ""
},
{
"docid": "b214d983b0f262fa43bb3a885eed7506",
"text": "The principal reason for providing periodontal therapy is to achieve periodontal health and retain the dentition. Patients with a history of periodontitis represent a unique group of individuals who previously succumbed to a bacterial challenge. Therefore, it is important to address the management and survival rate of implants in these patients. Systematic reviews often are cited in this article, because they provide a high level of evidence and facilitate reviewing a vast amount of information in a succinct manner.",
"title": ""
},
{
"docid": "696f4ba578134d699658b6c303adb4f6",
"text": "This paper is concerned with the event-triggered finite-time control scheme for unicycle robots. First, Lagrange method is used to model the unicycle robot at the roll and pitch axis. Second, on the basis of the established model, an event-triggered finite-time control scheme is proposed to balance the unicycle robot in finite time and to determine whether or not control input should be updated. The control input should be only updated when the triggering condition is violated. As a result, the switching energy of actor can be saved. Third, a stability criterion on unicycle robots with the proposed event-trigged finite-time control scheme is derived by using a Lyapunov method. Finally, the effectiveness of the event-triggered finite-time control scheme is illustrated for unicycle robots.",
"title": ""
},
{
"docid": "49942573c60fa910369b81c44447a9b1",
"text": "Generic generation and manipulation of text is challenging and has limited success compared to recent deep generative modeling in visual domain. This paper aims at generating plausible text sentences, whose attributes are controlled by learning disentangled latent representations with designated semantics. We propose a new neural generative model which combines variational auto-encoders (VAEs) and holistic attribute discriminators for effective imposition of semantic structures. The model can alternatively be seen as enhancing VAEs with the wake-sleep algorithm for leveraging fake samples as extra training data. With differentiable approximation to discrete text samples, explicit constraints on independent attribute controls, and efficient collaborative learning of generator and discriminators, our model learns interpretable representations from even only word annotations, and produces short sentences with desired attributes of sentiment and tenses. Quantitative experiments using trained classifiers as evaluators validate the accuracy of sentence and attribute generation.",
"title": ""
},
{
"docid": "1e31afb6d28b0489e67bb63d4dd60204",
"text": "An educational use of Pepper, a personal robot that was developed by SoftBank Robotics Corp. and Aldebaran Robotics SAS, is described. Applying the two concepts of care-receiving robot (CRR) and total physical response (TPR) into the design of an educational application using Pepper, we offer a scenario in which children learn together with Pepper at their home environments from a human teacher who gives a lesson from a remote classroom. This paper is a case report that explains the developmental process of the application that contains three educational programs that children can select in interacting with Pepper. Feedbacks and knowledge obtained from test trials are also described.",
"title": ""
},
{
"docid": "7165a1158efb3d6c9298ffef13c6f0e8",
"text": "Virtualization of operating systems provides a common way to run different services in the cloud. Recently, the lightweight virtualization technologies claim to offer superior performance. In this paper, we present a detailed performance comparison of traditional hypervisor based virtualization and new lightweight solutions. In our measurements, we use several benchmarks tools in order to understand the strengths, weaknesses, and anomalies introduced by these different platforms in terms of processing, storage, memory and network. Our results show that containers achieve generally better performance when compared with traditional virtual machines and other recent solutions. Albeit containers offer clearly more dense deployment of virtual machines, the performance difference with other technologies is in many cases relatively small.",
"title": ""
},
{
"docid": "21ad29105c4b6772b05156afd33ac145",
"text": "High resolution Digital Surface Models (DSMs) produced from airborne laser-scanning or stereo satellite images provide a very useful source of information for automated 3D building reconstruction. In this paper an investigation is reported about extraction of 3D building models from high resolution DSMs and orthorectified images produced from Worldview-2 stereo satellite imagery. The focus is on the generation of 3D models of parametric building roofs, which is the basis for creating Level Of Detail 2 (LOD2) according to the CityGML standard. In particular the building blocks containing several connected buildings with tilted roofs are investigated and the potentials and limitations of the modeling approach are discussed. The edge information extracted from orthorectified image has been employed as additional source of information in 3D reconstruction algorithm. A model driven approach based on the analysis of the 3D points of DSMs in a 2D projection plane is proposed. Accordingly, a building block is divided into smaller parts according to the direction and number of existing ridge lines for parametric building reconstruction. The 3D model is derived for each building part, and finally, a complete parametric model is formed by merging the 3D models of the individual building parts and adjusting the nodes after the merging step. For the remaining building parts that do not contain ridge lines, a prismatic model using polygon approximation of the corresponding boundary pixels is derived and merged to the parametric models to shape the final model of the building. A qualitative and quantitative assessment of the proposed method for the automatic reconstruction of buildings with parametric roofs is then provided by comparing the final model with the existing surface model as well as some field measurements. Remote Sens. 2013, 5 1682",
"title": ""
},
{
"docid": "3f1d69e8a2fdfc69e451679255782d70",
"text": "This tutorial gives a broad view of modern approaches for scaling up machine learning and data mining methods on parallel/distributed platforms. Demand for scaling up machine learning is task-specific: for some tasks it is driven by the enormous dataset sizes, for others by model complexity or by the requirement for real-time prediction. Selecting a task-appropriate parallelization platform and algorithm requires understanding their benefits, trade-offs and constraints. This tutorial focuses on providing an integrated overview of state-of-the-art platforms and algorithm choices. These span a range of hardware options (from FPGAs and GPUs to multi-core systems and commodity clusters), programming frameworks (including CUDA, MPI, MapReduce, and DryadLINQ), and learning settings (e.g., semi-supervised and online learning). The tutorial is example-driven, covering a number of popular algorithms (e.g., boosted trees, spectral clustering, belief propagation) and diverse applications (e.g., recommender systems and object recognition in vision).\n The tutorial is based on (but not limited to) the material from our upcoming Cambridge U. Press edited book which is currently in production.\n Visit the tutorial website at http://hunch.net/~large_scale_survey/",
"title": ""
},
{
"docid": "de8b6530a1ba405dfe0c5ed1c389a9e3",
"text": "This paper aims to develop an innovative neural network approach to achieve better stock market predictions. Data were obtained from the live stock market for real-time and off-line analysis and results of visualizations and analytics to demonstrate Internet of Multimedia of Things for stock analysis. To study the influence of market characteristics on stock prices, traditional neural network algorithms may incorrectly predict the stock market, since the initial weight of the random selection problem can be easily prone to incorrect predictions. Based on the development of word vector in deep learning, we demonstrate the concept of “stock vector.” The input is no longer a single index or single stock index, but multi-stock high-dimensional historical data. We propose the deep long short-term memory neural network (LSTM) with embedded layer and the long short-term memory neural network with automatic encoder to predict the stock market. In these two models, we use the embedded layer and the automatic encoder, respectively, to vectorize the data, in a bid to forecast the stock via long short-term memory neural network. The experimental results show that the deep LSTM with embedded layer is better. Specifically, the accuracy of two models is 57.2 and 56.9%, respectively, for the Shanghai A-shares composite index. Furthermore, they are 52.4 and 52.5%, respectively, for individual stocks. We demonstrate research contributions in IMMT for neural network-based financial analysis.",
"title": ""
},
{
"docid": "e18d29caa8a161865a590fbd909b80d6",
"text": "Recently, the U.S National Security Agency has published the specifications of two families of lightweight block ciphers, SIMON and SPECK, on ePrint [2]. The ciphers are developed with optimization towards both hardware and software in mind. While the specification paper discusses design requirements and performance of the presented lightweight ciphers thoroughly, no security assessment is given. This paper is a move towards filling that cryptanalysis gap for the SIMON family of ciphers. We present a series of observations on the presented construction that, in some cases, yield attacks, while in other cases may provide basis of further analysis by the cryptographic community. Specifically, we obtain attacks using classicalas well as truncated differentials. In the former case, we show how the smallest version of SIMON, Simon32/64, exhibits a strong differential effect.",
"title": ""
},
{
"docid": "96010bf04c08ace7932fb5c48b2f8798",
"text": "Spatio-temporal databases aim to support extensions to existing models of Spatial Information Systems (SIS) to include time in order to better describe our dynamic environment. Although interest into this area has increased in the past decade, a number of important issues remain to be investigated. With the advances made in temporal database research, we can expect a more uni®ed approach towards aspatial temporal data in SIS and a wider discussion on spatio-temporal data models. This paper provides an overview of previous achievements within the ®eld and highlights areas currently receiving or requiring further investigation.",
"title": ""
},
{
"docid": "c8da5151cc8dd563965c4ee60a6d9002",
"text": "The aim of this paper is to analyze the robustness of the electrostatic separation process control. The objective was to reduce variation in the process outcome by finding operating conditions (high-voltage level, roll speed), under which uncontrollable variation in the noise factors (granule size, composition of the material to be separated) has minimal impact on the quantity (and the quality) of the recovered products. The experiments were carried out on a laboratory roll-type electrostatic separator, provided with a corona electrode and a tubular electrode, both connected to a dc high-voltage supply. The samples of processed material were prepared from genuine chopped electric wire wastes (granule size >1 mm and <5 mm) containing various proportions of copper and PVC. The design and noise factors were combined into one single experimental design, based on Taguchi's approach, and a regression model of the process was fitted. The impact of the noise factors could be estimated, as well as the interactions between the design and noise factors. The conditions of industry application of Taguchi's methodology are discussed, as well as the possibility of adapting it to other electrostatic processes.",
"title": ""
},
{
"docid": "d588258de60f0df2e3675c88bef52d02",
"text": "Review spamming is quite common on many online shopping platforms like Amazon. Previous attempts for fake review and spammer detection use features of reviewer behavior, rating, and review content. However, to the best of our knowledge, there is no work capable of detecting fake reviews and review spammers at the same time. In this paper, we propose an algorithm to achieve the two goals simultaneously. By defining features to describe each review and reviewer, a Review Factor Graph model is proposed to incorporate all the features and to leverage belief propagation between reviews and reviewers. Experimental results show that our algorithm outperforms all of the other baseline methods significantly with respect to both efficiency and accuracy.",
"title": ""
},
{
"docid": "4cf670f937921d4c5eec7e477c126eb9",
"text": "This paper presents particle swarm optimization based on learning from winner particle. (PSO-WS). Instead of considering gbest and pbest particle for position update, each particle considers its distance from immediate winner to update its position. Only winner particle follow general velocity and position update equation. If this strategy performs well for the particle, then that particle updates its position based on this strategy, otherwise its position is replaced by its immediate winner particle’s position. Dimension dependant swarm size is used for better exploration. Proposed method is compared with CSO and CCPSO2, which are available to solve large scale optimization problems. Statistical results show that proposed method performs well for separable as well as non separable problems.",
"title": ""
},
{
"docid": "4d6e9bc0a8c55e65d070d1776e781173",
"text": "As electronic device feature sizes scale-down, the power consumed due to onchip communications as compared to computations will increase dramatically; likewise, the available bandwidth per computational operation will continue to decrease. Integrated photonics can offer savings in power and potential increase in bandwidth for onchip networks. Classical diffraction-limited photonics currently utilized in photonic integrated circuits (PIC) is characterized by bulky and inefficient devices compared to their electronic counterparts due to weak light matter interactions (LMI). Performance critical for the PIC is electro-optic modulators (EOM), whose performances depend inherently on enhancing LMIs. Current EOMs based on diffraction-limited optical modes often deploy ring resonators and are consequently bulky, photon-lifetime modulation limited, and power inefficient due to large electrical...",
"title": ""
},
{
"docid": "49bcfc87d925b886cd88b70376a5f9e8",
"text": "We develop a model that fleshes out, extends, and modifies existing models of reference dependent preferences and loss aversion while accomodating most of the evidence motivating these models. Our approach makes reference-dependent theory more broadly applicable by avoiding some of the ways that prevailing models—if applied literally and without ancillary assumptions—make variously weak and incorrect predictions. Our model combines the reference-dependent gain-loss utility with standard economic “consumption utility” and clarifies the relationship between the two. Most importantly, we posit that a person’s reference point is her recent expectations about outcomes (rather than the status quo), and assume that behavior accords to a personal equilibrium: The person maximizes utility given her rational expectations about outcomes, where these expectations depend on her own anticipated behavior. We apply our theory to consumer behavior, and emphasize that a consumer’s willingness to pay for a good is endogenously determined by the market distribution of prices and how she expects to respond to these prices. Because a buyer’s willingness to buy depends on whether she anticipates buying the good, for a range of market prices there are multiple personal equilibria. This multiplicity disappears when the consumer is sufficiently uncertain about the price she will face. Because paying more than she anticipated induces a sense of loss in the buyer, the lower the prices at which she expects to buy the lower will be her willingness to pay. In some situations, a known stochastic decrease in prices can even lower the quantity demanded.",
"title": ""
},
{
"docid": "ea12fe9b91253634422471024f9d28f8",
"text": "Maximum and minimum computed across channels is used to monitor the Electroencephalogram signals for possible change of the eye state. Upon detection of a possible change, the last two seconds of the signal is passed through Multivariate Empirical Mode Decomposition and relevant features are extracted. The features are then fed into Logistic Regression and Artificial Neural Network classifiers to confirm the eye state change. The proposed algorithm detects the eye state change with 88.2% accuracy in less than two seconds. This provides a valuable improvement in comparison to a recent procedure that takes about 20 minutes to classify new instances with 97.3% accuracy. The introduced algorithm is promising in the real-time eye state classification as increasing the training examples would increase its accuracy. Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "be5b0dd659434e77ce47034a51fd2767",
"text": "Current obstacles in the study of social media marketing include dealing with massive data and real-time updates have motivated to contribute solutions that can be adopted for viral marketing. Since information diffusion and social networks are the core of viral marketing, this article aims to investigate the constellation of diffusion methods for viral marketing. Studies on diffusion methods for viral marketing have applied different computational methods, but a systematic investigation of these methods has limited. Most of the literature have focused on achieving objectives such as influence maximization or community detection. Therefore, this article aims to conduct an in-depth review of works related to diffusion for viral marketing. Viral marketing has applied to business-to-consumer transactions but has seen limited adoption in business-to-business transactions. The literature review reveals a lack of new diffusion methods, especially in dynamic and large-scale networks. It also offers insights into applying various mining methods for viral marketing. It discusses some of the challenges, limitations, and future research directions of information diffusion for viral marketing. The article also introduces a viral marketing information diffusion model. The proposed model attempts to solve the dynamicity and large-scale data of social networks by adopting incremental clustering and a stochastic differential equation for business-to-business transactions. Keywords—information diffusion; viral marketing; social media marketing; social networks",
"title": ""
},
{
"docid": "7ccecec251a5932fd91a3c1f51dca3b2",
"text": "In this paper, the influence of a lossy ground on the input impedance of dipole and bow-tie antennas excited by a short pulse is investigated. It is shown that the ground influence on the input impedance of transient dipole and bow-tie antennas is significant only for elevations smaller than 1/5 of the wavelength that corresponds to the central frequency of the exciting pulse. Furthermore, a principal difference between the input impedance due to traveling-wave and standing-wave current distributions is pointed out.",
"title": ""
}
] |
scidocsrr
|
ec02edc5b59e82dc1d5b837df54e12d3
|
NVC-Hashmap: A Persistent and Concurrent Hashmap For Non-Volatile Memories
|
[
{
"docid": "14e92e2c9cd31db526e084669d15903c",
"text": "This paper presents three building blocks for enabling the efficient and safe design of persistent data stores for emerging non-volatile memory technologies. Taking the fullest advantage of the low latency and high bandwidths of emerging memories such as phase change memory (PCM), spin torque, and memristor necessitates a serious look at placing these persistent storage technologies on the main memory bus. Doing so, however, introduces critical challenges of not sacrificing the data reliability and consistency that users demand from storage. This paper introduces techniques for (1) robust wear-aware memory allocation, (2) preventing of erroneous writes, and (3) consistency-preserving updates that are cache-efficient. We show through our evaluation that these techniques are efficiently implementable and effective by demonstrating a B+-tree implementation modified to make full use of our toolkit.",
"title": ""
}
] |
[
{
"docid": "f09bc6f1b4f37fc4d822ccc4cdc1497f",
"text": "It is generally believed that a metaphor tends to have a stronger emotional impact than a literal statement; however, there is no quantitative study establishing the extent to which this is true. Further, the mechanisms through which metaphors convey emotions are not well understood. We present the first data-driven study comparing the emotionality of metaphorical expressions with that of their literal counterparts. Our results indicate that metaphorical usages are, on average, significantly more emotional than literal usages. We also show that this emotional content is not simply transferred from the source domain into the target, but rather is a result of meaning composition and interaction of the two domains in the metaphor.",
"title": ""
},
{
"docid": "1d03d6f7cd7ff9490dec240a36bf5f65",
"text": "Responses generated by neural conversational models tend to lack informativeness and diversity. We present a novel adversarial learning method, called Adversarial Information Maximization (AIM) model, to address these two related but distinct problems. To foster response diversity, we leverage adversarial training that allows distributional matching of synthetic and real responses. To improve informativeness, we explicitly optimize a variational lower bound on pairwise mutual information between query and response. Empirical results from automatic and human evaluations demonstrate that our methods significantly boost informativeness and diversity.",
"title": ""
},
{
"docid": "25a7f23c146add12bfab3f1fc497a065",
"text": "One of the greatest puzzles of human evolutionary history concerns the how and why of the transition from small-scale, ‘simple’ societies to large-scale, hierarchically complex ones. This paper reviews theoretical approaches to resolving this puzzle. Our discussion integrates ideas and concepts from evolutionary biology, anthropology, and political science. The evolutionary framework of multilevel selection suggests that complex hierarchies can arise in response to selection imposed by intergroup conflict (warfare). The logical coherency of this theory has been investigated with mathematical models, and its predictions were tested empirically by constructing a database of the largest territorial states in the world (with the focus on the preindustrial era).",
"title": ""
},
{
"docid": "bfbd291ce302fc2d7bd8909bd0f7e01a",
"text": "The correlative change analysis of state parameters can provide powerful technical supports for safe, reliable, and high-efficient operation of the power transformers. However, the analysis methods are primarily based on a single or a few state parameters, and hence the potential failures can hardly be found and predicted. In this paper, a data-driven method of association rule mining for transformer state parameters has been proposed by combining the Apriori algorithm and probabilistic graphical model. In this method the disadvantage that whenever the frequent items are searched the whole data items have to be scanned cyclically has been overcame. This method is used in mining association rules of the numerical solutions of differential equations. The result indicates that association rules among the numerical solutions can be accurately mined. Finally, practical measured data of five 500 kV transformers is analyzed by the proposed method. The association rules of various state parameters have been excavated, and then the mined association rules are used in modifying the prediction results of single state parameters. The results indicate that the application of the mined association rules improves the accuracy of prediction. Therefore, the effectiveness and feasibility of the proposed method in association rule mining has been proved.",
"title": ""
},
{
"docid": "da1ac93453bc9da937df4eb49902fbe5",
"text": "A novel hierarchical multimodal attention-based model is developed in this paper to generate more accurate and descriptive captions for images. Our model is an \"end-to-end\" neural network which contains three related sub-networks: a deep convolutional neural network to encode image contents, a recurrent neural network to identify the objects in images sequentially, and a multimodal attention-based recurrent neural network to generate image captions. The main contribution of our work is that the hierarchical structure and multimodal attention mechanism is both applied, thus each caption word can be generated with the multimodal attention on the intermediate semantic objects and the global visual content. Our experiments on two benchmark datasets have obtained very positive results.",
"title": ""
},
{
"docid": "d2f6b3fee7f40eb580451d9cc29b8aa6",
"text": "Compositional Distributional Semantic methods model the distributional behavior of a compound word by exploiting the distributional behavior of its constituent words. In this setting, a constituent word is typically represented by a feature vector conflating all the senses of that word. However, not all the senses of a constituent word are relevant when composing the semantics of the compound. In this paper, we present two different methods for selecting the relevant senses of constituent words. The first one is based on Word Sense Induction and creates a static multi prototype vectors representing the senses of a constituent word. The second creates a single dynamic prototype vector for each constituent word based on the distributional properties of the other constituents in the compound. We use these prototype vectors for composing the semantics of noun-noun compounds and evaluate on a compositionality-based similarity task. Our results show that: (1) selecting relevant senses of the constituent words leads to a better semantic composition of the compound, and (2) dynamic prototypes perform better than static prototypes.",
"title": ""
},
{
"docid": "29df7f7e7739bd78f0d72986d43e3adf",
"text": "2009;53;992-1002; originally published online Feb 19, 2009; J. Am. Coll. Cardiol. and Leonard S. Gettes E. William Hancock, Barbara J. Deal, David M. Mirvis, Peter Okin, Paul Kligfield, International Society for Computerized Electrocardiology Endorsed by the Cardiology Foundation; and the Heart Rhythm Society Committee, Council on Clinical Cardiology; the American College of the American Heart Association Electrocardiography and Arrhythmias Associated With Cardiac Chamber Hypertrophy A Scientific Statement From Interpretation of the Electrocardiogram: Part V: Electrocardiogram Changes AHA/ACCF/HRS Recommendations for the Standardization and This information is current as of August 2, 2011 http://content.onlinejacc.org/cgi/content/full/53/11/992 located on the World Wide Web at: The online version of this article, along with updated information and services, is",
"title": ""
},
{
"docid": "6b1dc94c4c70e1c78ea32a760b634387",
"text": "3d reconstruction from a single image is inherently an ambiguous problem. Yet when we look at a picture, we can often infer 3d information about the scene. Humans perform single-image 3d reconstructions by using a variety of singleimage depth cues, for example, by recognizing objects and surfaces, and reasoning about how these surfaces are connected to each other. In this paper, we focus on the problem of automatic 3d reconstruction of indoor scenes, specifically ones (sometimes called “Manhattan worlds”) that consist mainly of orthogonal planes. We use a Markov random field (MRF) model to identify the different planes and edges in the scene, as well as their orientations. Then, an iterative optimization algorithm is applied to infer the most probable position of all the planes, and thereby obtain a 3d reconstruction. Our approach is fully automatic—given an input image, no human intervention is necessary to obtain an approximate 3d reconstruction.",
"title": ""
},
{
"docid": "531e30bf9610b82f6fc650652e6fc836",
"text": "A versatile microreactor platform featuring a novel chemical-resistant microvalve array has been developed using combined silicon/polymer micromachining and a special polymer membrane transfer process. The basic valve unit in the array has a typical ‘transistor’ structure and a PDMS/parylene double-layer valve membrane. A robust multiplexing algorithm is also proposed for individual addressing of a large array using a minimal number of signal inputs. The in-channel microvalve is leakproof upon pneumatic actuation. In open status it introduces small impedance to the fluidic flow, and allows a significantly larger dynamic range of flow rates (∼ml min−1) compared with most of the microvalves reported. Equivalent electronic circuits were established by modeling the microvalves as PMOS transistors and the fluidic channels as simple resistors to provide theoretical prediction of the device fluidic behavior. The presented microvalve/reactor array showed excellent chemical compatibility in the tests with several typical aggressive chemicals including those seriously degrading PDMS-based microfluidic devices. Combined with the multiplexing strategy, this versatile array platform can find a variety of lab-on-a-chip applications such as addressable multiplex biochemical synthesis/assays, and is particularly suitable for those requiring tough chemicals, large flow rates and/or high-throughput parallel processing. As an example, the device performance was examined through the addressed synthesis of 30-mer DNA oligonucleotides followed by sequence validation using on-chip hybridization. The results showed leakage-free valve array addressing and proper synthesis in target reactors, as well as uniform flow distribution and excellent regional reaction selectivity. (Some figures in this article are in colour only in the electronic version) 0960-1317/06/081433+11$30.00 © 2006 IOP Publishing Ltd Printed in the UK 1433",
"title": ""
},
{
"docid": "b483d6fbe7d41af453e89c2d793eb1a2",
"text": "Representing human decisions is of fundamental importance in agent-based models. However, the rationale for choosing a particular human decision model is often not sufficiently empirically or theoretically substantiated in the model documentation. Furthermore, it is difficult to compare models because the model descriptions are often incomplete, not transparent and difficult to understand. Therefore, we expand and refine the ‘ODD’ (Overview, Design Concepts and Details) protocol to establish a standard for describing ABMs that includes human decision-making (ODD+D). Because the ODD protocol originates mainly from an ecological perspective, some adaptations are necessary to better capture human decision-making. We extended and rearranged the design concepts and related guiding questions to differentiate and describe decision-making, adaptation and learning of the agents in a comprehensive and clearly structured way. The ODD+D protocol also incorporates a section on ‘Theoretical and Empirical Background’ to encourage model designs and model assumptions that are more closely related to theory. The application of the ODD+D protocol is illustrated with a description of a social-ecological ABM on water use. Although the ODD+D protocol was developed on the basis of example implementations within the socio-ecological scientific community, we believe that the ODD+D protocol may prove helpful for describing ABMs in general when human decisions are included.",
"title": ""
},
{
"docid": "2f1862591d5f9ee80d7cdcb930f86d8d",
"text": "In this research convolutional neural networks are used to recognize whether a car on a given image is damaged or not. Using transfer learning to take advantage of available models that are trained on a more general object recognition task, very satisfactory performances have been achieved, which indicate the great opportunities of this approach. In the end, also a promising attempt in classifying car damages into a few different classes is presented. Along the way, the main focus was on the influence of certain hyper-parameters and on seeking theoretically founded ways to adapt them, all with the objective of progressing to satisfactory results as fast as possible. This research open doors for future collaborations on image recognition projects in general and for the car insurance field in particular.",
"title": ""
},
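The abstract above describes a transfer-learning setup for the damaged / not-damaged decision. The sketch below shows one minimal way such a setup is commonly wired up, assuming a torchvision ResNet-18 backbone with frozen features; the backbone choice, learning rate, and training step are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch of CNN transfer learning for binary damage recognition.
# Assumptions: torchvision backbone, frozen features, Adam on the new head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)       # features learned on ImageNet
for param in model.parameters():
    param.requires_grad = False                # keep pretrained weights fixed
model.fc = nn.Linear(model.fc.in_features, 2)  # new head: damaged vs. undamaged

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

def train_step(images, labels):
    """One optimization step on a batch of car images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```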
{
"docid": "9828a83e8b28b3b0d302a25da9120763",
"text": "For robotic manipulators that are redundant or with high degrees of freedom (dof ), an analytical solution to the inverse kinematics is very difficult or impossible. Pioneer 2 robotic arm (P2Arm) is a recently developed and widely used 5-dof manipulator. There is no effective solution to its inverse kinematics to date. This paper presents a first complete analytical solution to the inverse kinematics of the P2Arm, which makes it possible to control the arm to any reachable position in an unstructured environment. The strategies developed in this paper could also be useful for solving the inverse kinematics problem of other types of robotic arms.",
"title": ""
},
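The P2Arm abstract above concerns a closed-form (analytical) inverse-kinematics solution. The full 5-dof derivation is too long for a sketch, so the snippet below illustrates the same analytical style on a planar 2-link arm; the link lengths and the 2-dof geometry are assumptions for illustration, not the P2Arm's actual kinematics.

```python
# Closed-form IK for a planar 2-link arm (elbow-down branch), shown only to
# illustrate the non-iterative, analytical style of solution; not the P2Arm.
from math import atan2, acos, cos, sin

def planar_2link_ik(x, y, l1=0.30, l2=0.25):
    """Return joint angles (theta1, theta2) reaching (x, y), or None."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        return None                          # target outside the workspace
    theta2 = acos(c2)                        # elbow angle
    k1 = l1 + l2 * cos(theta2)
    k2 = l2 * sin(theta2)
    theta1 = atan2(y, x) - atan2(k2, k1)     # shoulder angle
    return theta1, theta2
```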
{
"docid": "4bfb6e5b039dd434e0c8aed461536acf",
"text": "In many applications transactions between the elements of an information hierarchy occur over time. For example, the product offers of a department store can be organized into product groups and subgroups to form an information hierarchy. A market basket consisting of the products bought by a customer forms a transaction. Market baskets of one or more customers can be ordered by time into a sequence of transactions. Each item in a transaction is associated with a measure, for example, the amount paid for a product.\n In this paper we present a novel method for visualizing sequences of these kinds of transactions in information hierarchies. It uses a tree layout to draw the hierarchy and a timeline to represent progression of transactions in the hierarchy. We have developed several interaction techniques that allow the users to explore the data. Smooth animations help them to track the transitions between views. The usefulness of the approach is illustrated by examples from several very different application domains.",
"title": ""
},
{
"docid": "716f8cadac94110c4a00bc81480a4b66",
"text": "The last decade has witnessed the prevalence of sensor and GPS technologies that produce a sheer volume of trajectory data representing the motion history of moving objects. Measuring similarity between trajectories is undoubtedly one of the most important tasks in trajectory data management since it serves as the foundation of many advanced analyses such as similarity search, clustering, and classification. In this light, tremendous efforts have been spent on this topic, which results in a large number of trajectory similarity measures. Generally, each individual work introducing a new distance measure has made specific claims on the superiority of their proposal. However, for most works, the experimental study was focused on demonstrating the efficiency of the search algorithms, leaving the effectiveness aspect unverified empirically. In this paper, we conduct a comparative experimental study on the effectiveness of six widely used trajectory similarity measures based on a real taxi trajectory dataset. By applying a variety of transformations we designed for each original trajectory, our experimental observations demonstrate the advantages and drawbacks of these similarity measures in different circumstances.",
"title": ""
},
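As a concrete reference point for the comparison described above, the snippet below implements Dynamic Time Warping, one of the widely used trajectory similarity measures of the kind evaluated in such studies. The plain O(nm) formulation is shown without the windowing or indexing tricks used in practice, and it is not claimed to match any of the paper's six measures exactly.

```python
# Dynamic Time Warping distance between two 2-D trajectories (illustrative).
import numpy as np

def dtw_distance(traj_a, traj_b):
    """traj_a: (n, 2) array of (x, y) points; traj_b: (m, 2) array."""
    a, b = np.asarray(traj_a, float), np.asarray(traj_b, float)
    n, m = len(a), len(b)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # point-to-point cost
            acc[i, j] = cost + min(acc[i - 1, j],        # insertion
                                   acc[i, j - 1],        # deletion
                                   acc[i - 1, j - 1])    # match
    return acc[n, m]
```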
{
"docid": "d8d91ea6fe6ce56a357a9b716bdfe849",
"text": "Over the last years, automatic music classification has become a standard benchmark problem in the machine learning community. This is partly due to its inherent difficulty, and also to the impact that a fully automated classification system can have in a commercial application. In this paper we test the efficiency of a relatively new learning tool, Extreme Learning Machines (ELM), for several classification tasks on publicly available song datasets. ELM is gaining increasing attention, due to its versatility and speed in adapting its internal parameters. Since both of these attributes are fundamental in music classification, ELM provides a good alternative to standard learning models. Our results support this claim, showing a sustained gain of ELM over a feedforward neural network architecture. In particular, ELM provides a great decrease in computational training time, and has always higher or comparable results in terms of efficiency.",
"title": ""
},
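The speed advantage of ELM claimed above comes from the fact that the hidden layer is random and fixed, so training reduces to a single least-squares solve. The sketch below is a bare-bones ELM classifier under that standard formulation; the hidden-layer size and sigmoid activation are illustrative defaults, not the paper's configuration.

```python
# Minimal Extreme Learning Machine: random hidden layer + least-squares readout.
import numpy as np

class ELM:
    def __init__(self, n_features, n_hidden=256, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_features, n_hidden))  # fixed random weights
        self.b = rng.normal(size=n_hidden)
        self.beta = None                                  # learned output weights

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid features

    def fit(self, X, Y_onehot):
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ Y_onehot          # one pseudo-inverse solve
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)
```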
{
"docid": "abba5d320a4b6bf2a90ba2b836019660",
"text": "We aim at segmenting small organs (e.g., the pancreas) from abdominal CT scans. As the target often occupies a relatively small region in the input image, deep neural networks can be easily confused by the complex and variable background. To alleviate this, researchers proposed a coarse-to-fine approach [46], which used prediction from the first (coarse) stage to indicate a smaller input region for the second (fine) stage. Despite its effectiveness, this algorithm dealt with two stages individually, which lacked optimizing a global energy function, and limited its ability to incorporate multi-stage visual cues. Missing contextual information led to unsatisfying convergence in iterations, and that the fine stage sometimes produced even lower segmentation accuracy than the coarse stage. This paper presents a Recurrent Saliency Transformation Network. The key innovation is a saliency transformation module, which repeatedly converts the segmentation probability map from the previous iteration as spatial weights and applies these weights to the current iteration. This brings us two-fold benefits. In training, it allows joint optimization over the deep networks dealing with different input scales. In testing, it propagates multi-stage visual information throughout iterations to improve segmentation accuracy. Experiments in the NIH pancreas segmentation dataset demonstrate the state-of-the-art accuracy, which outperforms the previous best by an average of over 2%. Much higher accuracies are also reported on several small organs in a larger dataset collected by ourselves. In addition, our approach enjoys better convergence properties, making it more efficient and reliable in practice.",
"title": ""
},
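The key idea above is that the previous iteration's segmentation probability map is converted into spatial weights that modulate the next iteration's input. The toy functions below illustrate that weighting idea only; the Gaussian smoothing and the floor value are assumptions, and the real saliency transformation module is a learned layer inside the network rather than this fixed rule.

```python
# Toy illustration of probability-map-as-spatial-weights (not the learned module).
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_weights(prob_map, sigma=2.0, floor=0.1):
    """Map a segmentation probability map in [0, 1] to multiplicative weights."""
    smoothed = gaussian_filter(prob_map, sigma=sigma)
    return floor + (1.0 - floor) * smoothed      # never fully suppress a region

def next_iteration_input(image, prob_map):
    """Re-weight the input so the next iteration focuses on salient regions."""
    return image * saliency_weights(prob_map)
```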
{
"docid": "69624e1501b897bf1a9f9a5a84132da3",
"text": "360° videos and Head-Mounted Displays (HMDs) are geing increasingly popular. However, streaming 360° videos to HMDs is challenging. is is because only video content in viewers’ Fieldof-Views (FoVs) is rendered, and thus sending complete 360° videos wastes resources, including network bandwidth, storage space, and processing power. Optimizing the 360° video streaming to HMDs is, however, highly data and viewer dependent, and thus dictates real datasets. However, to our best knowledge, such datasets are not available in the literature. In this paper, we present our datasets of both content data (such as image saliency maps and motion maps derived from 360° videos) and sensor data (such as viewer head positions and orientations derived from HMD sensors). We put extra eorts to align the content and sensor data using the timestamps in the raw log les. e resulting datasets can be used by researchers, engineers, and hobbyists to either optimize existing 360° video streaming applications (like rate-distortion optimization) and novel applications (like crowd-driven cameramovements). We believe that our dataset will stimulate more research activities along this exciting new research direction. ACM Reference format: Wen-Chih Lo, Ching-Ling Fan, Jean Lee, Chun-Ying Huang, Kuan-Ta Chen, and Cheng-Hsin Hsu. 2017. 360° Video Viewing Dataset in Head-Mounted Virtual Reality. In Proceedings ofMMSys’17, Taipei, Taiwan, June 20-23, 2017, 6 pages. DOI: hp://dx.doi.org/10.1145/3083187.3083219 CCS Concept • Information systems→Multimedia streaming",
"title": ""
},
{
"docid": "68c7509ec0261b1ddccef7e3ad855629",
"text": "This research comprehensively illustrates the design, implementation and evaluation of a novel marker less environment tracking technology for an augmented reality based indoor navigation application, adapted to efficiently operate on a proprietary head-mounted display. Although the display device used, Google Glass, had certain pitfalls such as short battery life, slow processing speed, and lower quality visual display but the tracking technology was able to complement these limitations by rendering a very efficient, precise, and intuitive navigation experience. The performance assessments, conducted on the basis of efficiency and accuracy, substantiated the utility of the device for everyday navigation scenarios, whereas a later conducted subjective evaluation of handheld and wearable devices also corroborated the wearable as the preferred device for indoor navigation.",
"title": ""
},
{
"docid": "75e5308959bfed2cf54af052b66798b2",
"text": "This article describes a design and implementation of an augmented desk system, named EnhancedDesk, which smoothly integrates paper and digital information on a desk. The system provides users an intelligent environment that automatically retrieves and displays digital information corresponding to the real objects (e.g., books) on the desk by using computer vision. The system also provides users direct manipulation of digital information by using the users' own hands and fingers for more natural and more intuitive interaction. Based on the experiments with our first prototype system, some critical issues on augmented desk systems were identified when trying to pursue rapid and fine recognition of hands and fingers. To overcome these issues, we developed a novel method for realtime finger tracking on an augmented desk system by introducing a infrared camera, pattern matching with normalized correlation, and a pan-tilt camera. We then show an interface prototype on EnhancedDesk. It is an application to a computer-supported learning environment, named Interactive Textbook. The system shows how effective the integration of paper and digital information is and how natural and intuitive direct manipulation of digital information with users' hands and fingers is.",
"title": ""
},
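The tracking method above relies on pattern matching with normalized correlation. The brute-force sketch below shows what that matching step computes when locating a fingertip template in an (infrared) frame; real-time systems restrict the search window and use optimized implementations, so this is an illustration of the measure rather than the EnhancedDesk code.

```python
# Normalized cross-correlation template matching (brute-force, for clarity).
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation score in [-1, 1] between two same-size arrays."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return 0.0 if denom == 0 else float((p * t).sum() / denom)

def best_match(frame, template):
    """Return (row, col, score) of the best-matching template position."""
    th, tw = template.shape
    best = (0, 0, -1.0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            score = ncc(frame[r:r + th, c:c + tw], template)
            if score > best[2]:
                best = (r, c, score)
    return best
```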
{
"docid": "c4b17bc4c36ce3792c6b560f75cc66e9",
"text": "We examined the association among anxiety, religiosity, meaning of life and mental health in a nonclinical sample from a Chinese society. Four hundred fifty-one Taiwanese adults (150 males and 300 females) ranging in age from 17 to 73 years (M = 28.9, SD = 11.53) completed measures of Beck Anxiety Inventory, Medical Outcomes Study Health Survey, Perceived Stress Scale, Social Support Scale, and Personal Religiosity Scale (measuring religiosity and meaning of life). Meaning of life has a significant negative correlation with anxiety and a significant positive correlation with mental health and religiosity; however, religiosity does not correlate significantly anxiety and mental health after controlling for demographic measures, social support and physical health. Anxiety explains unique variance in mental health above meaning of life. Meaning of life was found to partially mediate the relationship between anxiety and mental health. These findings suggest that benefits of meaning of life for mental health can be at least partially accounted for by the effects of underlying anxiety.",
"title": ""
}
] |
scidocsrr
|
2ced3f49260ce2c2e37203d7ac91755c
|
d.tools: Integrated Prototyping for Physical Interaction Design
|
[
{
"docid": "5b07bc318cb0f5dd7424cdcc59290d31",
"text": "The current practice used in the design of physical interactive products (such as handheld devices), often suffers from a divide between exploration of form and exploration of interactivity. This can be attributed, in part, to the fact that working prototypes are typically expensive, take a long time to manufacture, and require specialized skills and tools not commonly available in design studios.We have designed a prototyping tool that, we believe, can significantly reduce this divide. The tool allows designers to rapidly create functioning, interactive, physical prototypes early in the design process using a collection of wireless input components (buttons, sliders, etc.) and a sketch of form. The input components communicate with Macromedia Director to enable interactivity.We believe that this tool can improve the design practice by: a) Improving the designer's ability to explore both the form and interactivity of the product early in the design process, b) Improving the designer's ability to detect problems that emerge from the combination of the form and the interactivity, c) Improving users' ability to communicate their ideas, needs, frustrations and desires, and d) Improving the client's understanding of the proposed design, resulting in greater involvement and support for the design.",
"title": ""
},
{
"docid": "2cdeb7d6c9b595080f896f8e6280625b",
"text": "Physical widgets or phidgets are to physical user interfaces what widgets are to graphical user interfaces. Similar to widgets, phidgets abstract and package input and output devices: they hide implementation and construction details, they expose functionality through a well-defined API, and they have an (optional) on-screen interactive interface for displaying and controlling device state. Unlike widgets, phidgets also require: a connection manager to track how devices appear on-line; a way to link a software phidget with its physical counterpart; and a simulation mode to allow the programmer to develop, debug and test a physical interface even when no physical device is present. Our evaluation shows that everyday programmers using phidgets can rapidly develop physical interfaces.",
"title": ""
}
] |
[
{
"docid": "8fcb30825553e58ff66fd85ded10111e",
"text": "Most ecological processes now show responses to anthropogenic climate change. In terrestrial, freshwater, and marine ecosystems, species are changing genetically, physiologically, morphologically, and phenologically and are shifting their distributions, which affects food webs and results in new interactions. Disruptions scale from the gene to the ecosystem and have documented consequences for people, including unpredictable fisheries and crop yields, loss of genetic diversity in wild crop varieties, and increasing impacts of pests and diseases. In addition to the more easily observed changes, such as shifts in flowering phenology, we argue that many hidden dynamics, such as genetic changes, are also taking place. Understanding shifts in ecological processes can guide human adaptation strategies. In addition to reducing greenhouse gases, climate action and policy must therefore focus equally on strategies that safeguard biodiversity and ecosystems.",
"title": ""
},
{
"docid": "6c3690a45a23edcf070e8ef44a28e769",
"text": "Query optimization is an inherently complex problem, and va lidating the correctness and effectiveness of a query optimizer can be a task of comparable complexity. The overall process of measuring query optimization quality becomes increasingly challenging as mode rn query optimizers provide more advanced optimization strategies and adaptive techniques. In this p aper we present a practitioner’s account of query optimization testing. We discuss some of the unique is s s in testing a query optimizer, and we provide a high-level overview of the testing techniques use d to validate the query optimizer of Microsoft’s SQL Server. We offer our experiences and discuss a few ongoin g challenges, which we hope can inspire additional research in the area of query optimization and DB MS testing.",
"title": ""
},
{
"docid": "b91efd08c1eafc8297a5abb2fc0c41b5",
"text": "Boltzmann machines (BMs) are appealing candidates for powerful priors in variational autoencoders (VAEs), as they are capable of capturing nontrivial and multimodal distributions over discrete variables. However, non-differentiability of the discrete units prohibits using the reparameterization trick, essential for low-noise back propagation. The Gumbel trick resolves this problem in a consistent way by relaxing the variables and distributions, but it is incompatible with BM priors. Here, we propose the GumBolt, a model that extends the Gumbel trick to BM priors in VAEs. GumBolt is significantly simpler than the recently proposed methods with BM prior and outperforms them by a considerable margin. It achieves state-of-theart performance on permutation invariant MNIST and OMNIGLOT datasets in the scope of models with only discrete latent variables. Moreover, the performance can be further improved by allowing multi-sampled (importance-weighted) estimation of log-likelihood in training, which was not possible with previous models.",
"title": ""
},
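For context on the abstract above, the snippet below is the standard Gumbel-softmax (concrete) relaxation that the Gumbel trick refers to: discrete logits are perturbed with Gumbel noise and passed through a temperature-controlled softmax so gradients can flow. It sketches only the generic trick, not the GumBolt extension to Boltzmann-machine priors.

```python
# Standard Gumbel-softmax relaxation of a categorical sample (not GumBolt itself).
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=1.0, eps=1e-20):
    """Differentiable approximate sample from Categorical(softmax(logits))."""
    u = torch.rand_like(logits)                      # uniform noise in [0, 1)
    gumbel = -torch.log(-torch.log(u + eps) + eps)   # Gumbel(0, 1) noise
    return F.softmax((logits + gumbel) / tau, dim=-1)
```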
{
"docid": "b1c6d95b297409a7b47d8fa7e6da6831",
"text": "~I \"e have modified the original model of selective attention, which was previmtsly proposed by Fukushima, and e~tended its ability to recognize attd segment connected characters in cmwive handwriting. Although the or~¢inal model q/'sdective attention ah'ead)' /tad the abilio' to recognize and segment patterns, it did not alwa)w work well when too many patterns were presented simuhaneousl): In order to restrict the nttmher q/patterns to be processed simultaneousO; a search controller has been added to the original model. Tlw new mode/mainly processes the patterns contained in a small \"search area, \" which is mo~vd b)' the search controller A ptvliminao' ev~eriment with compltter simttlatiott has shown that this approach is promisittg. The recogttition arid segmentation q[k'haracters can be sttcces~[itl even thottgh each character itt a handwritten word changes its .shape h)\" the e[]'ect o./the charactetw",
"title": ""
},
{
"docid": "1dc4a8f02dfe105220db5daae06c2229",
"text": "Photosynthesis begins with light harvesting, where specialized pigment-protein complexes transform sunlight into electronic excitations delivered to reaction centres to initiate charge separation. There is evidence that quantum coherence between electronic excited states plays a role in energy transfer. In this review, we discuss how quantum coherence manifests in photosynthetic light harvesting and its implications. We begin by examining the concept of an exciton, an excited electronic state delocalized over several spatially separated molecules, which is the most widely available signature of quantum coherence in light harvesting. We then discuss recent results concerning the possibility that quantum coherence between electronically excited states of donors and acceptors may give rise to a quantum coherent evolution of excitations, modifying the traditional incoherent picture of energy transfer. Key to this (partially) coherent energy transfer appears to be the structure of the environment, in particular the participation of non-equilibrium vibrational modes. We discuss the open questions and controversies regarding quantum coherent energy transfer and how these can be addressed using new experimental techniques.",
"title": ""
},
{
"docid": "0d23946f8a94db5943deee81deb3f322",
"text": "The Spatial Semantic Hierarchy is a model of knowledge of large-scale space consisting of multiple interacting representations, both qualitative and quantitative. The SSH is inspired by the properties of the human cognitive map, and is intended to serve both as a model of the human cognitive map and as a method for robot exploration and map-building. The multiple levels of the SSH express states of partial knowledge, and thus enable the human or robotic agent to deal robustly with uncertainty during both learning and problem-solving. The control level represents useful patterns of sensorimotor interaction with the world in the form of trajectory-following and hill-climbing control laws leading to locally distinctive states. Local geometric maps in local frames of reference can be constructed at the control level to serve as observers for control laws in particular neighborhoods. The causal level abstracts continuous behavior among distinctive states into a discrete model consisting of states linked by actions. The topological level introduces the external ontology of places, paths and regions by abduction to explain the observed pattern of states and actions at the causal level. Quantitative knowledge at the control, causal and topological levels supports a “patchwork map” of local geometric frames of reference linked by causal and topological connections. The patchwork map can be merged into a single global frame of reference at the metrical level when sufficient information and computational resources are available. We describe the assumptions and guarantees behind the generality of the SSH across environments and sensorimotor systems. Evidence is presented from several partial implementations of the SSH on simulated and physical robots. 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "0f853c6ccf6ce4cf025050135662f725",
"text": "This paper describes a technique of applying Genetic Algorithm (GA) to network Intrusion Detection Systems (IDSs). A brief overview of the Intrusion Detection System, genetic algorithm, and related detection techniques is presented. Parameters and evolution process for GA are discussed in detail. Unlike other implementations of the same problem, this implementation considers both temporal and spatial information of network connections in encoding the network connection information into rules in IDS. This is helpful for identification of complex anomalous behaviors. This work is focused on the TCP/IP network protocols.",
"title": ""
},
{
"docid": "ff26c01e6248882ba26b348bcb783913",
"text": "Data warehouses and data marts have long been considered as the unique solution for providing end-users with decisional information. More recently, data lakes have been proposed in order to govern data swamps. However, no formal definition has been proposed in the literature. Existing works are not complete and miss important parts of the topic. In particular, they do not focus on the influence of the data gravity, the infrastructure role of those solutions and of course are proposing divergent definitions and positioning regarding the usage and the interaction with existing decision support system.\n In this paper, we propose a novel definition of data lakes, together with a comparison with other over several criteria as the way to populate them, how to use, what is the Data Lake end user profile. We claim that data lakes are complementary components in decisional information systems and we discuss their position and interactions regarding the other components by proposing an interaction model.",
"title": ""
},
{
"docid": "01b05ea8fcca216e64905da7b5508dea",
"text": "Generative Adversarial Networks (GANs) have recently emerged as powerful generative models. GANs are trained by an adversarial process between a generative network and a discriminative network. It is theoretically guaranteed that, in the nonparametric regime, by arriving at the unique saddle point of a minimax objective function, the generative network generates samples from the data distribution. However, in practice, getting close to this saddle point has proven to be difficult, resulting in the ubiquitous problem of “mode collapse”. The root of the problems in training GANs lies on the unbalanced nature of the game being played. Here, we propose to level the playing field and make the minimax game balanced by “heating” the data distribution. The empirical distribution is frozen at temperature zero; GANs are instead initialized at infinite temperature, where learning is stable. By annealing the heated data distribution, we initialized the network at each temperature with the learnt parameters of the previous higher temperature. We posited a conjecture that learning under continuous annealing in the nonparametric regime is stable, and proposed an algorithm in corollary. In our experiments, the annealed GAN algorithm, dubbed β-GAN, trained with unmodified objective function was stable and did not suffer from mode collapse.",
"title": ""
},
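One simple way to read the "heating" of the data distribution described above is to corrupt real samples with noise whose scale acts as a temperature and is annealed toward zero during training. The sketch below encodes that reading only; the Gaussian noise model and the linear schedule are assumptions for illustration and are not necessarily the paper's exact scheme.

```python
# Illustrative "heated" data distribution via noise with an annealed temperature.
import torch

def heated_batch(real_batch, temperature):
    """Noisy copy of the batch; temperature 0 recovers the empirical data."""
    return real_batch + temperature * torch.randn_like(real_batch)

def temperature_schedule(step, total_steps, t_max=1.0):
    """Linear annealing from t_max at the start of training down to 0."""
    return t_max * max(0.0, 1.0 - step / float(total_steps))
```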
{
"docid": "7bc0250aa9a766ececa4cf9a45db2b05",
"text": "This paper presents a new average d- q model and a control approach with a carrier-based pulsewidth modulation (PWM) implementation for nonregenerative three-phase three-level boost (VIENNA-type) rectifiers. State-space analysis and an averaging technique are used to derive the relationship between the controlled duty cycle and the dc-link neutral-point voltage, based on which an optimal zero-sequence component is found for dc-link voltage balance. By utilizing this zero-sequence component, the behavior of the dc-link voltage unbalance can be modeled in d-q coordinates using averaging over a switching cycle. Therefore, the proposed model is valid for up to half of the switching frequency. With the proposed model, a new control algorithm is developed with carrier-based PWM implementation, which features great simplicity and good dc-link neutral-point regulation. Space vector representation is also utilized to analyze the voltage balancing mechanism and the region of feasible operation. Simulation and experimental results validated the proposed model and control approach.",
"title": ""
},
{
"docid": "8894fb9296d642fcc7c63d074932e85e",
"text": "The ability to self-regulate behavior is one of the most important protective factors in relation with resilience and should be fostered especially in at-risk youth. Previous research has characterized these students as having behaviors indicating lack of foresight. The aim of the present study was to test the hypothetical relationship between these personal variables. It was hypothesized that self-regulation would be associated with and would be a good predictor of resilience, and that low-medium-high levels of self-regulation would lead to similar levels of resilience. The participants were 365 students -aged 15 and 21- from Navarre (Spain) who were enrolled in Initial Vocational Qualification Programs (IVQP). For the assessment, the Connor Davidson Resilience Scale (CD-RISC) and the Short Self-Regulation Questionnaire (SSRQ) were applied. We carried out linear association analyses (correlational and structural) and non-linear interdependence analyses (MANOVA) between the two constructs. Relationships between them were significant and positive. Learning from mistakes (self-regulation) was a significant predictor of coping and confidence, tenacity and adaptation, and tolerance to negative situations (resilience). Likewise, low-medium-high levels of self-regulation correlated with scores on resilience factors. Implications of these results for educational practice and for future research are discussed.",
"title": ""
},
{
"docid": "77b5c49432e2c4c940beda439146c9b1",
"text": "A cost-effective way of reducing sidelobe level and improving front-to-back ratio of the substrate integrated waveguide H-plane horn antennas is proposed in this letter. Metal rectangular patches and dielectric loading are integrated to the aperture of the horn antenna, resulting in an increased gain, narrow E-plane beamwidth, and reduced sidelobes and backward radiation. The overall dimensions of the fabricated antenna are 42 × 18.6 mm2 . The antenna works at 22.7 GHz with a measured gain of 10.1 dBi. A single substrate and a 3.5-mm connector with a transition pin are used to ensure the lowest cost for mass production.",
"title": ""
},
{
"docid": "397a10734b9850629d9b0348baec95af",
"text": "Genetic algorithms (GAs) have been extensively used as a means for performing global optimization in a simple yet reliable manner. However, in some realistic engineering design optimization domains the simple, classical implementation of a GA based on binary encoding and bit mutation and crossover is often ineecient and unable to reach the global optimum. In this paper we describe a GA for continuous design-space optimization that uses new GA operators and strategies tailored to the structure and properties of engineering design domains. Empirical results in the domains of supersonic transport aircraft and supersonic missile inlets demonstrate that the newly formulated GA can be signiicantly better than the classical GA in both eeciency and reliability.",
"title": ""
},
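The abstract above contrasts binary encoding and bit mutation with operators tailored to continuous design spaces. The snippet below shows two operators commonly used in real-coded GAs of that kind, blend (BLX-alpha) crossover and Gaussian mutation; the parameter values are illustrative defaults, not the paper's settings.

```python
# Real-coded GA operators for continuous design variables (illustrative defaults).
import numpy as np

rng = np.random.default_rng(0)

def blend_crossover(parent_a, parent_b, alpha=0.5):
    """BLX-alpha: sample the child uniformly from an expanded parent box."""
    lo = np.minimum(parent_a, parent_b)
    hi = np.maximum(parent_a, parent_b)
    span = hi - lo
    return rng.uniform(lo - alpha * span, hi + alpha * span)

def gaussian_mutation(design, sigma=0.05, rate=0.2):
    """Perturb each design variable with probability `rate`."""
    design = np.asarray(design, float)
    mask = rng.random(design.shape) < rate
    return design + mask * rng.normal(0.0, sigma, design.shape)
```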
{
"docid": "705ba6bc49669ba22ff2408a3f9a984c",
"text": "Clinicians spend a significant amount of time inputting free-form textual notes into Electronic Health Records (EHR) systems. Much of this documentation work is seen as a burden, reducing time spent with patients and contributing to clinician burnout. With the aspiration of AI-assisted note-writing, we propose a new language modeling task predicting the content of notes conditioned on past data from a patient’s medical record, including patient demographics, labs, medications, and past notes. We train generative models using the public, de-identified MIMIC-III dataset and compare generated notes with those in the dataset on multiple measures. We find that much of the content can be predicted, and that many common templates found in notes can be learned. We discuss how such models can be useful in supporting assistive note-writing features such as error-detection and auto-complete.",
"title": ""
},
{
"docid": "e18671b988444d3edadc05dc7ea71b4c",
"text": "BACKGROUND\nResults of previous single-center, observational studies suggest that daily bathing of patients with chlorhexidine may prevent hospital-acquired bloodstream infections and the acquisition of multidrug-resistant organisms (MDROs).\n\n\nMETHODS\nWe conducted a multicenter, cluster-randomized, nonblinded crossover trial to evaluate the effect of daily bathing with chlorhexidine-impregnated washcloths on the acquisition of MDROs and the incidence of hospital-acquired bloodstream infections. Nine intensive care and bone marrow transplantation units in six hospitals were randomly assigned to bathe patients either with no-rinse 2% chlorhexidine-impregnated washcloths or with nonantimicrobial washcloths for a 6-month period, exchanged for the alternate product during the subsequent 6 months. The incidence rates of acquisition of MDROs and the rates of hospital-acquired bloodstream infections were compared between the two periods by means of Poisson regression analysis.\n\n\nRESULTS\nA total of 7727 patients were enrolled during the study. The overall rate of MDRO acquisition was 5.10 cases per 1000 patient-days with chlorhexidine bathing versus 6.60 cases per 1000 patient-days with nonantimicrobial washcloths (P=0.03), the equivalent of a 23% lower rate with chlorhexidine bathing. The overall rate of hospital-acquired bloodstream infections was 4.78 cases per 1000 patient-days with chlorhexidine bathing versus 6.60 cases per 1000 patient-days with nonantimicrobial washcloths (P=0.007), a 28% lower rate with chlorhexidine-impregnated washcloths. No serious skin reactions were noted during either study period.\n\n\nCONCLUSIONS\nDaily bathing with chlorhexidine-impregnated washcloths significantly reduced the risks of acquisition of MDROs and development of hospital-acquired bloodstream infections. (Funded by the Centers for Disease Control and Prevention and Sage Products; ClinicalTrials.gov number, NCT00502476.).",
"title": ""
},
{
"docid": "75b6168dd008fd1d30851d3cf24d7679",
"text": "We introduce Deep Linear Discriminant Analysis (DeepLDA) which learns linearly separable latent representations in an end-to-end fashion. Classic LDA extracts features which preserve class separability and is used for dimensionality reduction for many classification problems. The central idea of this paper is to put LDA on top of a deep neural network. This can be seen as a non-linear extension of classic LDA. Instead of maximizing the likelihood of target labels for individual samples, we propose an objective function that pushes the network to produce feature distributions which: (a) have low variance within the same class and (b) high variance between different classes. Our objective is derived from the general LDA eigenvalue problem and still allows to train with stochastic gradient descent and back-propagation. For evaluation we test our approach on three different benchmark datasets (MNIST, CIFAR-10 and STL-10). DeepLDA produces competitive results on MNIST and CIFAR-10 and outperforms a network trained with categorical cross entropy (same architecture) on a supervised setting of STL-10.",
"title": ""
},
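To make the objective described above concrete, the function below computes a simplified LDA-style criterion on a batch of latent features: between-class scatter relative to within-class scatter. The trace-ratio form and the small ridge term are simplifications of the paper's eigenvalue-based, regularized objective, used here only for illustration.

```python
# Simplified LDA criterion on a feature batch (illustration of the DeepLDA idea).
import numpy as np

def lda_trace_ratio(features, labels, eps=1e-3):
    """features: (n, d) latent batch; labels: (n,) integer class ids."""
    d = features.shape[1]
    overall_mean = features.mean(axis=0)
    Sw = np.zeros((d, d))                       # within-class scatter
    Sb = np.zeros((d, d))                       # between-class scatter
    for c in np.unique(labels):
        Xc = features[labels == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - overall_mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Higher is better: classes far apart, each class compact.
    return np.trace(np.linalg.solve(Sw + eps * np.eye(d), Sb))
```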
{
"docid": "aec14ffcc8e2f2cea1e00fd6f0a0d425",
"text": "BACKGROUND\nOne of the reasons women with macromastia chose to undergo a breast reduction is to relieve their complaints of back, neck, and shoulder pain. We hypothesized that changes in posture after surgery may be the reason for the pain relief and that patient posture may correlate with symptomatic macromastia and may serve as an objective measure for complaints. The purpose of our study was to evaluate the effect of reduction mammaplasty on the posture of women with macromastia.\n\n\nMETHODS\nA prospective controlled study at a university medical center. Forty-two patients that underwent breast reduction were studied before surgery and an average of 4.3 years following surgery. Thirty-seven healthy women served as controls. Standardized lateral photos were taken. The inclination angle of the back was measured. Regression analysis was performed for the inclination angle.\n\n\nRESULTS\nPreoperatively, the mean inclination angle was 1.61 degrees ventrally; this diminished postoperatively to 0.72 degrees ventrally. This change was not significant (P-value=0.104). In the control group that angle was 0.28 degrees dorsally. Univariate regression analysis revealed that the inclination was dependent on body mass index (BMI) and having symptomatic macromastia; on multiple regression it was only dependent on BMI.\n\n\nCONCLUSIONS\nThe inclination angle of the back in breast reduction candidates is significantly different from that of controls; however, this difference is small and probably does not account for the symptoms associated with macromastia. Back inclination should not be used as a surrogate \"objective\" measure for symptomatic macromastia.",
"title": ""
},
{
"docid": "4c03c0fc33f8941a7769644b5dfb62ef",
"text": "A multiband MIMO antenna for a 4G mobile terminal is proposed. The antenna structure consists of a multiband main antenna element, a printed inverted-L subantenna element operating in the higher 2.5 GHz bands, and a wideband loop sub-antenna element working in lower 0.9 GHz band. In order to improve the isolation and ECC characteristics of the proposed MIMO antenna, each element is located at a different corner of the ground plane. In addition, the inductive coils are employed to reduce the antenna volume and realize the wideband property of the loop sub-antenna element. Finally, the proposed antenna covers LTE band 7/8, PCS, WiMAX, and WLAN service, simultaneously. The MIMO antenna has ECC lower than 0.15 and isolation higher than 12 dB in both lower and higher frequency bands.",
"title": ""
},
{
"docid": "4df6bbfaa8842d88df0b916946c59ea3",
"text": "Real-time decision making in emerging IoT applications typically relies on computing quantitative summaries of large data streams in an efficient and incremental manner. To simplify the task of programming the desired logic, we propose StreamQRE, which provides natural and high-level constructs for processing streaming data. Our language has a novel integration of linguistic constructs from two distinct programming paradigms: streaming extensions of relational query languages and quantitative extensions of regular expressions. The former allows the programmer to employ relational constructs to partition the input data by keys and to integrate data streams from different sources, while the latter can be used to exploit the logical hierarchy in the input stream for modular specifications. \n We first present the core language with a small set of combinators, formal semantics, and a decidable type system. We then show how to express a number of common patterns with illustrative examples. Our compilation algorithm translates the high-level query into a streaming algorithm with precise complexity bounds on per-item processing time and total memory footprint. We also show how to integrate approximation algorithms into our framework. We report on an implementation in Java, and evaluate it with respect to existing high-performance engines for processing streaming data. Our experimental evaluation shows that (1) StreamQRE allows more natural and succinct specification of queries compared to existing frameworks, (2) the throughput of our implementation is higher than comparable systems (for example, two-to-four times greater than RxJava), and (3) the approximation algorithms supported by our implementation can lead to substantial memory savings.",
"title": ""
},
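As a generic illustration of the kind of computation targeted above, an incremental, key-partitioned quantitative summary with O(1) work per item, the class below keeps a running mean per key. It is plain Python, not the StreamQRE language or its combinators.

```python
# Generic incremental per-key summary over a data stream (not StreamQRE itself).
from collections import defaultdict

class RunningMeanByKey:
    def __init__(self):
        self.count = defaultdict(int)
        self.total = defaultdict(float)

    def update(self, key, value):
        """Process one stream item and return the current summary for its key."""
        self.count[key] += 1
        self.total[key] += value
        return self.total[key] / self.count[key]
```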
{
"docid": "e334d3a73205247cadcda5ac01bae748",
"text": "The use of biopesticides and related alternative management products is increasing. New tools, including semiochemicals and plant-incorporated protectants (PIPs), as well as botanical and microbially derived chemicals, are playing an increasing role in pest management, along with plant and animal genetics, biological control, cultural methods, and newer synthetics. The goal of this Perspective is to highlight promising new biopesticide research and development (R&D), based upon recently published work and that presented in the American Chemical Society (ACS) symposium \"Biopesticides: State of the Art and Future Opportunities,\" as well as the authors' own perspectives. Although the focus is on biopesticides, included in this Perspective is progress with products exhibiting similar characteristics, namely those naturally occurring or derived from natural products. These are target specific, of low toxicity to nontarget organisms, reduced in persistence in the environment, and potentially usable in organic agriculture. Progress is being made, illustrated by the number of biopesticides and related products in the registration pipeline, yet major commercial opportunities exist for new bioherbicides and bionematicides, in part occasioned by the emergence of weeds resistant to glyphosate and the phase-out of methyl bromide. The emergence of entrepreneurial start-up companies, the U.S. Environmental Protection Agency (EPA) fast track for biopesticides, and the availability of funding for registration-related R&D for biorational pesticides through the U.S. IR-4 program provide incentives for biopesticide development, but an expanded effort is warranted both in the United States and worldwide to support this relatively nascent industry.",
"title": ""
}
] |
scidocsrr
|
9b2fc5f5b17152af11f15de7d0079cac
|
Generalizing Matching Knowledge using Active Learning
|
[
{
"docid": "2a76205b80c90ff9a4ca3ccb0434bb03",
"text": "Finding out which e-shops offer a specific product is a central challenge for building integrated product catalogs and comparison shopping portals. Determining whether two offers refer to the same product involves extracting a set of features (product attributes) from the web pages containing the offers and comparing these features using a matching function. The existing gold standards for product matching have two shortcomings: (i) they only contain offers from a small number of e-shops and thus do not properly cover the heterogeneity that is found on the Web. (ii) they only provide a small number of generic product attributes and therefore cannot be used to evaluate whether detailed product attributes have been correctly extracted from textual product descriptions. To overcome these shortcomings, we have created two public gold standards: The WDC Product Feature Extraction Gold Standard consists of over 500 product web pages originating from 32 different websites on which we have annotated all product attributes (338 distinct attributes) which appear in product titles, product descriptions, as well as tables and lists. The WDC Product Matching Gold Standard consists of over 75 000 correspondences between 150 products (mobile phones, TVs, and headphones) in a central catalog and offers for these products on the 32 web sites. To verify that the gold standards are challenging enough, we ran several baseline feature extraction and matching methods, resulting in F-score values in the range 0.39 to 0.67. In addition to the gold standards, we also provide a corpus consisting of 13 million product pages from the same websites which might be useful as background knowledge for training feature extraction and matching methods.",
"title": ""
},
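The gold standard above is meant to stress-test matching methods, whose baselines often start from simple title similarity. The sketch below is one such deliberately naive baseline, token Jaccard overlap on offer titles with a threshold; the threshold value is an illustrative assumption, and the method is not claimed to be one of the paper's evaluated baselines in particular.

```python
# Naive offer-matching baseline: token Jaccard similarity between product titles.
import re

def tokens(title):
    return set(re.findall(r"[a-z0-9]+", title.lower()))

def jaccard(a, b):
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

def is_match(offer_title, catalog_title, threshold=0.6):
    """Declare a match when the title token overlap exceeds the threshold."""
    return jaccard(offer_title, catalog_title) >= threshold
```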
{
"docid": "2bed91cd91b2958eb46af613a8cb4978",
"text": "Millions of HTML tables containing structured data can be found on the Web. With their wide coverage, these tables are potentially very useful for filling missing values and extending cross-domain knowledge bases such as DBpedia, YAGO, or the Google Knowledge Graph. As a prerequisite for being able to use table data for knowledge base extension, the HTML tables need to be matched with the knowledge base, meaning that correspondences between table rows/columns and entities/schema elements of the knowledge base need to be found. This paper presents the T2D gold standard for measuring and comparing the performance of HTML table to knowledge base matching systems. T2D consists of 8 700 schema-level and 26 100 entity-level correspondences between the WebDataCommons Web Tables Corpus and the DBpedia knowledge base. In contrast related work on HTML table to knowledge base matching, the Web Tables Corpus (147 million tables), the knowledge base, as well as the gold standard are publicly available. The gold standard is used afterward to evaluate the performance of T2K Match, an iterative matching method which combines schema and instance matching. T2K Match is designed for the use case of matching large quantities of mostly small and narrow HTML tables against large cross-domain knowledge bases. The evaluation using the T2D gold standard shows that T2K Match discovers table-to-class correspondences with a precision of 94%, row-to-entity correspondences with a precision of 90%, and column-to-property correspondences with a precision of 77%.",
"title": ""
},
{
"docid": "c6abeae6e9287f04b472595a47e974ad",
"text": "Data curation is the act of discovering a data source(s) of interest, cleaning and transforming the new data, semantically integrating it with other local data sources, and deduplicating the resulting composite. There has been much research on the various components of curation (especially data integration and deduplication). However, there has been little work on collecting all of the curation components into an integrated end-to-end system. In addition, most of the previous work will not scale to the sizes of problems that we are finding in the field. For example, one web aggregator requires the curation of 80,000 URLs and a second biotech company has the problem of curating 8000 spreadsheets. At this scale, data curation cannot be a manual (human) effort, but must entail machine learning approaches with a human assist only when necessary. This paper describes Data Tamer, an end-to-end curation system we have built at M.I.T. Brandeis, and Qatar Computing Research Institute (QCRI). It expects as input a sequence of data sources to add to a composite being constructed over time. A new source is subjected to machine learning algorithms to perform attribute identification, grouping of attributes into tables, transformation of incoming data and deduplication. When necessary, a human can be asked for guidance. Also, Data Tamer includes a data visualization component so a human can examine a data source at will and specify manual transformations. We have run Data Tamer on three real world enterprise curation problems, and it has been shown to lower curation cost by about 90%, relative to the currently deployed production software. This article is published under a Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/), which permits distribution and reproduction in any medium as well allowing derivative works, provided that you attribute the original work to the author(s) and CIDR 2013. 6th Biennial Conference on Innovative Data Systems Research (CIDR ’13) January 6-9, 2013, Asilomar, California, USA.",
"title": ""
}
] |
[
{
"docid": "dfb13625c6c03932b6dd83a77a782073",
"text": "Location Based Service (LBS), although it greatly benefits the daily life of mobile device users, has introduced significant threats to privacy. In an LBS system, even under the protection of pseudonyms, users may become victims of inference attacks, where an adversary reveals a user's real identity and complete moving trajectory with the aid of side information, e.g., accidental identity disclosure through personal encounters. To enhance privacy protection for LBS users, a common approach is to include extra fake location information associated with different pseudonyms, known as dummy users, in normal location reports. Due to the high cost of dummy generation using resource constrained mobile devices, self-interested users may free-ride on others' efforts. The presence of such selfish behaviors may have an adverse effect on privacy protection. In this paper, we study the behaviors of self-interested users in the LBS system from a game-theoretic perspective. We model the distributed dummy user generation as Bayesian games in both static and timing-aware contexts, and analyze the existence and properties of the Bayesian Nash Equilibria for both models. Based on the analysis, we propose a strategy selection algorithm to help users achieve optimized payoffs. Leveraging a beta distribution generalized from real-world location privacy data traces, we perform simulations to assess the privacy protection effectiveness of our approach. The simulation results validate our theoretical analysis for the dummy user generation game models.",
"title": ""
},
{
"docid": "2fd2bbf080f2d1781c5b0f2f7eaf63cd",
"text": "In this paper, we approach electronic commerce Customer Relationship Management (e-CRM) from the perspective of five research areas. Our purpose is to define a conceptual framework to examine the relationships among and between these five research areas within e-CRM and to propose how they might be integrated to further research this area. We begin with a discussion of each of the research areas through brief reviews of relevant literature for each and a discussion of the theoretical and strategic implications associated with some CRM technologies and research areas. Next we present our framework, which focuses on e-CRM from the five research perspectives. We then present a theoretical framework for e-CRM in terms of the five research areas and how they affect one another, as well as e-CRM processes and both performance and non-performance outcomes.",
"title": ""
},
{
"docid": "66e9796bae976d688b92ce0cdb01f3ff",
"text": "In 2 studies, the authors used dyadic interactions to assess the influence of ego threat on likability as a function of self-esteem. In both studies, 2 naive participants engaged in a structured conversation; in half of the dyads, 1 participant received an ego threat prior to the interaction. In the 1st study, threatened high self-esteem participants were rated as less likable than were threatened low self-esteem participants. The 2nd study confirmed that ego threats are associated with decreased liking for those with high self-esteem and with increased liking for those with low self-esteem. A mediational analysis demonstrated that decreased liking among high self-esteem participants was due to being perceived as antagonistic. Study 2 also indicated that the findings could not be explained by trait levels of narcissism. These patterns are interpreted in terms of differential sensitivity to potential interpersonal rejection.",
"title": ""
},
{
"docid": "4d2c8537f4619d9dd5e53edfc901a155",
"text": "Turbidity is an internationally recognized criterion for assessing drinking water quality, because the colloidal particles in turbid water may harbor pathogens, chemically reduce oxidizing disinfectants, and hinder attempts to disinfect water with ultraviolet radiation. A turbidimeter is an electronic/optical instrument that assesses turbidity by measuring the scattering of light passing through a water sample containing such colloidal particles. Commercial turbidimeters cost hundreds or thousands of dollars, putting them beyond the reach of low-resource communities around the world. An affordable open-source turbidimeter based on a single light-to-frequency sensor was designed and constructed, and evaluated against a portable commercial turbidimeter. The final product, which builds on extensive published research, is intended to catalyze further developments in affordable water and sanitation monitoring.",
"title": ""
},
{
"docid": "692b02fa6e3b1d04e24db570b7030a3f",
"text": "Once a business performs a complex activity well, the parent organization often wants to replicate that success. But doing that is surprisingly difficult, and businesses nearly always fail when they try to reproduce a best practice. The reason? People approaching best-practice replication are overly optimistic and overconfident. They try to perfect an operation that's running nearly flawlessly, or they try to piece together different practices to create the perfect hybrid. Getting it right the second time (and all the times after that) involves adjusting for overconfidence in your own abilities and imposing strict discipline on the process and the organization. The authors studied numerous business settings to find out how organizational routines were successfully reproduced, and they identified five steps for successful replication. First, make sure you've got something that can be copied and that's worth copying. Some processes don't lend themselves to duplication; others can be copied but maybe shouldn't be. Second, work from a single template. It provides proof success, performance measurements, a tactical approach, and a reference for when problems arise. Third, copy the example exactly, and fourth, make changes only after you achieve acceptable results. The people who developed the template have probably already encountered many of the problems you want to \"fix,\" so it's best to create a working system before you introduce changes. Fifth, don't throw away the template. If your copy doesn't work, you can use the template to identify and solve problems. Best-practice replication, while less glamorous than pure innovation, contributes enormously to the bottom line of most companies. The article's examples--Banc One, Rank Xerox, Intel, Starbucks, and Re/Max Israel--prove that exact copying is a non-trivial, challenging accomplishment.",
"title": ""
},
{
"docid": "2f02235636c5c0aecd8918cba512888d",
"text": "To determine whether an AIDS prevention mass media campaign influenced risk perception, self-efficacy and other behavioural predictors. We used household survey data collected from 2,213 sexually experienced male and female Kenyans aged 15-39. Respondents were administered a questionnaire asking them about their exposure to branded and generic mass media messages concerning HIV/AIDS and condom use. They were asked questions concerning their personal risk perception, self-efficacy, condom effectiveness, condom availability, and their embarrassment in obtaining condoms. Logistic regression analysis was used to determine the impact of exposure to mass media messages on these predictors of behaviour change. Those exposed to branded advertising messages were significantly more likely to consider themselves at higher risk of acquiring HIV and to believe in the severity of AIDS. Exposure to branded messages was also associated with a higher level of personal self-efficacy, a greater belief in the efficacy of condoms, a lower level of perceived difficulty in obtaining condoms and reduced embarrassment in purchasing condoms. Moreover, there was a dose-response relationship: a higher intensity of exposure to advertising was associated with more positive outcomes. Exposure to generic advertising messages was less frequently associated with positive health beliefs and these relationships were also weaker. Branded mass media campaigns that promote condom use as an attractive lifestyle choice are likely to contribute to the development of perceptions that are conducive to the adoption of condom use.",
"title": ""
},
{
"docid": "b3cdd76dd50bea401ede3bb945c377dc",
"text": "First we report on a new threat campaign, underway in Korea, which infected around 20,000 Android users within two months. The campaign attacked mobile users with malicious applications spread via different channels, such as email attachments or SMS spam. A detailed investigation of the Android malware resulted in the identification of a new Android malware family Android/BadAccents. The family represents current state-of-the-art in mobile malware development for banking trojans. Second, we describe in detail the techniques this malware family uses and confront them with current state-of-the-art static and dynamic codeanalysis techniques for Android applications. We highlight various challenges for automatic malware analysis frameworks that significantly hinder the fully automatic detection of malicious components in current Android malware. Furthermore, the malware exploits a previously unknown tapjacking vulnerability in the Android operating system, which we describe. As a result of this work, the vulnerability, affecting all Android versions, will be patched in one of the next releases of the Android Open Source Project.",
"title": ""
},
{
"docid": "6e690c5aa54b28ba23d9ac63db4cc73a",
"text": "The Topic Detection and Tracking (TDT) evaluation program has included a \"cluster detection\" task since its inception in 1996. Systems were required to process a stream of broadcast news stories and partition them into non-overlapping clusters. A system's effectiveness was measured by comparing the generated clusters to \"truth\" clusters created by human annotators. Starting in 2003, TDT is moving to a more realistic model that permits overlapping clusters (stories may be on more than one topic) and encourages the creation of a hierarchy to structure the relationships between clusters (topics). We explore a range of possible evaluation models for this modified TDT clustering task to understand the best approach for mapping between the human-generated \"truth\" clusters and a much richer hierarchical structure. We demonstrate that some obvious evaluation techniques fail for degenerate cases. For a few others we attempt to develop an intuitive sense of what the evaluation numbers mean. We settle on some approaches that incorporate a strong balance between cluster errors (misses and false alarms) and the distance it takes to travel between stories within the hierarchy.",
"title": ""
},
{
"docid": "db4ed42c9b11ee736ad287eac05f8b29",
"text": "Food is a central part of our lives. Fundamentally, we need food to survive. Socially, food is something that brings people together-individuals interact through and around it. Culturally, food practices reflect our ethnicities and nationalities. Given the importance of food in our daily lives, it is important to understand what role technology currently plays and the roles it can be imagined to play in the future. In this paper we describe the existing and potential design space for HCI in the area of human-food interaction. We present ideas for future work on designing technologies in the area of human-food interaction that celebrate the positive interactions that people have with food as they eat and prepare foods in their everyday lives.",
"title": ""
},
{
"docid": "bc54a39eb7bf57ade7d79efc869be58f",
"text": "Power electronics plays an important role in a wide range of applications in order to achieve high efficiency and performance. Increasing efforts are being made to improve the reliability of power electronics systems to ensure compliance with more stringent constraints on cost, safety, and availability in different applications. This paper presents an overview of the major failure mechanisms of IGBT modules and their handling methods in power converter systems improving reliability. The major failure mechanisms of IGBT modules are presented first, and methods for predicting lifetime and estimating the junction temperature of IGBT modules are then discussed. Subsequently, different methods for detecting open- and short-circuit faults are presented. Finally, fault-tolerant strategies for improving the reliability of power electronic systems under field operation are explained and compared in terms of performance and cost.",
"title": ""
},
{
"docid": "9f60376e3371ac489b4af90026041fa7",
"text": "There is a substantive body of research focusing on women's experiences of intimate partner violence (IPV), but a lack of qualitative studies focusing on men's experiences as victims of IPV. This article addresses this gap in the literature by paying particular attention to hegemonic masculinities and men's perceptions of IPV. Men ( N = 9) participated in in-depth interviews. Interview data were rigorously subjected to thematic analysis, which revealed five key themes in the men's narratives: fear of IPV, maintaining power and control, victimization as a forbidden narrative, critical understanding of IPV, and breaking the silence. Although the men share similar stories of victimization as women, the way this is influenced by their gendered histories is different. While some men reveal a willingness to disclose their victimization and share similar fear to women victims, others reframe their victim status in a way that sustains their own power and control. The men also draw attention to the contextual realities that frame abuse, including histories of violence against the women who used violence and the realities of communities suffering intergenerational affects of colonized histories. The findings reinforce the importance of in-depth qualitative work toward revealing the context of violence, understanding the impact of fear, victimization, and power/control on men's mental health as well as the outcome of legal and support services and lack thereof. A critical discussion regarding the gendered context of violence, power within relationships, and addressing men's need for support without redefining victimization or taking away from policies and support for women's ongoing victimization concludes the work.",
"title": ""
},
{
"docid": "50fe419f19754991e4356212c4fe2fab",
"text": "In a recent book (Stanovich, 2004), I spent a considerable effort trying to work out the implications of dual process theory for the great rationality debate in cognitive science (see Cohen, 1981; Gigerenzer, 1996; Kahneman and Tversky, 1996; Stanovich, 1999; Stein, 1996). In this chapter, I wish to advance that discussion, first by discussing additions and complications to dual-process theory and then by working through the implications of these ideas for our view of human rationality.",
"title": ""
},
{
"docid": "c4490ecc0b0fb0641dc41313d93ccf44",
"text": "Machine learning predictive modeling algorithms are governed by “hyperparameters” that have no clear defaults agreeable to a wide range of applications. The depth of a decision tree, number of trees in a forest, number of hidden layers and neurons in each layer in a neural network, and degree of regularization to prevent overfitting are a few examples of quantities that must be prescribed for these algorithms. Not only do ideal settings for the hyperparameters dictate the performance of the training process, but more importantly they govern the quality of the resulting predictive models. Recent efforts to move from a manual or random adjustment of these parameters include rough grid search and intelligent numerical optimization strategies. This paper presents an automatic tuning implementation that uses local search optimization for tuning hyperparameters of modeling algorithms in SAS® Visual Data Mining and Machine Learning. The AUTOTUNE statement in the TREESPLIT, FOREST, GRADBOOST, NNET, SVMACHINE, and FACTMAC procedures defines tunable parameters, default ranges, user overrides, and validation schemes to avoid overfitting. Given the inherent expense of training numerous candidate models, the paper addresses efficient distributed and parallel paradigms for training and tuning models on the SAS® ViyaTM platform. It also presents sample tuning results that demonstrate improved model accuracy and offers recommendations for efficient and effective model tuning.",
"title": ""
},
{
"docid": "0e2a72898daf2c2e545b6449b0672cbd",
"text": "Recently, some studies have shown that human movement patterns are strongly associated with regional socioeconomic indicators such as per capita income and poverty rate. These studies, however, are limited in numbers and they have not reached a consensus on what indicators or how effectively they can possibly be used to reflect the socioeconomic characteristics of the underlying populations. In this study, we propose an analytical framework — by coupling large scale mobile phone and urban socioeconomic datasets — to better understand human mobility patterns and their relationships with travelers' socioeconomic status (SES). Six mobility indicators, which include radius of gyration, number of activity locations, activity entropy, travel diversity, kradius of gyration, and unicity, are derived to quantify important aspects of mobile phone users' mobility characteristics. A data fusion approach is proposed to approximate, at an aggregate level, the SES of mobile phone users. Using Singapore and Boston as case studies, we compare the statistical properties of the six mobility indicators in the two cities and analyze how they vary across socioeconomic classes. The results provide a multifaceted view of the relationships between mobility and SES. Specifically, it is found that phone user groups that are generally richer tend to travel shorter in Singapore but longer in Boston. One of the potential reasons, as suggested by our analysis, is that the rich neighborhoods in the two cities are respectively central and peripheral. For three other mobility indicators that reflect the diversity of individual travel and activity patterns (i.e., number of activity locations, activity entropy, and travel diversity), we find that for both cities, phone users across different socioeconomic classes exhibit very similar characteristics. This indicates that wealth level, at least in Singapore and Boston, is not a factor that restricts how people travel around in the city. In sum, our comparative analysis suggests that the relationship between mobility and SES could vary among cities, and such relationship is influenced by the spatial arrangement of housing, employment opportunities, and human ac-",
"title": ""
},
{
"docid": "6929c8fc722f108c99ce8966b3989bd9",
"text": "Cisco’s NetFlow protocol and Internet engineering task force’s Internet protocol flow information export open standard are widely deployed protocols for collecting network flow statistics. Understanding intricate traffic patterns in these network statistics requires sophisticated flow analysis tools that can efficiently mine network flow records. We present a network flow query language (NFQL), which can be used to write expressive queries to process flow records, aggregate them into groups, apply absolute or relative filters, and invoke Allen interval algebra rules to merge group records. We demonstrate nfql, an implementation of the language that has comparable execution times to SiLK and flow-tools with absolute filters. However, it trades performance when grouping and merging flows in favor of more operational capabilities that help increase the expressiveness of NFQL. We present two applications to demonstrate richer capabilities of the language. We show queries to identify flow signatures of popular applications and behavioural signatures to identify SSH compromise detection attacks.",
"title": ""
},
{
"docid": "2cc93a5ba7bfd29578b7fe183c7f2fe6",
"text": "Erasure coding schemes provide higher durability at lower storage cost, and thus constitute an attractive alternative to replication in distributed storage systems, in particular for storing rarely accessed \"cold\" data. These schemes, however, require an order of magnitude higher recovery bandwidth for maintaining a constant level of durability in the face of node failures. In this paper we propose lazy recovery, a technique to reduce recovery bandwidth demands down to the level of replicated storage. The key insight is that a careful adjustment of recovery rate substantially reduces recovery bandwidth, while keeping the impact on read performance and data durability low. We demonstrate the benefits of lazy recovery via extensive simulation using a realistic distributed storage configuration and published component failure parameters. For example, when applied to the commonly used RS(14, 10) code, lazy recovery reduces repair bandwidth by up to 76% even below replication, while increasing the amount of degraded stripes by 0.1 percentage points. Lazy recovery works well with a variety of erasure coding schemes, including the recently introduced bandwidth efficient codes, achieving up to a factor of 2 additional bandwidth savings.",
"title": ""
},
{
"docid": "297da17a65e159d66d34d0821484bb3e",
"text": "A hallmark of human cognition is the ability to continually acquire and distill observations of the world into meaningful, predictive theories. In this paper we present a new mechanism for logical theory acquisition which takes a set of observed facts and learns to extract from them a set of logical rules and a small set of core facts which together entail the observations. Our approach is neuro-symbolic in the sense that the rule predicates and core facts are given dense vector representations. The rules are applied to the core facts using a soft unification procedure to infer additional facts. After k steps of forward inference, the consequences are compared to the initial observations and the rules and core facts are then encouraged towards representations that more faithfully generate the observations through inference. Our approach is based on a novel neural forward-chaining differentiable rule induction network. The rules are interpretable and learned compositionally from their predicates, which may be invented. We demonstrate the efficacy of our approach on a variety of ILP rule induction and domain theory learning datasets.",
"title": ""
},
{
"docid": "7b9c80955903f888c423515faaf367e4",
"text": "This paper presents a novel antenna impedance matching system used in the latest Australian SuperDARN class HF radar, at Buckland Park, South Australia. Earlier radar designs used an off-the-shelf log-periodic wideband antenna that is significantly easier to match over the SuperDARN frequency band, but expensive to buy and mount, and had limited capability for azimuthal beamforming. The newer TTFD antenna, used in many recent SuperDARN radars, offers improvement in these areas, but is in essence a narrow band antenna. It is capable of wideband operation at the cost of being difficult to match, frequency dependant, high-impedance and complex load. Previous TTFD matching transformers utilising toroids have been measured and evaluated for their suitability for the Buckland Park radar. A new system based on an LC matching network circuit has been devised to replace them. The design approach and results of the new matching circuit are detailed.",
"title": ""
},
{
"docid": "bb6ed1bf3feedad87fedb302b9864096",
"text": "We present a simple and effective approach to incorporating syntactic structure into neural attention-based encoderdecoder models for machine translation. We rely on graph-convolutional networks (GCNs), a recent class of neural networks developed for modeling graph-structured data. Our GCNs use predicted syntactic dependency trees of source sentences to produce representations of words (i.e. hidden states of the encoder) that are sensitive to their syntactic neighborhoods. GCNs take word representations as input and produce word representations as output, so they can easily be incorporated as layers into standard encoders (e.g., on top of bidirectional RNNs or convolutional neural networks). We evaluate their effectiveness with English-German and English-Czech translation experiments for different types of encoders and observe substantial improvements over their syntax-agnostic versions in all the considered setups.",
"title": ""
},
{
"docid": "a564d62de4afc7e6e5c76f1955809b61",
"text": "The implementation of a polycrystalline silicon solar cell as a microwave groundplane in a low-profile, reduced-footprint microstrip patch antenna design for autonomous communication applications is reported. The effects on the antenna/solar performances due to the integration, different electrical conductivities in the silicon layer and variation in incident light intensity are investigated. The antenna sensitivity to the orientation of the anisotropic solar cell geometry is discussed.",
"title": ""
}
] |
scidocsrr
|
3a7e5d9ff3303b596bfa883a80d26bbb
|
Verilog Implementation of a System for Finding Shortest Path by Using Floyd-Warshall Algorithm
|
[
{
"docid": "333bffc73983bc159248420d76afc7e6",
"text": "In this paper we study approximate landmark-based methods for point-to-point distance estimation in very large networks. These methods involve selecting a subset of nodes as landmarks and computing offline the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, it can be estimated quickly by combining the precomputed distances. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. We therefore explore theoretical insights to devise a variety of simple methods that scale well in very large networks. The efficiency of the suggested techniques is tested experimentally using five real-world graphs having millions of edges. While theoretical bounds support the claim that random landmarks work well in practice, our extensive experimentation shows that smart landmark selection can yield dramatically more accurate results: for a given target accuracy, our methods require as much as 250 times less space than selecting landmarks at random. In addition, we demonstrate that at a very small accuracy loss our techniques are several orders of magnitude faster than the state-of-the-art exact methods. Finally, we study an application of our methods to the task of social search in large graphs.",
"title": ""
}
] |
[
{
"docid": "7643861888d06aa7d4df682ec960926b",
"text": "This meta-analysis explores the relationship between SNS-use and academic performance. Examination of the literature containing quantitative measurements of both SNS-use and academic performance produced a sample of 28 effects sizes (N 1⁄4 101,441) for review. Results indicated a significant negative relationship between SNS-use and academic performance. Further moderation analysis points to test type as an important source of variability in the relationship. We found a negative correlation between SNS-use and GPA, while a positive one for SNS-use and language test. Moreover, we found that the relationship of SNS-use and GPA was more strongly negative in females and college students. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b7e28e79f938b617ba2e2ed7ef1bade3",
"text": "Computing in schools has gained momentum in the last two years resulting in GCSEs in Computing and teachers looking to up skill from Digital Literacy (ICT). For many students the subject of computer science concerns software code but writing code can be challenging, due to specific requirements on syntax and spelling with new ways of thinking required. Not only do many undergraduate students lack these ways of thinking, but there is a general misrepresentation of computing in education. Were computing taught as a more serious subject like science and mathematics, public understanding of the complexities of computer systems would increase, enabling those not directly involved with IT make better informed decisions and avoid incidents such as over budget and underperforming systems. We present our exploration into teaching a variety of computing skills, most significantly \"computational thinking\", to secondary-school age children through three very different engagements. First, we discuss Print craft, in which participants learn about computer-aided design and additive manufacturing by designing and building a miniature world from scratch using the popular open-world game Mine craft and 3D printers. Second, we look at how students can get a new perspective on familiar technology with a workshop using App Inventor, a graphical Android programming environment. Finally, we look at an ongoing after school robotics club where participants face a number of challenges of their own making as they design and create a variety of robots using a number of common tools such as Scratch and Arduino.",
"title": ""
},
{
"docid": "693ad5651306e883a7065b5f79f2cc1e",
"text": "This paper presents a general framework for agglomerative hierarchical clustering based on graphs. Different hierarchical agglomerative clustering algorithms can be obtained from this framework, by specifying an inter-cluster similarity measure, a subgraph of the 13-similarity graph, and a cover routine. We also describe two methods obtained from this framework called hierarchical compact algorithm and hierarchical star algorithm. These algorithms have been evaluated using standard document collections. The experimental results show that our methods are faster and obtain smaller hierarchies than traditional hierarchical algorithms while achieving a similar clustering quality",
"title": ""
},
{
"docid": "df92fe7057593a9312de91c06e1525ca",
"text": "The Formal Theory of Fun and Creativity (1990–2010) [Schmidhuber, J.: Formal theory of creativity, fun, and intrinsic motivation (1990–2010). IEEE Trans. Auton. Mental Dev. 2(3), 230–247 (2010b)] describes principles of a curious and creative agent that never stops generating nontrivial and novel and surprising tasks and data. Two modules are needed: a data encoder and a data creator. The former encodes the growing history of sensory data as the agent is interacting with its environment; the latter executes actions shaping the history. Both learn. The encoder continually tries to encode the created data more efficiently, by discovering new regularities in it. Its learning progress is the wow-effect or fun or intrinsic reward of the creator, which maximizes future expected reward, being motivated to invent skills leading to interesting data that the encoder does not yet know but can easily learn with little computational effort. I have argued that this simple formal principle explains science and art and music and humor. Note: This overview heavily draws on previous publications since 1990, especially Schmidhuber (2010b), parts of which are reprinted with friendly permission by IEEE.",
"title": ""
},
{
"docid": "2f3046369c717cc3dc15632fc163a429",
"text": "We propose FaceVR, a novel image-based method that enables video teleconferencing in VR based on self-reenactment. State-of-the-art face tracking methods in the VR context are focused on the animation of rigged 3D avatars (Li et al. 2015; Olszewski et al. 2016). Although they achieve good tracking performance, the results look cartoonish and not real. In contrast to these model-based approaches, FaceVR enables VR teleconferencing using an image-based technique that results in nearly photo-realistic outputs. The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD), as well as a new data-driven approach for eye tracking from monocular videos. Based on reenactment of a prerecorded stereo video of the person without the HMD, FaceVR incorporates photo-realistic re-rendering in real time, thus allowing artificial modifications of face and eye appearances. For instance, we can alter facial expressions or change gaze directions in the prerecorded target video. In a live setup, we apply these newly introduced algorithmic components.",
"title": ""
},
{
"docid": "711d8291683bd23e2060b56ce7120f23",
"text": "Solving simple arithmetic word problems is one of the challenges in Natural Language Understanding. This paper presents a novel method to learn to use formulas to solve simple arithmetic word problems. Our system, analyzes each of the sentences to identify the variables and their attributes; and automatically maps this information into a higher level representation. It then uses that representation to recognize the presence of a formula along with its associated variables. An equation is then generated from the formal description of the formula. In the training phase, it learns to score the <formula, variables> pair from the systematically generated higher level representation. It is able to solve 86.07% of the problems in a corpus of standard primary school test questions and beats the state-of-the-art by",
"title": ""
},
{
"docid": "69b1305375d2839e4aeb186c3afb6b32",
"text": "One of the goals in the field of mobile robotics is the development of mobile platforms which operate in populated environments. For many tasks it is therefore highly desirable that a robot can track the positions of the humans in its surrounding. In this paper we introduce sample-based joint probabilistic data association filters as a new algorithm to track multiple moving objects. Our method applies Bayesian filtering to adapt the tracking process to the number of objects in the perceptual range of the robot. The approach has been implemented and tested on a real robot using laser-range data. We present experiments illustrating that our algorithm is able to robustly keep track of multiple people. The experiments furthermore show that the approach outperforms other techniques developed so far. KEY WORDS—multi-target tracking, data association, particle filters, people tracking, mobile robot perception",
"title": ""
},
{
"docid": "fe11079fdec24ae62afa1c16eb2387e3",
"text": "Methods for alignment of protein sequences typically measure similarity by using a substitution matrix with scores for all possible exchanges of one amino acid with another. The most widely used matrices are based on the Dayhoff model of evolutionary rates. Using a different approach, we have derived substitution matrices from about 2000 blocks of aligned sequence segments characterizing more than 500 groups of related proteins. This led to marked improvements in alignments and in searches using queries from each of the groups.",
"title": ""
},
{
"docid": "e1ab544e1a00cc6b2f7797f65e084378",
"text": "This research investigates how to introduce synchronous interactive peer learning into an online setting appropriate both for crowdworkers (learning new tasks) and students in massive online courses (learning course material). We present an interaction framework in which groups of learners are formed on demand and then proceed through a sequence of activities that include synchronous group discussion about learner-generated responses. Via controlled experiments with crowdworkers, we show that discussing challenging problems leads to better outcomes than working individually, and incentivizing people to help one another yields still better results. We then show that providing a mini-lesson in which workers consider the principles underlying the tested concept and justify their answers leads to further improvements. Combining the mini-lesson with the discussion of the multiple-choice question leads to significant improvements on that question. We also find positive subjective responses to the peer interactions, suggesting that discussions can improve morale in remote work or learning settings.",
"title": ""
},
{
"docid": "f93ebf9beefe35985b6e31445044e6d1",
"text": "Recent genetic studies have suggested that the colonization of East Asia by modern humans was more complex than a single origin from the South, and that a genetic contribution via a Northern route was probably quite substantial. Here we use a spatially-explicit computer simulation approach to investigate the human migration hypotheses of this region based on one-route or two-route models. We test the likelihood of each scenario by using Human Leukocyte Antigen (HLA) − A, −B, and − DRB1 genetic data of East Asian populations, with both selective and demographic parameters considered. The posterior distribution of each parameter is estimated by an Approximate Bayesian Computation (ABC) approach. Our results strongly support a model with two main routes of colonization of East Asia on both sides of the Himalayas, with distinct demographic histories in Northern and Southern populations, characterized by more isolation in the South. In East Asia, gene flow between populations originating from the two routes probably existed until a remote prehistoric period, explaining the continuous pattern of genetic variation currently observed along the latitude. A significant although dissimilar level of balancing selection acting on the three HLA loci is detected, but its effect on the local genetic patterns appears to be minor compared to those of past demographic events.",
"title": ""
},
{
"docid": "bd3776d1dc36d6a91ea73d3c12ca326c",
"text": "Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89.0% and 82.1% without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at https: //github.com/tensorflow/models/tree/master/research/deeplab.",
"title": ""
},
{
"docid": "541de3d6af2edacf7396e5ca66c385e2",
"text": "This paper presents a simple and intuitive method for mining search engine query logs to get fast query recommendations on a large scale industrial strength search engine. In order to get a more comprehensive solution, we combine two methods together. On the one hand, we study and model search engine users' sequential search behavior, and interpret this consecutive search behavior as client-side query refinement, that should form the basis for the search engine's own query refinement process. On the other hand, we combine this method with a traditional content based similarity method to compensate for the high sparsity of real query log data, and more specifically, the shortness of most query sessions. To evaluate our method, we use one hundred day worth query logs from SINA' search engine to do off-line mining. Then we analyze three independent editors evaluations on a query test set. Based on their judgement, our method was found to be effective for finding related queries, despite its simplicity. In addition to the subjective editors' rating, we also perform tests based on actual anonymous user search sessions.",
"title": ""
},
{
"docid": "81190a4c576f86444a95e75654bddf29",
"text": "Enforcing a variety of security measures (such as intrusion detection systems, and so on) can provide a certain level of protection to computer networks. However, such security practices often fall short in face of zero-day attacks. Due to the information asymmetry between attackers and defenders, detecting zero-day attacks remains a challenge. Instead of targeting individual zero-day exploits, revealing them on an attack path is a substantially more feasible strategy. Such attack paths that go through one or more zero-day exploits are called zero-day attack paths. In this paper, we propose a probabilistic approach and implement a prototype system ZePro for zero-day attack path identification. In our approach, a zero-day attack path is essentially a graph. To capture the zero-day attack, a dependency graph named object instance graph is first built as a supergraph by analyzing system calls. To further reveal the zero-day attack paths hidden in the supergraph, our system builds a Bayesian network based upon the instance graph. By taking intrusion evidence as input, the Bayesian network is able to compute the probabilities of object instances being infected. Connecting the high-probability-instances through dependency relations forms a path, which is the zero-day attack path. The experiment results demonstrate the effectiveness of ZePro for zero-day attack path identification.",
"title": ""
},
{
"docid": "0ded22648ab695e3603784dbead510ff",
"text": "Rendering a face recognition system robust is vital in order to safeguard it against spoof attacks carried out using printed pictures of a victim (also known as print attack) or a replayed video of the person (replay attack). A key property in distinguishing a live, valid access from printed media or replayed videos is by exploiting the information dynamics of the video content, such as blinking eyes, moving lips, and facial dynamics. We advance the state of the art in facial antispoofing by applying a recently developed algorithm called dynamic mode decomposition (DMD) as a general purpose, entirely data-driven approach to capture the above liveness cues. We propose a classification pipeline consisting of DMD, local binary patterns (LBPs), and support vector machines (SVMs) with a histogram intersection kernel. A unique property of DMD is its ability to conveniently represent the temporal information of the entire video as a single image with the same dimensions as those images contained in the video. The pipeline of DMD + LBP + SVM proves to be efficient, convenient to use, and effective. In fact only the spatial configuration for LBP needs to be tuned. The effectiveness of the methodology was demonstrated using three publicly available databases: (1) print-attack; (2) replay-attack; and (3) CASIA-FASD, attaining comparable results with the state of the art, following the respective published experimental protocols.",
"title": ""
},
{
"docid": "ccefef1618c7fa637de366e615333c4b",
"text": "Context: Systems development normally takes place in a specific organizational context, including organizational culture. Previous research has identified organizational culture as a factor that potentially affects the deployment systems development methods. Objective: The purpose is to analyze the relationship between organizational culture and the postadoption deployment of agile methods. Method: This study is a theory development exercise. Based on the Competing Values Model of organizational culture, the paper proposes a number of hypotheses about the relationship between organizational culture and the deployment of agile methods. Results: Inspired by the agile methods thirteen new hypotheses are introduced and discussed. They have interesting implications, when contrasted with ad hoc development and with traditional systems devel-",
"title": ""
},
{
"docid": "bbc936a3b4cd942ba3f2e1905d237b82",
"text": "Silkworm silk is among the most widely used natural fibers for textile and biomedical applications due to its extraordinary mechanical properties and superior biocompatibility. A number of physical and chemical processes have also been developed to reconstruct silk into various forms or to artificially produce silk-like materials. In addition to the direct use and the delicate replication of silk's natural structure and properties, there is a growing interest to introduce more new functionalities into silk while maintaining its advantageous intrinsic properties. In this review we assess various methods and their merits to produce functional silk, specifically those with color and luminescence, through post-processing steps as well as biological approaches. There is a highlight on intrinsically colored and luminescent silk produced directly from silkworms for a wide range of applications, and a discussion on the suitable molecular properties for being incorporated effectively into silk while it is being produced in the silk gland. With these understanding, a new generation of silk containing various functional materials (e.g., drugs, antibiotics and stimuli-sensitive dyes) would be produced for novel applications such as cancer therapy with controlled release feature, wound dressing with monitoring/sensing feature, tissue engineering scaffolds with antibacterial, anticoagulant or anti-inflammatory feature, and many others.",
"title": ""
},
{
"docid": "79623049d961677960ed769d1469fb03",
"text": "Understanding how people communicate during disasters is important for creating systems to support this communication. Twitter is commonly used to broadcast information and to organize support during times of need. During the 2010 Gulf Oil Spill, Twitter was utilized for spreading information, sharing firsthand observations, and to voice concern about the situation. Through building a series of classifiers to detect emotion and sentiment, the distribution of emotion during the Gulf Oil Spill can be analyzed and its propagation compared against released information and corresponding events. We contribute a series of emotion classifiers and a prototype collaborative visualization of the results and discuss their implications.",
"title": ""
},
{
"docid": "f1dd93a6176a45381d226543ce790b5d",
"text": "Staphylococcal cassette chromosome mec (SCCmec) typing is essential for understanding the molecular epidemiology of methicillin-resistant Staphylococcus aureus (MRSA). SCCmec elements are currently classified into types I to V based on the nature of the mec and ccr gene complexes, and are further classified into subtypes according to their junkyard region DNA segments. Previously described traditional SCCmec PCR typing schemes require multiple primer sets and PCR experiments, while a previously published multiplex PCR assay is limited in its ability to detect recently discovered types and subtypes such as SCCmec type V and subtypes IVa, b, c, and d. We designed new sets of SCCmec type- and subtype-unique and specific primers and developed a novel multiplex PCR assay allowing for concomitant detection of the methicillin resistance (mecA gene) (also serving as an internal control) to facilitate detection and classification of all currently described SCCmec types and subtypes I, II, III, IVa, b, c, d, and V. Our assay demonstrated 100% sensitivity and specificity in accurately characterizing 54 MRSA strains belonging to the various known SCCmec types and subtypes, when compared with previously described typing methods. Further application of our assay in 453 randomly selected local clinical isolates confirmed its feasibility and practicality. This novel assay offers a rapid, simple, and feasible method for SCCmec typing of MRSA, and may serve as a useful tool for clinicians and epidemiologists in their efforts to prevent and control infections caused by this organism.",
"title": ""
},
{
"docid": "39ddc850564c3f2a2ca515427629a6d0",
"text": "The structure imposed upon spoken sentences by intonation seems frequently to be orthogohal to their traditional surface-syntactic structure. However, the notion of \"intonational structure\" as formulated by Pierrehumbert, Selkirk, and others, can be subsumed under a rather different notion of syntactic surface structure that emerges from a theory of grammar based on a \"Combinatory\" extension to Categorial Gram, mar. Interpretations of constituents at this level are in tam directly related to \"information structure\", or discourse-related notions of \"theme\", \"rheme\", \"focus\" and \"presupposition\". Some simplifications appear to follow for the problem of integrating syntax and other high-level modules in spoken language systems. One quite normal prosody (13, below) for an answer to the following question (a) intuitively impotes the intonational structure indicated by the brackets (stress, marked in this case by raised pitch, is indicated by capitals): (1) a. I know that Alice prefers velveL But what does MAry prefer? b. ( M A r y prefers) (CORduroy). Such a grouping is orthogonal to the traditional syntactic structure of the sentence. Intonational structure nevertheless remains strongly constrained by meaning. For example, contours imposing bracketings like the following are not allowed: (2) #(Three cats)(in ten prefer corduroy) *I am grateful to Steven Bird, Julia Hirschberg, Aravind Joshi, Mitch Marcus, Janet Pierrehumben, and Bonnie Lynn Webber for comments and advice. They are not to blame for any errors in the translation of their advice into the present form. The research was supposed by DARPA grant no. N0014-85-K0018, and ARO grant no. DAAL03-89-C003 l. 9 Halliday [6] observed that this constraint, which Selkirk [14] has called the \"Sense Unit Condition\", seems to follow from the function of phrasal intonation, which is to convey what will here be called \"information structure\" that is, distinctions of focus, presupposition, and propositional attitude towards enfloes in the discourse model. These discourse entities are more diverse than mere nounphrase or propositional referents, but they do not include such nonconcepts as \"in ten prefer corduroy.\" Among the categories that they do include are what Wilson and Sperber and E. Prince [13] have termed \"open propositions\". One way of introducing an open proposition into the discourse context is by asking a Wh-question. For example, the question in (1), What does Mary prefer? introduces an open proposition. As Jackendoff [7] pointed out, it is natural to think of this open proposition as a functional abstraction, and to express it as follows, using the notation of the A-calculus: (3) Ax [(prefer' x) mary'] (Primes indicate semantic interpretations whose detailed nature is of no direct concern here.) When this function or concept is supplied with an argument corduroy', it reduces to give a proposition, with the same function argument relations as the canonical sentence: (4) (prefer' corduroy') mary' It is the presence of the above open proposition rather than some other that makes the intonation contour in (1)b felicitous. ( l~at is not to say that its presence uniquely determines this response, nor that its explicit mention is necessary for interpreting the response.) These observations have led linguists such as Selkirk to postulate a level of \"intonational structure\", independent of syntactic structure and related to information structure. 
The theory that results can be viewed as in Figure 1: directionality of their arguments and the type of their result: LF:Argument Structure I Surface Structure ~.____q LF:Information Structure I I",
"title": ""
},
{
"docid": "3192a76e421d37fbe8619a3bc01fb244",
"text": "• Develop and implement an internally consistent set of goals and functional policies (this is, a solution to the agency problem) • These internally consistent set of goals and policies aligns the firm’s strengths and weaknesses with external (industry) opportunities and threats (SWOT) in a dynamic balance • The firm’s strategy has to be concerned with the exploitation of its “distinctive competences” (early reference to RBV)",
"title": ""
}
] |
scidocsrr
|
21c7c6a6472e367f99dbb4f5f2e01d6c
|
Few-layer MoS2: a promising layered semiconductor.
|
[
{
"docid": "4ac734960f264716721a0f0fa5305925",
"text": "Most of recent research on layered chalcogenides is understandably focused on single atomic layers. However, it is unclear if single-layer units are the most ideal structures for enhanced gas-solid interactions. To probe this issue further, we have prepared large-area MoS2 sheets ranging from single to multiple layers on 300 nm SiO2/Si substrates using the micromechanical exfoliation method. The thickness and layering of the sheets were identified by optical microscope, invoking recently reported specific optical color contrast, and further confirmed by AFM and Raman spectroscopy. The MoS2 transistors with different thicknesses were assessed for gas-sensing performances with exposure to NO2, NH3, and humidity in different conditions such as gate bias and light irradiation. The results show that, compared to the single-layer counterpart, transistors of few MoS2 layers exhibit excellent sensitivity, recovery, and ability to be manipulated by gate bias and green light. Further, our ab initio DFT calculations on single-layer and bilayer MoS2 show that the charge transfer is the reason for the decrease in resistance in the presence of applied field.",
"title": ""
}
] |
[
{
"docid": "7777858d21dbf120f2024076fd17b27f",
"text": "BACKGROUND\nAlzheimer's disease is characterized by the deposition of amyloid-beta (Aβ) plaques in the brain. Aβ is produced from the sequential cleavage of amyloid precursor protein by β-site amyloid precursor protein-cleaving enzyme 1 (BACE-1) followed by γ-secretase. Verubecestat is an oral BACE-1 inhibitor that reduces the Aβ level in the cerebrospinal fluid of patients with Alzheimer's disease.\n\n\nMETHODS\nWe conducted a randomized, double-blind, placebo-controlled, 78-week trial to evaluate verubecestat at doses of 12 mg and 40 mg per day, as compared with placebo, in patients who had a clinical diagnosis of mild-to-moderate Alzheimer's disease. The coprimary outcomes were the change from baseline to week 78 in the score on the cognitive subscale of the Alzheimer's Disease Assessment Scale (ADAS-cog; scores range from 0 to 70, with higher scores indicating worse dementia) and in the score on the Alzheimer's Disease Cooperative Study Activities of Daily Living Inventory scale (ADCS-ADL; scores range from 0 to 78, with lower scores indicating worse function).\n\n\nRESULTS\nA total of 1958 patients underwent randomization; 653 were randomly assigned to receive verubecestat at a dose of 12 mg per day (the 12-mg group), 652 to receive verubecestat at a dose of 40 mg per day (the 40-mg group), and 653 to receive matching placebo. The trial was terminated early for futility 50 months after onset, which was within 5 months before its scheduled completion, and after enrollment of the planned 1958 patients was complete. The estimated mean change from baseline to week 78 in the ADAS-cog score was 7.9 in the 12-mg group, 8.0 in the 40-mg group, and 7.7 in the placebo group (P=0.63 for the comparison between the 12-mg group and the placebo group and P=0.46 for the comparison between the 40-mg group and the placebo group). The estimated mean change from baseline to week 78 in the ADCS-ADL score was -8.4 in the 12-mg group, -8.2 in the 40-mg group, and -8.9 in the placebo group (P=0.49 for the comparison between the 12-mg group and the placebo group and P=0.32 for the comparison between the 40-mg group and the placebo group). Adverse events, including rash, falls and injuries, sleep disturbance, suicidal ideation, weight loss, and hair-color change, were more common in the verubecestat groups than in the placebo group.\n\n\nCONCLUSIONS\nVerubecestat did not reduce cognitive or functional decline in patients with mild-to-moderate Alzheimer's disease and was associated with treatment-related adverse events. (Funded by Merck; ClinicalTrials.gov number, NCT01739348 .).",
"title": ""
},
{
"docid": "238adc0417c167aeb64c23b576f434d0",
"text": "This paper studies the problem of matching images captured from an unmanned ground vehicle (UGV) to those from a satellite or high-flying vehicle. We focus on situations where the UGV navigates in remote areas with few man-made structures. This is a difficult problem due to the drastic change in perspective between the ground and aerial imagery and the lack of environmental features for image comparison. We do not rely on GPS, which may be jammed or uncertain. We propose a two-step approach: (1) the UGV images are warped to obtain a bird's eye view of the ground, and (2) this view is compared to a grid of satellite locations using whole-image descriptors. We analyze the performance of a variety of descriptors for different satellite map sizes and various terrain and environment types. We incorporate the air-ground matching into a particle-filter framework for localization using the best-performing descriptor. The results show that vision-based UGV localization from satellite maps is not only possible, but often provides better position estimates than GPS estimates, enabling us to improve the location estimates of Google Street View.",
"title": ""
},
{
"docid": "586d89b6d45fd49f489f7fb40c87eb3a",
"text": "Little research has examined the impacts of enterprise resource planning (ERP) systems implementation on job satisfaction. Based on a 12-month study of 2,794 employees in a telecommunications firm, we found that ERP system implementation moderated the relationships between three job characteristics (skill variety, autonomy, and feedback) and job satisfaction. Our findings highlight the key role that ERP system implementation can have in altering wellestablished relationships in the context of technology-enabled organizational change situations. This work also extends research on technology diffusion by moving beyond a focus on technology-centric outcomes, such as system use, to understanding broader job outcomes. Carol Saunders was the accepting senior editor for this paper.",
"title": ""
},
{
"docid": "777d4e55f3f0bbb0544130931006b237",
"text": "Spatial pyramid matching is a standard architecture for categorical image retrieval. However, its performance is largely limited by the prespecified rectangular spatial regions when pooling local descriptors. In this paper, we propose to learn object-shaped and directional receptive fields for image categorization. In particular, different objects in an image are seamlessly constructed by superpixels, while the direction captures human gaze shifting path. By generating a number of superpixels in each image, we construct graphlets to describe different objects. They function as the object-shaped receptive fields for image comparison. Due to the huge number of graphlets in an image, a saliency-guided graphlet selection algorithm is proposed. A manifold embedding algorithm encodes graphlets with the semantics of training image tags. Then, we derive a manifold propagation to calculate the postembedding graphlets by leveraging visual saliency maps. The sequentially propagated graphlets constitute a path that mimics human gaze shifting. Finally, we use the learned graphlet path as receptive fields for local image descriptor pooling. The local descriptors from similar receptive fields of pairwise images more significantly contribute to the final image kernel. Thorough experiments demonstrate the advantage of our approach.",
"title": ""
},
{
"docid": "c967b56cc7a2046cb34cfea25dd702d7",
"text": "We present GJ, a design that extends the Java programming language with generic types and methods. These are both explained and implemented by translation into the unextended language. The translation closely mimics the way generics are emulated by programmers: it erases all type parameters, maps type variables to their bounds, and inserts casts where needed. Some subtleties of the translation are caused by the handling of overriding.GJ increases expressiveness and safety: code utilizing generic libraries is no longer buried under a plethora of casts, and the corresponding casts inserted by the translation are guaranteed to not fail.GJ is designed to be fully backwards compatible with the current Java language, which simplifies the transition from non-generic to generic programming. In particular, one can retrofit existing library classes with generic interfaces without changing their code.An implementation of GJ has been written in GJ, and is freely available on the web.",
"title": ""
},
{
"docid": "f2ab01eaa6c743f7b327f4766eb73947",
"text": "A novel antenna configuration comprising a helical antenna with an integrated lens is demonstrated in this work. The antenna is manufactured by a unique combination of 3D printing of plastic material (ABS) and inkjet printing of silver nano-particle based metallic ink. The integration of lens enhances the gain by around 7 dB giving a peak gain of about 16.4 dBi at 9.4 GHz. The helical antenna operates in the end-fire mode and radiates a left-hand circularly polarized (LHCP) pattern. The 3-dB axial ratio (AR) bandwidth of the antenna with lens is 3.2 %. Due to integration of lens and fully printed processing, this antenna configuration offers high gain performance and requires low cost for manufacturing.",
"title": ""
},
{
"docid": "90469bbf7cf3216b2ab1ee8441fbce14",
"text": "This work presents the evolution of a solution for predictive maintenance to a Big Data environment. The proposed adaptation aims for predicting failures on wind turbines using a data-driven solution deployed in the cloud and which is composed by three main modules. (i) A predictive model generator which generates predictive models for each monitored wind turbine by means of Random Forest algorithm. (ii) A monitoring agent that makes predictions every 10 minutes about failures in wind turbines during the next hour. Finally, (iii) a dashboard where given predictions can be visualized. To implement the solution Apache Spark, Apache Kafka, Apache Mesos and HDFS have been used. Therefore, we have improved the previous work in terms of data process speed, scalability and automation. In addition, we have provided fault-tolerant functionality with a centralized access point from where the status of all the wind turbines of a company localized all over the world can be monitored, reducing O&M costs.",
"title": ""
},
{
"docid": "25a99f97e034cd3dbdb76819e50e6198",
"text": "Nearest neighbor classiication assumes locally constant class conditional probabilities. This assumption becomes invalid in high dimensions with nite samples due to the curse of dimensionality. Severe bias can be introduced under these conditions when using the nearest neighbor rule. We propose a locally adaptive nearest neighbor classiication method to try to minimize bias. We use a Chi-squared distance analysis to compute a exible metric for producing neighborhoods that are highly adaptive to query locations. Neighborhoods are elongated along less relevant feature dimensions and constricted along most innuential ones. As a result, the class conditional probabilities tend to be smoother in the mod-iied neighborhoods, whereby better classiication performance can be achieved. The eecacy of our method is validated and compared against other techniques using a variety of simulated and real world data.",
"title": ""
},
{
"docid": "0eb4a0cb4a40407aea3025e0a3e1b534",
"text": "Telling the story of \"Moana\" became one of the most ambitious things we've ever done at the Walt Disney Animation Studios. We felt a huge responsibility to properly celebrate the culture and mythology of the Pacific Islands, in an epic tale involving demigods, monsters, vast ocean voyages, beautiful lush islands, and a sweeping musical visit to the village and people of Motunui. Join us as we discuss our partnership with our Pacific Islands consultants, known as our \"Oceanic Story Trust,\" the research and development we pursued, and the tremendous efforts of our team of engineers, artists and storytellers who brought the world of \"Moana\" to life.",
"title": ""
},
{
"docid": "76e6c05e41c4e6d3c70c8fedec5c323b",
"text": "Commercial light field cameras provide spatial and angular information, but their limited resolution becomes an important problem in practical use. In this letter, we present a novel method for light field image super-resolution (SR) to simultaneously up-sample both the spatial and angular resolutions of a light field image via a deep convolutional neural network. We first augment the spatial resolution of each subaperture image by a spatial SR network, then novel views between super-resolved subaperture images are generated by three different angular SR networks according to the novel view locations. We improve both the efficiency of training and the quality of angular SR results by using weight sharing. In addition, we provide a new light field image dataset for training and validating the network. We train our whole network end-to-end, and show state-of-the-art performances on quantitative and qualitative evaluations.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "d6496dd2c1e8ac47dc12fde28c83a3d4",
"text": "We describe a natural extension of the banker’s algorithm for deadlock avoidance in operating systems. Representing the control flow of each process as a rooted tree of nodes corresponding to resource requests and releases, we propose a quadratic-time algorithm which decomposes each flow graph into a nested family of regions, such that all allocated resources are released before the control leaves a region. Also, information on the maximum resource claims for each of the regions can be extracted prior to process execution. By inserting operating system calls when entering a new region for each process at runtime, and applying the original banker’s algorithm for deadlock avoidance, this method has the potential to achieve better resource utilization because information on the “localized approximate maximum claims” is used for testing system safety.",
"title": ""
},
{
"docid": "e5c4870acea1c7315cce0561f583626c",
"text": "A discussion of CMOS readout technologies for infrared (IR) imaging systems is presented. First, the description of various types of IR detector materials and structures is given. The advances of detector fabrication technology and microelectronics process technology have led to the development of large format array of IR imaging detectors. For such large IR FPA’s which is the critical component of the advanced infrared imaging system, general requirement and specifications are described. To support a good interface between FPA and downstream signal processing stage, both conventional and recently developed CMOS readout techniques are presented and discussed. Finally, future development directions including the smart focal plane concept are also introduced.",
"title": ""
},
{
"docid": "06a1d90991c5a9039c6758a66205e446",
"text": "In this paper, we study how to improve the domain adaptability of a deletion-based Long Short-Term Memory (LSTM) neural network model for sentence compression. We hypothesize that syntactic information helps in making such models more robust across domains. We propose two major changes to the model: using explicit syntactic features and introducing syntactic constraints through Integer Linear Programming (ILP). Our evaluation shows that the proposed model works better than the original model as well as a traditional non-neural-network-based model in a cross-domain setting.",
"title": ""
},
{
"docid": "6e9ed92dc37e2d7e7ed956ed7b880ff2",
"text": "Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available and that the constraint gradients are sparse. We discuss an SQP algorithm that uses a smooth augmented Lagrangian merit function and makes explicit provision for infeasibility in the original problem and the QP subproblems. SNOPT is a particular implementation that makes use of a semidefinite QP solver. It is based on a limited-memory quasi-Newton approximation to the Hessian of the Lagrangian and uses a reduced-Hessian algorithm (SQOPT) for solving the QP subproblems. It is designed for problems with many thousands of constraints and variables but a moderate number of degrees of freedom (say, up to 2000). An important application is to trajectory optimization in the aerospace industry. Numerical results are given for most problems in the CUTE and COPS test collections (about 900 examples).",
"title": ""
},
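SNOPT itself is a Fortran package; as a rough, freely available analogue of the sequential quadratic programming idea in the record above, the sketch below solves a small inequality-constrained problem with SciPy's SLSQP method (an SQP implementation). The objective, constraint and bounds are invented for illustration and are unrelated to the CUTE/COPS test sets.

```python
import numpy as np
from scipy.optimize import minimize

# Minimize a smooth nonlinear objective subject to a nonlinear inequality
# constraint and simple bounds: the problem class that SQP methods target.
def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

def constraint(x):
    # Feasible when x0 * x1 >= 1 (SciPy expects g(x) >= 0 for 'ineq' constraints).
    return x[0] * x[1] - 1.0

result = minimize(
    objective,
    x0=np.array([2.0, 0.0]),
    method="SLSQP",                     # sequential least-squares QP, an SQP variant
    jac=lambda x: np.array([2 * (x[0] - 1.0), 2 * (x[1] - 2.5)]),
    constraints=[{"type": "ineq", "fun": constraint}],
    bounds=[(0.0, None), (0.0, None)],
)

print(result.x, result.fun, result.success)
```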
{
"docid": "7267e5082c890dfa56a745d3b28425cc",
"text": "Natural Orifice Translumenal Endoscopic Surgery (NOTES) has recently attracted lots of attention, promising surgical procedures with fewer complications, better cosmesis, lower pains and faster recovery. Several robotic systems were developed aiming to enable abdominal surgeries in a NOTES manner. Although these robotic systems demonstrated the surgical concept, characteristics which could fully enable NOTES procedures remain unclear. This paper presents the development of an endoscopic continuum testbed for finalizing system characteristics of a surgical robot for NOTES procedures, which include i) deployability (the testbed can be deployed in a folded endoscope configuration and then be unfolded into a working configuration), ii) adequate workspace, iii) sufficient distal dexterity (e.g. suturing capability), and iv) desired mechanics properties (e.g. enough load carrying capability). Continuum mechanisms were implemented in the design and a diameter of 12mm of this testbed in its endoscope configuration was achieved. Results of this paper could be used to form design references for future development of NOTES robots.",
"title": ""
},
{
"docid": "f6f6fe9b3b7a3e724b0a5ad986b95ce1",
"text": "This paper presents a dynamic approach to document page segmentation. Current page segmentation algorithms lack the ability to dynamically adapt local variations in the size, orientation and distance of components within a page. Our approach builds upon one of the best algorithms, Kise et. al. work based on Area Voronoi Diagrams, which adapts globally to page content to determine algorithm parameters. In our approach, local thresholds are determined dynamically based on parabolic relations between components, and Docstrum based angular and neighborhood features are integrated to improve accuracy. Zone-based evaluation was performed on four sets of printed and handwritten documents in English and Arabic scripts and an increase of 33% in accuracy is reported.",
"title": ""
},
{
"docid": "1ed9151f81e15db5bb08a7979d5eeddb",
"text": "Deep learning has delivered its powerfulness in many application domains, especially in image and speech recognition. As the backbone of deep learning, deep neural networks (DNNs) consist of multiple layers of various types with hundreds to thousands of neurons. Embedded platforms are now becoming essential for deep learning deployment due to their portability, versatility, and energy efficiency. The large model size of DNNs, while providing excellent accuracy, also burdens the embedded platforms with intensive computation and storage. Researchers have investigated on reducing DNN model size with negligible accuracy loss. This work proposes a Fast Fourier Transform (FFT)-based DNN training and inference model suitable for embedded platforms with reduced asymptotic complexity of both computation and storage, making our approach distinguished from existing approaches. We develop the training and inference algorithms based on FFT as the computing kernel and deploy the FFT-based inference model on embedded platforms achieving extraordinary processing speed.",
"title": ""
},
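To illustrate the computational idea behind the FFT-based model in the record above, the NumPy sketch below compares a direct circular convolution against the O(n log n) FFT route (element-wise multiplication in the frequency domain). It is only a toy demonstration of the trick, not the paper's training or inference algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
x = rng.standard_normal(n)   # input activations
w = rng.standard_normal(n)   # defining vector of a circulant weight matrix (n values instead of n*n)

# Direct circular convolution: y[k] = sum_j w[j] * x[(k - j) mod n]  -> O(n^2)
y_direct = np.array([sum(w[j] * x[(k - j) % n] for j in range(n)) for k in range(n)])

# FFT route: convolution becomes element-wise multiplication -> O(n log n)
y_fft = np.fft.ifft(np.fft.fft(w) * np.fft.fft(x)).real

print(np.allclose(y_direct, y_fft))   # True: both compute the same product
```

Because a circulant weight block is fully determined by one length-n vector, the same structure also shrinks weight storage, which is one way FFT-based layers reduce both computation and storage.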
{
"docid": "32b20af2ccaefb5d8ad5d44ba19a053d",
"text": "Many clinicians expect that a history of penile-vaginal penetration will be associated with examination findings of penetrating trauma. A retrospective case review of 36 pregnant adolescent girls who presented for sexual abuse evaluations was performed to determine the presence or absence of genital findings that indicate penetrating trauma. Historical information and photograph documentation were reviewed. Only 2 of the 36 subjects had definitive findings of penetration. This study may be helpful in assisting clinicians and juries to understand that vaginal penetration generally does not result in observable evidence of healed injury to perihymenal tissues.",
"title": ""
},
{
"docid": "064cedd8f636b3d3c004d68eb85a7166",
"text": "This paper presents a strategy to generate generic summary of documents using Probabilistic Latent Semantic Indexing. Generally a document contains several topics rather than a single one. Summaries created by human beings tend to cover several topics to give the readers an overall idea about the original document. Hence we can expect that a summary containing sentences from better part of the topic spectrum should make a better summary. PLSI has proven to be an effective method in topic detection. In this paper we present a method for creating extractive summary of the document by using PLSI to analyze the features of document such as term frequency and graph structure. We also show our results, which was evaluated using ROUGE, and compare the results with other techniques, proposed in the past.",
"title": ""
}
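As a hedged illustration of the PLSI/PLSA machinery the summarizer above relies on, here is a compact EM loop over a tiny toy term-document count matrix. The matrix, topic count and iteration count are arbitrary choices for demonstration; the real system would add the sentence scoring and graph features described in the abstract.

```python
import numpy as np

def plsa(counts, n_topics=2, n_iter=50, seed=0):
    """Fit P(w|z) and P(z|d) for a term-document count matrix by EM."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_w_z = rng.random((n_topics, n_words)); p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    p_z_d = rng.random((n_docs, n_topics));  p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w) proportional to P(z|d) * P(w|z)
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]            # (docs, topics, words)
        p_z_dw = joint / (joint.sum(axis=1, keepdims=True) + 1e-12)
        # M-step: re-estimate both factor matrices from expected counts
        weighted = counts[:, None, :] * p_z_dw                   # n(d,w) * P(z|d,w)
        p_w_z = weighted.sum(axis=0); p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(axis=2); p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_w_z, p_z_d

# Toy corpus: 4 "sentences" by 5 terms.
counts = np.array([[3, 1, 0, 0, 1],
                   [2, 2, 0, 1, 0],
                   [0, 0, 4, 2, 1],
                   [0, 1, 3, 3, 0]], dtype=float)
p_w_z, p_z_d = plsa(counts)
print(p_z_d.round(2))   # topic mixture per sentence, usable for topic-diverse selection
```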
] |
scidocsrr
|
99ca4f824ee164596049ffe436fb6baf
|
A next generation knowledge management system architecture
|
[
{
"docid": "d22390e43aa4525d810e0de7da075bbf",
"text": "information, including knowledge management and e-business applications. Next-generation knowledge management systems will likely rely on conceptual models in the form of ontologies to precisely define the meaning of various symbols. For example, FRODO (a Framework for Distributed Organizational Memories) uses ontologies for knowledge description in organizational memories,1 CoMMA (Corporate Memory Management through Agents) investigates agent technologies for maintaining ontology-based knowledge management systems,2 and Steffen Staab and his colleagues have discussed the methodologies and processes for building ontology-based systems.3 Here we present an integrated enterprise-knowledge management architecture for implementing an ontology-based knowledge management system (OKMS). We focus on two critical issues related to working with ontologies in real-world enterprise applications. First, we realize that imposing a single ontology on the enterprise is difficult if not impossible. Because organizations must devise multiple ontologies and thus require integration mechanisms, we consider means for combining distributed and heterogeneous ontologies using mappings. Additionally, a system’s ontology often must reflect changes in system requirements and focus, so we developed guidelines and an approach for managing the difficult and complex ontology-evolution process.",
"title": ""
}
] |
[
{
"docid": "75dfe6e25e7c542d13d1712112da4712",
"text": "Obtaining data in the real world is subject to imperfections and the appearance of noise is a common consequence of such flaws. In classification, class noise will deteriorate the performance of a classifier, as it may severely mislead the model building. Among the strategies emerged to deal with class noise, the most popular is that of filtering. However, instance filtering can be harmful as it may eliminate more examples than necessary or produce loss of information. An ideal option would be relabeling the noisy instances, avoiding losing data, but instance correcting is harder to achieve and may lead to wrong information being introduced in the dataset. For this reason, we advance a new proposal based on an ensemble of noise filters with the goal not only of accurately filtering the mislabeled instances, but also correcting them when possible. A noise score is also applied to support the filtering and relabeling process. The proposal, named CNC-NOS (Class Noise Cleaner with Noise Scoring), is compared against state-of-the-art noise filters and correctors, showing that it is able to deliver a quality training instance set that overcomes the limitations of such techniques, both in terms of classification accuracy and properly treated instances.",
"title": ""
},
{
"docid": "cefa0a3c3a80fa0a170538abdb3f7e46",
"text": "This tutorial introduces the basics of emerging nonvolatile memory (NVM) technologies including spin-transfer-torque magnetic random access memory (STTMRAM), phase-change random access memory (PCRAM), and resistive random access memory (RRAM). Emerging NVM cell characteristics are summarized, and device-level engineering trends are discussed. Emerging NVM array architectures are introduced, including the one-transistor-one-resistor (1T1R) array and the cross-point array with selectors. Design challenges such as scaling the write current and minimizing the sneak path current in cross-point array are analyzed. Recent progress on megabit-to gigabit-level prototype chip demonstrations is summarized. Finally, the prospective applications of emerging NVM are discussed, ranging from the last-level cache to the storage-class memory in the memory hierarchy. Topics of three-dimensional (3D) integration and radiation-hard NVM are discussed. Novel applications beyond the conventional memory applications are also surveyed, including physical unclonable function for hardware security, reconfigurable routing switch for field-programmable gate array (FPGA), logic-in-memory and nonvolatile cache/register/flip-flop for nonvolatile processor, and synaptic device for neuro-inspired computing.",
"title": ""
},
{
"docid": "3608939d057889c2731b12194ef28ea6",
"text": "Permanent magnets with rare earth materials are widely used in interior permanent magnet synchronous motors (IPMSMs) in Hybrid Electric Vehicles (HEVs). The recent price rise of rare earth materials has become a serious concern. A Switched Reluctance Motor (SRM) is one of the candidates for HEV rare-earth-free-motors. An SRM has been developed with dimensions, maximum torque, operating area, and maximum efficiency that all compete with the IPMSM. The efficiency map of the SRM is different from that of the IPMSM; thus, direct comparison has been rather difficult. In this paper, a comparison of energy consumption between the SRM and the IPMSM using four standard driving schedules is carried out. In HWFET and NEDC driving schedules, the SRM is found to have better efficiency because its efficiency is high at the high-rotational-speed region.",
"title": ""
},
{
"docid": "836818987ad40fd67d43fbc26f4bdc0f",
"text": "Although psilocybin has been used for centuries for religious purposes, little is known scientifically about its acute and persisting effects. This double-blind study evaluated the acute and longer-term psychological effects of a high dose of psilocybin relative to a comparison compound administered under comfortable, supportive conditions. The participants were hallucinogen-naïve adults reporting regular participation in religious or spiritual activities. Two or three sessions were conducted at 2-month intervals. Thirty volunteers received orally administered psilocybin (30 mg/70 kg) and methylphenidate hydrochloride (40 mg/70 kg) in counterbalanced order. To obscure the study design, six additional volunteers received methylphenidate in the first two sessions and unblinded psilocybin in a third session. The 8-h sessions were conducted individually. Volunteers were encouraged to close their eyes and direct their attention inward. Study monitors rated volunteers’ behavior during sessions. Volunteers completed questionnaires assessing drug effects and mystical experience immediately after and 2 months after sessions. Community observers rated changes in the volunteer’s attitudes and behavior. Psilocybin produced a range of acute perceptual changes, subjective experiences, and labile moods including anxiety. Psilocybin also increased measures of mystical experience. At 2 months, the volunteers rated the psilocybin experience as having substantial personal meaning and spiritual significance and attributed to the experience sustained positive changes in attitudes and behavior consistent with changes rated by community observers. When administered under supportive conditions, psilocybin occasioned experiences similar to spontaneously occurring mystical experiences. The ability to occasion such experiences prospectively will allow rigorous scientific investigations of their causes and consequences.",
"title": ""
},
{
"docid": "5fc02317117c3068d1409a42b025b018",
"text": "Explaining the causes of infeasibility of Boolean formulas has practical applications in numerous fields, such as artificial intelligence (repairing inconsistent knowledge bases), formal verification (abstraction refinement and unbounded model checking), and electronic design (diagnosing and correcting infeasibility). Minimal unsatisfiable subformulas (MUSes) provide useful insights into the causes of infeasibility. An unsatisfiable formula often has many MUSes. Based on the application domain, however, MUSes with specific properties might be of interest. In this paper, we tackle the problem of finding a smallest-cardinality MUS (SMUS) of a given formula. An SMUS provides a succinct explanation of infeasibility and is valuable for applications that are heavily affected by the size of the explanation. We present (1) a baseline algorithm for finding an SMUS, founded on earlier work for finding all MUSes, and (2) a new branch-and-bound algorithm called Digger that computes a strong lower bound on the size of an SMUS and splits the problem into more tractable subformulas in a recursive search tree. Using two benchmark suites, we experimentally compare Digger to the baseline algorithm and to an existing incomplete genetic algorithm approach. Digger is shown to be faster in nearly all cases. It is also able to solve far more instances within a given runtime limit than either of the other approaches.",
"title": ""
},
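The Digger algorithm above is far more sophisticated, but the object it searches for can be pinned down in a few lines: a smallest-cardinality subset of clauses that is itself unsatisfiable. The brute-force sketch below enumerates clause subsets in increasing size over a tiny CNF formula, with satisfiability checked by exhausting assignments; it is exponential and intended only to make the definition concrete, not to compete with a real SMUS solver.

```python
from itertools import combinations, product

def satisfiable(clauses, n_vars):
    """Brute-force SAT check; clauses are lists of non-zero ints (DIMACS style)."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            return True
    return False

def smallest_mus(clauses, n_vars):
    """Return a smallest-cardinality unsatisfiable subset of 'clauses', or None."""
    for k in range(1, len(clauses) + 1):
        for subset in combinations(clauses, k):
            if not satisfiable(list(subset), n_vars):
                return list(subset)
    return None

# Tiny unsatisfiable formula: (x1) & (~x1 | x2) & (~x2) & (x1 | x2)
clauses = [[1], [-1, 2], [-2], [1, 2]]
print(smallest_mus(clauses, n_vars=2))   # -> [[1], [-1, 2], [-2]]
```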
{
"docid": "80c745ee8535d9d53819ced4ad8f996d",
"text": "Wireless Sensor Networks (WSN) are vulnerable to various sensor faults and faulty measurements. This vulnerability hinders efficient and timely response in various WSN applications, such as healthcare. For example, faulty measurements can create false alarms which may require unnecessary intervention from healthcare personnel. Therefore, an approach to differentiate between real medical conditions and false alarms will improve remote patient monitoring systems and quality of healthcare service afforded by WSN. In this paper, a novel approach is proposed to detect sensor anomaly by analyzing collected physiological data from medical sensors. The objective of this method is to effectively distinguish false alarms from true alarms. It predicts a sensor value from historic values and compares it with the actual sensed value for a particular instance. The difference is compared against a threshold value, which is dynamically adjusted, to ascertain whether the sensor value is anomalous. The proposed approach has been applied to real healthcare datasets and compared with existing approaches. Experimental results demonstrate the effectiveness of the proposed system, providing high Detection Rate (DR) and low False Positive Rate (FPR).",
"title": ""
},
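The abstract above gives no formulas, but the loop it describes (predict the next physiological reading from recent history, then compare the prediction error against a dynamically adjusted threshold) can be sketched directly. The moving-average predictor, window length and threshold multiplier below are illustrative choices, not the authors' exact model.

```python
import numpy as np

def flag_anomalies(readings, window=5, k=3.0):
    """Flag samples whose prediction error exceeds a dynamically adjusted threshold.

    Prediction: mean of the previous 'window' samples.
    Threshold : k times the rolling standard deviation of the same window.
    """
    readings = np.asarray(readings, dtype=float)
    flags = np.zeros(len(readings), dtype=bool)
    for t in range(window, len(readings)):
        history = readings[t - window:t]
        predicted = history.mean()
        threshold = k * (history.std() + 1e-6)   # avoid a zero threshold on flat signals
        if abs(readings[t] - predicted) > threshold:
            flags[t] = True                      # candidate faulty reading / false alarm
    return flags

# Hypothetical heart-rate stream with one spurious spike at index 10.
hr = [72, 73, 71, 72, 74, 73, 72, 73, 74, 72, 140, 73, 72, 74, 73]
print(np.flatnonzero(flag_anomalies(hr)))   # -> [10]
```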
{
"docid": "809d03fd69aebc7573463756a535de18",
"text": "We describe Venture, an interactive virtual machine for probabilistic programming that aims to be sufficiently expressive, extensible, and efficient for general-purpose use. Like Church, probabilistic models and inference problems in Venture are specified via a Turing-complete, higher-order probabilistic language descended from Lisp. Unlike Church, Venture also provides a compositional language for custom inference strategies, assembled from scalable implementations of several exact and approximate techniques. Venture is thus applicable to problems involving widely varying model families, dataset sizes and runtime/accuracy constraints. We also describe four key aspects of Venture’s implementation that build on ideas from probabilistic graphical models. First, we describe the stochastic procedure interface (SPI) that specifies and encapsulates primitive random variables, analogously to conditional probability tables in a Bayesian network. The SPI supports custom control flow, higher-order probabilistic procedures, partially exchangeable sequences and “likelihood-free” stochastic simulators, all with custom proposals. It also supports the integration of external models that dynamically create, destroy and perform inference over latent variables hidden from Venture. Second, we describe probabilistic execution traces (PETs), which represent execution histories of Venture programs. Like Bayesian networks, PETs capture conditional dependencies, but PETs also represent existential dependencies and exchangeable coupling. Third, we describe partitions of execution histories called scaffolds that can be efficiently constructed from PETs and that factor global inference problems into coherent sub-problems. Finally, we describe a family of stochastic regeneration algorithms for efficiently modifying PET fragments contained within scaffolds without visiting conditionally independent random choices. Stochastic regeneration insulates inference algorithms from the complexities introduced by changes in execution structure, with runtime that scales linearly in cases where previous approaches often scaled quadratically and were therefore impractical. We show how to use stochastic regeneration and the SPI to implement general-purpose inference strategies such as Metropolis-Hastings, Gibbs sampling, and blocked proposals based on hybrids with both particle Markov chain Monte Carlo and mean-field variational inference techniques.",
"title": ""
},
{
"docid": "c62742c65b105a83fa756af9b1a45a37",
"text": "This article treats numerical methods for tracking an implicitly defined path. The numerical precision required to successfully track such a path is difficult to predict a priori, and indeed, it may change dramatically through the course of the path. In current practice, one must either choose a conservatively large numerical precision at the outset or re-run paths multiple times in successively higher precision until success is achieved. To avoid unnecessary computational cost, it would be preferable to adaptively adjust the precision as the tracking proceeds in response to the local conditioning of the path. We present an algorithm that can be set to either reactively adjust precision in response to step failure or proactively set the precision using error estimates. We then test the relative merits of reactive and proactive adaptation on several examples arising as homotopies for solving systems of polynomial equations.",
"title": ""
},
{
"docid": "3cd2bfe8257f2212513ecd614f6b9fef",
"text": "Carbon aerogels demonstrate wide applications for their ultralow density, rich porosity, and multifunctionalities. Their compressive elasticity has been achieved by different carbons. However, reversibly high stretchability of neat carbon aerogels is still a great challenge owing to their extremely dilute brittle interconnections and poorly ductile cells. Here we report highly stretchable neat carbon aerogels with a retractable 200% elongation through hierarchical synergistic assembly. The hierarchical buckled structures and synergistic reinforcement between graphene and carbon nanotubes enable a temperature-invariable, recoverable stretching elasticity with small energy dissipation (~0.1, 100% strain) and high fatigue resistance more than 106 cycles. The ultralight carbon aerogels with both stretchability and compressibility were designed as strain sensors for logic identification of sophisticated shape conversions. Our methodology paves the way to highly stretchable carbon and neat inorganic materials with extensive applications in aerospace, smart robots, and wearable devices. Improved compressive elasticity was lately demonstrated for carbon aerogels but the problem of reversible stretchability remained a challenge. Here the authors use a hierarchical structure design and synergistic effects between carbon nanotubes and graphene to achieve high stretchability in carbon aerogels.",
"title": ""
},
{
"docid": "0d28ddef1fa86942da679aec23dff890",
"text": "Electronic patient records remain a rather unexplored, but potentially rich data source for discovering correlations between diseases. We describe a general approach for gathering phenotypic descriptions of patients from medical records in a systematic and non-cohort dependent manner. By extracting phenotype information from the free-text in such records we demonstrate that we can extend the information contained in the structured record data, and use it for producing fine-grained patient stratification and disease co-occurrence statistics. The approach uses a dictionary based on the International Classification of Disease ontology and is therefore in principle language independent. As a use case we show how records from a Danish psychiatric hospital lead to the identification of disease correlations, which subsequently can be mapped to systems biology frameworks.",
"title": ""
},
{
"docid": "dd9e89b7e0c70fcc542a185d6bd98763",
"text": "This study describes metaphorical conceptualizations of the foreign exchange market held by market participants and examines how these metaphors socially construct the financial market. Findings are based on 55 semi-structured interviews with senior foreign exchange experts at banks and at financial news providers in Europe. We analysed interview transcripts by metaphor analysis, a method based on cognitive linguistics. Results indicate that market participants' understanding of financial markets revolves around seven metaphors, namely the market as a bazaar, as a machine, as gambling, as sports, as war, as a living being and as an ocean. Each of these metaphors highlights and conceals certain aspects of the foreign exchange market and entails a different set of implications on crucial market dimensions, such as the role of other market participants and market predictability. A correspondence analysis supports our assumption that metaphorical thinking corresponds with implicit assumptions about market predictability. A comparison of deliberately generated and implicitly used metaphors reveals notable differences. In particular, implicit metaphors are predominantly organic rather than mechanical. In contrast to academic models, interactive and organic metaphors, and not the machine metaphor, dominate the market accounts of participants.",
"title": ""
},
{
"docid": "3ad25dabe3b740a91b939a344143ea9e",
"text": "Recently, much attention in research and practice has been devoted to the topic of IT consumerization, referring to the adoption of private consumer IT in the workplace. However, research lacks an analysis of possible antecedents of the trend on an individual level. To close this gap, we derive a theoretical model for IT consumerization behavior based on the theory of planned behavior and perform a quantitative analysis. Our investigation shows that it is foremost determined by normative pressures, specifically the behavior of friends, co-workers and direct supervisors. In addition, behavioral beliefs and control beliefs were found to affect the intention to use non-corporate IT. With respect to the former, we found expected performance improvements and an increase in ease of use to be two of the key determinants. As for the latter, especially monetary costs and installation knowledge were correlated with IT consumerization intention.",
"title": ""
},
{
"docid": "0d0d11c1e340e67939cfba0cde4783ed",
"text": "Recent research effort in poem composition has focused on the use of automatic language generation to produce a polished poem. A less explored question is how effectively a computer can serve as an interactive assistant to a poet. For this purpose, we built a web application that combines rich linguistic knowledge from classical Chinese philology with statistical natural language processing techniques. The application assists users in composing a ‘couplet’—a pair of lines in a traditional Chinese poem—by making suggestions for the next and corresponding characters. A couplet must meet a complicated set of requirements on phonology, syntax, and parallelism, which are challenging for an amateur poet to master. The application checks conformance to these requirements and makes suggestions for characters based on lexical, syntactic, and semantic properties. A distinguishing feature of the application is its extensive use of linguistic knowledge, enabling it to inform users of specific phonological principles in detail, and to explicitly model semantic parallelism, an essential characteristic of Chinese poetry. We evaluate the quality of poems composed solely with characters suggested by the application, and the coverage of its character suggestions. .................................................................................................................................................................................",
"title": ""
},
{
"docid": "edba5ee93ead361ac4398c0f06d3ba06",
"text": "We describe an Arabic-Hebrew parallel corpus of TED talks built upon WIT, the Web inventory that repurposes the original content of the TED website in a way which is more convenient for MT researchers. The benchmark consists of about 2,000 talks, whose subtitles in Arabic and Hebrew have been accurately aligned and rearranged in sentences, for a total of about 3.5M tokens per language. Talks have been partitioned in train, development and test sets similarly in all respects to the MT tasks of the IWSLT 2016 evaluation campaign. In addition to describing the benchmark, we list the problems encountered in preparing it and the novel methods designed to solve them. Baseline MT results and some measures on sentence length are provided as an extrinsic evaluation of the quality of the benchmark.",
"title": ""
},
{
"docid": "22bbeceff175ee2e9a462b753ce24103",
"text": "BACKGROUND\nEUS-guided FNA can help diagnose and differentiate between various pancreatic and other lesions.The aim of this study was to compare approaches among involved/relevant physicians to the controversies surrounding the use of FNA in EUS.\n\n\nMETHODS\nA five-case survey was developed, piloted, and validated. It was collected from a total of 101 physicians, who were all either gastroenterologists (GIs), surgeons or oncologists. The survey compared the management strategies chosen by members of these relevant disciplines regarding EUS-guided FNA.\n\n\nRESULTS\nFor CT operable T2NOM0 pancreatic tumors the research demonstrated variance as to whether to undertake EUS-guided FNA, at p < 0.05. For inoperable pancreatic tumors 66.7% of oncologists, 62.2% of surgeons and 79.1% of GIs opted for FNA (p < 0.05). For cystic pancreatic lesions, oncologists were more likely to send patients to surgery without FNA. For stable simple pancreatic cysts (23 mm), most physicians (66.67%) did not recommend FNA. For a submucosal gastric 19 mm lesion, 63.2% of surgeons recommended FNA, vs. 90.0% of oncologists (p < 0.05).\n\n\nCONCLUSIONS\nControversies as to ideal application of EUS-FNA persist. Optimal guidelines should reflect the needs and concerns of the multidisciplinary team who treat patients who need EUS-FNA. Multi-specialty meetings assembled to manage patients with these disorders may be enlightening and may help develop consensus.",
"title": ""
},
{
"docid": "ab97caed9c596430c3d76ebda55d5e6e",
"text": "A 1.5 GHz low noise amplifier for a Global Positioning System (GPS) receiver has been implemented in a 0.6 /spl mu/m CMOS process. This amplifier provides a forward gain of 22 dB with a noise figure of only 3.5 dB while drawing 30 mW from a 1.5 V supply. To the authors' knowledge, this represents the lowest noise figure reported to date for a CMOS amplifier operating above 1 GHz.",
"title": ""
},
{
"docid": "57bebb90000790a1d76a400f69d5736d",
"text": "In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object's 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method is called CLARET (Correction via Limited-Angle Residues in External Beam Therapy) and consists of two stages: registration preceded by shape space and regression learning. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projec-tion(s) and the digitally reconstructed radiograph(s) (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region's motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used in estimating the model parameters from the 2D projection intensity residues in the registration. The method's application to Image-guided Radiation Therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time imaging 2D projection or a small set thereof.",
"title": ""
},
{
"docid": "844a39889bd671a8b9abe085b2e0a982",
"text": "1 One may wonder, ...] how complex organisms evolve at all. They seem to have so many genes, so many multiple or pleiotropic eeects of any one gene, so many possibilities for lethal mutations in early development, and all sorts of problems due to their long development. Abstract: The problem of complex adaptations is studied in two largely disconnected research traditions: evolutionary biology and evolutionary computer science. This paper summarizes the results from both areas and compares their implications. In evolutionary computer science it was found that the Darwinian process of mutation, recombination and selection is not universally eeective in improving complex systems like computer programs or chip designs. For adaptation to occur, these systems must possess \"evolvability\", i.e. the ability of random variations to sometimes produce improvement. It was found that evolvability critically depends on the way genetic variation maps onto phenotypic variation, an issue known as the representation problem. The genotype-phenotype map determines the variability of characters, which is the propensity to vary. Variability needs to be distinguished from variation, which are the actually realized diierences between individuals. The genotype-phenotype map is the common theme underlying such varied biological phenomena as genetic canalization, developmental constraints, biological versatility , developmental dissociability, morphological integration, and many more. For evolutionary biology the representation problem has important implications: how is it that extant species acquired a genotype-phenotype map which allows improvement by mutation and selection? Is the genotype-phenotype map able to change in evolution? What are the selective forces, if any, that shape the genotype-phenotype map? We propose that the genotype-phenotype map can evolve by two main routes: epistatic mutations, or the creation of new genes. A common result for organismic design is modularity. By modularity we mean a genotype-phenotype map in which there are few pleiotropic eeects among characters serving diierent functions, with pleiotropic eeects falling mainly among characters that are part of a single functional complex. Such a design is expected to improve evolvability by limiting the interference between the adaptation of diierent functions. Several population genetic models are reviewed that are intended to explain the evolutionary origin of a modular design. While our current knowledge is insuucient to assess the plausibil-ity of these models, they form the beginning of a framework for understanding the evolution of the genotype-phenotype map.",
"title": ""
},
{
"docid": "67e008db2a218b4e307003c919a32a8a",
"text": "Relay deployment in Orthogonal Frequency Division Multipl e Access (OFDMA) based cellular networks helps in coverage extension and/or capacity improvement. To quantify capacity improvement, blocking probability of voice traffic is typically calculated using Erlang B formula. This calculation is based on the assumption that all users require same amount of resourc es to satisfy their rate requirement. However, in an OFDMA system, each user requires different number of su bcarriers to meet its rate requirement. This resource requirement depends on the Signal to Interference Ratio (SIR) experienced by a user. Therefore, the Erlang B formula can not be employed to compute blocking p robability in an OFDMA network.In this paper, we determine an analytical expression to comput e the blocking probability of relay based cellular OFDMA network. We determine an expression of the probability distribution of the user’s resource requirement based on its experienced SIR. Then, we classify the users into various classes depending upon their subcarrier requirement. We consider the system to be a multi-dimensional system with different classes and evaluate the blocking probabili ty of system using the multi-dimensional Erlang loss formulas. This model is useful in the performance evaluation, design, planning of resources and call admission control of relay based cellular OFDMA networks like LTE.",
"title": ""
},
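The multi-dimensional Erlang loss model the record above relies on can be evaluated with the Kaufman-Roberts recursion when each class requests a fixed number of subcarriers. The sketch below is a generic implementation of that recursion; the class loads and subcarrier requirements are hypothetical numbers for illustration, not the SIR-derived distribution from the paper.

```python
def kaufman_roberts(capacity, loads, demands):
    """Per-class blocking probabilities for a multi-rate Erlang loss system.

    capacity : total number of resource units (e.g. subcarriers)
    loads    : offered traffic per class in Erlangs, a_k
    demands  : units requested per call of class k, b_k
    """
    q = [0.0] * (capacity + 1)
    q[0] = 1.0
    for n in range(1, capacity + 1):
        q[n] = sum(a * b * q[n - b] for a, b in zip(loads, demands) if n - b >= 0) / n
    total = sum(q)
    occupancy = [x / total for x in q]                 # P(n units busy)
    return [sum(occupancy[capacity - b + 1:]) for b in demands]

# Example: 24 subcarriers shared by three classes needing 1, 2 and 4 subcarriers per call.
blocking = kaufman_roberts(capacity=24, loads=[8.0, 4.0, 2.0], demands=[1, 2, 4])
print([round(b, 4) for b in blocking])

# Sanity check: a single class needing 1 unit reduces to the ordinary Erlang B formula.
print(round(kaufman_roberts(10, [5.0], [1])[0], 4))    # Erlang B(10 servers, 5 Erlangs) ~ 0.0184
```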
{
"docid": "7c8f318224a5ca8ffd12ea32c2a560cf",
"text": "BACKGROUND\nDaily bathing with chlorhexidine gluconate (CHG) is increasingly used in intensive care units to prevent hospital-associated infections, but limited evidence exists for noncritical care settings.\n\n\nMETHODS\nA prospective crossover study was conducted on 4 medical inpatient units in an urban, academic Canadian hospital from May 1, 2014-August 10, 2015. Intervention units used CHG over a 7-month period, including a 1-month wash-in phase, while control units used nonmedicated soap and water bathing. Rates of hospital-associated methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant Enterococcus (VRE) colonization or infection were the primary end point. Hospital-associated S. aureus were investigated for CHG resistance with a qacA/B and smr polymerase chain reaction (PCR) and agar dilution.\n\n\nRESULTS\nCompliance with daily CHG bathing was 58%. Hospital-associated MRSA and VRE was decreased by 55% (5.1 vs 11.4 cases per 10,000 inpatient days, P = .04) and 36% (23.2 vs 36.0 cases per 10,000 inpatient days, P = .03), respectively, compared with control cohorts. There was no significant difference in rates of hospital-associated Clostridium difficile. Chlorhexidine resistance testing identified 1 isolate with an elevated minimum inhibitory concentration (8 µg/mL), but it was PCR negative.\n\n\nCONCLUSIONS\nThis prospective pragmatic study to assess daily bathing for CHG on inpatient medical units was effective in reducing hospital-associated MRSA and VRE. A critical component of CHG bathing on medical units is sustained and appropriate application, which can be a challenge to accurately assess and needs to be considered before systematic implementation.",
"title": ""
}
] |
scidocsrr
|
c0211d3bcfbf90869589d724466370e6
|
Fraud Detection in Credit Card System UsingWeb Mining
|
[
{
"docid": "126d8080f7dd313d534a95d8989b0fbd",
"text": "Intrusion prevention mechanisms are largely insufficient for protection of databases against Information Warfare attacks by authorized users and has drawn interest towards intrusion detection. We visualize the conflicting motives between an attacker and a detection system as a multi-stage game between two players, each trying to maximize his payoff. We consider the specific application of credit card fraud detection and propose a fraud detection system based on a game-theoretic approach. Not only is this approach novel in the domain of Information Warfare, but also it improvises over existing rule-based systems by predicting the next move of the fraudster and learning at each step.",
"title": ""
},
{
"docid": "342e3fd05878ebff3bc2686fb05009f5",
"text": "Due to a rapid advancement in the electronic commerce technology, use of credit cards has dramatically increased. As credit card becomes the most popular mode of payment, credit card frauds are becoming increasingly rampant in recent years. In this paper, we model the sequence of operations in credit card transaction processing using a confidence-based neural network. Receiver operating characteristic (ROC) analysis technology is also introduced to ensure the accuracy and effectiveness of fraud detection. A neural network is initially trained with synthetic data. If an incoming credit card transaction is not accepted by the trained neural network model (NNM) with sufficiently low confidence, it is considered to be fraudulent. This paper shows how confidence value, neural network algorithm and ROC can be combined successfully to perform credit card fraud detection.",
"title": ""
}
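To make the confidence-plus-ROC idea in the record above concrete, the sketch below trains a small classifier on synthetic transactions, scores each transaction with a fraud confidence, and runs an ROC analysis over possible thresholds. The features, model choice (scikit-learn's MLP) and operating threshold are illustrative stand-ins, not the paper's network or data, and no train/test split is shown since this is only a sketch.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(42)

# Synthetic transactions: [amount, hour-of-day, merchant-risk]; fraud skews large, late and risky.
n = 2000
legit = np.column_stack([rng.gamma(2.0, 30.0, n), rng.uniform(6, 22, n), rng.uniform(0.0, 0.5, n)])
fraud = np.column_stack([rng.gamma(4.0, 80.0, n // 10), rng.uniform(0, 6, n // 10), rng.uniform(0.4, 1.0, n // 10)])
X = np.vstack([legit, fraud])
y = np.concatenate([np.zeros(len(legit)), np.ones(len(fraud))])

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X, y)

scores = model.predict_proba(X)[:, 1]          # confidence that a transaction is fraudulent
fpr, tpr, thresholds = roc_curve(y, scores)    # ROC analysis over all candidate thresholds
print("AUC:", round(roc_auc_score(y, scores), 3))

threshold = 0.5                                # operating point picked from the ROC curve
flagged = scores >= threshold
print("flagged as fraud:", int(flagged.sum()), "of", len(y))
```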
] |
[
{
"docid": "645f4db902246c01476ae941004bcd94",
"text": "The Internet of Things is part of our everyday life, which applies to all aspects of human life; from smart phones and environmental sensors to smart devices used in the industry. Although the Internet of Things has many advantages, there are risks and dangers as well that need to be addressed. The information used and transmitted on Internet of Things contain important info about the daily lives of people, banking information, location and geographical information, environmental and medical information, together with many other sensitive data. Therefore, it is critical to identify and address the security issues and challenges of Internet of Things. In this article, considering the broad scope of this field and its literature, we are going to express some comprehensive information on security challenges of the Internet of Things.",
"title": ""
},
{
"docid": "7a8faa4e8ecef8e28aa2203f0aa9d888",
"text": "In today’s global marketplace, individual firms do not compete as independent entities rather as an integral part of a supply chain. This paper proposes a fuzzy mathematical programming model for supply chain planning which considers supply, demand and process uncertainties. The model has been formulated as a fuzzy mixed-integer linear programming model where data are ill-known andmodelled by triangular fuzzy numbers. The fuzzy model provides the decision maker with alternative decision plans for different degrees of satisfaction. This proposal is tested by using data from a real automobile supply chain. © 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
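A common way to operationalize the kind of fuzzy model described above is to replace each triangular fuzzy coefficient by a crisp value taken from its alpha-cut for a chosen satisfaction degree, and then solve the resulting crisp linear program. The toy two-product planning problem below, solved with SciPy's linprog, illustrates only that defuzzification step under assumed numbers; it is not the authors' formulation.

```python
from scipy.optimize import linprog

def alpha_cut(tri, alpha, side="pessimistic"):
    """Crisp value of a triangular fuzzy number (lo, mid, hi) at satisfaction level alpha."""
    lo, mid, hi = tri
    left, right = lo + alpha * (mid - lo), hi - alpha * (hi - mid)
    return left if side == "pessimistic" else right

alpha = 0.7                                   # decision maker's required satisfaction degree

# Fuzzy demand and fuzzy machine capacity as triangular numbers (lo, mid, hi).
demand_a = alpha_cut((80, 100, 120), alpha, side="optimistic")    # upper bound on product A sales
demand_b = alpha_cut((40, 60, 70), alpha, side="optimistic")
capacity = alpha_cut((150, 180, 200), alpha, side="pessimistic")  # be conservative on capacity

# Maximize 5*xA + 4*xB; linprog minimizes, so negate the profit coefficients.
res = linprog(
    c=[-5.0, -4.0],
    A_ub=[[1.0, 1.5],                         # machine-hours used per unit of A and B
          [1.0, 0.0],
          [0.0, 1.0]],
    b_ub=[capacity, demand_a, demand_b],
    bounds=[(0, None), (0, None)],
)
print(res.x, -res.fun)                        # production plan and profit at this satisfaction level
```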
{
"docid": "4ccea211a4b3b01361a4205990491764",
"text": "published by the press syndicate of the university of cambridge Vygotsky's educational theory in cultural context / edited by Alex Kozulin. .. [et al.]. p. cm. – (Learning in doing) Includes bibliographical references and index.",
"title": ""
},
{
"docid": "50840b0308e1f884b61c9f824b1bf17f",
"text": "The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general purpose multi-core architectures. This model allows programmers to specify the structure of a program as a set of filters that act upon data, and a set of communication channels between them. The StreamIt graphs describe task, data and pipeline parallelism which can be exploited on modern Graphics Processing Units (GPUs), as they support abundant parallelism in hardware. In this paper, we describe the challenges in mapping StreamIt to GPUs and propose an efficient technique to software pipeline the execution of stream programs on GPUs. We formulate this problem --- both scheduling and assignment of filters to processors --- as an efficient Integer Linear Program (ILP), which is then solved using ILP solvers. We also describe a novel buffer layout technique for GPUs which facilitates exploiting the high memory bandwidth available in GPUs. The proposed scheduling utilizes both the scalar units in GPU, to exploit data parallelism, and multiprocessors, to exploit task and pipeline parallelism. Further it takes into consideration the synchronization and bandwidth limitations of GPUs, and yields speedups between 1.87X and 36.83X over a single threaded CPU.",
"title": ""
},
{
"docid": "fdd7237680ee739b598cd508c4a2ed38",
"text": "Rectovaginal Endometriosis (RVE) is a severe form of endometriosis classified by Kirtner as stage 4 [1,2]. It is less frequent than peritoneal or ovarian endometriosis affecting 3.8% to 37% of patients with endometriosis [3,4]. RVE infiltrates the rectum, vagina, and rectovaginal septum, up to obliteration of the pouch of Douglas [4]. Endometriotic nodules exceeding 30 mm in diameter have 17.9% risk of ureteral involvement [5], while 5.3% to 12% of patients have bowel endometriosis, most commonly found in the recto-sigmoid involving 74% of those patients [3,4].",
"title": ""
},
{
"docid": "8ef4c3c34579c7df96bb56527381d4a7",
"text": "|A simple cascode circuit with the gate voltage of the cascode transistor being controlled by a feedback ampli er and thus named `regulated cascode' is presented. In comparison to the standard cascode circuit the minimum output voltage is lower by about 30 to 60% while the output conductance and the feedback capacitance are lower by about 100 times. An analytical large-signal, small-signal, and noise analysis is carried out. Some applications like current mirrors and voltage ampli ers are discussed. Finally, experimental results con rming the theory are presented.",
"title": ""
},
{
"docid": "bb3ba0a17727d2ea4e2aba74f7144da6",
"text": "A roof automobile antenna module for Long Term Evolution (LTE) application is proposed. The module consists of two LTE antennas for the multiple-input multiple-output (MIMO) method which requests low mutual coupling between the antennas for larger capacity. On the other hand, the installation location for a roof-top module is limited from safety or appearance viewpoint and this makes the multiple LTE antennas located there cannot be separated with enough space. In order to retain high isolation between the two antennas in such compact space, the two antennas are designed to have different shapes, different heights and different polarizations, and their ground planes are placed separately. In the proposed module, one antenna is a monopole type and has its element printed on a shark-fin-shaped substrate which is perpendicular to the car-roof. Another one is a planar inverted-F antenna (PIFA) and has its element on a lower plane parallel to the roof. In this manner, the two antennas cover the LTE-bands with omni-directional radiation in the horizontal directions and high radiation gain. The two antennas have reasonably good isolation between them even the module is compact with a dimension of 62×65×73 mm3.",
"title": ""
},
{
"docid": "eb22a8448b82f6915850fe4d60440b3b",
"text": "In story-based games or other interactive systems, a drama manager (DM) is an omniscient agent that acts to bring about a particular sequence of plot points for the player to experience. Traditionally, the DM's narrative evaluation criteria are solely derived from a human designer. We present a DM that learns a model of the player's storytelling preferences and automatically recommends a narrative experience that is predicted to optimize the player's experience while conforming to the human designer's storytelling intentions. Our DM is also capable of manipulating the space of narrative trajectories such that the player is more likely to make choices that result in the recommended experience. Our DM uses a novel algorithm, called prefix-based collaborative filtering (PBCF), that solves the sequential recommendation problem to find a sequence of plot points that maximizes the player's rating of his or her experience. We evaluate our DM in an interactive storytelling environment based on choose-your-own-adventure novels. Our experiments show that our algorithms can improve the player's experience over the designer's storytelling intentions alone and can deliver more personalized experiences than other interactive narrative systems while preserving players' agency.",
"title": ""
},
{
"docid": "89b54aa0009598a4cb159b196f3749ee",
"text": "Several methods and techniques are potentially useful for the preparation of microparticles in the field of controlled drug delivery. The type and the size of the microparticles, the entrapment, release characteristics and stability of drug in microparticles in the formulations are dependent on the method used. One of the most common methods of preparing microparticles is the single emulsion technique. Poorly soluble, lipophilic drugs are successfully retained within the microparticles prepared by this method. However, the encapsulation of highly water soluble compounds including protein and peptides presents formidable challenges to the researchers. The successful encapsulation of such compounds requires high drug loading in the microparticles, prevention of protein and peptide degradation by the encapsulation method involved and predictable release, both rate and extent, of the drug compound from the microparticles. The above mentioned problems can be overcome by using the double emulsion technique, alternatively called as multiple emulsion technique. Aiming to achieve this various techniques have been examined to prepare stable formulations utilizing w/o/w, s/o/w, w/o/o, and s/o/o type double emulsion methods. This article reviews the current state of the art in double emulsion based technologies for the preparation of microparticles including the investigation of various classes of substances that are pharmaceutically and biopharmaceutically active.",
"title": ""
},
{
"docid": "4748baf1813e3fc55d3ae723a21a5f98",
"text": "YouTube currently accounts for a significant percentage of the Internet’s global traffic. Hence, understanding the characteristics of the YouTube traffic generation pattern can provide a significant advantage in predicting user video quality and in enhancing network design. In this paper we present a characterization of the traffic generated by YouTube when accessed from a regular PC. Based on this characterization, a YouTube server traffic generation model is proposed, which, for example, can be easily implemented in simulation tools. The derived characterization and model are based on experimental evaluations of traffic generated by the application layer of YouTube servers. A YouTube server commences the download with an initial burst and later throttles down the generation rate. If the available bandwidth is reduced (e.g., in the presence of network congestion), the server behaves as if the data excess that cannot be transmitted due to the reduced bandwidth were accumulated at a server’s buffer, which is later drained if the bandwidth availability is recovered. As we will show, the video clip encoding rate plays a relevant role in determining the traffic generation rate, and therefore, a cumulative density function for the most viewed video clips will be presented. The proposed traffic generation model was implemented in a YouTube emulation server, and the generated synthetic traffic traces were compared to downloads from the original YouTube server. The results show that the relative error between downloads from the emulation server and the original server does not exceed 6% for the 90% of the considered videos.",
"title": ""
},
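The server behaviour characterized above (an initial burst, then a throttled rate tied to the clip's encoding rate, with unsent data accumulating in a backlog whenever the available bandwidth drops) can be emulated with a simple per-second loop. The burst size, throttle factor and bandwidth trace below are invented for illustration and do not reproduce the paper's measured parameters.

```python
def youtube_like_trace(video_size, encoding_rate, bandwidth, burst=2_000_000, throttle=1.25):
    """Return bytes sent per second for a burst-then-throttle server model.

    video_size    : total clip size in bytes
    encoding_rate : clip encoding rate in bytes/s
    bandwidth     : iterable of available bandwidth per second (bytes/s)
    burst         : size of the initial burst in bytes
    throttle      : steady-state sending rate as a multiple of the encoding rate
    """
    sent, backlog, trace = 0, 0.0, []
    for bw in bandwidth:
        if sent >= video_size:
            break
        # Target generation for this second: burst first, then throttled rate plus any backlog.
        target = burst if sent < burst else throttle * encoding_rate + backlog
        target = min(target, video_size - sent)
        actual = min(target, bw)          # the network may not carry the full target
        backlog = max(0.0, target - actual)
        sent += actual
        trace.append(actual)
    return trace

# 10 MB clip at 500 kB/s encoding, with a bandwidth dip between seconds 4 and 7.
bw = [5_000_000] * 4 + [300_000] * 4 + [5_000_000] * 20
print([round(b / 1e6, 2) for b in youtube_like_trace(10_000_000, 500_000, bw)])
```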
{
"docid": "793a1a5ff7b7d2c7fa65ce1eaa65b0c0",
"text": "In this paper we describe our implementation of algorithms for face detection and recognition in color images under Matlab. For face detection, we trained a feedforward neural network to perform skin segmentation, followed by the eyes detection, face alignment, lips detection and face delimitation. The eyes were detected by analyzing the chrominance and the angle between neighboring pixels and, then, the results were used to perform face alignment. The lips were detected based on the analysis of the Red color component intensity in the lower face region. Finally, the faces were delimited using the eyes and lips positions. The face recognition involved a classifier that used the standard deviation of the difference between color matrices of the faces to identify the input face. The algorithms were run on Faces 1999 dataset. The proposed method achieved 96.9%, 89% and 94% correct detection rate of face, eyes and lips, respectively. The correctness rate of the face recognition algorithm was 70.7%.",
"title": ""
},
{
"docid": "3989aa85b78b211e3d6511cf5fb607bd",
"text": "The specific requirements of UAV-photogrammetry necessitate particular solutions for system development, which have mostly been ignored or not assessed adequately in recent studies. Accordingly, this paper presents the methodological and experimental aspects of correctly implementing a UAV-photogrammetry system. The hardware of the system consists of an electric-powered helicopter, a high-resolution digital camera and an inertial navigation system. The software of the system includes the in-house programs specifically designed for camera calibration, platform calibration, system integration, on-board data acquisition, flight planning and on-the-job self-calibration. The detailed features of the system are discussed, and solutions are proposed in order to enhance the system and its photogrammetric outputs. The developed system is extensively tested for precise modeling of the challenging environment of an open-pit gravel mine. The accuracy of the results is evaluated under various mapping conditions, including direct georeferencing and indirect georeferencing with different numbers, distributions and types of ground control points. Additionally, the effects of imaging configuration and network stability on modeling accuracy are assessed. The experiments demonstrated that 1.55 m horizontal and 3.16 m vertical absolute modeling accuracy could be achieved via direct geo-referencing, which was improved to 0.4 cm and 1.7 cm after indirect geo-referencing.",
"title": ""
},
{
"docid": "77e8e73aecacb174d3975fdf702d281c",
"text": "Psychedelic drugs have a long history of use in healing ceremonies, but despite renewed interest in their therapeutic potential, we continue to know very little about how they work in the brain. Here we used psilocybin, a classic psychedelic found in magic mushrooms, and a task-free functional MRI (fMRI) protocol designed to capture the transition from normal waking consciousness to the psychedelic state. Arterial spin labeling perfusion and blood-oxygen level-dependent (BOLD) fMRI were used to map cerebral blood flow and changes in venous oxygenation before and after intravenous infusions of placebo and psilocybin. Fifteen healthy volunteers were scanned with arterial spin labeling and a separate 15 with BOLD. As predicted, profound changes in consciousness were observed after psilocybin, but surprisingly, only decreases in cerebral blood flow and BOLD signal were seen, and these were maximal in hub regions, such as the thalamus and anterior and posterior cingulate cortex (ACC and PCC). Decreased activity in the ACC/medial prefrontal cortex (mPFC) was a consistent finding and the magnitude of this decrease predicted the intensity of the subjective effects. Based on these results, a seed-based pharmaco-physiological interaction/functional connectivity analysis was performed using a medial prefrontal seed. Psilocybin caused a significant decrease in the positive coupling between the mPFC and PCC. These results strongly imply that the subjective effects of psychedelic drugs are caused by decreased activity and connectivity in the brain's key connector hubs, enabling a state of unconstrained cognition.",
"title": ""
},
{
"docid": "88ab27740e5c957993fd70f0bf6ac841",
"text": "We examine the problem of discrete stock price prediction using a synthesis of linguistic, financial and statistical techniques to create the Arizona Financial Text System (AZFinText). The research within this paper seeks to contribute to the AZFinText system by comparing AZFinText’s predictions against existing quantitative funds and human stock pricing experts. We approach this line of research using textual representation and statistical machine learning methods on financial news articles partitioned by similar industry and sector groupings. Through our research, we discovered that stocks partitioned by Sectors were most predictable in measures of Closeness, Mean Squared Error (MSE) score of 0.1954, predicted Directional Accuracy of 71.18% and a Simulated Trading return of 8.50% (compared to 5.62% for the S&P 500 index). In direct comparisons to existing market experts and quantitative mutual funds, our system’s trading return of 8.50% outperformed well-known trading experts. Our system also performed well against the top 10 quantitative mutual funds of 2005, where our system would have placed fifth. When comparing AZFinText against only those quantitative funds that monitor the same securities, AZFinText had a 2% higher return than the best performing quant fund.",
"title": ""
},
{
"docid": "b4fa4af471f647783e1f596680535c34",
"text": "The World Health Organization (WHO) is revising the tenth version of the International Classification of Diseases and Related Health Problems (ICD-10). This includes a reconceptualization of the definition and positioning of Gender Incongruence of Childhood (GIC). This study aimed to: 1) collect the views of transgender individuals and professionals regarding the retention of the diagnosis; 2) see if the proposed GIC criteria were acceptable to transgender individuals and health care providers; 3) compare results between two countries with two different healthcare systems to see if these differences influence opinions regarding the GIC diagnosis; and 4) determine whether healthcare providers from high-income countries feel that the proposed criteria are clinically useful and easy to use. A total of 628 participants were included in the study: 284 from the Netherlands (NL; 45.2%), 8 from Flanders (Belgium; 1.3%), and 336 (53.5%) from the United Kingdom (UK). Most participants were transgender people (or their partners/relatives; TG) (n = 522), 89 participants were healthcare providers (HCPs) and 17 were both HCP and TG individuals. Participants completed an online survey developed for this study. Overall, the majority response from transgender participants (42.9%) was that if the diagnosis would be removed from the mental health chapter it should also be removed from the ICD-11 completely, while 33.6% thought it should remain in the ICD-11. Participants were generally satisfied with other aspects of the proposed ICD-11 GIC diagnosis: most TG participants (58.4%) thought the term Gender Identity Disorder should change, and most thought Gender Incongruence was an improvement (63.0%). Furthermore, most participants (76.1%) did not consider GIC to be a psychiatric disorder and placement in a separate chapter dealing with Gender and Sexual Health (the majority response in the NL and selected by 37.5% of the TG participants overall) or as a Z-code (the majority response in the UK and selected by 26.7% of the TG participants overall) would be preferable. In the UK, the majority response (35.8%) was that narrowing the GIC diagnosis was an improvement, while the NL majority response (49.5%) was that this was not an improvement. Although generally the results from HCPs were in line with the results from TG participants some differences were found. This study suggests that, although in an ideal world a diagnosis is not welcomed, several participants felt the diagnosis should not be removed. This is likely due to concerns about restricting access to reimbursed healthcare. The choice for positioning of a diagnosis of GIC within the ICD-11 was as a separate chapter dealing with symptoms and/or disorders regarding sexual and gender health. This was the overall first choice for NL participants and second choice for UK participants, after the use of a Z-code. The difference reflects that in the UK, Z-codes carry no negative implications for reimbursement of treatment costs. These findings highlight the challenges faced by the WHO in their attempt to integrate research findings from different countries, with different cultures and healthcare systems in their quest to create a manual that is globally applicable.",
"title": ""
},
{
"docid": "ca745f3f2fa84135f5b7dbf5dbcbbaf5",
"text": "Battery management systems (BMS) are a key element in electric vehicle energy storage systems. The BMS performs several functions concerning to the battery system, its key task being balancing the battery cells. Battery cell unbalancing hampers electric vehicles’ performance, with differing individual cell voltages decreasing the battery pack capacity and cell lifetime, leading to the eventual failure of the total battery system. Quite a lot of cell balancing topologies have been proposed, such as shunt resistor, shuttling capacitor, inductor/transformer based and DC energy converters. The shuttling capacitor balancing systems in particular have not been subject to much research efforts however, due to their perceived low balancing speed and high cost. This paper tries to fill this gap by briefly discussing the shuttling capacitor cell balancing topologies, focusing on the single switched capacitor (SSC) cell balancing and proposing a novel procedure to improve the SSC balancing system performance. This leads to a new control strategy for the SSC system that can decrease the balancing system size, cost, balancing time and that can improve the SSC balancing system efficiency.",
"title": ""
},
{
"docid": "8feb5dce809acf0efb63d322f0526fcf",
"text": "Recent studies of eye movements in reading and other information processing tasks, such as music reading, typing, visual search, and scene perception, are reviewed. The major emphasis of the review is on reading as a specific example of cognitive processing. Basic topics discussed with respect to reading are (a) the characteristics of eye movements, (b) the perceptual span, (c) integration of information across saccades, (d) eye movement control, and (e) individual differences (including dyslexia). Similar topics are discussed with respect to the other tasks examined. The basic theme of the review is that eye movement data reflect moment-to-moment cognitive processes in the various tasks examined. Theoretical and practical considerations concerning the use of eye movement data are also discussed.",
"title": ""
},
{
"docid": "d3eff4c249e464e9e571d80d4fe95bbd",
"text": "CONIKS is a proposed key transparency system which enables a centralized service provider to maintain an auditable yet privacypreserving directory of users’ public keys. In the original CONIKS design, users must monitor that their data is correctly included in every published snapshot of the directory, necessitating either slow updates or trust in an unspecified third-party to audit that the data structure has stayed consistent. We demonstrate that the data structures for CONIKS are very similar to those used in Ethereum, a consensus computation platform with a Turing-complete programming environment. We can take advantage of this to embed the core CONIKS data structures into an Ethereum contract with only minor modifications. Users may then trust the Ethereum network to audit the data structure for consistency and non-equivocation. Users who do not trust (or are unaware of) Ethereum can self-audit the CONIKS data structure as before. We have implemented a prototype contract for our hybrid EthIKS scheme, demonstrating that it adds only modest bandwidth overhead to CONIKS proofs and costs hundredths of pennies per key update in fees at today’s rates.",
"title": ""
},
{
"docid": "c50f03d4486ed850a9a63f0a92f24a0b",
"text": "This paper presents an end-to-end approach for creating 3D shapes by self-folding planar sheets activated by uniform heating. These shapes can be used as the mechanical bodies of robots. The input to this process is a 3D geometry (e.g. an OBJ file). The output is a physical object with the specified geometry. We describe an algorithm pipeline that (1) identifies the overall geometry of the input, (2) computes a crease pattern that causes the sheet to self-fold into the desired 3D geometry when activated by uniform heating, (3) automatically generates the design of a 2D sheet with the desired pattern and (4) automatically generates the design files required to fabricate the 2D structure. We demonstrate these algorithms by applying them to complex 3D shapes. We demonstrate the fabrication of a self-folding object with over 50 faces from automatically generated design files.",
"title": ""
},
{
"docid": "33465b87cdc917904d16eb9d6cb8fece",
"text": "An audio fingerprint is a compact content-based signature that summarizes an audio recording. Audio Fingerprinting technologies have attracted attention since they allow the identification of audio independently of its format and without the need of meta-data or watermark embedding. Other uses of fingerprinting include: integrity verification, watermark support and content-based audio retrieval. The different approaches to fingerprinting have been described with different rationales and terminology: Pattern matching, Multimedia (Music) Information Retrieval or Cryptography (Robust Hashing). In this paper, we review different techniques describing its functional blocks as parts of a common, unified framework.",
"title": ""
}
] |
scidocsrr
|
99c0d8cba2df38cd4e9d6d5d27499dd5
|
An Analysis of Visual Question Answering Algorithms
|
[
{
"docid": "0a625d5f0164f7ed987a96510c1b6092",
"text": "We present a method that learns to answer visual questions by selecting image regions relevant to the text-based query. Our method maps textual queries and visual features from various regions into a shared space where they are compared for relevance with an inner product. Our method exhibits significant improvements in answering questions such as \"what color,\" where it is necessary to evaluate a specific location, and \"what room,\" where it selectively identifies informative image regions. Our model is tested on the recently released VQA [1] dataset, which features free-form human-annotated questions and answers.",
"title": ""
},
{
"docid": "6a26a8a73aedda5d733ff90415707d75",
"text": "Visual question answering (VQA) tasks use two types of images: abstract (illustrations) and real. Domain-specific differences exist between the two types of images with respect to “objectness,” “texture,” and “color.” Therefore, achieving similar performance by applying methods developed for real images to abstract images, and vice versa, is difficult. This is a critical problem in VQA, because image features are crucial clues for correctly answering the questions about the images. However, an effective, domain-invariant method can provide insight into the high-level reasoning required for VQA. We thus propose a method called DualNet that demonstrates performance that is invariant to the differences in real and abstract scene domains. Experimental results show that DualNet outperforms state-of-the-art methods, especially for the abstract images category.",
"title": ""
},
{
"docid": "8328b1dd52bcc081548a534dc40167a3",
"text": "This work aims to address the problem of imagebased question-answering (QA) with new models and datasets. In our work, we propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. We used this algorithm to produce an order-of-magnitude larger dataset, with more evenly distributed answers. A suite of baseline results on this new dataset are also presented.",
"title": ""
}
] |
[
{
"docid": "aa0dc468b1b7402e9eb03848af31216e",
"text": "This paper discusses the construction of speech databases for research into speech information processing and describes a problem illustrated by the case of emotional speech synthesis. It introduces a project for the processing of expressive speech, and describes the data collection techniques and the subsequent analysis of supra-linguistic, and emotional features signalled in the speech. It presents annotation guidelines for distinguishing speaking-style differences, and argues that the focus of analysis for expressive speech processing applications should be on the speaker relationships (defined herein), rather than on emotions.",
"title": ""
},
{
"docid": "414160c5d5137def904c38cccc619628",
"text": "Side-channel attacks, particularly differential power analysis (DPA) attacks, are efficient ways to extract secret keys of the attacked devices by leaked physical information. To resist DPA attacks, hiding and masking methods are commonly used, but it usually resulted in high area overhead and performance degradation. In this brief, a DPA countermeasure circuit based on digital controlled ring oscillators is presented to efficiently resist the first-order DPA attack. The implementation of the critical S-box of the advanced encryption standard (AES) algorithm shows that the area overhead of a single S-box is about 19% without any extra delay in the critical path. Moreover, the countermeasure circuit can be mounted onto different S-box implementations based on composite field or look-up table (LUT). Based on our approach, a DPA-resistant AES chip can be proposed to maintain the same throughput with less than 2K extra gates.",
"title": ""
},
{
"docid": "97e33cc9da9cb944c27d93bb4c09ef3d",
"text": "Synchrophasor devices guarantee situation awareness for real-time monitoring and operational visibility of the smart grid. With their widespread implementation, significant challenges have emerged, especially in communication, data quality and cybersecurity. The existing literature treats these challenges as separate problems, when in reality, they have a complex interplay. This paper conducts a comprehensive review of quality and cybersecurity challenges for synchrophasors, and identifies the interdependencies between them. It also summarizes different methods used to evaluate the dependency and surveys how quality checking methods can be used to detect potential cyberattacks. In doing so, this paper serves as a starting point for researchers entering the fields of synchrophasor data analytics and security.",
"title": ""
},
{
"docid": "476f2a1970349b00ee296cf48aaf4983",
"text": "Web personalization systems are used to enhance the user experience by providing tailor-made services based on the user’s interests and preferences which are typically stored in user profiles. For such systems to remain effective, the profiles need to be able to adapt and reflect the users’ changing behaviour. In this paper, we introduce a set of methods designed to capture and track user interests and maintain dynamic user profiles within a personalization system. User interests are represented as ontological concepts which are constructed by mapping web pages visited by a user to a reference ontology and are subsequently used to learn short-term and long-term interests. A multi-agent system facilitates and coordinates the capture, storage, management and adaptation of user interests. We propose a search system that utilizes our dynamic user profile to provide a personalized search experience. We present a series of experiments that show how our system can effectively model a dynamic user profile and is capable of learning and adapting to different user browsing behaviours.",
"title": ""
},
{
"docid": "7d0b37434699aa5c3b36de33549a2b68",
"text": "In Ethiopia, malaria control has been complicated due to resistance of the parasite to the current drugs. Thus, new drugs are required against drug-resistant Plasmodium strains. Historically, many of the present antimalarial drugs were discovered from plants. This study was, therefore, conducted to document antimalarial plants utilized by Sidama people of Boricha District, Sidama Zone, South Region of Ethiopia. An ethnobotanical survey was carried out from September 2011 to February 2012. Data were collected through semistructured interview and field and market observations. Relative frequency of citation (RFC) was calculated and preference ranking exercises were conducted to estimate the importance of the reported medicinal plants in Boricha District. A total of 42 antimalarial plants belonging to 27 families were recorded in the study area. Leaf was the dominant plant part (59.0%) used in the preparation of remedies and oral (97.4%) was the major route of administration. Ajuga integrifolia scored the highest RFC value (0.80). The results of this study revealed the existence of rich knowledge on the use of medicinal plants in the study area to treat malaria. Thus, an attempt should be made to conserve and evaluate the claimed antimalarial medicinal plants with priority given to those that scored the highest RFC values.",
"title": ""
},
{
"docid": "deff50d73af79e57550016e8975de679",
"text": "The phase noise of a phase-locked loop (PLL) has a great impact on the performance of frequency-modulated continuous-wave (FMCW) radar. To examine the effects of the phase noise on FMCW radar performance, a model of an FMCW radar with a noisy PLL is developed. A filter-based technique for modeling the PLL phase noise is described. The radar model shows that PLL in-band phase noise affects the spatial resolution of the FMCW radar, whereas PLL out-of-band phase noise limits the maximum range. Finally, we propose a set of design constraints for PLL based on the model simulation results.",
"title": ""
},
{
"docid": "e913a4d2206be999f0278d48caa4708a",
"text": "Widespread deployment of the Internet enabled building of an emerging IT delivery model, i.e., cloud computing. Albeit cloud computing-based services have rapidly developed, their security aspects are still at the initial stage of development. In order to preserve cybersecurity in cloud computing, cybersecurity information that will be exchanged within it needs to be identified and discussed. For this purpose, we propose an ontological approach to cybersecurity in cloud computing. We build an ontology for cybersecurity operational information based on actual cybersecurity operations mainly focused on non-cloud computing. In order to discuss necessary cybersecurity information in cloud computing, we apply the ontology to cloud computing. Through the discussion, we identify essential changes in cloud computing such as data-asset decoupling and clarify the cybersecurity information required by the changes such as data provenance and resource dependency information.",
"title": ""
},
{
"docid": "e8ff6978cae740152a918284ebe49fe3",
"text": "Cross-lingual sentiment classification aims to predict the sentiment orientation of a text in a language (named as the target language) with the help of the resources from another language (named as the source language). However, current cross-lingual performance is normally far away from satisfaction due to the huge difference in linguistic expression and social culture. In this paper, we suggest to perform active learning for cross-lingual sentiment classification, where only a small scale of samples are actively selected and manually annotated to achieve reasonable performance in a short time for the target language. The challenge therein is that there are normally much more labeled samples in the source language than those in the target language. This makes the small amount of labeled samples from the target language flooded in the aboundance of labeled samples from the source language, which largely reduces their impact on cross-lingual sentiment classification. To address this issue, we propose a data quality controlling approach in the source language to select high-quality samples from the source language. Specifically, we propose two kinds of data quality measurements, intraand extra-quality measurements, from the certainty and similarity perspectives. Empirical studies verify the appropriateness of our active learning approach to cross-lingual sentiment classification.",
"title": ""
},
{
"docid": "fe98f8e9f9fd864c9c94b861f2c1db70",
"text": "The importance of intellectual talent to achievement in all professional domains is well established, but less is known about other individual differences that predict success. The authors tested the importance of 1 noncognitive trait: grit. Defined as perseverance and passion for long-term goals, grit accounted for an average of 4% of the variance in success outcomes, including educational attainment among 2 samples of adults (N=1,545 and N=690), grade point average among Ivy League undergraduates (N=138), retention in 2 classes of United States Military Academy, West Point, cadets (N=1,218 and N=1,308), and ranking in the National Spelling Bee (N=175). Grit did not relate positively to IQ but was highly correlated with Big Five Conscientiousness. Grit nonetheless demonstrated incremental predictive validity of success measures over and beyond IQ and conscientiousness. Collectively, these findings suggest that the achievement of difficult goals entails not only talent but also the sustained and focused application of talent over time.",
"title": ""
},
{
"docid": "89d0ffd0b809acafda10a20bd5f35a77",
"text": "Microscopic analysis of erythrocytes in urine is a valuable diagnostic tool for identifying glomerular hematuria. Indicative of glomerular hematuria is the presence of erythrocyte casts and polyand dysmorphic erythrocytes. In contrast, in non-glomerular hematuria, urine sediment erythrocytes are monoand isomorphic, and erythrocyte casts are absent (1, 2) . To date, various variant forms of dysmorphic erythrocyte morphology have been defi ned and classifi ed. They are categorized as: D1, D2, and D3 cells (2) . D1 and D2 cells are also referred to as acanthocytes or G1 cells which are mickey mouse-like cells with membrane protrusions and severe (D1) to mild (D2) loss of cytoplasmic color (2) . D3 cells are doughnut-like or other polyand dysmorphic forms that include discocytes, knizocytes, anulocytes, stomatocytes, codocytes, and schizocytes (2, 3) . The cellular morphology of these cells is observed to have mild cytoplasmic loss, and symmetrical shaped membranes free of protrusions. Echinocytes and pseudo-acanthocytes (bite-cells) are not considered to be dysmorphic erythrocytes. Glomerular hematuria is likely if more than 40 % of erythrocytes are dysmorphic or 5 % are D1-D2 cells and nephrologic work-up should be considered (2) . For over 20 years, manual microscopy has been the prevailing technique for examining dysmorphic erythrocytes in urine sediments when glomerular pathology is suspected (4, 5) . This labor-intensive method requires signifi cant expertise and experience to ensure consistent and accurate analysis. A more immediate and defi nitive automated technique that classifi es dysmorphic erythrocytes at least as good as the manual method would be an invaluable asset in the routine clinical laboratory practice. Therefore, the aim of the study was to investigate the use of the Iris Diagnostics automated iQ200 (Instrumentation Laboratory, Brussels, Belgium) as an automated platform for screening of dysmorphic erythrocytes. The iQ200 has proven to be an effi cient and reliable asset for our urinalysis (5) , but has not been used for the quantifi cation of dysmorphic erythrocytes. In total, 207 urine specimens of patients with suspected glomerular pathology were initially examined using manual phase contrast microscopy by two independent experienced laboratory technicians at a university medical center. The same specimens were re-evaluated using the Iris iQ200 instrument at our facility, which is a teaching hospital. The accuracy of the iQ200 was compared to the results of manual microscopy for detecting dysmorphic erythrocytes. Urine samples were processed within 2 h of voiding. Upon receipt, uncentrifuged urine samples were used for strip analysis using the AutionMax Urine Analyzer (Menarini, Valkenswaard, The Netherlands). For analysis of dysmorphic erythrocytes 20 mL urine was fi xed with CellFIX TM (a formaldehyde containing fi xative solution; BD Biosciences, Breda, The Netherlands) at a dilution of 100:1 (6) . One half of fi xed urine was centrifuged at 500 × g for 10 min and the pellet analyzed by two independent experienced technicians using phase-contrast microscopy. The other half was analyzed by automated urine sediment analyzer using the iQ200. The iQ200 uses a fl ow cell that hydrodynamically orients the particles within the focal plane of a microscopic lens coupled to a 1.3 megapixel CCD digital camera. Each particle image is digitized and sent to the instrument processor. 
For our study, the instrument ’ s cellrecognition function for classifying erythrocytes was used. Although the iQ200 can easily recognize and classify normal erythrocytes it cannot automatically classify dysmorphic erythrocytes. Instead, two independent and experienced technicians review the images in categories ‘ normal erythrocytes ’ and ‘ unclassifi ed ’ and reclassify dysmorphic erythrocytes to a separate ‘ dysmorphic ’ category. To minimize *Corresponding author: Ayşe Y. Demir, MD, PhD, Department of Clinical Chemistry and Haematology, Meander Medical Center Utrechtseweg 160, 3818 ES Amersfoort, The Netherlands Phone: + 31 33 8504344, Fax: + 31 33 8502035 , E-mail: ay.demir@meandermc.nl Received September 20, 2011; accepted November 15, 2011; previously published online December 7, 2011",
"title": ""
},
{
"docid": "786ef1b656c182ab71f7a63e7f263b3f",
"text": "The spectrum of a first-order sentence is the set of cardinalities of its finite models. This paper is concerned with spectra of sentences over languages that contain only unary function symbols. In particular, it is shown that a set S of natural numbers is the spectrum of a sentence over the language of one unary function symbol precisely if S is an eventually periodic set.",
"title": ""
},
{
"docid": "04d9f96fcd218e61f41412518c18cf31",
"text": "Squeak is an open, highly-portable Smalltalk implementation whose virtual machine is written entirely in Smalltalk, making it easy to. debug, analyze, and change. To achieve practical performance, a translator produces an equivalent C program whose performance is comparable to commercial Smalltalks.Other noteworthy aspects of Squeak include: a compact object format that typically requires only a single word of overhead per object; a simple yet efficient incremental garbage collector for 32-bit direct pointers; efficient bulk-mutation of objects; extensions of BitBlt to handle color of any depth and anti-aliased image rotation and scaling; and real-time sound and music synthesis written entirely in Smalltalk.",
"title": ""
},
{
"docid": "88ccacd6f14a9c00b54b8f465f3dfba0",
"text": "Autoencoders have been successful in learning meaningful representations from image datasets. However, their performance on text datasets has not been widely studied. Traditional autoencoders tend to learn possibly trivial representations of text documents due to their confoundin properties such as high-dimensionality, sparsity and power-law word distributions. In this paper, we propose a novel k-competitive autoencoder, called KATE, for text documents. Due to the competition between the neurons in the hidden layer, each neuron becomes specialized in recognizing specific data patterns, and overall the model can learn meaningful representations of textual data. A comprehensive set of experiments show that KATE can learn better representations than traditional autoencoders including denoising, contractive, variational, and k-sparse autoencoders. Our model also outperforms deep generative models, probabilistic topic models, and even word representation models (e.g., Word2Vec) in terms of several downstream tasks such as document classification, regression, and retrieval.",
"title": ""
},
{
"docid": "4d93be453dcb767faca082d966af5f3a",
"text": "This paper presents a unified variational formulation for joint object segmentation and stereo matching, which takes both accuracy and efficiency into account. In our approach, depth-map consists of compact objects, each object is represented through three different aspects: the perimeter in image space; the slanted object depth plane; and the planar bias, which is to add an additional level of detail on top of each object plane in order to model depth variations within an object. Compared with traditional high quality solving methods in low level, we use a convex formulation of the multilabel Potts Model with PatchMatch stereo techniques to generate depth-map at each image in object level and show that accurate multiple view reconstruction can be achieved with our formulation by means of induced homography without discretization or staircasing artifacts. Our model is formulated as an energy minimization that is optimized via a fast primal-dual algorithm, which can handle several hundred object depth segments efficiently. Performance evaluations in the Middlebury benchmark data sets show that our method outperforms the traditional integer-valued disparity strategy as well as the original PatchMatch algorithm and its variants in subpixel accurate disparity estimation. The proposed algorithm is also evaluated and shown to produce consistently good results for various real-world data sets (KITTI benchmark data sets and multiview benchmark data sets).",
"title": ""
},
{
"docid": "3f9ebd4116759203856e2387a4f91f4c",
"text": "Many real world stochastic control problems suffer from the “curse of dimensionality”. To overcome this difficulty, we develop a deep learning approach that directly solves high-dimensional stochastic control problems based on Monte-Carlo sampling. We approximate the time-dependent controls as feedforward neural networks and stack these networks together through model dynamics. The objective function for the control problem plays the role of the loss function for the deep neural network. We test this approach using examples from the areas of optimal trading and energy storage. Our results suggest that the algorithm presented here achieves satisfactory accuracy and at the same time, can handle rather high dimensional problems.",
"title": ""
},
{
"docid": "bc6c7fcd98160c48cd3b72abff8fad02",
"text": "A new concept of formality of linguistic expressions is introduced and argued to be the most important dimension of variation between styles or registers. Formality is subdivided into \"deep\" formality and \"surface\" formality. Deep formality is defined as avoidance of ambiguity by minimizing the context-dependence and fuzziness of expressions. This is achieved by explicit and precise description of the elements of the context needed to disambiguate the expression. A formal style is characterized by detachment, accuracy, rigidity and heaviness; an informal style is more flexible, direct, implicit, and involved, but less informative. An empirical measure of formality, the F-score, is proposed, based on the frequencies of different word classes in the corpus. Nouns, adjectives, articles and prepositions are more frequent in formal styles; pronouns, adverbs, verbs and interjections are more frequent in informal styles. It is shown that this measure, though coarse-grained, adequately distinguishes more from less formal genres of language production, for some available corpora in Dutch, French, Italian, and English. A factor similar to the F-score automatically emerges as the most important one from factor analyses applied to extensive data in 7 different languages. Different situational and personality factors are examined which determine the degree of formality in linguistic expression. It is proposed that formality becomes larger when the distance in space, time or background between the interlocutors increases, and when the speaker is male, introverted or academically educated. Some empirical evidence and a preliminary theoretical explanation for these propositions is discussed. Short Abstract: The concept of \"deep\" formality is proposed as the most important dimension of variation between language registers or styles. It is defined as avoidance of ambiguity by minimizing the context-dependence and fuzziness of expressions. An empirical measure, the F-score, is proposed, based on the frequencies of different word classes. This measure adequately distinguishes different genres of language production using data for Dutch, French, Italian, and English. Factor analyses applied to data in 7 different languages produce a similar factor as the most important one. Both the data and the theoretical model suggest that formality increases when the distance in space, time or background between the interlocutors increases, and when the speaker is male, introverted or academically educated.",
"title": ""
},
{
"docid": "e81736e35fe06b0e7e15e61329c6f4c9",
"text": "Aphasia is an acquired communication disorder often resulting from stroke that can impact quality of life and may lead to high levels of stress and depression. Depression diagnosis in this population is often completed through subjective caregiver questionnaires. Stress diagnostic tests have not been modified for language difficulties. This work proposes to use speech analysis as an objective measure of stress and depression in patients with aphasia. Preliminary analysis used linear support vector regression models to predict depression scores and stress scores for a total of 19 and 18 participants respectively. Teager Energy Operator-Amplitude Modulation features performed the best in predicting the Perceived Stress Scale score based on various measures. The complications of speech in people with aphasia are examined and indicate the need for future work on this understudied population.",
"title": ""
},
{
"docid": "a0547eae9a2186d4c6f1b8307317f061",
"text": "Leadership scholars have called for additional research on leadership skill requirements and how those requirements vary by organizational level. In this study, leadership skill requirements are conceptualized as being layered (strata) and segmented (plex), and are thus described using a strataplex. Based on previous conceptualizations, this study proposes a model made up of four categories of leadership skill requirements: Cognitive skills, Interpersonal skills, Business skills, and Strategic skills. The model is then tested in a sample of approximately 1000 junior, midlevel, and senior managers, comprising a full career track in the organization. Findings support the “plex” element of the model through the emergence of four leadership skill requirement categories. Findings also support the “strata” portion of the model in that different categories of leadership skill requirements emerge at different organizational levels, and that jobs at higher levels of the organization require higher levels of all leadership skills. In addition, although certain Cognitive skill requirements are important across organizational levels, certain Strategic skill requirements only fully emerge at the highest levels in the organization. Thus a strataplex proved to be a valuable tool for conceptualizing leadership skill requirements across organizational levels. © 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "e964a46706179a92b775307166a64c8a",
"text": "I general, perceptions of information systems (IS) success have been investigated within two primary research streams—the user satisfaction literature and the technology acceptance literature. These two approaches have been developed in parallel and have not been reconciled or integrated. This paper develops an integrated research model that distinguishes beliefs and attitudes about the system (i.e., object-based beliefs and attitudes) from beliefs and attitudes about using the system (i.e., behavioral beliefs and attitudes) to build the theoretical logic that links the user satisfaction and technology acceptance literature. The model is then tested using a sample of 465 users from seven different organizations who completed a survey regarding their use of data warehousing software. The proposed model was supported, providing preliminary evidence that the two perspectives can and should be integrated. The integrated model helps build the bridge from design and implementation decisions to system characteristics (a core strength of the user satisfaction literature) to the prediction of usage (a core strength of the technology acceptance literature).",
"title": ""
},
{
"docid": "c2571f794304a6b0efdc4fe22bac89e5",
"text": "PURPOSE\nThe aim of this study was to analyse the psychometric properties of the Portuguese version of the body image scale (BIS; Hopwood, P., Fletcher, I., Lee, A., Al Ghazal, S., 2001. A body image scale for use with cancer patients. European Journal of Cancer, 37, 189-197). This is a brief and psychometric robust measure of body image for use with cancer patients, independently of age, cancer type, treatment or stage of the disease and it was developed in collaboration with the European Organization for Research and Treatment of Cancer (EORTC) Quality of Life Study Group.\n\n\nMETHOD\nThe sample is comprised of 173 Portuguese postoperative breast cancer patients that completed a battery of measures that included the BIS and other scales of body image and quality of life, in order to explore its construct validity.\n\n\nRESULTS\nThe Portuguese version of BIS confirmed the original unidimensional structure and demonstrated adequate internal consistency, both in the global sample (alpha=.93) as in surgical subgroups (mastectomy=.92 and breast-conserving surgery=.93). Evidence for the construct validity was provided through moderate to largely sized correlations between the BIS and other related measures. In further support of its discriminant validity, significant differences in BIS scores were found between women who underwent mastectomy and those who underwent breast-conserving surgery, with the former presenting higher scores. Age and time since diagnosis were not associated with BIS scores.\n\n\nCONCLUSIONS\nThe Portuguese BIS proved to be a reliable and valid measure of body image concerns in a sample of breast cancer patients, allowing a brief and comprehensive assessment, both on clinical and research settings.",
"title": ""
}
] |
scidocsrr
|
f4d90b39e13058075707c7536951332b
|
An MRF Model-Based Active Learning Framework for the Spectral-Spatial Classification of Hyperspectral Imagery
|
[
{
"docid": "8e648261dc529f8e28ce3b2a40d9f0b0",
"text": "C 34 35 36 37 38 39 40 41 42 43 44 Article history: Received 21 July 2006 Received in revised form 25 June 2007 Accepted 27 July 2007 Available online xxxx",
"title": ""
}
] |
[
{
"docid": "b39d393c8fd817f487e8bdfd59d03a55",
"text": "This paper gives an overview of the upcoming IEEE Gigabit Wireless LAN amendments, i.e. IEEE 802.11ac and 802.11ad. Both standard amendments advance wireless networking throughput beyond gigabit rates. 802.11ac adds multi-user access techniques in the form of downlink multi-user (DL MU) multiple input multiple output (MIMO)and 80 and 160 MHz channels in the 5 GHz band for applications such as multiple simultaneous video streams throughout the home. 802.11ad takes advantage of the large swath of available spectrum in the 60 GHz band and defines protocols to enable throughput intensive applications such as wireless I/O or uncompressed video. New waveforms for 60 GHz include single carrier and orthogonal frequency division multiplex (OFDM). Enhancements beyond the new 60 GHz PHY include Personal Basic Service Set (PBSS) operation, directional medium access, and beamforming. We describe 802.11ac channelization, PHY design, MAC modifications, and DL MU MIMO. For 802.11ad, the new PHY layer, MAC enhancements, and beamforming are presented.",
"title": ""
},
{
"docid": "65e297211555a88647eb23a65698531c",
"text": "Game theoretical techniques have recently become prevalen t in many engineering applications, notably in communications. With the emergence of cooperation as a new communicat ion paradigm, and the need for self-organizing, decentrali zed, and autonomic networks, it has become imperative to seek sui table game theoretical tools that allow to analyze and study the behavior and interactions of the nodes in future communi cation networks. In this context, this tutorial introduces the concepts of cooperative game theory, namely coalitiona l games, and their potential applications in communication and wireless networks. For this purpose, we classify coalit i nal games into three categories: Canonical coalitional g ames, coalition formation games, and coalitional graph games. Th is new classification represents an application-oriented a pproach for understanding and analyzing coalitional games. For eac h class of coalitional games, we present the fundamental components, introduce the key properties, mathematical te hniques, and solution concepts, and describe the methodol ogies for applying these games in several applications drawn from the state-of-the-art research in communications. In a nuts hell, this article constitutes a unified treatment of coalitional g me theory tailored to the demands of communications and",
"title": ""
},
{
"docid": "8decac4ff789460595664a38e7527ed6",
"text": "Unit selection synthesis has shown itself to be capable of producing high quality natural sounding synthetic speech when constructed from large databases of well-recorded, well-labeled speech. However, the cost in time and expertise of building such voices is still too expensive and specialized to be able to build individual voices for everyone. The quality in unit selection synthesis is directly related to the quality and size of the database used. As we require our speech synthesizers to have more variation, style and emotion, for unit selection synthesis, much larger databases will be required. As an alternative, more recently we have started looking for parametric models for speech synthesis, that are still trained from databases of natural speech but are more robust to errors and allow for better modeling of variation. This paper presents the CLUSTERGEN synthesizer which is implemented within the Festival/FestVox voice building environment. As well as the basic technique, three methods of modeling dynamics in the signal are presented and compared: a simple point model, a basic trajectory model and a trajectory model with overlap and add.",
"title": ""
},
{
"docid": "46adb7a040a2d8a40910a9f03825588d",
"text": "The aim of this study was to investigate the consequences of friend networking sites (e.g., Friendster, MySpace) for adolescents' self-esteem and well-being. We conducted a survey among 881 adolescents (10-19-year-olds) who had an online profile on a Dutch friend networking site. Using structural equation modeling, we found that the frequency with which adolescents used the site had an indirect effect on their social self-esteem and well-being. The use of the friend networking site stimulated the number of relationships formed on the site, the frequency with which adolescents received feedback on their profiles, and the tone (i.e., positive vs. negative) of this feedback. Positive feedback on the profiles enhanced adolescents' social self-esteem and well-being, whereas negative feedback decreased their self-esteem and well-being.",
"title": ""
},
{
"docid": "f272caa39c08b538d2f3eb983a263809",
"text": "This paper proposes a model based approach for prognosis of DC-DC power converters. We briefly review the prognosis process, and present an overview of different approaches that have been developed. We study the effects of capacitor degradation on DC-DC converter performance by developing a combination of a thermal model for ripple current effects and a physics of failure model of the thermal effects on capacitor degradation. The derived degradation model of the capacitor is reintroduced into the DC-DC converter model to study changes in the system performance using Monte Carlo methods. The simulation results observed under different conditions and experimental setups for model verification are discussed. The paper concludes with comments and future work to be done.",
"title": ""
},
{
"docid": "5562bb6fdc8864a23e7ec7992c7bb023",
"text": "Bacteria are known to communicate primarily via secreted extracellular factors. Here we identify a previously uncharacterized type of bacterial communication mediated by nanotubes that bridge neighboring cells. Using Bacillus subtilis as a model organism, we visualized transfer of cytoplasmic fluorescent molecules between adjacent cells. Additionally, by coculturing strains harboring different antibiotic resistance genes, we demonstrated that molecular exchange enables cells to transiently acquire nonhereditary resistance. Furthermore, nonconjugative plasmids could be transferred from one cell to another, thereby conferring hereditary features to recipient cells. Electron microscopy revealed the existence of variously sized tubular extensions bridging neighboring cells, serving as a route for exchange of intracellular molecules. These nanotubes also formed in an interspecies manner, between B. subtilis and Staphylococcus aureus, and even between B. subtilis and the evolutionary distant bacterium Escherichia coli. We propose that nanotubes represent a major form of bacterial communication in nature, providing a network for exchange of cellular molecules within and between species.",
"title": ""
},
{
"docid": "3adc34e940aecbd4bb8e098e8d5aab3a",
"text": "Internet protocol security (IPSec) is a widely deployed mechanism for implementing virtual private networks (VPNs). This paper evaluates the performance overheads associated with IPSec. We use Openswan, an open source implementation of IPSec, and measure the running times of individual security operations and also the speedup gained by replacing various IPSec components with no-ops. The main findings of this study include: VPN connection establishment and maintenance overheads for short sessions could be significantly higher than those incurred while transferring data, and cryptographic operations contribute 32 - 60% of the total IPSec overheads.",
"title": ""
},
{
"docid": "e7f9e290eb7cc21b4a0785430546a33b",
"text": "In this study, 306 individuals in 3 age groups--adolescents (13-16), youths (18-22), and adults (24 and older)--completed 2 questionnaire measures assessing risk preference and risky decision making, and 1 behavioral task measuring risk taking. Participants in each age group were randomly assigned to complete the measures either alone or with 2 same-aged peers. Analyses indicated that (a) risk taking and risky decision making decreased with age; (b) participants took more risks, focused more on the benefits than the costs of risky behavior, and made riskier decisions when in peer groups than alone; and (c) peer effects on risk taking and risky decision making were stronger among adolescents and youths than adults. These findings support the idea that adolescents are more inclined toward risky behavior and risky decision making than are adults and that peer influence plays an important role in explaining risky behavior during adolescence.",
"title": ""
},
{
"docid": "21c4a6bb8fee4e403c6cd384e1e423be",
"text": "Fault detection prediction of FAB (wafer fabrication) process in semiconductor manufacturing process is possible that improve product quality and reliability in accordance with the classification performance. However, FAB process is sometimes due to a fault occurs. And mostly it occurs “pass”. Hence, data imbalance occurs in the pass/fail class. If the data imbalance occurs, prediction models are difficult to predict “fail” class because increases the bias of majority class (pass class). In this paper, we propose the SMOTE (Synthetic Minority Oversampling Technique) based over sampling method for solving problem of data imbalance. The proposed method solve the imbalance of the between pass and fail by oversampling the minority class of fail. In addition, by applying the fault detection prediction model to measure the performance.",
"title": ""
},
{
"docid": "b363bffde05d6df3803c32116842be36",
"text": "The brain activation of a group of high-functioning autistic participants was measured using functional magnetic resonance imaging during the performance of a Tower of London task, in comparison with a control group matched with respect to intelligent quotient, age, and gender. The 2 groups generally activated the same cortical areas to similar degrees. However, there were 3 indications of underconnectivity in the group with autism. First, the degree of synchronization (i.e., the functional connectivity or the correlation of the time series of the activation) between the frontal and parietal areas of activation was lower for the autistic than the control participants. Second, relevant parts of the corpus callosum, through which many of the bilaterally activated cortical areas communicate, were smaller in cross-sectional area in the autistic participants. Third, within the autism group but not within the control group, the size of the genu of the corpus callosum was correlated with frontal-parietal functional connectivity. These findings suggest that the neural basis of altered cognition in autism entails a lower degree of integration of information across certain cortical areas resulting from reduced intracortical connectivity. The results add support to a new theory of cortical underconnectivity in autism, which posits a deficit in integration of information at the neural and cognitive levels.",
"title": ""
},
{
"docid": "3bee61e95acf274c01f1846233b3c3bb",
"text": "One key difficulty with text classification learning algorithms is that they require many hand-labeled examples to learn accurately. This dissertation demonstrates that supervised learning algorithms that use a small number of labeled examples and many inexpensive unlabeled examples can create high-accuracy text classifiers. By assuming that documents are created by a parametric generative model, Expectation-Maximization (EM) finds local maximum a posteriori models and classifiers from all the data—labeled and unlabeled. These generative models do not capture all the intricacies of text; however on some domains this technique substantially improves classification accuracy, especially when labeled data are sparse. Two problems arise from this basic approach. First, unlabeled data can hurt performance in domains where the generative modeling assumptions are too strongly violated. In this case the assumptions can be made more representative in two ways: by modeling sub-topic class structure, and by modeling super-topic hierarchical class relationships. By doing so, model probability and classification accuracy come into correspondence, allowing unlabeled data to improve classification performance. The second problem is that even with a representative model, the improvements given by unlabeled data do not sufficiently compensate for a paucity of labeled data. Here, limited labeled data provide EM initializations that lead to low-probability models. Performance can be significantly improved by using active learning to select high-quality initializations, and by using alternatives to EM that avoid low-probability local maxima.",
"title": ""
},
{
"docid": "6763301377195d0524ae4666c5d32cd4",
"text": "We present a novel and computationally fast method for automatic human face authentication. Taking a 3D triangular facial mesh as input, the approach first automatically extracts the bilateral symmetry plane of the facial surface. The intersection between the symmetry plane and the facial surface, namely the symmetry profile, is then computed. Using both the mean curvature plot of the facial surface and the curvature plot of the symmetry profile curve, three essential points of the nose on the symmetry profile are automatically extracted. The three essential points uniquely determine a Face Intrinsic Coordinate System (FICS). Different faces are aligned based on the FICS. The symmetry profile, together with two transverse profiles, composes a compact representation, called the SFC representation, of a 3D face surface. The face authentication and recognition steps are finally performed by comparing the SFC representations of the faces. The proposed method was tested on 382 face surfaces, which come from 166 individuals and cover a wide ethnic and age variety. The equal error rate (EER) of face authentication on scans with variable facial expressions is 10.8%. For scans with normal expression, the ERR is 0.8%.",
"title": ""
},
{
"docid": "afc96e4003d7d5fbc281aced794e3e43",
"text": "The increasing use of imaging necessitates familiarity with a wide variety of pathologic conditions, both common and rare, that affect the fallopian tube. These conditions should be considered in the differential diagnosis for pelvic disease in the nonpregnant patient. The most common condition is pelvic inflammatory disease, which represents a spectrum ranging from salpingitis to pyosalpinx to tubo-ovarian abscess. Isolated tubal torsion is rare but is nevertheless an important diagnosis to consider in the acute setting. Hematosalpinx in a nonpregnant patient can be an indicator of tubal endometriosis; however, care should be taken to exclude tubal torsion or malignancy. Current evidence suggests that the prevalence of primary fallopian tube carcinoma (PFTC) is underestimated and that there is a relationship between PFTC and breast cancer. PFTC has characteristic imaging features that can aid in its detection and in differentiating it from other pelvic masses. Familiarity with fallopian tube disease and the imaging appearances of both the normal and abnormal fallopian tube is crucial for optimal diagnosis and management in emergent as well as ambulatory settings.",
"title": ""
},
{
"docid": "4d5317c069450b785a77c98581494782",
"text": "at Columbia University for support during the writing of the early draft of paper, and to numerous readers—particularly the three anonymous reviewers—for their suggestions. Opinions and analysis are the author's, and not necessarily those of Microsoft Corporation. Abstract The paper reviews roughly 200 recent studies of mobile (cellular) phone use in the developing world, and identifies major concentrations of research. It categorizes studies along two dimensions. One dimension distinguishes studies of the determinants of mobile adoption from those that assess the impacts of mobile use, and from those focused on the interrelationships between mobile technologies and users. A secondary dimension identifies a subset of studies with a strong economic development perspective. The discussion considers the implications of the resulting review and typology for future research.",
"title": ""
},
{
"docid": "fa5a07a89f8b52759585ea20124fb3cc",
"text": "Polycystic ovary syndrome (PCOS) is considered as a highly heterogeneous and complex disease. Dimethyldiguanide (DMBG) is widely used to improve the reproductive dysfunction in women with PCOS. However, the precise mechanism by which DMBG exerts its benefical effect on PCOS remains largely unknown. The present study was designed to explore the effects of DMBG on the changes of oxidative stress and the activation of nucleotide leukin rich polypeptide 3 (NLRP3) inflammasome in the ovaries during the development and treatment of PCOS. A letrozole-induced rat PCOS model was developed. The inflammatory status was examined by analyzing the serum high sensitive C-reactive protein (hsCRP) levels in ras. We found that DMBG treatment rescued PCOS rats, which is associated with the reduced chronic low grade inflammation in these rats. In PCOS rats, the NLRP3 and the adaptor protein apoptosis-associated speck-like protein (ASC) mRNA levels, caspase-1 activation, and IL-1β production were unregulated, which was markedly attenuated by DMBG treatment. Moreover, oxidative stress was enhanced in PCOS rats as shown by increased lipid peroxidation (LPO) and activity of superoxide dismutase (SOD) and catalase. DMBG significantly decreased LPO, while it had no effects on SOD and catalase activities. Together, these results indicate that DMBG treatment may rescue PCOS rats by suppressing oxidative stress and NLRP3 inflammasome activation in PCOS ovaries.",
"title": ""
},
{
"docid": "b93446bab637abd4394338615a5ef6e9",
"text": "Genetic programming is a methodology inspired by biological evolution. By using computational analogs to biological crossover and mutation new versions of a program are generated automatically. This population of new programs is then evaluated by an user defined fittness function to only select the programs that show an improved behavior as compared to the original program. In this case the desired behavior is to retain all original functionality and additionally fixing bugs found in the program code.",
"title": ""
},
{
"docid": "c9c5c8441ef15c512afbe4e6079b4bd0",
"text": "Health insurance fraud increases the disorganization and unfairness in our society. Health care fraud leads to substantial losses of money and very costly to health care insurance system. It is horrible because the percentage of health insurance fraud keeps increasing every year in many countries. To address this widespread problem, effective techniques are in need to detect fraudulent claims in health insurance sector. The application of data mining is specifically relevant and it has been successfully applied in medical needs for its reliable precision accuracy and rapid beneficial results. This paper aims to provide a comprehensive survey of the statistical data mining methods applied to detect fraud in health insurance sector.",
"title": ""
},
{
"docid": "480c8d16f3e58742f0164f8c10a206dd",
"text": "Dyna is an architecture for reinforcement learning agents that interleaves planning, acting, and learning in an online setting. This architecture aims to make fuller use of limited experience to achieve better performance with fewer environmental interactions. Dyna has been well studied in problems with a tabular representation of states, and has also been extended to some settings with larger state spaces that require function approximation. However, little work has studied Dyna in environments with high-dimensional state spaces like images. In Dyna, the environment model is typically used to generate one-step transitions from selected start states. We applied one-step Dyna to several games from the Arcade Learning Environment and found that the model-based updates offered surprisingly little benefit, even with a perfect model. However, when the model was used to generate longer trajectories of simulated experience, performance improved dramatically. This observation also holds when using a model that is learned from experience; even though the learned model is flawed, it can still be used to accelerate learning.",
"title": ""
},
{
"docid": "2f35c7fafec1afafba78a0d1853b41ba",
"text": "A fundamental objective of human–computer interaction research is to make systems more usable, more useful, and to provide users with experiences fitting their specific background knowledge and objectives. The challenge in an information-rich world is not only to make information available to people at any time, at any place, and in any form, but specifically to say the “right” thing at the “right” time in the “right” way. Designers of collaborative human–computer systems face the formidable task of writing software for millions of users (at design time) while making it work as if it were designed for each individual user (only known at use time). User modeling research has attempted to address these issues. In this article, I will first review the objectives, progress, and unfulfilled hopes that have occurred over the last ten years, and illustrate them with some interesting computational environments and their underlying conceptual frameworks. A special emphasis is given to high-functionality applications and the impact of user modeling to make them more usable, useful, and learnable. Finally, an assessment of the current state of the art followed by some future challenges is given.",
"title": ""
},
{
"docid": "36ad496263674c6f0f8d250d73b230fe",
"text": "We rst review how wavelets may be used for multi-resolution image processing, describing the lter-bank implementation of the discrete wavelet transform (dwt) and how it may be extended via separable ltering for processing images and other multi-dimensional signals. We then show that the condition for inversion of the dwt (perfect reconstruction) forces many commonly used wavelets to be similar in shape, and that this shape produces severe shift variance (variation of dwt coeecient energy at any given scale with shift of the input signal). It is also shown that separable ltering with the dwt prevents the transform from providing directionally selective lters for diagonal image features. Complex wavelets can provide both shift invariance and good directional se-lectivity, with only modest increases in signal redundancy and computation load. However development of a complex wavelet transform (cwt) with perfect reconstruction and good lter characteristics has proved diicult until recently. We now propose the dual-tree cwt as a solution to this problem, yielding a transform with attractive properties for a range of signal and image processing applications, including motion estimation, denoising, texture analysis and synthesis, and object segmentation.",
"title": ""
}
] |
scidocsrr
|
a174a3dc5f3c9a0879993fd662cf2f5e
|
Attribute-Based Access Control
|
[
{
"docid": "3e7e4b5c2a73837ac5fa111a6dc71778",
"text": "Merging the best features of RBAC and attribute-based systems can provide effective access control for distributed and rapidly changing applications.",
"title": ""
}
] |
[
{
"docid": "e51d3dda4b53a01fbf12ce033321421f",
"text": "The tremendous growth in electronic data of universities creates the need to have some meaningful information extracted from these large volumes of data. The advancement in the data mining field makes it possible to mine educational data in order to improve the quality of the educational processes. This study, thus, uses data mining methods to study the performance of undergraduate students. Two aspects of students' performance have been focused upon. First, predicting students' academic achievement at the end of a fouryear study programme. Second, studying typical progressions and combining them with prediction results. Two important groups of students have been identified: the low and high achieving students. The results indicate that by focusing on a small number of courses that are indicators of particularly good or poor performance, it is possible to provide timely warning and support to low achieving students, and advice and opportunities to high performing students. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "150e7a6f46e93fc917e43e32dedd9424",
"text": "This purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing and introduction to the remaining papers of this special issue. Lastly, it discusses new interesting research horizons.",
"title": ""
},
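As a concrete illustration of the MCMC building blocks this passage surveys, the sketch below implements a random-walk Metropolis sampler for a one-dimensional target density; the target, step size and chain length are arbitrary choices for illustration.

```python
import numpy as np

def metropolis(log_target, x0, n_samples, step=0.5, rng=None):
    """Random-walk Metropolis: propose x' = x + N(0, step^2), accept with prob min(1, p(x')/p(x))."""
    rng = rng or np.random.default_rng(0)
    x, samples = x0, []
    log_p = log_target(x)
    for _ in range(n_samples):
        prop = x + step * rng.standard_normal()
        log_p_prop = log_target(prop)
        if np.log(rng.random()) < log_p_prop - log_p:   # accept/reject step
            x, log_p = prop, log_p_prop
        samples.append(x)
    return np.array(samples)

# Example target: standard normal (known answer lets us sanity-check the chain).
chain = metropolis(lambda x: -0.5 * x * x, x0=3.0, n_samples=5000)
print(chain.mean(), chain.std())   # should be roughly 0 and 1
```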
{
"docid": "59d3a3ec644d8554cbb2a5ac75a329f8",
"text": "Relational learning can be described as the task of learning first-order logic rules from examples. It has enabled a number of new machine learning applications, e.g. graph mining and link analysis. Inductive Logic Programming (ILP) performs relational learning either directly by manipulating first-order rules or through propositionalization, which translates the relational task into an attribute-value learning task by representing subsets of relations as features. In this paper, we introduce a fast method and system for relational learning based on a novel propositionalization called Bottom Clause Propositionalization (BCP). Bottom clauses are boundaries in the hypothesis search space used by ILP systems Progol and Aleph. Bottom clauses carry semantic meaning and can be mapped directly onto numerical vectors, simplifying the feature extraction process. We have integrated BCP with a well-known neural-symbolic system, C-IL2P, to perform learning from numerical vectors. C-IL2P uses background knowledge in the form of propositional logic programs to build a neural network. The integrated system, which we call CILP++, handles first-order logic knowledge and is available for download from Sourceforge. We have evaluated CILP++ on seven ILP datasets, comparing results with Aleph and a well-known propositionalization method, RSD. The results show that CILP++ can achieve accuracy comparable to Aleph, while being generally faster, BCP achieved statistically significant improvement in accuracy in comparison with RSD when running with a neural network, but BCP and RSD perform similarly when running with C4.5. We have also extended CILP++ to include a statistical feature selection method, mRMR, with preliminary results indicating that a reduction of more than 90 % of features can be achieved with a small loss of accuracy.",
"title": ""
},
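To make the propositionalization idea concrete, here is a toy sketch that turns relational examples into binary feature vectors by treating each distinct ground literal as a feature. It mirrors the spirit of Bottom Clause Propositionalization described above, but the literal sets and predicate names are invented and the bottom-clause construction itself is omitted.

```python
# Each example is a set of ground literals (here just strings) describing relations.
examples = [
    {"parent(ann,bob)", "parent(bob,carl)", "male(bob)"},
    {"parent(eve,dan)", "female(eve)"},
]

# Build the feature dictionary: one column per distinct literal seen in any example.
features = sorted(set().union(*examples))

def to_vector(example):
    """Map a relational example onto a binary vector over the shared literal vocabulary."""
    return [1 if lit in example else 0 for lit in features]

vectors = [to_vector(ex) for ex in examples]
for row in vectors:
    print(row)
# These fixed-length vectors can now be fed to any attribute-value learner
# (a neural network, C4.5, ...), as in the CILP++ pipeline described above.
```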
{
"docid": "ff9cb45c2142452f447fc178dc9d58f6",
"text": "Logs are ubiquitous for system monitoring and debugging. However, there lacks a comprehensive system that is capable of performing heterogeneous log organization and analysis for various purposes with very limited domain knowledge and human surveillance. In this manuscript, a novel system for heterogeneous log analysis is proposed. The system, denoted as Heterogeneous Log Analyzer (HLAer), achieves the following goals concurrently: 1) heterogeneous log categorization and organization; 2) automatic log format recognition and 3) heterogeneous log indexing. Meanwhile, HLAer supports queries and outlier detection on heterogeneous logs. HLAer provides a framework which is purely dataoriented and thus general enough to adapt to arbitrary log formats, applications or systems. The current implementation of HLAer is scalable to Big Data.",
"title": ""
},
{
"docid": "d4774f784e3b439dfb77b0f10a8c4950",
"text": "As consequence of the considerable increase of the electrical power demand in vehicles, the adoption of a combined direct-drive starter/alternator system is being seriously pursued and a new generation of vehicle alternators delivering power up to 6 kW over the entire range of the engine speed is soon expected for use with connection to a 42 V bus. The surface permanent magnet (SPM) machines offer many of the features sought for such future automotive power generation systems, and thereby a substantial improvement in the control of their output voltage would allow the full exploitation of their attractive characteristics in the direct-drive starter/alternator application without significant penalties otherwise resulting on the machine-fed power converter. Concerning that, this paper reports on the original solution adopted in a proof-of-concept axial-flux permanent magnet machine (AFPM) prototype to provide weakening of the flux linkage with speed and thereby achieve constant-power operation over a wide speed range. The principle being utilized is introduced and described, including design dimensions and experimental data taken from the proof-of-concept machine prototype.",
"title": ""
},
{
"docid": "3e7172904fa0f6f0948ecb5d884ad853",
"text": "The MNIST dataset has become a standard benchmark for learning, classification and computer vision systems. Contributing to its widespread adoption are the understandable and intuitive nature of the task, its relatively small size and storage requirements and the accessibility and ease-of-use of the database itself. The MNIST database was derived from a larger dataset known as the NIST Special Database 19 which contains digits, uppercase and lowercase handwritten letters. This paper introduces a variant of the full NIST dataset, which we have called Extended MNIST (EMNIST), which follows the same conversion paradigm used to create the MNIST dataset. The result is a set of datasets that constitute a more challenging classification tasks involving letters and digits, and that shares the same image structure and parameters as the original MNIST task, allowing for direct compatibility with all existing classifiers and systems. Benchmark results are presented along with a validation of the conversion process through the comparison of the classification results on converted NIST digits and the MNIST digits.",
"title": ""
},
{
"docid": "c95894477d7279deb7ddbb365030c34e",
"text": "Among mammals living in social groups, individuals form communication networks where they signal their identity and social status, facilitating social interaction. In spite of its importance for understanding of mammalian societies, the coding of individual-related information in the vocal signals of non-primate mammals has been relatively neglected. The present study focuses on the spotted hyena Crocuta crocuta, a social carnivore known for its complex female-dominated society. We investigate if and how the well-known hyena's laugh, also known as the giggle call, encodes information about the emitter. By analyzing acoustic structure in both temporal and frequency domains, we show that the hyena's laugh can encode information about age, individual identity and dominant/subordinate status, providing cues to receivers that could enable assessment of the social position of an emitting individual. The range of messages encoded in the hyena's laugh is likely to play a role during social interactions. This call, together with other vocalizations and other sensory channels, should ensure an array of communication signals that support the complex social system of the spotted hyena. Experimental studies are now needed to decipher precisely the communication network of this species.",
"title": ""
},
{
"docid": "f9f251a50d19ef325346c7dd0d3bc9be",
"text": "In this paper, a Cartesian admittance controller with on-line gravity and friction observer compensation based on passivity theory for elastic joint robots is proposed. For this study, a compliance and position controller for joint and Cartesian level has been described with dual loop construction: an outer-loop admittance control, and an inner-loop motor side position with torque feedback control. In terms of torque feedback, a physical interpretation is addressed and analyzed. Regarding as admittance controller of the elastic joint, two type (stiffness and damping) of the systematic control model can be represented. In addition to an on-line gravity compensation based-on output-side encodes has been proposed. Moreover, in order to achieve high trajectory tracking performance, a friction observer based compensation is applied also. Furthermore, aimed at the control performance of the stiffness, damping, and admittance control algorithm are assessed by a simulation, respectively. Finally, the simulation studies reveal a better effect regarding as the position tracking and torque output in contrast to without friction passivity algorithm and proportional-derivative (PD) control. In addition, the response of proposed compliance control method is effective for external force.",
"title": ""
},
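The outer-loop admittance idea can be sketched numerically: a desired apparent mass-damper-stiffness system responds to the measured external force, and the resulting reference motion would be handed to the inner position loop. The gains, time step and force profile below are placeholders, not values from the paper.

```python
def admittance_step(x, v, f_ext, M=1.0, D=20.0, K=100.0, dt=0.001):
    """One Euler step of M*a + D*v + K*x = f_ext, returning the new reference position/velocity."""
    a = (f_ext - D * v - K * x) / M
    v = v + a * dt
    x = x + v * dt
    return x, v

x_ref, v_ref = 0.0, 0.0
for k in range(1000):                      # 1 s of simulated contact
    f_ext = 5.0 if k < 500 else 0.0        # a constant 5 N push, then release
    x_ref, v_ref = admittance_step(x_ref, v_ref, f_ext)
print(round(x_ref, 4))                     # reference returns toward 0 after the force is removed
```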
{
"docid": "7db9cf29dd676fa3df5a2e0e95842b6e",
"text": "We present a novel approach to still image denoising based on e ective filtering in 3D transform domain by combining sliding-window transform processing with block-matching. We process blocks within the image in a sliding manner and utilize the block-matching concept by searching for blocks which are similar to the currently processed one. The matched blocks are stacked together to form a 3D array and due to the similarity between them, the data in the array exhibit high level of correlation. We exploit this correlation by applying a 3D decorrelating unitary transform and e ectively attenuate the noise by shrinkage of the transform coe cients. The subsequent inverse 3D transform yields estimates of all matched blocks. After repeating this procedure for all image blocks in sliding manner, the final estimate is computed as weighed average of all overlapping blockestimates. A fast and e cient algorithm implementing the proposed approach is developed. The experimental results show that the proposed method delivers state-of-art denoising performance, both in terms of objective criteria and visual quality.",
"title": ""
},
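A heavily simplified sketch of the grouping-and-shrinkage idea follows: patches similar to a reference patch are stacked into a 3-D array, the stack is taken to a transform domain, small coefficients are hard-thresholded, and the stack is transformed back. The real method uses separable DCT/wavelet transforms, weighted aggregation and a second Wiener-filtering stage; the plain 3-D FFT and the threshold here are stand-ins for illustration only.

```python
import numpy as np

def denoise_block(noisy, ref=(0, 0), patch=8, n_match=8, thresh=30.0):
    """Group patches similar to the reference, hard-threshold the 3-D spectrum, return estimates."""
    ry, rx = ref
    ref_patch = noisy[ry:ry + patch, rx:rx + patch]
    # Block matching: score every candidate patch by L2 distance to the reference.
    scores = []
    for y in range(noisy.shape[0] - patch + 1):
        for x in range(noisy.shape[1] - patch + 1):
            cand = noisy[y:y + patch, x:x + patch]
            scores.append((np.sum((cand - ref_patch) ** 2), y, x))
    scores.sort(key=lambda t: t[0])
    coords = [(y, x) for _, y, x in scores[:n_match]]
    # Stack matched patches into a 3-D group and shrink in the transform domain.
    group = np.stack([noisy[y:y + patch, x:x + patch] for y, x in coords])
    spec = np.fft.fftn(group)
    spec[np.abs(spec) < thresh] = 0.0          # hard thresholding attenuates the noise
    estimates = np.real(np.fft.ifftn(spec))
    return coords, estimates                    # per-patch estimates, to be aggregated by averaging

rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
coords, est = denoise_block(noisy)
print(est.shape)    # (8, 8, 8): eight denoised estimates of similar 8x8 patches
```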
{
"docid": "099d292857e6e363f06eb606b0ce5b36",
"text": "The blockchain technology has evolved beyond traditional payment solutions in the finance sector and offers a potential for transforming many sectors including the public sector. The novel integration of technology and economy that open public block-chains have brought represents both challenges to and opportunities for enhancing digital public services. So far, the public sector has lagged behind other sectors in both research and exploration of this technology, but pilot cases show that there is a great potential for reforming and even transforming public service delivery.\n We argue that the open blockchain technology is best understood as a possible information infrastructure, given its universal, evolving, open and transparent nature. A comparison with Internet is meaningful despite obvious differences between the two. Based on some case studies, we have developed an analytical framework for better understanding the potential benefits as well as the existing challenges when introducing blockchain technology in the public sector.",
"title": ""
},
{
"docid": "242c5d237b2bca8b6008e4c9a2196322",
"text": "In recent years, a growing number of occupational therapists have integrated video game technologies, such as the Nintendo Wii, into rehabilitation programs. 'Wiihabilitation', or the use of the Wii in rehabilitation, has been successful in increasing patients' motivation and encouraging full body movement. The non-rehabilitative focus of Wii applications, however, presents a number of problems: games are too difficult for patients, they mainly target upper-body gross motor functions, and they lack support for task customization, grading, and quantitative measurements. To overcome these problems, we have designed a low-cost, virtual-reality based system. Our system, Virtual Wiihab, records performance and behavioral measurements, allows for activity customization, and uses auditory, visual, and haptic elements to provide extrinsic feedback and motivation to patients.",
"title": ""
},
{
"docid": "068b2bfabd86d1dee8b84474e590ffee",
"text": "Mobile telecommunication has become an important part of our daily lives. Yet, industry standards such as GSM often exclude scenarios with active attackers. Devices participating in communication are seen as trusted and non-malicious. By implementing our own baseband firmware based on OsmocomBB, we violate this trust and are able to evaluate the impact of a rogue device with regard to the usage of broadcast information. Through our analysis we show two new attacks based on the paging procedure used in cellular networks. We demonstrate that for at least GSM, it is feasible to hijack the transmission of mobile terminated services such as calls, perform targeted denial of service attacks against single subscribers and as well against large geographical regions within a metropolitan area.",
"title": ""
},
{
"docid": "6c74312e0ade829feb1986403cce9945",
"text": "In the emerging world of human-robot interaction, people and robots will work together to achieve joint objectives. This paper discusses the design and validation of a general scheme for creating emotionally expressive behaviours for robots, in order that people might better interpret how a robot collaborator is succeeding or failing in its work. It exemplifies a unified approach to creating robot behaviours for two very different robot forms, based on combinations of four groups of design parameters (approach/avoidance, energy, intensity and frequency). 59 people rated video clips of robots performing expressive behaviours both for emotional expressivity on Valence-Arousal-Dominance dimensions, and their judgement of the successfulness of the robots’ work. Results are discussed in terms of the utility of expressive behaviour for facilitating human understanding of robot intentions and the design of cues for basic emotional states.",
"title": ""
},
{
"docid": "335220bbad7798a19403d393bcbbf7fb",
"text": "In today’s computerized and information-based society, text data is rich but messy. People are soaked with vast amounts of natural-language text data, ranging from news articles, social media post, advertisements, to a wide range of textual information from various domains (medical records, corporate reports). To turn such massive unstructured text data into actionable knowledge, one of the grand challenges is to gain an understanding of the factual information (e.g., entities, attributes, relations, events) in the text. In this tutorial, we introduce data-driven methods to construct structured information networks (where nodes are different types of entities attached with attributes, and edges are different relations between entities) for text corpora of different kinds (especially for massive, domain-specific text corpora) to represent their factual information. We focus on methods that are minimally-supervised, domain-independent, and languageindependent for fast network construction across various application domains (news, web, biomedical, reviews). We demonstrate on real datasets including news articles, scientific publications, tweets and reviews how these constructed networks aid in text analytics and knowledge discovery at a large scale.",
"title": ""
},
{
"docid": "b250ac830e1662252069cc85128358a7",
"text": "Several recent works have shown that image descriptors produced by deep convolutional neural networks provide state-of-the-art performance for image classification and retrieval problems. It also has been shown that the activations from the convolutional layers can be interpreted as local features describing particular image regions. These local features can be aggregated using aggregating methods developed for local features (e.g. Fisher vectors), thus providing new powerful global descriptor. In this paper we investigate possible ways to aggregate local deep features to produce compact descriptors for image retrieval. First, we show that deep features and traditional hand-engineered features have quite different distributions of pairwise similarities, hence existing aggregation methods have to be carefully re-evaluated. Such re-evaluation reveals that in contrast to shallow features, the simple aggregation method based on sum pooling provides the best performance for deep convolutional features. This method is efficient, has few parameters, and bears little risk of overfitting when e.g. learning the PCA matrix. In addition, we suggest a simple yet efficient query expansion scheme suitable for the proposed aggregation method. Overall, the new compact global descriptor improves the state-of-the-art on four common benchmarks considerably.",
"title": ""
},
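The aggregation pipeline favoured in the passage (sum-pool the convolutional activations over spatial positions, then PCA-whiten and L2-normalize) is short enough to sketch directly; the feature maps below are random stand-ins rather than the output of an actual CNN, and the dimensions are arbitrary.

```python
import numpy as np

def sum_pool_descriptor(feature_map):
    """Sum conv activations over spatial positions: (H, W, C) -> (C,) global descriptor."""
    d = feature_map.sum(axis=(0, 1))
    return d / np.linalg.norm(d)

def fit_whitening(descriptors, dim=32):
    """Learn a PCA-whitening projection from a matrix of descriptors (one per row)."""
    mean = descriptors.mean(axis=0)
    u, s, vt = np.linalg.svd(descriptors - mean, full_matrices=False)
    return mean, vt[:dim] / s[:dim, None]      # rows project onto whitened principal axes

def compact_descriptor(feature_map, mean, proj):
    d = sum_pool_descriptor(feature_map) - mean
    d = proj @ d
    return d / np.linalg.norm(d)               # final L2 normalization before dot-product retrieval

rng = np.random.default_rng(0)
maps = rng.random((100, 7, 7, 256))            # stand-in for conv activations of 100 images
raw = np.stack([sum_pool_descriptor(m) for m in maps])
mean, proj = fit_whitening(raw, dim=32)
q = compact_descriptor(maps[0], mean, proj)
print(q.shape, float(np.linalg.norm(q)))       # (32,) descriptor with unit norm
```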
{
"docid": "3207403c7f748bd7935469a74aa1c38f",
"text": "This article briefly reviews the rise of Critical Discourse Analysis and teases out a detailed analysis of the various critiques that have been levelled at CDA and its practitioners over the last twenty years, both by scholars working within the “critical” paradigm and by other critics. A range of criticisms are discussed which target the underlying premises, the analytical methodology and the disputed areas of reader response and the integration of contextual factors. Controversial issues such as the predominantly negative focus of much CDA scholarship, and the status of CDA as an emergent “intellectual orthodoxy”, are also reviewed. The conclusions offer a summary of the principal criticisms that emerge from this overview, and suggest some ways in which these problems could be attenuated.",
"title": ""
},
{
"docid": "0761383a10519f2c2f1aac702c1399c7",
"text": "The IOT is a huge and widely distributed the Internet that things connect things.It connects all the articles to the internet through information sensing devices. It is the second information wave after Computer, Internet and mobile communication network. With the rapid development of the Internet of Things, its security problems have become more concentrated. This paper addresses the security issues and key technologies in IOT. It elaborated the basic concepts and the principle of the IOT and combined the relevant characteristics of the IOT as well as the International main research results to analysis the security issues and key technologies of the IOT which in order to plays a positive role in the construction and the development of the IOT through the research.",
"title": ""
},
{
"docid": "4149fae256da1833825049520816858d",
"text": "Systems need to run a larger and more diverse set of applications, from real-time to interactive to batch, on uniprocessor and multiprocessor platforms. However, most schedulers either do not address latency requirements or are specialized to complex real-time paradigms, limiting their applicability to general-purpose systems.In this paper, we present Borrowed-Virtual-Time (BVT) Scheduling, showing that it provides low-latency for real-time and interactive applications yet weighted sharing of the CPU across applications according to system policy, even with thread failure at the real-time level, all with a low-overhead implementation on multiprocessors as well as uniprocessors. It makes minimal demands on application developers, and can be used with a reservation or admission control module for hard real-time applications.",
"title": ""
},
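A bare-bones sketch of the Borrowed-Virtual-Time dispatch rule described above: each thread accumulates actual virtual time in inverse proportion to its weight, and a latency-sensitive thread may borrow against its future CPU share by warping its effective virtual time backwards. The quantum, weights and warp value are illustrative only, and warp time limits and context-switch costs are omitted.

```python
class Thread:
    def __init__(self, name, weight, warp=0):
        self.name, self.weight, self.warp = name, weight, warp
        self.avt = 0.0          # actual virtual time
        self.warp_on = False    # set while the thread wants low latency

    def evt(self):
        """Effective virtual time: borrowed (warped) threads look earlier to the scheduler."""
        return self.avt - (self.warp if self.warp_on else 0)

def schedule(threads, quantum=10.0, slices=6):
    for _ in range(slices):
        runnable = min(threads, key=lambda t: t.evt())   # BVT dispatches the minimum EVT
        runnable.avt += quantum / runnable.weight        # charge virtual time by weight
        print(f"run {runnable.name:11s} avt={runnable.avt:6.1f}")

batch = Thread("batch", weight=1)
interactive = Thread("interactive", weight=1, warp=50)
interactive.warp_on = True                               # e.g. it just woke up for user input
schedule([batch, interactive])
```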
{
"docid": "0b87dc8cf729116309bb0c65096cc6a7",
"text": "Drawing both from the IS literature on software project risk management and the contingency research in Organization Theory literature, the present study develops an integrative contingency model of software project risk management. Adopting a profile deviation perspective of fit, the outcome of a software development project (Performance) is hypothesized to be influenced by the fit between the project's risk (Risk Exposure) and how project risk is managed (Risk Management Profile). The research model was tested with longitudinal data obtained from project leaders and key users of 75 software projects. The results support the contingency model proposed and suggest that in order to increase project performance a project's risk management profile needs to vary according to the project's risk exposure. Specifically, high-risk projects were found to call for high information processing capacity approaches in their management. However, the most appropriate management approach was found to depend on the perfonnance criterion used. When meeting project budgets was the performance criterion, successful high-risk projects had high levels of intemal integration, as well as high levels of formal planning. When system Journal of Manageintni tnformalioH Syslemi/Spring 2Q0l.Vo\\. 17, No. 4, pp. 37-69. ©2001 M.E. Shufpc, Inr 0742-1222 / 2001 S9.50 -t0.00. 38 BARKl.RIVARD.ANDTALBOT quality was the performance criterion, successful high-risk projects had high levels of user participation.",
"title": ""
},
{
"docid": "d8e8c3ecdb63dcda7fc7e67a02479e07",
"text": "It has been known that salt-sensitivity of blood pressure is defined genetically as well as can be developed secondary to either decreased renal function or by influence of other environmental factors. The aim of the study was to evaluate the possible mechanism for the development of salt-sensitive essential hypertension in the population of Georgia. The Case-Control study included 185 subjects, 94 cases with Essential Hypertension stage I (JNC7) without prior antihypertensive treatment, and 91 controls. Salt-sensitivity test was used to divide both case and control groups into salt-sensitive (n=112) and salt-resistant (n=73) subgroups. Endogenous cardiotonic steroids, sodium and PRA were measured in blood and urine samples at the different sodium conditions. Determinations of circulating levels of endogenous sodium pump inhibitors and PRA were carried out using the ELISA and RIA methods. Descriptive statistics were used to analyze the data. Differences in variables between sodium conditions were assessed using paired t-tests. Salt-sensitivity was found in 60.5% of total population investigated, with higher frequency in females. Salt-sensitivity positively correlated with age in females (r=0.262, p<0.01). Statistically significant positive correlation was found between 24 hour urine sodium concentration changes and salt-sensitivity r=0.334, p<0.01. Significant negative correlation was found between salt-sensitivity and PRA. Since no significant correlations were found between BMI and salt-sensitivity, we assume that BMI and salt-sensitivity should be discussed as different independent risk factors for the development of Essential Hypertension. Significant correlation was found between changes in GFR in salt-sensitive cases and controls p<0.01. This can be explained with comparable hyperfiltration of the kidneys at high sodium load and discussed as early sign of hypertensive nephropathy in salt-sensitive individuals. At the high sodium condition Endogenous MBG and OU were high in salt-sensitive subjects compared to salt-resistant. These compounds decreased after low salt diet in salt-sensitive cases as well as controls but remained within the same level in salt-resistant individuals. MBG and OU levels positively correlated with SBP in salt-sensitive individuals but salt-resistant subjects didn't show any changes. Our results support the idea that chronic high sodium loading (>200 mmol) which is typical in traditional Georgian as well as other diets switch those humoral and pathophysiological mechanisms that can lead to the development of certain type of hypertension in salt-sensitive individuals. Salt intake reduction can prevent development of hypertension in salt-sensitive subjects, although hypertension develops in the salt-resistant individuals but by other mechanism such as RAAS.",
"title": ""
}
] |
scidocsrr
|
5045d0599c9b6ff6bdc12faa80aba595
|
A Neurodynamic Approach for Real-Time Scheduling via Maximizing Piecewise Linear Utility
|
[
{
"docid": "eed5c66d0302c492f2480a888678d1dc",
"text": "In 1988 Kennedy and Chua introduced the dynamical canonical nonlinear programming circuit (NPC) to solve in real time nonlinear programming problems where the objective function and the constraints are smooth (twice continuously differentiable) functions. In this paper, a generalized circuit is introduced (G-NPC), which is aimed at solving in real time a much wider class of nonsmooth nonlinear programming problems where the objective function and the constraints are assumed to satisfy only the weak condition of being regular functions. G-NPC, which derives from a natural extension of NPC, has a neural-like architecture and also features the presence of constraint neurons modeled by ideal diodes with infinite slope in the conducting region. By using the Clarke's generalized gradient of the involved functions, G-NPC is shown to obey a gradient system of differential inclusions, and its dynamical behavior and optimization capabilities, both for convex and nonconvex problems, are rigorously analyzed in the framework of nonsmooth analysis and the theory of differential inclusions. In the special important case of linear and quadratic programming problems, salient dynamical features of G-NPC, namely the presence of sliding modes , trajectory convergence in finite time, and the ability to compute the exact optimal solution of the problem being modeled, are uncovered and explained in the developed analytical framework.",
"title": ""
}
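For orientation, one common way to write such a nonsmooth gradient system is as a differential inclusion driven by Clarke's generalized gradient of an exact penalty function. The LaTeX sketch below shows that generic construction under stated assumptions; it is not the exact circuit equations of G-NPC.

```latex
% Nonsmooth program: minimize f(x) subject to g_i(x) <= 0, i = 1..m,
% with f and g_i regular (possibly nondifferentiable) functions.
% Exact penalty function (sigma > 0 sufficiently large):
\[
  \phi(x) \;=\; f(x) \;+\; \sigma \sum_{i=1}^{m} \max\{0,\; g_i(x)\}
\]
% Gradient system of differential inclusions, with \partial the Clarke
% generalized gradient; sliding modes can arise on constraint boundaries:
\[
  \dot{x}(t) \;\in\; -\,\partial \phi\bigl(x(t)\bigr)
\]
```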
] |
[
{
"docid": "8cc9ab356aa8b0f88d244b2077816ddc",
"text": "Brain control of prehension is thought to rely on two specific brain circuits: a dorsomedial one (involving the areas of the superior parietal lobule and the dorsal premotor cortex) involved in the transport of the hand toward the object and a dorsolateral one (involving the inferior parietal lobule and the ventral premotor cortex) dealing with the preshaping of the hand according to the features of the object. The present study aimed at testing whether a pivotal component of the dorsomedial pathway (area V6A) is involved also in hand preshaping and grip formation to grasp objects of different shapes. Two macaque monkeys were trained to reach and grasp different objects. For each object, animals used a different grip: whole-hand prehension, finger prehension, hook grip, primitive precision grip, and advanced precision grip. Almost half of 235 neurons recorded from V6A displayed selectivity for a grip or a group of grips. Several experimental controls were used to ensure that neural modulation was attributable to grip only. These findings, in concert with previous studies demonstrating that V6A neurons are modulated by reach direction and wrist orientation, that lesion of V6A evokes reaching and grasping deficits, and that dorsal premotor cortex contains both reaching and grasping neurons, indicate that the dorsomedial parieto-frontal circuit may play a central role in all phases of reach-to-grasp action. Our data suggest new directions for the modeling of prehension movements and testable predictions for new brain imaging and neuropsychological experiments.",
"title": ""
},
{
"docid": "df114396d546abfc9b6f1767e3bab8db",
"text": "I briefly highlight the salient properties of modified-inertia formulations of MOND, contrasting them with those of modified-gravity formulations, which describe practically all theories propounded to date. Future data (e.g. the establishment of the Pioneer anomaly as a new physics phenomenon) may prefer one of these broad classes of theories over the other. I also outline some possible starting ideas for modified inertia. 1 Modified MOND inertia vs. modified MOND gravity MOND is a modification of non-relativistic dynamics involving an acceleration constant a 0. In the formal limit a 0 → 0 standard Newtonian dynamics is restored. In the deep MOND limit, a 0 → ∞, a 0 and G appear in the combination (Ga 0). Much of the NR phenomenology follows from this simple prescription, including the asymptotic flatness of rotation curves, the mass-velocity relations (baryonic Tully-fisher and Faber Jackson relations), mass discrepancies in LSB galaxies, etc.. There are many realizations (theories) that embody the above dictates, relativistic and non-relativistic. The possibly very significant fact that a 0 ∼ cH 0 ∼ c(Λ/3) 1/2 may hint at the origin of MOND, and is most probably telling us that a. MOND is an effective theory having to do with how the universe at large shapes local dynamics, and b. in a Lorentz universe (with H 0 = 0, Λ = 0) a 0 = 0 and standard dynamics holds. We can broadly classify modified theories into two classes (with the boundary not so sharply defined): In modified-gravity (MG) formulations the field equation of the gravitational field (potential, metric) is modified; the equations of motion of other degrees of freedom (DoF) in the field are not. In modified-inertia (MI) theories the opposite it true. More precisely, in theories derived from an action modifying inertia is tantamount to modifying the kinetic (free) actions of the non-gravitational degrees of freedom. Local, relativistic theories in which the kinetic",
"title": ""
},
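The scalings quoted in the passage can be restated compactly as equations. The sketch below gives the standard deep-MOND mass-velocity relation and the numerical coincidence mentioned in the text; it is generic MOND phenomenology rather than anything specific to the modified-inertia formulations being discussed.

```latex
% Deep-MOND limit: Newton's constant and a_0 enter only through the product G a_0,
% which fixes, e.g., the mass-asymptotic-velocity (baryonic Tully-Fisher) relation:
\[
  V_\infty^{4} \;\sim\; G\, a_0\, M .
\]
% The numerical coincidence hinting at a cosmological origin of a_0:
\[
  a_0 \;\sim\; c H_0 \;\sim\; c \left(\Lambda/3\right)^{1/2} ,
\]
% so that in a Lorentz universe with H_0 = 0 and Lambda = 0 one has a_0 = 0
% and standard Newtonian dynamics is recovered.
```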
{
"docid": "3cf289ec7d0740dbf59a5c738c68d4a9",
"text": "Feminism is a natural ally to interaction design, due to its central commitments to issues such as agency, fulfillment, identity, equity, empowerment, and social justice. In this paper, I summarize the state of the art of feminism in HCI and propose ways to build on existing successes to more robustly integrate feminism into interaction design research and practice. I explore the productive role of feminism in analogous fields, such as industrial design, architecture, and game design. I introduce examples of feminist interaction design already in the field. Finally, I propose a set of femi-nist interaction design qualities intended to support design and evaluation processes directly as they unfold.",
"title": ""
},
{
"docid": "6c09932a4747c7e2d15b06720b1c48d9",
"text": "A distributed ledger made up of mutually distrusting nodes would allow for a single global database that records the state of deals and obligations between institutions and people. This would eliminate much of the manual, time consuming effort currently required to keep disparate ledgers synchronised with each other. It would also allow for greater levels of code sharing than presently used in the financial industry, reducing the cost of financial services for everyone. We present Corda, a platform which is designed to achieve these goals. This paper provides a high level introduction intended for the general reader. A forthcoming technical white paper elaborates on the design and fundamental architectural decisions.",
"title": ""
},
{
"docid": "7a37df81ad70697549e6da33384b4f19",
"text": "Water scarcity is now one of the major global crises, which has affected many aspects of human health, industrial development and ecosystem stability. To overcome this issue, water desalination has been employed. It is a process to remove salt and other minerals from saline water, and it covers a variety of approaches from traditional distillation to the well-established reverse osmosis. Although current water desalination methods can effectively provide fresh water, they are becoming increasingly controversial due to their adverse environmental impacts including high energy intensity and highly concentrated brine waste. For millions of years, microorganisms, the masters of adaptation, have survived on Earth without the excessive use of energy and resources or compromising their ambient environment. This has encouraged scientists to study the possibility of using biological processes for seawater desalination and the field has been exponentially growing ever since. Here, the term biodesalination is offered to cover all of the techniques which have their roots in biology for producing fresh water from saline solution. In addition to reviewing and categorizing biodesalination processes for the first time, this review also reveals unexplored research areas in biodesalination having potential to be used in water treatment.",
"title": ""
},
{
"docid": "5bf7c59c2cf319f04a8c98a3da12c546",
"text": "Designers face many system optimization problems when building distributed systems. Traditionally, designers have relied on optimization techniques that require either prior knowledge or centrally managed runtime knowledge of the system's environment, but such techniques are not viable in dynamic networks where topology, resource, and node availability are subject to frequent and unpredictable change. To address this problem, we propose collaborative reinforcement learning (CRL) as a technique that enables groups of reinforcement learning agents to solve system optimization problems online in dynamic, decentralized networks. We evaluate an implementation of CRL in a routing protocol for mobile ad hoc networks, called SAMPLE. Simulation results show how feedback in the selection of links by routing agents enables SAMPLE to adapt and optimize its routing behavior to varying network conditions and properties, resulting in optimization of network throughput. In the experiments, SAMPLE displays emergent properties such as traffic flows that exploit stable routes and reroute around areas of wireless interference or congestion. SAMPLE is an example of a complex adaptive distributed system.",
"title": ""
},
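The flavour of decentralized reinforcement-learning routing can be conveyed with a Q-routing-style update, in which each node learns the estimated delivery cost through each neighbour from the feedback it receives after forwarding. This is a generic sketch, not the specific collaborative reinforcement learning update used in SAMPLE; node names and costs are invented.

```python
import random

class RoutingAgent:
    """One node's view: q[dest][neighbour] estimates the cost to reach dest via that neighbour."""
    def __init__(self, neighbours, alpha=0.3, eps=0.1):
        self.neighbours, self.alpha, self.eps = list(neighbours), alpha, eps
        self.q = {}

    def choose(self, dest):
        table = self.q.setdefault(dest, {n: 0.0 for n in self.neighbours})
        if random.random() < self.eps:                     # keep exploring changing links
            return random.choice(self.neighbours)
        return min(table, key=table.get)

    def feedback(self, dest, neighbour, link_cost, neighbour_estimate):
        """Observed link cost plus the neighbour's advertised estimate updates our own estimate."""
        table = self.q.setdefault(dest, {n: 0.0 for n in self.neighbours})
        target = link_cost + neighbour_estimate
        table[neighbour] += self.alpha * (target - table[neighbour])

agent = RoutingAgent(neighbours=["B", "C"])
for _ in range(50):                       # link via B is congested, via C is cheap
    agent.feedback("D", "B", link_cost=5.0, neighbour_estimate=2.0)
    agent.feedback("D", "C", link_cost=1.0, neighbour_estimate=2.0)
print(agent.choose("D"))                  # usually "C" once the estimates converge
```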
{
"docid": "cb266f07461a58493d35f75949c4605e",
"text": "Zero shot learning in Image Classification refers to the setting where images from some novel classes are absent in the training data but other information such as natural language descriptions or attribute vectors of the classes are available. This setting is important in the real world since one may not be able to obtain images of all the possible classes at training. While previous approaches have tried to model the relationship between the class attribute space and the image space via some kind of a transfer function in order to model the image space correspondingly to an unseen class, we take a different approach and try to generate the samples from the given attributes, using a conditional variational autoencoder, and use the generated samples for classification of the unseen classes. By extensive testing on four benchmark datasets, we show that our model outperforms the state of the art, particularly in the more realistic generalized setting, where the training classes can also appear at the test time along with the novel classes.",
"title": ""
},
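A minimal conditional-VAE sketch in PyTorch shows the mechanics the passage relies on: encoder and decoder are both conditioned on the class-attribute vector, so after training one can sample pseudo-examples for unseen classes from their attributes alone and train an ordinary classifier on them. The dimensions, the random stand-in data and the omitted optimizer loop are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    def __init__(self, x_dim=2048, a_dim=85, z_dim=64, h_dim=512):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + a_dim, h_dim), nn.ReLU())
        self.mu, self.logvar = nn.Linear(h_dim, z_dim), nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + a_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x, a):
        h = self.enc(torch.cat([x, a], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        x_hat = self.dec(torch.cat([z, a], dim=1))
        recon = F.mse_loss(x_hat, x)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl

    @torch.no_grad()
    def sample(self, a, n):
        """Generate n pseudo-feature vectors for a class given only its attribute vector a."""
        z = torch.randn(n, self.mu.out_features)
        return self.dec(torch.cat([z, a.expand(n, -1)], dim=1))

model = CVAE()
x, a = torch.randn(8, 2048), torch.rand(8, 85)        # stand-ins for image features / attributes
loss = model(x, a)                                     # optimize this with any torch optimizer
unseen = model.sample(torch.rand(85), n=100)           # then fit a plain classifier on these
print(loss.item(), unseen.shape)
```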
{
"docid": "35894d8bc2e3e8e03b47801976a88554",
"text": "Visualization of brand positioning based on consumer web search information: using social network analysis Seung-Pyo Jun Do-Hyung Park Article information: To cite this document: Seung-Pyo Jun Do-Hyung Park , (2017),\" Visualization of brand positioning based on consumer web search information: using social network analysis \", Internet Research, Vol. 27 Iss 2 pp. Permanent link to this document: http://dx.doi.org/10.1108/IntR-02-2016-0037",
"title": ""
},
{
"docid": "12d6aab2ecf0802fd59b77ed8a209e99",
"text": "This paper reviews the econometric issues in efforts to estimate the impact of the death penalty on murder, focusing on six recent studies published since 2003. We highlight the large number of choices that must be made when specifying the various panel data models that have been used to address this question. There is little clarity about the knowledge potential murderers have concerning the risk of execution: are they influenced by the passage of a death penalty statute, the number of executions in a state, the proportion of murders in a state that leads to an execution, and details about the limited types of murders that are potentially susceptible to a sentence of death? If an execution rate is a viable proxy, should it be calculated using the ratio of last year’s executions to last year’s murders, last year’s executions to the murders a number of years earlier, or some other values? We illustrate how sensitive various estimates are to these choices. Importantly, the most up-to-date OLS panel data studies generate no evidence of a deterrent effect, while three 2SLS studies purport to find such evidence. The 2SLS studies, none of which shows results that are robust to clustering their standard errors, are unconvincing because they all use a problematic structure based on poorly measured and theoretically inappropriate pseudo-probabilities that are",
"title": ""
},
{
"docid": "6b6e055e4d6aea80d4f01eee47256be1",
"text": "Ponseti treatment for clubfoot has been successful, but recurrence continues to be an issue. After correction, patients are typically braced full time with a static abduction bar and shoes. Patient compliance with bracing is a modifiable risk factor for recurrence. We hypothesized that the use of Mitchell shoes and a dynamic abduction brace would increase compliance and thereby reduce the rate of recurrence. A prospective, randomized trial was carried out with consecutive patients treated for idiopathic clubfeet from 2008 to 2012. After casting and tenotomy, patients were randomized into either the dynamic or static abduction bar group. Both groups used Mitchell shoes. Patient demographics, satisfaction, and compliance were measured with self-reported questionnaires throughout follow-up. Thirty patients were followed up, with 15 in each group. Average follow-up was 18.7 months (range 3-40.7 months). Eight recurrences (26.7%) were found, with four in each group. Recurrences had a statistically significant higher number of casts and a longer follow-up time. Mean income, education level, patient-reported satisfaction and compliance, and age of caregiver tended to be lower in the recurrence group but were not statistically significant. No differences were found between the two brace types. Our study showed excellent patient satisfaction and reported compliance with Mitchell shoes and either the dynamic or static abduction bar. Close attention and careful education should be directed towards patients with known risk factors or difficult casting courses to maximize brace compliance, a modifiable risk factor for recurrence.",
"title": ""
},
{
"docid": "9193aad006395bd3bd76cabf44012da5",
"text": "In recent years, there is growing evidence that plant-foods polyphenols, due to their biological properties, may be unique nutraceuticals and supplementary treatments for various aspects of type 2 diabetes mellitus. In this article we have reviewed the potential efficacies of polyphenols, including phenolic acids, flavonoids, stilbenes, lignans and polymeric lignans, on metabolic disorders and complications induced by diabetes. Based on several in vitro, animal models and some human studies, dietary plant polyphenols and polyphenol-rich products modulate carbohydrate and lipid metabolism, attenuate hyperglycemia, dyslipidemia and insulin resistance, improve adipose tissue metabolism, and alleviate oxidative stress and stress-sensitive signaling pathways and inflammatory processes. Polyphenolic compounds can also prevent the development of long-term diabetes complications including cardiovascular disease, neuropathy, nephropathy and retinopathy. Further investigations as human clinical studies are needed to obtain the optimum dose and duration of supplementation with polyphenolic compounds in diabetic patients.",
"title": ""
},
{
"docid": "1e32662301070a085ce4d3244673c2cd",
"text": "Conventional automatic speech recognition (ASR) based on a hidden Markov model (HMM)/deep neural network (DNN) is a very complicated system consisting of various modules such as acoustic, lexicon, and language models. It also requires linguistic resources, such as a pronunciation dictionary, tokenization, and phonetic context-dependency trees. On the other hand, end-to-end ASR has become a popular alternative to greatly simplify the model-building process of conventional ASR systems by representing complicated modules with a single deep network architecture, and by replacing the use of linguistic resources with a data-driven learning method. There are two major types of end-to-end architectures for ASR; attention-based methods use an attention mechanism to perform alignment between acoustic frames and recognized symbols, and connectionist temporal classification (CTC) uses Markov assumptions to efficiently solve sequential problems by dynamic programming. This paper proposes hybrid CTC/attention end-to-end ASR, which effectively utilizes the advantages of both architectures in training and decoding. During training, we employ the multiobjective learning framework to improve robustness and achieve fast convergence. During decoding, we perform joint decoding by combining both attention-based and CTC scores in a one-pass beam search algorithm to further eliminate irregular alignments. Experiments with English (WSJ and CHiME-4) tasks demonstrate the effectiveness of the proposed multiobjective learning over both the CTC and attention-based encoder–decoder baselines. Moreover, the proposed method is applied to two large-scale ASR benchmarks (spontaneous Japanese and Mandarin Chinese), and exhibits performance that is comparable to conventional DNN/HMM ASR systems based on the advantages of both multiobjective learning and joint decoding without linguistic resources.",
"title": ""
},
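The multi-objective training and the joint decoding described above both reduce to simple interpolations of the CTC and attention scores; written out, with λ the interpolation weight, they take the following form.

```latex
% Multi-objective training loss: interpolate the CTC and attention objectives
% for the same encoder, with 0 <= \lambda <= 1:
\[
  \mathcal{L}_{\mathrm{MTL}}
    \;=\; \lambda\, \mathcal{L}_{\mathrm{CTC}}
    \;+\; (1 - \lambda)\, \mathcal{L}_{\mathrm{Attention}} .
\]
% One-pass joint decoding: score each hypothesis y in the beam by the same
% interpolation of log-probabilities given the acoustic input X:
\[
  \hat{y} \;=\; \arg\max_{y}\;
      \bigl\{ \lambda \log p_{\mathrm{ctc}}(y \mid X)
      + (1 - \lambda) \log p_{\mathrm{att}}(y \mid X) \bigr\} .
\]
```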
{
"docid": "c82901a585d9c924f4686b4d0373e774",
"text": "Object detection is a major challenge in computer vision, involving both object classification and object localization within a scene. While deep neural networks have been shown in recent years to yield very powerful techniques for tackling the challenge of object detection, one of the biggest challenges with enabling such object detection networks for widespread deployment on embedded devices is high computational and memory requirements. Recently, there has been an increasing focus in exploring small deep neural network architectures for object detection that are more suitable for embedded devices, such as Tiny YOLO and SqueezeDet. Inspired by the efficiency of the Fire microarchitecture introduced in SqueezeNet and the object detection performance of the singleshot detection macroarchitecture introduced in SSD, this paper introduces Tiny SSD, a single-shot detection deep convolutional neural network for real-time embedded object detection that is composed of a highly optimized, non-uniform Fire subnetwork stack and a non-uniform sub-network stack of highly optimized SSD-based auxiliary convolutional feature layers designed specifically to minimize model size while maintaining object detection performance. The resulting Tiny SSD possess a model size of 2.3MB (~26X smaller than Tiny YOLO) while still achieving an mAP of 61.3% on VOC 2007 (~4.2% higher than Tiny YOLO). These experimental results show that very small deep neural network architectures can be designed for real-time object detection that are well-suited for embedded scenarios.",
"title": ""
},
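The Fire microarchitecture that Tiny SSD builds on is compact enough to sketch: a 1x1 squeeze convolution reduces channels, then parallel 1x1 and 3x3 expand convolutions are concatenated. The channel counts below are arbitrary placeholders, not the optimized non-uniform ones reported for Tiny SSD.

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """SqueezeNet-style Fire module: squeeze (1x1) then concatenated 1x1/3x3 expand convs."""
    def __init__(self, in_ch, squeeze_ch, expand1_ch, expand3_ch):
        super().__init__()
        self.squeeze = nn.Sequential(nn.Conv2d(in_ch, squeeze_ch, 1), nn.ReLU(inplace=True))
        self.expand1 = nn.Sequential(nn.Conv2d(squeeze_ch, expand1_ch, 1), nn.ReLU(inplace=True))
        self.expand3 = nn.Sequential(nn.Conv2d(squeeze_ch, expand3_ch, 3, padding=1),
                                     nn.ReLU(inplace=True))

    def forward(self, x):
        s = self.squeeze(x)
        return torch.cat([self.expand1(s), self.expand3(s)], dim=1)

x = torch.randn(1, 64, 38, 38)          # a stand-in feature map
fire = Fire(in_ch=64, squeeze_ch=16, expand1_ch=32, expand3_ch=32)
print(fire(x).shape)                     # torch.Size([1, 64, 38, 38]); far fewer parameters than a plain 3x3 block
```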
{
"docid": "4c990fa8014acd0f2cbcfa9383462734",
"text": "This paper shows a model to conduct an empirical study in Iranian automotive industry in order to improve their performance. There are many factors which are effective factors in improving performance of Iranian automobile industry namely, leadership, customer focus, training, supplier quality management, product design, process management, and team work. The quality improvement plays a fundamental role in determining the performance in Iranian manufacturing industries. In this research, a model has been developed that includes Quality culture, Critical success factors of Total Quality Management and quality improvement to study their influence on the performance of Iranian automotive industry. It is hoped that this paper can provide an academic source for both academicians and managers due to investigate the relationship between Quality culture, critical success factors of Total Quality Management, Quality improvement, and Performance in a systematic manner to increase successful rate of Total Quality Management implementation.",
"title": ""
},
{
"docid": "82e6533bf92395a008a024e880ef61b1",
"text": "A new binary software randomization and ControlFlow Integrity (CFI) enforcement system is presented, which is the first to efficiently resist code-reuse attacks launched by informed adversaries who possess full knowledge of the inmemory code layout of victim programs. The defense mitigates a recent wave of implementation disclosure attacks, by which adversaries can exfiltrate in-memory code details in order to prepare code-reuse attacks (e.g., Return-Oriented Programming (ROP) attacks) that bypass fine-grained randomization defenses. Such implementation-aware attacks defeat traditional fine-grained randomization by undermining its assumption that the randomized locations of abusable code gadgets remain secret. Opaque CFI (O-CFI) overcomes this weakness through a novel combination of fine-grained code-randomization and coarsegrained control-flow integrity checking. It conceals the graph of hijackable control-flow edges even from attackers who can view the complete stack, heap, and binary code of the victim process. For maximal efficiency, the integrity checks are implemented using instructions that will soon be hardware-accelerated on commodity x86-x64 processors. The approach is highly practical since it does not require a modified compiler and can protect legacy binaries without access to source code. Experiments using our fully functional prototype implementation show that O-CFI provides significant probabilistic protection against ROP attacks launched by adversaries with complete code layout knowledge, and exhibits only 4.7% mean performance overhead on current hardware (with further overhead reductions to follow on forthcoming Intel processors). I. MOTIVATION Code-reuse attacks (cf., [5]) have become a mainstay of software exploitation over the past several years, due to the rise of data execution protections that nullify traditional codeinjection attacks. Rather than injecting malicious payload code directly onto the stack or heap, where modern data execution protections block it from being executed, attackers now ingeniously inject addresses of existing in-memory code fragments (gadgets) onto victim stacks, causing the victim process to execute its own binary code in an unanticipated order [38]. With a sufficiently large victim code section, the pool of exploitable gadgets becomes arbitrarily expressive (e.g., Turing-complete) [20], facilitating the construction of arbitrary attack payloads without the need for code-injection. Such payload construction has even been automated [34]. As a result, code-reuse has largely replaced code-injection as one of the top software security threats. Permission to freely reproduce all or part of this paper for noncommercial purposes is granted provided that copies bear this notice and the full citation on the first page. Reproduction for commercial purposes is strictly prohibited without the prior written consent of the Internet Society, the first-named author (for reproduction of an entire paper only), and the author’s employer if the paper was prepared within the scope of employment. NDSS ’15, 8–11 February 2015, San Diego, CA, USA Copyright 2015 Internet Society, ISBN 1-891562-38-X http://dx.doi.org/10.14722/ndss.2015.23271 This has motivated copious work on defenses against codereuse threats. Prior defenses can generally be categorized into: CFI [1] and artificial software diversity [8]. CFI restricts all of a program’s runtime control-flows to a graph of whitelisted control-flow edges. 
Usually the graph is derived from the semantics of the program source code or a conservative disassembly of its binary code. As a result, CFI-protected programs reject control-flow hijacks that attempt to traverse edges not supported by the original program's semantics. Fine-grained CFI monitors indirect control-flows precisely; for example, function callees must return to their exact callers. Although such precision provides the highest security, it also tends to incur high performance overheads (e.g., 21% for precise caller-callee return-matching [1]). Because this overhead is often too high for industry adoption, researchers have proposed many optimized, coarser-grained variants of CFI. Coarse-grained CFI trades some security for better performance by reducing the precision of the checks. For example, functions must return to valid call sites (but not necessarily to the particular site that invoked the callee). Unfortunately, such relaxations have proved dangerous—a number of recent proof-of-concept exploits have shown how even minor relaxations of the control-flow policy can be exploited to effect attacks [6, 11, 18, 19]. Table I summarizes the impact of several of these recent exploits. [Table I. Overview of control-flow integrity bypasses: the attacks of DeMott [12] (Feb 2014), Göktaş et al. [18] (May 2014), Davi et al. [11] (Aug 2014), Göktaş et al. [19] (Aug 2014), and Carlini and Wagner [6] (Aug 2014) against the defenses CFI [1], bin-CFI [50], CCFIR [49], kBouncer [33], ROPecker [7], ROPGuard [16], and EMET [30].] Artificial software diversity offers a different but complementary approach that randomizes programs in such a way that attacks succeeding against one program instance have a very low probability of success against other (independently randomized) instances of the same program. Probabilistic defenses rely on memory secrecy—i.e., the effects of randomization must remain hidden from attackers. One of the simplest and most widely adopted forms of artificial diversity is Address Space Layout Randomization (ASLR), which randomizes the base addresses of program segments at load time. Unfortunately, merely randomizing the base addresses does not yield sufficient entropy to preserve memory secrecy in many cases; there are numerous successful derandomization attacks against ASLR [13, 26, 36, 37, 39, 42]. Finer-grained diversity techniques obtain exponentially higher entropy by randomizing the relative distances between all code points. For example, binary-level Self-Transforming Instruction Relocation (STIR) [45] and compilers with randomized code-generation (e.g., [22]) have both realized fine-grained artificial diversity for production-level software at very low overheads. Recently, a new wave of implementation disclosure attacks [4, 10, 35, 40] has threatened to undermine fine-grained artificial diversity defenses. Implementation disclosure attacks exploit information leak vulnerabilities to read memory pages of victim processes at the discretion of the attacker. By reading the in-memory code sections, attackers violate the memory secrecy assumptions of artificial diversity, rendering their defenses ineffective. Since finding and closing all information leaks is well known to be prohibitively difficult and often intractable for many large software products, these attacks constitute a very dangerous development in the cyber-threat landscape; there is currently no well-established, practical defense.
This paper presents Opaque CFI (O-CFI): a new approach to coarse-grained CFI that strengthens fine-grained artificial diversity to withstand implementation disclosure attacks. The heart of O-CFI is a new form of control-flow check that conceals the graph of abusable control-flow edges even from attackers who have complete read-access to the randomized binary code, the stack, and the heap of victim processes. Such access only affords attackers knowledge of the intended (and therefore nonabusable) edges of the control-flow graph, not the edges left unprotected by the coarse-grained CFI implementation. Artificial diversification is employed to vary the set of unprotected edges between program instances, maintaining the probabilistic guarantees of fine-grained diversity. Experiments show that O-CFI enjoys performance overheads comparable to standard fine-grained diversity and non-opaque, coarse-grained CFI. Moreover, O-CFI’s control-flow checking logic is implemented using Intel x86/x64 memory-protection extensions (MPX) that are expected to be hardware-accelerated in commodity CPUs from 2015 onwards. We therefore expect even better performance for O-CFI in the near future. Our contributions are as follows: • We introduce O-CFI, the first low-overhead code-reuse defense that tolerates implementation disclosures. • We describe our implementation of a fully functional prototype that protects stripped, x86 legacy binaries without source code. • Analysis shows that O-CFI provides quantifiable security against state-of-the-art exploits—including JITROP [40] and Blind-ROP [4]. • Performance evaluation yields competitive overheads of just 4.7% for computation-intensive programs. II. THREAT MODEL Our work is motivated by the emergence of attacks against fine-grained diversity and coarse-grained control-flow integrity. We therefore introduce these attacks and distill them into a single, unified threat model. A. Bypassing Coarse-Grained CFI Ideally, CFI permits only programmer-intended control-flow transfers during a program’s execution. The typical approach is to assign a unique ID to each permissible indirect controlflow target, and check the IDs at runtime. Unfortunately, this introduces performance overhead proportional to the degree of the graph—the more overlaps between valid target sets of indirect branch instructions, the more IDs must be stored and checked at each branch. Moreover, perfect CFI cannot be realized with a purely static control-flow graph; for example, the permissible destinations of function returns depend on the calling context, which is only known at runtime. Fine-grained CFI therefore implements a dynamically computed shadow stack, incurring high overheads [1]. To avoid this, coarse-grained CFI implementations resort to a reduced-degree, static approximation of the control-flow graph, and merge identifiers at the cost of reduced security. For example, bin-CFI [49] and CCFIR [50] use at most three IDs per branch, and omit shadow stacks. Recent work has demonstrated that these optimizations open exploitable",
"title": ""
},
{
"docid": "0dc5a8b5b0c3d8424b510f5910f26976",
"text": "In 1992, Tani et al. proposed remotely operating machines in a factory by manipulating a live video image on a computer screen. In this paper we revisit this metaphor and investigate its suitability for mobile use. We present Touch Projector, a system that enables users to interact with remote screens through a live video image on their mobile device. The handheld device tracks itself with respect to the surrounding displays. Touch on the video image is \"projected\" onto the target display in view, as if it had occurred there. This literal adaptation of Tani's idea, however, fails because handheld video does not offer enough stability and control to enable precise manipulation. We address this with a series of improvements, including zooming and freezing the video image. In a user study, participants selected targets and dragged targets between displays using the literal and three improved versions. We found that participants achieved highest performance with automatic zooming and temporary image freezing.",
"title": ""
},
{
"docid": "cac8f1df581628a7e64e779751fafaf0",
"text": "The vast majority of Web services and sites are hosted in various kinds of cloud services, and ordering some level of quality of service (QoS) in such systems requires effective load-balancing policies that choose among multiple clouds. Recently, software-defined networking (SDN) is one of the most promising solutions for load balancing in cloud data center. SDN is characterized by its two distinguished features, including decoupling the control plane from the data plane and providing programmability for network application development. By using these technologies, SDN and cloud computing can improve cloud reliability, manageability, scalability and controllability. SDN-based cloud is a new type cloud in which SDN technology is used to acquire control on network infrastructure and to provide networking-as-a-service (NaaS) in cloud computing environments. In this paper, we introduce an SDN-enhanced Inter cloud Manager (S-ICM) that allocates network flows in the cloud environment. S-ICM consists of two main parts, monitoring and decision making. For monitoring, S-ICM uses SDN control message that observes and collects data, and decision-making is based on the measured network delay of packets. Measurements are used to compare S-ICM with a round robin (RR) allocation of jobs between clouds which spreads the workload equitably, and with a honeybee foraging algorithm (HFA). We see that S-ICM is better at avoiding system saturation than HFA and RR under heavy load formula using RR job scheduler. Measurements are also used to evaluate whether a simple queueing formula can be used to predict system performance for several clouds being operated under an RR scheduling policy, and show the validity of the theoretical approximation.",
"title": ""
},
{
"docid": "83f1e80a8d4b54184531798559a028d5",
"text": "Fast-response and high-sensitivity deep-ultraviolet (DUV) photodetectors with detection wavelength shorter than 320 nm are in high demand due to their potential applications in diverse fields. However, the fabrication processes of DUV detectors based on traditional semiconductor thin films are complicated and costly. Here we report a high-performance DUV photodetector based on graphene quantum dots (GQDs) fabricated via a facile solution process. The devices are capable of detecting DUV light with wavelength as short as 254 nm. With the aid of an asymmetric electrode structure, the device performance could be significantly improved. An on/off ratio of ∼6000 under 254 nm illumination at a relatively weak light intensity of 42 μW cm(-2) is achieved. The devices also exhibit excellent stability and reproducibility with a fast response speed. Given the solution-processing capability of the devices and extraordinary properties of GQDs, the use of GQDs will open up unique opportunities for future high-performance, low-cost DUV photodetectors.",
"title": ""
},
{
"docid": "a123b7685356b099e8d068ff966c0ea7",
"text": "Problem statement: Accurate weather forecasting plays a vital role fo r planning day to day activities. Neural network has been use in numerous meteorological applications including weather forecasting. Approach: A neural network model has been developed for weat her forecasting, based on various factors obtained from meteorological expert s. This study evaluates the performance of Radial Basis Function (RBF) with Back Propagation (BPN) ne ural network. The back propagation neural network and radial basis function neural network we re used to test the performance in order to investigate effective forecasting technique. Results: The prediction accuracy of RBF was 88.49%. Conclusion: The results indicate that proposed radial basis fu nction neural network is better than back propagation neural network.",
"title": ""
},
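
The entry above compares a radial basis function network with a back-propagation network for weather prediction but gives no implementation detail. As a rough, hedged illustration of the RBF side of that comparison only, the NumPy sketch below fits a minimal RBF regressor (random centres, Gaussian features, least-squares output weights); the kernel width, number of centres and the synthetic two-feature data are assumptions, not values from the paper.

```python
import numpy as np

def rbf_design(X, centres, gamma):
    """Gaussian RBF features: phi[i, j] = exp(-gamma * ||x_i - c_j||^2)."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def fit_rbf(X, y, n_centres=20, gamma=0.5, seed=0):
    """Pick random centres, then solve for output weights by linear least squares."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=n_centres, replace=False)]
    Phi = rbf_design(X, centres, gamma)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centres, w

def predict_rbf(X, centres, w, gamma=0.5):
    return rbf_design(X, centres, gamma) @ w

# Toy usage: predict a target from two illustrative weather-like inputs.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)
centres, w = fit_rbf(X, y)
print("train MSE:", np.mean((predict_rbf(X, centres, w) - y) ** 2))
```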
{
"docid": "938f49e103d0153c82819becf96f126c",
"text": "Humans interpret texts with respect to some background information, or world knowledge, and we would like to develop automatic reading comprehension systems that can do the same. In this paper, we introduce a task and several models to drive progress towards this goal. In particular, we propose the task of rare entity prediction: given a web document with several entities removed, models are tasked with predicting the correct missing entities conditioned on the document context and the lexical resources. This task is challenging due to the diversity of language styles and the extremely large number of rare entities. We propose two recurrent neural network architectures which make use of external knowledge in the form of entity descriptions. Our experiments show that our hierarchical LSTM model performs significantly better at the rare entity prediction task than those that do not make use of external resources.",
"title": ""
}
] |
scidocsrr
|
2e3f4dbfecdf6b4835e0c068b916cca7
|
What Motivates Consumers to Write Online Travel Reviews?
|
[
{
"docid": "1993b540ff91922d381128e9c8592163",
"text": "The use of the WWW as a venue for voicing opinions, complaints and recommendations on products and firms has been widely reported in the popular media. However little is known how consumers use these reviews and if they subsequently have any influence on evaluations and purchase intentions of products and retailers. This study examines the effect of negative reviews on retailer evaluation and patronage intention given that the consumer has already made a product/brand decision. Our results indicate that the extent of WOM search depends on the consumer’s reasons for choosing an online retailer. Further the influence of negative WOM information on perceived reliability and purchase intentions is determined largely by familiarity with the retailer and differs based on whether the retailer is a pure-Internet or clicks-and-mortar firm. Managerial implications for positioning strategies to minimize the effect of negative word-ofmouth have been discussed.",
"title": ""
},
{
"docid": "c57cbe432fdab3f415d2c923bea905ff",
"text": "Through Web-based consumer opinion platforms (e.g., epinions.com), the Internet enables customers to share their opinions on, and experiences with, goods and services with a multitude of other consumers; that is, to engage in electronic wordof-mouth (eWOM) communication. Drawing on findings from research on virtual communities and traditional word-of-mouth literature, a typology for motives of consumer online articulation is © 2004 Wiley Periodicals, Inc. and Direct Marketing Educational Foundation, Inc.",
"title": ""
}
] |
[
{
"docid": "39007b91989c42880ff96e7c5bdcf519",
"text": "Feature selection has aroused considerable research interests during the last few decades. Traditional learning-based feature selection methods separate embedding learning and feature ranking. In this paper, we propose a novel unsupervised feature selection framework, termed as the joint embedding learning and sparse regression (JELSR), in which the embedding learning and sparse regression are jointly performed. Specifically, the proposed JELSR joins embedding learning with sparse regression to perform feature selection. To show the effectiveness of the proposed framework, we also provide a method using the weight via local linear approximation and adding the ℓ2,1-norm regularization, and design an effective algorithm to solve the corresponding optimization problem. Furthermore, we also conduct some insightful discussion on the proposed feature selection approach, including the convergence analysis, computational complexity, and parameter determination. In all, the proposed framework not only provides a new perspective to view traditional methods but also evokes some other deep researches for feature selection. Compared with traditional unsupervised feature selection methods, our approach could integrate the merits of embedding learning and sparse regression. Promising experimental results on different kinds of data sets, including image, voice data and biological data, have validated the effectiveness of our proposed algorithm.",
"title": ""
},
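
The JELSR entry above couples embedding learning with ℓ2,1-regularized sparse regression and ranks features by the resulting row-sparse weights. The snippet below is not the authors' joint algorithm; it only sketches, under that assumption, the standard iteratively reweighted solver for the ℓ2,1 subproblem min_W ||XW − Y||_F² + λ||W||_{2,1}, plus the row-norm feature scoring such methods use. The toy "embedding" Y and all sizes are illustrative.

```python
import numpy as np

def l21_regression(X, Y, lam=1.0, n_iter=50, eps=1e-8):
    """Iteratively reweighted solver for min_W ||XW - Y||_F^2 + lam * ||W||_{2,1}.

    X: (n, d) data, Y: (n, k) targets/embedding; returns W: (d, k).
    """
    d = X.shape[1]
    W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)   # ridge warm start
    for _ in range(n_iter):
        # D_ii = 1 / (2 * ||w_i||_2); eps keeps the reweighting well defined.
        row_norms = np.linalg.norm(W, axis=1)
        D = np.diag(1.0 / (2.0 * row_norms + eps))
        W = np.linalg.solve(X.T @ X + lam * D, X.T @ Y)
    return W

def feature_scores(W):
    """Rank features by the l2 norm of their weight rows (larger = more relevant)."""
    return np.linalg.norm(W, axis=1)

# Toy usage with a random low-dimensional target standing in for a learned embedding.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))
Y = X[:, :3] @ rng.normal(size=(3, 5))      # only the first 3 features matter
scores = feature_scores(l21_regression(X, Y, lam=5.0))
print("top features:", np.argsort(scores)[::-1][:5])
```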
{
"docid": "7e439ac3ff2304b6e1aaa098ff44b0cb",
"text": "Geological structures, such as faults and fractures, appear as image discontinuities or lineaments in remote sensing data. Geologic lineament mapping is a very important issue in geo-engineering, especially for construction site selection, seismic, and risk assessment, mineral exploration and hydrogeological research. Classical methods of lineaments extraction are based on semi-automated (or visual) interpretation of optical data and digital elevation models. We developed a freely available Matlab based toolbox TecLines (Tectonic Lineament Analysis) for locating and quantifying lineament patterns using satellite data and digital elevation models. TecLines consists of a set of functions including frequency filtering, spatial filtering, tensor voting, Hough transformation, and polynomial fitting. Due to differences in the mathematical background of the edge detection and edge linking procedure as well as the breadth of the methods, we introduce the approach in two-parts. In this first study, we present the steps that lead to edge detection. We introduce the data pre-processing using selected filters in spatial and frequency domains. We then describe the application of the tensor-voting framework to improve position and length accuracies of the detected lineaments. We demonstrate the robustness of the approach in a complex area in the northeast of Afghanistan using a panchromatic QUICKBIRD-2 image with 1-meter resolution. Finally, we compare the results of TecLines with manual lineament extraction, and other lineament extraction algorithms, as well as a published fault map of the study area. OPEN ACCESS Remote Sens. 2014, 6 5939",
"title": ""
},
{
"docid": "1feaf48291b7ea83d173b70c23a3b7c0",
"text": "Machine learning plays a critical role in extracting meaningful information out of the zetabytes of sensor data collected every day. For some applications, the goal is to analyze and understand the data to identify trends (e.g., surveillance, portable/wearable electronics); in other applications, the goal is to take immediate action based the data (e.g., robotics/drones, self-driving cars, smart Internet of Things). For many of these applications, local embedded processing near the sensor is preferred over the cloud due to privacy or latency concerns, or limitations in the communication bandwidth. However, at the sensor there are often stringent constraints on energy consumption and cost in addition to throughput and accuracy requirements. Furthermore, flexibility is often required such that the processing can be adapted for different applications or environments (e.g., update the weights and model in the classifier). In many applications, machine learning often involves transforming the input data into a higher dimensional space, which, along with programmable weights, increases data movement and consequently energy consumption. In this paper, we will discuss how these challenges can be addressed at various levels of hardware design ranging from architecture, hardware-friendly algorithms, mixed-signal circuits, and advanced technologies (including memories and sensors).",
"title": ""
},
{
"docid": "358423f8ef08080935f280d71ae921a0",
"text": "Many of contemporary computer and machine vision applications require finding of corresponding points across multiple images. To that goal, among many features, the most commonly used are corner points. Corners are formed by two or more edges, and mark the boundaries of objects or boundaries between distinctive object parts. This makes corners the feature points that used in a wide range of tasks. Therefore, numerous corner detectors with different properties have been developed. In this paper, we present a complete FPGA architecture implementing corer detection. This architecture is based on the FAST algorithm. The proposed solution is capable of processing the incoming image data with the speed of hundreds of frames per second for a 512 × , 8-bit gray-scale image. The speed is comparable to the results achieved by top-of-the-shelf general purpose processors. However, the use of inexpensive FPGA allows to cut costs, power consumption and to reduce the footprint of a complete system solution. The paper includes also a brief description of the implemented algorithm, resource usage summary, resulting images, as well as block diagrams of the described architecture.",
"title": ""
},
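
The FPGA entry above is built on the FAST detector, which labels a pixel a corner when a sufficiently long arc of pixels on a surrounding Bresenham circle is uniformly brighter or darker than the centre by a threshold. The pure-Python sketch below implements that segment test for the 16-pixel circle as a readability-oriented reference; it is not the streaming hardware pipeline the paper describes, and the threshold and arc-length values are illustrative.

```python
import numpy as np

# Offsets (dy, dx) of the 16-pixel Bresenham circle of radius 3 used by FAST.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, y, x, t=20, arc=9):
    """Segment test: an arc of >= `arc` contiguous circle pixels must all be
    brighter than img[y, x] + t or all darker than img[y, x] - t."""
    centre = int(img[y, x])
    ring = np.array([int(img[y + dy, x + dx]) for dy, dx in CIRCLE])
    for flags in (ring > centre + t, ring < centre - t):
        doubled = np.concatenate([flags, flags])   # handle wrap-around arcs
        run = 0
        for f in doubled:
            run = run + 1 if f else 0
            if run >= arc:
                return True
    return False

def detect_corners(img, t=20, arc=9):
    h, w = img.shape
    return [(y, x) for y in range(3, h - 3) for x in range(3, w - 3)
            if is_fast_corner(img, y, x, t, arc)]

# Toy usage: a bright square on a dark background yields corners at its boundary.
img = np.zeros((32, 32), dtype=np.uint8)
img[8:24, 8:24] = 200
print(detect_corners(img)[:4])
```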
{
"docid": "6c07520a738f068f1bc3bdb8e3fda89b",
"text": "We analyze the role of the Global Brain in the sharing economy, by synthesizing the notion of distributed intelligence with Goertzel’s concept of an offer network. An offer network is an architecture for a future economic system based on the matching of offers and demands without the intermediate of money. Intelligence requires a network of condition-action rules, where conditions represent challenges that elicit action in order to solve a problem or exploit an opportunity. In society, opportunities correspond to offers of goods or services, problems to demands. Tackling challenges means finding the best sequences of condition-action rules to connect all demands to the offers that can satisfy them. This can be achieved with the help of AI algorithms working on a public database of rules, demands and offers. Such a system would provide a universal medium for voluntary collaboration and economic exchange, efficiently coordinating the activities of all people on Earth. It would replace and subsume the patchwork of commercial and community-based sharing platforms presently running on the Internet. It can in principle resolve the traditional problems of the capitalist economy: poverty, inequality, externalities, poor sustainability and resilience, booms and busts, and the neglect of non-monetizable values.",
"title": ""
},
{
"docid": "c49ed75ce48fb92db6e80e4fe8af7127",
"text": "The One Class Classification (OCC) problem is different from the conventional binary/multi-class classification problem in the sense that in OCC, the negative class is either not present or not properly sampled. The problem of classifying positive (or target) cases in the absence of appropriately-characterized negative cases (or outliers) has gained increasing attention in recent years. Researchers have addressed the task of OCC by using different methodologies in a variety of application domains. In this paper we formulate a taxonomy with three main categories based on the way OCC has been envisaged, implemented and applied by various researchers in different application domains. We also present a survey of current state-of-the-art OCC algorithms, their importance, applications and limitations.",
"title": ""
},
{
"docid": "7c10a44e5fa0f9e01951e89336c4b4d6",
"text": "Previous studies have examined the online research behaviors of graduate students in terms of how they seek and retrieve research-related information on the Web across diverse disciplines. However, few have focused on graduate students’ searching activities, and particularly for their research tasks. Drawing on Kuiper, Volman, and Terwel’s (2008) three aspects of web literacy skills (searching, reading, and evaluating), this qualitative study aims to better understand a group of graduate engineering students’ searching, reading, and evaluating processes for research purposes. Through in-depth interviews and the think-aloud protocol, we compared the strategies employed by 22 Taiwanese graduate engineering students. The results showed that the students’ online research behaviors included seeking and obtaining, reading and interpreting, and assessing and evaluating sources. The findings suggest that specialized training for preparing novice researchers to critically evaluate relevant information or scholarly work to fulfill their research purposes is needed. Implications for enhancing the information literacy of engineering students are discussed.",
"title": ""
},
{
"docid": "1a65a6e22d57bb9cd15ba01943eeaa25",
"text": "+ optimal local factor – expensive for general obs. + exploit conj. graph structure + arbitrary inference queries + natural gradients – suboptimal local factor + fast for general obs. – does all local inference – limited inference queries – no natural gradients ± optimal given conj. evidence + fast for general obs. + exploit conj. graph structure + arbitrary inference queries + some natural gradients",
"title": ""
},
{
"docid": "80a61f27dab6a8f71a5c27437254778b",
"text": "5G will have to cope with a high degree of heterogeneity in terms of services and requirements. Among these latter, the flexible and efficient use of non-contiguous unused spectrum for different network deployment scenarios is considered a key challenge for 5G systems. To maximize spectrum efficiency, the 5G air interface technology will also need to be flexible and capable of mapping various services to the best suitable combinations of frequency and radio resources. In this work, we propose a comparison of several 5G waveform candidates (OFDM, UFMC, FBMC and GFDM) under a common framework. We assess spectral efficiency, power spectral density, peak-to-average power ratio and robustness to asynchronous multi-user uplink transmission. Moreover, we evaluate and compare the complexity of the different waveforms. In addition to the complexity analysis, in this work, we also demonstrate the suitability of FBMC for specific 5G use cases via two experimental implementations. The benefits of these new waveforms for the foreseen 5G use cases are clearly highlighted on representative criteria and experiments.",
"title": ""
},
{
"docid": "8770cfba83e16454e5d7244201d47628",
"text": "Representing documents is a crucial component in many NLP tasks, for instance predicting aspect ratings in reviews. Previous methods for this task treat documents globally, and do not acknowledge that target categories are often assigned by their authors with generally no indication of the specific sentences that motivate them. To address this issue, we adopt a weakly supervised learning model, which jointly learns to focus on relevant parts of a document according to the context along with a classifier for the target categories. Derived from the weighted multiple-instance regression (MIR) framework, the model learns decomposable document vectors for each individual category and thus overcomes the representational bottleneck in previous methods due to a fixed-length document vector. During prediction, the estimated relevance or saliency weights explicitly capture the contribution of each sentence to the predicted rating, thus offering an explanation of the rating. Our model achieves state-of-the-art performance on multi-aspect sentiment analysis, improving over several baselines. Moreover, the predicted saliency weights are close to human estimates obtained by crowdsourcing, and increase the performance of lexical and topical features for review segmentation and summarization.",
"title": ""
},
{
"docid": "5e86f40cfc3b2e9664ea1f7cc5bf730c",
"text": "Due to a wide range of applications, wireless sensor networks (WSNs) have recently attracted a lot of interest to the researchers. Limited computational capacity and power usage are two major challenges to ensure security in WSNs. Recently, more secure communication or data aggregation techniques have discovered. So, familiarity with the current research in WSN security will benefit researchers greatly. In this paper, security related issues and challenges in WSNs are investigated. We identify the security threats and review proposed security mechanisms for WSNs. Moreover, we provide a brief discussion on the future research direction in WSN security.",
"title": ""
},
{
"docid": "80b514540933a9cc31136c8cb86ec9b3",
"text": "We tackle the problem of detecting occluded regions in a video stream. Under assumptions of Lambertian reflection and static illumination, the task can be posed as a variational optimization problem, and its solution approximated using convex minimization. We describe efficient numerical schemes that reach the global optimum of the relaxed cost functional, for any number of independently moving objects, and any number of occlusion layers. We test the proposed algorithm on benchmark datasets, expanded to enable evaluation of occlusion detection performance, in addition to optical flow.",
"title": ""
},
{
"docid": "18fd966db335ee53ff4d82781c2f81d8",
"text": "Disastrous events are cordially involved with the momentum of nature. As such mishaps have been showing off own mastery, situations have gone beyond the control of human resistive mechanisms far ago. Fortunately, several technologies are in service to gain affirmative knowledge and analysis of a disaster’s occurrence. Recently, Internet of Things (IoT) paradigm has opened a promising door toward catering of multitude problems related to agriculture, industry, security, and medicine due to its attractive features, such as heterogeneity, interoperability, light-weight, and flexibility. This paper surveys existing approaches to encounter the relevant issues with disasters, such as early warning, notification, data analytics, knowledge aggregation, remote monitoring, real-time analytics, and victim localization. Simultaneous interventions with IoT are also given utmost importance while presenting these facts. A comprehensive discussion on the state-of-the-art scenarios to handle disastrous events is presented. Furthermore, IoT-supported protocols and market-ready deployable products are summarized to address these issues. Finally, this survey highlights open challenges and research trends in IoT-enabled disaster management systems.",
"title": ""
},
{
"docid": "ca932a0b6b71f009f95bad6f2f3f8a38",
"text": "Page 13 Supply chain management is increasingly being recognized as the integration of key business processes across the supply chain. For example, Hammer argues that now that companies have implemented processes within the firm, they need to integrate them between firms: Streamlining cross-company processes is the next great frontier for reducing costs, enhancing quality, and speeding operations. It is where this decade’s productivity wars will be fought. The victors will be those companies that are able to take a new approach to business, working closely with partners to design and manage processes that extend across traditional corporate boundaries. They will be the ones that make the leap from efficiency to super efficiency [1]. Monczka and Morgan also focus on the importance of process integration in supply chain management [2]. The piece that seems to be missing from the literature is a comprehensive definition of the processes that constitute supply chain management. How can companies achieve supply chain integration if there is not a common understanding of the key business processes? It seems that in order to build links between supply chain members it is necessary for companies to implement a standard set of supply chain processes. Practitioners and educators need a common definition of supply chain management, and a shared understanding of the processes. We recommend the definition of supply chain management developed and used by The Global Supply Chain Forum: Supply Chain Management is the integration of key business processes from end user through original suppliers that provides products, services, and information that add value for customers and other stakeholders [3]. The Forum members identified eight key processes that need to be implemented within and across firms in the supply chain. To date, The Supply Chain Management Processes",
"title": ""
},
{
"docid": "8f957dab2aa6b186b61bc309f3f2b5c3",
"text": "Learning deeper convolutional neural networks has become a tendency in recent years. However, many empirical evidences suggest that performance improvement cannot be attained by simply stacking more layers. In this paper, we consider the issue from an information theoretical perspective, and propose a novel method Relay Backpropagation, which encourages the propagation of effective information through the network in training stage. By virtue of the method, we achieved the first place in ILSVRC 2015 Scene Classification Challenge. Extensive experiments on two challenging large scale datasets demonstrate the effectiveness of our method is not restricted to a specific dataset or network architecture.",
"title": ""
},
{
"docid": "f509e4c35a4dbc7b7ba88711d8a7b0ea",
"text": "The promises and potential of Big Data in transforming digital government services, governments, and the interaction between governments, citizens, and the business sector, are substantial. From \"smart\" government to transformational government, Big Data can foster collaboration; create real-time solutions to challenges in agriculture, health, transportation, and more; and usher in a new era of policy- and decision-making. There are, however, a range of policy challenges to address regarding Big Data, including access and dissemination; digital asset management, archiving and preservation; privacy; and security. This paper selectively reviews and analyzes the U.S. policy context regarding Big Data and offers recommendations aimed at facilitating Big Data initiatives.",
"title": ""
},
{
"docid": "9e44f467f7fbcd2ab1c6886bbb0099c0",
"text": "Email has become one of the fastest and most economical forms of communication. However, the increase of email users have resulted in the dramatic increase of spam emails during the past few years. In this paper, email data was classified using four different classifiers (Neural Network, SVM classifier, Naïve Bayesian Classifier, and J48 classifier). The experiment was performed based on different data size and different feature size. The final classification result should be ‘1’ if it is finally spam, otherwise, it should be ‘0’. This paper shows that simple J48 classifier which make a binary tree, could be efficient for the dataset which could be classified as binary tree.",
"title": ""
},
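
The spam-filtering entry above compares a neural network, an SVM, naive Bayes and the J48 decision tree on binary spam labels. The scikit-learn sketch below mirrors that comparison in spirit only: DecisionTreeClassifier stands in for WEKA's J48 (both are C4.5-style trees), and the synthetic term-count matrix is a placeholder for a real email corpus, so the accuracies mean nothing beyond illustrating the workflow.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic term-count matrix: label 1 = spam, 0 = ham (stand-in for a real corpus).
rng = np.random.default_rng(0)
n, d = 1000, 50
y = rng.integers(0, 2, size=n)
rates = np.where(y[:, None] == 1, 3.0, 1.0) * rng.uniform(0.2, 1.0, size=(1, d))
X = rng.poisson(rates)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "neural net": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "naive Bayes": MultinomialNB(),
    "J48-style tree": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {model.score(X_te, y_te):.3f}")
```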
{
"docid": "96508fe94ab9e47534f2cc09b4b186a8",
"text": "A 300 GHz frequency synthesizer incorporating a triple-push VCO with Colpitts-based active varactor (CAV) and a divider with three-phase injection is introduced. The CAV provides frequency tunability, enhances harmonic power, and buffers/injects the VCO fundamental signal from/to the divider. The locking range of the divider is vastly improved due to the fact that the three-phase injection introduces larger allowable phase change and injection power into the divider loop. Implemented in 90 nm SiGe BiCMOS, the synthesizer achieves a phase-noise of -77.8 dBc/Hz (-82.5 dBc/Hz) at 100 kHz (1 MHz) offset with a crystal reference, and an overall locking range of 280.32-303.36 GHz (7.9%).",
"title": ""
},
{
"docid": "7816f9fc22866f2c4f12313715076a20",
"text": "Image-to-image translation has been made much progress with embracing Generative Adversarial Networks (GANs). However, it’s still very challenging for translation tasks that require high quality, especially at high-resolution and photorealism. In this paper, we present Discriminative Region Proposal Adversarial Networks (DRPAN) for highquality image-to-image translation. We decompose the procedure of imageto-image translation task into three iterated steps, first is to generate an image with global structure but some local artifacts (via GAN), second is using our DRPnet to propose the most fake region from the generated image, and third is to implement “image inpainting” on the most fake region for more realistic result through a reviser, so that the system (DRPAN) can be gradually optimized to synthesize images with more attention on the most artifact local part. Experiments on a variety of image-to-image translation tasks and datasets validate that our method outperforms state-of-the-arts for producing high-quality translation results in terms of both human perceptual studies and automatic quantitative measures.",
"title": ""
}
] |
scidocsrr
|
2da1b010d4dc70fffb1101a5e209f79c
|
Occupational therapy with children with pervasive developmental disorders.
|
[
{
"docid": "30e287e44e66e887ad5d689657e019c3",
"text": "OBJECTIVE\nThe purpose of this study was to determine whether the Sensory Profile discriminates between children with and without autism and which items on the profile best discriminate between these groups.\n\n\nMETHOD\nParents of 32 children with autism aged 3 to 13 years and of 64 children without autism aged 3 to 10 years completed the Sensory Profile. A descriptive analysis of the data set of children with autism identified the distribution of responses on each item. A multivariate analysis of covariance (MANCOVA) of each category of the Sensory Profile identified possible differences among subjects without autism, with mild or moderate autism, and with severe autism. Follow-up univariate analyses were conducted for any category that yielded a significant result on the MANCOVA:\n\n\nRESULTS\nEight-four of 99 items (85%) on the Sensory Profile differentiated the sensory processing skills of subjects with autism from those without autism. There were no group differences between subjects with mild or moderate autism and subjects with severe autism.\n\n\nCONCLUSION\nThe Sensory Profile can provide information about the sensory processing skills of children with autism to assist occupational therapists in assessing and planning intervention for these children.",
"title": ""
}
] |
[
{
"docid": "80cee0fa7114113732febe7f55b18a16",
"text": "A novel paradigm that changes the scene for the modern communication and computation systems is the Edge Computing. It is not a coincidence that terms like Mobile Cloud Computing, Cloudlets, Fog Computing, and Mobile-Edge Computing are gaining popularity both in academia and industry. In this paper, we embrace all these terms under the umbrella concept of “Edge Computing” to name the trend where computational infrastructures hence the services themselves are getting closer to the end user. However, we observe that bringing computational infrastructures to the proximity of the user does not magically solve all technical challenges. Moreover, it creates complexities of its own when not carefully handled. In this paper, these challenges are discussed in depth and categorically analyzed. As a solution direction, we propose that another major trend in networking, namely software-defined networking (SDN), should be taken into account. SDN, which is not proposed specifically for Edge Computing, can in fact serve as an enabler to lower the complexity barriers involved and let the real potential of Edge Computing be achieved. To fully demonstrate our ideas, initially, we put forward a clear collaboration model for the SDN-Edge Computing interaction through practical architectures and show that SDN related mechanisms can feasibly operate within the Edge Computing infrastructures. Then, we provide a detailed survey of the approaches that comprise the Edge Computing domain. A comparative discussion elaborates on where these technologies meet as well as how they differ. Later, we discuss the capabilities of SDN and align them with the technical shortcomings of Edge Computing implementations. We thoroughly investigate the possible modes of operation and interaction between the aforementioned technologies in all directions and technically deduce a set of “Benefit Areas” which is discussed in detail. Lastly, as SDN is an evolving technology, we give the future directions for enhancing the SDN development so that it can take this collaboration to a further level.",
"title": ""
},
{
"docid": "691d326a4d59a530f5142d4c15a8467b",
"text": "Previous open Relation Extraction (open RE) approaches mainly rely on linguistic patterns and constraints to extract important relational triples from large-scale corpora. However, they lack of abilities to cover diverse relation expressions or measure the relative importance of candidate triples within a sentence. It is also challenging to name the relation type of a relational triple merely based on context words, which could limit the usefulness of open RE in downstream applications. We propose a novel importancebased open RE approach by exploiting the global structure of a dependency tree to extract salient triples. We design an unsupervised method to name relation types by grounding relational triples to a large-scale Knowledge Base (KB) schema, leveraging KB triples and weighted context words associated with relational triples. Experiments on the English Slot Filling 2013 dataset demonstrate that our approach achieves 8.1% higher F-score over stateof-the-art open RE methods.",
"title": ""
},
{
"docid": "dde7b6a5da4ec5d161bbdc04373d5b54",
"text": "The depth image based rendering (DIBR) plays a key role in 3D video synthesis, by which other virtual views can be generated from a 2D video and its depth map. However, in the synthesis process, the background occluded by the foreground objects might be exposed in the new view, resulting in some holes in the synthetized video. In this paper, a hole filling approach based on background reconstruction is proposed, in which the temporal correlation information in both the 2D video and its corresponding depth map are exploited to construct a background video. To construct a clean background video, the foreground objects are detected and removed. Also motion compensation is applied to make the background reconstruction model suitable for moving camera scenario. Each frame is projected to the current plane where a modified Gaussian mixture model is performed. The constructed background video is used to eliminate the holes in the synthetized video. Our experimental results have indicated that the proposed approach has better quality of the synthetized 3D video compared with the other methods.",
"title": ""
},
{
"docid": "af23545d003a71d49f9665a7a3a5f3a1",
"text": "A parametric study of a wide-band Vivaldi antenna is presented. Four models were simulated using a finite element method design and analysis package Ansoft HFSS v 10.1. The simulated return loss and realized gain of each model for a frequency range of 12 to 20GHz is studied. The location of the phase centre, represented as the distance d (in cm) from the bottom of the antenna, with respect to which the phase of the respective far field copolar patterns (for a scan angle θ of 0 to 60°) in the E and H-planes, constrains to a specified maximum tolerable phase difference Δφ is calculated.",
"title": ""
},
{
"docid": "05fa2bcd251f44f8a62e90104844926f",
"text": "A challenging task in the natural language question answering (Q/A for short) over RDF knowledge graph is how to bridge the gap between unstructured natural language questions (NLQ) and graph-structured RDF data (GOne of the effective tools is the \"template\", which is often used in many existing RDF Q/A systems. However, few of them study how to generate templates automatically. To the best of our knowledge, we are the first to propose a join approach for template generation. Given a workload D of SPARQL queries and a set N of natural language questions, the goal is to find some pairs q, n, for q∈ D ∧ n ∈, N, where SPARQL query q is the best match for natural language question n. These pairs provide promising hints for automatic template generation. Due to the ambiguity of the natural languages, we model the problem above as an uncertain graph join task. We propose several structural and probability pruning techniques to speed up joining. Extensive experiments over real RDF Q/A benchmark datasets confirm both the effectiveness and efficiency of our approach.",
"title": ""
},
{
"docid": "4f57590f8bbf00d35b86aaa1ff476fc0",
"text": "Pedestrian detection has been used in applications such as car safety, video surveillance, and intelligent vehicles. In this paper, we present a pedestrian detection scheme using HOG, LUV and optical flow features with AdaBoost Decision Stump classifier. Our experiments on Caltech-USA pedestrian dataset show that the proposed scheme achieves promising results of about 16.7% log-average miss rate.",
"title": ""
},
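
The pedestrian-detection entry above feeds HOG, LUV and optical-flow features into an AdaBoost classifier built from decision stumps. The fragment below shows only that final boosting stage with scikit-learn, whose default AdaBoost base learner is already a depth-1 tree (a decision stump); the random feature matrix is a placeholder for the real channel features, so this is an illustrative sketch rather than the paper's detector.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Placeholder feature matrix: each row would hold the HOG + LUV + optical-flow
# channel features of one detection window; label 1 = pedestrian, 0 = background.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 300))
w = rng.normal(size=300)
y = (X @ w + 0.5 * rng.normal(size=2000) > 0).astype(int)

# AdaBoost over decision stumps: scikit-learn's default base learner for
# AdaBoostClassifier is a depth-1 decision tree, i.e. exactly a decision stump.
clf = AdaBoostClassifier(n_estimators=200, learning_rate=0.5, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```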
{
"docid": "75ed4cabbb53d4c75fda3a291ea0ab67",
"text": "Optimization of energy consumption in future intelligent energy networks (or Smart Grids) will be based on grid-integrated near-real-time communications between various grid elements in generation, transmission, distribution and loads. This paper discusses some of the challenges and opportunities of communications research in the areas of smart grid and smart metering. In particular, we focus on some of the key communications challenges for realizing interoperable and future-proof smart grid/metering networks, smart grid security and privacy, and how some of the existing networking technologies can be applied to energy management. Finally, we also discuss the coordinated standardization efforts in Europe to harmonize communications standards and protocols.",
"title": ""
},
{
"docid": "9df6afad3843f4b0ef881fb9bcc68148",
"text": "Discrete-action algorithms have been central to numerous recent successes of deep reinforcement learning. However, applying these algorithms to high-dimensional action tasks requires tackling the combinatorial increase of the number of possible actions with the number of action dimensions. This problem is further exacerbated for continuous-action tasks that require fine control of actions via discretization. In this paper, we propose a novel neural architecture featuring a shared decision module followed by several network branches, one for each action dimension. This approach achieves a linear increase of the number of network outputs with the number of degrees of freedom by allowing a level of independence for each individual action dimension. To illustrate the approach, we present a novel agent, called Branching Dueling Q-Network (BDQ), as a branching variant of the Dueling Double Deep Q-Network (Dueling DDQN). We evaluate the performance of our agent on a set of challenging continuous control tasks. The empirical results show that the proposed agent scales gracefully to environments with increasing action dimensionality and indicate the significance of the shared decision module in coordination of the distributed action branches. Furthermore, we show that the proposed agent performs competitively against a state-of-the-art continuous control algorithm, Deep Deterministic Policy Gradient (DDPG).",
"title": ""
},
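
The BDQ abstract above describes a shared decision module whose output feeds one branch per action dimension, so the number of network heads grows only linearly with the degrees of freedom. A minimal PyTorch sketch of that branching architecture might look as follows; the dueling decomposition, target networks and training loop are omitted, and all layer sizes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class BranchingQNetwork(nn.Module):
    """Shared trunk plus one Q-value head per action dimension (BDQ-style branching)."""

    def __init__(self, obs_dim, n_dims, n_bins_per_dim, hidden=128):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One small head per action dimension; output count grows linearly with n_dims.
        self.branches = nn.ModuleList(
            [nn.Linear(hidden, n_bins_per_dim) for _ in range(n_dims)]
        )

    def forward(self, obs):
        h = self.shared(obs)
        # Stack per-dimension Q-values: shape (batch, n_dims, n_bins_per_dim).
        return torch.stack([branch(h) for branch in self.branches], dim=1)

# Greedy action selection picks the best bin independently in each dimension.
net = BranchingQNetwork(obs_dim=17, n_dims=6, n_bins_per_dim=11)
q = net(torch.randn(32, 17))
actions = q.argmax(dim=-1)          # (32, 6) discretised action indices
print(q.shape, actions.shape)
```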
{
"docid": "84dee4781f7bc13711317d0594e97294",
"text": "We present an iterative method for solving linear systems, which has the property of minimizing at every step the norm of the residual vector over a Krylov subspace. The algorithm is derived from the Arnoldi process for constructing an /2-orthogonal basis of Krylov subspaces. It can be considered as a generalization of Paige and Saunders' MINRES algorithm and is theoretically equivalent to the Generalized Conjugate Residual (GCR) method and to ORTHODIR. The new algorithm presents several advantages over GCR and ORTHODIR.",
"title": ""
},
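
The abstract above derives GMRES from the Arnoldi process: build an orthonormal Krylov basis, then minimise the residual norm over that subspace through a small least-squares problem. The NumPy sketch below follows that recipe directly as a full-memory reference implementation (no restarts, no Givens-rotation update of the least-squares problem), so it is meant for readability rather than efficiency; the test matrix is an arbitrary well-conditioned example.

```python
import numpy as np

def gmres(A, b, m=50, tol=1e-10):
    """Minimal full-memory GMRES: Arnoldi basis + small least-squares solve (x0 = 0)."""
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] > 1e-14:
            Q[:, j + 1] = v / H[j + 1, j]
        # Minimise ||beta * e1 - H_j y|| over the current Krylov subspace.
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[: j + 2, : j + 1], e1, rcond=None)
        x = Q[:, : j + 1] @ y
        if np.linalg.norm(b - A @ x) < tol:
            break
    return x

rng = np.random.default_rng(0)
A = 4.0 * np.eye(100) + 0.1 * rng.normal(size=(100, 100))   # well-conditioned example
b = rng.normal(size=100)
x = gmres(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```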
{
"docid": "e26c73004a3f29b1abbadd515a0ca748",
"text": "The situation in which a choice is made is an important information for recommender systems. Context-aware recommenders take this information into account to make predictions. So far, the best performing method for context-aware rating prediction in terms of predictive accuracy is Multiverse Recommendation based on the Tucker tensor factorization model. However this method has two drawbacks: (1) its model complexity is exponential in the number of context variables and polynomial in the size of the factorization and (2) it only works for categorical context variables. On the other hand there is a large variety of fast but specialized recommender methods which lack the generality of context-aware methods.\n We propose to apply Factorization Machines (FMs) to model contextual information and to provide context-aware rating predictions. This approach results in fast context-aware recommendations because the model equation of FMs can be computed in linear time both in the number of context variables and the factorization size. For learning FMs, we develop an iterative optimization method that analytically finds the least-square solution for one parameter given the other ones. Finally, we show empirically that our approach outperforms Multiverse Recommendation in prediction quality and runtime.",
"title": ""
},
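
The factorization-machine entry above relies on the fact that the FM model equation can be evaluated in time linear in both the number of non-zero features and the factorization size, via the identity sum_{i<j} <v_i, v_j> x_i x_j = 0.5 * sum_f [(sum_i v_{i,f} x_i)^2 - sum_i v_{i,f}^2 x_i^2]. The NumPy fragment below implements only that second-order prediction rule; the one-hot user/item/context layout and the random parameters are illustrative assumptions, and the paper's learning procedure is not shown.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine prediction in O(k * nnz(x)) time.

    x: (d,) feature vector, w0: bias, w: (d,) linear weights, V: (d, k) factors.
    """
    linear = w0 + w @ x
    s = V.T @ x                       # (k,)  sum_i v_{i,f} x_i
    s_sq = (V ** 2).T @ (x ** 2)      # (k,)  sum_i v_{i,f}^2 x_i^2
    pairwise = 0.5 * np.sum(s ** 2 - s_sq)
    return linear + pairwise

# Illustrative context-aware layout: one-hot user, one-hot item, one-hot context block.
n_users, n_items, n_ctx, k = 5, 4, 3, 8
d = n_users + n_items + n_ctx
rng = np.random.default_rng(0)
w0, w, V = 0.1, rng.normal(scale=0.1, size=d), rng.normal(scale=0.1, size=(d, k))

x = np.zeros(d)
x[1] = 1.0                            # user 1
x[n_users + 2] = 1.0                  # item 2
x[n_users + n_items + 0] = 1.0        # context value 0 (e.g. "weekend")
print("predicted rating:", fm_predict(x, w0, w, V))
```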
{
"docid": "75c61230b24e53bd95f526c1ff74b621",
"text": "Input-series-output-parallel (ISOP) connected DC-DC converters enable low voltage rating switches to be used in high voltage input applications. In this paper, a DSP is adopted to generate digital phase-shifted PWM signals and to fulfill the closed-loop control function for ISOP connected two full-bridge DC-DC converters. Moreover, a stable output current sharing control strategy is proposed for the system, with which equal sharing of the input voltage and the load current can be achieved without any input voltage control loops. Based on small signal analysis with the state space average method, a loop gain design with the proposed scheme is made. Compared with the conventional IVS scheme, the proposed strategy leads to simplification of the output voltage regulator design and better static and dynamic responses. The effectiveness of the proposed control strategy is verified by the simulation and experimental results of an ISOP system made up of two full-bridge DC-DC converters.",
"title": ""
},
{
"docid": "b0bb9c4bcf666dca927d4f747bfb1ca1",
"text": "Remote monitoring of animal behaviour in the environment can assist in managing both the animal and its environmental impact. GPS collars which record animal locations with high temporal frequency allow researchers to monitor both animal behaviour and interactions with the environment. These ground-based sensors can be combined with remotely-sensed satellite images to understand animal-landscape interactions. The key to combining these technologies is communication methods such as wireless sensor networks (WSNs). We explore this concept using a case-study from an extensive cattle enterprise in northern Australia and demonstrate the potential for combining GPS collars and satellite images in a WSN to monitor behavioural preferences and social behaviour of cattle.",
"title": ""
},
{
"docid": "e3a412a62d5e6a253158e2eba9b0fd05",
"text": "Colorectal cancer (CRC) is one of the most common cancers in the western world and is characterised by deregulation of the Wnt signalling pathway. Mutation of the adenomatous polyposis coli (APC) tumour suppressor gene, which encodes a protein that negatively regulates this pathway, occurs in almost 80% of CRC cases. The progression of this cancer from an early adenoma to carcinoma is accompanied by a well-characterised set of mutations including KRAS, SMAD4 and TP53. Using elegant genetic models the current paradigm is that the intestinal stem cell is the origin of CRC. However, human histology and recent studies, showing marked plasticity within the intestinal epithelium, may point to other cells of origin. Here we will review these latest studies and place these in context to provide an up-to-date view of the cell of origin of CRC.",
"title": ""
},
{
"docid": "97e2d66e927c0592b88bef38a8899547",
"text": "Shared services have been heralded as a means of enhancing services and improving the efficiency of their delivery. As such they have been embraced by the private, and increasingly, the public sectors. Yet implementation has proved to be difficult and the number of success stories has been limited. Which factors are critical to success in the development of shared services arrangements is not yet well understood. The current paper examines existing research in the area of critical success factors (CSFs) and suggests that there are actually three distinct types of CSF: outcome, implementation process and operating environment characteristic. Two case studies of public sector shared services in Australia and the Netherlands are examined through a lens that both incorporates all three types of CSF and distinguishes between them.",
"title": ""
},
{
"docid": "7e7fc57baab9f8be5032ce71529603d1",
"text": "Many companies are now providing customer service through social media, helping and engaging their customers on a real-time basis. To study this increasingly popular practice, we examine how major airlines respond to customer comments on Twitter by exploiting a large data set containing all Twitter exchanges between customers and four major airlines from June 2013 to August 2014. We find that these airlines pay significantly more attention to Twitter users with more followers, suggesting that companies literarily discriminate customers based on their social influence. Moreover, our findings suggest that companies in the digital age are increasingly more sensitive to the need to answer both customer complaints and customer compliments.",
"title": ""
},
{
"docid": "eb2459cbb99879b79b94653c3b9ea8ef",
"text": "Extending the success of deep neural networks to natural language understanding and symbolic reasoning requires complex operations and external memory. Recent neural program induction approaches have attempted to address this problem, but are typically limited to differentiable memory, and consequently cannot scale beyond small synthetic tasks. In this work, we propose the Manager-ProgrammerComputer framework, which integrates neural networks with non-differentiable memory to support abstract, scalable and precise operations through a friendly neural computer interface. Specifically, we introduce a Neural Symbolic Machine, which contains a sequence-to-sequence neural \"programmer\", and a nondifferentiable \"computer\" that is a Lisp interpreter with code assist. To successfully apply REINFORCE for training, we augment it with approximate gold programs found by an iterative maximum likelihood training process. NSM is able to learn a semantic parser from weak supervision over a large knowledge base. It achieves new state-of-the-art performance on WEBQUESTIONSSP, a challenging semantic parsing dataset, with weak supervision. Compared to previous approaches, NSM is end-to-end, therefore does not rely on feature engineering or domain specific knowledge.",
"title": ""
},
{
"docid": "f1a162f64838817d78e97a3c3087fae4",
"text": "Most literature on support vector machines (SVMs) concentrates on the dual optimization problem. In this letter, we point out that the primal problem can also be solved efficiently for both linear and nonlinear SVMs and that there is no reason for ignoring this possibility. On the contrary, from the primal point of view, new families of algorithms for large-scale SVM training can be investigated.",
"title": ""
},
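
The entry above argues that linear SVMs can be trained directly in the primal. As a small sketch of that idea only, the code below runs (sub)gradient descent on the regularized hinge-loss objective 0.5*lam*||w||^2 + (1/n) * sum_i max(0, 1 - y_i (w.x_i + b)); the step size, regularization strength and synthetic data are arbitrary choices, and this is not the specific Newton-style primal method the paper develops.

```python
import numpy as np

def train_primal_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Subgradient descent on 0.5*lam*||w||^2 + mean hinge loss; y must be in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                      # points with non-zero hinge loss
        grad_w = lam * w - (y[active, None] * X[active]).sum(axis=0) / n
        grad_b = -y[active].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy usage on two well-separated Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.5, 1.0, size=(100, 2)), rng.normal(1.5, 1.0, size=(100, 2))])
y = np.array([-1] * 100 + [1] * 100)
w, b = train_primal_linear_svm(X, y)
print("training accuracy:", np.mean(np.sign(X @ w + b) == y))
```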
{
"docid": "19792ab5db07cd1e6cdde79854ba8cb7",
"text": "Empathy allows us to simulate others' affective and cognitive mental states internally, and it has been proposed that the mirroring or motor representation systems play a key role in such simulation. As emotions are related to important adaptive events linked with benefit or danger, simulating others' emotional states might constitute of a special case of empathy. In this functional magnetic resonance imaging (fMRI) study we tested if emotional versus cognitive empathy would facilitate the recruitment of brain networks involved in motor representation and imitation in healthy volunteers. Participants were presented with photographs depicting people in neutral everyday situations (cognitive empathy blocks), or suffering serious threat or harm (emotional empathy blocks). Participants were instructed to empathize with specified persons depicted in the scenes. Emotional versus cognitive empathy resulted in increased activity in limbic areas involved in emotion processing (thalamus), and also in cortical areas involved in face (fusiform gyrus) and body perception, as well as in networks associated with mirroring of others' actions (inferior parietal lobule). When brain activation resulting from viewing the scenes was controlled, emotional empathy still engaged the mirror neuron system (premotor cortex) more than cognitive empathy. Further, thalamus and primary somatosensory and motor cortices showed increased functional coupling during emotional versus cognitive empathy. The results suggest that emotional empathy is special. Emotional empathy facilitates somatic, sensory, and motor representation of other peoples' mental states, and results in more vigorous mirroring of the observed mental and bodily states than cognitive empathy.",
"title": ""
}
] |
scidocsrr
|
d53664f8fe76ad9ff7354e2cde43b578
|
Model-Free Imitation Learning with Policy Optimization
|
[
{
"docid": "6afdf8c4f509de6481bf4cf8d28c77a4",
"text": "We propose a Learning from Demonstration (LfD) algorithm which leverages expert data, even if they are very few or inaccurate. We achieve this by using both expert data, as well as reinforcement signals gathered through trial-and-error interactions with the environment. The key idea of our approach, Approximate Policy Iteration with Demonstration (APID), is that expert’s suggestions are used to define linear constraints which guide the optimization performed by Approximate Policy Iteration. We prove an upper bound on the Bellman error of the estimate computed by APID at each iteration. Moreover, we show empirically that APID outperforms pure Approximate Policy Iteration, a state-of-the-art LfD algorithm, and supervised learning in a variety of scenarios, including when very few and/or suboptimal demonstrations are available. Our experiments include simulations as well as a real robot path-finding task.",
"title": ""
},
{
"docid": "a4473c2cc7da3fb5ee52b60cee24b9b9",
"text": "The ALVINN (Autonomous h d Vehide In a N d Network) projea addresses the problem of training ani&ial naxal naarork in real time to perform difficult perapaon tasks. A L W is a back-propagation network dmpd to dnve the CMU Navlab. a modided Chevy van. 'Ibis ptpa describes the training techniques which allow ALVIN\" to luun in under 5 minutes to autonomously conm>l the Navlab by wardung ahuamr, dziver's rmaions. Usingthese technrques A L W has b&n trained to drive in a variety of Cirarmstanccs including single-lane paved and unprved roads. and multi-lane lined and rmlinecd roads, at speeds of up IO 20 miles per hour",
"title": ""
}
] |
[
{
"docid": "dd8f969d36d5fe037fdb83cdf4ee450f",
"text": "Electronic commerce (EC) has the potential to improve efficiency and productivity in many areas and has received significant attention in many countries. However, there has been some doubt about the relevance of ecommerce for developing countries. The absence of adequate basic infrastructural, socio-economic, sociocultural, and government ICT strategies have created a significant barrier in the adoption and growth of ecommerce in the Kurdistan region of Iraq. In this paper, the author shows that to understand the adoption and diffusion of ecommerce in Kurdistan, socio-cultural issues like transactional trust and social effect of shopping must be considered. The paper presents and discusses these issues hindering ecommerce adoption in Kurdistan. DOI: 10.4018/jtd.2011040104 48 International Journal of Technology Diffusion, 2(2), 47-59, April-June 2011 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. business organizations in developing countries to gain greater global access and reduce transaction costs (Kraemer et al., 2002; Humphrey et al., 2003). However, previous research has found that developing countries have not derived the expected benefits from ecommerce (Pare, 2002; Humphrey et al., 2003). Consequently, there is still doubt about how ecommerce will actually lead firms in developing countries to new trading opportunities (Humphrey et al., 2003; Vatanasakdakul et al., 2004). The obstacles to reaping the benefits brought about by ecommerce are often underestimated. Accessing the Web is possible only when telephones and PCs are available, but these technologies are still in very scarce supply. In addition to this problem, Internet access is still very costly both in absolute terms and relative to per-capita income in most part of Kurdistan region. While PC prices have fallen dramatically over the last decade, they remain beyond the reach of most individual users and enterprises in Kurdistan. Add to this, the human capital cost of installing, operating, maintaining, training and support, the costs are beyond the means of many enterprises. There are significant disparities in the level of Internet penetration across parts of Kurdistan, which have profound implications for an individual’s ability to participate in ecommerce. Moreover, skilled personnel are often lacking, the transport facilities are poor, and secure payment facilities non-existent in most parts of the region. Other than the insufficient physical infrastructures, the electronic transaction facilities are deficient and the legal and regulatory framework inadequate. Most consumer markets face severe limitations in terms of connectivity, ability to pay, deliveries, willingness to make purchases on the Web, ownership of credit cards, and access to other means of payment for online purchases and accessibility in terms of physical deliveries. Moreover, the low level of economic development and small per-capita incomes, the limited skills base with which to build ecommerce services (Odedra-Straub, 2003). While Kurdistan has abundant cheap labour, there still remains the issue of developing IT literacy and education to ensure the quality and size of the IT workforce. The need to overcome infrastructural bottlenecks in telecommunication, transport system, electronic payment systems, security, standards, skilled workforce and logistics must be addressed, before ecommerce can be considered suitable for this region. 
The objective of this paper is to examine the barriers hindering ecommerce adoption, focusing on technological infrastructures, socio-economic, socio-cultural and the lack of governmental policies as they relate to Kurdistan region. It seeks to identify and describe these issues that hinder the adoption and diffusion of ecommerce in the region. Kurdistan region of Iraq is just like any other developing country where the infrastructures are not as developed as they are in developed countries of U.S., Europe, or some Asian countries, and these infrastructural limitations are significant impediments to ecommerce adoption and diffusion. The next section briefly presents background information about Kurdistan region of Iraq. A BRIEF BACKGROUND SUMMARY OF KURDISTAN REGION OF IRAQ This section briefly discusses Kurdistan region which form the background to this study. The choice of Kurdistan as the context of this study is motivated by the quest to understand why the region is lacking behind in the adoption of ecommerce. Kurdistan is an autonomous Region of Iraq; it is one of the only regions which have gained official recognition internationally as an autonomous federal entity, with leverages in foreign relations, defense, internal security, investment and governance – a similar setting is Quebec region of Canada. The region continues to view itself as an integral part of a united Iraq but one in which it administers its own affairs. Kurdistan has a regional government (KRG) as well as a functional parliament and bureaucracy. Kurdistan is a parliamentary democracy.",
"title": ""
},
{
"docid": "a13ca3d83e6ec1693bd9ad53323d2f63",
"text": "BACKGROUND\nThis study examined longitudinal patterns of heroin use, other substance use, health, mental health, employment, criminal involvement, and mortality among heroin addicts.\n\n\nMETHODS\nThe sample was composed of 581 male heroin addicts admitted to the California Civil Addict Program (CAP) during the years 1962 through 1964; CAP was a compulsory drug treatment program for heroin-dependent criminal offenders. This 33-year follow-up study updates information previously obtained from admission records and 2 face-to-face interviews conducted in 1974-1975 and 1985-1986; in 1996-1997, at the latest follow-up, 284 were dead and 242 were interviewed.\n\n\nRESULTS\nIn 1996-1997, the mean age of the 242 interviewed subjects was 57.4 years. Age, disability, years since first heroin use, and heavy alcohol use were significant correlates of mortality. Of the 242 interviewed subjects, 20.7% tested positive for heroin (with additional 9.5% urine refusal and 14.0% incarceration, for whom urinalyses were unavailable), 66.9% reported tobacco use, 22.1% were daily alcohol drinkers, and many reported illicit drug use (eg, past-year heroin use was 40.5%; marijuana, 35.5%; cocaine, 19.4%; crack, 10.3%; amphetamine, 11.6%). The group also reported high rates of health problems, mental health problems, and criminal justice system involvement. Long-term heroin abstinence was associated with less criminality, morbidity, psychological distress, and higher employment.\n\n\nCONCLUSIONS\nWhile the number of deaths increased steadily over time, heroin use patterns were remarkably stable for the group as a whole. For some, heroin addiction has been a lifelong condition associated with severe health and social consequences.",
"title": ""
},
{
"docid": "5387c752db7b4335a125df91372099b3",
"text": "We examine how people’s different uses of the Internet predict their later scores on a standard measure of depression, and how their existing social resources moderate these effects. In a longitudinal US survey conducted in 2001 and 2002, almost all respondents reported using the Internet for information, and entertainment and escape; these uses of the Internet had no impact on changes in respondents’ level of depression. Almost all respondents also used the Internet for communicating with friends and family, and they showed lower depression scores six months later. Only about 20 percent of this sample reported using the Internet to meet new people and talk in online groups. Doing so changed their depression scores depending on their initial levels of social support. Those having high or medium levels of social support showed higher depression scores; those with low levels of social support did not experience these increases in depression. Our results suggest that individual differences in social resources and people’s choices of how they use the Internet may account for the different outcomes reported in the literature.",
"title": ""
},
{
"docid": "9b176a25a16b05200341ac54778a8bfc",
"text": "This paper reports on a study of motivations for the use of peer-to-peer or sharing economy services. We interviewed both users and providers of these systems to obtain different perspectives and to determine if providers are matching their system designs to the most important drivers of use. We found that the motivational models implicit in providers' explanations of their systems' designs do not match well with what really seems to motivate users. Providers place great emphasis on idealistic motivations such as creating a better community and increasing sustainability. Users, on the other hand are looking for services that provide what they need whilst increasing value and convenience. We discuss the divergent models of providers and users and offer design implications for peer system providers.",
"title": ""
},
{
"docid": "0a6a170d3ebec3ded7c596d768f9ce85",
"text": "This paper presents the method of our submission for THUMOS15 action recognition challenge. We propose a new action recognition system by exploiting very deep twostream ConvNets and Fisher vector representation of iDT features. Specifically, we utilize those successful very deep architectures in images such as GoogLeNet and VGGNet to design the two-stream ConvNets. From our experiments, we see that deeper architectures obtain higher performance for spatial nets. However, for temporal net, deeper architectures could not yield better recognition accuracy. We analyze that the UCF101 dataset is relatively very small and it is very hard to train such deep networks on the current action datasets. Compared with traditional iDT features, our implemented two-stream ConvNets significantly outperform them. We further combine the recognition scores of both two-stream ConvNets and iDT features, and achieve 68% mAP value on the validation dataset of THUMOS15.",
"title": ""
},
{
"docid": "e1a08e10db4919a3b8761eda92682cea",
"text": "In many real-world settings, a team of agents must coordinate their behaviour while acting in a decentralised way. At the same time, it is often possible to train the agents in a centralised fashion in a simulated or laboratory setting, where global state information is available and communication constraints are lifted. Learning joint actionvalues conditioned on extra state information is an attractive way to exploit centralised learning, but the best strategy for then extracting decentralised policies is unclear. Our solution is QMIX, a novel value-based method that can train decentralised policies in a centralised end-to-end fashion. QMIX employs a network that estimates joint action-values as a complex non-linear combination of per-agent values that condition only on local observations. We structurally enforce that the joint-action value is monotonic in the per-agent values, which allows tractable maximisation of the joint action-value in off-policy learning, and guarantees consistency between the centralised and decentralised policies. We evaluate QMIX on a challenging set of StarCraft II micromanagement tasks, and show that QMIX significantly outperforms existing value-based multi-agent reinforcement learning methods.",
"title": ""
},
{
"docid": "64f45424c2bfa571dd47523633cb5d03",
"text": "We demonstrate how adjustable robust optimization (ARO) problems with fixed recourse can be casted as static robust optimization problems via Fourier-Motzkin elimination (FME). Through the lens of FME, we characterize the structures of the optimal decision rules for a broad class of ARO problems. A scheme based on a blending of classical FME and a simple Linear Programming technique that can efficiently remove redundant constraints, is developed to reformulate ARO problems. This generic reformulation technique enhances the classical approximation scheme via decision rules, and enables us to solve adjustable optimization problems to optimality. We show via numerical experiments that, for small-size ARO problems our novel approach finds the optimal solution. For moderate or large-size instances, we eliminate a subset of the adjustable variables, which improves the solutions from decision rule approximations.",
"title": ""
},
{
"docid": "8fac18c1285875aee8e7a366555a4ca3",
"text": "Automatic speech recognition (ASR) has been under the scrutiny of researchers for many years. Speech Recognition System is the ability to listen what we speak, interpreter and perform actions according to spoken information. After so many detailed study and optimization of ASR and various techniques of features extraction, accuracy of the system is still a big challenge. The selection of feature extraction techniques is completely based on the area of study. In this paper, a detailed theory about features extraction techniques like LPC and LPCC is examined. The goal of this paper is to study the comparative analysis of features extraction techniques like LPC and LPCC.",
"title": ""
},
{
"docid": "1b97a93d4d975e2ea4082616ccd11948",
"text": "This paper presents an optimized wind energy harvesting (WEH) system that uses a specially designed ultra-low-power-management circuit for sustaining the operation of a wireless sensor node. The proposed power management circuit has two distinct features: 1) an active rectifier using MOSFETs for rectifying the low amplitude ac voltage generated by the wind turbine generator under low wind speed condition efficiently and 2) a dc-dc boost converter with resistor emulation algorithm to perform maximum power point tracking (MPPT) under varying wind-speed conditions. As compared to the conventional diode-bridge rectifier, it is shown that the efficiency of the active rectifier with a low input voltage of 1.2 V has been increased from 40% to 70% due to the significant reduction in the ON-state voltage drop (from 0.6 to 0.15 V) across each pair of MOSFETs used. The proposed robust low-power microcontroller-based resistance emulator is implemented with closed-loop resistance feedback control to ensure close impedance matching between the source and the load, resulting in an efficient power conversion. From the experimental test results obtained, an average electrical power of 7.86 mW is harvested by the optimized WEH system at an average wind speed of 3.62 m/s, which is almost four times higher than the conventional energy harvesting method without using the MPPT.",
"title": ""
},
{
"docid": "06b1a00a97eea61ada0d92469254ddbd",
"text": "We propose a model for clustering data with spatiotemporal intervals. This model is used to effectively evaluate clusters of spatiotemporal interval data. A new energy function is used to measure similarity and balance between clusters in spatial and temporal dimensions. We employ as a case study a large collection of parking data from a real CBD area. The proposed model is applied to existing traditional algorithms to address spatiotemporal interval data clustering problem. Results from traditional clustering algorithms are compared and analysed using the proposed energy function.",
"title": ""
},
{
"docid": "712335f6cbe0d00fce07d6bb6d600759",
"text": "Narrowband Internet of Things (NB-IoT) is a new radio access technology, recently standardized in 3GPP to enable support for IoT devices. NB-IoT offers a range of flexible deployment options and provides improved coverage and support for a massive number of devices within a cell. In this paper, we provide a detailed evaluation of the coverage performance of NBIoT and show that it achieves a coverage enhancement of up to 20 dB when compared with existing LTE technology.",
"title": ""
},
{
"docid": "0cfbf69119666c499d58e05c5599d841",
"text": "We present Memory Augmented Policy Optimization (MAPO), a simple and novel way to leverage a memory buffer of promising trajectories to reduce the variance of policy gradient estimates. MAPO is applicable to deterministic environments with discrete actions, such as structured prediction and combinatorial optimization. Our key idea is to express the expected return objective as a weighted sum of two terms: an expectation over the high-reward trajectories inside a memory buffer, and a separate expectation over trajectories outside of the buffer. To design an efficient algorithm based on this idea, we propose: (1) memory weight clipping to accelerate and stabilize training; (2) systematic exploration to discover high-reward trajectories; (3) distributed sampling from inside and outside of the memory buffer to speed up training. MAPO improves the sample efficiency and robustness of policy gradient, especially on tasks with sparse rewards. We evaluate MAPO on weakly supervised program synthesis from natural language (semantic parsing). On the WIKITABLEQUESTIONS benchmark, we improve the state-of-the-art by 2.6%, achieving an accuracy of 46.3%. On the WIKISQL benchmark, MAPO achieves an accuracy of 74.9% with only weak supervision, outperforming several strong baselines with full supervision. Our source code is available at goo.gl/TXBp4e.",
"title": ""
},
{
"docid": "ee2f9d185e7e6b47a79fa8ef3ba227c9",
"text": "Pedestrian behavior modeling and analysis is important for crowd scene understanding and has various applications in video surveillance. Stationary crowd groups are a key factor influencing pedestrian walking patterns but was mostly ignored in the literature. It plays different roles for different pedestrians in a crowded scene and can change over time. In this paper, a novel model is proposed to model pedestrian behaviors by incorporating stationary crowd groups as a key component. Through inference on the interactions between stationary crowd groups and pedestrians, our model can be used to investigate pedestrian behaviors. The effectiveness of the proposed model is demonstrated through multiple applications, including walking path prediction, destination prediction, personality attribute classification, and abnormal event detection. To evaluate our model, two large pedestrian walking route datasets are built. The walking routes of around 15 000 pedestrians from two crowd surveillance videos are manually annotated. The datasets will be released to the public and benefit future research on pedestrian behavior analysis and crowd scene understanding.",
"title": ""
},
{
"docid": "cd13524d825c5253313cf17d46e5a11f",
"text": "This paper documents the application of the Conway-Maxwell-Poisson (COM-Poisson) generalized linear model (GLM) for modeling motor vehicle crashes. The COM-Poisson distribution, originally developed in 1962, has recently been re-introduced by statisticians for analyzing count data subjected to over- and under-dispersion. This innovative distribution is an extension of the Poisson distribution. The objectives of this study were to evaluate the application of the COM-Poisson GLM for analyzing motor vehicle crashes and compare the results with the traditional negative binomial (NB) model. The comparison analysis was carried out using the most common functional forms employed by transportation safety analysts, which link crashes to the entering flows at intersections or on segments. To accomplish the objectives of the study, several NB and COM-Poisson GLMs were developed and compared using two datasets. The first dataset contained crash data collected at signalized four-legged intersections in Toronto, Ont. The second dataset included data collected for rural four-lane divided and undivided highways in Texas. Several methods were used to assess the statistical fit and predictive performance of the models. The results of this study show that COM-Poisson GLMs perform as well as NB models in terms of GOF statistics and predictive performance. Given the fact the COM-Poisson distribution can also handle under-dispersed data (while the NB distribution cannot or has difficulties converging), which have sometimes been observed in crash databases, the COM-Poisson GLM offers a better alternative over the NB model for modeling motor vehicle crashes, especially given the important limitations recently documented in the safety literature about the latter type of model.",
"title": ""
},
{
"docid": "34a7ae3283c4f3bcb3e9afff2383de72",
"text": "Latent variable models have been a preferred choice in conversational modeling compared to sequence-to-sequence (seq2seq) models which tend to generate generic and repetitive responses. Despite so, training latent variable models remains to be difficult. In this paper, we propose Latent Topic Conversational Model (LTCM) which augments seq2seq with a neural latent topic component to better guide response generation and make training easier. The neural topic component encodes information from the source sentence to build a global “topic” distribution over words, which is then consulted by the seq2seq model at each generation step. We study in details how the latent representation is learnt in both the vanilla model and LTCM. Our extensive experiments contribute to better understanding and training of conditional latent models for languages. Our results show that by sampling from the learnt latent representations, LTCM can generate diverse and interesting responses. In a subjective human evaluation, the judges also confirm that LTCM is the overall preferred option.",
"title": ""
},
{
"docid": "78afd117aa7fba5987481de3a2a605b8",
"text": "Character-based sequence labeling framework is flexible and efficient for Chinese word segmentation (CWS). Recently, many character-based neural models have been applied to CWS. While they obtain good performance, they have two obvious weaknesses. The first is that they heavily rely on manually designed bigram feature, i.e. they are not good at capturing n-gram features automatically. The second is that they make no use of full word information. For the first weakness, we propose a convolutional neural model, which is able to capture rich n-gram features without any feature engineering. For the second one, we propose an effective approach to integrate the proposed model with word embeddings. We evaluate the model on two benchmark datasets: PKU and MSR. Without any feature engineering, the model obtains competitive performance — 95.7% on PKU and 97.3% on MSR. Armed with word embeddings, the model achieves state-of-the-art performance on both datasets — 96.5% on PKU and 98.0% on MSR, without using any external labeled resource.",
"title": ""
},
{
"docid": "8904494e20d6761437e4d63c86c43e78",
"text": "Deep residual networks (ResNets) and their variants are widely used in many computer vision applications and natural language processing tasks. However, the theoretical principles for designing and training ResNets are still not fully understood. Recently, several points of view have emerged to try to interpret ResNet theoretically, such as unraveled view, unrolled iterative estimation and dynamical systems view. In this paper, we adopt the dynamical systems point of view, and analyze the lesioning properties of ResNet both theoretically and experimentally. Based on these analyses, we additionally propose a novel method for accelerating ResNet training. We apply the proposed method to train ResNets and Wide ResNets for three image classification benchmarks, reducing training time by more than 40% with superior or on-par accuracy.",
"title": ""
},
{
"docid": "3192a76e421d37fbe8619a3bc01fb244",
"text": "• Develop and implement an internally consistent set of goals and functional policies (this is, a solution to the agency problem) • These internally consistent set of goals and policies aligns the firm’s strengths and weaknesses with external (industry) opportunities and threats (SWOT) in a dynamic balance • The firm’s strategy has to be concerned with the exploitation of its “distinctive competences” (early reference to RBV)",
"title": ""
},
{
"docid": "3aa27e9ffe0580dde75cc2f05244d0fa",
"text": "UNLABELLED\nImperforate hymen is a rare congenital anomaly, with an incidence of about 1 in 2000 female births. It is generally diagnosed during puberty. Treatment generally consists of a hymenotomy or a hymenectomy. Because the hymen is a symbol of virginity in some communities, its destruction can be source of social problems for some girls.\n\n\nOBJECTIVES\nWe discuss the diagnostic but especially therapeutic aspects of imperforate hymens and possible surgical techniques, in particular those that preserve the hymen.\n\n\nMATERIAL AND METHODS\nWe describe the cases of 5 girls treated in our department for imperforate hymen between 2001 and 2007. Two of them required the safeguarding of the normal architecture of their hymen to preserve the appearance of virginity. We analysed diagnostic features and surgical techniques.\n\n\nRESULTS\nThe average age of our patients was 14.8 years (range: 11 and 17 years). The most frequent reason for consultation was pelvic pain with primary amenorrhea. Inspection of the vulva revealed in all cases a dome-shaped purplish-red hymeneal membrane. Hymeneal incision allowed drainage of old previously blocked menstrual blood. Three patients were treated by radial incisions of the hymen. The parents of 2 patients demanded that their hymens be preserved. Accordingly, one had a simple excision of a central flange of the hymen and the other was treated by a similar technique that also used a Foley catheter . All five patients did well after surgical treatment. The techniques used to preserve the hymen resulted in an apparently intact annular hymen.\n\n\nCONCLUSION\nImperforate hymen is a rare anomaly. Its diagnosis is simple. The traditional technique of radial incisions is a simple procedure that yields good results. The technique using the Foley catheter is an adequate alternative when preservation of the hymen is required.",
"title": ""
},
{
"docid": "9afdeab9abb1bfde45c6e9f922181c6b",
"text": "Aiming at the need for autonomous learning in reinforcement learning (RL), a quantitative emotion-based motivation model is proposed by introducing psychological emotional factors as the intrinsic motivation. The curiosity is used to promote or hold back agents' exploration of unknown states, the happiness index is used to determine the current state-action's happiness level, the control power is used to indicate agents' control ability over its surrounding environment, and together to adjust agents' learning preferences and behavioral patterns. To combine intrinsic emotional motivations with classic RL, two methods are proposed. The first method is to use the intrinsic emotional motivations to explore unknown environment and learn the environment transitioning model ahead of time, while the second method is to combine intrinsic emotional motivations with external rewards as the ultimate joint reward function, directly to drive agents' learning. As the result shows, in the simulation experiments in the rat foraging in maze scenario, both methods have achieved relatively good performance, compared with classic RL purely driven by external rewards.",
"title": ""
}
] |
scidocsrr
|
f9acfc37df21f3970f60f3a5c8258cea
|
Measurement Instruments for the Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots
|
[
{
"docid": "b86dd4b34965b15af417da275de761c4",
"text": "This article considered the problem of designing joint-actuation mechanisms that can allow fast and accurate operation of a robot arm, while guaranteeing a suitably limited level of injury risk. Different approaches to the problem were presented, and a method of performance evaluation was proposed based on minimum-time optimal control with safety constraints. The variable stiffness transmission (VST) scheme was found to be one of a few different possible schemes that allows the most flexibility and potential performance. Some aspects related to the implementation of the mechanics and control of VST actuation were also reported.",
"title": ""
}
] |
[
{
"docid": "1dca090997ade8e18c19d16b9b9d2450",
"text": "Pretraining with expert demonstrations have been found useful in speeding up the training process of deep reinforcement learning algorithms since less online simulation data is required. Some people use supervised learning to speed up the process of feature learning, others pretrain the policies by imitating expert demonstrations. However, these methods are unstable and not suitable for actor-critic reinforcement learning algorithms. Also, some existing methods rely on the global optimum assumption, which is not true in most scenarios. In this paper, we employ expert demonstrations in a actor-critic reinforcement learning framework, and meanwhile ensure that the performance is not affected by the fact that expert demonstrations are not global optimal. We theoretically derive a method for computing policy gradients and value estimators with only expert demonstrations. Our method is theoretically plausible for actor-critic reinforcement learning algorithms that pretrains both policy and value functions. We apply our method to two of the typical actor-critic reinforcement learning algorithms, DDPG and ACER, and demonstrate with experiments that our method not only outperforms the RL algorithms without pretraining process, but also is more simulation efficient.",
"title": ""
},
{
"docid": "7b40f1e8c3a779fe07387f16e56617a9",
"text": "Social networking sites such as Twitter and Facebook attracts over 500 million users across the world, for those users, their social life, even their practical life, has become interrelated. Their interaction with social networking has affected their life forever. Accordingly, social networking sites have become among the main channels that are responsible for vast dissemination of different kinds of information during real time events. This popularity in Social networking has led to different problems including the possibility of exposing incorrect information to their users through fake accounts which results to the spread of malicious content during life events. This situation can result to a huge damage in the real world to the society in general including citizens, business entities, and others. In this paper, we present a classification method for detecting the fake accounts on Twitter. The study determines the minimized set of the main factors that influence the detection of the fake accounts on Twitter, and then the determined factors are applied using different classification techniques. A comparison of the results of these techniques has been performed and the most accurate algorithm is selected according to the accuracy of the results. The study has been compared with different recent researches in the same area; this comparison has proved the accuracy of the proposed study. We claim that this study can be continuously applied on Twitter social network to automatically detect the fake accounts; moreover, the study can be applied on different social network sites such as Facebook with minor changes according to the nature of the social network which are discussed in this paper. Keywords—Fake accounts detection, classification algorithms, twitter accounts analysis, features based techniques.",
"title": ""
},
{
"docid": "3c778c71f621b2c887dc81e7a919058e",
"text": "We have witnessed the Fixed Internet emerging with virtually every computer being connected today; we are currently witnessing the emergence of the Mobile Internet with the exponential explosion of smart phones, tablets and net-books. However, both will be dwarfed by the anticipated emergence of the Internet of Things (IoT), in which everyday objects are able to connect to the Internet, tweet or be queried. Whilst the impact onto economies and societies around the world is undisputed, the technologies facilitating such a ubiquitous connectivity have struggled so far and only recently commenced to take shape. To this end, this paper introduces in a timely manner and for the first time the wireless communications stack the industry believes to meet the important criteria of power-efficiency, reliability and Internet connectivity. Industrial applications have been the early adopters of this stack, which has become the de-facto standard, thereby bootstrapping early IoT developments with already thousands of wireless nodes deployed. Corroborated throughout this paper and by emerging industry alliances, we believe that a standardized approach, using latest developments in the IEEE 802.15.4 and IETF working groups, is the only way forward. We introduce and relate key embodiments of the power-efficient IEEE 802.15.4-2006 PHY layer, the power-saving and reliable IEEE 802.15.4e MAC layer, the IETF 6LoWPAN adaptation layer enabling universal Internet connectivity, the IETF ROLL routing protocol enabling availability, and finally the IETF CoAP enabling seamless transport and support of Internet applications. The protocol stack proposed in the present work converges towards the standardized notations of the ISO/OSI and TCP/IP stacks. What thus seemed impossible some years back, i.e., building a clearly defined, standards-compliant and Internet-compliant stack given the extreme restrictions of IoT networks, is commencing to become reality.",
"title": ""
},
{
"docid": "bb853c369f37d2d960d6b312f80cfa98",
"text": "The purpose of this platform is to support research and education goals in human-robot interaction and mobile manipulation with applications that require the integration of these abilities. In particular, our research aims to develop personal robots that work with people as capable teammates to assist in eldercare, healthcare, domestic chores, and other physical tasks that require robots to serve as competent members of human-robot teams. The robot’s small, agile design is particularly well suited to human-robot interaction and coordination in human living spaces. Our collaborators include the Laboratory for Perceptual Robotics at the University of Massachusetts at Amherst, Xitome Design, Meka Robotics, and digitROBOTICS.",
"title": ""
},
{
"docid": "db66428e21d473b7d77fde0c3ae6d6c3",
"text": "In order to improve electric vehicle lead-acid battery charging speed, analysis the feasibility of shortening the charging time used the charge method with negative pulse discharge, presenting the negative pulse parameters determined method for the fast charging with pulse discharge, determined the negative pulse amplitude and negative pulse duration in the pulse charge with negative pulse. Experiments show that the determined parameters with this method has some Advantages such as short charging time, small temperature rise etc, and the method of negative pulse parameters determined can used for different capacity of lead-acid batteries.",
"title": ""
},
{
"docid": "843114fa31397e6154c63561e30add48",
"text": "Many animals engage in many behaviors that reduce their exposure to pathogens. Ants line their nests with resins that inhibit the growth of fungi and bacteria (Chapuisat, Oppliger, Magliano, & Christe, 2008). Mice avoid mating with other mice that are infected with parasitic protozoa (Kavaliers & Colwell, 1995). Animals of many kinds—from physiologically primitive nematode worms to neurologically sophisticated chimpanzees—strategically avoid physical contact with specific things (including their own conspecifics) that, on the basis of superficial sensory cues, appear to pose some sort of infection risk (Goodall, 1986; Kiesecker, Skelly, Beard, & Preisser, 1999; Schulenburg & Müller, 2004).",
"title": ""
},
{
"docid": "b1c036f2a003ada4eaa965543e7e6d36",
"text": "Seaweed and their constituents have been traditionally employed for the management of various human pathologic conditions such as edema, urinary disorders and inflammatory anomalies. The current study was performed to investigate the antioxidant and anti-arthritic effects of fucoidan from Undaria pinnatifida. A noteworthy in vitro antioxidant potential at 500μg/ml in 2, 2-diphenyl-1-picrylhydrazyl scavenging assay (80% inhibition), nitrogen oxide inhibition assay (71.83%), hydroxyl scavenging assay (71.92%), iron chelating assay (73.55%) and a substantial ascorbic acid equivalent reducing power (399.35μg/mg ascorbic acid equivalent) and total antioxidant capacity (402.29μg/mg AAE) suggested fucoidan a good antioxidant agent. Down regulation of COX-2 expression in rabbit articular chondrocytes in a dose (0-100μg) and time (0-48h) dependent manner, unveiled its in vitro anti-inflammatory significance. In vivo carrageenan induced inflammatory rat model demonstrated a 68.19% inhibition of inflammation whereas an inflammation inhibition potential of 79.38% was recorded in anti-arthritic complete Freund's adjuvant-induced arthritic rat model. A substantial ameliorating effect on altered hematological and biochemical parameters in arthritic rats was also observed. Therefore, findings of the present study prospects fucoidan as a potential antioxidant that can effectively abrogate oxidative stress, edema and arthritis-mediated inflammation and mechanistic studies are recommended for observed activities.",
"title": ""
},
{
"docid": "6235c7e1682b5406c95f91f9259288f8",
"text": "Model-driven development is an emerging area in software development that provides a way to express system requirements and architecture at a high level of abstraction through models. It involves using these models as the primary artifacts during the development process. One aspect that is holding back MDD from more wide-spread adoption is the lack of a well established and easy way of performing model to model (M2M) transformations. We propose to explore and compare popular M2M model transformation languages in existence: EMT , Kermeta, and ATL. Each of these languages support transformation of Ecore models within the Eclipse Modeling Framework (EMF). We attempt to implement the same transformation rule on identical meta models in each of these languages to achieve the appropriate transformed model. We provide our observations in using each tool to perform the transformation and comment on each language/tool’s expressive power, ease of use, and modularity. We conclude by noting that ATL is our language / tool of choice because it strikes a balance between ease of use and expressive power and still allows for modularity. We believe this, in conjunction with ATL’s role in the official Eclipse M2M project will lead to widespread use of ATL and, hopefully, a step forward in M2M transformations.",
"title": ""
},
{
"docid": "7884c51de6f53d379edccac50fd55caa",
"text": "Objective. We analyze the process of changing ethical attitudes over time by focusing on a specific set of ‘‘natural experiments’’ that occurred over an 18-month period, namely, the accounting scandals that occurred involving Enron/Arthur Andersen and insider-trader allegations related to ImClone. Methods. Given the amount of media attention devoted to these ethical scandals, we test whether respondents in a cross-sectional sample taken over 18 months become less accepting of ethically charged vignettes dealing with ‘‘accounting tricks’’ and ‘‘insider trading’’ over time. Results. We find a significant and gradual decline in the acceptance of the vignettes over the 18-month period. Conclusions. Findings presented here may provide valuable insight into potential triggers of changing ethical attitudes. An intriguing implication of these results is that recent highly publicized ethical breaches may not be only a symptom, but also a cause of changing attitudes.",
"title": ""
},
{
"docid": "f70f4704b23733e6f837fd4e9343be88",
"text": "222 Abstract— This paper investigates the effectiveness of OFDM and proven in other conventional (narrowband) commercial radio technologies (e.g. DS-CDMA in cell phones) (e.g. OFDM in IEEE 802.11a/g).. The main aim was to assess the suitability of OFDM as a modulation technique for a fixed wireless phone system for rural areas. However, its suitability for more general wireless applications is also assessed. Most third generation mobile phone systems are proposing to use Code Division Multiple Access (CDMA) as their modulation technique. For this reason, CDMA is also investigated so that the performance of CDMA could be compared with OFDM on the basis of various wireless parameters. At the end it is concluded that the good features of both the modulation schemes can be combined in an intelligent way to get the best modulation scheme as a solution for wireless communication high speed requirement, channel problems and increased number of users.",
"title": ""
},
{
"docid": "578973539dbc323f812ecaf1bb57400f",
"text": "In light of the Office of the Secretary Defense’s Roadmap for unmanned aircraft systems (UASs), there is a critical need for research examining human interaction with heterogeneous unmanned vehicles. The OSD Roadmap clearly delineates the need to investigate the “appropriate conditions and requirements under which a single pilot would be allowed to control multiple airborne UA [unmanned aircraft] simultaneously”. Towards this end, in this paper, we provide a meta-analysis of research studies across unmanned aerial and ground vehicle domains that investigated single operator control of multiple vehicles. As a result, a hierarchical control model for single operator control of multiple unmanned vehicles (UV) is proposed that demonstrates those requirements that will need to be met for operator cognitive support of multiple UV control, with an emphasis on the introduction of higher levels of autonomy. The challenge in achieving effective management of multiple UV systems in the future is not only to determine if automation can be used to improve human and system performance, but how and to what degree across hierarchical control loops, as well as determining the types of decision support that will be needed by operators given the high workload environment. We address when and how increasing levels of automation should be incorporated in multiple UV systems and discuss the impact on not only human performance, but more importantly, system performance.",
"title": ""
},
{
"docid": "7926bf1d0d41a442a9f699ab3c0cb432",
"text": "Cloud applications heavily rely on the network communication infrastructure, whose stability and latency directly affect the quality of experience. As mobile devices need to rapidly retrieve data from the cloud, it becomes an extremely important goal to deliver the lowest possible access latency at the best reliability. In this paper, we specify a cloud access overlay protocol architecture to improve the cloud access performance in distributed data-center (DC) cloud fabrics. We explore how linking virtual machine (VM) mobility and routing to user mobility can compensate performance decrease due to increased user-cloud network distance, by building an online cloud scheduling solution to optimally switch VM routing locators and to relocate VMs across DC sites, as a function of user-DC overlay network states. We evaluate our solution: 1) on a real distributed DC testbed spanning all of France, showing that we can grant a very high transfer time gain and 2) by emulating the situation of Internet service providers (ISPs) and over-the-top (OTT) cloud providers, exploiting thousands of real France-wide user displacement traces, finding a median throughput gain from 30% for OTT scenarii to 40% for ISP scenarii, the large majority of this gain being granted by adaptive VM mobility.",
"title": ""
},
{
"docid": "9a41380c2f94f222fd31ae1428bdbb17",
"text": "This paper presents a compact system-on-package-based front-end solution for 60-GHz-band wireless communication/sensor applications that consists of fully integrated three-dimensional (3-D) cavity filters/duplexers and antenna. The presented concept is applied to the design, fabrication, and testing of V-band (receiver (Rx): 59-61.5 GHz, transmitter (Tx): 61.5-64 GHz) transceiver front-end module using multilayer low-temperature co-fired ceramic technology. Vertically stacked 3-D low-loss cavity bandpass filters are developed for Rx and Tx channels to realize a fully integrated compact duplexer. Each filter exhibits excellent performance (Rx: IL<2.37 dB, 3-dB bandwidth (BW) /spl sim/3.5%, Tx: IL<2.39 dB, 3-dB BW /spl sim/3.33%). The fabrication tolerances contributing to the resonant frequency experimental downshift were investigated and taken into account in the simulations of the rest devices. The developed cavity filters are utilized to realize the compact duplexers by using microstrip T-junctions. This integrated duplexer shows Rx/Tx BW of 4.20% and 2.66% and insertion loss of 2.22 and 2.48 dB, respectively. The different experimental results of the duplexer compared to the individual filters above are attributed to the fabrication tolerance, especially on microstrip T-junctions. The measured channel-to-channel isolation is better than 35.2 dB across the Rx band (56-58.4 GHz) and better than 38.4 dB across the Tx band (59.3-60.9 GHz). The reported fully integrated Rx and Tx filters and the dual-polarized cross-shaped patch antenna functions demonstrate a novel 3-D deployment of embedded components equipped with an air cavity on the top. The excellent overall performance of the full integrated module is verified through the 10-dB BW of 2.4 GHz (/spl sim/4.18%) at 57.45 and 2.3 GHz (/spl sim/3.84%) at 59.85 GHz and the measured isolation better than 49 dB across the Rx band and better than 51.9 dB across the Tx band.",
"title": ""
},
{
"docid": "ead461ea8f716f6fab42c08bb7b54728",
"text": "Despite the increasing importance of data quality and the rich theoretical and practical contributions in all aspects of data cleaning, there is no single end-to-end off-the-shelf solution to (semi-)automate the detection and the repairing of violations w.r.t. a set of heterogeneous and ad-hoc quality constraints. In short, there is no commodity platform similar to general purpose DBMSs that can be easily customized and deployed to solve application-specific data quality problems. In this paper, we present NADEEF, an extensible, generalized and easy-to-deploy data cleaning platform. NADEEF distinguishes between a programming interface and a core to achieve generality and extensibility. The programming interface allows the users to specify multiple types of data quality rules, which uniformly define what is wrong with the data and (possibly) how to repair it through writing code that implements predefined classes. We show that the programming interface can be used to express many types of data quality rules beyond the well known CFDs (FDs), MDs and ETL rules. Treating user implemented interfaces as black-boxes, the core provides algorithms to detect errors and to clean data. The core is designed in a way to allow cleaning algorithms to cope with multiple rules holistically, i.e. detecting and repairing data errors without differentiating between various types of rules. We showcase two implementations for core repairing algorithms. These two implementations demonstrate the extensibility of our core, which can also be replaced by other user-provided algorithms. Using real-life data, we experimentally verify the generality, extensibility, and effectiveness of our system.",
"title": ""
},
{
"docid": "1a39b10cfdcae83004a1f3248df18ab2",
"text": "This chapter discusses the task of topic segmentation: automatically dividing single long recordings or transcripts into shorter, topically coherent segments. First, we look at the task itself, the applications which require it, and some ways to evaluate accuracy. We then explain the most influential approaches – generative and discriminative, supervised and unsupervised – and discuss their application in particular domains.",
"title": ""
},
{
"docid": "4f43a692ff8f6aed3a3fc4521c86d35e",
"text": "LEARNING OBJECTIVES\nAfter reading this article, the participant should be able to: 1. Understand the challenges in restoring volume and structural integrity in rhinoplasty. 2. Identify the appropriate uses of various autografts in aesthetic and reconstructive rhinoplasty (septal cartilage, auricular cartilage, costal cartilage, calvarial and nasal bone, and olecranon process of the ulna). 3. Identify the advantages and disadvantages of each of these autografts.\n\n\nSUMMARY\nThis review specifically addresses the use of autologous grafts in rhinoplasty. Autologous materials remain the preferred graft material for use in rhinoplasty because of their high biocompatibility and low risk of infection and extrusion. However, these advantages should be counterbalanced with the concerns of donor-site morbidity, graft availability, and graft resorption.",
"title": ""
},
{
"docid": "df3d91489c8c39ffb36f4c09a132c7d6",
"text": "In this paper, we introduce a wheel-based cable climbing robot system developed for maintenance of the suspension bridges. The robot consists of three parts: a wheel based driving mechanism, adhesion mechanism, and safe landing mechanism. The driving mechanism is a combination of pantograph mechanism, and wheels driven by motors. In addition, we propose a special design of safe landing mechanism which can assure the safety of the robot on the cables when the power is lost. Finally, the proposed robotic system is manufactured and validated in the indoor experimental environments.",
"title": ""
},
{
"docid": "ae890543be64e32e4a52f8b782f52e39",
"text": "Data Type ++ + weak positive satisficing strong positive satisficing very critical ! ! ! critical denied neutral undetermined satisficed ? Figure 5: Goal graph selecting among architectural alternatives for a KWIC system. The software architect also establishes new goals, such as Comprehensibility [System], which add to the number of goals. Goal Criticality. The software architect, on the one hand, has limited time; she has a number of goal con ict and synergy to deal with, on the other. In order to handle the situation, the architect prioritises goals, here into three categories: non-critical, critical, and very critical. This decision can be justi ed by way of design rationale. For example, treating modi ability, performance, and reusability as critical goals can be supported, via the vital few argumentation method, possibly with a market survey. With the prioritisation, the software architect can put emphasis on (very) critical goals, and readily resolve goal con ict. For example, as Modifiability [Data Rep] is considered very important, architectural design alternatives which strongly hurt the goal might be eliminated from further consideration, here Shared Data and Pipe & Filter. Evaluation and Selection. A particular architectural design can make a positive, negative, or no contribution to a goal. For example, the use of an abstract data type may help updatability, but at the cost of poorer time performance (Figure 5). Hence, selecting an architectural design requires careful examination of the degree of goal achievement, particularly for critical ones. Throughout the goal graph expansion process, the evaluation procedure propagates, via labels, the e ect of each design decision from o spring to parents. In assessing the degree of goal achievement, the evaluation procedure considers the type of link, and interacts with the software architect when uncertainties arise. Figure 5 shows a stage during software architectural design process, where one alternative is marked acceptable while the others are either denied or neutral. This kind of approach is made possible by earler goal reduction as application of \\divide-and-conquer\" paradigm. Disambiguation and re nement, via decomposition, have facilitated systematic codi cation of and search for NFR-related reusable knowledge, clearer understanding of tradeo s, and con ict resolution with design rationale which re ects the needs and characteristics of the intended application domain. Here, for example, the impact of using an abstract data type upon system modi ability is initially unclear, but re nement shows that ADTs support modi ability of data representation, which is critical, but hinder process modi ability. Throughout the process of architectural design, the software architect has been in control, posting NFR goals, browsing and choosing decomposition methods, design alternatives, and correlations, supporting or denying design decisions by way of design rationale, and observing goal assessment and selecting a particular architectural design. This process promotes communication, analysis, and comparison of architectural designs and principles. This process is used to support architectural design and results in a history record (with design rationale for both accepted and discarded alternatives considered), for later review, justi cation and evolution. 
4 Related Work Our proposal draws on concepts, such as elements, components, and connectors, that have been identified as essential to portray architectural infrastructure, as advocated by Perry and Wolf [38], Garlan and Shaw [17], Abowd, Allen, and Garlan [1], Callahan [5], Mettala and Graham [28], and on earlier notions on information system architecture by Zachman [45]. In our view, our emphasis on NFRs is complementary to efforts directed towards identification and formalization of concepts for functional design. Concerning the role of NFRs, design rationale, and goal assessment, the proposal by Perry and Wolf [38] is of close relevance to our work. Perry and Wolf propose to use architectural style for constraining the architecture and coordinating cooperating software architects. They also propose that rationale, together with elements and form, constitute the model of software architecture. In our approach, weighted properties of the architectural form are justified with respect to their positive and negative contributions to the stated NFRs, and weighted relationships of the architectural form are abstracted into link types and labels, which can be interactively and semi-automatically determined. In our framework, we focus on the problem of systematically capturing and reusing knowledge about NFRs, design alternatives, tradeoffs and rationale. Kazman, Bass, Abowd, and Webb [24] proposes a basis (called SAAM) for understanding and evaluating software architectures, and gives an illustration using modifiability. This proposal is similar to ours, in spirit, as both take a qualitative approach, instead of a metrics approach. In a similar vein but towards software reuse, Ning, Miriyala, and Kozaczynski [33] proposes an approach (called ABC), in which they suggest the use of NFRs to evaluate the architectural design, chosen from a reuse repository of domain-specific software architectures, which closely meets very high-level requirements. Both SAAM and ABC are product-oriented, i.e., they use NFRs to understand and/or evaluate architectural products; ours, however, is process-oriented, i.e., it provides support for systematically dealing with NFRs during the process of architectural design. The NFR-Framework [7] [30] aims to improve software quality [9] [10] and has been tested on system types with a variety of NFRs, including accuracy, security and performance. Systems studied [13] include credit card [34, 8], public health insurance [7], government administration (Cabinet Documents [7] and Taxation Appeals [35]) and bank loan [12] information systems. The last study considered dealing with changes in requirements, including informativeness. The NFR-Framework also has an associated prototype tool: the NFR-Assistant [11] has been designed and implemented to deal with a variety of NFRs, primarily security, accuracy [7] [8], and (in progress) performance [35]. The NFR-Framework has been one of the subjects in a comparative study on several goal-oriented approaches by Finkelstein and Green [15] who use the meeting scheduler example as a basis of comparison. There are other uses of the NFR-Framework, including organization modelling by Yu [43, 44], and project risk management by Parmakson [36]. In our view, there are parallels to NFR-related work on information systems, and the current paper is one of the first which considers adaptation of the NFR-Framework specifically in the context of software architecture.
5 Conclusion This paper has proposed an approach to systematically guiding selection among architectural design alternatives, thus providing an alternative to an ad hoc approach. Our approach is intended to improve the software architect's ability to understand the high level system constraints and the rationale behind architectural choices, to reuse architectural knowledge concerning NFRs, to make the system more evolvable, and to analyse the design with respect to NFR-related concerns. More specifically, our approach facilitates explicit representation of NFRs as (potentially) conflicting or synergistic goals to be addressed during the process of architectural design, and use of such goals to rationalise the overall architectural design and selection process. Our approach also facilitates codification of knowledge about NFR-related architectural design and tradeoffs, and systematic management and use of such knowledge. In order to help the software architect analyse design tradeoffs, assess goal achievement, and select a particular architectural design, our approach offers an interactive evaluation scheme in which design decisions are rationalised in terms of design rationale which reflects the needs and characteristics of the intended application domain. The underlying framework has already been applied to information systems. We have studied a number of such systems, considered NFRs which are relevant to them, and provided tool support. In the context of architectural design, however, our proposal is only preliminary, with its use illustrated only on a pedagogical example. Broader case studies are needed to gain experience and feedback, both benefits and weaknesses, and to see whether our approach can be effectively applied to industrial-strength domain-specific software architectures, application-frameworks, and reference architectures. Our studies of NFRs in the context of information system development have helped us and domain experts evaluate the effectiveness of the framework for such systems. Once a more codified catalogue of architectural methods is developed, it could be used in studies of using the framework to deal with architectural design. Along with feedback from industrial and academic experts, such studies would enhance the framework's coverage and evaluate its usefulness. A tool which embeds our approach is also needed to assist the software architect to systematically select among architectural design alternatives. Such a tool would incorporate knowledge about a variety of methods and correlations for a wide range of non-functional requirements. Our NFR-Assistant tool could serve as a starting point for such an architecture assistant tool. We have only illustrated the application of the NFR Framework to selecting among architectural design alternatives at a very abstract level. An important aspect of future work is to deal with more complex architectural problems, and to show the scalability of the framework. This of course requires codification of current and future knowledge about architectural alternatives and design criteria. The results, we trust, would be the provision of more satisfactory goal assessment",
"title": ""
},
{
"docid": "db3d1a63d5505693bd6677e9b268e8d4",
"text": "This paper presents a system for calibrating the extrinsic parameters and timing offsets of an array of cameras, 3-D lidars, and global positioning system/inertial navigation system sensors, without the requirement of any markers or other calibration aids. The aim of the approach is to achieve calibration accuracies comparable with state-of-the-art methods, while requiring less initial information about the system being calibrated and thus being more suitable for use by end users. The method operates by utilizing the motion of the system being calibrated. By estimating the motion each individual sensor observes, an estimate of the extrinsic calibration of the sensors is obtained. Our approach extends standard techniques for motion-based calibration by incorporating estimates of the accuracy of each sensor's readings. This yields a probabilistic approach that calibrates all sensors simultaneously and facilitates the estimation of the uncertainty in the final calibration. In addition, we combine this motion-based approach with appearance information. This gives an approach that requires no initial calibration estimate and takes advantage of all available alignment information to provide an accurate and robust calibration for the system. The new framework is validated with datasets collected with different platforms and different sensors' configurations, and compared with state-of-the-art approaches.",
"title": ""
},
{
"docid": "c460da4083842102fcf2a59ef73702a1",
"text": "I describe two aspects of metacognition, knowledge of cognition and regulation of cognition, and how they are related to domain-specific knowledge and cognitive abilities. I argue that metacognitive knowledge is multidimensional, domain-general in nature, and teachable. Four instructional strategies are described for promoting the construction and acquisition of metacognitive awareness. These include promoting general awareness, improving selfknowledge and regulatory skills, and promoting learning environments that are conducive to the construction and use of metacognition. This paper makes three proposals: (a) metacognition is a multidimensional phenomenon, (b) it is domain-general in nature, and (c) metacognitive knowledge and regulation can be improved using a variety of instructional strategies. Let me acknowledge at the beginning that each of these proposals is somewhat speculative. While there is a limited amount of research that supports them, more research is needed to clarify them. Each one of these proposals is addressed in a separate section of the paper. The first makes a distinction between knowledge of cognition and regulation of cognition. The second summarizes some of the recent research examining the relationship of metacognition to expertise and cognitive abilities. The third section describes four general instructional strategies for improving metacognition. These include fostering construction of new knowledge, explicating conditional knowledge, automatizing a monitoring heuristic, and creating a supportive motivational environment in the classroom. I conclude with a few thoughts about general cognitive skills instruction. A framework for understanding metacognition Researchers have been studying metacognition for over twenty years. Most agree that cognition and metacognition differ in that cognitive skills are necessary to perform a task, while metacognition is necessary to understand how the task was performed (Garner, 1987). Most researchers also make a VICTORY: PIPS No.: 136750 LAWKAP truchh7.tex; 9/12/1997; 18:12; v.6; p.1",
"title": ""
}
] |
scidocsrr
|
18e28b1cd98026fece5ebd53662e78dd
|
Critical Hyper-Parameters: No Random, No Cry
|
[
{
"docid": "f1c1a0baa9f96d841d23e76b2b00a68d",
"text": "Introduction to Derivative-Free Optimization Andrew R. Conn, Katya Scheinberg, and Luis N. Vicente The absence of derivatives, often combined with the presence of noise or lack of smoothness, is a major challenge for optimization. This book explains how sampling and model techniques are used in derivative-free methods and how these methods are designed to efficiently and rigorously solve optimization problems. Although readily accessible to readers with a modest background in computational mathematics, it is also intended to be of interest to researchers in the field. 2009 · xii + 277 pages · Softcover · ISBN 978-0-898716-68-9 List Price $73.00 · RUNDBRIEF Price $51.10 · Code MP08",
"title": ""
},
{
"docid": "c8d235d1fd40e972e9bc7078d6472776",
"text": "Performance of machine learning algorithms depends critically on identifying a good set of hyperparameters. While current methods offer efficiencies by adaptively choosing new configurations to train, an alternative strategy is to adaptively allocate resources across the selected configurations. We formulate hyperparameter optimization as a pure-exploration non-stochastic infinitely many armed bandit problem where allocation of additional resources to an arm corresponds to training a configuration on larger subsets of the data. We introduce HYPERBAND for this framework and analyze its theoretical properties, providing several desirable guarantees. We compare HYPERBAND with state-ofthe-art Bayesian optimization methods and a random search baseline on a comprehensive benchmark including 117 datasets. Our results on this benchmark demonstrate that while Bayesian optimization methods do not outperform random search trained for twice as long, HYPERBAND in favorable settings offers valuable speedups.",
"title": ""
}
] |
[
{
"docid": "d72f7b99293770eed2764a76c5ee6651",
"text": "The successful motor rehabilitation of stroke, traumatic brain/spinal cord/sport injured patients requires a highly intensive and task-specific therapy based approach. Significant budget, time and logistic constraints limits a direct hand-to-hand therapy approach, so that intelligent assistive machines may offer a solution to promote motor recovery and obtain a better understanding of human motor control. This paper will address the development of a lower limb exoskeleton legs for force augmentation and active assistive walking training. The twin wearable legs are powered by pneumatic muscle actuators (pMAs), an experimental low mass high power to weight and volume actuation system. In addition, the pMA being pneumatic produces a more natural muscle like contact and as such can be considered a soft and biomimetic actuation system. This capacity to \"replicate\" the function of natural muscle and inherent safety is extremely important when working in close proximity to humans. The integration of the components sections and testing of the performance will also be considered to show how the structure and actuators can be combined to produce the various systems needed for a highly flexible/low weight clinically viable rehabilitation exoskeleton",
"title": ""
},
{
"docid": "e08e42c8f146e6a74213643e306446c6",
"text": "Disclaimer The opinions and positions expressed in this practice guide are the authors' and do not necessarily represent the opinions and positions of the Institute of Education Sciences or the U.S. Department of Education. This practice guide should be reviewed and applied according to the specific needs of the educators and education agencies using it and with full realization that it represents only one approach that might be taken, based on the research that was available at the time of publication. This practice guide should be used as a tool to assist in decision-making rather than as a \" cookbook. \" Any references within the document to specific education products are illustrative and do not imply endorsement of these products to the exclusion of other products that are not referenced. Alternative Formats On request, this publication can be made available in alternative formats, such as Braille, large print, audiotape, or computer diskette. For more information, call the Alternative Format Center at (202) 205-8113.",
"title": ""
},
{
"docid": "1eb6514f825be9d6a4af9646b6a7a9e2",
"text": "Maritime tasks, such as surveillance and patrolling, aquaculture inspection, and wildlife monitoring, typically require large operational crews and expensive equipment. Only recently have unmanned vehicles started to be used for such missions. These vehicles, however, tend to be expensive and have limited coverage, which prevents large-scale deployment. In this paper, we propose a scalable robotics system based on swarms of small and inexpensive aquatic drones. We take advantage of bio-inspired artificial evolution techniques in order to synthesize scalable and robust collective behaviors for the drones. The behaviors are then combined hierarchically with preprogrammed control in an engineeredcentric approach, allowing the overall behavior for a particular mission to be quickly configured and tested in simulation before the aquatic drones are deployed. We demonstrate the scalability of our hybrid approach by successfully deploying up to 1,000 simulated drones to patrol a 20 km long strip for 24 hours.",
"title": ""
},
{
"docid": "612e460c0f6e328d7516bfba7b674517",
"text": "There is universality in the transactional-transformational leadership paradigm. That is, the same conception of phenomena and relationships can be observed in a wide range of organizations and cultures. Exceptions can be understood as a consequence of unusual attributes of the organizations or cultures. Three corollaries are discussed. Supportive evidence has been gathered in studies conducted in organizations in business, education, the military, the government, and the independent sector. Likewise, supportive evidence has been accumulated from all but 1 continent to document the applicability of the paradigm.",
"title": ""
},
{
"docid": "bb9829b182241f70dbc1addd1452c09d",
"text": "This paper presents the first complete 2.5 V, 77 GHz chipset for Doppler radar and imaging applications fabricated in 0.13 mum SiGe HBT technology. The chipset includes a voltage-controlled oscillator with -101.6 dBc/Hz phase noise at 1 MHz offset, an 25 dB gain low-noise amplifier, a novel low-voltage double-balanced Gilbert-cell mixer with two mm-wave baluns and IF amplifier achieving 12.8 dB noise figure and an OP1dB of +5 dBm, a 99 GHz static frequency divider consuming a record low 75 mW, and a power amplifier with 19 dB gain, +14.4 dBm saturated power, and 15.7% PAE. Monolithic spiral inductors and transformers result in the lowest reported 77 GHz receiver core area of only 0.45 mm times 0.30 mm. Simplified circuit topologies allow 77 GHz operation up to 125degC from 2.5 V/1.8 V supplies. Technology splits of the SiGe HBTs are employed to determine the optimum HBT profile for mm-wave performance.",
"title": ""
},
{
"docid": "1e868977ef9377d0dca9ba39b6ba5898",
"text": "During last decade, tremendous efforts have been devoted to the research of time series classification. Indeed, many previous works suggested that the simple nearest-neighbor classification is effective and difficult to beat. However, we usually need to determine the distance metric (e.g., Euclidean distance and Dynamic Time Warping) for different domains, and current evidence shows that there is no distance metric that is best for all time series data. Thus, the choice of distance metric has to be done empirically, which is time expensive and not always effective. To automatically determine the distance metric, in this paper, we investigate the distance metric learning and propose a novel Convolutional Nonlinear Neighbourhood Components Analysis model for time series classification. Specifically, our model performs supervised learning to project original time series into a transformed space. When classifying, nearest neighbor classifier is then performed in this transformed space. Finally, comprehensive experimental results demonstrate that our model can improve the classification accuracy to some extent, which indicates that it can learn a good distance metric.",
"title": ""
},
{
"docid": "9a6249777e0137121df0c02cffe63b73",
"text": "With the goal of supporting close-range observation tasks of a spherical amphibious robot, such as ecological observations and intelligent surveillance, a moving target detection and tracking system was designed and implemented in this study. Given the restrictions presented by the amphibious environment and the small-sized spherical amphibious robot, an industrial camera and vision algorithms using adaptive appearance models were adopted to construct the proposed system. To handle the problem of light scattering and absorption in the underwater environment, the multi-scale retinex with color restoration algorithm was used for image enhancement. Given the environmental disturbances in practical amphibious scenarios, the Gaussian mixture model was used to detect moving targets entering the field of view of the robot. A fast compressive tracker with a Kalman prediction mechanism was used to track the specified target. Considering the limited load space and the unique mechanical structure of the robot, the proposed vision system was fabricated with a low power system-on-chip using an asymmetric and heterogeneous computing architecture. Experimental results confirmed the validity and high efficiency of the proposed system. The design presented in this paper is able to meet future demands of spherical amphibious robots in biological monitoring and multi-robot cooperation.",
"title": ""
},
{
"docid": "df677d32bdbba01d27c8eb424b9893e9",
"text": "Active learning is an area of machine learning examining strategies for allocation of finite resources, particularly human labeling efforts and to an extent feature extraction, in situations where available data exceeds available resources. In this open problem paper, we motivate the necessity of active learning in the security domain, identify problems caused by the application of present active learning techniques in adversarial settings, and propose a framework for experimentation and implementation of active learning systems in adversarial contexts. More than other contexts, adversarial contexts particularly need active learning as ongoing attempts to evade and confuse classifiers necessitate constant generation of labels for new content to keep pace with adversarial activity. Just as traditional machine learning algorithms are vulnerable to adversarial manipulation, we discuss assumptions specific to active learning that introduce additional vulnerabilities, as well as present vulnerabilities that are amplified in the active learning setting. Lastly, we present a software architecture, Security-oriented Active Learning Testbed (SALT), for the research and implementation of active learning applications in adversarial contexts.",
"title": ""
},
{
"docid": "0eff7d128badc29997bfae8834271703",
"text": "In this paper, we propose a people counting algorithm using an impulse radio ultra-wideband radar sensor. The proposed algorithm is based on a strategy of understanding the pattern of the received signal according to the number of people, not detecting each of a large number of people in the radar’s received signal. To understand the pattern of the signal, we detect the major clusters from the signal and find the amplitudes of main pulses having the maximum amplitude among the pulses constituting each cluster. We generate a probability density function of the amplitudes of the main pulses from the major clusters according to the number of people and distances. Then, we derive maximum likelihood (ML) equation for people counting. Using the derived ML equation, real-time people counting is possible with a small amount of computation. In addition, since the proposed algorithm does not detect individual clusters for each person but based on the overall cluster behavior of the signals according to the number of people, it enables people counting even in a dense multipath environment, such as a metal-rich environment. In order to prove that the proposed algorithm can be operated in real time in various environments, we performed experiments in an indoor environment and an elevator with a metal structure. Experimental results show that people counting is performed with an mean absolute error of less than one person on average.",
"title": ""
},
{
"docid": "971227f276624394bf87678186d99e2d",
"text": "Some of the most challenging issues in data outsourcing scenario are the enforcement of authorization policies and the support of policy updates. Ciphertext-policy attribute-based encryption is a promising cryptographic solution to these issues for enforcing access control policies defined by a data owner on outsourced data. However, the problem of applying the attribute-based encryption in an outsourced architecture introduces several challenges with regard to the attribute and user revocation. In this paper, we propose an access control mechanism using ciphertext-policy attribute-based encryption to enforce access control policies with efficient attribute and user revocation capability. The fine-grained access control can be achieved by dual encryption mechanism which takes advantage of the attribute-based encryption and selective group key distribution in each attribute group. We demonstrate how to apply the proposed mechanism to securely manage the outsourced data. The analysis results indicate that the proposed scheme is efficient and secure in the data outsourcing systems.",
"title": ""
},
{
"docid": "90ba548ae91dbd94ea547a372422181f",
"text": "The hypothesis that Attention-Deficit/Hyperactivity Disorder (ADHD) reflects a primary inhibitory executive function deficit has spurred a substantial literature. However, empirical findings and methodological issues challenge the etiologic primacy of inhibitory and executive deficits in ADHD. Based on accumulating evidence of increased intra-individual variability in ADHD, we reconsider executive dysfunction in light of distinctions between 'hot' and 'cool' executive function measures. We propose an integrative model that incorporates new neuroanatomical findings and emphasizes the interactions between parallel processing pathways as potential loci for dysfunction. Such a reconceptualization provides a means to transcend the limits of current models of executive dysfunction in ADHD and suggests a plan for future research on cognition grounded in neurophysiological and developmental considerations.",
"title": ""
},
{
"docid": "2c87f9ef35795c89de6b60e1ceff18c8",
"text": "The paper presents a fusion-tracker and pedestrian classifier for color and thermal cameras. The tracker builds a background model as a multi-modal distribution of colors and temperatures. It is constructed as a particle filter that makes a number of informed reversible transformations to sample the model probability space in order to maximize posterior probability of the scene model. Observation likelihoods of moving objects account their 3D locations with respect to the camera and occlusions by other tracked objects as well as static obstacles. After capturing the coordinates and dimensions of moving objects we apply a pedestrian classifier based on periodic gait analysis. To separate humans from other moving objects, such as cars, we detect, in human gait, a symmetrical double helical pattern, that can then be analyzed using the Frieze Group theory. The results of tracking on color and thermal sequences demonstrate that our algorithm is robust to illumination noise and performs well in the outdoor environments.",
"title": ""
},
{
"docid": "7ea89697894cb9e0da5bfcebf63be678",
"text": "This paper develops a frequency-domain iterative machine learning (IML) approach for output tracking. Frequency-domain iterative learning control allows bounded noncausal inversion of system dynamics and is, therefore, applicable to nonminimum phase systems. The model used in the frequency-domain control update can be obtained from the input–output data acquired during the iteration process. However, such data-based approaches can have challenges if the noise-to-output-signal ratio is large. The main contribution of this paper is the use of kernel-based machine learning during the iterations to estimate both the model (and its inverse) for the control update, as well as the model uncertainty needed to establish bounds on the iteration gain for ensuring convergence. Another contribution is the proposed use of augmented inputs with persistency of excitation to promote learning of the model during iterations. The improved model can be used to better infer the inverse input resulting in lower initial error for new output trajectories. The proposed IML approach with the augmented input is illustrated with simulations for a benchmark nonminimum phase example.",
"title": ""
},
{
"docid": "0b2ae99927b9006fd41b07e4d58a2e82",
"text": "Our increasingly digital life provides a wealth of data about our behavior, beliefs, mood, and well-being. This data provides some insight into the lives of patients outside the healthcare setting, and in aggregate can be insightful for the person's mental health and emotional crisis. Here, we introduce this community to some of the recent advancement in using natural language processing and machine learning to provide insight into mental health of both individuals and populations. We advocate using these linguistic signals as a supplement to those that are collected in the health care system, filling in some of the so-called “whitespace” between visits.",
"title": ""
},
{
"docid": "3ae5e7ac5433f2449cd893e49f1b2553",
"text": "We propose a category-independent method to produce a bag of regions and rank them, such that top-ranked regions are likely to be good segmentations of different objects. Our key objectives are completeness and diversity: Every object should have at least one good proposed region, and a diverse set should be top-ranked. Our approach is to generate a set of segmentations by performing graph cuts based on a seed region and a learned affinity function. Then, the regions are ranked using structured learning based on various cues. Our experiments on the Berkeley Segmentation Data Set and Pascal VOC 2011 demonstrate our ability to find most objects within a small bag of proposed regions.",
"title": ""
},
{
"docid": "10117f9d3b8b4720ea37cbf36073c130",
"text": "This biomechanical study was performed to measure tissue pressure in the infrapatellar fat pad and the volume changes of the anterior knee compartment during knee flexion–extension motion. Knee motion from 120° of flexion to full extension was simulated on ten fresh frozen human knee specimens (six from males, four from females, average age 44 years) using a hydraulic kinematic simulator (30, 40, and 50 Nm extension moment). Infrapatellar tissue pressure was measured using a closed cell sensor. Infrapatellar volume change in the anterior knee compartment was evaluated subsequent to removal of the fat pad using a water-filled bladder. We found a significant increase of the infrapatellar tissue pressure during knee flexion, at flexion angles of <20° and >100°. The average tissue pressure ranged from 343 (±223) mbar at 0° to 60 (±64) mbar at 60° of flexion. The smallest volume in the anterior knee compartment was measured at full extension and 120° of flexion, whereas the maximum volume was observed at 50° of flexion. In conclusion, the data suggest a biomechanical function of the infrapatellar fat pad at flexion angles of <20° and >100°, which suggests a role of the infrapatellar fat pad in stabilizing the patella in the extremes of knee motion.",
"title": ""
},
{
"docid": "bb6857df2dbcb19228e80a410a1fc6d6",
"text": "We introduce a new large-scale data set of video URLs with densely-sampled object bounding box annotations called YouTube-BoundingBoxes (YT-BB). The data set consists of approximately 380,000 video segments about 19s long, automatically selected to feature objects in natural settings without editing or post-processing, with a recording quality often akin to that of a hand-held cell phone camera. The objects represent a subset of the COCO [32] label set. All video segments were human-annotated with high-precision classification labels and bounding boxes at 1 frame per second. The use of a cascade of increasingly precise human annotations ensures a label accuracy above 95% for every class and tight bounding boxes. Finally, we train and evaluate well-known deep network architectures and report baseline figures for per-frame classification and localization. We also demonstrate how the temporal contiguity of video can potentially be used to improve such inferences. The data set can be found at https://research.google.com/youtube-bb. We hope the availability of such large curated corpus will spur new advances in video object detection and tracking.",
"title": ""
},
{
"docid": "b6b58b7a1c5d9112ea24c74539c95950",
"text": "We describe a view-management component for interactive 3D user interfaces. By view management, we mean maintaining visual constraints on the projections of objects on the view plane, such as locating related objects near each other, or preventing objects from occluding each other. Our view-management component accomplishes this by modifying selected object properties, including position, size, and transparency, which are tagged to indicate their constraints. For example, some objects may have geometric properties that are determined entirely by a physical simulation and which cannot be modified, while other objects may be annotations whose position and size are flexible.We introduce algorithms that use upright rectangular extents to represent on the view plane a dynamic and efficient approximation of the occupied space containing the projections of visible portions of 3D objects, as well as the unoccupied space in which objects can be placed to avoid occlusion. Layout decisions from previous frames are taken into account to reduce visual discontinuities. We present augmented reality and virtual reality examples to which we have applied our approach, including a dynamically labeled and annotated environment.",
"title": ""
},
{
"docid": "ceda1c07db49bc0d3e78f526ed13f178",
"text": "The paper presents a descriptive model for measuring the salient traits and tendencies of a translation as compared with the source text. We present some results from applying the model to the texts of the Linköping Translation Corpus (LTC) that have been produced by different kinds of translation aids, and discuss its application to MT evaluation.",
"title": ""
},
{
"docid": "88fa70ef8c6dfdef7d1c154438ff53c2",
"text": "There has been substantial progress in the field of text based sentiment analysis but little effort has been made to incorporate other modalities. Previous work in sentiment analysis has shown that using multimodal data yields to more accurate models of sentiment. Efforts have been made towards expressing sentiment as a spectrum of intensity rather than just positive or negative. Such models are useful not only for detection of positivity or negativity, but also giving out a score of how positive or negative a statement is. Based on the state of the art studies in sentiment analysis, prediction in terms of sentiment score is still far from accurate, even in large datasets [27]. Another challenge in sentiment analysis is dealing with small segments or micro opinions as they carry less context than large segments thus making analysis of the sentiment harder. This paper presents a Ph.D. thesis shaped towards comprehensive studies in multimodal micro-opinion sentiment intensity analysis.",
"title": ""
}
] |
scidocsrr
|
6ebfa259ce68060dd4a8057689f40df1
|
Linear Algebraic Structure of Word Senses, with Applications to Polysemy
|
[
{
"docid": "fe99cf42e35cc0b7523247e126f3d8a3",
"text": "Current distributed representations of words show little resemblance to theories of lexical semantics. The former are dense and uninterpretable, the latter largely based on familiar, discrete classes (e.g., supersenses) and relations (e.g., synonymy and hypernymy). We propose methods that transform word vectors into sparse (and optionally binary) vectors. The resulting representations are more similar to the interpretable features typically used in NLP, though they are discovered automatically from raw corpora. Because the vectors are highly sparse, they are computationally easy to work with. Most importantly, we find that they outperform the original vectors on benchmark tasks.",
"title": ""
}
] |
[
{
"docid": "b87cf41b31b8d163d6e44c9b1fa68cae",
"text": "This paper gives a security analysis of Microsoft's ASP.NET technology. The main part of the paper is a list of threats which is structured according to an architecture of Web services and attack points. We also give a reverse table of threats against security requirements as well as a summary of security guidelines for IT developers. This paper has been worked out in collaboration with five University teams each of which is focussing on a different security problem area. We use the same architecture for Web services and attack points.",
"title": ""
},
{
"docid": "49fed572de904ac3bb9aab9cdc874cc6",
"text": "Factorized Hidden Layer (FHL) adaptation has been proposed for speaker adaptation of deep neural network (DNN) based acoustic models. In FHL adaptation, a speaker-dependent (SD) transformation matrix and an SD bias are included in addition to the standard affine transformation. The SD transformation is a linear combination of rank-1 matrices whereas the SD bias is a linear combination of vectors. Recently, the Long ShortTerm Memory (LSTM) Recurrent Neural Networks (RNNs) have shown to outperform DNN acoustic models in many Automatic Speech Recognition (ASR) tasks. In this work, we investigate the effectiveness of SD transformations for LSTM-RNN acoustic models. Experimental results show that when combined with scaling of LSTM cell states’ outputs, SD transformations achieve 2.3% and 2.1% absolute improvements over the baseline LSTM systems for the AMI IHM and AMI SDM tasks respectively.",
"title": ""
},
{
"docid": "aeda16415cb3414745493f1c356ffd99",
"text": "Recent estimates based on the 1991 census (Schuring 1993) indicate that approximately 45 per cent of the South African population have a speaking knowledge of English (the majority of the population speaking an African language, such as Zulu, Xhosa, Tswana, or Venda, as home language). The number of individuals who cite English as a home language appears to be, however, only about 10 per cent of the population. Of this figure it would seem that at least one in three English-speakers come from ethnic groups other than the white one (in proportionally descending order, from the South African Indian, Coloured, and Black ethnic groups). This figure has shown some increase in recent years.",
"title": ""
},
{
"docid": "6a9e30fd08b568ef6607158cab4f82b2",
"text": "Expertise with unfamiliar objects (‘greebles’) recruits face-selective areas in the fusiform gyrus (FFA) and occipital lobe (OFA). Here we extend this finding to other homogeneous categories. Bird and car experts were tested with functional magnetic resonance imaging during tasks with faces, familiar objects, cars and birds. Homogeneous categories activated the FFA more than familiar objects. Moreover, the right FFA and OFA showed significant expertise effects. An independent behavioral test of expertise predicted relative activation in the right FFA for birds versus cars within each group. The results suggest that level of categorization and expertise, rather than superficial properties of objects, determine the specialization of the FFA.",
"title": ""
},
{
"docid": "a9ac1250c9be5c7f95086f82251d5157",
"text": "In 3D reconstruction, the recovery of the calibration parameters of the cameras is paramount since it provides metric information about the observed scene, e.g., measures of angles and ratios of distances. Autocalibration enables the estimation of the camera parameters without using a calibration device, but by enforcing simple constraints on the camera parameters. In the absence of information about the internal camera parameters such as the focal length and the principal point, the knowledge of the camera pixel shape is usually the only available constraint. Given a projective reconstruction of a rigid scene, we address the problem of the autocalibration of a minimal set of cameras with known pixel shape and otherwise arbitrarily varying intrinsic and extrinsic parameters. We propose an algorithm that only requires 5 cameras (the theoretical minimum), thus halving the number of cameras required by previous algorithms based on the same constraint. To this purpose, we introduce as our basic geometric tool the six-line conic variety (SLCV), consisting in the set of planes intersecting six given lines of 3D space in points of a conic. We show that the set of solutions of the Euclidean upgrading problem for three cameras with known pixel shape can be parameterized in a computationally efficient way. This parameterization is then used to solve autocalibration from five or more cameras, reducing the three-dimensional search space to a two-dimensional one. We provide experiments with real images showing the good performance of the technique.",
"title": ""
},
{
"docid": "bd960da75daf8c268d4def33ada5964d",
"text": "(SCADA), have lately gained the attention of IT security researchers as critical components of modern industrial infrastructure. One main reason for this attention is that ICS have not been built with security in mind and are thus particularly vulnerable when they are connected to computer networks and the Internet. ICS consists of SCADA, Programmable Logic Controller (PLC), Human-Machine Interfaces (HMI), sensors, and actuators such as motors. These components are connected to each other over fieldbus or IP-based protocols. In this thesis, we have developed methods and tools for assessing the security of ICSs. By applying the STRIDE threat modeling methodology, we have conducted a high level threat analysis of ICSs. Based on the threat analysis, we created security analysis guidelines for Industrial Control System devices. These guidelines can be applied to many ICS devices and are mostly vendor independent. Moreover, we have integrated support for Modbus/TCP in the Scapy packet manipulation library, which can be used for robustness testing of ICS software. In a case study, we applied our security-assessment methodology to a detailed security analysis of a demonstration ICS, consisting of current products. As a result of the analysis, we discovered several security weaknesses. Most of the discovered vulnerabilities were common IT security problems, such as web-application and software-update issues, but some are specific to ICS. For example, we show how the data visualized by the Human-Machine Interface can be altered and modified without limit. Furthermore, sensor data, such as temperature values, can be spoofed within the PLC. Moreover, we show that input validation is critical for security also in the ICS world. Thus, we disclose several security vulnerabilities in production devices. However, in the interest of responsible disclosure of security flaws, the most severe security flaws found are not detailed in the thesis. Our analysis guidelines and the case study provide a basis for conducting vulnerability assessment on further ICS devices and entire systems. In addition, we briefly describe existing solutions for securing ICSs. Acknowledgements I would like to thank Nixu Oy and the colleagues (especially Lauri Vuornos, Juhani Mäkelä and Michael Przybilski) for making it possible to conduct my thesis on Industrial Control Systems. The industrial environment enabled us to take advantage of the research and to apply it to practical projects. Moreover, without the help and involvement of Schneider Electric such an applied analysis would not have been possible. Furthermore, I would like to thank Tuomas …",
"title": ""
},
{
"docid": "554fc3e28147738a9faa80f593ffe9df",
"text": "The issue of cyberbullying is a social concern that has arisen due to the prevalent use of computer technology today. In this paper, we present a multi-faceted solution to mitigate the effects of cyberbullying, one that uses computer technology in order to combat the problem. We propose to provide assistance for various groups affected by cyberbullying (the bullied and the bully, both). Our solution was developed through a series of group projects and includes i) technology to detect the occurrence of cyberbullying ii) technology to enable reporting of cyberbullying iii) proposals to integrate third-party assistance when cyberbullying is detected iv) facilities for those with authority to manage online social networks or to take actions against detected bullies. In all, we demonstrate how this important social problem which arises due to computer technology can also leverage computer technology in order to take steps to better cope with the undesirable effects that have arisen.",
"title": ""
},
{
"docid": "6ddf62a60b0d56c76b54ca6cd0b28ab9",
"text": "Improvement of vehicle safety performance is one of the targets of ITS development. A pre-crash safety system has been developed that utilizes ITS technologies. The Pre-crash Safety system reduces collision injury by estimating TTC(time-tocollision) to preemptively activate safety devices, which consist of “Pre-crash Seatbelt” system and “Pre-crash Brake Assist” system. The key technology of these systems is a “Pre-crash Sensor” to detect obstacles and estimate TTC. In this paper, the Pre-crash Sensor is presented. The Pre-crash Sensor uses millimeter-wave radar to detect preceding vehicles, oncoming vehicles, roadside objects, etc. on the road ahead. Furthermore, by using a phased array system as a vehicle radar for the first time, a compact electronically scanned millimeter-wave radar with high recognition performance has been achieved. With respect to the obstacle determination algorithm, a crash determination algorithm has been newly developed, taking into account estimation of the direction of advance of the vehicle, in addition to the distance, relative speed and direction of the object.",
"title": ""
},
{
"docid": "13ee1c00203fd12486ee84aa4681dc60",
"text": "Mobile crowdsensing has emerged as an efficient sensing paradigm which combines the crowd intelligence and the sensing power of mobile devices, e.g., mobile phones and Internet of Things (IoT) gadgets. This article addresses the contradicting incentives of privacy preservation by crowdsensing users and accuracy maximization and collection of true data by service providers. We firstly define the individual contributions of crowdsensing users based on the accuracy in data analytics achieved by the service provider from buying their data. We then propose a truthful mechanism for achieving high service accuracy while protecting the privacy based on the user preferences. The users are incentivized to provide true data by being paid based on their individual contribution to the overall service accuracy. Moreover, we propose a coalition strategy which allows users to cooperate in providing their data under one identity, increasing their anonymity privacy protection, and sharing the resulting payoff. Finally, we outline important open research directions in mobile and people-centric crowdsensing.",
"title": ""
},
{
"docid": "bd7a011f47fd48e19e2bbdb2f426ae1d",
"text": "In social networks, link prediction predicts missing links in current networks and new or dissolution links in future networks, is important for mining and analyzing the evolution of social networks. In the past decade, many works have been done about the link prediction in social networks. The goal of this paper is to comprehensively review, analyze and discuss the state-of-the-art of the link prediction in social networks. A systematical category for link prediction techniques and problems is presented. Then link prediction techniques and problems are analyzed and discussed. Typical applications of link prediction are also addressed. Achievements and roadmaps of some active research groups are introduced. Finally, some future challenges of the link prediction in social networks are discussed. 对社交网络中的链接预测研究现状进行系统回顾、分析和讨论, 并指出未来研究挑战. 在动态社交网络中, 链接预测是挖掘和分析网络演化的一项重要任务, 其目的是预测当前未知的链接以及未来链接的变化. 过去十余年中, 在社交网络链接预测问题上已有大量研究工作. 本文旨在对该问题的研究现状和趋势进行全面回顾、分析和讨论. 提出一种分类法组织链接预测技术和问题. 详细分析和讨论了链接预测的技术、问题和应用. 介绍了该问题的活跃研究组. 分析和讨论了社交网络链接预测研究的未来挑战.",
"title": ""
},
{
"docid": "1efdb6ff65c1aa8f8ecb95b4d466335f",
"text": "This paper provides a linguistic and pragmatic analysis of the phenomenon of irony in order to represent how Twitter’s users exploit irony devices within their communication strategies for generating textual contents. We aim to measure the impact of a wide-range of pragmatic phenomena in the interpretation of irony, and to investigate how these phenomena interact with contexts local to the tweet. Informed by linguistic theories, we propose for the first time a multi-layered annotation schema for irony and its application to a corpus of French, English and Italian tweets.We detail each layer, explore their interactions, and discuss our results according to a qualitative and quantitative perspective.",
"title": ""
},
{
"docid": "b495407cb455186ecad9a45aa88ec509",
"text": "This article provides a comprehensive introduction into the field of robotic mapping, with a focus on indoor mapping. It describes and compares various probabilistic techniques, as they are presently being applied to a vast array of mobile robot mapping problems. The history of robotic mapping is also described, along with an extensive list of open research problems. This research is sponsored by by DARPA’s MARS Program (Contract number N66001-01-C-6018) and the National Science Foundation (CAREER grant number IIS-9876136 and regular grant number IIS-9877033), all of which is gratefully acknowledged. The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing official policies or endorsements, either expressed or implied, of the United States Government or any of the sponsoring institutions.",
"title": ""
},
{
"docid": "194db5da505acab27bbe14232b255d09",
"text": "Latent Dirichlet allocation defines hidden topics to capture latent semantics in text documents. However, it assumes that all the documents are represented by the same topics, resulting in the “forced topic” problem. To solve this problem, we developed a group latent Dirichlet allocation (GLDA). GLDA uses two kinds of topics: local topics and global topics. The highly related local topics are organized into groups to describe the local semantics, whereas the global topics are shared by all the documents to describe the background semantics. GLDA uses variational inference algorithms for both offline and online data. We evaluated the proposed model for topic modeling and document clustering. Our experimental results indicated that GLDA can achieve a competitive performance when compared with state-of-the-art approaches.",
"title": ""
},
{
"docid": "09b273c9e77f6fc1b2de20f50227c44d",
"text": "Age and gender are complementary soft biometric traits for face recognition. Successful estimation of age and gender from facial images taken under real-world conditions can contribute improving the identification results in the wild. In this study, in order to achieve robust age and gender classification in the wild, we have benefited from Deep Convolutional Neural Networks based representation. We have explored transferability of existing deep convolutional neural network (CNN) models for age and gender classification. The generic AlexNet-like architecture and domain specific VGG-Face CNN model are employed and fine-tuned with the Adience dataset prepared for age and gender classification in uncontrolled environments. In addition, task specific GilNet CNN model has also been utilized and used as a baseline method in order to compare with transferred models. Experimental results show that both transferred deep CNN models outperform the GilNet CNN model, which is the state-of-the-art age and gender classification approach on the Adience dataset, by an absolute increase of 7% and 4.5% in accuracy, respectively. This outcome indicates that transferring a deep CNN model can provide better classification performance than a task specific CNN model, which has a limited number of layers and trained from scratch using a limited amount of data as in the case of GilNet. Domain specific VGG-Face CNN model has been found to be more useful and provided better performance for both age and gender classification tasks, when compared with generic AlexNet-like model, which shows that transfering from a closer domain is more useful.",
"title": ""
},
{
"docid": "7a9572c3c74f9305ac0d817b04e4399a",
"text": "Due to the limited length and freely constructed sentence structures, it is a difficult classification task for short text classification. In this paper, a short text classification framework based on Siamese CNNs and few-shot learning is proposed. The Siamese CNNs will learn the discriminative text encoding so as to help classifiers distinguish those obscure or informal sentence. The different sentence structures and different descriptions of a topic are viewed as ‘prototypes’, which will be learned by few-shot learning strategy to improve the classifier’s generalization. Our experimental results show that the proposed framework leads to better results in accuracies on twitter classifications and outperforms some popular traditional text classification methods and a few deep network approaches.",
"title": ""
},
{
"docid": "9721f7f54bfcfcf8c3efb10257002ad9",
"text": "Audio description (AD) provides linguistic descriptions of movies and allows visually impaired people to follow a movie along with their peers. Such descriptions are by design mainly visual and thus naturally form an interesting data source for computer vision and computational linguistics. In this work we propose a novel dataset which contains transcribed ADs, which are temporally aligned to full length movies. In addition we also collected and aligned movie scripts used in prior work and compare the two sources of descriptions. We introduce the Large Scale Movie Description Challenge (LSMDC) which contains a parallel corpus of 128,118 sentences aligned to video clips from 200 movies (around 150 h of video in total). The goal of the challenge is to automatically generate descriptions for the movie clips. First we characterize the dataset by benchmarking different approaches for generating video descriptions. Comparing ADs to scripts, we find that ADs are more visual and describe precisely what is shown rather than what should happen according to the scripts created prior to movie production. Furthermore, we present and compare the results of several teams who participated in the challenges organized in the context of two workshops at ICCV 2015 and ECCV 2016.",
"title": ""
},
{
"docid": "00b2d45d6810b727ab531f215d2fa73e",
"text": "Parental preparation for a child's discharge from the hospital sets the stage for successful transitioning to care and recovery at home. In this study of 135 parents of hospitalized children, the quality of discharge teaching, particularly the nurses' skills in \"delivery\" of parent teaching, was associated with increased parental readiness for discharge, which was associated with less coping difficulty during the first 3 weeks postdischarge. Parental coping difficulty was predictive of greater utilization of posthospitalization health services. These results validate the role of the skilled nurse as a teacher in promoting positive outcomes at discharge and beyond the hospitalization.",
"title": ""
},
{
"docid": "70f35b19ba583de3b9942d88c94b9148",
"text": "ARCHEOGUIDE (Augmented Reality-based Cultural Heritage On-site GUIDE) is an IST project, funded by the EU, aiming at providing a personalized Virtual Reality guide and tour assistant to archaeological site visitors and a multimedia repository and information system for archaeologists and site curators. The system provides monument reconstructions, ancient life simulation, and database tools for creating and archiving archaeological multimedia material.",
"title": ""
},
{
"docid": "b27038accdabab12d8e0869aba20a083",
"text": "Two architectures that generalize convolutional neural networks (CNNs) for the processing of signals supported on graphs are introduced. We start with the selection graph neural network (GNN), which replaces linear time invariant filters with linear shift invariant graph filters to generate convolutional features and reinterprets pooling as a possibly nonlinear subsampling stage where nearby nodes pool their information in a set of preselected sample nodes. A key component of the architecture is to remember the position of sampled nodes to permit computation of convolutional features at deeper layers. The second architecture, dubbed aggregation GNN, diffuses the signal through the graph and stores the sequence of diffused components observed by a designated node. This procedure effectively aggregates all components into a stream of information having temporal structure to which the convolution and pooling stages of regular CNNs can be applied. A multinode version of aggregation GNNs is further introduced for operation in large-scale graphs. An important property of selection and aggregation GNNs is that they reduce to conventional CNNs when particularized to time signals reinterpreted as graph signals in a circulant graph. Comparative numerical analyses are performed in a source localization application over synthetic and real-world networks. Performance is also evaluated for an authorship attribution problem and text category classification. Multinode aggregation GNNs are consistently the best-performing GNN architecture.",
"title": ""
},
{
"docid": "7bac448a5754c168c897125a4f080548",
"text": "BACKGROUND\nOne of the main methods for evaluation of fetal well-being is analysis of Doppler flow velocity waveform of fetal vessels. Evaluation of Doppler wave of the middle cerebral artery can predict most of the at-risk fetuses in high-risk pregnancies. In this study, we tried to determine the normal ranges and their trends during pregnancy of Doppler flow velocity indices (resistive index, pulsatility index, systolic-to-diastolic ratio, and peak systolic velocity) of middle cerebral artery in 20 - 40 weeks normal pregnancies in Iranians.\n\n\nMETHODS\nIn this cross-sectional study, 1037 women with normal pregnancy and gestational age of 20 to 40 weeks were investigated for fetal middle cerebral artery Doppler examination.\n\n\nRESULTS\nResistive index, pulsatility index, and systolic-to-diastolic ratio values of middle cerebral artery decreased in a parabolic pattern while the peak systolic velocity value increased linearly with progression of the gestational age. These changes were statistically significant (P<0.001 for all four variables) and were more characteristic during late weeks of pregnancy. The mean fetal heart rate was also significantly (P<0.001) reduced in correlation with the gestational age.\n\n\nCONCLUSION\nDoppler waveform indices of fetal middle cerebral artery are useful means for determining fetal well-being. Herewith, the normal ranges of Doppler waveform indices for an Iranian population are presented.",
"title": ""
}
] |
scidocsrr
|
a7fab56e5dbc06d39ff0ec4046a3cb94
|
Benchmark Machine Learning Approaches with Classical Time Series Approaches on the Blood Glucose Level Prediction Challenge
|
[
{
"docid": "83f970bc22a2ada558aaf8f6a7b5a387",
"text": "The imputeTS package specializes on univariate time series imputation. It offers multiple state-of-the-art imputation algorithm implementations along with plotting functions for time series missing data statistics. While imputation in general is a well-known problem and widely covered by R packages, finding packages able to fill missing values in univariate time series is more complicated. The reason for this lies in the fact, that most imputation algorithms rely on inter-attribute correlations, while univariate time series imputation instead needs to employ time dependencies. This paper provides an introduction to the imputeTS package and its provided algorithms and tools. Furthermore, it gives a short overview about univariate time series imputation in R. Introduction In almost every domain from industry (Billinton et al., 1996) to biology (Bar-Joseph et al., 2003), finance (Taylor, 2007) up to social science (Gottman, 1981) different time series data are measured. While the recorded datasets itself may be different, one common problem are missing values. Many analysis methods require missing values to be replaced with reasonable values up-front. In statistics this process of replacing missing values is called imputation. Time series imputation thereby is a special sub-field in the imputation research area. Most popular techniques like Multiple Imputation (Rubin, 1987), Expectation-Maximization (Dempster et al., 1977), Nearest Neighbor (Vacek and Ashikaga, 1980) and Hot Deck (Ford, 1983) rely on interattribute correlations to estimate values for the missing data. Since univariate time series do not possess more than one attribute, these algorithms cannot be applied directly. Effective univariate time series imputation algorithms instead need to employ the inter-time correlations. On CRAN there are several packages solving the problem of imputation of multivariate data. Most popular and mature (among others) are AMELIA (Honaker et al., 2011), mice (van Buuren and Groothuis-Oudshoorn, 2011), VIM (Kowarik and Templ, 2016) and missMDA (Josse and Husson, 2016). However, since these packages are designed for multivariate data imputation only they do not work for univariate time series. At the moment imputeTS (Moritz, 2016a) is the only package on CRAN that is solely dedicated to univariate time series imputation and includes multiple algorithms. Nevertheless, there are some other packages that include imputation functions as addition to their core package functionality. Most noteworthy being zoo (Zeileis and Grothendieck, 2005) and forecast (Hyndman, 2016). Both packages offer also some advanced time series imputation functions. The packages spacetime (Pebesma, 2012), timeSeries (Rmetrics Core Team et al., 2015) and xts (Ryan and Ulrich, 2014) should also be mentioned, since they contain some very simple but quick time series imputation methods. For a broader overview about available time series imputation packages in R see also (Moritz et al., 2015). In this technical report we evaluate the performance of several univariate imputation functions in R on different time series. This paper is structured as follows: Section Overview imputeTS package gives an overview, about all features and functions included in the imputeTS package. This is followed by Usage examples of the different provided functions. The paper ends with a Conclusions section. 
Overview imputeTS package
The imputeTS package can be found on CRAN and is an easy to use package that offers several utilities for ’univariate, equi-spaced, numeric time series’. Univariate means there is just one attribute that is observed over time. Which leads to a sequence of single observations o_1, o_2, o_3, ..., o_n at successive points t_1, t_2, t_3, ..., t_n in time. Equi-spaced means, that time increments between successive data points are equal, |t_1 − t_2| = |t_2 − t_3| = ... = |t_{n−1} − t_n|. Numeric means that the observations are measurable quantities that can be described as a number. In the first part of this section, a general overview about all available functions and datasets is given. This is followed by more detailed overviews about the three areas covered by the package: ’Plots & Statistics’, ’Imputation’ and ’Datasets’. Information about how to apply these functions and tools can be found later in the Usage examples section.
General overview
As can be seen in Table 1, beyond several imputation algorithm implementations the package also includes plotting functions and datasets. The imputation algorithms can be divided into rather simple but fast approaches like mean imputation and more advanced algorithms that need more computation time like kalman smoothing on a structural model.
Table 1: General Overview imputeTS package
  Simple Imputation:  na.locf, na.mean, na.random, na.replace, na.remove
  Imputation:         na.interpolation, na.kalman, na.ma, na.seadec, na.seasplit
  Plots & Statistics: plotNA.distribution, plotNA.distributionBar, plotNA.gapsize, plotNA.imputations, statsNA
  Datasets:           tsAirgap, tsAirgapComplete, tsHeating, tsHeatingComplete, tsNH4, tsNH4Complete
As a whole, the package aims to support the user in the complete process of replacing missing values in time series. This process starts with analyzing the distribution of the missing values using the statsNA function and the plots of plotNA.distribution, plotNA.distributionBar, plotNA.gapsize. In the next step the actual imputation can take place with one of the several algorithm options. Finally, the imputation results can be visualized with the plotNA.imputations function. Additionally, the package contains three datasets, each in a version with and without missing values, that can be used to test imputation algorithms.
Plots & Statistics functions
An overview about the available plots and statistics functions can be found in Table 2. To get a good impression what the plots look like section Usage examples is recommended.
Table 2: Overview Plots & Statistics
  plotNA.distribution    - Visualize Distribution of Missing Values
  plotNA.distributionBar - Visualize Distribution of Missing Values (Barplot)
  plotNA.gapsize         - Visualize Distribution of NA gap sizes
  plotNA.imputations     - Visualize Imputed Values
  statsNA                - Print Statistics about the Missing Data
The statsNA function calculates several missing data statistics of the input data. This includes overall percentage of missing values, absolute amount of missing values, amount of missing value in different sections of the data, longest series of consecutive NAs and occurrence of consecutive NAs. The plotNA.distribution function visualizes the distribution of NAs in a time series. This is done using a standard time series plot, in which areas with missing data are colored red. This enables the user to see at first sight where in the series most of the missing values are located.
The plotNA.distributionBar function provides the same insights to users, but is designed for very large time series. This is necessary for time series with 1000 and more observations, where it is not possible to plot each observation as a single point. The plotNA.gapsize function provides information about consecutive NAs by showing the most common NA gap sizes in the time series. The plotNA.imputations function is designated for visual inspection of the results after applying an imputation algorithm. Therefore, newly imputed observations are shown in a different color than the rest of the series.
Imputation functions
An overview about all available imputation algorithms can be found in Table 3. Even if these functions are really easy applicable, some examples can be found later in section Usage examples. More detailed information about the theoretical background of the algorithms can be found in the imputeTS manual (Moritz, 2016b).
Table 3: Overview Imputation Algorithms
  na.interpolation (linear)    - Imputation by Linear Interpolation
  na.interpolation (spline)    - Imputation by Spline Interpolation
  na.interpolation (stine)     - Imputation by Stineman Interpolation
  na.kalman (StructTS)         - Imputation by Structural Model & Kalman Smoothing
  na.kalman (auto.arima)       - Imputation by ARIMA State Space Representation & Kalman Smoothing
  na.locf (locf)               - Imputation by Last Observation Carried Forward
  na.locf (nocb)               - Imputation by Next Observation Carried Backward
  na.ma (simple)               - Missing Value Imputation by Simple Moving Average
  na.ma (linear)               - Missing Value Imputation by Linear Weighted Moving Average
  na.ma (exponential)          - Missing Value Imputation by Exponential Weighted Moving Average
  na.mean (mean)               - Missing Value Imputation by Mean Value
  na.mean (median)             - Missing Value Imputation by Median Value
  na.mean (mode)               - Missing Value Imputation by Mode Value
  na.random                    - Missing Value Imputation by Random Sample
  na.replace                   - Replace Missing Values by a Defined Value
  na.seadec                    - Seasonally Decomposed Missing Value Imputation
  na.seasplit                  - Seasonally Splitted Missing Value Imputation
  na.remove                    - Remove Missing Values
For convenience similar algorithms are available under one function name as parameter option. For example linear, spline and stineman interpolation are all included in the na.interpolation function. The na.mean, na.locf, na.replace, na.random functions are all simple and fast. In comparison, na.interpolation, na.kalman, na.ma, na.seasplit, na.seadec are more advanced algorithms that need more computation time. The na.remove function is a special case, since it only deletes all missing values. Thus, it is not really an imputation function. It should be handled with care since removing observations may corrupt the time information of the series. The na.seasplit and na.seadec functions are as well exceptions. These perform seasonal split / decomposition operations as a preprocessing step. For the imputation itself, one out of the other imputation algorithms can be used (which one can be set as option). Looking at all available imputation methods, no single overall best method can b",
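Although imputeTS is an R package, the simple end of its method list (last observation carried forward, linear interpolation, a moving-average fill) maps directly onto a few pandas operations; the sketch below is a cross-language illustration of those ideas only, not part of the package, and the toy series is made up.

```python
import numpy as np
import pandas as pd

s = pd.Series([4.0, 5.0, np.nan, np.nan, 7.0, 8.0, np.nan, 10.0])

locf = s.ffill()                            # last observation carried forward (na.locf)
nocb = s.bfill()                            # next observation carried backward
linear = s.interpolate(method="linear")     # linear interpolation (na.interpolation)

# Moving-average style fill (rough analogue of na.ma): replace each NA with the
# rolling mean of the neighbouring observed values.
ma = s.fillna(s.rolling(window=3, center=True, min_periods=1).mean())

print(pd.DataFrame({"raw": s, "locf": locf, "nocb": nocb, "linear": linear, "ma": ma}))
```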
"title": ""
},
{
"docid": "68295a432f68900911ba29e5a6ca5e42",
"text": "In many forecasting applications, it is valuable to predict not only the value of a signal at a certain time point in the future, but also the values leading up to that point. This is especially true in clinical applications, where the future state of the patient can be less important than the patient's overall trajectory. This requires multi-step forecasting, a forecasting variant where one aims to predict multiple values in the future simultaneously. Standard methods to accomplish this can propagate error from prediction to prediction, reducing quality over the long term. In light of these challenges, we propose multi-output deep architectures for multi-step forecasting in which we explicitly model the distribution of future values of the signal over a prediction horizon. We apply these techniques to the challenging and clinically relevant task of blood glucose forecasting. Through a series of experiments on a real-world dataset consisting of 550K blood glucose measurements, we demonstrate the effectiveness of our proposed approaches in capturing the underlying signal dynamics. Compared to existing shallow and deep methods, we find that our proposed approaches improve performance individually and capture complementary information, leading to a large improvement over the baseline when combined (4.87 vs. 5.31 absolute percentage error (APE)). Overall, the results suggest the efficacy of our proposed approach in predicting blood glucose level and multi-step forecasting more generally.",
"title": ""
}
] |
[
{
"docid": "fb904fc99acf8228ae7585e29074f96c",
"text": "One of the biggest problems in manufacturing is the failure of machine tools due to loss of surface material in cutting operations like drilling and milling. Carrying on the process with a dull tool may damage the workpiece material fabricated. On the other hand, it is unnecessary to change the cutting tool if it is still able to continue cutting operation. Therefore, an effective diagnosis mechanism is necessary for the automation of machining processes so that production loss and downtime can be avoided. This study concerns with the development of a tool wear condition-monitoring technique based on a two-stage fuzzy logic scheme. For this, signals acquired from various sensors were processed to make a decision about the status of the tool. In the first stage of the proposed scheme, statistical parameters derived from thrust force, machine sound (acquired via a very sensitive microphone) and vibration signals were used as inputs to fuzzy process; and the crisp output values of this process were then taken as the input parameters of the second stage. Conclusively, outputs of this stage were taken into a threshold function, the output of which is used to assess the condition of the tool. r 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4dcdb2520ec5f9fc9c32f2cbb343808c",
"text": "Shannon’s mathematical theory of communication defines fundamental limits on how much information can be transmitted between the different components of any man-made or biological system. This paper is an informal but rigorous introduction to the main ideas implicit in Shannon’s theory. An annotated reading list is provided for further reading.",
"title": ""
},
{
"docid": "356a72153f61311546f6ff874ee79bb4",
"text": "In this paper, an object cosegmentation method based on shape conformability is proposed. Different from the previous object cosegmentation methods which are based on the region feature similarity of the common objects in image set, our proposed SaCoseg cosegmentation algorithm focuses on the shape consistency of the foreground objects in image set. In the proposed method, given an image set where the implied foreground objects may be varied in appearance but share similar shape structures, the implied common shape pattern in the image set can be automatically mined and regarded as the shape prior of those unsatisfactorily segmented images. The SaCoseg algorithm mainly consists of four steps: 1) the initial Grabcut segmentation; 2) the shape mapping by coherent point drift registration; 3) the common shape pattern discovery by affinity propagation clustering; and 4) the refinement by Grabcut with common shape constraint. To testify our proposed algorithm and establish a benchmark for future work, we built the CoShape data set to evaluate the shape-based cosegmentation. The experiments on CoShape data set and the comparison with some related cosegmentation algorithms demonstrate the good performance of the proposed SaCoseg algorithm.",
"title": ""
},
{
"docid": "528ef696a9932f87763d66264da515af",
"text": "Ethical, philosophical and religious values are central to the continuing controversy over capital punishment. Nevertheless, factual evidence can and should inform policy making. The evidence for capital punishment as an uniquely effective deterrent to murder is especially important, since deterrence is the only major pragmatic argument on the pro-death penalty side.1 The purpose of this paper is to survey and evaluate the evidence for deterrence.",
"title": ""
},
{
"docid": "43ec6774e1352443f41faf8d3780059b",
"text": "Cloud computing is currently one of the most hyped information technology fields and it has become one of the fastest growing segments of IT. Cloud computing allows us to scale our servers in magnitude and availability in order to provide services to a greater number of end users. Moreover, adopters of the cloud service model are charged based on a pay-per-use basis of the cloud's server and network resources, aka utility computing. With this model, a conventional DDoS attack on server and network resources is transformed in a cloud environment to a new breed of attack that targets the cloud adopter's economic resource, namely Economic Denial of Sustainability attack (EDoS). In this paper, we advocate a novel solution, named EDoS-Shield, to mitigate the Economic Denial of Sustainability (EDoS) attack in the cloud computing systems. We design a discrete simulation experiment to evaluate its performance and the results show that it is a promising solution to mitigate the EDoS.",
"title": ""
},
{
"docid": "1dc4a8f02dfe105220db5daae06c2229",
"text": "Photosynthesis begins with light harvesting, where specialized pigment-protein complexes transform sunlight into electronic excitations delivered to reaction centres to initiate charge separation. There is evidence that quantum coherence between electronic excited states plays a role in energy transfer. In this review, we discuss how quantum coherence manifests in photosynthetic light harvesting and its implications. We begin by examining the concept of an exciton, an excited electronic state delocalized over several spatially separated molecules, which is the most widely available signature of quantum coherence in light harvesting. We then discuss recent results concerning the possibility that quantum coherence between electronically excited states of donors and acceptors may give rise to a quantum coherent evolution of excitations, modifying the traditional incoherent picture of energy transfer. Key to this (partially) coherent energy transfer appears to be the structure of the environment, in particular the participation of non-equilibrium vibrational modes. We discuss the open questions and controversies regarding quantum coherent energy transfer and how these can be addressed using new experimental techniques.",
"title": ""
},
{
"docid": "8dee3ada764a40fce6b5676287496ccd",
"text": "We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video. While its image counterpart, the image-to-image translation problem, is a popular topic, the video-to-video synthesis problem is less explored in the literature. Without modeling temporal dynamics, directly applying existing image synthesis approaches to an input video often results in temporally incoherent videos of low visual quality. In this paper, we propose a video-to-video synthesis approach under the generative adversarial learning framework. Through carefully-designed generators and discriminators, coupled with a spatio-temporal adversarial objective, we achieve high-resolution, photorealistic, temporally coherent video results on a diverse set of input formats including segmentation masks, sketches, and poses. Experiments on multiple benchmarks show the advantage of our method compared to strong baselines. In particular, our model is capable of synthesizing 2K resolution videos of street scenes up to 30 seconds long, which significantly advances the state-of-the-art of video synthesis. Finally, we apply our method to future video prediction, outperforming several competing systems. Code, models, and more results are available at our website.",
"title": ""
},
{
"docid": "1fdb9fdea37c042187407451aef02297",
"text": "Websites have gained vital importance for organizations along with the growing competition in the world market. It is known that usability requirements heavily depend on the type, audience and purpose of websites. For the e-commerce environment, usability assessment of a website is required to figure out the impact of website design on customer purchases. Thus, usability assessment and design of online pages have become the subject of many scientific studies. However, in any of these studies, design parameters were not identified in such a detailed way, and they were not classified in line with customer expectations to assess the overall usability of an e-commerce website. This study therefore aims to analyze and classify design parameters according to customer expectations in order to evaluate the usability of e-commerce websites in a more comprehensive manner. Four websites are assessed using the proposed novel approach with respect to the identified design parameters and the usability scores of the websites are examined. It is revealed that the websites with high usability score are more preferred by customers. Therefore, it is indicated that usability of e-commerce websites affects customer purchases.",
"title": ""
},
{
"docid": "1af028a0cf88d0ac5c52e84019554d51",
"text": "Robots exhibit life-like behavior by performing intelligent actions. To enhance human-robot interaction it is necessary to investigate and understand how end-users perceive such animate behavior. In this paper, we report an experiment to investigate how people perceived different robot embodiments in terms of animacy and intelligence. iCat and Robovie II were used as the two embodiments in this experiment. We conducted a between-subject experiment where robot type was the independent variable, and perceived animacy and intelligence of the robot were the dependent variables. Our findings suggest that a robots perceived intelligence is significantly correlated with animacy. The correlation between the intelligence and the animacy of a robot was observed to be stronger in the case of the iCat embodiment. Our results also indicate that the more animated the face of the robot, the more likely it is to attract the attention of a user. We also discuss the possible and probable explanations of the results obtained.",
"title": ""
},
{
"docid": "cc3b5ee3c8c890499f3d52db00520563",
"text": "We report results from an oyster hatchery on the Oregon coast, where intake waters experienced variable carbonate chemistry (aragonite saturation state , 0.8 to . 3.2; pH , 7.6 to . 8.2) in the early summer of 2009. Both larval production and midstage growth (, 120 to , 150 mm) of the oyster Crassostrea gigas were significantly negatively correlated with the aragonite saturation state of waters in which larval oysters were spawned and reared for the first 48 h of life. The effects of the initial spawning conditions did not have a significant effect on early-stage growth (growth from D-hinge stage to , 120 mm), suggesting a delayed effect of water chemistry on larval development. Rising atmospheric carbon dioxide (CO2) driven by anthropogenic emissions has resulted in the addition of over 140 Pg-C (1 Pg 5 1015 g) to the ocean (Sabine et al. 2011). The thermodynamics of the reactions between carbon dioxide and water require this addition to cause a decline of ocean pH and carbonate ion concentrations ([CO3 ]). For the observed change between current-day and preindustrial atmospheric CO2, the surface oceans have lost approximately 16% of their [CO3 ] and decreased in pH by 0.1 unit, although colder surface waters are likely to have experienced a greater effect (Feely et al. 2009). Projections for the open ocean suggest that wide areas, particularly at high latitudes, could reach low enough [CO3 ] levels that dissolution of biogenic carbonate minerals is thermodynamically favored by the end of the century (Feely et al. 2009; Steinacher et al. 2009), with implications for commercially significant higher trophic levels (Aydin et al. 2005). There is considerable spatial and temporal variability in ocean carbonate chemistry, and there is evidence that these natural variations affect marine biota, with ecological assemblages next to cold-seep high-CO2 sources having been shown to be distinct from those nearby but less affected by the elevated CO2 levels (Hall-Spencer et al. 2008). Coastal environments that are subject to upwelling events also experience exposure to elevated CO2 conditions where deep water enriched by additions of respiratory CO2 is brought up from depth to the nearshore surface by physical processes. Feely et al. (2008) showed that upwelling on the Pacific coast of central North America markedly increased corrosiveness for calcium carbonate minerals in surface nearshore waters. A small but significant amount of anthropogenic CO2 present in the upwelled source waters provided enough additional CO2 to cause widespread corrosiveness on the continental shelves (Feely et al. 2008). Because the source water for upwelling on the North American Pacific coast takes on the order of decades to transit from the point of subduction to the upwelling locales (Feely et al. 2008), this anthropogenic CO2 was added to the water under a substantially lowerCO2 atmosphere than today’s, and water already en route to this location is likely carrying an increasing burden of anthropogenic CO2. Understanding the effects of natural variations in CO2 in these waters on the local fauna is critical for anticipating how more persistently corrosive conditions will affect marine ecosystems. The responses of organisms to rising CO2 are potentially numerous and include negative effects on respiration, motility, and fertility (Portner 2008). From a geochemical perspective, however, the easiest process to understand conceptually is that of solid calcium carbonate (CaCO3,s) mineral formation. 
In nearly all ocean surface waters, formation of CaCO3,s is thermodynamically favored by the abundance of the reactants, dissolved calcium ([Ca2+]), and carbonate ([CO3 ]) ions. While oceanic [Ca 2+] is relatively constant at high levels that are well described by conservative relationships with salinity, ocean [CO 3 ] decreases as atmospheric CO2 rises, lowering the energetic favorability of CaCO3,s formation. This energetic favorability is proportional to the saturation state, V, defined by",
"title": ""
},
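The oyster-hatchery passage above breaks off just where the saturation state is defined. For reference only (this is the standard definition from carbonate chemistry, not text recovered from the truncated source), the saturation state of a carbonate mineral such as aragonite is:

```latex
\Omega = \frac{[\mathrm{Ca}^{2+}]\,[\mathrm{CO}_3^{2-}]}{K'_{\mathrm{sp}}}
```

where K'sp is the stoichiometric solubility product of the mineral phase; Ω > 1 favors precipitation and Ω < 1 favors dissolution.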
{
"docid": "30bc96451dd979a8c08810415e4a2478",
"text": "An adaptive circulator fabricated on a 130 nm CMOS is presented. Circulator has two adaptive blocks for gain and phase mismatch correction and leakage cancelation. The impedance matching circuit corrects mismatches for antenna, divider, and LNTA. The cancelation block cancels the Tx leakage. Measured isolation between transmitter and receiver for single tone at 2.4 GHz is 90 dB, and for a 40 MHz wide-band signal is 50dB. The circulator Rx gain is 10 dB, with NF = 4.7 dB and 5 dB insertion loss.",
"title": ""
},
{
"docid": "33dedeabc83271223a1b3fb50bfb1824",
"text": "Quantum computers can be used to address electronic-structure problems and problems in materials science and condensed matter physics that can be formulated as interacting fermionic problems, problems which stretch the limits of existing high-performance computers. Finding exact solutions to such problems numerically has a computational cost that scales exponentially with the size of the system, and Monte Carlo methods are unsuitable owing to the fermionic sign problem. These limitations of classical computational methods have made solving even few-atom electronic-structure problems interesting for implementation using medium-sized quantum computers. Yet experimental implementations have so far been restricted to molecules involving only hydrogen and helium. Here we demonstrate the experimental optimization of Hamiltonian problems with up to six qubits and more than one hundred Pauli terms, determining the ground-state energy for molecules of increasing size, up to BeH2. We achieve this result by using a variational quantum eigenvalue solver (eigensolver) with efficiently prepared trial states that are tailored specifically to the interactions that are available in our quantum processor, combined with a compact encoding of fermionic Hamiltonians and a robust stochastic optimization routine. We demonstrate the flexibility of our approach by applying it to a problem of quantum magnetism, an antiferromagnetic Heisenberg model in an external magnetic field. In all cases, we find agreement between our experiments and numerical simulations using a model of the device with noise. Our results help to elucidate the requirements for scaling the method to larger systems and for bridging the gap between key problems in high-performance computing and their implementation on quantum hardware.",
"title": ""
},
{
"docid": "ba7081afe9e734c5895ccbe7307c8707",
"text": "Research effort in ontology visualization has largely focused on developing new visualization techniques. At the same time, researchers have paid less attention to investigating the usability of common visualization techniques that many practitioners regularly use to visualize ontological data. In this paper, we focus on two popular ontology visualization techniques: indented tree and graph. We conduct a controlled usability study with an emphasis on the effectiveness, efficiency, workload and satisfaction of these visualization techniques in the context of assisting users during evaluation of ontology mappings. Findings from this study have revealed both strengths and weaknesses of each visualization technique. In particular, while the indented tree visualization is more organized and familiar to novice users, subjects found the graph visualization to be more controllable and intuitive without visual redundancy, particularly for ontologies with multiple inheritance.",
"title": ""
},
{
"docid": "c05fc37d9f33ec94f4c160b3317dda00",
"text": "We consider the coordination control for multiagent systems in a very general framework where the position and velocity interactions among agents are modeled by independent graphs. Different algorithms are proposed and analyzed for different settings, including the case without leaders and the case with a virtual leader under fixed position and velocity interaction topologies, as well as the case with a group velocity reference signal under switching velocity interaction. It is finally shown that the proposed algorithms are feasible in achieving the desired coordination behavior provided the interaction topologies satisfy the weakest possible connectivity conditions. Such conditions relate only to the structure of the interactions among agents while irrelevant to their magnitudes and thus are easy to verify. Rigorous convergence analysis is preformed based on a combined use of tools from algebraic graph theory, matrix analysis as well as the Lyapunov stability theory.",
"title": ""
},
{
"docid": "464439e2c9e45045aeee5ca0b88b90e1",
"text": "We calculate the average number of critical points of a Gaussian field on a high-dimensional space as a function of their energy and their index. Our results give a complete picture of the organization of critical points and are of relevance to glassy and disordered systems and landscape scenarios coming from the anthropic approach to string theory.",
"title": ""
},
{
"docid": "1d9361cffd8240f3b691c887def8e2f5",
"text": "Twenty seven essential oils, isolated from plants representing 11 families of Portuguese flora, were screened for their nematicidal activity against the pinewood nematode (PWN), Bursaphelenchus xylophilus. The essential oils were isolated by hydrodistillation and the volatiles by distillation-extraction, and both were analysed by GC and GC-MS. High nematicidal activity was achieved with essential oils from Chamaespartium tridentatum, Origanum vulgare, Satureja montana, Thymbra capitata, and Thymus caespititius. All of these essential oils had an estimated minimum inhibitory concentration ranging between 0.097 and 0.374 mg/ml and a lethal concentration necessary to kill 100% of the population (LC(100)) between 0.858 and 1.984 mg/ml. Good nematicidal activity was also obtained with the essential oil from Cymbopogon citratus. The dominant components of the effective oils were 1-octen-3-ol (9%), n-nonanal, and linalool (both 7%) in C. tridentatum, geranial (43%), neral (29%), and β-myrcene (25%) in C. citratus, carvacrol (36% and 39%), γ-terpinene (24% and 40%), and p-cymene (14% and 7%) in O. vulgare and S. montana, respectively, and carvacrol (75% and 65%, respectively) in T. capitata and T. caespititius. The other essential oils obtained from Portuguese flora yielded weak or no activity. Five essential oils with nematicidal activity against PWN are reported for the first time.",
"title": ""
},
{
"docid": "0e644fc1c567356a2e099221a774232c",
"text": "We present a coupled two-way clustering approach to gene microarray data analysis. The main idea is to identify subsets of the genes and samples, such that when one of these is used to cluster the other, stable and significant partitions emerge. The search for such subsets is a computationally complex task. We present an algorithm, based on iterative clustering, that performs such a search. This analysis is especially suitable for gene microarray data, where the contributions of a variety of biological mechanisms to the gene expression levels are entangled in a large body of experimental data. The method was applied to two gene microarray data sets, on colon cancer and leukemia. By identifying relevant subsets of the data and focusing on them we were able to discover partitions and correlations that were masked and hidden when the full dataset was used in the analysis. Some of these partitions have clear biological interpretation; others can serve to identify possible directions for future research.",
"title": ""
},
{
"docid": "3207a4b3d199db8f43d96f1096e8eb81",
"text": "Recently, a branch of machine learning algorithms called deep learning gained huge attention to boost up accuracy of a variety of sensing applications. However, execution of deep learning algorithm such as convolutional neural network on mobile processor is non-trivial due to intensive computational requirements. In this paper, we present our early design of DeepSense - a mobile GPU-based deep convolutional neural network (CNN) framework. For its design, we first explored the differences between server-class and mobile-class GPUs, and studied effectiveness of various optimization strategies such as branch divergence elimination and memory vectorization. Our results show that DeepSense is able to execute a variety of CNN models for image recognition, object detection and face recognition in soft real time with no or marginal accuracy tradeoffs. Experiments also show that our framework is scalable across multiple devices with different GPU architectures (e.g. Adreno and Mali).",
"title": ""
},
{
"docid": "7143c97b6ea484566f521e36a3fa834e",
"text": "To determine the reliability and concurrent validity of a visual analogue scale (VAS) for disability as a single-item instrument measuring disability in chronic pain patients was the objective of the study. For the reliability study a test-retest design and for the validity study a cross-sectional design was used. A general rehabilitation centre and a university rehabilitation centre was the setting for the study. The study population consisted of patients over 18 years of age, suffering from chronic musculoskeletal pain; 52 patients in the reliability study, 344 patients in the validity study. Main outcome measures were as follows. Reliability study: Spearman's correlation coefficients (rho values) of the test and retest data of the VAS for disability; validity study: rho values of the VAS disability scores with the scores on four domains of the Short-Form Health Survey (SF-36) and VAS pain scores, and with Roland-Morris Disability Questionnaire scores in chronic low back pain patients. Results were as follows: in the reliability study rho values varied from 0.60 to 0.77; and in the validity study rho values of VAS disability scores with SF-36 domain scores varied from 0.16 to 0.51, with Roland-Morris Disability Questionnaire scores from 0.38 to 0.43 and with VAS pain scores from 0.76 to 0.84. The conclusion of the study was that the reliability of the VAS for disability is moderate to good. Because of a weak correlation with other disability instruments and a strong correlation with the VAS for pain, however, its validity is questionable.",
"title": ""
},
{
"docid": "d9b8c9c1427fc68f9e40e24ae517c7e8",
"text": "Although studies have shown that Instagram use and young adults' mental health are cross-sectionally associated, longitudinal evidence is lacking. In addition, no study thus far examined this association, or the reverse, among adolescents. To address these gaps, we set up a longitudinal panel study among 12- to 19-year-old Flemish adolescents to investigate the reciprocal relationships between different types of Instagram use and depressed mood. Self-report data from 671 adolescent Instagram users (61% girls; MAge = 14.96; SD = 1.29) were used to examine our research question and test our hypotheses. Structural equation modeling showed that Instagram browsing at Time 1 was related to increases in adolescents' depressed mood at Time 2. In addition, adolescents' depressed mood at Time 1 was related to increases in Instagram posting at Time 2. These relationships were similar among boys and girls. Potential explanations for the study findings and suggestions for future research are discussed.",
"title": ""
}
] |
scidocsrr
|
575a09d1ef1455c43c8e2306e8b5f04c
|
Path Loss Estimation for Wireless Underground Sensor Network in Agricultural Application
|
[
{
"docid": "e062d88651a8bdc637ecf57b4cbb1b2b",
"text": "Wireless Underground Sensor Networks (WUSNs) consist of wirelessly connected underground sensor nodes that communicate untethered through soil. WUSNs have the potential to impact a wide variety of novel applications including intelligent irrigation, environment monitoring, border patrol, and assisted navigation. Although its deployment is mainly based on underground sensor nodes, a WUSN still requires aboveground devices for data retrieval, management, and relay functionalities. Therefore, the characterization of the bi-directional communication between a buried node and an aboveground device is essential for the realization of WUSNs. In this work, empirical evaluations of underground-to- aboveground (UG2AG) and aboveground-to-underground (AG2UG) communication are presented. More specifically, testbed experiments have been conducted with commodity sensor motes in a real-life agricultural field. The results highlight the asymmetry between UG2AG and AG2UG communication with distinct behaviors for different burial depths. To combat the adverse effects of the change in wavelength in soil, an ultra wideband antenna scheme is deployed, which increases the communication range by more than 350% compared to the original antennas. The results also reveal that a 21% increase in the soil moisture decreases the communication range by more than 70%. To the best of our knowledge, this is the first empirical study that highlights the effects of the antenna design, burial depth, and soil moisture on both UG2AG and AG2UG communication performance. These results have a significant impact on the development of multi-hop networking protocols for WUSNs.",
"title": ""
}
] |
[
{
"docid": "9ee0c9aa2868ea2a12c3a368b4744f35",
"text": "To assess the efficacy and feasibility of vertebroplasty and posterior short-segment pedicle screw fixation for the treatment of traumatic lumbar burst fractures. Short-segment pedicle screw instrumentation is a well described technique to reduce and stabilize thoracic and lumbar spine fractures. It is relatively a easy procedure but can only indirectly reduce a fractured vertebral body, and the means of augmenting the anterior column are limited. Hardware failure and a loss of reduction are recognized complications caused by insufficient anterior column support. Patients with traumatic lumbar burst fractures without neurologic deficits were included. After a short segment posterior reduction and fixation, bilateral transpedicular reduction of the endplate was performed using a balloon, and polymethyl methacrylate cement was injected. Pre-operative and post-operative central and anterior heights were assessed with radiographs and MRI. Sixteen patients underwent this procedure, and a substantial reduction of the endplates could be achieved with the technique. All patients recovered uneventfully, and the neurologic examination revealed no deficits. The post-operative radiographs and magnetic resonance images demonstrated a good fracture reduction and filling of the bone defect without unwarranted bone displacement. The central and anterior height of the vertebral body could be restored to 72 and 82% of the estimated intact height, respectively. Complications were cement leakage in three cases without clinical implications and one superficial wound infection. Posterior short-segment pedicle fixation in conjunction with balloon vertebroplasty seems to be a feasible option in the management of lumbar burst fractures, thereby addressing all the three columns through a single approach. Although cement leakage occurred but had no clinical consequences or neurological deficit.",
"title": ""
},
{
"docid": "c4814dea797964107d2178c265eba0b2",
"text": "•We propose to combine string kernels (low-level character n-gram features) and word embeddings (high-level semantic features) for automated essay scoring (AED) •TOK, string kernels have never been used for AED •TOK, this is the first successful attempt to combine string kernels and word embeddings •Using a shallow approach, we surpass recent deep learning approaches [Dong et al, EMNLP 2016; Dong et al, CONLL 2017;Tay et al, AAAI 2018]",
"title": ""
},
{
"docid": "0867ccf808dda2d08195a6cbd8f83514",
"text": "Existing algorithms for joint clustering and feature selection can be categorized as either global or local approaches. Global methods select a single cluster-independent subset of features, whereas local methods select cluster-specific subsets of features. In this paper, we present a unified probabilistic model that can perform both global and local feature selection for clustering. Our approach is based on a hierarchical beta-Bernoulli prior combined with a Dirichlet process mixture model. We obtain global or local feature selection by adjusting the variance of the beta prior. We provide a variational inference algorithm for our model. In addition to simultaneously learning the clusters and features, this Bayesian formulation allows us to learn both the number of clusters and the number of features to retain. Experiments on synthetic and real data show that our unified model can find global and local features and cluster data as well as competing methods of each type.",
"title": ""
},
{
"docid": "2c38b6af96d8393660c4c700b9322f7a",
"text": "According to what we call the Principle of Procreative Beneficence (PB),couples who decide to have a child have a significant moral reason to select the child who, given his or her genetic endowment, can be expected to enjoy the most well-being. In the first part of this paper, we introduce PB,explain its content, grounds, and implications, and defend it against various objections. In the second part, we argue that PB is superior to competing principles of procreative selection such as that of procreative autonomy.In the third part of the paper, we consider the relation between PB and disability. We develop a revisionary account of disability, in which disability is a species of instrumental badness that is context- and person-relative.Although PB instructs us to aim to reduce disability in future children whenever possible, it does not privilege the normal. What matters is not whether future children meet certain biological or statistical norms, but what level of well-being they can be expected to have.",
"title": ""
},
{
"docid": "f3e39ffeec0da10294073b9899d8f016",
"text": "Nomophobia is considered a modern age phobia introduced to our lives as a byproduct of the interaction between people and mobile information and communication technologies, especially smartphones. This study sought to contribute to the nomophobia research literature by identifying and describing the dimensions of nomophobia and developing a questionnaire to measure nomophobia. Consequently, this study adopted a two-phase, exploratory sequential mixed methods design. The first phase was a qualitative exploration of nomophobia through semi-structured interviews conducted with nine undergraduate students at a large Midwestern university in the U.S. As a result of the first phase, four dimensions of nomophobia were identified: not being able to communicate, losing connectedness, not being able to access information and giving up convenience. The qualitative findings from this initial exploration were then developed into a 20-item nomophobia questionnaire (NMP-Q). In the second phase, the NMP-Q was validated with a sample of 301 undergraduate students. Exploratory factor analysis revealed a four-factor structure for the NMP-Q, corresponding to the dimensions of nomophobia. The NMP-Q was shown to produce valid and reliable scores; and thus, can be used to assess the severity of nomophobia. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "52c07b30d95ab7dc74b5be8a2a60ea91",
"text": "Although deep learning models are highly effective for various learning tasks, their high computational costs prohibit the deployment to scenarios where either memory or computational resources are limited. In this paper, we focus on compressing and accelerating deep models with network weights represented by very small numbers of bits, referred to as extremely low bit neural network. We model this problem as a discretely constrained optimization problem. Borrowing the idea from Alternating Direction Method of Multipliers (ADMM), we decouple the continuous parameters from the discrete constraints of network, and cast the original hard problem into several subproblems. We propose to solve these subproblems using extragradient and iterative quantization algorithms that lead to considerably faster convergency compared to conventional optimization methods. Extensive experiments on image recognition and object detection verify that the proposed algorithm is more effective than state-of-theart approaches when coming to extremely low bit neural network.",
"title": ""
},
{
"docid": "e31d58cd45dec35e439f67b7f53d7f20",
"text": "The altered energy metabolism of tumor cells provides a viable target for a non toxic chemotherapeutic approach. An increased glucose consumption rate has been observed in malignant cells. Warburg (Nobel Laureate in medicine) postulated that the respiratory process of malignant cells was impaired and that the transformation of a normal cell to malignant was due to defects in the aerobic respiratory pathways. Szent-Györgyi (Nobel Laureate in medicine) also viewed cancer as originating from insufficient availability of oxygen. Oxygen by itself has an inhibitory action on malignant cell proliferation by interfering with anaerobic respiration (fermentation and lactic acid production). Interestingly, during cell differentiation (where cell energy level is high) there is an increased cellular production of oxidants that appear to provide one type of physiological stimulation for changes in gene expression that may lead to a terminal differentiated state. The failure to maintain high ATP production (high cell energy levels) may be a consequence of inactivation of key enzymes, especially those related to the Krebs cycle and the electron transport system. A distorted mitochondrial function (transmembrane potential) may result. This aspect could be suggestive of an important mitochondrial involvement in the carcinogenic process in addition to presenting it as a possible therapeutic target for cancer. Intermediate metabolic correction of the mitochondria is postulated as a possible non-toxic therapeutic approach for cancer.",
"title": ""
},
{
"docid": "5067a208fd8ad389482fbe49e7c79b1f",
"text": "Even though main memory is becoming large enough to fit most OLTP databases, it may not always be the best option. OLTP workloads typically exhibit skewed access patterns where some records are hot (frequently accessed) but many records are cold (infrequently or never accessed). Therefore, it is more economical to store the coldest records on a fast secondary storage device such as a solid-state disk. However, main-memory DBMS have no knowledge of secondary storage, while traditional disk-based databases, designed for workloads where data resides on HDD, introduce too much overhead for the common case where the working set is memory resident.\n In this paper, we propose a simple and low-overhead technique that enables main-memory databases to efficiently migrate cold data to secondary storage by relying on the OS's virtual memory paging mechanism. We propose to log accesses at the tuple level, process the access traces offline to identify relevant access patterns, and then transparently re-organize the in-memory data structures to reduce paging I/O and improve hit rates. The hot/cold data separation is performed on demand and incrementally through careful memory management, without any change to the underlying data structures. We validate experimentally the data re-organization proposal and show that OS paging can be efficient: a TPC-C database can grow two orders of magnitude larger than the available memory size without a noticeable impact on performance.",
"title": ""
},
{
"docid": "e934c6e5797148d9cfa6cff5e3bec698",
"text": "Ego level is a broad construct that summarizes individual differences in personality development 1 . We examine ego level as it is represented in natural language, using a composite sample of four datasets comprising nearly 44,000 responses. We find support for a developmental sequence in the structure of correlations between ego levels, in analyses of Linguistic Inquiry and Word Count (LIWC) categories 2 and in an examination of the individual words that are characteristic of each level. The LIWC analyses reveal increasing complexity and, to some extent, increasing breadth of perspective with higher levels of development. The characteristic language of each ego level suggests, for example, a shift from consummatory to appetitive desires at the lowest stages, a dawning of doubt at the Self-aware stage, the centrality of achievement motivation at the Conscientious stage, an increase in mutuality and intellectual growth at the Individualistic stage and some renegotiation of life goals and reflection on identity at the highest levels of development. Continuing empirical analysis of ego level and language will provide a deeper understanding of ego development, its relationship with other models of personality and individual differences, and its utility in characterizing people, texts and the cultural contexts that produce them. A linguistic analysis of nearly 44,000 responses to the Washington University Sentence Completion Test elucidates the construct of ego development (personality development through adulthood) and identifies unique linguistic markers of each level of development.",
"title": ""
},
{
"docid": "4bfead8529019e084465d4583685434b",
"text": "Task: Paraphrase generation Problem: The existing sequence-to-sequence model tends to memorize the words and the patterns in the training dataset instead of learning the meaning of the words. Therefore, the generated sentences are often grammatically correct but semantically improper. Proposal: a novel model based on the encoder-decoder framework, called Word Embedding Attention Network (WEAN). Our proposed model generates the words by querying distributed word representations (i.e. neural word embeddings), hoping to capturing the meaning of the according words. Example of RNN Generated Summary Text: 昨晚,中联航空成都飞北京一架航班被发现有多人吸烟。后 因天气原因,飞机备降太原机场。有乘客要求重新安检,机长决 定继续飞行,引起机组人员与未吸烟乘客冲突。 Last night, several people were caught to smoke on a flight of China United Airlines from Chendu to Beijing. Later the flight temporarily landed on Taiyuan Airport. Some passengers asked for a security check but were denied by the captain, which led to a collision between crew and passengers. RNN: 中联航空机场发生爆炸致多人死亡。 China United Airlines exploded in the airport, leaving several people dead. Gold: 航班多人吸烟机组人员与乘客冲突。 Several people smoked on a flight which led to a collision between crew and passengers. Proposed Model Our Semantic Relevance Based neural model. It consists of decoder (above), encoder (below) and cosine similarity function. Experiments Dataset: Large Scale Chinese Short Text Summarization Dataset (LCSTS) Results of our model and baseline systems. Our models achieve substantial improvement of all ROUGE scores over baseline systems. (W: Word level; C: Character level). Example of SRB Generated Summary Text: 仔细一算,上海的互联网公司不乏成功案例,但最终成为BAT一 类巨头的几乎没有,这也能解释为何纳税百强的榜单中鲜少互联网公司 的身影。有一类是被并购,比如:易趣、土豆网、PPS、PPTV、一号店 等;有一类是数年偏安于细分市场。 With careful calculation, there are many successful Internet companies in Shanghai, but few of them becomes giant company like BAT. This is also the reason why few Internet companies are listed in top hundred companies of paying tax. Some of them are merged, such as Ebay, Tudou, PPS, PPTV, Yihaodian and so on. Others are satisfied with segment market for years. Gold:为什么上海出不了互联网巨头? Why Shanghai comes out no giant company? RNN context:上海的互联网巨头。 Shanghai's giant company. SRB:上海鲜少互联网巨头的身影。 Shanghai has few giant companies. Proposed Model Text Representation Source text representation Vt = hN Generated summary representation Vs = sM − hN Semantic Relevance cosine similarity function cos Vs, Vt = Vt∙Vs Vt Vs Training Objective function L = −p y x; θ − λ cos Vs, Vt Conclusion Our work aims at improving semantic relevance of generated summaries and source texts for Chinese social media text summarization. Our model is able to transform the text and the summary into a dense vector, and encourage high similarity of their representation. Experiments show that our model outperforms baseline systems, and the generated summary has higher semantic relevance.",
"title": ""
},
{
"docid": "e106afaefd5e61f4a5787a7ae0c92934",
"text": "Novelty detection is concerned with recognising inputs that differ in some way from those that are usually seen. It is a useful technique in cases where an important class of data is under-represented in the training set. This means that the performance of the network will be poor for those classes. In some circumstances, such as medical data and fault detection, it is often precisely the class that is under-represented in the data, the disease or potential fault, that the network should detect. In novelty detection systems the network is trained only on the negative examples where that class is not present, and then detects inputs that do not fits into the model that it has acquired, that it, members of the novel class. This paper reviews the literature on novelty detection in neural networks and other machine learning techniques, as well as providing brief overviews of the related topics of statistical outlier detection and novelty detection in biological organisms.",
"title": ""
},
{
"docid": "db2953ae2d59e74b8b650963d32a9f1f",
"text": "In this paper we describe the design and preliminary evaluation of an energetically-autonomous powered knee exoskeleton to facilitate running. The device consists of a knee brace in which a motorized mechanism actively places and removes a spring in parallel with the knee joint. This mechanism is controlled such that the spring is in parallel with the knee joint from approximately heel-strike to toe-off, and is removed from this state during the swing phase of running. In this way, the spring is intended to store energy at heel-strike which is then released when the heel leaves the ground, reducing the effort required by the quadriceps to exert this energy, thereby reducing the metabolic cost of running.",
"title": ""
},
{
"docid": "aa6dd2e44b992dd7f11c5d82f0b11556",
"text": "It is well known that violent video games increase aggression, and that stress increases aggression. Many violent video games can be stressful because enemies are trying to kill players. The present study investigates whether violent games increase aggression by inducing stress in players. Stress was measured using cardiac coherence, defined as the synchronization of the rhythm of breathing to the rhythm of the heart. We predicted that cardiac coherence would mediate the link between exposure to violent video games and subsequent aggression. Specifically, we predicted that playing a violent video game would decrease cardiac coherence, and that cardiac coherence, in turn, would correlate negatively with aggression. Participants (N = 77) played a violent or nonviolent video game for 20 min. Cardiac coherence was measured before and during game play. After game play, participants had the opportunity to blast a confederate with loud noise through headphones during a reaction time task. The intensity and duration of noise blasts given to the confederate was used to measure aggression. As expected, violent video game players had lower cardiac coherence levels and higher aggression levels than did nonviolent game players. Cardiac coherence, in turn, was negatively related to aggression. This research offers another possible reason why violent games can increase aggression-by inducing stress. Cardiac coherence can be a useful tool to measure stress induced by violent video games. Cardiac coherence has several desirable methodological features as well: it is noninvasive, stable against environmental disturbances, relatively inexpensive, not subject to demand characteristics, and easy to use.",
"title": ""
},
{
"docid": "4d7b4fe86b906baae887c80e872d71a4",
"text": "The use of serologic testing and its value in the diagnosis of Lyme disease remain confusing and controversial for physicians, especially concerning persons who are at low risk for the disease. The approach to diagnosing Lyme disease varies depending on the probability of disease (based on endemicity and clinical findings) and the stage at which the disease may be. In patients from endemic areas, Lyme disease may be diagnosed on clinical grounds alone in the presence of erythema migrans. These patients do not require serologic testing, although it may be considered according to patient preference. When the pretest probability is moderate (e.g., in a patient from a highly or moderately endemic area who has advanced manifestations of Lyme disease), serologic testing should be performed with the complete two-step approach in which a positive or equivocal serology is followed by a more specific Western blot test. Samples drawn from patients within four weeks of disease onset are tested by Western blot technique for both immunoglobulin M and immunoglobulin G antibodies; samples drawn more than four weeks after disease onset are tested for immunoglobulin G only. Patients who show no objective signs of Lyme disease have a low probability of the disease, and serologic testing in this group should be kept to a minimum because of the high risk of false-positive results. When unexplained nonspecific systemic symptoms such as myalgia, fatigue, and paresthesias have persisted for a long time in a person from an endemic area, serologic testing should be performed with the complete two-step approach described above.",
"title": ""
},
{
"docid": "678558c9c8d629f98b77a61082bd9b95",
"text": "Internet of Things (IoT) makes all objects become interconnected and smart, which has been recognized as the next technological revolution. As its typical case, IoT-based smart rehabilitation systems are becoming a better way to mitigate problems associated with aging populations and shortage of health professionals. Although it has come into reality, critical problems still exist in automating design and reconfiguration of such a system enabling it to respond to the patient's requirements rapidly. This paper presents an ontology-based automating design methodology (ADM) for smart rehabilitation systems in IoT. Ontology aids computers in further understanding the symptoms and medical resources, which helps to create a rehabilitation strategy and reconfigure medical resources according to patients' specific requirements quickly and automatically. Meanwhile, IoT provides an effective platform to interconnect all the resources and provides immediate information interaction. Preliminary experiments and clinical trials demonstrate valuable information on the feasibility, rapidity, and effectiveness of the proposed methodology.",
"title": ""
},
{
"docid": "d34b81ac6c521cbf466b4b898486a201",
"text": "We introduce the novel task of identifying important citations in scholarly literature, i.e., citations that indicate that the cited work is used or extended in the new effort. We believe this task is a crucial component in algorithms that detect and follow research topics and in methods that measure the quality of publications. We model this task as a supervised classification problem at two levels of detail: a coarse one with classes (important vs. non-important), and a more detailed one with four importance classes. We annotate a dataset of approximately 450 citations with this information, and release it publicly. We propose a supervised classification approach that addresses this task with a battery of features that range from citation counts to where the citation appears in the body of the paper, and show that, our approach achieves a precision of 65% for a recall of 90%.",
"title": ""
},
{
"docid": "3938e2e498724d5cb3c4875439c06a98",
"text": "To enable collaboration and communication between humans and agents, this paper investigates learning to acquire commonsense evidence for action justification. In particular, we have developed an approach based on the generative Conditional Variational Autoencoder (CVAE) that models object relations/attributes of the world as latent variables and jointly learns a performer that predicts actions and an explainer that gathers commonsense evidence to justify the action. Our empirical results have shown that, compared to a typical attention-based model, CVAE achieves significantly higher performance in both action prediction and justification. A human subject study further shows that the commonsense evidence gathered by CVAE can be communicated to humans to achieve a significantly higher common ground between humans and agents.",
"title": ""
},
{
"docid": "fb128fdbd2975edee014ad86113595dd",
"text": "Recurrent neural networks have become ubiquitous in computing representations of sequential data, especially textual data in natural language processing. In particular, Bidirectional LSTMs are at the heart of several neural models achieving state-of-the-art performance in a wide variety of tasks in NLP. However, BiLSTMs are known to suffer from sequential bias – the contextual representation of a token is heavily influenced by tokens close to it in a sentence. We propose a general and effective improvement to the BiLSTM model which encodes each suffix and prefix of a sequence of tokens in both forward and reverse directions. We call our model Suffix Bidirectional LSTM or SuBiLSTM. This introduces an alternate bias that favors long range dependencies. We apply SuBiLSTMs to several tasks that require sentence modeling. We demonstrate that using SuBiLSTM instead of a BiLSTM in existing models leads to improvements in performance in learning general sentence representations, text classification, textual entailment and paraphrase detection. Using SuBiLSTM we achieve new state-of-the-art results for fine-grained sentiment classification and question classification.",
"title": ""
},
{
"docid": "07e69863c4c6531e310b0302d290cbad",
"text": "Recently two-stage detectors have surged ahead of single-shot detectors in the accuracy-vs-speed trade-off. Nevertheless single-shot detectors are immensely popular in embedded vision applications. This paper brings singleshot detectors up to the same level as current two-stage techniques. We do this by improving training for the stateof-the-art single-shot detector, RetinaNet, in three ways: integrating instance mask prediction for the first time, making the loss function adaptive and more stable, and including additional hard examples in training. We call the resulting augmented network RetinaMask. The detection component of RetinaMask has the same computational cost as the original RetinaNet, but is more accurate. COCO test-dev results are up to 41.4 mAP for RetinaMask-101 vs 39.1mAP for RetinaNet-101, while the runtime is the same during evaluation. Adding Group Normalization increases the performance of RetinaMask-101 to 41.7 mAP. Code is at: https://github.com/chengyangfu/",
"title": ""
},
{
"docid": "7b851dc49265c7be5199fb887305b0f5",
"text": "— A set of customers with known locations and known requirements for some commodity, is to be supplied from a single depot by delivery vehicles o f known capacity. The problem of designing routes for these vehicles so as to minimise the cost of distribution is known as the vehicle routing problem ( VRP). In this paper we catégorise, discuss and extend both exact and approximate methods for solving VRP's, and we give some results on the properties offeasible solutions which help to reduce the computational effort invohed in solving such problems.",
"title": ""
}
] |
scidocsrr
|
3c980757f0e4207b85bb3e13c247df96
|
The Influence of Chomsky on the Neuroscience of Language
|
[
{
"docid": "566a2b2ff835d10e0660fb89fd6ae618",
"text": "We argue that an understanding of the faculty of language requires substantial interdisciplinary cooperation. We suggest how current developments in linguistics can be profitably wedded to work in evolutionary biology, anthropology, psychology, and neuroscience. We submit that a distinction should be made between the faculty of language in the broad sense (FLB) and in the narrow sense (FLN). FLB includes a sensory-motor system, a conceptual-intentional system, and the computational mechanisms for recursion, providing the capacity to generate an infinite range of expressions from a finite set of elements. We hypothesize that FLN only includes recursion and is the only uniquely human component of the faculty of language. We further argue that FLN may have evolved for reasons other than language, hence comparative studies might look for evidence of such computations outside of the domain of communication (for example, number, navigation, and social relations).",
"title": ""
}
] |
[
{
"docid": "ffc7dfa4d97622199c22b885059c29d5",
"text": "Original citation: Kardefelt-Winther, Daniel (2014) A conceptual and methodological critique of internet addiction research: towards a model of compensatory internet use. LSE has developed LSE Research Online so that users may access research output of the School. Copyright © and Moral Rights for the papers on this site are retained by the individual authors and/or other copyright owners. Users may download and/or print one copy of any article(s) in LSE Research Online to facilitate their private study or for non-commercial research. You may not engage in further distribution of the material or use it for any profit-making activities or any commercial gain. You may freely distribute the URL Keywords: Internet addiction Compulsive internet use Problematic internet use Compensatory internet use Motivations for internet use a b s t r a c t Internet addiction is a rapidly growing field of research, receiving attention from researchers, journalists and policy makers. Despite much empirical data being collected and analyzed clear results and conclusions are surprisingly absent. This paper argues that conceptual issues and methodological shortcomings surrounding internet addiction research have made theoretical development difficult. An alternative model termed compensatory internet use is presented in an attempt to properly theorize the frequent assumption that people go online to escape real life issues or alleviate dysphoric moods and that this sometimes leads to negative outcomes. An empirical approach to studying compensatory internet use is suggested by combining the psychological literature on internet addiction with research on motivations for internet use. The theoretical argument is that by understanding how motivations mediate the relationship between psychosocial well-being and internet addiction, we can draw conclusions about how online activities may compensate for psychosocial problems. This could help explain why some people keep spending so much time online despite experiencing negative outcomes. There is also a method-ological argument suggesting that in order to accomplish this, research needs to move away from a focus on direct effects models and consider mediation and interaction effects between psychosocial well-being and motivations in the context of internet addiction. This is key to further exploring the notion of internet use as a coping strategy; a proposition often mentioned but rarely investigated. Internet addiction 1 is typically described as a state where an individual has lost control of the internet use and keeps using inter-net excessively to the point where he/she experiences problematic outcomes that negatively affects his/her life (Young & Abreu, 2011). Examples …",
"title": ""
},
{
"docid": "d5024344f4eb9b5e7537d7be0e2d345e",
"text": "The assessment of financial credit risk is an important and challenging research topic in the area of accounting and finance. Numerous efforts have been devoted into this field since the first attempt last century. Today the study of financial credit risk assessment attracts increasing attentions in the face of one of the most severe financial crisis ever observed in the world. The accurate assessment of financial credit risk and prediction of business failure play an essential role both on economics and society. For this reason, more and more methods and algorithms were proposed in the past years. From this point, it is of crucial importance to review the nowadays methods applied to financial credit risk assessment. In this paper, we summarize the traditional statistical models and state-of-the-art intelligent methods for financial distress forecasting, with the emphasis on the most recent achievements as the promising trend in this area.",
"title": ""
},
{
"docid": "4aa590129b4b49cf190c874c4a0bf7b4",
"text": "3D semantic scene labeling is fundamental to agents operating in the real world. In particular, labeling raw 3D point sets from sensors provides fine-grained semantics. Recent works leverage the capabilities of Neural Networks(NNs), but are limited to coarse voxel predictions and do not explicitly enforce global consistency. We present SEGCloud, an end-to-end framework to obtain 3D point-level segmentation that combines the advantages of NNs, trilinear interpolation(TI) and fully connected Conditional Random Fields (FC-CRF). Coarse voxel predictions from a 3D Fully Convolutional NN are transferred back to the raw 3D points via trilinear interpolation. Then the FC-CRF enforces global consistency and provides fine-grained semantics on the points. We implement the latter as a differentiable Recurrent NN to allow joint optimization. We evaluate the framework on two indoor and two outdoor 3D datasets (NYU V2, S3DIS, KITTI, Semantic3D.net), and show performance comparable or superior to the state-of-the-art on all datasets.",
"title": ""
},
{
"docid": "cab386acd4cf89803325e5d33a095a62",
"text": "Dipyridamole is a widely prescribed drug in ischemic disorders, and it is here investigated for potential clinical use as a new treatment for breast cancer. Xenograft mice bearing triple-negative breast cancer 4T1-Luc or MDA-MB-231T cells were generated. In these in vivo models, dipyridamole effects were investigated for primary tumor growth, metastasis formation, cell cycle, apoptosis, signaling pathways, immune cell infiltration, and serum inflammatory cytokines levels. Dipyridamole significantly reduced primary tumor growth and metastasis formation by intraperitoneal administration. Treatment with 15 mg/kg/day dipyridamole reduced mean primary tumor size by 67.5 % (p = 0.0433), while treatment with 30 mg/kg/day dipyridamole resulted in an almost a total reduction in primary tumors (p = 0.0182). Experimental metastasis assays show dipyridamole reduces metastasis formation by 47.5 % in the MDA-MB-231T xenograft model (p = 0.0122), and by 50.26 % in the 4T1-Luc xenograft model (p = 0.0292). In vivo dipyridamole decreased activated β-catenin by 38.64 % (p < 0.0001), phospho-ERK1/2 by 25.05 % (p = 0.0129), phospho-p65 by 67.82 % (p < 0.0001) and doubled the expression of IkBα (p = 0.0019), thus revealing significant effects on Wnt, ERK1/2-MAPK and NF-kB pathways in both animal models. Moreover dipyridamole significantly decreased the infiltration of tumor-associated macrophages and myeloid-derived suppressor cells in primary tumors (p < 0.005), and the inflammatory cytokines levels in the sera of the treated mice. We suggest that when used at appropriate doses and with the correct mode of administration, dipyridamole is a promising agent for breast-cancer treatment, thus also implying its potential use in other cancers that show those highly activated pathways.",
"title": ""
},
{
"docid": "e55b1e5ba811737b88fffba65f8e312d",
"text": "Rollback recovery is a trustworthy and key approach to fault tolerance in high performance computing and to parallel program debugging. In various rollback recovery protocols, causal message logging shows some desirable characteristics, but its high piggybacking overhead obstructs its applications, especially in large-scale distributed systems. Its high overhead arises from its conservation in the assumption on program execution model. This paper identifies the influence of non-deterministic message delivery on the correct outcome of a process, and then gives a scheme to relax the constraints from the piecewise deterministic execution model. Subsequently, a lightweight implementation of causal message logging is proposed to decrease the overhead of piggybacking and rolling forward. The experimental results of 3 NAS NPB2.3 benchmarks show that the proposed scheme achieves a significant improvement in the overhead reduction.",
"title": ""
},
{
"docid": "abd70a747fa984fbd61f0935ab882430",
"text": "N Engl J Med 2006;355:1253-61. Copyright © 2006 Massachusetts Medical Society. The deepening of our understanding of normal biology has made it clear that stem cells have a critical role not only in the generation of complex multicellular organisms, but also in the development of tumors. Recent findings support the concept that cells with the properties of stem cells are integral to the development and perpetuation of several forms of human cancer.1-3 Eradication of the stem-cell compartment of a tumor also may be essential to achieve stable, long-lasting remission, and even a cure, of cancer.4,5 Advances in our knowledge of the properties of stem cells have made specific targeting and eradication of cancer stem cells a topic of considerable interest. In this article, we discuss the properties of cancer stem cells, outline initial therapeutic strategies against them, and present challenges for the future.",
"title": ""
},
{
"docid": "d2f6b3fee7f40eb580451d9cc29b8aa6",
"text": "Compositional Distributional Semantic methods model the distributional behavior of a compound word by exploiting the distributional behavior of its constituent words. In this setting, a constituent word is typically represented by a feature vector conflating all the senses of that word. However, not all the senses of a constituent word are relevant when composing the semantics of the compound. In this paper, we present two different methods for selecting the relevant senses of constituent words. The first one is based on Word Sense Induction and creates a static multi prototype vectors representing the senses of a constituent word. The second creates a single dynamic prototype vector for each constituent word based on the distributional properties of the other constituents in the compound. We use these prototype vectors for composing the semantics of noun-noun compounds and evaluate on a compositionality-based similarity task. Our results show that: (1) selecting relevant senses of the constituent words leads to a better semantic composition of the compound, and (2) dynamic prototypes perform better than static prototypes.",
"title": ""
},
{
"docid": "47b7b688ec6d8d88f89b13a4775860ff",
"text": "Cross-sectional studies revealed that inclusion of unstable elements in core-strengthening exercises produced increases in trunk muscle activity and thus potential extra stimuli to induce more pronounced performance enhancements in youth athletes. Thus, the purpose of the study was to investigate changes in neuromuscular and athletic performance following core strength training performed on unstable (CSTU) compared with stable surfaces (CSTS) in youth soccer players. Thirty-nine male elite soccer players (age: 17 ± 1 years) were assigned to two groups performing a progressive core strength-training program for 9 weeks (2-3 times/week) in addition to regular in-season soccer training. CSTS group conducted core exercises on stable (i.e., floor, bench) and CSTU group on unstable (e.g., Thera-Band® Stability Trainer, Togu© Swiss ball) surfaces. Measurements included tests for assessing trunk muscle strength/activation, countermovement jump height, sprint time, agility time, and kicking performance. Statistical analysis revealed significant main effects of test (pre vs post) for trunk extensor strength (5%, P < 0.05, d = 0.86), 10-20-m sprint time (3%, P < 0.05, d = 2.56), and kicking performance (1%, P < 0.01, d = 1.28). No significant Group × test interactions were observed for any variable. In conclusion, trunk muscle strength, sprint, and kicking performance improved following CSTU and CSTS when conducted in combination with regular soccer training.",
"title": ""
},
{
"docid": "ff5fb2a555c9bcdfad666406b94ebc71",
"text": "Driven by profits, spam reviews for product promotion or suppression become increasingly rampant in online shopping platforms. This paper focuses on detecting hidden spam users based on product reviews. In the literature, there have been tremendous studies suggesting diversified methods for spammer detection, but whether these methods can be combined effectively for higher performance remains unclear. Along this line, a hybrid PU-learning-based Spammer Detection (hPSD) model is proposed in this paper. On one hand, hPSD can detect multi-type spammers by injecting or recognizing only a small portion of positive samples, which meets particularly real-world application scenarios. More importantly, hPSD can leverage both user features and user relations to build a spammer classifier via a semi-supervised hybrid learning framework. Experimental results on movie data sets with shilling injection show that hPSD outperforms several state-of-the-art baseline methods. In particular, hPSD shows great potential in detecting hidden spammers as well as their underlying employers from a real-life Amazon data set. These demonstrate the effectiveness and practical value of hPSD for real-life applications.",
"title": ""
},
{
"docid": "59b7afc5c2af7de75248c90fdf5c9cd3",
"text": "Non-uniform blind deblurring for general dynamic scenes is a challenging computer vision problem as blurs arise not only from multiple object motions but also from camera shake, scene depth variation. To remove these complicated motion blurs, conventional energy optimization based methods rely on simple assumptions such that blur kernel is partially uniform or locally linear. Moreover, recent machine learning based methods also depend on synthetic blur datasets generated under these assumptions. This makes conventional deblurring methods fail to remove blurs where blur kernel is difficult to approximate or parameterize (e.g. object motion boundaries). In this work, we propose a multi-scale convolutional neural network that restores sharp images in an end-to-end manner where blur is caused by various sources. Together, we present multi-scale loss function that mimics conventional coarse-to-fine approaches. Furthermore, we propose a new large-scale dataset that provides pairs of realistic blurry image and the corresponding ground truth sharp image that are obtained by a high-speed camera. With the proposed model trained on this dataset, we demonstrate empirically that our method achieves the state-of-the-art performance in dynamic scene deblurring not only qualitatively, but also quantitatively.",
"title": ""
},
{
"docid": "4357d48125a6d805c1932f6c02481cc4",
"text": "The success of any learning environment is determined by the degree to which there is adequate alignment among eight critical factors: 1) goals, 2) content, 3) instructional design, 4) learner tasks, 5) instructor roles, 6) student roles, 7) technological affordances, and 8) assessment. Evaluations of traditional, online, and blended approaches to higher education teaching indicate that the most commonly misaligned factor is assessment. Simply put, instructors may have lofty goals, high-quality content, and even advanced instructional designs, but most instructors tend to focus their assessment strategies on what is easy to measure rather than on what is important. Adequate assessment should encompass all four learning domains: cognitive, affective, conative, and psychomotor. This paper describes procedures for the development and use of reliable and valid assessments in higher education.",
"title": ""
},
{
"docid": "134173c98bceafddbf7f12a108525ff4",
"text": "Rough surfaces pose a challenging shape extraction problem. Images of rough surfaces are often characterized by high frequency intensity variations, and it is difficult to perceive the shapes of these surfaces from their images. The shape-from-focus method described in this paper uses different focus levels to obtain a sequence of object images. The sum-modified-Laplacian (SML) operator is developed to compute local measures of the quality of image focus. The SML operator is applied to the image sequence, and the set of focus measures obtained at each image point are used to compute local depth estimates. We present two algorithms for depth estimation. The first algorithm simply looks for the focus level that maximizes the focus measure at each point. The other algorithm models the SML focus measure variations at each point as a Gaussian distribution and use this model to interpolate the computed focus measures to obtain more accurate depth estimates. The algorithms were implemented and tested using surfaces of different roughness and reflectance properties. We conclude with a brief discussion on how the proposed method can be applied to smooth textured and smooth non-textured surfaces.",
"title": ""
},
{
"docid": "e8e1bf877e45de0d955d8736c342ec76",
"text": "Parking guidance and information (PGI) systems are becoming important parts of intelligent transportation systems due to the fact that cars and infrastructure are becoming more and more connected. One major challenge in developing efficient PGI systems is the uncertain nature of parking availability in parking facilities (both on-street and off-street). A reliable PGI system should have the capability of predicting the availability of parking at the arrival time with reliable accuracy. In this paper, we study the nature of the parking availability data in a big city and propose a multivariate autoregressive model that takes into account both temporal and spatial correlations of parking availability. The model is used to predict parking availability with high accuracy. The prediction errors are used to recommend the parking location with the highest probability of having at least one parking spot available at the estimated arrival time. The results are demonstrated using real-time parking data in the areas of San Francisco and Los Angeles.",
"title": ""
},
{
"docid": "8fb6794fbcad3ba69a0fd7fea7a5628d",
"text": "Ajay Kalra • Mengze Shi • Kannan Srinivasan Graduate School of Industrial Administration, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213 Rotman School of Management, University of Toronto, Toronto, Ontario, Canada M5S 3G4 Graduate School of Industrial Administration, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213 kalra@andrew.cmu.edu • mshi@rotman.utoronto.ca • kannans@andrew.cmu.edu",
"title": ""
},
{
"docid": "4fa9db557f53fa3099862af87337cfa9",
"text": "With the rapid development of E-commerce, recent years have witnessed the booming of online advertising industry, which raises extensive concerns of both academic and business circles. Among all the issues, the task of Click-through rates (CTR) prediction plays a central role, as it may influence the ranking and pricing of online ads. To deal with this task, the Factorization Machines (FM) model is designed for better revealing proper combinations of basic features. However, the sparsity of ads transaction data, i.e., a large proportion of zero elements, may severely disturb the performance of FM models. To address this problem, in this paper, we propose a novel Sparse Factorization Machines (SFM) model, in which the Laplace distribution is introduced instead of traditional Gaussian distribution to model the parameters, as Laplace distribution could better fit the sparse data with higher ratio of zero elements. Along this line, it will be beneficial to select the most important features or conjunctions with the proposed SFM model. Furthermore, we develop a distributed implementation of our SFM model on Spark platform to support the prediction task on mass dataset in practice. Comprehensive experiments on two large-scale real-world datasets clearly validate both the effectiveness and efficiency of our SFM model compared with several state-of-the-art baselines, which also proves our assumption that Laplace distribution could be more suitable to describe the online ads transaction data.",
"title": ""
},
{
"docid": "c6005a99e6a60a4ee5f958521dcad4d3",
"text": "We document initial experiments with Canid, a freestanding, power-autonomous quadrupedal robot equipped with a parallel actuated elastic spine. Research into robotic bounding and galloping platforms holds scientific and engineering interest because it can both probe biological hypotheses regarding bounding and galloping mammals and also provide the engineering community with a new class of agile, efficient and rapidly-locomoting legged robots. We detail the design features of Canid that promote our goals of agile operation in a relatively cheap, conventionally prototyped, commercial off-the-shelf actuated platform. We introduce new measurement methodology aimed at capturing our robot’s “body energy” during real time operation as a means of quantifying its potential for agile behavior. Finally, we present joint motor, inertial and motion capture data taken from Canid’s initial leaps into highly energetic regimes exhibiting large accelerations that illustrate the use of this measure and suggest its future potential as a platform for developing efficient, stable, hence useful bounding gaits. For more information: Kod*Lab Disciplines Electrical and Computer Engineering | Engineering | Systems Engineering Comments BibTeX entry @article{canid_spie_2013, author = {Pusey, Jason L. and Duperret, Jeffrey M. and Haynes, G. Clark and Knopf, Ryan and Koditschek , Daniel E.}, title = {Free-Standing Leaping Experiments with a PowerAutonomous, Elastic-Spined Quadruped}, pages = {87410W-87410W-15}, year = {2013}, doi = {10.1117/ 12.2016073} } This work is supported by the National Science Foundation Graduate Research Fellowship under Grant Number DGE-0822, and by the Army Research Laboratory under Cooperative Agreement Number W911NF-10–2−0016. Copyright 2013 Society of Photo-Optical Instrumentation Engineers. Postprint version. This paper was (will be) published in Proceedings of the SPIE Defense, Security, and Sensing Conference, Unmanned Systems Technology XV (8741), and is made available as an electronic reprint with permission of SPIE. One print or electronic copy may be made for personal use only. Systematic or multiple reproduction, distribution to multiple locations via electronic or other means, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited. This conference paper is available at ScholarlyCommons: http://repository.upenn.edu/ese_papers/655 Free-Standing Leaping Experiments with a Power-Autonomous, Elastic-Spined Quadruped Jason L. Pusey a , Jeffrey M. Duperret b , G. Clark Haynes c , Ryan Knopf b , and Daniel E. Koditschek b a U.S. Army Research Laboratory, Aberdeen Proving Ground, MD, b University of Pennsylvania, Philadelphia, PA, c National Robotics Engineering Center, Carnegie Mellon University, Pittsburgh, PA",
"title": ""
},
{
"docid": "460d6a8a5f78e6fa5c42fb6c219b3254",
"text": "Generative Adversarial Networks (GANs) have been successfully applied to the problem of policy imitation in a model-free setup. However, the computation graph of GANs, that include a stochastic policy as the generative model, is no longer differentiable end-to-end, which requires the use of high-variance gradient estimation. In this paper, we introduce the Modelbased Generative Adversarial Imitation Learning (MGAIL) algorithm. We show how to use a forward model to make the computation fully differentiable, which enables training policies using the exact gradient of the discriminator. The resulting algorithm trains competent policies using relatively fewer expert samples and interactions with the environment. We test it on both discrete and continuous action domains and report results that surpass the state-of-the-art.",
"title": ""
},
{
"docid": "8bd7e0d63729c9450fbe8541fb40278c",
"text": "The black soldier fly, Heretia illucens (L.), is a nonpest tropical and warm-temperate region insect that is useful for managing large concentrations of animal manure and other biosolids. Manure management relying on wild fly oviposition has been successful in several studies. However, confidence in this robust natural system was low and biological studies were hampered by the lack of a dependable source of eggs and larvae. Larvae had been reared easily by earlier investigators, but achieving mating had been problematic. We achieved mating reliably in a 2 by 2 by 4-m screen cage in a 7 by 9 by 5-m greenhouse where sunlight and adequate space for aerial mating were available. Mating occurred during the shortest days of winter if the sun was not obscured by clouds. Adults were provided with water, but no food was required. Techniques for egg collection and larval rearing are given. Larvae were fed a moist mixture of wheat bran, corn meal, and alfalfa meal. This culture has been maintained for 3 yr. Maintainance of a black soldier fly laboratory colony will allow for development of manure management systems in fully enclosed animal housing and in colder regions.",
"title": ""
},
{
"docid": "c9c29c091c9851920315c4d4b38b4c9f",
"text": "BACKGROUND\nThe presence of six or more café au lait (CAL) spots is a criterion for the diagnosis of neurofibromatosis type 1 (NF-1). Children with multiple CAL spots are often referred to dermatologists for NF-1 screening. The objective of this case series is to characterize a subset of fair-complected children with red or blond hair and multiple feathery CAL spots who did not meet the criteria for NF-1 at the time of their last evaluation.\n\n\nMETHODS\nWe conducted a chart review of eight patients seen in our pediatric dermatology clinic who were previously identified as having multiple CAL spots and no other signs or symptoms of NF-1.\n\n\nRESULTS\nWe describe eight patients ages 2 to 9 years old with multiple, irregular CAL spots with feathery borders and no other signs or symptoms of NF-1. Most of these patients had red or blond hair and were fair complected. All patients were evaluated in our pediatric dermatology clinic, some with a geneticist. The number of CAL spots per patient ranged from 5 to 15 (mean 9.4, median 9).\n\n\nCONCLUSION\nA subset of children, many with fair complexions and red or blond hair, has an increased number of feathery CAL spots and appears unlikely to develop NF-1, although genetic testing was not conducted. It is important to recognize the benign nature of CAL spots in these patients so that appropriate screening and follow-up recommendations may be made.",
"title": ""
},
{
"docid": "a5c9de4127df50d495c7372b363691cf",
"text": "This book is an accompaniment to the computer software package mathStatica (which runs as an add-on to Mathematica). The book comes with two CD-ROMS: mathStatica, and a 30-day trial version of Mathematica 4.1. The mathStatica CD-ROM includes an applications pack for doing mathematical statistics, custom Mathematica palettes and an electronic version of the book that is identical to the printed text, but can be used interactively to generate animations of some of the book's figures (e.g. as a parameter is varied). (I found this last feature particularly valuable.) MathStatica has statistical operators for determining expectations (and hence characteristic functions, for example) and probabilities, for finding the distributions of transformations of random variables and generally for dealing with the kinds of problems and questions that arise in mathematical statistics. Applications include estimation, curve-fitting, asymptotics, decision theory and moment conversion formulae (e.g. central to cumulant). To give an idea of the coverage of the book: after an introductory chapter, there are three chapters on random variables, then chapters on systems of distributions (e.g. Pearson), multivariate distributions, moments, asymptotic theory, decision theory and then three chapters on estimation. There is an appendix, which deals with technical Mathematica details. What distinguishes mathStatica from statistical packages such as S-PLUS, R, SPSS and SAS is its ability to deal with the algebraic/symbolic problems that are the main concern of mathematical statistics. This is, of course, because it is based on Mathematica, and this is also the reason that it has a note–book interface (which enables one to incorporate text, equations and pictures into a single line), and why arbitrary-precision calculations can be performed. According to the authors, 'this book can be used as a course text in mathematical statistics or as an accompaniment to a more traditional text'. Assumed knowledge includes preliminary courses in statistics, probability and calculus. The emphasis is on problem solving. The material is supposedly pitched at the same level as Hogg and Craig (1995). However some topics are treated in much more depth than in Hogg and Craig (characteristic functions for instance, which rate less than one page in Hogg and Craig). Also, the coverage is far broader than that of Hogg and Craig; additional topics include for instance stable distributions, cumulants, Pearson families, Gram-Charlier expansions and copulae. Hogg and Craig can be used as a textbook for a third-year course in mathematical statistics in some Australian universities , whereas there is …",
"title": ""
}
] |
scidocsrr
|